These slides, based on a distinguished lecture presented at IBM Almaden in March 2017, explore some of the challenges to machine learning and some recent work. This is a newer version of the slides originally presented at IJCAI 2016.
Social Machines: The coming collision of Artificial Intelligence, Social Netw... - James Hendler
Will your next doctor be a human being—or a machine? Will you have a choice? If you do, what should you know before making it? This book introduces the reader to the pitfalls and promises of artificial intelligence (AI) in its modern incarnation and the growing trend of systems to "reach off the Web" into the real world. The convergence of AI, social networking, and modern computing is creating an historic inflection point in the partnership between human beings and machines with potentially profound impacts on the future not only of computing but of our world and species. AI experts and researchers James Hendler—co-originator of the Semantic Web (Web 3.0)—and Alice Mulvehill—developer of AI-based operational systems for DARPA, the Air Force, and NASA—explore the social implications of AI systems in the context of a close examination of the technologies that make them possible. The authors critically evaluate the utopian claims and dystopian counterclaims of AI prognosticators. Social Machines: The Coming Collision of Artificial Intelligence, Social Networking, and Humanity is your richly illustrated field guide to the future of your machine-mediated relationships with other human beings and with increasingly intelligent machines.
A talk presented at IBM's "Academy of Technology" exploring, in brief, what the research community has to learn from Watson (and the techniques derived therefrom) and some new research ideas that can be explored therefrom. All known proprietary information from either IBM or RPI has been removed from the original talk.
Knowledge Representation in the Age of Deep Learning, Watson, and the Semanti... - James Hendler
IJCAI 16 keynote on the need to bring modern AI accomplishments of recent years into connection with the more traditional goals of symbolic AI (and vice versa).
Why Watson Won: A cognitive perspective - James Hendler
In this talk, we present how the Watson program, IBM's famous Jeopardy-playing computer, works (based on papers published by IBM); we look at some aspects of potential scoring approaches; and we examine how Watson compares to several well-known systems, offering some preliminary thoughts on using it in future artificial intelligence and cognitive science approaches.
Digital Archiving, The Semantic Web, and Modern AI - James Hendler
This was my keynote talk on accepting the "Spotlight Award" from the Association of Moving Image Archivists. The talk relates the needs of archiving, the use of semantic (web) metadata, and deep learning for archiving.
Keynote talk presented at the WebScience 2020 conference. Looks at the roots of the Web and Web Science, explores two possible futures, and asks what web scientists and others can do about it. Even starts with a quote from Charles Dickens.
The Unreasonable Effectiveness of Metadata - James Hendler
Invited talk at VIVO 2017 conference - explores the view of the semantic web as enriched metadata, and how that kind of information can be used in new and interesting ways.
On Beyond OWL: challenges for ontologies on the Web - James Hendler
The need for ontologies in the real world is manifest and increasing. On the Web, ontologies are everywhere — but OWL isn’t. In this talk, I look at some of the things that are not in OWL but are needed for the use of OWL in many Web domains. The talk explores some of the needs for ontologies on the Web in data integration, emerging technologies, and linked data applications – and asks where the features needed for these are in OWL. It ends with some challenges that the OWL community, and the broader ontology community, must meet if we are to see wider use of standard ontologies on the Web.
Social Machines - 2017 Update (University of Iowa) - James Hendler
This is an update to the talk entitled "Social Machines: the coming collision of artificial intelligence, social networks and humanity." It was presented as an ACM Distinguished Speaker lecture at the "University of Iowa Computing Conference" on 2017-02-24.
A 2015 update to the 2012 "Data Big and Broad" talk - http://www.slideshare.net/jahendler/data-big-and-broad-oxford-2012 - extends the coverage and brings it more into the context of recent "big data" work.
In this talk I review some of the early visions of the Semantic Web, some of the different views, and I follow through on a thread of how Semantic Web technology has been adopted in search engines (and other companies). I end with a challenge to the research community to keep pursuing this research, rather than letting industry take over the "low end" and keep new work from flourishing.
Presented to a webinar hosted by Nuance Inc, under the title "The Semantic Web: What it is and Why you should care" on 2/29/2012.
This talk presents a fast overview of the Semantic Web and recent application deployment in the space.
Facilitating Web Science Collaboration through Semantic Markup - James Hendler
These are the slides that accompanied the paper "Dominic DiFranzo, John S. Erickson, Marie Joan Kristine T. Gloria, Joanne S. Luciano, Deborah McGuinness, & James Hendler, The Web Observatory Extension: Facilitating Web Science Collaboration through Semantic Markup, Proc. WWW 2014 (Web Science Track), Seoul, Korea, 2014." They describe an extension to schema.org that can be used for sharing Web-related datasets and projects.
"Why the Semantic Web will Never Work" (note the quotes) - James Hendler
This talk refutes some criticisms of the semantic web, but also outlines some research challenges we must overcome if we are to ever realize Tim Berners-Lee's original Semantic Web vision.
HyperMembrane Structures for Open Source Cognitive Computing - Jack Park
Describes open source "cognitive computing" systems, specifically OpenSherlock, and a HyperMembrane structure, a kind of information fabric, for machine reading, literature-based discovery, and deep question answering. The platform is open source and uses ElasticSearch, topic maps, JSON, link-grammar parsing, and qualitative process models.
How to Build a Research Roadmap (avoiding tempting dead-ends) - Aaron Sloman
What's a Research Roadmap For?
Why do we need one?
How can we avoid the usual trap of making bold promises to do X, Y and Z, then hoping that our previous promises will not be remembered the next time we apply for funds to do X, Y and Z?
How can we produce a sensible, well informed roadmap?
Originally presented at the euCognition Research Roadmap discussion in Munich on 12 Jan 2007
This suggests a way to avoid tempting dead ends (repeating old promises that proved unrealistic) by examining many long term goals, including describing existing human and animal competences not yet achieved by robots, then working backwards systematically by investigating requirements for those competences, requirements for meeting those requirements, and so on. Instead of generating a single linear roadmap, this should produce a partially ordered network of intermediate targets, leading back to short term goals that may be achievable starting from where we are.
Such a roadmap will inevitably have mistakes: over-optimistic goals, missing preconditions, unrecognised opportunities. But if the work is done by many teams in a fully open manner, with as much collaboration as possible, it should be possible to make faster, deeper progress than can be achieved by brainstorming discussions of where we can get in a few years.
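The backward-chaining idea can be sketched in code. This is a toy illustration only (the goal names and structure are invented for the example, not taken from the talk): represent the network as a map from each goal to the sub-competences it requires, read off the short-term targets, and flatten the partial order into one admissible work sequence.

```python
# Toy goal network: each goal maps to the sub-goals it requires.
# An empty list means the goal is achievable from where we are.
requires = {
    "human-level vision": ["3D scene interpretation", "affordance detection"],
    "3D scene interpretation": ["robust edge grouping"],
    "affordance detection": ["object segmentation"],
    "robust edge grouping": [],
    "object segmentation": [],
}

def short_term_targets(requires):
    # Working backwards, the goals with no unmet requirements are the
    # places a roadmap can actually start.
    return sorted(g for g, deps in requires.items() if not deps)

def ordered_network(requires):
    # Kahn-style topological ordering: the partially ordered network
    # flattened into one admissible sequence of intermediate targets.
    order, done = [], set()
    pending = dict(requires)
    while pending:
        ready = sorted(g for g, deps in pending.items() if set(deps) <= done)
        for g in ready:
            order.append(g)
            done.add(g)
            del pending[g]
    return order
```

Each long-term goal then appears only after everything it depends on, which is exactly the "network of intermediate targets leading back to short term goals" structure described above.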
An updated "what is happening on the Semantic Web" presentation for 2010 - includes business use, government use, and some speculation on the current areas of excitement and development. A very accessible talk, not aimed solely at a technical audience.
A discussion of the nature of AI/ML as an empirical science, covering concepts in the field, how to position ourselves, how to plan research, what empirical methods in AI/ML are, and how to build up a theory of AI.
Virtuality, causation and the mind-body relationship - Aaron Sloman
Extends my previous introductions to virtual machines and their role both in artefacts and products of biological evolution. This attempts to correct various erroneous assumptions about computation, functionalism, supervenience, life, information, and causation. See also http://www.cs.bham.ac.uk/research/projects/cogaff/misc/vm-functionalism.html
Man’s dreams of ‘intelligences and robots’ go back thousands of years to the worship of gods and statues; mythologies; talismans and puppets; people, places and objects with supposed magical and (often) judgemental/punitive abilities. But it wasn’t until the electronic revolution, beginning around 1915 and accelerated by WWII, that we saw the realisation of two game-changing machines: Colossus (the decoding machine of Bletchley Park, 1943) and ENIAC (artillery computation and nuclear bomb design at the University of Pennsylvania, 1946).
And so in 1950 the modern AI movement was optimistically projecting that machines would be capable of ‘almost anything’ by 1960/70. Unfortunately, there was no understanding of the complexity to be addressed, and all the projections were wildly wrong, leading to a deep trough of disparagement and disillusionment lasting some 30 years. However, 70 years on, the original AI optimism and projections of what might be have at least been largely achieved, with AI outgunning humans at every board and card game, including Poker and Go, and of course at general knowledge, medical diagnosis, and image and information pattern recognition…
AI and the Researcher: ChatGPT and DALL-E in Scholarly Writing and Publishing - Erin Owens
The artificial intelligence tool ChatGPT has taken the world by storm, prompting concerns about student plagiarism. But AI text and image generators also pose ethical and legal conundrums for scholarly researchers. This session will delve into some of the emerging issues and developments that may affect faculty in scholarly writing and publishing.
Semantic Integration of Citizen Sensor Data and Multilevel Sensing: A compreh... - Amit Sheth
Amit Sheth, "Semantic Integration of Citizen Sensor Data and Multilevel Sensing: A comprehensive path towards event monitoring and situational awareness", keynote at "From E-Gov to Connected Governance: the Role of Cloud Computing, Web 2.0 and Web 3.0 Semantic Technologies", Falls Church, VA, February 17, 2009. http://semanticommunity.wik.is/
by Samantha Adams, Met Office.
Originally purely academic research fields, Machine Learning and AI are now definitely mainstream and frequently mentioned in the Tech media (and regular media too).
We’ve also got the explosion of Data Science, which encompasses these fields and more. There are a lot of interesting things going on, and a lot of positive as well as negative hype. The terms ML and AI are often used interchangeably, and techniques are also often described as being inspired by the brain.
In this talk I will explore the history and evolution of these fields, current progress, and the challenges in making artificial brains.
From the FreshTech 2017 conference by TechExeter
www.techexeter.uk
Manichean Progress: Positive and Negative States of the Art in Web-Scale Data... - Lewis Shepherd
Discussion of current Microsoft Research projects and prospects which help drive open innovation and agile experimentation via cloud-based services; and projects which aim at advancing the state-of-the-art in knowledge representation and reasoning under uncertainty at web scale. I also begin by discussing potential malign implications of mass automated implementations of linked-data systems, as functions of what governments (and users of public data) can/should/shouldn’t do in promoting mass activity.
Introduction to Artificial Intelligence and ML - bansalpra7
**Title: Understanding the Landscape of Artificial Intelligence: A Comprehensive Exploration**
**I. Introduction**
In recent decades, Artificial Intelligence (AI) has emerged as a transformative force, reshaping industries, influencing daily life, and pushing the boundaries of human capabilities. This comprehensive exploration delves into the multifaceted landscape of AI, encompassing its origins, key concepts, applications, ethical considerations, and future prospects.
**II. Historical Perspective**
AI's roots can be traced back to ancient history, where philosophers contemplated the nature of intelligence. However, it wasn't until the mid-20th century that AI as a field of study gained momentum. The influential Dartmouth Conference in 1956 marked the official birth of AI, with early pioneers like Alan Turing laying the theoretical groundwork.
**III. Foundations of AI**
Understanding AI requires grasping its foundational principles. Machine Learning (ML), a subset of AI, empowers machines to learn patterns and make decisions without explicit programming. Within ML, various approaches, such as supervised learning, unsupervised learning, and reinforcement learning, play crucial roles in shaping AI applications.
**IV. Types of Artificial Intelligence**
AI is not a monolithic entity; it spans a spectrum of capabilities. Narrow AI, also known as Weak AI, excels in specific tasks, like image recognition or language translation. In contrast, General AI, or Strong AI, would possess human-like intelligence across a wide range of tasks, a goal that remains a long-term aspiration.
**V. Applications of AI**
AI's impact is felt across diverse sectors. In healthcare, AI aids in diagnostics and personalized treatment plans. In finance, it enhances fraud detection and risk assessment. Self-driving cars exemplify AI in transportation, while virtual assistants like Siri and Alexa showcase its role in daily life. The convergence of AI with other technologies, such as the Internet of Things (IoT) and robotics, amplifies its transformative potential.
**VI. Machine Learning Algorithms**
The backbone of AI lies in its algorithms. Linear regression, decision trees, neural networks, and deep learning models are among the many tools in the ML toolkit. Exploring the mechanics of these algorithms reveals the intricacies of how AI processes information, learns from data, and makes predictions.
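To make the mechanics concrete, here is a minimal sketch of the first algorithm named above: linear regression fit by gradient descent. This is a toy illustration in pure Python; the data, learning rate, and iteration count are invented for the example.

```python
# Toy linear regression trained by gradient descent: learn w and b in
# y = w*x + b from example pairs -- the simplest case of "learning
# patterns from data and making predictions".
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 1 for x in xs]          # ground truth: w=2, b=1

w, b = 0.0, 0.0
lr, n = 0.05, len(xs)
for _ in range(5000):
    preds = [w * x + b for x in xs]
    # Gradients of mean squared error with respect to w and b.
    dw = (2 / n) * sum((p - y) * x for p, y, x in zip(preds, ys, xs))
    db = (2 / n) * sum(p - y for p, y in zip(preds, ys))
    w -= lr * dw
    b -= lr * db

print(round(w, 3), round(b, 3))       # converges close to 2.0 and 1.0
```

The same learn-from-examples loop, with richer models in place of `w*x + b`, underlies the decision trees, neural networks, and deep learning models mentioned above.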
Please download this SlideShare PPT, as it will give you access to all the YouTube and SlideShare streams that are embedded in this presentation. In this narrative PowerPoint, which connects to the work of others, I envision the future of humanity as influenced by technology.
This talks summarizes some of the main trends on the Semantic Web in the past year. It includes discussion of recent industrial trends, government data, and some Web 3.0 thoughts. This particular version was presented at a Korean Workshop (http://wsms2010.kaist.ac.kr/) in 2010.
Everything is an illusion. - Do we live in a computer simulation? - Harshal Hayatnagarkar
In the words of legendary science fiction writer Arthur C. Clarke, "Any sufficiently advanced technology is indistinguishable from magic." In the last few hundred years, we humans have made great progress in science and technology. With computing innovations, the pace of progress is accelerating, and will leave a lasting impact on us as a species.
Transhumanism is an intellectual movement to study this impact, and aims to enhance physical, psychological, and intellectual potential to leap into a posthuman era. In this talk, we will discuss a speculative idea from transhumanism, where even the magic is real.
Dark Data: A Data Scientist's Exploration of the Unknown by Rob Witoff, PyData ... - PyData
Modern data science is enabling NASA's engineers to uncover actionable information from our "dark" data coffers. From starting small to operating at scale, Rob will discuss applications in telemetry, workforce analytics, and liberating data from the Mars Rovers. Tools include IPython, Pandas, Boto, and more.
NYAI #27: Cognitive Architecture & Natural Language Processing w/ Dr. Catheri... - Maryam Farooq
For more AI talks, visit: nyai.co
These slides are from NYAI #27: Cognitive Architecture & Natural Language Processing w/ Dr. Catherine Havasi, which took place Tues, 12/18/19 at Kirkland & Ellis NYC.
[Speaker Bio] Dr. Catherine Havasi is a technology strategist, artificial intelligence researcher, and entrepreneur. In the late 90s, she co-founded the Common Sense Computing Initiative, or ConceptNet, the first crowd-sourced project for artificial intelligence and the largest open knowledge graph for language understanding. ConceptNet has played a role in thousands of AI projects and will be turning 20 next year. She has started several companies commercializing AI research, including Luminoso where she acts as Chief Strategy Officer. She is currently a visiting scientist at the MIT Media Lab where she works on computational creativity and previously directed the Digital Intuition group.
[Abstract] People who build everything from entertainment experiences to financial management face a dilemma: how can you scale what you’re building for broader consumption, yet maintain the personalization that makes it special? A fundamental tension exists between building something individualized, and scaling it to consumers such as visitors at a theme park, or gamers exploring the latest Zelda adventure. True disruption happens when we overcome the idea that one must sacrifice personalization to achieve mass production — like it has in advertising, recommendations, and web search.
Artificial Intelligence practitioners, especially in natural language understanding, dialogue, and cognitive modeling, face the same issue: how can we personalize our models for all audiences without relying on unscalable efforts such as writing specific rules, building dialogue trees, or designing knowledge graphs? Catherine Havasi believes we can remove this dichotomy and achieve “mass personalization.” In this session we’ll discuss how to understand domain text and build believable digital characters. We’ll talk about how adding a little common sense, cognitive architectures, and planning is making this all possible.
Cognitive computing is the simulation of human thought processes in a computerized model. Its goal is to create automated non-organic systems that are capable of solving real problems (business and personal) without requiring human assistance.
Similar to The Future of AI: Going Beyond Deep Learning, Watson, and the Semantic Web (20)
Knowing what AI Systems Don't know and Why it matters - James Hendler
A discussion of ChatGPT and some other examples with respect to accuracy and other issues - a general background talk for those interested in the subject.
Exploring the Boundaries of Artificial Intelligence (or "Modern AI") - James Hendler
A discussion of the strengths and limitations of some current AI systems including chatGPT and DALL-E. Originally presented at University of Leicester Feb 2023.
The original abstract, title and bio were generated by ChatGPT -- the first three slides show corrections. The original talk announcement included:
"Please note: The title, abstract and Hendler’s bio above were written by “GPT3,” a modern AI system. It contains information which is both correct and incorrect. That will be the topic of this talk."
Presentation at "International knowledge graph workshop" at KDD 2020. The short overview talk shows how we have moved from Semantic Web to Linked Data to Knowledge Graphs. We argue that the same "a little semantics goes a long way" principle from the early days of the Semantic Web still is needed today -- some lessons learned and steps ahead are outlined.
Capacity Building: Data Science in the University At Rensselaer Polytechnic ... - James Hendler
In this short talk, presented at the ITU's Capacity Building Symposium, I review some of the pedagogical innovation in data science happening at Rensselaer (RPI) and some aspects of teaching data science that are crucial to larger success.
Enhancing Precision Wellness with Knowledge Graphs and Semantic Analytics: O... - James Hendler
Talk presented at Bio-IT 2018 (machine learning track) - explores some approaches to overcoming challenges of using machine learning systems in healthcare applications.
This talk presents areas of investigation underway at the Rensselaer Institute for Data Exploration and Applications. First presented at Flipkart, Bangalore India, 3/2015.
The Rensselaer Institute for Data Exploration and Applications is addressing new modes of data exploration and integration to enhance the work of campus researchers (and beyond). This talk outlines the "data exploration" technologies being explored
The death of the Web has been prematurely reported -- the best is yet to come! In this talk, from the Kshitij at IIT Kharagpur 2012, I talk about what Web 3.0 will feature, and some thoughts on key technology trends on the Web.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
GridMate - End to end testing is a critical piece to ensure quality and avoid...ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex ProofsAlex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
The Future of AI: Going Beyond Deep Learning, Watson, and the Semantic Web
1. Tetherless World Constellation, RPI
The Future of AI: Going Beyond Deep Learning, Watson, and the Semantic Web
Jim Hendler
Tetherless World Professor of Computer, Web and Cognitive Sciences
Director, Institute for Data Exploration and Applications
Rensselaer Polytechnic Institute
http://www.cs.rpi.edu/~hendler
@jahendler (twitter)
Major talks at: http://www.slideshare.net/jahendler
2. Tetherless World Constellation, RPI
Knowledge and Learning
Knowledge representation in the age of Deep Learning, Watson, and the Semantic Web
Jim Hendler
Tetherless World Professor of Computer, Web and Cognitive Sciences
Director, Institute for Data Exploration and Applications
Rensselaer Polytechnic Institute
http://www.cs.rpi.edu/~hendler
Major talks at: http://www.slideshare.net/jahendler
4. Tetherless World Constellation, RPI
New Journal: Data Intelligence
Knowledge graph is one of the topics we are interested in; please consider submitting a paper! (handout in your conference bag)
5. Tetherless World Constellation, RPI
What has happened?
• Several important AI technologies have moved through "knees in the curve," bringing much of the attention to AI again:
– Deep Learning (e.g., AlphaGo, vision processing)
– Associative learning (e.g., Watson)
– Semantic Web (e.g., search and schema.org)
15. Tetherless World Constellation, RPI
Summary: AI has done some way cool stuff
• Deep Learning: neural learning from high-quality data
• Watson: associative learning from high-quality data
• Semantic Web/Knowledge Graph: graph link formation from extraction, clustering, and learning
but there are still problems…
19. Tetherless World Constellation, RPI
Why did the knowledge graph need "Human Judgments"?
Association ≠ Correctness
P. Mika, 2014, w/ permission
Michelangelo
Leonardo
Raphael
Donatello
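The association-is-not-correctness point can be seen in a toy co-occurrence counter: the four Renaissance artists' names co-occur just as strongly in Ninja Turtles text as in art-history text, so association alone cannot say which link is right — that took human judgments. A minimal illustration with made-up sentences (not Mika's actual data):

```python
from collections import Counter
from itertools import combinations

# Tiny made-up corpus: the same four names appear in art-history
# text and in pop-culture text, so a purely associative learner
# cannot tell "Renaissance painter" contexts from "Ninja Turtle" ones.
corpus = [
    "michelangelo leonardo raphael donatello renaissance artists",
    "michelangelo leonardo raphael donatello ninja turtles pizza",
]

cooccur = Counter()
for sentence in corpus:
    words = set(sentence.split())
    for a, b in combinations(sorted(words), 2):
        cooccur[(a, b)] += 1

# "donatello" is associated equally strongly with "renaissance"
# and with "turtles" -- association gives no way to decide which
# link is *correct*.
print(cooccur[("donatello", "renaissance")])  # 1
print(cooccur[("donatello", "turtles")])      # 1
```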
20. Tetherless World Constellation, RPI
Knowledge Representation?
• A knowledge representation (KR) is most fundamentally a surrogate, a substitute for the thing itself, used to enable an entity to determine consequences by thinking rather than acting, i.e., by reasoning about the world rather than taking action in it.
• It is a set of ontological commitments, i.e., an answer to the question: In what terms should I think about the world?
• It is a fragmentary theory of intelligent reasoning, expressed in terms of three components: (i) the representation's fundamental conception of intelligent reasoning; (ii) the set of inferences the representation sanctions; and (iii) the set of inferences it recommends.
• It is a medium for pragmatically efficient computation, i.e., the computational environment in which thinking is accomplished. One contribution to this pragmatic efficiency is supplied by the guidance a representation provides for organizing information so as to facilitate making the recommended inferences.
• It is a medium of human expression, i.e., a language in which we say things about the world.
R. Davis, H. Shrobe, P. Szolovits (1993)
23. Tetherless World Constellation, RPI
"Saying things about the world" does
How would you explain the difference between a duck and a cat to a child?
"If I was telling it to a kid, I'd probably say something like 'the cat has fur and four legs and goes meow, the duck is a bird and it swims and goes quack'."
Woof
26. Tetherless World Constellation, RPI
The challenge of "background knowledge"
What is the relationship between this man and this woman?
27. Tetherless World Constellation, RPI
AI systems coming along well…
What is the relationship between this man and this woman?
Deep learning produced a Scene Graph w/ relationships (Klawonn & Heims, 2018)
28. Tetherless World Constellation, RPI
But the challenges remain
What is the relationship between this man and this woman?
Deep learning produced a Scene Graph w/ relationships (Klawonn, 2018)
Seeing the bride adds significant information that cannot be easily learned w/o background knowledge
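One way to read the bride example: a learned scene graph supplies perceptual triples, and symbolic background knowledge licenses a far more specific relationship than the pixels alone can justify. A hypothetical sketch — the triples, predicate names, and rule below are illustrative inventions, not from the Klawonn work:

```python
# Perceptual triples a scene-graph model might emit for the photo.
scene = {
    ("man", "standing_next_to", "woman"),
    ("woman", "wearing", "bridal_gown"),
}

def infer(triples):
    """Apply a symbolic background-knowledge rule: a bridal gown
    signals a wedding, which upgrades "standing next to" into a
    much more informative relationship."""
    derived = set(triples)
    for (s, p, o) in triples:
        if p == "wearing" and o == "bridal_gown":
            derived.add((s, "role", "bride"))
    if (("woman", "role", "bride") in derived
            and ("man", "standing_next_to", "woman") in derived):
        derived.add(("man", "likely_spouse_of", "woman"))
    return derived

print(("man", "likely_spouse_of", "woman") in infer(scene))  # True
```

The point is not the trivial rule but the division of labor: the learner produces the graph, the background knowledge produces the inference.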
31. Tetherless World Constellation, RPI
KR: Surrogate knowledge?
Which could you sit in?
What is most likely to bite what?
Which one is most likely to become a computer scientist someday?
…
32. Tetherless World Constellation, RPI
"Surrogate" knowledge
Which could you sit in?
What is most likely to bite what?
Which one is most likely to become a computer scientist someday?
How would they go about doing it?
33. Tetherless World Constellation, RPI
KR: Recommended vs. Possible inference
Which one would you save if the house was on fire?
34. Tetherless World Constellation, RPI
Ethical AI systems need certainty
Which one would you save if the house was on fire?
Would you use a robot baby-sitter without knowing which of the three possibilities it would choose?
35. Tetherless World Constellation, RPI
Human-Aware AI
• Context is key
– AI learning systems still perform best in well-defined contexts (or trained situations, or where their document set is complete, etc.)
– Humans are good at recognizing context and deciding when extraneous factors don't make sense
• Or add extra "inferencing" (the bride example)
36. Tetherless World Constellation, RPI
The challenge
• If we want to implement KR systems on top of neural and associative learners, we have an issue:
– The numbers coming out of Deep Learning and associative graphs are not probabilities
– They don't necessarily ground in human-meaningful symbols
• "Sub-symbolic" learning …
• Association by clustering …
• Errorful extraction …
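The claim that these numbers are not probabilities is easy to demonstrate: a softmax over logits always sums to one, but rescaling the logits (a "temperature") changes the scores arbitrarily while the decision stays the same — so they are normalized scores, not calibrated probabilities of being correct. A stdlib-only sketch (not from the talk):

```python
import math

def softmax(logits):
    """Normalize raw scores into values that sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]

# Same ranking, wildly different "confidences" at different scales:
for temperature in (1.0, 0.1):
    scores = softmax([x / temperature for x in logits])
    print(max(scores))  # ~0.63 at T=1.0, ~0.99995 at T=0.1

# The argmax (the decision) never changes; only the scores do,
# which is why reading them as probabilities is unjustified
# without an explicit calibration step.
```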
37. Tetherless World Constellation, RPI
The challenges
• Can we avoid throwing out the reasoning baby with the grounding bathwater?
– Long-term planning
– Rules that need to be followed
– Human interaction
• Even if computers don't need to be symbolic communicators, WE DO!!!
– Background knowledge (context) is symbolic
38. Tetherless World Constellation, RPI
Human-AI interaction
• Evidence that "centaurs" win
– Teams of human(s) and computer(s) currently beat either alone at chess (Go centaurs underway)
– Anecdotal evidence that humans given Watson's top choices outperform Watson or a human alone at Jeopardy
– Medical study (diagnostic): a doctor with a computer outperformed just a doctor, just a computer, or two doctors
39. Tetherless World Constellation, RPI
And this matters!
"There was no rule about how long we were allowed to think before we reported a strike … but we knew that every second of procrastination took away valuable time, that the Soviet Union's military and political leadership needed to be informed without delay. All I had to do was to reach for the phone; to raise the direct line to our top commanders — but I couldn't move. I felt like I was sitting on a hot frying pan … when people start a war, they don't start it with only five missiles …"
Stanislav Petrov: The man who saved the world
We must all strive to be like Petrov and learn to trust the combination of AI training and human intuition.