The methodological stages of embodied Artificial Intelligence are presented. We systematically broaden the concept of AI until, finally, we can approach systems related to Artificial Life.
This document discusses machine learning interpretability and explainability. It begins with introducing the problem of making black box machine learning models more interpretable and defining key concepts. Next, it reviews popular interpretability methods like LIME, LRP, DeepLIFT and SHAP. It then describes the authors' proposed model CAMEL, which uses clustering to learn local interpretable models without sampling. The document concludes by discussing evaluation of interpretability models and important considerations like the tradeoff between performance and interpretability.
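The local-surrogate idea behind methods like LIME can be sketched in a few lines: perturb an instance, query the black box, and fit a distance-weighted linear model to the responses. The snippet below is a minimal illustration under assumed toy inputs (the `black_box` function, kernel width, and sample count are all made up for the example); it is not the actual LIME library or the authors' CAMEL model:

```python
import math
import random

def black_box(x1, x2):
    # Stand-in for an opaque model: strongly nonlinear in x1, weakly in x2.
    return 1.0 if x1 * x1 + 0.1 * x2 > 0.5 else 0.0

def lime_style_explanation(instance, n_samples=500, width=0.3):
    """Fit a locally weighted linear surrogate around `instance`.

    Perturbs the instance, queries the black box, and solves the
    weighted least-squares fit by hand (2 features + intercept).
    """
    random.seed(0)
    X, y, w = [], [], []
    for _ in range(n_samples):
        p = [v + random.gauss(0, width) for v in instance]
        dist2 = sum((a - b) ** 2 for a, b in zip(p, instance))
        X.append([1.0] + p)                          # intercept term
        y.append(black_box(*p))
        w.append(math.exp(-dist2 / (width * width)))  # RBF proximity weight
    # Weighted normal equations (X^T W X) beta = X^T W y,
    # solved by Gaussian elimination on the 3x3 system.
    k = 3
    A = [[sum(w[i] * X[i][r] * X[i][c] for i in range(n_samples)) for c in range(k)]
         for r in range(k)]
    b = [sum(w[i] * X[i][r] * y[i] for i in range(n_samples)) for r in range(k)]
    for col in range(k):                              # forward elimination
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * c for a, c in zip(A[r], A[col])]
            b[r] -= f * b[col]
    beta = [0.0] * k                                  # back substitution
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta[1:]                                   # per-feature local weights

coefs = lime_style_explanation([0.7, 0.0])
print(coefs)  # x1's local weight should dominate x2's near this point
```

Because the decision boundary near (0.7, 0.0) depends almost entirely on x1, the surrogate's x1 coefficient comes out much larger than x2's, which is exactly the kind of local feature attribution these methods produce.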
A brief introduction is given to Rapid Single Flux Quantum (RSFQ) electronics; it can be useful for both physicists and electrical engineers. The idea of a classical superconducting computer is explained, and such a computer also has the potential to be integrated with a superconducting quantum computer.
An introduction to multimodal language models with LLaVA: what multimodal models are, how they work, the LLaVA papers and models, and an image classification experiment.
We know that we are in an AI take-off; what is new is that we are also in a math take-off. A math take-off is using math as a formal language, beyond the human-facing math-as-math use case, for AI to interface with the computational infrastructure. The message of generative AI and LLMs (large language models like GPT) is not that they speak natural language to humans, but that they speak formal languages (programmatic code, mathematics, physics) to the computational infrastructure. This implies the ability to create a much larger problem-solving apparatus for humanity-benefitting applications in biology, energy, and space science, though not without risk.
This presentation covers
- Introduction to Artificial Intelligence
- Philosophy of A.I.
- Real-life Examples
- Major Applications of A.I.
- A.I.: Need of the Hour
- Drawbacks
- Vigyanam, RKGIT, Ghaziabad
The document provides an introduction to physics-informed machine learning. It discusses the limitations of traditional modeling approaches and machine learning alone. Physics-informed machine learning aims to embed physical laws and constraints into machine learning models. There are three main approaches: incorporating observational biases, inductive biases from physics, and learning biases like physics-informed neural networks (PINNs). PINNs have been applied to problems with complex geometries and different physical laws but can have convergence issues that require further research. Overall, physics-informed machine learning shows promise for improving simulations but many open problems remain.
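The "learning bias" approach that PINNs embody can be illustrated with a toy loss: a data term for known conditions plus a residual term that penalizes violations of the governing equation. The sketch below, assuming a quadratic trial solution for the ODE du/dt = -u chosen purely for illustration, shows the loss construction with no neural network or autodiff involved:

```python
def physics_informed_loss(params, n_points=50):
    """PINN-style loss for a quadratic trial solution
    u(t) = c0 + c1*t + c2*t^2 of du/dt = -u on [0, 1] with u(0) = 1.

    data term:    (u(0) - 1)^2              -- fits the known condition
    physics term: mean of (u'(t) + u(t))^2  -- penalizes ODE violation
                                               at collocation points
    """
    c0, c1, c2 = params
    u = lambda t: c0 + c1 * t + c2 * t * t
    du = lambda t: c1 + 2 * c2 * t
    ts = [i / (n_points - 1) for i in range(n_points)]
    physics = sum((du(t) + u(t)) ** 2 for t in ts) / n_points
    data = (u(0.0) - 1.0) ** 2
    return data + physics

# Taylor expansion of the true solution e^{-t} is 1 - t + t^2/2:
good = physics_informed_loss((1.0, -1.0, 0.5))
bad = physics_informed_loss((1.0, 1.0, 0.0))   # wrong sign on the slope
print(good < bad)  # the physics residual prefers the Taylor-like candidate
```

In a real PINN, `u` would be a neural network, `du` would come from automatic differentiation, and the same composite loss would be minimized by gradient descent; the convergence issues the document mentions arise from balancing the data and physics terms.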
Artificial intelligence is changing society in both positive and negative ways. Positively, AI can improve work efficiency, enhance human abilities by automating repetitive tasks, and liberate time previously spent commuting. However, AI also risks replacing human jobs such as manufacturing workers. Additionally, while AI may help solve crimes, its use raises privacy concerns that legal systems are working to address.
This document provides an overview of approaches to artificial intelligence including acting humanly through the Turing test, thinking humanly through cognitive modeling, thinking rationally through Aristotelian logic and syllogisms, and acting rationally as autonomous agents. It defines AI as "the science and engineering of making intelligent machines" and notes key characteristics include natural language processing, knowledge representation, automated reasoning, machine learning, computer vision, and robotics.
The document provides an overview of the state of natural language processing (NLP) and Amazon's NLP offering, Amazon Comprehend. It discusses the evolution of NLP from rule-based systems to modern neural models like BERT and the Transformer, and the increasing complexity of NLP tasks. The document also describes Amazon Comprehend's capabilities in areas like sentiment analysis, named entity recognition, keyphrase extraction, and language detection.
This tutorial extensively covers the definitions, nuances, challenges, and requirements for the design of interpretable and explainable machine learning models and systems in healthcare. We discuss many settings in which interpretable machine learning models are needed in healthcare and how they should be deployed. Additionally, we explore the landscape of recent advances that address the challenges of model interpretability in healthcare, and describe how one would go about choosing the right interpretable machine learning algorithm for a given problem in healthcare.
The document provides an overview of the history and development of artificial intelligence (AI). Some key points:
- The field of AI was established in 1956 at the Dartmouth Conference where researchers proposed using computers to simulate human intelligence.
- Early milestones included programs that played games like checkers and proved mathematical theorems. Research focused on symbolic and knowledge-based approaches.
- In the 1980s, expert systems flourished but funding declined amid doubts about progress, known as an "AI winter." Subsymbolic approaches using neural networks also emerged.
- Modern AI incorporates both symbolic and subsymbolic techniques, with successes in games, robotics, machine learning, and other domains. Knowledge representation and common-sense reasoning remain open challenges.
This presentation provides an overview of artificial intelligence (AI), including its definition, introduction, foundations, advantages, applications, and limitations. AI is defined as the intelligence demonstrated by machines and the branch of computer science which aims to create intelligent agents. The presentation traces the foundations of AI through various fields such as philosophy, mathematics, neuroscience, and computer engineering. It also outlines the advantages of AI, such as reducing errors and exploring new possibilities, and the potential disadvantages like overreliance on AI and job losses. The presentation concludes that while AI tools can help solve problems, they cannot replace human capabilities.
The document discusses explainability and bias in machine learning/AI models. It covers several topics:
1. Why explainability of models is important, including for laypeople using models and potential legal needs for explanations of decisions.
2. Methods for explainability including using interpretable models directly and post-hoc explainability methods like LIME and SHAP which provide feature attributions.
3. Issues with bias in machine learning models and different definitions of fairness. It also discusses techniques for measuring and mitigating bias, such as reweighting data or using adversarial learning.
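One of the mitigation techniques listed, reweighting the data, can be made concrete. The snippet below is a minimal sketch of Kamiran-and-Calders-style reweighing, which assigns instance weights so that group membership becomes statistically independent of the label; the toy groups and labels are invented for the example:

```python
from collections import Counter

def reweighing(groups, labels):
    """Instance weights that decouple group membership from the label:

        weight(g, y) = P(g) * P(y) / P(g, y)

    Underrepresented (group, label) combinations get weights above 1,
    overrepresented ones get weights below 1.
    """
    n = len(labels)
    pg = Counter(groups)                  # counts per group
    py = Counter(labels)                  # counts per label
    pgy = Counter(zip(groups, labels))    # joint counts
    return [(pg[g] / n) * (py[y] / n) / (pgy[(g, y)] / n)
            for g, y in zip(groups, labels)]

# Group "a" receives the positive label 3x as often as group "b":
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
labels = [ 1,   1,   1,   0,   1,   0,   0,   0 ]
w = reweighing(groups, labels)
print(w)  # (a,1) and (b,0) are downweighted; (a,0) and (b,1) upweighted
```

Training on these weights makes the weighted positive rate identical across groups (0.5 for both "a" and "b" here), which is the demographic-parity notion of fairness mentioned above; other fairness definitions require different interventions.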
Artificial intelligence (AI) is the human-like intelligence exhibited by machines or software. The AI field is interdisciplinary, in which a number of sciences and professions converge, including computer science, psychology, linguistics, philosophy and neuroscience, as well as other specialized fields such as artificial psychology. Major AI researchers and textbooks define the field as "the study and design of intelligent agents",[1] where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success.[2] John McCarthy, who coined the term in 1955,[3] defines it as "the science and engineering of making intelligent machines".[4]
AI research is highly technical and specialised, and is deeply divided into subfields that often fail to communicate with each other.[5] Some of the division is due to social and cultural factors: subfields have grown up around particular institutions and the work of individual researchers. AI research is also divided by several technical issues. Some subfields focus on the solution of specific problems. Others focus on one of several possible approaches or on the use of a particular tool or towards the accomplishment of particular applications.
The central problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing (communication), perception and the ability to move and manipulate objects.[6] General intelligence (or "strong AI") is still among the field's long term goals.[7] Currently popular approaches include statistical methods, computational intelligence and traditional symbolic AI. There are an enormous number of tools used in AI, including versions of search and mathematical optimization, logic, methods based on probability and economics, and many others.
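As a concrete taste of the search tools mentioned, here is a minimal A* implementation on a toy grid (the maze itself is invented for illustration). With f(n) = g(n) + h(n) and an admissible Manhattan-distance heuristic, the returned cost is optimal:

```python
import heapq

def a_star(grid, start, goal):
    """A* search on a 4-connected grid; '#' cells are walls.

    Returns the length of a shortest path from start to goal,
    or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start)]          # (f, g, cell)
    best_g = {start: 0}
    while frontier:
        f, g, cell = heapq.heappop(frontier)
        if cell == goal:
            return g
        if g > best_g.get(cell, float("inf")):
            continue                            # stale queue entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#":
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

maze = ["..#.",
        "..#.",
        "....",
        ".#.."]
print(a_star(maze, (0, 0), (0, 3)))  # detours around the wall: 7 steps
```

The same skeleton underlies many of the AI tools listed above: uniform-cost search is A* with h = 0, and greedy best-first search keeps only the heuristic term.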
Artificial intelligence (AI) is the intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans. This document provides an overview of AI, including its history beginning in 1943, main branches such as logical AI and pattern recognition, and applications like expert systems, speech recognition, computer vision, and robotics. The advantages of AI are discussed, such as improving lives and doing dangerous jobs, but also potential disadvantages like unemployment and encouraging laziness in humans. The future of AI could include personal robots but also risks of robots being hacked or developing anti-social objectives.
The PPT Sujoy and I made for Psi Phi (an inter-school competition held by our school). Our topic was Artificial Intelligence.
Credits:
Theme Images from ESET NOD32 (My Antivirus of Choice)
Backgrounds from SwimChick.net (Amazing designs here)
Credits Image from Full Metal Alchemist (One of my favorite Anime).
This document provides an overview of artificial intelligence (AI) including definitions of different types of AI, a brief history of AI, potential application fields and use cases, and the future outlook for AI. It defines AI as ranging from everyday applications to self-driving cars. It discusses narrow AI, general AI, and superintelligence. The document also summarizes key milestones in the development of AI from 1955 to the present and potential opportunities and challenges of AI including automation, ethics, and politics. It provides examples of Austrian AI startups and their technologies. The outlook suggests that human-level AI may be achieved by 2040 and superintelligence by 2060, with impacts on robotics, climate change, human enhancement, and autonomous systems.
This presentation will give you a brief overview of the artificial intelligence concept, with the contents mentioned below:
- What is AI?
- Need for AI
- Languages used for AI development
- History of AI
- Types of AI
- Agents in AI
- How AI works
- Technologies of AI
- Applications of AI
Responsible AI in Industry: Practical Challenges and Lessons Learned (Krishnaram Kenthapadi)
How do we develop machine learning models and systems taking fairness, accuracy, explainability, and transparency into account? How do we protect the privacy of users when building large-scale AI based systems? Model fairness and explainability and protection of user privacy are considered prerequisites for building trust and adoption of AI systems in high stakes domains such as hiring, lending, and healthcare. We will first motivate the need for adopting a “fairness, explainability, and privacy by design” approach when developing AI/ML models and systems for different consumer and enterprise applications from the societal, regulatory, customer, end-user, and model developer perspectives. We will then focus on the application of responsible AI techniques in practice through industry case studies. We will discuss the sociotechnical dimensions and practical challenges, and conclude with the key takeaways and open challenges.
Artificial Intelligence is increasingly playing an integral role in determining our day-to-day experiences. Moreover, with proliferation of AI based solutions in areas such as hiring, lending, criminal justice, healthcare, and education, the resulting personal and professional implications of AI are far-reaching. The dominant role played by AI models in these domains has led to a growing concern regarding potential bias in these models, and a demand for model transparency and interpretability. In addition, model explainability is a prerequisite for building trust and adoption of AI systems in high stakes domains requiring reliability and safety such as healthcare and automated transportation, as well as critical industrial applications with significant economic implications such as predictive maintenance, exploration of natural resources, and climate change modeling.
As a consequence, AI researchers and practitioners have focused their attention on explainable AI to help them better trust and understand models at scale. The challenges for the research community include (i) defining model explainability, (ii) formulating explainability tasks for understanding model behavior and developing solutions for these tasks, and finally (iii) designing measures for evaluating the performance of models in explainability tasks.
In this tutorial, we will first motivate the need for model interpretability and explainability in AI from societal, legal, customer/end-user, and model developer perspectives. [Note: Due to time constraints, we will not focus on techniques/tools for providing explainability as part of AI/ML systems.] Then, we will focus on the real-world application of explainability techniques in industry, wherein we present practical challenges / implications for using explainability techniques effectively and lessons learned from deploying explainable models for several web-scale machine learning and data mining applications. We will present case studies across different companies, spanning application domains such as search and recommendation systems, sales, lending, and fraud detection. Finally, based on our experiences in industry, we will identify open problems and research directions for the research community.
These slides were presented at a meetup in Kansas City by Bahador Khaleghi of H2O.ai.
More details can be viewed here: https://www.meetup.com/Kansas-City-Artificial-Intelligence-Deep-Learning/events/265662978/
[Video recording available at https://www.youtube.com/playlist?list=PLewjn-vrZ7d3x0M4Uu_57oaJPRXkiS221]
Artificial Intelligence is increasingly playing an integral role in determining our day-to-day experiences. Moreover, with proliferation of AI based solutions in areas such as hiring, lending, criminal justice, healthcare, and education, the resulting personal and professional implications of AI are far-reaching. The dominant role played by AI models in these domains has led to a growing concern regarding potential bias in these models, and a demand for model transparency and interpretability. In addition, model explainability is a prerequisite for building trust and adoption of AI systems in high stakes domains requiring reliability and safety such as healthcare and automated transportation, and critical industrial applications with significant economic implications such as predictive maintenance, exploration of natural resources, and climate change modeling.
As a consequence, AI researchers and practitioners have focused their attention on explainable AI to help them better trust and understand models at scale. The challenges for the research community include (i) defining model explainability, (ii) formulating explainability tasks for understanding model behavior and developing solutions for these tasks, and finally (iii) designing measures for evaluating the performance of models in explainability tasks.
In this tutorial, we present an overview of model interpretability and explainability in AI, key regulations / laws, and techniques / tools for providing explainability as part of AI/ML systems. Then, we focus on the application of explainability techniques in industry, wherein we present practical challenges / guidelines for effectively using explainability techniques and lessons learned from deploying explainable models for several web-scale machine learning and data mining applications. We present case studies across different companies, spanning application domains such as search & recommendation systems, hiring, sales, and lending. Finally, based on our experiences in industry, we identify open problems and research directions for the data mining / machine learning community.
Introduction to artificial intelligence
Artificial intelligence (AI) is intelligence demonstrated by machines, unlike the natural intelligence displayed by humans and animals, which involves consciousness and emotionality. The distinction between the former and the latter categories is often revealed by the acronym chosen. 'Strong' AI is usually labelled as AGI (Artificial General Intelligence) while attempts to emulate 'natural' intelligence have been called ABI (Artificial Biological Intelligence). Leading AI textbooks define the field as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[3] Colloquially, the term "artificial intelligence" is often used to describe machines (or computers) that mimic "cognitive" functions that humans associate with the human mind, such as "learning" and "problem solving"
Introduction to Interpretable Machine Learning (Nguyen Giang)
This document discusses interpretable machine learning and explainable AI. It begins with definitions of key terms and an overview of interpretable methods. Deep learning models are often treated as "black boxes" that are difficult to interpret. Interpretability can be achieved by using inherently interpretable models like linear models or decision trees, adding attention mechanisms, or interpreting models before, during or after building them. Later sections discuss specific interpretable techniques like understanding data through examples, MMD-Critic for learning prototypes and criticisms, and visualizing convolutional neural networks to understand predictions. The document emphasizes the importance of interpretability and explains several approaches to make machine learning models more transparent to humans.
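The prototype half of the MMD-Critic idea mentioned above can be sketched without any library: greedily pick the examples whose empirical distribution best matches the data under the maximum mean discrepancy (MMD). The code below is a naive illustration on a made-up two-cluster dataset, not the authors' implementation:

```python
import math

def rbf(x, y, gamma=1.0):
    """Gaussian (RBF) kernel between two points."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def mmd2(X, P, gamma=1.0):
    """Squared maximum mean discrepancy between data X and prototypes P."""
    kxx = sum(rbf(a, b, gamma) for a in X for b in X) / (len(X) ** 2)
    kxp = sum(rbf(a, p, gamma) for a in X for p in P) / (len(X) * len(P))
    kpp = sum(rbf(p, q, gamma) for p in P for q in P) / (len(P) ** 2)
    return kxx - 2 * kxp + kpp

def greedy_prototypes(X, m, gamma=1.0):
    """Greedily pick m points whose empirical distribution best
    matches X under MMD -- the prototype step of MMD-Critic."""
    chosen = []
    for _ in range(m):
        best = min((p for p in X if p not in chosen),
                   key=lambda p: mmd2(X, chosen + [p], gamma))
        chosen.append(best)
    return chosen

# Two well-separated clusters; with m=2 the prototypes should
# land one in each cluster, summarizing the dataset.
data = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),
        (5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]
protos = greedy_prototypes(data, 2)
print(protos)
```

Criticisms, the other half of the method, are the points *worst* represented by the chosen prototypes, and would be found by maximizing a similar witness-function score rather than minimizing the MMD.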
Introduction To Artificial Intelligence Powerpoint Presentation Slides (SlideTeam)
Introduction to Artificial Intelligence is for mid-level managers, giving information about what AI is, AI levels, types of AI, and where AI is used. You can also learn the difference between AI vs. machine learning vs. deep learning, to understand expert systems better for business growth. https://bit.ly/2V0reNa
This document provides an overview and syllabus for an Artificial Intelligence course. It introduces the instructor, Dr. Zulfiqar Ali, and provides contact information. It lists the primary textbook as Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig and notes other reference materials. The course aims to provide understanding of fundamental AI techniques like agents, search, knowledge representation, and planning under uncertainty. It outlines the topics to be covered in each section of the course.
It is a technology and a branch of computer science that studies and develops intelligent machines and software. Major AI researchers and textbooks define the field as "the study and design of intelligent agents", where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1955, defines it as "the science and engineering of making intelligent machines".
This document provides an overview of approaches to artificial intelligence including acting humanly through the Turing test, thinking humanly through cognitive modeling, thinking rationally through Aristotelian logic and syllogisms, and acting rationally as autonomous agents. It defines AI as "the science and engineering of making intelligent machines" and notes key characteristics include natural language processing, knowledge representation, automated reasoning, machine learning, computer vision, and robotics.
The document provides an overview of the state of natural language processing (NLP) and Amazon's NLP offering Amazon Comprehend. It discusses the evolution of NLP from rule-based systems to modern neural models like BERT and Transformer and the increasing complexity of NLP tasks. The document also describes Amazon Comprehend's capabilities in areas like sentiment analysis, named entity recognition, keyphrase extraction, and language detection.
This tutorial extensively covers the definitions, nuances, challenges, and requirements for the design of interpretable and explainable machine learning models and systems in healthcare. We discuss many uses in which interpretable machine learning models are needed in healthcare and how they should be deployed. Additionally, we explore the landscape of recent advances to address the challenges model interpretability in healthcare and also describe how one would go about choosing the right interpretable machine learnig algorithm for a given problem in healthcare.
The document provides an overview of the history and development of artificial intelligence (AI). Some key points:
- The field of AI was established in 1956 at the Dartmouth Conference where researchers proposed using computers to simulate human intelligence.
- Early milestones included programs that played games like checkers and proved mathematical theorems. Research focused on symbolic and knowledge-based approaches.
- In the 1980s, expert systems flourished but funding declined amid doubts about progress, known as an "AI winter." Subsymbolic approaches using neural networks also emerged.
- Modern AI incorporates both symbolic and subsymbolic techniques, with successes in games, robotics, machine learning and other domains. Knowledge representation and common-sense reasoning
This presentation provides an overview of artificial intelligence (AI), including its definition, introduction, foundations, advantages, applications, and limitations. AI is defined as the intelligence demonstrated by machines and the branch of computer science which aims to create intelligent agents. The presentation traces the foundations of AI through various fields such as philosophy, mathematics, neuroscience, and computer engineering. It also outlines the advantages of AI, such as reducing errors and exploring new possibilities, and the potential disadvantages like overreliance on AI and job losses. The presentation concludes that while AI tools can help solve problems, they cannot replace human capabilities.
The document discusses explainability and bias in machine learning/AI models. It covers several topics:
1. Why explainability of models is important, including for laypeople using models and potential legal needs for explanations of decisions.
2. Methods for explainability including using interpretable models directly and post-hoc explainability methods like LIME and SHAP which provide feature attributions.
3. Issues with bias in machine learning models and different definitions of fairness. It also discusses techniques for measuring and mitigating bias, such as reweighting data or using adversarial learning.
Artificial intelligence (AI) is the human-like intelligence exhibited by machines or software. The AI field is interdisciplinary, in which a number of sciences and professions converge, including computer science, psychology, linguistics, philosophy and neuroscience, as well as other specialized fields such as artificial psychology. Major AI researchers and textbooks define the field as "the study and design of intelligent agents",[1] where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success.[2] John McCarthy, who coined the term in 1955,[3] defines it as "the science and engineering of making intelligent machines".[4]
AI research is highly technical and specialised, and is deeply divided into subfields that often fail to communicate with each other.[5] Some of the division is due to social and cultural factors: subfields have grown up around particular institutions and the work of individual researchers. AI research is also divided by several technical issues. Some subfields focus on the solution of specific problems. Others focus on one of several possible approaches or on the use of a particular tool or towards the accomplishment of particular applications.
The central problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing (communication), perception and the ability to move and manipulate objects.[6] General intelligence (or "strong AI") is still among the field's long term goals.[7] Currently popular approaches include statistical methods, computational intelligence and traditional symbolic AI. There are an enormous number of tools used in AI, including versions of search and mathematical optimization, logic, methods based on probability and economics, and many others.
Artificial intelligence (AI) is the intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans. This document provides an overview of AI, including its history beginning in 1943, main branches such as logical AI and pattern recognition, and applications like expert systems, speech recognition, computer vision, robotics. The advantages of AI are discussed, such as improving lives and doing dangerous jobs, but also potential disadvantages like unemployment and enhancing laziness in humans. The future of AI could include personal robots but also risks of robots being hacked or developing anti-social objectives.
The ppt Sujoy and I made for the Psi Phi ( An Inter School Competition held by our School). Our Topic was Artificial Intelligence.
Credits:
Theme Images from ESET NOD32 (My Antivirus of Choice)
Backgrounds from SwimChick.net (Amazing designs here)
Credits Image from Full Metal Alchemist (One of my favorite Anime).
This document provides an overview of artificial intelligence (AI) including definitions of different types of AI, a brief history of AI, potential application fields and use cases, and the future outlook for AI. It defines AI as ranging from everyday applications to self-driving cars. It discusses narrow AI, general AI, and superintelligence. The document also summarizes key milestones in the development of AI from 1955 to the present and potential opportunities and challenges of AI including automation, ethics, and politics. It provides examples of Austrian AI startups and their technologies. The outlook suggests that human-level AI may be achieved by 2040 and superintelligence by 2060 with impacts on robotics, climate change, human enhancement, and autonomous
This presentation will give you a brief about the Artificial intelligence concept with the below-mentioned contents
- What is AI?
- Need for AI
- Languages used for AI development
- History of AI
- Types of AI
- Agents in AI
- How AI works
- Technologies of AI
- Application of AI
Responsible AI in Industry: Practical Challenges and Lessons LearnedKrishnaram Kenthapadi
How do we develop machine learning models and systems taking fairness, accuracy, explainability, and transparency into account? How do we protect the privacy of users when building large-scale AI based systems? Model fairness and explainability and protection of user privacy are considered prerequisites for building trust and adoption of AI systems in high stakes domains such as hiring, lending, and healthcare. We will first motivate the need for adopting a “fairness, explainability, and privacy by design” approach when developing AI/ML models and systems for different consumer and enterprise applications from the societal, regulatory, customer, end-user, and model developer perspectives. We will then focus on the application of responsible AI techniques in practice through industry case studies. We will discuss the sociotechnical dimensions and practical challenges, and conclude with the key takeaways and open challenges.
Artificial Intelligence is increasingly playing an integral role in determining our day-to-day experiences. Moreover, with proliferation of AI based solutions in areas such as hiring, lending, criminal justice, healthcare, and education, the resulting personal and professional implications of AI are far-reaching. The dominant role played by AI models in these domains has led to a growing concern regarding potential bias in these models, and a demand for model transparency and interpretability. In addition, model explainability is a prerequisite for building trust and adoption of AI systems in high stakes domains requiring reliability and safety such as healthcare and automated transportation, as well as critical industrial applications with significant economic implications such as predictive maintenance, exploration of natural resources, and climate change modeling.
As a consequence, AI researchers and practitioners have focused their attention on explainable AI to help them better trust and understand models at scale. The challenges for the research community include (i) defining model explainability, (ii) formulating explainability tasks for understanding model behavior and developing solutions for these tasks, and finally (iii) designing measures for evaluating the performance of models in explainability tasks.
In this tutorial, we will first motivate the need for model interpretability and explainability in AI from societal, legal, customer/end-user, and model developer perspectives. [Note: Due to time constraints, we will not focus on techniques/tools for providing explainability as part of AI/ML systems.] Then, we will focus on the real-world application of explainability techniques in industry, wherein we present practical challenges / implications for using explainability techniques effectively and lessons learned from deploying explainable models for several web-scale machine learning and data mining applications. We will present case studies across different companies, spanning application domains such as search and recommendation systems, sales, lending, and fraud detection. Finally, based on our experiences in industry, we will identify open problems and research directions for the research community.
These slides were presented at a meetup in Kansas City by Bahador Khaleghi of H2O.ai.
More details can be viewed here: https://www.meetup.com/Kansas-City-Artificial-Intelligence-Deep-Learning/events/265662978/
[Video recording available at https://www.youtube.com/playlist?list=PLewjn-vrZ7d3x0M4Uu_57oaJPRXkiS221]
In this tutorial, we present an overview of model interpretability and explainability in AI, key regulations / laws, and techniques / tools for providing explainability as part of AI/ML systems. Then, we focus on the application of explainability techniques in industry, wherein we present practical challenges / guidelines for effectively using explainability techniques and lessons learned from deploying explainable models for several web-scale machine learning and data mining applications. We present case studies across different companies, spanning application domains such as search & recommendation systems, hiring, sales, and lending. Finally, based on our experiences in industry, we identify open problems and research directions for the data mining / machine learning community.
Introduction to artificial intelligence
Artificial intelligence (AI) is intelligence demonstrated by machines, unlike the natural intelligence displayed by humans and animals, which involves consciousness and emotionality. The distinction between the former and the latter categories is often revealed by the acronym chosen: 'strong' AI is usually labelled AGI (Artificial General Intelligence), while attempts to emulate 'natural' intelligence have been called ABI (Artificial Biological Intelligence). Leading AI textbooks define the field as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[3] Colloquially, the term "artificial intelligence" is often used to describe machines (or computers) that mimic "cognitive" functions that humans associate with the human mind, such as "learning" and "problem solving".
Introduction to Interpretable Machine Learning, by Nguyen Giang
This document discusses interpretable machine learning and explainable AI. It begins with definitions of key terms and an overview of interpretable methods. Deep learning models are often treated as "black boxes" that are difficult to interpret. Interpretability can be achieved by using inherently interpretable models like linear models or decision trees, adding attention mechanisms, or interpreting models before, during or after building them. Later sections discuss specific interpretable techniques like understanding data through examples, MMD-Critic for learning prototypes and criticisms, and visualizing convolutional neural networks to understand predictions. The document emphasizes the importance of interpretability and explains several approaches to make machine learning models more transparent to humans.
Introduction To Artificial Intelligence Powerpoint Presentation Slides, by SlideTeam
Introduction to Artificial Intelligence is for mid-level managers, giving information about what AI is, AI levels, types of AI, and where AI is used. You can also learn the difference between AI vs Machine Learning vs Deep Learning to understand expert systems in a better way for business growth. https://bit.ly/2V0reNa
This document provides an overview and syllabus for an Artificial Intelligence course. It introduces the instructor, Dr. Zulfiqar Ali, and provides contact information. It lists the primary textbook as Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig and notes other reference materials. The course aims to provide understanding of fundamental AI techniques like agents, search, knowledge representation, and planning under uncertainty. It outlines the topics to be covered in each section of the course.
It is technology and a branch of computer science that studies and develops intelligent machines and software. Major AI researchers and textbooks define the field as "the study and design of intelligent agents", where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1955, defines it as "The science and engineering of making intelligent machines".
The document discusses artificial intelligence and provides details about:
- The goals of AI including deduction, reasoning, problem solving, knowledge representation, planning, natural language processing, motion and manipulation, perception, and social intelligence.
- The history and origins of AI research dating back to the 1950s.
- Popular AI programming languages like Lisp and how it is well suited for knowledge representation.
- Categories of AI approaches including conventional symbolic AI and computational intelligence methods.
- Applications of AI in fields like medicine, industry, games, speech recognition, natural language understanding, computer vision, and expert systems.
This document provides an overview of the history and use of artificial intelligence on computer systems. It discusses:
1) The origins of AI research beginning in the 1940s and 1950s with pioneers like Alan Turing and the first conference on AI at Dartmouth College in 1956.
2) The development of AI through boom periods in the 1960s with significant government funding, and bust periods in the 1970s and 1980s due to limitations and funding cuts.
3) Recent advances in AI from the 1990s to present using techniques like deep learning, access to large datasets, and increasing computational power which have led to applications in areas like logistics, data mining, and medical diagnosis.
Applications of Artificial Intelligence & Associated Technologies, by dbpublications
This paper reviews the meaning of artificial intelligence and its various advantages and disadvantages including its applications. It also considers the current progress of this technology in the real world and discusses the applications of AI in the fields of heavy industries, gaming, aviation, weather forecasting, expert systems with the focus being on expert systems. The paper concludes by analyzing the future potential of Artificial Intelligence.
Define artificial intelligence.
Mention the four approaches to AI.
What capabilities would a computer need to possess in order to exhibit AI?
Mention the foundations of AI.
Mention the crude comparison of the raw computational resources available to the computer and the human brain.
Briefly explain the history of AI.
What are a rational action and an intelligent agent?
Search algorithms in artificial intelligence, by SlimAmiri
This document provides an overview of artificial intelligence (AI), including its history and applications. It defines AI as the simulation of human intelligence through computer programs. The document then discusses early work in AI from the 1950s through the 1970s, including the development of languages like LISP and expert systems. It also outlines current applications of AI such as autonomous vehicles, robotics, personalized assistants and more. Finally, it lists some common programming languages and agent platforms used in modern AI development.
This document discusses intelligent computing relating to cloud computing. It introduces applying artificial intelligence to cloud computing to develop self-managing computer systems. For example, developing software that regulates computer power consumption to reduce energy use. The document also discusses using affective computing and advanced intelligence to improve cloud computing efficiency by allowing applications to anticipate situations and make real-time decisions over the internet. Finally, it proposes that true cloud computing should be based on natural language understanding to allow access via lightweight devices like phones, not just traditional computers.
Application Of Artificial Intelligence In Electrical Engineering, by Amy Roman
This document summarizes the application of artificial intelligence in electrical engineering. It discusses how AI techniques like neural networks can help address problems that are difficult for humans to solve in fields involving high voltage power systems and electrical machine drives. The document provides an overview of artificial intelligence, including definitions, subfields, and challenges. It also describes different architectural approaches to AI like symbolic, sub-symbolic, and learning-based methods and how they aim to mimic human cognition and problem-solving abilities.
Artificial intelligence (AI) is the field of computer science focusing on creating intelligent machines. Researchers are developing systems that can understand speech, beat humans at chess, and perform other intelligent tasks. The term was first coined in 1956, and since then AI has made advances in areas like machine learning, natural language processing, and robotics. However, fully human-level AI remains an ongoing challenge. Researchers take different approaches to developing AI, such as attempting to replicate the human brain through neural networks, or developing intelligent programs through symbolic reasoning.
Artificial intelligence (AI) is the field of computer science focusing on creating intelligent machines. Researchers are developing systems that can understand speech, beat humans at chess, and perform other intelligent tasks. The term was first coined in 1956, and since then AI has made advances in areas like machine learning, natural language processing, and robotics. However, fully human-level AI remains an ongoing challenge. Researchers take different approaches, such as attempting to replicate the human brain through neural networks or developing intelligent programs through symbolic reasoning. AI is used today for applications like logistics, data mining, and medical diagnosis.
This document outlines the course content for an introduction to artificial intelligence class. It will cover topics such as the definition of AI, intelligent agents, logic and knowledge representation, machine learning algorithms like neural networks and genetic algorithms, and elements of natural language processing. The course will also discuss visions of AI like systems that think or act rationally or like humans. It provides historical context on the development of the field and successes in AI.
This document outlines the course for an Artificial Intelligence class. It introduces topics like intelligent agents, logic, knowledge representation, reasoning, machine learning, and natural language processing. It also discusses definitions of artificial and natural intelligence and different visions of AI like systems that think or act like humans versus those that think or act rationally. The history of AI is covered from early developments in neural networks and problem solving systems to more recent successes in games, robotics, and commercial applications.
DWX 2018 Session about Artificial Intelligence, Machine and Deep Learning, by Mykola Dobrochynskyy
Artificial intelligence, machine learning, and deep learning provide benefits but also risks that should be addressed ethically and responsibly. AI has progressed due to exponential data growth, large unstructured datasets, improved hardware, and falling error rates. Deep learning in particular has advanced areas like computer vision, speech recognition and games. While concerns exist around a potential artificial general intelligence, AI also enables applications in healthcare, transportation, science and more. Individuals and companies are encouraged to start experimenting with and adopting machine learning.
The document provides an introduction to artificial intelligence (AI), including its key concepts, scope, components, types, and applications. It defines AI as the science and engineering of creating intelligent machines, especially computer programs. The main types of AI discussed are narrow/weak AI, which can perform specific tasks, and general AI, which aims to create human-level intelligence. The document also outlines the core components of AI in areas like logic, cognition, and computation, and how these combine to form knowledge-based systems. Common applications of AI mentioned include gaming, natural language processing, and robotics.
Artificial intelligence (AI) is a broad field that combines computer science, psychology, and philosophy with the goal of creating machines that can think like humans. AI aims to develop intelligent agents that can perceive their environment and take actions to maximize their success. The main fields of AI include machine vision, expert systems, and creating machines that can think rationally or act like humans. The goals of AI include solving complex problems, enhancing human and computer interactions, and developing the theory and practice of building intelligent machines.
Rahman Ali gave a lecture on artificial intelligence (AI) at Quaid-e-Azam College of Commerce, University of Peshawar. The lecture defined intelligence and AI, discussed the differences between intelligent and conventional computing, and outlined the history and applications of AI. It also reviewed how other fields like philosophy, mathematics, and neuroscience contribute to AI's development.
The document provides an overview of artificial intelligence (AI), including definitions, a brief history, comparisons to the human brain, applications, and pros and cons. It discusses how AI aims to create intelligent machines that can learn, problem solve, and act rationally like humans. The document also summarizes key developments in AI research from the 1950s to present day and provides examples of how AI is used in areas like natural language processing, computer vision, robotics, and more.
This talk is about PLEA, the virtual being and the robot. It is about the vision of how PLEA is made and what her story is. She samples her environment to determine how a person feels, and then demonstrates affection back. She analyses and interprets different sources of social signals from those who interact with her in order to generate hypotheses. Then she produces non-verbal expressions using information visualization techniques. PLEA is a proof-of-concept, and she was presented at many festivals, including the British Science Festival and the Art & AI Festival in Leicester, UK. At the end of this talk, if we are lucky, PLEA will visit the audience from the screen.
Similar to From embodied Artificial Intelligence to Artificial Life (20)
The document introduces the Keldysh technique, which is used to calculate non-equilibrium Green's functions. It discusses equilibrium Green's functions, the Keldysh contour used to calculate non-equilibrium Green's functions, and the application of the Keldysh technique. Key aspects covered include Green's functions on the Keldysh contour, the Larkin representation, identities among Green's functions, equations of motion for Green's functions on the Keldysh contour, and Keldysh Green's functions. The goal is to provide an overview of the Keldysh technique.
Preparation and properties of polycrystalline YBa2Cu3O7-x and Fe mixtures, by Krzysztof Pomorski
The polycrystalline samples of the YBa2Cu3O7-x High-Temperature Superconductor (HTS), also called „YBCO-123”, were prepared by mixing copper(II) oxide (CuO), barium carbonate (BaCO3) and yttrium(III) oxide (Y2O3) powders, followed by heat treatment at high temperature (900 °C - 950 °C) in flowing oxygen. The polycrystalline samples of YBCO-Fe composites were prepared by grinding a mixture of single-phased YBCO-123 and a small amount of iron (1% and 3% wt.), followed by a heat treatment. The results of structural (SEM, EDS, Raman spectroscopy), magnetic (AC susceptibility and magnetization measurements) and magneto-transport measurements on the produced composites will be presented. Scanning electron microscopy (SEM) of the YBCO-Fe mixtures showed iron particles homogeneously placed on YBCO grain boundaries. As the concentration of iron particles increased, the critical temperature decreased. The magnetization measurements at liquid-nitrogen temperature revealed a transition from diamagnetic to paramagnetic behaviour of the YBCO-Fe samples, originating from the iron grains.
Justification of canonical quantization of Josephson effect in various physic..., by Krzysztof Pomorski
This document discusses the justification of canonical quantization of the Josephson effect and modifications due to large capacitance energy. It presents the commonly used canonical quantization approach and derives the Josephson junction Hamiltonian in second quantization. It also discusses corrections to the Cooper pair box model that arise when the capacitance energy is comparable to the superconducting gap.
This document provides an overview of a lecture on classical and quantum information theory. It discusses topics such as Maxwell's demon, the laws of thermodynamics, Shannon information theory, quantum measurement, the qubit model, and differences between classical and quantum information theory. The lecture aims to compare classical and quantum information concepts and highlight new properties that emerge from quantum mechanics.
A Field Induced Josephson Junction (FIJJ) is defined as the physical system made by placing a ferromagnetic strip directly or indirectly [with an insulator layer in-between] on top of a superconducting strip [3, 4, 7]. The analysis conducted in the extended Ginzburg-Landau, Bogoliubov-de Gennes and RCSJ [11] models essentially indicates that the system is in most cases a weak-link Josephson junction [2] and sometimes has features of a tunneling Josephson junction [1]. Generalization of Field Induced Josephson junctions leads to the case of a network of robust coupled field induced Josephson junctions [4] that interact inductively. Also, a scheme of superconducting Random Access Memory (RAM) for a Rapid Single Flux Quantum (RSFQ) [8, 9] computer is drawn [6, 10] using the concepts of the tunneling Josephson junction [1] and the Field Induced Josephson junction [3, 4].
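For orientation, the RCSJ description referenced above builds on the standard Josephson relations (textbook formulas, not specific to the FIJJ geometry):

```latex
% Current-phase and voltage-phase relations for a Josephson junction
I = I_c \sin\varphi, \qquad \frac{d\varphi}{dt} = \frac{2eV}{\hbar}

% RCSJ model: displacement current, quasiparticle current and
% supercurrent channels sum to the bias current
C \frac{\hbar}{2e} \frac{d^2\varphi}{dt^2}
  + \frac{\hbar}{2eR} \frac{d\varphi}{dt}
  + I_c \sin\varphi = I_{\text{bias}}
```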
The given presentation is also available by YouTube (https://www.youtube.com/watch?v=uIqXqiwDsSM).
Literature
[1]. B.D.Josephson, Possible new effects in superconductive tunnelling, Phys. Lett., Vol. 1, p. 251, 1962
[2]. K.Likharev, Superconducting weak links, Rev. Mod. Phys., Vol. 51, p. 101, 1979
[3]. K.Pomorski and P.Prokopow, Possible existence of field induced Josephson junctions, PSS B, Vol.249, No.9, 2012
[4]. K.Pomorski, PhD thesis: Physical description of unconventional Josephson junction, Jagiellonian University, 2015
[5]. K.Pomorski, H.Akaike, A.Fujimaki, Towards robust coupled field induced Josephson junctions, arXiv:1607.05013, 2016
[6]. K.Pomorski, H.Akaike, A.Fujimaki, Relaxation method in description of RAM memory cell in RSFQ computer, Proceedings of Applied Superconductivity Conference 2016 (in progress)
[7]. J.Gelhausen and M.Eschrig, Theory of a weak-link superconductor-ferromagnet Josephson structure, PRB, Vol.94, 2016
[8]. K.K. Likharev, Rapid Single Flux Quantum Logic (http://pavel.physics.sunysb.edu/RSFQ/Research/WhatIs/rsfqre2m.html)
[9]. Proceedings of Applied Superconductivity Conference 2016, plenary talk by N.Yoshikawa, Low-energy high-performance computing based on superconducting technology (http://ieeecsc.org/pages/plenary-series-applied-superconductivity-conference-2016-asc-2016#Plenary7)
[10]. A.Y.Herr and Q.P.Herr, Josephson magnetic random access memory system and method, International patent nr:8 270 209 B2, 2012
[11]. J.A.Blackburn, M.Cirillo, N.Gronbech-Jensen, A survey of classical and quantum interpretations of experiments on Josephson junctions at very low temperatures, arXiv:1602.05316v1, 2016
Basic ideas contributing to the development of the Artificial Life discipline are presented, so that anybody from a scientific or humanistic field can get an introduction to the field.
Digital Twins Computer Networking Paper Presentation.pptx, by aryanpankaj78
A Digital Twin in computer networking is a virtual representation of a physical network, used to simulate, analyze, and optimize network performance and reliability. It leverages real-time data to enhance network management, predict issues, and improve decision-making processes.
Height and depth gauge linear metrology.pdf, by q30122000
Height gauges may also be used to measure the height of an object by using the underside of the scriber as the datum. The datum may be permanently fixed, or the height gauge may have provision to adjust the scale. This is done by sliding the scale vertically along the body of the height gauge by turning a fine feed screw at the top of the gauge; then, with the scriber set to the same level as the base, the scale can be matched to it. This adjustment allows different scribers or probes to be used, as well as adjusting for any errors in a damaged or resharpened probe.
Software Engineering and Project Management - Software Testing + Agile Method..., by Prakhyath Rai
Software Testing: A Strategic Approach to Software Testing, Strategic Issues, Test Strategies for Conventional Software, Test Strategies for Object -Oriented Software, Validation Testing, System Testing, The Art of Debugging.
Agile Methodology: Before Agile – Waterfall, Agile Development.
Null Bangalore | Pentesters Approach to AWS IAM, by Divyanshu
# Abstract:
- Learn more about the real-world methods for auditing AWS IAM (Identity and Access Management) as a pentester. So let us proceed with a brief discussion of IAM as well as some typical misconfigurations and their potential exploits in order to reinforce the understanding of IAM security best practices.
- Gain actionable insights into AWS IAM policies and roles, using a hands-on approach.
# Prerequisites:
- Basic understanding of AWS services and architecture
- Familiarity with cloud security concepts
- Experience using the AWS Management Console or AWS CLI.
- For the hands-on lab, create an account on [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
# Scenario Covered:
- Basics of IAM in AWS
- Implementing IAM Policies with Least Privilege to Manage S3 Bucket
- Objective: Create an S3 bucket with least privilege IAM policy and validate access.
- Steps:
- Create S3 bucket.
- Attach least privilege policy to IAM user.
- Validate access.
- Exploiting IAM PassRole Misconfiguration
- Allows a user to pass a specific IAM role to an AWS service (EC2), typically used for service access delegation. Then exploit the PassRole misconfiguration to gain unauthorized access to sensitive resources.
- Objective: Demonstrate how a PassRole misconfiguration can grant unauthorized access.
- Steps:
- Allow user to pass IAM role to EC2.
- Exploit misconfiguration for unauthorized access.
- Access sensitive resources.
- Exploiting IAM AssumeRole Misconfiguration with Overly Permissive Role
- An overly permissive IAM role configuration can lead to privilege escalation: create a role with administrative privileges and allow a user to assume this role.
- Objective: Show how overly permissive IAM roles can lead to privilege escalation.
- Steps:
- Create role with administrative privileges.
- Allow user to assume the role.
- Perform administrative actions.
- Differentiation between PassRole and AssumeRole
Try at [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
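As a concrete illustration of the least-privilege S3 policy used in the scenario above, here is a minimal sketch. The bucket name and statement IDs are hypothetical; adapt them to your own lab setup before attaching the policy to an IAM user.

```python
import json

# Hypothetical bucket name for the lab; replace with your own.
BUCKET = "example-least-privilege-bucket"

# Minimal least-privilege policy: the user may only list this one
# bucket and read/write objects inside it, nothing else.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListOnlyThisBucket",
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            # Bucket-level action targets the bucket ARN itself.
            "Resource": f"arn:aws:s3:::{BUCKET}",
        },
        {
            "Sid": "ReadWriteObjectsOnly",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            # Object-level actions target objects under the bucket.
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        },
    ],
}

print(json.dumps(policy, indent=2))
```

Note the split between bucket-level (`s3:ListBucket` on the bucket ARN) and object-level (`s3:GetObject`/`s3:PutObject` on `bucket/*`) permissions; mixing these up is a common source of both broken access and over-broad grants.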
Applications of artificial Intelligence in Mechanical Engineering.pdf, by Atif Razi
Historically, mechanical engineering has relied heavily on human expertise and empirical methods to solve complex problems. With the introduction of computer-aided design (CAD) and finite element analysis (FEA), the field took its first steps towards digitization. These tools allowed engineers to simulate and analyze mechanical systems with greater accuracy and efficiency. However, the sheer volume of data generated by modern engineering systems and the increasing complexity of these systems have necessitated more advanced analytical tools, paving the way for AI.
AI offers the capability to process vast amounts of data, identify patterns, and make predictions with a level of speed and accuracy unattainable by traditional methods. This has profound implications for mechanical engineering, enabling more efficient design processes, predictive maintenance strategies, and optimized manufacturing operations. AI-driven tools can learn from historical data, adapt to new information, and continuously improve their performance, making them invaluable in tackling the multifaceted challenges of modern mechanical engineering.
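As a toy illustration of the predictive-maintenance idea mentioned above, the sketch below flags anomalous vibration readings with a simple rolling z-score. The window size, threshold, and sample data are all made up for illustration; production systems use far richer models.

```python
import statistics

def flag_anomalies(readings, window=5, z=3.0):
    """Flag indices where a reading deviates more than z standard
    deviations from the mean of the preceding `window` readings."""
    flags = []
    for i in range(window, len(readings)):
        hist = readings[i - window:i]
        mu = statistics.mean(hist)
        # Guard against a zero standard deviation on flat history.
        sd = statistics.pstdev(hist) or 1e-9
        if abs(readings[i] - mu) / sd > z:
            flags.append(i)
    return flags

# Hypothetical vibration trace with one spike at index 6.
vibration = [1.0, 1.1, 0.9, 1.0, 1.05, 1.0, 5.0, 1.0]
print(flag_anomalies(vibration))  # -> [6]
```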
Use PyCharm for remote debugging of WSL on a Windows machine, by shadow0702a
This document serves as a comprehensive step-by-step guide on how to effectively use PyCharm for remote debugging of the Windows Subsystem for Linux (WSL) on a local Windows machine. It meticulously outlines several critical steps in the process, starting with the crucial task of enabling permissions, followed by the installation and configuration of WSL.
The guide then proceeds to explain how to set up the SSH service within the WSL environment, an integral part of the process. Alongside this, it also provides detailed instructions on how to modify the inbound rules of the Windows firewall to facilitate the process, ensuring that there are no connectivity issues that could potentially hinder the debugging process.
The document further emphasizes the importance of checking the connection between the Windows and WSL environments, providing instructions on how to ensure that the connection is optimal and ready for remote debugging.
It also offers an in-depth guide on how to configure the WSL interpreter and files within the PyCharm environment. This is essential for ensuring that the debugging process is set up correctly and that the program can be run effectively within the WSL terminal.
Additionally, the document provides guidance on how to set up breakpoints for debugging, a fundamental aspect of the debugging process which allows the developer to stop the execution of their code at certain points and inspect their program at those stages.
Finally, the document concludes by providing a link to a reference blog. This blog offers additional information and guidance on configuring the remote Python interpreter in PyCharm, providing the reader with a well-rounded understanding of the process.
Accident detection system project report.pdf, by Kamal Acharya
The rapid growth of technology and infrastructure has made our lives easier. The advent of technology has also increased traffic hazards, and road accidents take place frequently, causing huge loss of life and property because of poor emergency facilities. Many lives could have been saved if emergency services could get accident information and reach the scene in time. Our project provides an optimum solution to this drawback. A piezoelectric sensor can be used as a crash or rollover detector of the vehicle during and after a crash. With signals from a piezoelectric sensor, a severe accident can be recognized. According to this project, when a vehicle meets with an accident or rolls over, the piezoelectric sensor immediately detects the signal. Then, with the help of a GSM module and a GPS module, the location is sent to the emergency contact. After confirming the location, the necessary action will be taken. If the person meets with a small accident, or if there is no serious threat to anyone's life, the alert message can be terminated by the driver via a provided switch, in order to avoid wasting the valuable time of the medical rescue team.
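The detection-and-cancel logic described above can be sketched as follows. The threshold value, coordinates, and the `send_alert` helper are hypothetical stand-ins; a real system would read the piezo sensor through an ADC and send SMS via the GSM module.

```python
# Hypothetical piezo voltage above which an impact counts as severe.
CRASH_THRESHOLD = 3.5

def send_alert(lat, lon):
    """Stand-in for the GSM/GPS alert; returns the message it would send."""
    return f"Accident detected at lat={lat}, lon={lon}. Send help."

def handle_reading(piezo_voltage, lat, lon, driver_cancelled=False):
    """Return an alert message for a severe impact unless the driver
    cancels it with the in-cabin switch; otherwise return None."""
    if piezo_voltage >= CRASH_THRESHOLD and not driver_cancelled:
        return send_alert(lat, lon)
    return None

print(handle_reading(4.2, 52.23, 21.01))        # severe impact: alert sent
print(handle_reading(4.2, 52.23, 21.01, True))  # driver cancelled: None
print(handle_reading(1.0, 52.23, 21.01))        # minor bump: None
```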
LLM Fine Tuning with QLoRA, Cassandra Lunch 4, presented by Anant
From embodied Artificial Intelligence to Artificial Life
1. Lecture II: Embodied Artificial Intelligence as first stage
towards Artificial Life
Krzysztof Pomorski1,2
1: Faculty of Electronics, Computer Science and Telecommunications
Department of Telecommunications
AGH
2: Faculty of Physics, University of Warsaw
kdvpomorski@gmail.com
March 13, 2018
Krzysztof Pomorski1,2
(AGH, UW) From AI to ALife March 13, 2018 1 / 43
2. Overview
1 Human intelligence
2 Humanistic definition of intelligence
3 Artificial Intelligence
4 Turing idea
5 Philosophy in approach towards robotic design
6 Braitenberg vehicles
7 Concept of emergence and embodiment
3. Human intelligence
Human intelligence is the intellectual prowess of humans, which is marked
by complex cognitive feats and high levels of motivation and
self-awareness.
Through their intelligence, humans possess the cognitive abilities to learn,
form concepts, understand, apply logic, and reason, including the
capacities to recognize patterns, comprehend ideas, plan, solve problems,
make decisions, retain information, and use language to communicate.
Intelligence enables humans to experience and think.
Note that the definition and characteristics of human intelligence are
interpreted differently by individuals depending on their culture.
4. Your own definitions of intelligence
Please give your own definition of AI.
5. Humanistic definition of intelligence
1 "The ability to use memory, knowledge, experience, understanding,
reasoning, imagination and judgement in order to solve problems and
adapt to new situations." - AllWords Dictionary, 2006
2 "The capacity to acquire and apply knowledge." - The American
Heritage Dictionary, fourth edition, 2000
3 "The ability to learn, understand and make judgments or have
opinions that are based on reason" - Cambridge Advanced Learner's
Dictionary, 2006
6. Types of intelligence:
Linguistic intelligence: People high in linguistic intelligence have an
affinity for words, both spoken and written.
Logical-mathematical intelligence: logical, mathematical, and
scientific ability. Howard Gardner argued that while Jean Piaget may
have thought he was studying all of intelligence, Piaget was really
focusing only on logical-mathematical intelligence.
Spatial intelligence: The ability to form a mental model of a spatial
world and to be able to maneuver and operate using that model.
Musical intelligence: those with musical intelligence have excellent
pitch, and may even have absolute pitch.
Bodily-kinesthetic intelligence: The ability to solve problems or to
fashion products using one’s whole body, or parts of the body. For
example, dancers, athletes, surgeons, craftspeople, etc.
7. Types of intelligence:
Interpersonal intelligence: The ability to see things from the
perspective of others, or to understand people in the sense of
empathy. Strong interpersonal intelligence would be an asset in those
who are teachers, politicians, clinicians, religious leaders, etc.
Intrapersonal intelligence: A correlative ability, turned inward. It is a
capacity to form an accurate, veridical model of oneself and to be
able to use that model to operate effectively in life.
8. Artificial Intelligence
Artificial intelligence (AI, also machine intelligence, MI) is intelligence
displayed by machines, in contrast with the natural intelligence (NI)
displayed by humans and other animals.
In computer science AI research is defined as the study of ”intelligent
agents”: any device that perceives its environment and takes actions that
maximize its chance of success at some goal.
Colloquially, the term ”artificial intelligence” is applied when a machine
mimics ”cognitive” functions that humans associate with other human
minds, such as ”learning” and ”problem solving”.
10. Machines with finite vs infinite memory
11. Idea of Turing machine
12. Basic idea of Turing machine
Above is a very simple representation of a Turing machine. It consists of
an infinitely long tape which acts like the memory in a typical computer,
or any other form of data storage. The squares on the tape are usually
blank at the start and can be written with symbols. In this case, the
machine can only process the symbols 0, 1 and " " (blank), and is thus
said to be a 3-symbol Turing machine.
13. Atomic operations - Turing machine
There are just six types of fundamental operation that a Turing machine
performs in the course of a computation. It can:
read (i.e. identify) the symbol currently under the head
write a symbol on the square currently under the head (after first
deleting the symbol already written there, if any)
move the tape left one square
move the tape right one square
change state
halt
A classical Turing machine can mimic any operation of a classical computer.
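The atomic operations above can be collected into a small simulator. The transition table below is a hypothetical example (a bit-flipping machine), not one from the lecture; it is only a sketch of how read, write, move, change-state and halt fit together.

```python
# Minimal sketch of a Turing machine interpreter. Rules map
# (state, symbol) -> (symbol to write, head move, next state).
from collections import defaultdict

def run_turing_machine(rules, tape, state="start", steps=100):
    """Simulate the six atomic operations: read, write, move L/R, change state, halt."""
    tape = defaultdict(lambda: " ", enumerate(tape))  # blank squares hold " "
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = tape[head]                       # read the symbol under the head
        write, move, state = rules[(state, symbol)]
        tape[head] = write                        # write (overwriting the old symbol)
        head += 1 if move == "R" else -1          # move one square left or right
    return "".join(tape[i] for i in sorted(tape))

# Hypothetical 3-symbol machine: flip every bit, halt on the first blank.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", " "): (" ", "R", "halt"),
}
print(run_turing_machine(rules, "0110"))  # prints "1001" plus a trailing blank
```

Each rule is one atomic step, which is why such a machine, given enough tape and rules, can mimic any classical computation.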
15. Figure: Software vs hardware ..., Rene Descartes
Mind-body dualism: the body as extended substance [Polish: ciało jako
substancja rozciągła] + a thinking substance [the brain]. The concept of
humanoid robots was shaped by this dualism before Rodney Brooks's work.
16. Figure: Strategy of robotic design was affected by Rene Descartes [as presented
by Pawel Mroczkowski, UAM 2014].
In the radical version of embodied AI there is no separation between
software and hardware. In practical technical design, however, such a
separation does exist.
17. Figure: Spinoza's monism: one substance as a combination of hardware and
software = the human body [by KP and Pawel Mroczkowski, UAM 2014]
The concept of programmable matter?
18. Figure: Example of programmable matter.
19. Programmable matter
Programmable matter is a term originally coined in 1991 by Toffoli and
Margolus to refer to an ensemble of fine-grained computing elements
arranged in space.
Their paper describes a computing substrate that is composed of
fine-grained compute nodes distributed throughout space which
communicate using only nearest neighbor interactions.
In this context, programmable matter refers to compute models similar to
cellular automata and lattice gas automata.
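A toy model of Toffoli and Margolus's substrate is a one-dimensional cellular automaton, where every cell updates from its own state and its two nearest neighbours. The sketch below uses elementary rule 110 purely as an illustration; the rule choice and layout are not from the lecture.

```python
# Fine-grained compute nodes with nearest-neighbour interactions:
# a 1-D cellular automaton (elementary rule 110 as an example).
def step(cells, rule=110):
    """One synchronous update; the rule number encodes the 8-entry lookup table."""
    n = len(cells)
    return [
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 20 + [1]  # a single live cell on a ring of 21 cells
for _ in range(5):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

Every cell runs the same local rule and communicates only with its neighbours, which is exactly the "computing substrate distributed throughout space" idea.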
20. Practical examples of AI
Technical examples of systems with AI:
1. Humanoid robots
2. Vacuum cleaners
3. Google tools ...
4. Autonomous cars ...
5. Cyberfish
6. Intelligent telecommunication systems ...
... many others
21. Problems to be tackled ...
1. Scheduling of buses in a city ...
2. Optimal planning of lectures at a university
3. The travelling salesman problem
4. Detection of sound
5. Detection of symbols in an image ...
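As an illustration of one problem from this list, here is a minimal nearest-neighbour heuristic for the travelling salesman problem. It is only an approximation, not an exact solver, and the example city coordinates are invented.

```python
# Greedy nearest-neighbour heuristic for the travelling salesman problem:
# always visit the closest unvisited city next. Fast, but not optimal.
import math

def nearest_neighbour_tour(cities):
    """Return a tour (list of city indices) starting from city 0."""
    unvisited = set(range(1, len(cities)))
    tour = [0]
    while unvisited:
        last = cities[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, cities[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

cities = [(0, 0), (5, 5), (1, 0), (1, 1)]
print(nearest_neighbour_tour(cities))  # -> [0, 2, 3, 1]
```

The exact problem is NP-hard, which is why such heuristics (and AI search methods generally) are used in practice.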
25. Role of evolution in shaping the morphology of agents
[after R. Pfeifer]
Why do plants have no brain?
32. Concept of Braitenberg vehicle
Figure: Braitenberg vehicle
A generalized cognitive particle: sensors + a motor system, with the
sensors generating a social force.
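A minimal sketch of such a vehicle, with all dynamics and constants invented for illustration: two light sensors drive two wheel motors, and crossed versus straight sensor-to-motor wiring makes the vehicle steer toward or away from a light source.

```python
# Toy Braitenberg vehicle: two light sensors feed two wheel motors.
import math

def vehicle_step(pos, heading, light, crossed=True, dt=0.1):
    """One step of a differential-drive vehicle reacting to a light source."""
    x, y = pos
    # Left/right sensors sit slightly off the heading direction.
    left = (x + math.cos(heading + 0.5), y + math.sin(heading + 0.5))
    right = (x + math.cos(heading - 0.5), y + math.sin(heading - 0.5))
    s_left = 1.0 / (1e-6 + math.dist(left, light))   # closer light -> stronger signal
    s_right = 1.0 / (1e-6 + math.dist(right, light))
    # Crossed wiring (left sensor -> right motor) turns the vehicle toward
    # the light; straight wiring turns it away.
    m_left, m_right = (s_right, s_left) if crossed else (s_left, s_right)
    heading += (m_right - m_left) * dt               # faster right wheel -> turn left
    speed = (m_left + m_right) / 2
    return (x + speed * math.cos(heading) * dt,
            y + speed * math.sin(heading) * dt), heading

pos, heading, light = (0.0, 0.0), 0.0, (2.0, 2.0)
for _ in range(50):
    pos, heading = vehicle_step(pos, heading, light)
print(pos)  # with crossed wiring the vehicle has curved toward the light
```

Flipping `crossed` to `False` reverses the wiring, so the same two-sensor body produces the opposite behaviour.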
33. Complex behaviour of Braitenberg vehicles
Its behaviour can be interpreted as liking or disliking a source of light.
www.youtube.com/watch?v=86ohmWIze4Y&feature=youtu.be
34. Concept of social force
Newton's third law is not valid here! Nevertheless, many concepts known
from the gravitational N-body problem can be applied.
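The asymmetry can be illustrated with a toy N-body-style update in which each agent's force law depends only on that agent's own "attraction" parameter, so action does not equal reaction. All names and parameter values below are invented for illustration.

```python
# Toy "social force" model: the force agent i feels toward agent j depends
# on i's own attraction parameter, so it need not be equal and opposite to
# the force j feels toward i.
import math

def social_force(me, other, attraction):
    """Unit vector from `me` toward `other`, scaled by `me`'s attraction."""
    dx, dy = other[0] - me[0], other[1] - me[1]
    d = math.hypot(dx, dy) + 1e-9
    return attraction * dx / d, attraction * dy / d

def step(agents, attractions, dt=0.1):
    """N-body-style position update, structurally like a gravity simulation."""
    new = []
    for i, p in enumerate(agents):
        fx = fy = 0.0
        for j, q in enumerate(agents):
            if i != j:
                ax, ay = social_force(p, q, attractions[i])
                fx += ax
                fy += ay
        new.append((p[0] + fx * dt, p[1] + fy * dt))
    return new

# Agent 0 is attracted to agent 1, while agent 1 is repelled by agent 0:
# both forces point the same way, violating action = reaction.
positions = step([(0.0, 0.0), (1.0, 0.0)], attractions=[1.0, -1.0])
print(positions)
```

The update loop is the familiar N-body pattern; only the pairwise force law breaks the gravitational symmetry.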
35. [from "Adaptability and Diversity in Simulated Turn-taking Behaviour" by
Hiroyuki Iizuka and Takashi Ikegami, 2003]
36. Feedback between a single agent and the environment
[from "Adaptability and Diversity in Simulated Turn-taking Behaviour" by
Hiroyuki Iizuka and Takashi Ikegami, 2003]
37. Emergence of complex patterns
Emergence of complex patterns in Braitenberg vehicles during artificial evolution [from "Adaptability and Diversity in Simulated
Turn-taking Behaviour" by Hiroyuki Iizuka and Takashi Ikegami, 2003].
38. Principle of embodied AI (Science 2007, R. Pfeifer et al.)
A broader view is given at tube.switch.ch/videos/9d73741c .
39. Network of connections as in marine traffic
Graph theory, network science, econophysics, telecommunications ...
As the number of agents tends to infinity, we obtain probability
distributions ...
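One concrete way to see the large-N remark: in a large random traffic-like network, node degrees concentrate around a probability distribution (approximately Poisson for an Erdős-Rényi graph). The sketch below is illustrative and not from the lecture; graph size and edge probability are invented.

```python
# Degrees of a random Erdos-Renyi graph G(n, p): as n grows, the degree
# histogram approaches a Poisson distribution with mean p * (n - 1).
import random
from collections import Counter

def random_network_degrees(n, p, seed=0):
    """Build G(n, p) edge by edge and return the degree of each node."""
    rng = random.Random(seed)
    deg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:   # each possible edge appears with probability p
                deg[i] += 1
                deg[j] += 1
    return deg

deg = random_network_degrees(n=500, p=0.01)
print(sum(deg) / len(deg))          # mean degree, close to p * (n - 1) ~ 5
print(Counter(deg).most_common(3))  # the most frequent degrees cluster near the mean
```

Real traffic networks are not Erdős-Rényi (their degree distributions are typically heavier-tailed), but the limit-to-a-distribution idea is the same.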
40. Network of connections as by planes in traffic
Figure: Network of air traffic at a given instant of time
41. Network of connections - GEO satellites
43. References
Rolf Pfeifer, Josh Bongard, How the Body Shapes the Way We Think: A New View of Intelligence
Definition of natural intelligence, arXiv:0706.3639v1, 2007
http://www.alanturing.net
Nils J. Nilsson, The quest for artificial intelligence: a history of ideas and achievements, Cambridge University Press,
Stanford University
http://www.shanghailectures.org
John W. Romanishin et al., 3D M-Blocks: Self-reconfiguring Robots Capable of Locomotion via Pivoting in Three
Dimensions, 2015 IEEE International Conference on Robotics and Automation (ICRA), 2015
Hiroyuki Iizuka, Takashi Ikegami, Adaptability and Diversity in Simulated Turn-taking Behaviour, 2003,
https://arxiv.org/pdf/nlin/0310041.pdf