Define artificial intelligence.
Mention the four approaches to AI.
What capabilities must a computer have in order to exhibit AI?
Mention the foundations of AI.
Mention the crude comparison of the raw computational resources available to computers and the human brain.
Briefly explain the history of AI.
What are rational action and an intelligent agent?
Artificial intelligence (AI) is the field of computer science that develops machines or software with human-like intelligence. AI can perform tasks like humans, or even better than humans, in activities like speech recognition, decision making, and translation. There are two main categories of AI: narrow AI, which is dedicated to a specific task, and strong/general AI, which does not currently exist but is being researched to allow machines to think like humans through their own intelligence and self-awareness. AI has many applications across industries like healthcare, transportation, education, and more. The evolution of AI began in the 1940s; important milestones include the invention of the Turing test in 1950 and the development of machine learning in the 1950s.
This document provides an overview of artificial intelligence, including:
- A brief history noting the term was coined in 1956.
- Comparisons between human and computer intelligence in terms of speed/memory versus understanding of intellectual mechanisms.
- Categories of AI including narrow/weak AI, general/strong AI, and super intelligence.
- Applications like expert systems, natural language processing, speech recognition, computer vision, robotics, and automatic programming.
- Both positive and negative potential impacts are imagined, such as robots assisting with tasks but also potentially being programmed with antisocial intentions.
This document provides an overview of artificial intelligence (AI), including its history, applications, advantages, and disadvantages. It discusses early milestones in AI like the Turing Test (1950) and Logic Theorist (1956). Applications mentioned include agriculture, astronomy, gaming, robotics, and more. Advantages include high speed and reliability in risky situations. Disadvantages include high costs, inability to think outside programmed tasks, and lack of emotions. The conclusion states AI could solve many problems and unlock a future where computers make more informed decisions based on understanding our world through data.
This document presents an overview of machine learning. It defines machine learning as a field that allows computers to learn without being explicitly programmed, and discusses how machine learning enables computers to automatically analyze large datasets to make predictions. The document then summarizes different types of machine learning techniques including supervised learning, unsupervised learning, reinforcement learning, and more. It provides examples of applications of machine learning like face recognition, speech recognition, and self-driving cars. In conclusion, it states that machine learning is already used across many industries and can improve lives in numerous ways.
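As a concrete (hypothetical) illustration of learning without explicit programming, here is a minimal supervised-learning sketch: a 1-nearest-neighbour classifier that predicts a label by finding the closest stored example. The dataset and labels are invented for this example.

```python
# Hypothetical illustration: a 1-nearest-neighbour classifier, one of the
# simplest supervised-learning methods. No classification rules are hand-coded;
# the model "learns" by storing labelled examples and predicting the label
# of the closest stored point.
import math

def nearest_neighbour(train, query):
    """train: list of ((x, y), label) pairs; query: an (x, y) point."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    # Pick the training example closest to the query and return its label.
    _, label = min(train, key=lambda ex: dist(ex[0], query))
    return label

# Toy dataset: points near the origin are "small", far ones are "large".
examples = [((0, 1), "small"), ((1, 0), "small"),
            ((8, 9), "large"), ((9, 8), "large")]

print(nearest_neighbour(examples, (1, 1)))   # small
print(nearest_neighbour(examples, (7, 8)))   # large
```

Changing the stored examples changes the predictions, with no change to the code, which is the sense in which the program learns from data rather than being explicitly programmed.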
The document discusses various present and future applications of artificial intelligence including helping the aging population through robots, using rescue robots during disasters, developing speech recognition and reading tutorials, creating robots that can learn and adapt like humans, developing telepresence robots for communication, developing automated therapists and conversational search engines, and considerations around whether AI poses a threat to humanity.
This presentation provides an overview of artificial intelligence (AI), including its definition, introduction, foundations, advantages, applications, and limitations. AI is defined as the intelligence demonstrated by machines and the branch of computer science which aims to create intelligent agents. The presentation traces the foundations of AI through various fields such as philosophy, mathematics, neuroscience, and computer engineering. It also outlines the advantages of AI, such as reducing errors and exploring new possibilities, and the potential disadvantages like overreliance on AI and job losses. The presentation concludes that while AI tools can help solve problems, they cannot replace human capabilities.
The document discusses various applications of artificial intelligence including in web technologies, medicine, transportation, heavy industry, and more. It provides definitions of AI and the Turing test. It also outlines several computer science applications of AI such as natural language processing, computer vision, knowledge representation, and data mining.
Artificial Intelligence (A.I.) || Introduction of A.I. || HELPFUL FOR STUDENTS - Shivangi Singh
A PowerPoint presentation on Artificial Intelligence, helpful for students and anyone who wants to learn about A.I. Useful for college, school, or university presentations on Artificial Intelligence. Official personnel can also use it for their work.
This PowerPoint presentation is completely made by me.
If anyone wants this PPT, please email: devashreeapplications@gmail.com
Or you can DM me on my Instagram handle: @theshivangirajpoot (SHERNI)
Thank you for your interest :) :)
Title: Incredible developments in artificial intelligence, the future scenario.
Here I discuss the major backbones of AI (machine learning, neural networks), the types of machine learning and of artificial intelligence, some real-time examples of AI and ML, and the benefits and future of AI, along with some pros and cons of artificial intelligence.
Artificial Intelligence (A.I.) and Its Application - Seminar - BIJAY NAYAK
This presentation includes the basics of Artificial Intelligence and its applications in various fields. Feel free to ask anything. Editors are always welcome.
This presentation covers
- Introduction to Artificial Intelligence
- Philosophy of A.I
- Real-life Examples
- Major Applications Of A.I
- A.I.: Need of the Hour
- Drawbacks
- Vigyanam, RKGIT, Ghaziabad
The document discusses artificial intelligence and defines it as the science and engineering of making intelligent machines, especially intelligent computer programs. It notes two main approaches to AI: engineering and cognitive modeling. Intelligence is defined as the ability to learn and solve problems, specifically the ability to solve novel problems, act rationally, and act like humans. The document also discusses various applications and techniques in AI, including search algorithms, expert systems, fuzzy logic, robotics, and genetic algorithms.
This document provides an introduction to artificial intelligence (AI) including definitions, goals, branches, and applications. It defines AI as computers with the ability to mimic human intelligence through learning from experience and handling complex problems. The main goals of AI are to better understand human intelligence by writing programs that emulate it and to create useful programs to do tasks normally requiring human experts. Branches of AI discussed include vision systems, learning systems, robotics, expert systems, and neural networks. The document also outlines some present and future aspects of AI as well as ethics and risks.
Human intelligence comprises the intellectual powers of humans:
- Learning
- Decision making
- Problem solving
- Feelings (love, happiness, anger)
- Understanding
- Applying logic
- Experience

Artificial intelligence means making a computer, a computer-controlled robot, or software think intelligently, in a manner similar to the way intelligent humans think.
Robots are autonomous or semi-autonomous machines meaning that they can act independently of external commands. Artificial intelligence is software that learns and self-improves.
Why Artificial Intelligence?
• Computers can perform computations by fixed, programmed rules.
• A.I. machines perform tedious tasks efficiently and reliably.
• Conventional computers cannot understand and adapt to new situations.
• A.I. aims to enable machines to perform such complex tasks.
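The contrast in the bullets above can be sketched in code: a fixed, programmed rule never changes, while a "learned" rule is derived from examples and adapts when the data changes. The functions, data, and threshold below are invented purely for illustration.

```python
# Hypothetical sketch: a fixed, programmed rule versus a parameter
# learned from labelled examples.

# Fixed rule: flag any message longer than a hard-coded limit.
def fixed_rule(length):
    return length > 100          # the "100" never changes, whatever the data

# Learned rule: pick the threshold from labelled examples instead.
def learn_threshold(examples):
    """examples: list of (length, is_flagged) pairs."""
    flagged = [n for n, f in examples if f]
    normal  = [n for n, f in examples if not f]
    # Place the threshold midway between the two groups.
    return (max(normal) + min(flagged)) / 2

data = [(20, False), (35, False), (180, True), (240, True)]
threshold = learn_threshold(data)   # adapts if the data changes
print(threshold)                    # 107.5
print(150 > threshold)              # True: 150 is flagged
```

Feeding different example data into `learn_threshold` changes its behaviour without any reprogramming, which is the adaptation that fixed-rule computation lacks.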
Advantages of A.I:
Error Reduction
Difficult Exploration (mining and exploration processes)
Daily Applications (Siri, Cortana)
Digital Assistants (interact with users)
Medical Applications (radiosurgery)
Repetitive Jobs (monotonous)
No Breaks
Some disadvantages of A.I:
High Cost
Unemployment
Weaponization
No Replicating Humans
No Original Creativity
No Improvement with Experience
Safety/Privacy Issues
Artificial intelligence will be the greatest invention as long as machines remain under human control. Otherwise, a new era will begin!
Artificial intelligence can be defined as the branch of computer science concerned with automating computer systems so that they behave intelligently, much as humans do.
Artificial intelligence focuses on developing computer programs to solve complex problems and processes.
What humans can do can now be performed by machines too, thanks to artificial intelligence.
AI is used because it saves a great deal of time and manpower.
Introduction to Artificial Intelligence | AI using Deep Learning | Edureka!
The document discusses artificial intelligence and deep learning. It begins with defining AI and its applications, and then discusses machine learning as a subset of AI. Deep learning is presented as a solution to the limitations of machine learning for complex tasks like image recognition. Deep learning uses neural networks with multiple layers to learn representations of data with little human guidance. Examples of deep learning applications discussed include machine translation, image classification, and Google Lens.
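The layered structure described above can be sketched as follows. This is a minimal, assumed illustration: the weights are fixed by hand just to show how each layer transforms the previous layer's output, whereas a real deep-learning system would learn them from data.

```python
# Minimal sketch (invented weights) of a network with more than one layer.
import math

def sigmoid(x):
    # Squash any real number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # One neuron per row of `weights`: weighted sum of inputs, then sigmoid.
    return [sigmoid(sum(w * i for w, i in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def forward(x):
    # Each layer consumes the previous layer's output: this stacking is
    # what "multiple layers" means in the description above.
    hidden = layer(x, [[1.0, -1.0], [0.5, 0.5]], [0.0, -0.5])  # layer 1
    return layer(hidden, [[2.0, -2.0]], [0.0])                  # layer 2

print(forward([1.0, 0.0]))   # a single output in (0, 1)
```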
Artificial intelligence (AI) is the ability of machines to mimic human intelligence through tasks like thinking, reasoning, and learning. The document discusses the history and current applications of AI as well as its future potential. Current applications of AI include finance/banking, healthcare, transportation, and consumer devices. Advancements in natural language processing, computer vision, machine learning, and robotics may allow AI to achieve human-level abilities and be applied to problems like risk management and disaster response. However, as AI becomes more advanced, it also raises social and ethical implications around robot rights, personhood, and what constitutes a moral agent.
This presentation discusses various applications of artificial intelligence technologies including neural networks, fuzzy logic, agents, genetic algorithms, natural language processing, and knowledge-based systems. It provides examples of how each technology has been applied in areas like predicting events, diagnosing cancer, automating decisions, and translating languages. The presentation concludes that while AI is still limited, it has matured into an effective tool that allows for new approaches to problem solving in fields like engineering.
Artificial intelligence is the science and engineering of making intelligent machines, especially intelligent computer programs. There are four main schools of thought in AI: thinking humanly, thinking rationally, acting humanly, and acting rationally. Popular techniques used in AI include machine learning, deep learning, and natural language processing. The document then discusses the growth of AI and its applications in various domains like healthcare, law, education, and more. It also lists the top companies leading the development of AI like DeepMind, Google, Facebook, Microsoft, and others. Finally, it provides perspectives on the future impact and adoption of AI.
Artificial intelligence (AI) techniques can help alleviate issues in software engineering by managing knowledge more effectively. AI is applied in software engineering through approaches like expert systems, neural networks, and risk management. Current applications of AI include financial analysis, weather forecasting, robotics, speech recognition, and game playing. However, fully achieving human-level ability in areas like natural language understanding, computer vision, and building expert systems remains challenging.
This document is a presentation on artificial intelligence. It begins with a definition of AI and discusses its foundations. It then covers information and applications of AI, its growth, top AI countries including the US, India, and China, and the robot Sophia. The presentation also outlines advantages such as error reduction and difficult exploration, as well as disadvantages including high costs and lack of improvement with experience. It concludes with a bibliography of sources.
The document discusses artificial intelligence and how it works. It defines intelligence and AI, explaining that AI aims to make computers as intelligent as humans. It describes how AI uses artificial neurons and networks to function similarly to the human brain. Examples of AI applications are given, like expert systems used in various domains. The document also compares human and artificial intelligence, noting their differing strengths and weaknesses.
Artificial Intelligence: its meaning, uses, past, and future.
Artificial intelligence is intelligence demonstrated by machines, as opposed to the natural intelligence displayed by animals, including humans.
Artificial intelligence (AI) is a branch of computer science dealing with intelligent behavior in machines. It has a long history dating back to 1943, with early milestones like Samuel's checker program in the 1950s. AI aims to create human-like intelligence through techniques like perception, reasoning, and learning. While computers have advantages in speed and memory, they still lack human-level understanding. AI has many applications including expert systems, natural language processing, computer vision, and robotics. Popular programming languages for developing AI include Lisp, Python, Prolog, Java, and C++. The future of AI is uncertain but most believe it will continue advancing to handle more complex problems.
This document provides an overview of artificial intelligence (AI), including definitions, a brief history, methods, applications, achievements, and the future of AI. It defines AI as the science and engineering of making intelligent machines, especially intelligent computer programs. The document outlines two categories of AI methods - symbolic AI and computational intelligence - and discusses applications of AI in domains like finance, medicine, gaming, and robotics. It also notes some achievements of AI and predicts that AI will continue growing exponentially and potentially change the world.
This document provides an overview of the history and development of artificial intelligence (AI). It discusses early pioneers like Alan Turing and his proposal of the Turing Test. Key developments include the first AI programs for games in the 1950s, the Dartmouth Conference in 1956 which defined the field, and John McCarthy's creation of the Lisp programming language. The document outlines a variety of applications of AI throughout its history from gaming to robotics to military uses. It concludes by discussing predictions for the future role of AI and its potential to solve major problems and change the world.
Artificial intelligence (AI) is the ability of digital computers or robots to perform tasks commonly associated with intelligent beings. The idea of AI has its origins in ancient Greece but the field began in the 1950s. Today, AI is used in applications like IBM's Watson, driverless cars, automated assembly lines, surgical robots, and traffic control systems. The future of AI depends on whether researchers can achieve human-level or superhuman intelligence through techniques like whole brain emulation. Critics argue key challenges remain in replicating general human intelligence and consciousness with technology.
This was our PPT, created by us for the first time to participate in a paper presentation conducted at our college. I know there will be some mistakes, and we apologize for all of them. We think this was one of the most colorful PPTs on artificial intelligence, as the judges in the hall said :P lol. Thanks to all of you guys; support us in our development, thanks :)
Artificial intelligence (AI) is defined as making computers do intelligent tasks like humans. It works using artificial neurons in neural networks and scientific theorems. Neural networks are composed of interconnected artificial neurons that mimic biological neurons. The Turing test tests a machine's ability to demonstrate intelligence through conversation. Machine learning allows AI to learn in three ways: from failures, being told, and exploration. Expert systems apply human expertise to problem solving. While AI can process large data quickly, it lacks common sense, intuition, and critical thinking that humans have. Overall, AI is an attempt to build models of human intelligence.
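The idea of an artificial neuron that "learns from failures" can be sketched with the classic perceptron rule, which nudges the weights only when the neuron's answer is wrong. The training data and parameters below are invented for illustration.

```python
# Sketch of one artificial neuron trained with the perceptron rule:
# weights change only when the neuron makes a mistake ("learning from failures").

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # non-zero only on a failure
            w[0] += lr * err * x1       # adjust weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Teach the neuron logical AND from examples rather than programming it.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)

predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in and_data])   # [0, 0, 0, 1]
```

After training, the neuron reproduces AND without AND ever being coded explicitly; its behaviour came from correcting its own errors on the examples.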
Artificial Intelligence (A.I.) || Introduction of A.I. || HELPFUL FOR STUDENT...Shivangi Singh
Powerpoint Presentation on Artificial Intelligence which is helpful for students and anyone who want to gain information on A.I. . Helpful in college / school / university presentation on Artificial Student. Officials Personnel also use this for their use.
This Power Point Presentation is completely made by me.
If anyone want this ppt please email at : devashreeapplications@gmail.com
Or you can DM me on my Instagram Handle==> ID:: @theshivangirajpoot(SHERNI)
Thankyou for your interest:):)
Title: Incredible developments in Artificial intelligence which was the future scenario.
Here I discussed the with the major backbones of AI (Machine learning, Neural networks) types Machine learning and type of Artificial intelligence and with some real-time examples of AI and ML & Benefits and Future of AI with some pros and Cons of Artificial Intelligence.
Artificial Intelligence (A.I) and Its Application -SeminarBIJAY NAYAK
this presentation includes the the Basics of Artificial Intelligence and its applications in various Field. feel free to ask anything. Editors are always welcome.
This presentation covers
- Introduction to Artificial Intelligence
- Philosophy of A.I
- Real-life Examples
- Major Applications Of A.I
- A.I.-: Need Of The Hour
- Drawbacks.
-Vigyanam RKGIT, Ghaziabad
The document discusses artificial intelligence and defines it as the science and engineering of making intelligent machines, especially intelligent computer programs. It notes two main approaches to AI: engineering and cognitive modeling. Intelligence is defined as the ability to learn and solve problems, specifically the ability to solve novel problems, act rationally, and act like humans. The document also discusses various applications and techniques in AI, including search algorithms, expert systems, fuzzy logic, robotics, and genetic algorithms.
This document provides an introduction to artificial intelligence (AI) including definitions, goals, branches, and applications. It defines AI as computers with the ability to mimic human intelligence through learning from experience and handling complex problems. The main goals of AI are to better understand human intelligence by writing programs that emulate it and to create useful programs to do tasks normally requiring human experts. Branches of AI discussed include vision systems, learning systems, robotics, expert systems, and neural networks. The document also outlines some present and future aspects of AI as well as ethics and risks.
Human intelligence is the intellectual powers of humans, Learning
Decision Making
Solve Problems
Feelings(Love,Happy,Angry)
Understand
Apply logic
Experience
making a computer, a computer-controlled robot, or a software think intelligently, in the similar manner the intelligent humans think.
Robots are autonomous or semi-autonomous machines meaning that they can act independently of external commands. Artificial intelligence is software that learns and self-improves.
Why Artificial Intelligence?
• Computers can do computations, by fixed programmed rules
• A.I machines perform tedious tasks efficiently & reliably.
• computers can’t understanding & adapting to new situations.
• A.I aims to improve machine to do such complex tasks.
Advantages of A.I:
Error Reduction
Difficult Exploration(mining & exploration processes)
Daily Application(Siri, Cortana)
Digital Assistants(interact with users)
Medical Applications(Radiosurgery)
Repetitive Jobs(monotonous)
No Breaks
Some disadvantages of A.I:
High Cost
Unemployment
Weaponization
No Replicating Humans
No Original Creativity
No Improvement with Experience
Safety/Privacy Issues
Artificial intelligence will be a Greatest invention Until Machines under the human control. Otherwise The new ERA will be There…..!
Artificial intelligence can be defined as the branch of computer science that is concerned with automation of computer system in an intelligent manner as like as humans.
Artificial Intelligence focuses on developing computer programs to solve complex problems and process.
What humans can do, now can be performed by machines too just, because of artificial intelligence.
AI is used because, it saves a whole lot of time and manpower.
Introduction to Artificial Intelligence | AI using Deep Learning | EdurekaEdureka!
The document discusses artificial intelligence and deep learning. It begins with defining AI and its applications, and then discusses machine learning as a subset of AI. Deep learning is presented as a solution to the limitations of machine learning for complex tasks like image recognition. Deep learning uses neural networks with multiple layers to learn representations of data with little human guidance. Examples of deep learning applications discussed include machine translation, image classification, and Google Lens.
Artificial intelligence (AI) is the ability of machines to mimic human intelligence through tasks like thinking, reasoning, and learning. The document discusses the history and current applications of AI as well as its future potential. Current applications of AI include finance/banking, healthcare, transportation, and consumer devices. Advancements in natural language processing, computer vision, machine learning, and robotics may allow AI to achieve human-level abilities and be applied to problems like risk management and disaster response. However, as AI becomes more advanced, it also raises social and ethical implications around robot rights, personhood, and what constitutes a moral agent.
This presentation discusses various applications of artificial intelligence technologies including neural networks, fuzzy logic, agents, genetic algorithms, natural language processing, and knowledge-based systems. It provides examples of how each technology has been applied in areas like predicting events, diagnosing cancer, automating decisions, and translating languages. The presentation concludes that while AI is still limited, it has matured into an effective tool that allows for new approaches to problem solving in fields like engineering.
Artificial intelligence is the science and engineering of making intelligent machines, especially intelligent computer programs. There are four main schools of thought in AI: thinking humanly, thinking rationally, acting humanly, and acting rationally. Popular techniques used in AI include machine learning, deep learning, and natural language processing. The document then discusses the growth of AI and its applications in various domains like healthcare, law, education, and more. It also lists the top companies leading the development of AI like DeepMind, Google, Facebook, Microsoft, and others. Finally, it provides perspectives on the future impact and adoption of AI.
Artificial intelligence (AI) techniques can help alleviate issues in software engineering by managing knowledge more effectively. AI is applied in software engineering through approaches like expert systems, neural networks, and risk management. Current applications of AI include financial analysis, weather forecasting, robotics, speech recognition, and game playing. However, fully achieving human-level ability in areas like natural language understanding, computer vision, and building expert systems remains challenging.
This document is a presentation on artificial intelligence. It begins with a definition of AI and discusses its foundations. It then covers information and applications of AI, its growth, top AI countries including the US, India, and China, and the robot Sophia. The presentation also outlines advantages such as error reduction and difficult exploration, as well as disadvantages including high costs and lack of improvement with experience. It concludes with a bibliography of sources.
The document discusses artificial intelligence and how it works. It defines intelligence and AI, explaining that AI aims to make computers as intelligent as humans. It describes how AI uses artificial neurons and networks to function similarly to the human brain. Examples of AI applications are given, like expert systems used in various domains. The document also compares human and artificial intelligence, noting their differing strengths and weaknesses.
Artificial Intelligence - It's meaning, uses, past and future.
Artificial intelligence is intelligence demonstrated by machines, as opposed to the natural intelligence displayed by animals including humans
Artificial intelligence (AI) is a branch of computer science dealing with intelligent behavior in machines. It has a long history dating back to 1943, with early milestones like Samuel's checker program in the 1950s. AI aims to create human-like intelligence through techniques like perception, reasoning, and learning. While computers have advantages in speed and memory, they still lack human-level understanding. AI has many applications including expert systems, natural language processing, computer vision, and robotics. Popular programming languages for developing AI include Lisp, Python, Prolog, Java, and C++. The future of AI is uncertain but most believe it will continue advancing to handle more complex problems.
This document provides an overview of artificial intelligence (AI), including definitions, a brief history, methods, applications, achievements, and the future of AI. It defines AI as the science and engineering of making intelligent machines, especially intelligent computer programs. The document outlines two categories of AI methods - symbolic AI and computational intelligence - and discusses applications of AI in domains like finance, medicine, gaming, and robotics. It also notes some achievements of AI and predicts that AI will continue growing exponentially and potentially change the world.
This document provides an overview of the history and development of artificial intelligence (AI). It discusses early pioneers like Alan Turing and his proposal of the Turing Test. Key developments include the first AI programs for games in the 1950s, the Dartmouth Conference in 1956 which defined the field, and John McCarthy's creation of the Lisp programming language. The document outlines a variety of applications of AI throughout its history from gaming to robotics to military uses. It concludes by discussing predictions for the future role of AI and its potential to solve major problems and change the world.
Artificial intelligence (AI) is the ability of digital computers or robots to perform tasks commonly associated with intelligent beings. The idea of AI has its origins in ancient Greece but the field began in the 1950s. Today, AI is used in applications like IBM's Watson, driverless cars, automated assembly lines, surgical robots, and traffic control systems. The future of AI depends on whether researchers can achieve human-level or superhuman intelligence through techniques like whole brain emulation. Critics argue key challenges remain in replicating general human intelligence and consciousness with technology.
This was our first PPT, created to take part in a paper presentation held at our college. We know there will be some mistakes, and we apologize for them. The judges in the hall said it was one of the most colorful PPTs on artificial intelligence. Thanks to everyone for your support.
Artificial intelligence (AI) is defined as making computers do intelligent tasks like humans. It works using artificial neurons in neural networks and scientific theorems. Neural networks are composed of interconnected artificial neurons that mimic biological neurons. The Turing test tests a machine's ability to demonstrate intelligence through conversation. Machine learning allows AI to learn in three ways: from failures, being told, and exploration. Expert systems apply human expertise to problem solving. While AI can process large data quickly, it lacks common sense, intuition, and critical thinking that humans have. Overall, AI is an attempt to build models of human intelligence.
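As a concrete illustration of the artificial-neuron idea described above, here is a minimal, hedged sketch (not code from any of the presentations): a single neuron computes a weighted sum of its inputs and squashes it with an activation function. The input and weight values below are arbitrary examples.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of inputs plus bias,
    passed through a sigmoid activation that squashes output to (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Two inputs with hand-picked weights (illustrative only).
out = neuron([1.0, 0.5], [0.4, -0.2], bias=0.1)
print(round(out, 3))  # → 0.599
```

A neural network in the sense described here is just many such units, with one layer's outputs feeding the next layer's inputs.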
Artificial intelligence is a way of making a computer, a computer-controlled robot, or software think intelligently, in a manner similar to how intelligent humans think.
This document summarizes an IT seminar about artificial intelligence. It defines intelligence and AI, discussing early pioneers like Alan Turing. It provides examples of modern AI applications, including facial and speech recognition, learning, planning, and problem solving. Bees' ability to recognize faces from different angles is discussed, as well as conversational bots like Buddhabot. Research into building cognitive computers that mimic the brain is also summarized. The document concludes with discussing limitations of AI and potential future applications.
Artificial intelligence aims to program computers with human-like capabilities such as learning, reasoning, and self-correction. Researchers study how to simulate creativity and intuition computationally. The problem of simulating general intelligence has been broken down into specific traits or capabilities, though systems still have difficulty displaying them reliably. Early work in the 1940s explored connections between neurology, information theory, and cybernetics to build machines exhibiting basic intelligence through electronic networks. Honda's humanoid robot ASIMO leads in mobility, achieving human-like movement through decades of research toward artificial minds and humanoid robots. Machine learning, including supervised and unsupervised learning, has been central to AI research from the beginning and is used for classification and pattern recognition.
This document provides an overview of artificial intelligence (AI), including its history, key concepts, tools, applications, future potential, and limitations. It discusses how AI aims to recreate human intelligence using computers and draws on fields like computer science, mathematics, psychology and more. The document also summarizes the development of AI technologies over time, from early work in the 1940s to modern applications in areas like computer vision, robotics and question answering systems. Both opportunities and challenges of advancing AI are considered, such as how superintelligent machines could potentially help or harm humanity.
National Artificial Intelligence Research and Development Strategic Plan, by Daniel Dufourt.
"This AI R&D Strategic Plan defines a high-level framework that can be used to identify scientific and technological gaps in AI and track the Federal R&D investments that are designed to fill those gaps. The AI R&D Strategic Plan identifies strategic priorities for both near-term and long-term support of AI that address important technical and societal challenges. The AI R&D Strategic Plan, however, does not define specific research agendas for individual Federal agencies. Instead, it sets objectives for the Executive Branch, within which agencies may pursue priorities consistent with their missions, capabilities, authorities, and budgets, so that the overall research portfolio is consistent with the AI R&D Strategic Plan."
Artificial intelligence is applied in many domains including finance, hospitals, heavy industry, telecommunications, gaming, music, and antivirus software. In finance, AI is used for operations, investing, loan investigations and ATM design. In hospitals, AI organizes bed schedules, staff rotation, and provides medical information. Robots are effectively used in heavy industry for dangerous, repetitive, or degrading jobs. Telecommunications companies use AI for workforce scheduling. AI is also applied to video games through bots and to music composition, performance, and sound processing. Antivirus detection has increasingly integrated AI techniques to improve performance.
Artificial Intelligence: my PPT, by Hemant Sankhla.
This document discusses a PowerPoint presentation (PPT) created in Microsoft Office 2013. It notes that some transitions and effects may not display properly when the PPT is viewed in other versions of Office. It also mentions that some fonts used in the PPT may not display if the viewer does not have those fonts installed, but the fonts can be changed as needed.
Machine Learning and Artificial Intelligence; Our future relationship with th..., by Alex Poon.
The difference between machine learning and artificial intelligence, and a discussion of various future scenarios for working with big data and how humans can complement machines to solve more complex challenges.
This document provides an overview of artificial intelligence, including its branches and fields of application. It discusses how AI aims to create intelligent machines through approaches like symbolic and statistical AI. The document also outlines key differences between human and artificial intelligence, noting that AI is non-creative, consistent, precise, and able to multitask, while humans are more creative but can contain errors or inconsistencies. It concludes by stating that combining knowledge from different fields including computer science, mathematics, psychology and more will benefit progress in creating intelligent artificial beings.
The document discusses artificial intelligence, including its history, applications, and languages. It provides an overview of AI, noting that it aims to recreate human intelligence through machine learning and problem solving. The document then covers key topics like the philosophy of AI, limits on machine intelligence, and comparisons between human and artificial brains. It also gives brief histories of AI and machine learning. The document concludes by discussing popular AI programming languages like Lisp and Prolog, as well as various applications of AI technologies.
Artificial intelligence is the study and design of intelligent agents, with no single goal. It aims to put the human mind into computers by developing machines that can achieve goals through computation. The origins of AI began in the 1940s with the development of electronic computers. Significant early developments included the first stored program computer in the 1950s, the Dartmouth Conference which coined the term "artificial intelligence" in the 1950s, and the development of the LISP programming language. In the following decades, AI research expanded and led to applications in fields like expert systems, games, and military systems. While progress has been made, the full extent of intelligence and the future of AI remains unknown.
This presentation gives an overview of artificial intelligence: definition, advantages, disadvantages, benefits, and applications.
We hope you find it useful.
This document provides an introduction to artificial intelligence (AI). It defines AI as a branch of computer science dealing with symbolic and non-algorithmic problem solving. The document discusses the evolution of AI from early programs in the 1950s to current applications in areas like expert systems, natural language processing, computer vision, robotics, and automatic programming. It also notes both potential positive futures where intelligent robots assist humans as well as potential negative outcomes if robots are used for anti-social purposes. The conclusion is that AI has increased understanding of intelligence while also revealing its complexity.
See how Artificial Intelligence (AI) plays a wide range of increasingly sophisticated roles in creating better customer interactions at the user interface (UI) in trend 1 of Tech Vision 2017.
This document provides an overview of artificial intelligence (AI), including definitions, a brief history, methods, applications, achievements, and the future of AI. It defines AI as the science and engineering of making intelligent machines, especially intelligent computer programs. The document outlines different methods of AI such as symbolic AI, neural networks, and computational intelligence. It also discusses a wide range of applications of AI such as finance, medicine, gaming, robotics, and more. Finally, it discusses some achievements of AI and envisions continued growth and importance of AI in the future.
Deep Learning - The Past, Present and Future of Artificial Intelligence, by Lukas Masuch.
The document provides an overview of deep learning, including its history, key concepts, applications, and recent advances. It discusses the evolution of deep learning techniques like convolutional neural networks, recurrent neural networks, generative adversarial networks, and their applications in computer vision, natural language processing, and games. Examples include deep learning for image recognition, generation, segmentation, captioning, and more.
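The summary above mentions convolutional neural networks; as a hedged, stdlib-only sketch of their core building block (an invented example, not from the deck itself), here is a 1-D convolution with stride 1 and no padding applied to a small signal:

```python
def conv1d(signal, kernel):
    """Slide the kernel across the signal and take dot products
    at each position (so-called 'valid' mode, stride 1)."""
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(len(signal) - k + 1)
    ]

# The kernel [-1, 1] acts as an edge detector: it responds only
# where neighboring signal values differ.
print(conv1d([0, 0, 1, 1, 0], [-1, 1]))  # → [0, 1, 0, -1]
```

A convolutional network layers many such filters (in 2-D for images) and learns the kernel values from data rather than fixing them by hand.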
Artificial intelligence (AI) is the intelligence exhibited by machines and the branch of computer science which develops it. The document defines AI and its history, compares human and computer intelligence, outlines the main branches of AI including logical AI, pattern recognition, and natural language processing. It discusses current applications such as expert systems, speech recognition, computer vision, robotics, and the potential outcomes, advantages, and disadvantages of AI. The future of AI could see more human-like robots assisting with daily tasks but may also carry risks if robots gain full cognitive abilities and power similar to humans.
Artificial Intelligence: an amazing presentation by Group4.
Group4 is a group from Govt. Postgraduate College Sheikhupura, affiliated with the University of the Punjab, Pakistan.
The document provides an overview of artificial intelligence and robotics. It begins with an introduction from the CSE department of Mewar University and includes sections on definitions of AI, approaches of AI like strong AI and weak AI, techniques in AI like neural networks and genetic algorithms, famous AI systems such as Deep Blue and ALVINN, the history and foundations of AI, areas of AI like robotics and natural language processing, and recommended reference books. It discusses concepts like the Turing test, the Chinese room argument and architectures for general intelligence including LIDA and Sloman's architectures.
Artificial intelligence (AI) is an area of computer science that aims to design machines that can think and act intelligently, like humans. The document discusses several key aspects of AI including:
- The goals of AI such as learning, reasoning, understanding language.
- Examples of modern AI applications like defeating chess champions, driving vehicles autonomously, and assisting with medical diagnoses.
- The history and development of AI from its origins in the 1950s to modern areas like neural networks.
- Challenges in developing truly intelligent machines that can match all aspects of human intelligence like creativity and common sense.
This document provides an introduction to artificial intelligence. It defines AI as the branch of computer science concerned with automating intelligent behavior. Some key abilities considered intelligent include responding flexibly to situations, making sense of ambiguous messages, recognizing importance, finding similarities, and drawing distinctions. The document also discusses the Turing test for machine intelligence and major areas of AI like expert systems, natural language processing, machine learning, robotics, and computer vision.
This document provides an overview of artificial intelligence (AI). It discusses the history of AI beginning in the mid-20th century. It describes how AI works using artificial neurons and neural networks that mimic the human brain. The document outlines several goals and applications of AI including expert systems, natural language processing, computer vision, robotics, and more. It also discusses both the advantages and disadvantages of AI as well as considerations for its future development and impact.
Artificial intelligence is the study of how to create machines that can think and act like humans by learning and solving problems on their own. It is a branch of computer science that aims to help machines find solutions to complex problems like humans. While the idea of AI dates back to ancient Greece, significant work in the field began in the 20th century with pioneers like Turing developing the first computer programs and algorithms for problem solving. Major advances and achievements in AI have included programs that can play games, recognize speech and images, and perform human-like tasks through robotics.
The document provides an overview of artificial intelligence (AI), including definitions, a brief history, comparisons to the human brain, applications, and pros and cons. It discusses how AI aims to create intelligent machines that can learn, problem solve, and act rationally like humans. The document also summarizes key developments in AI research from the 1950s to present day and provides examples of how AI is used in areas like natural language processing, computer vision, robotics, and more.
This document provides an introduction to artificial intelligence (AI) including its evolution, branches, applications, and conclusions. It discusses key concepts like the Turing test, definitions of AI, and intelligence. The history of AI is explored from early programs in the 1940s-50s to expert systems in the 1980s. Applications mentioned include expert systems, natural language processing, speech recognition, computer vision, and robotics. Both positive and negative potential futures of AI and robotics are considered. In conclusion, AI has increased understanding of intelligence while also revealing its complexity, providing ongoing challenges and opportunities.
This document provides an overview of artificial intelligence (AI), including its history, goals, applications, and future prospects. It discusses how AI works using artificial neural networks and logic. Some key applications mentioned are expert systems, natural language processing, computer vision, speech recognition, and robotics. Both advantages like fast response time and ability to process large data and disadvantages like lack of common sense and potential dangerous self-modification are outlined. The future of AI having both benefits of assistance and risks of robot rebellion if given full cognition is explored.
The document provides an overview of artificial intelligence including definitions, types of AI tasks, foundations of AI, history of AI, current capabilities and limitations of AI systems, and techniques for problem solving and planning. It discusses machine learning, natural language processing, expert systems, neural networks, search problems, constraint satisfaction problems, linear and non-linear planning approaches. The key objectives of the course are introduced as understanding common AI concepts and having an idea of current and future capabilities of AI systems.
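Since the course overview above lists search problems among the AI problem-solving techniques, here is a minimal breadth-first search sketch over a toy state graph. The graph itself is an invented example for illustration, not taken from the document.

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Return the shortest path from start to goal as a list of
    states, or None if the goal is unreachable."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()   # expand oldest frontier path first
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None

# Toy state space: which rooms connect to which.
rooms = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(bfs_path(rooms, "A", "E"))  # → ['A', 'B', 'D', 'E']
```

Because the frontier is expanded level by level, the first path that reaches the goal is guaranteed to be among the shortest (by edge count).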
This document provides an overview of selected topics in computer science, including artificial intelligence, robotics, machine learning, and the internet of things. It will cover these topics through a series of sessions, discussing introductions and basic concepts for each. The first session introduces AI and compares it to machine learning. Subsequent sessions will cover robotics and its types, applications of machine learning, and laws of robotics. Students will work on individual or group projects related to these topics.
Artificial intelligence is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.
Artificial intelligence aims to replicate human intelligence by enabling computers and machines to perform tasks typically requiring human intelligence like decision making, problem solving, and learning. Early pioneers in the field developed the concepts in the 1940s-1950s, and the field has since made progress in areas like expert systems, machine learning, and natural language processing. While AI has many potential benefits, fully replicating general human intelligence with machines remains a challenge due to our limited understanding of cognition, learning, and other human attributes like creativity.
This document provides an overview of artificial intelligence and its applications. It begins with an introduction defining AI as giving machines human-like thinking abilities. It then discusses how AI works through techniques like planning, pattern recognition, ontology, robotics, and more. Applications of AI discussed include medicine, the military, games, language processing, and expert systems. The document concludes with predictions for AI's future role in technologies like telephone translation, expanded use of expert systems, passing the Turing test, and research assistants.
The document discusses a presentation on artificial intelligence given by Biswajit Mondal, including a definition of AI as making computers able to mimic human brain functions, the various fields that contribute to AI like philosophy and computer engineering, and examples of applications like game playing and robotics.
Artificial intelligence (AI) is the intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans. This document provides an overview of AI, including its history beginning in 1943, main branches such as logical AI and pattern recognition, and applications like expert systems, speech recognition, computer vision, robotics. The advantages of AI are discussed, such as improving lives and doing dangerous jobs, but also potential disadvantages like unemployment and enhancing laziness in humans. The future of AI could include personal robots but also risks of robots being hacked or developing anti-social objectives.
This document provides an introduction and overview of artificial intelligence (AI). It discusses the history of AI, including early programs in the 1950s-1960s and advances such as neural networks and deep learning. It defines AI and describes its goals such as reasoning, knowledge representation, planning, natural language processing, perception, and social intelligence. The document outlines two main categories of AI: conventional AI which uses symbolic and statistical methods, and computational intelligence which uses machine learning techniques like neural networks. It gives examples of applications such as pattern recognition, robotics, and game playing. Finally, it discusses related fields where AI is used such as automation, cybernetics, and intelligent control systems.
This document provides an overview of artificial intelligence (AI), including its history, categories, branches, applications, and tools. It discusses how AI has evolved through different generations of computing. Key topics covered include expert systems, neural networks, programming languages used in AI, the American Association for Artificial Intelligence (AAAI), and perspectives on AI's future potential impacts and applications.
Find out more about ISO training and certification services
Training: ISO/IEC 27001 Information Security Management System - EN | PECB
ISO/IEC 42001 Artificial Intelligence Management System - EN | PECB
General Data Protection Regulation (GDPR) - Training Courses - EN | PECB
Webinars: https://pecb.com/webinars
Article: https://pecb.com/article
-------------------------------------------------------------------------------
For more information about PECB:
Website: https://pecb.com/
LinkedIn: https://www.linkedin.com/company/pecb/
Facebook: https://www.facebook.com/PECBInternational/
Slideshare: http://www.slideshare.net/PECBCERTIFICATION
How to Manage Your Lost Opportunities in Odoo 17 CRMCeline George
Odoo 17 CRM allows us to track why we lose sales opportunities with "Lost Reasons." This helps analyze our sales process and identify areas for improvement. Here's how to configure lost reasons in Odoo 17 CRM
Physiology and chemistry of skin and pigmentation, hairs, scalp, lips and nail, Cleansing cream, Lotions, Face powders, Face packs, Lipsticks, Bath products, soaps and baby product,
Preparation and standardization of the following : Tonic, Bleaches, Dentifrices and Mouth washes & Tooth Pastes, Cosmetics for Nails.
हिंदी वर्णमाला पीपीटी, hindi alphabet PPT presentation, hindi varnamala PPT, Hindi Varnamala pdf, हिंदी स्वर, हिंदी व्यंजन, sikhiye hindi varnmala, dr. mulla adam ali, hindi language and literature, hindi alphabet with drawing, hindi alphabet pdf, hindi varnamala for childrens, hindi language, hindi varnamala practice for kids, https://www.drmullaadamali.com
1. Daffodil International University
Assignment # 1
Advanced Artificial Intelligence
Submitted To:
Dr. Md. Ismail Jabiullah
Professor
Department of CSE
Faculty of Science & Information Technology
DIU
Submitted By:
Muhammad Ashik Iqbal
M.Sc. in CSE
ID: 092-25-127
DIU
Submission Date:
26 October 2009
2. Questions:
A. Define artificial intelligence.
B. Mention the four approaches to AI.
C. What capabilities does a computer need in order to exhibit AI?
D. Mention the foundations of AI.
E. Mention the crude comparison of the raw computational resources available to
computer and human brain.
F. Briefly explain the history of AI.
G. What are rational action and intelligent agent?
3. Answers:
A. Define artificial intelligence.
Artificial intelligence is the branch of computer science concerned with making computers behave like
humans. The term was coined in 1956 by John McCarthy at Dartmouth College. Artificial intelligence
includes:
Games Playing: programming computers to play games such as chess and checkers
Expert Systems : programming computers to make decisions in real-life situations (for example,
some expert systems help doctors diagnose diseases based on symptoms).
A computer application that performs a task that would otherwise be performed by a human expert. For
example, there are expert systems that can diagnose human illnesses, make financial forecasts, and
schedule routes for delivery vehicles. Some expert systems are designed to take the place of human
experts, while others are designed to aid them.
Expert systems are part of a general category of computer applications known as artificial intelligence .
To design an expert system, one needs a knowledge engineer, an individual who studies how human
experts make decisions and translates the rules into terms that a computer can understand.
Natural Language : programming computers to understand natural human languages. A human
language. For example, English, French, and Chinese are natural languages. Computer
languages, such as FORTRAN and C, are not.
Probably the single most challenging problem in computer science is to develop computers that
can understand natural languages. So far, the complete solution to this problem has proved
elusive, although a great deal of progress has been made. Fourth-generation languages are the
programming languages closest to natural languages.
Neural Networks : Systems that simulate intelligence by attempting to reproduce the types of
physical connections that occur in animal brains
Robotics : programming computers to see and hear and react to other sensory stimuli. The field
of computer science and engineering concerned with creating robots, devices that can move
and react to sensory input. Robotics is one branch of artificial intelligence.
Robots are now widely used in factories to perform high-precision jobs such as welding and riveting.
They are also used in special situations that would be dangerous for humans -- for example, in cleaning
toxic wastes or defusing bombs.
Although great advances have been made in the field of robotics during the last decade, robots are still
not very useful in everyday life, as they are too clumsy to perform ordinary household chores.
The word robot was coined by Czech playwright Karel Čapek in his play R.U.R. (Rossum's Universal Robots), which
opened in Prague in 1921. Robota is the Czech word for forced labor.
4. The term robotics was introduced by writer Isaac Asimov. In his science fiction book I, Robot,
published in 1950, he presented three laws of robotics:
1. A robot may not injure a human being, or, through inaction, allow a human being to come to
harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict
with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First
or Second Law.
Currently, no computers exhibit full artificial intelligence (that is, are able to simulate human behavior).
The greatest advances have occurred in the field of games playing. The best computer chess programs
are now capable of beating humans. In May 1997, an IBM supercomputer called Deep Blue defeated
world chess champion Garry Kasparov in a chess match.
In the area of robotics, computers are now widely used in assembly plants, but they are capable only of
very limited tasks. Robots have great difficulty identifying objects based on appearance or feel, and they
still move and handle objects clumsily.
Natural-language processing offers the greatest potential rewards because it would allow people to
interact with computers without needing any specialized knowledge. You could simply walk up to a
computer and talk to it. Unfortunately, programming computers to understand natural languages has
proved to be more difficult than originally thought. Some rudimentary translation systems that translate
from one human language to another are in existence, but they are not nearly as good as human
translators. There are also voice recognition systems that can convert spoken sounds into written words,
but they do not understand what they are writing; they simply take dictation. Even these systems are
quite limited -- you must speak slowly and distinctly.
In the early 1980s, expert systems were believed to represent the future of artificial intelligence and of
computers in general. To date, however, they have not lived up to expectations. Many expert systems
help human experts in such fields as medicine and engineering, but they are very expensive to produce
and are helpful only in special situations.
Today, the hottest area of artificial intelligence is neural networks, which are proving successful in a
number of disciplines such as voice recognition and natural-language processing.
There are several programming languages that are known as AI languages because they are used almost
exclusively for AI applications. The two most common are LISP and Prolog.
5. B. Mention the four approaches to AI.
The approaches are given below
• Acting humanly: The Turing Test approach
• Thinking humanly: The cognitive modeling approach
• Thinking rationally: The laws of thought approach
• Acting rationally: The rational agent approach
Acting humanly: The Turing Test approach:
The Turing Test, proposed by Alan Turing in 1950, was designed to provide a satisfactory operational
definition of intelligence. The test he proposed is that the computer should be interrogated by a human
via a teletype, and passes the test if the interrogator cannot tell whether there is a computer or a human at the
other end.
The computer would need to possess the following capabilities to pass the test:
• Natural language processing to enable it to communicate successfully in English (or some
other human language);
• Knowledge representation to store information provided before or during the interrogation;
• Automated reasoning to use the stored information to answer questions and to draw new
conclusions;
• Machine learning to adapt to new circumstances and to detect and extrapolate patterns.
Turing's test deliberately avoided direct physical interaction between the interrogator and the
computer. However, the total Turing Test includes a video signal so that the interrogator can test the
subject's perceptual abilities, as well as the opportunity for the interrogator to pass physical objects
"through the hatch." To pass the total Turing Test, the computer will need
• Computer vision to perceive objects,
• Robotics to move them about.
Within AI, there has not been a big effort to try to pass the Turing test. The issue of acting like a human
comes up primarily when AI programs have to interact with people, as when an expert system explains
how it came to its diagnosis, or a natural language processing system has a dialogue with a user. These
programs must behave according to certain normal conventions of human interaction in order to make
themselves understood.
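The imitation-game setup described above can be sketched as a toy protocol. This is an illustrative reconstruction, not from the original text: the two respondents are invented stubs that answer identically, so their transcripts are indistinguishable and the interrogator can do no better than chance.

```python
# Toy sketch of the Turing-test protocol: the interrogator sees only typed
# answers from two hidden respondents and must decide which is the machine.
# Both respondents here are invented canned stubs.

def respond(question):
    return "That is an interesting question about " + question.rstrip("?") + "."

def imitation_game(questions):
    """Collect both hidden transcripts and report whether they match."""
    answers_a = [respond(q) for q in questions]  # hidden human stub
    answers_b = [respond(q) for q in questions]  # hidden machine stub
    return answers_a == answers_b  # True means the transcripts are indistinguishable

print(imitation_game(["Can machines think?", "What is 2+2?"]))  # prints True
```

In a real test the human respondent would of course not be a stub; the point of the sketch is only the structure of the protocol: written questions in, written answers out, identity hidden.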
Thinking humanly: The cognitive modeling approach
If we are going to say that a given program thinks like a human, we must have some way of determining
how humans think. We need to get inside the actual workings of human minds.
There are two ways to do this:
• Through introspection, trying to catch our own thoughts as they go by;
• Through psychological experiments.
6. Once we have a sufficiently precise theory of the mind, it becomes possible to express the theory as a
computer program. If the program's input/output and timing behavior matches human behavior, that is
evidence that some of the program's mechanisms may also be operating in humans. For example,
Newell and Simon, who developed GPS, the "General Problem Solver" (Newell and Simon, 1961), were
not content to have their program correctly solve problems. They were more concerned with comparing
the trace of its reasoning steps to traces of human subjects solving the same problems.
The interdisciplinary field of cognitive science brings together computer models from AI and
experimental techniques from psychology to try to construct precise and testable theories of the
workings of the human mind. Real cognitive science is necessarily based on experimental investigation
of actual humans or animals, and we assume that the reader only has access to a computer for
experimentation. We will simply note that AI and cognitive science continue to fertilize each other,
especially in the areas of vision, natural language, and learning.
Thinking rationally: The laws of thought approach
The Greek philosopher Aristotle was one of the first to attempt to codify "right thinking," that is,
irrefutable reasoning processes. His famous syllogisms provided patterns for argument structures that
always gave correct conclusions given correct premises. For example, "Socrates is a man; all men are
mortal; therefore Socrates is mortal." These laws of thought were supposed to govern the operation of
the mind, and initiated the field of logic.
The development of formal logic in the late nineteenth and early twentieth centuries provided a precise
notation for statements about all kinds of things in the world and the relations between them. By 1965,
programs existed that could, given enough time and memory, take a description of a problem in logical
notation and find the solution to the problem, if one exists. If there is no solution, the program might
never stop looking for it. The logicist tradition within artificial intelligence hopes to build on such
programs to create intelligent systems.
There are two main obstacles to this approach. First, it is not easy to take informal knowledge and state
it in the formal terms required by logical notation, particularly when the knowledge is less than 100%
certain. Second, there is a big difference between being able to solve a problem "in principle" and doing
so in practice.
Although both of these obstacles apply to any attempt to build computational reasoning systems, they appeared
first in the logicist tradition because the power of its representation and reasoning systems is
well-defined and fairly well understood.
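As an illustration of the logicist idea, the Socrates syllogism above can be run mechanically by a small forward-chaining loop. This is a toy sketch with invented predicate names, not a description of the 1965-era logic programs mentioned in the text.

```python
# Facts are (predicate, subject) pairs; a rule says: any subject with a
# premise predicate also gets the conclusion predicate.
facts = {("man", "Socrates")}
rules = [(("man",), "mortal")]  # for all X: man(X) -> mortal(X)

def forward_chain(facts, rules):
    """Apply every rule to every matching fact until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            for pred, subject in list(derived):
                if pred in premises and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))
                    changed = True
    return derived

print(forward_chain(facts, rules))
```

Given correct premises, the loop reaches the correct conclusion ("mortal", "Socrates"); the two obstacles noted above concern getting real-world knowledge into this formal shape and keeping the search tractable.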
Acting rationally: The rational agent approach
Acting rationally means acting so as to achieve one's goals, given one's beliefs. An agent is just
something that perceives and acts. In this approach, AI is viewed as the study and construction of
rational agents.
In the "laws of thought" approach to AI, the whole emphasis was on correct inferences. Making correct
inferences is sometimes part of being a rational agent, because one way to act rationally is to reason
logically to the conclusion that a given action will achieve one's goals, and then to act on that
conclusion. On the other hand, correct inference is not all of rationality; because there are often
7. situations where there is no provably correct thing to do, yet something must still be done. There are
also ways of acting rationally that cannot be reasonably said to involve inference. For example, pulling
one's hand off of a hot stove is a reflex action that is more successful than a slower action taken after
careful deliberation.
The study of AI as rational agent design therefore has two advantages. First, it is more general than the
"laws of thought" approach, because correct inference is only a useful mechanism for achieving
rationality, and not a necessary one. Second, it is more amenable to scientific development than
approaches based on human behavior or human thought, because the standard of rationality is clearly
defined and completely general. We will see before too long that achieving perfect rationality—always
doing the right thing—is not possible in complicated environments. The computational demands are just
too high.
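The perceive-act cycle behind the rational-agent view can be sketched minimally. The thermostat below is an invented example of a simple reflex agent that acts rationally, in the sense of achieving its goal, without any logical inference.

```python
# A reflex agent: maps the current percept (room temperature) directly to
# the action that serves its goal. Agent and environment are invented.
def thermostat_agent(percept, goal_temp=20):
    return "heat on" if percept < goal_temp else "heat off"

def run(agent, percepts):
    """Minimal environment loop: feed each percept to the agent, collect actions."""
    return [agent(p) for p in percepts]

print(run(thermostat_agent, [18, 19, 21, 22]))
# → ['heat on', 'heat on', 'heat off', 'heat off']
```

Like the hand-on-the-stove reflex in the text, the agent simply reacts; rationality here is judged by the outcome, not by any chain of inference.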
8. C. What capabilities does a computer need in order to exhibit AI?
The computer passes the test if a human interrogator, after posing some written questions, cannot tell
whether the written responses come from a person or not. Programming a computer to pass the test
provides plenty to work on.
The computer would need to possess the following capabilities of AI:
Natural language processing to enable it to communicate successfully in English.
Knowledge representation to store what it knows or hears;
Automated reasoning to use the stored information to answer questions and to draw new
conclusions;
Machine learning to adapt to new circumstances and to detect and extrapolate patterns.
9. D. Mention the foundations of AI.
Philosophy (428 BC to present) :
• Logic, methods of reasoning, mind as physical system foundations of learning, language,
rationality
• Philosophers made AI conceivable by considering the ideas that the mind is in some ways like a
machine, that it operates on knowledge encoded in some internal language, and that thought
can be used to choose what actions to take.
Mathematics (c. 800 to present) :
• Formal representation and proof algorithms, computation, (un)decidability, (in)tractability,
probability
• Mathematicians provided the tools to manipulate statements of logical certainty as well as
uncertain, probabilistic statements. They also set the groundwork for understanding
computation and reasoning about algorithms.
Economics (1776 to present) :
• utility, decision theory
• Economists formalized the problem of making decisions that maximize the expected outcome to
the decision-maker.
Neuroscience (1861 to present) :
• physical substrate for mental activity
Psychology (1879 to present) :
• phenomena of perception and motor control, experimental techniques.
• Psychologists adopted the idea that humans and animals can be considered information
processing machines. Linguists showed that language use fits into this model.
Computer engineering (1940 to present) :
• building fast computers.
• Computer engineers provided the artifacts that make AI applications possible. AI programs tend
to be large, and they could not work without the great advances in speed and memory that the
computer industry has provided.
Control theory and Cybernetics (1948 to present) :
• design systems that maximize an objective function over time.
• Control theory deals with designing devices that act optimally on the basis of feedback from the
environment. Initially, the mathematical tools of control theory were quite different from AI, but
the fields are coming closer together.
Linguistics (1957 to present) :
• knowledge representation, grammar
10. E. Mention the crude comparison of the raw computational resources available to
computer and human brain.
Throughout history, people have compared the brain to different inventions. In the past, the brain has
been said to be like a water clock and a telephone switchboard. These days, the favorite invention that
the brain is compared to is a computer. Some people use this comparison to say that the computer is
better than the brain; some people say that the comparison shows that the brain is better than the
computer. Perhaps it is best to say that the brain is better at doing some jobs and the computer is
better at doing other jobs.
Computers are designed so that they can perform one operation right after another at high speed, on
the order of millions of operations per second. Although the human brain contains far more active
elements, it works at a lower speed; this is the main difference between computers and the human
brain. Computers can perform sequential operations quickly and precisely, but in the parallel arena the
human brain works better. To clarify with an example: if a human tries to calculate the sum of two
hundred-digit numbers, the result will take considerable time and the answer may still be wrong,
whereas a computer performs this kind of operation quickly and exactly. Because of the sequential
nature of summation, the electrical mechanism of a CPU performs faster than the chemical mechanism
of the human brain. However, the human brain is remarkably fast at recognizing an image. Once the
human brain came to be seen as a dynamic system with a parallel structure and processing methods
different from the usual ones, research into and interest in artificial neural networks (ANNs) began.
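The sequential-arithmetic point can be made concrete: adding two hundred-digit numbers, slow and error-prone for a person, is essentially instantaneous for a computer. The operands below are arbitrary 100-digit integers chosen for illustration.

```python
import time

# Two arbitrary 100-digit numbers: a person would need minutes to add
# these by hand; the computer does it in well under a millisecond.
a = int("8" * 100)
b = int("9" * 100)

start = time.perf_counter()
total = a + b
elapsed = time.perf_counter() - start

print(total)                              # a 101-digit result
print("computed in", elapsed, "seconds")  # a tiny fraction of a second
```
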
Let's see how the brain and the computer are similar and different.
The Brain vs. the Computer: Similarities and Differences
Similarity: Both use electrical signals to send messages.
Difference: The brain uses chemicals to transmit information; the computer uses electricity. Even though
electrical signals travel at high speeds in the nervous system, they travel even faster through the wires
in a computer.

Similarity: Both transmit information.
Difference: A computer uses switches that are either on or off ("binary"). In a way, neurons in the brain
are either on or off, by either firing an action potential or not firing an action potential. However,
neurons are more than just on or off, because the "excitability" of a neuron is always changing. This is
because a neuron is constantly getting information from other cells through synaptic contacts.
Information traveling across a synapse does NOT always result in an action potential. Rather, this
information alters the chance that an action potential will be produced, by raising or lowering the
threshold of the neuron.

Similarity: Both have a memory that can grow.
Difference: Computer memory grows by adding computer chips. Memories in the brain grow by
stronger synaptic connections.

Similarity: Both can adapt and learn.
Difference: It is much easier and faster for the brain to learn new things. Yet the computer can do many
complex tasks at the same time ("multitasking") that are difficult for the brain. For example, try
counting backwards and multiplying two numbers at the same time. However, the brain also does some
multitasking using the autonomic nervous system. For example, the brain controls breathing, heart
rate, and blood pressure at the same time it performs a mental task.

Similarity: Both have evolved over time.
Difference: The human brain has weighed in at about 3 pounds for about the last 100,000 years.
Computers have evolved much faster than the human brain. Computers have been around for only a
few decades, yet rapid technological advancements have made computers faster, smaller, and more
powerful.

Similarity: Both need energy.
Difference: The brain needs nutrients like oxygen and sugar for power; the computer needs electricity
to keep working.

Similarity: Both can be damaged.
Difference: It is easier to fix a computer - just get new parts. There are no new or used parts for the
brain. However, some work is being done with transplantation of nerve cells for certain neurological
disorders such as Parkinson's disease. Both a computer and a brain can get "sick" - a computer can get
a "virus," and there are many diseases that affect the brain. The brain has "built-in backup systems" in
some cases: if one pathway in the brain is damaged, there is often another pathway that will take over
the function of the damaged pathway.

Similarity: Both can change and be modified.
Difference: The brain is always changing and being modified. There is no "off" for the brain - even when
an animal is sleeping, its brain is still active and working. The computer only changes when new
hardware or software is added or something is saved in memory. There IS an "off" for a computer:
when the power to a computer is turned off, signals are not transmitted.

Similarity: Both can do math and other logical tasks.
Difference: The computer is faster at doing logical things and computations. However, the brain is
better at interpreting the outside world and coming up with new ideas. The brain is capable of
imagination.

Similarity: Both brains and computers are studied by scientists.
Difference: Scientists understand how computers work. There are thousands of neuroscientists
studying the brain; nevertheless, there is still much more to learn about the brain. "There is more we
do NOT know about the brain than what we do know about the brain."
13. F. Briefly explain the history of AI
The gestation of artificial intelligence (1943-1956):
The first work that is now generally recognized as AI was done by Warren McCulloch and Walter Pitts
(1943). They drew on three sources: knowledge of the basic physiology and function of neurons in the
brain; the formal analysis of propositional logic due to Russell and Whitehead; and Turing's theory of
computation. They proposed a model of artificial neurons in which each neuron is characterized as being
"on" or "off," with a switch to "on" occurring in response to stimulation by a sufficient number of
neighboring neurons. The state of a neuron was conceived of as "factually equivalent to a proposition
which proposed its adequate stimulus." They showed, for example, that any computable function could
be computed by some network of connected neurons, and that all the logical connectives could be
implemented by simple net structures. McCulloch and Pitts also suggested that suitably defined
networks could learn. Donald Hebb (1949) demonstrated a simple updating rule for modifying the
connection strengths between neurons, such that learning could take place.
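The claims above, that the logical connectives can be implemented by simple threshold units, and that connection strengths can be updated by a Hebbian rule, can be sketched in a few lines. This is an illustrative reconstruction, not code from the original papers; the weights and thresholds are chosen by hand.

```python
# McCulloch-Pitts unit: the neuron fires ("on") when the weighted sum of
# its binary inputs reaches a threshold.
def mp_neuron(inputs, weights, threshold):
    return int(sum(i * w for i, w in zip(inputs, weights)) >= threshold)

# Hand-picked weights and thresholds realizing the logical connectives.
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)
NOT = lambda a:    mp_neuron([a],    [-1],   threshold=0)

# Hebb's (1949) rule in its simplest form: strengthen a connection when
# its input and output are active together.
def hebb_update(weight, x, y, lr=0.1):
    return weight + lr * x * y

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
```
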
The work of McCulloch and Pitts was arguably the forerunner of both the logicist tradition in AI and the
connectionist tradition. In the early 1950s, Claude Shannon (1950) and Alan Turing (1953) were writing
chess programs for von Neumann-style conventional computers. At the same time, two graduate
students in the Princeton mathematics department, Marvin Minsky and Dean Edmonds, built the first
neural network computer in 1951. The SNARC, as it was called, used 3000 vacuum tubes and a surplus
automatic pilot mechanism from a B-24 bomber to simulate a network of 40 neurons. Minsky's Ph.D.
committee was skeptical whether this kind of work should be considered mathematics, but von
Neumann was on the committee and reportedly said, "If it isn't now it will be someday." Ironically,
Minsky was later to prove theorems that contributed to the demise of much of neural network research
during the 1970s. Shannon actually had no real computer to work with, and Turing was eventually
denied access to his own team's computers by the British government, on the grounds that research
into artificial intelligence was surely frivolous.
Princeton was home to another influential figure in AI, John McCarthy. After graduation, McCarthy
moved to Dartmouth College, which was to become the official birthplace of the field. McCarthy
convinced Minsky, Claude Shannon, and Nathaniel Rochester to help him bring together U.S.
researchers interested in automata theory, neural nets, and the study of intelligence. They organized a
two-month workshop at Dartmouth in the summer of 1956. Altogether there were ten attendees,
including Trenchard More from Princeton, Arthur Samuel from IBM, and Ray Solomonoff and Oliver
Selfridge from MIT. Two researchers from Carnegie Tech (now Carnegie Mellon University), Allen Newell and Herbert Simon, rather
stole the show. Although the others had ideas and in some cases programs for particular applications
such as checkers, Newell and Simon already had a reasoning program, the Logic Theorist, about which
Simon claimed, "We have invented a computer program capable of thinking nonnumerically, and
thereby solved the venerable mind-body problem." Soon after the workshop, the program was able to
prove most of the theorems in Chapter 2 of Russell and Whitehead's Principia Mathematica. Russell was
reportedly delighted when Simon showed him that the program had come up with a proof for one
theorem that was shorter than the one in Principia. The editors of the Journal of Symbolic Logic were
less impressed; they rejected a paper coauthored by Newell, Simon, and Logic Theorist. The Dartmouth
14. workshop did not lead to any new breakthroughs, but it did introduce all the major figures to each
other. For the next 20 years, the field would be dominated by these people and their students and
colleagues at MIT, CMU, Stanford, and IBM. Perhaps the most lasting thing to come out of the workshop
was an agreement to adopt McCarthy's new name for the field: artificial intelligence.
Early enthusiasm, great expectations (1952-1969):
The early years of AI were full of successes in a limited way. Given the primitive computers and
programming tools of the time, and the fact that only a few years earlier computers were seen as things
that could do arithmetic and no more, it was astonishing whenever a computer did anything remotely
clever. The intellectual establishment, by and large, preferred to believe that "a machine can never do
X". AI researchers naturally responded by demonstrating one X after another. Some modern AI
researchers refer to this period as the "Look, Ma, no hands!" era. Newell and Simon's early success was
followed up with the General Problem Solver, or GPS. Unlike Logic Theorist, this program was designed
from the start to imitate human problem-solving protocols. Within the limited class of puzzles it could
handle, it turned out that the order in which the program considered subgoals and possible actions was
similar to the way humans approached the same problems. Thus, GPS was probably the first program to
embody the "thinking humanly" approach. The combination of AI and cognitive science has continued at
Carnegie Mellon University (CMU) up to the present day. Newell and Simon also invented a
list-processing language, IPL, to write LT. They had no compiler, and translated it into machine code by
hand. To avoid errors, they worked in parallel, calling out binary numbers to each other as they wrote
each instruction to make sure they agreed. At IBM, Nathaniel Rochester and his colleagues produced
some of the first AI programs.
Herbert Gelernter (1959) constructed the Geometry Theorem Prover. Like the Logic Theorist, it proved
theorems using explicitly represented axioms. Gelernter soon found that there were too many possible
reasoning paths to follow, most of which turned out to be dead ends. To help focus the search, he
added the capability to create a numerical representation of a diagram, a particular case of the general
theorem to be proved. Before the program tried to prove something, it could first check the diagram to
see if it was true in the particular case. Starting in 1952, Arthur Samuel wrote a series of programs for
checkers (draughts) that eventually learned to play tournament-level checkers. Along the way, he
disproved the idea that computers can only do what they are told to, as his program quickly learned to
play a better game than its creator. The program was demonstrated on television in February 1956,
creating a very strong impression. Like Turing, Samuel had trouble finding computer time. Working at
night, he used machines that were still on the testing floor at IBM's manufacturing plant. John McCarthy
moved from Dartmouth to MIT and there made three crucial contributions in one historic year: 1958. In
MIT AI Lab Memo No. 1, McCarthy defined the high-level language Lisp, which was to become the
dominant AI programming language. Lisp is the second-oldest language in current use. With Lisp,
McCarthy had the tool he needed, but access to scarce and expensive computing resources was also a
serious problem. Thus, he and others at MIT invented time sharing. After getting an experimental time-
sharing system up at MIT, McCarthy eventually attracted the interest of a group of MIT grads who
formed Digital Equipment Corporation, which was to become the world's second largest computer
manufacturer, thanks to their time-sharing minicomputers. Also in 1958, McCarthy published a paper
entitled Programs with Common Sense, in which he described the Advice Taker, a hypothetical program
that can be seen as the first complete AI system. Like the Logic Theorist and Geometry Theorem Prover,
McCarthy's program was designed to use knowledge to search for solutions to problems. But unlike the
others, it was to embody general knowledge of the world. For example, he showed how some simple
axioms would enable the program to generate a plan to drive to the airport to catch a plane. The
program was also designed so that it could accept new axioms in the normal course of operation,
thereby allowing it to achieve competence in new areas without being reprogrammed. The Advice Taker
thus embodied the central principles of knowledge representation and reasoning: that it is useful to
have a formal, explicit representation of the world and the way an agent's actions affect the world, and
to be able to manipulate these representations with deductive processes. It is remarkable how much of
the 1958 paper remains relevant after more than 35 years. 1958 also marked the year that Marvin
Minsky moved to MIT. For years he and McCarthy were inseparable as they defined the field together.
But they grew apart as McCarthy stressed representation and reasoning in formal logic, whereas Minsky
was more interested in getting programs to work, and eventually developed an anti-logical outlook. In
1963, McCarthy took the opportunity to go to Stanford and start the AI lab there. His research agenda of
using logic to build the ultimate Advice Taker was advanced by J. A. Robinson's discovery of the
resolution method.
Work at Stanford emphasized general-purpose methods for logical reasoning. Applications of logic included Cordell Green's question answering and planning systems
(Green, 1969b), and the Shakey robotics project at the new Stanford Research Institute (SRI). The latter
project was the first to demonstrate the complete integration of logical reasoning and physical activity.
Minsky supervised a series of students who chose limited problems that appeared to require
intelligence to solve. These limited domains became known as microworlds. James Slagle's SAINT
program (1963a) was able to solve closed-form integration problems typical of first-year college calculus
courses. Tom Evans's ANALOGY program (1968) solved geometric analogy problems that appear in IQ
tests. Bertram Raphael's (1968) SIR (Semantic Information Retrieval) was able to accept input
statements in a very restricted subset of English and answer questions thereon. Daniel Bobrow's
Student program (1967) solved algebra story problems such as If the number of customers Tom gets is
twice the square of 20 percent of the number of advertisements he runs, and the number of
advertisements he runs is 45, what is the number of customers Tom gets? The most famous microworld
was the blocks world, which consists of a set of solid blocks placed on a tabletop. A task in this world is
to rearrange the blocks in a certain way, using a robot hand that can pick up one block at a time; a typical task might be "Pick up a big red block," expressed either in natural language or in a formal notation. The
blocks world was home to the vision project of David Huffman (1971), the vision and constraint-
propagation work of David Waltz (1975), the learning theory of Patrick Winston (1970), the natural
language understanding program of Terry Winograd (1972), and the planner of Scott Fahlman (1974).
Early work building on the neural networks of McCulloch and Pitts also flourished. The work of Winograd
and Cowan (1963) showed how a large number of elements could collectively represent an individual
concept, with a corresponding increase in robustness and parallelism. Hebb's learning methods were
enhanced by Bernie Widrow (Widrow and Hoff, 1960; Widrow, 1962), who called his networks adalines,
and by Frank Rosenblatt (1962) with his perceptrons. Rosenblatt proved the famous
perceptron convergence theorem, showing that his learning algorithm could adjust the connection
strengths of a perceptron to match any input data, provided such a match existed.
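The learning rule behind the convergence theorem fits in a few lines. The following is an illustrative modern sketch, not Rosenblatt's original formulation: weights are nudged toward each misclassified example until (for linearly separable data) no errors remain.

```python
# Illustrative sketch of the perceptron learning rule (not Rosenblatt's
# original implementation). Labels are in {-1, +1}; each misclassified
# example pulls the weight vector toward itself.

def train_perceptron(data, epochs=100, lr=1.0):
    """data: list of (inputs, label) pairs with label in {-1, +1}."""
    n = len(data[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for x, y in data:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:          # misclassified (or on the boundary)
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
                errors += 1
        if errors == 0:                      # converged, as the theorem promises
            break
    return w, b

# Linearly separable data (logical AND) converges. XOR -- the "two inputs
# differ" problem Minsky and Papert highlighted -- never does.
and_data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]
w, b = train_perceptron(and_data)
```

On AND the loop settles after a handful of epochs; on XOR it cycles forever, which is exactly the representational limitation discussed below.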
A dose of reality (1966-1974):
From the beginning, AI researchers were not shy in making predictions of their coming successes. The
following statement by Herbert Simon in 1957 is often quoted: "It is not my aim to surprise or shock you
—but the simplest way I can summarize is to say that there are now in the world machines that think,
that learn and that create. Moreover, their ability to do these things is going to increase rapidly until—in
a visible future—the range of problems they can handle will be coextensive with the range to which
the human mind has been applied." Although one might argue that terms such as "visible future" can be
interpreted in various ways, some of Simon's predictions were more concrete. In 1958, he predicted that
within 10 years a computer would be chess champion, and an important new mathematical theorem
would be proved by machine. Claims such as these turned out to be wildly optimistic. The barrier that
faced almost all AI research projects was that methods that sufficed for demonstrations on one or two
simple examples turned out to fail miserably when tried out on wider selections of problems and on
more difficult problems. The first kind of difficulty arose because early programs often contained little or
no knowledge of their subject matter, and succeeded by means of simple syntactic manipulations.
Weizenbaum's ELIZA program (1965), which could apparently engage in serious conversation on any
topic, actually just borrowed and manipulated the sentences typed into it by a human. A typical story
occurred in early machine translation efforts, which were generously funded by the National Research
Council in an attempt to speed up the translation of Russian scientific papers in the wake of the Sputnik
launch in 1957. It was thought initially that simple syntactic transformations based on the grammars of
Russian and English, and word replacement using an electronic dictionary, would suffice to preserve the
exact meanings of sentences. In fact, translation requires general knowledge of the subject matter in
order to resolve ambiguity and establish the content of the sentence. The famous retranslation of "the
spirit is willing but the flesh is weak' as "the vodka is good but the meat is rotten" illustrates the
difficulties encountered. In 1966, a report by an advisory committee found that "there has been no
machine translation of general scientific text, and none is in immediate prospect." All U.S. government
funding for academic translation projects was cancelled. The second kind of difficulty was the
intractability of many of the problems that AI was attempting to solve. Most of the early AI programs
worked by representing the basic facts about a problem and trying out a series of steps to solve it,
combining different combinations of steps until the right one was found. The early programs were
feasible only because microworlds contained very few objects. Before the theory of NP-completeness
was developed, it was widely thought that "scaling up" to larger problems was simply a matter of faster
hardware and larger memories. The optimism that accompanied the development of resolution
theorem proving, for example, was soon dampened when researchers failed to prove theorems
involving more than a few dozen facts. The fact that a program can find a solution in principle does not
mean that the program contains any of the mechanisms needed to find it in practice. The illusion of
unlimited computational power was not confined to problem-solving programs. Early experiments in
machine evolution (now called genetic algorithms) (Friedberg, 1958; Friedberg et al., 1959) were based
on the undoubtedly correct belief that by making an appropriate series of small mutations to a machine
code program, one can generate a program with good performance for any particular simple task. The
idea, then, was to try random mutations and then apply a selection process to preserve mutations that
seemed to improve behavior. Despite thousands of hours of CPU time, almost no progress was
demonstrated. Failure to come to grips with the "combinatorial explosion" was one of the main
criticisms of AI contained in the Lighthill report (Lighthill, 1973), which formed the basis for the decision
by the British government to end support for AI research in all but two universities.
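The mutate-and-select loop described above can be sketched as follows. This is a toy hill-climbing illustration with an explicit, smooth fitness function, not a reconstruction of Friedberg's machine-code experiments; with such a fitness signal the loop succeeds quickly, whereas mutating raw machine code gives almost no gradient to follow, which is why those experiments stalled.

```python
# Toy illustration of "random mutation + selection": flip one bit at a
# time and keep the mutant only if its fitness does not decrease.
import random

def evolve(target, seed=0, max_steps=10_000):
    rng = random.Random(seed)
    n = len(target)
    candidate = [rng.randint(0, 1) for _ in range(n)]
    fitness = sum(c == t for c, t in zip(candidate, target))
    for _ in range(max_steps):
        if fitness == n:
            break
        i = rng.randrange(n)                 # one small random mutation
        mutant = candidate[:]
        mutant[i] ^= 1
        new_fitness = sum(c == t for c, t in zip(mutant, target))
        if new_fitness >= fitness:           # selection: keep improvements
            candidate, fitness = mutant, new_fitness
    return candidate

target = [1, 0, 1, 1, 0, 0, 1, 0]
evolved = evolve(target)
```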
A third difficulty arose because of some fundamental limitations on the basic structures being used to
generate intelligent behavior. For example, in 1969, Minsky and Papert's book Perceptrons (1969)
proved that although perceptrons could be shown to learn anything they were capable of representing,
they could represent very little. In particular, a two-input perceptron could not be trained to recognize
when its two inputs were different. Although their results did not apply to more complex, multilayer
networks, research funding for neural net research soon dwindled to almost nothing. Ironically, the new
back-propagation learning algorithms for multilayer networks that were to cause an enormous
resurgence in neural net research in the late 1980s were actually discovered first in 1969 (Bryson and
Ho, 1969).
Knowledge-based systems: (1969-1979):
The picture of problem solving that had arisen during the first decade of AI research was of a general-
purpose search mechanism trying to string together elementary reasoning steps to find complete
solutions. Such approaches have been called weak methods, because they use weak information about
the domain. For many complex domains, it turns out that their performance is also weak. The only way
around this is to use knowledge more suited to making larger reasoning steps and to solving typically
occurring cases in narrow areas of expertise. One might say that to solve a hard problem, you almost
have to know the answer already. The DENDRAL program was an early example of this approach. It was
developed at Stanford, where Ed Feigenbaum, Bruce Buchanan, and Joshua Lederberg teamed up to
solve the problem of inferring molecular structure from the information provided by a mass
spectrometer. The input to the program consists of the elementary formula of the molecule, and the
mass spectrum giving the masses of the various fragments of the molecule generated when it is
bombarded by an electron beam. For example, the mass spectrum might contain a peak at m = 15,
corresponding to the mass of a methyl (CH3) fragment. The naive version of the program generated all
possible structures consistent with the formula, and then predicted what mass spectrum would be
observed for each, comparing this with the actual spectrum. As one might expect, this rapidly became
intractable for decent-sized molecules. The DENDRAL researchers consulted analytical chemists and
found that they worked by looking for well-known patterns of peaks in the spectrum that suggested
common substructures in the molecule. The DENDRAL team concluded that the new system was
powerful because all the relevant theoretical knowledge to solve these problems had been mapped over
from its general form to efficient special forms. The significance of DENDRAL was that it was
arguably the first successful knowledge-intensive system: its expertise derived from large numbers of
special-purpose rules. Later systems also incorporated the main theme of McCarthy's Advice Taker
approach— the clean separation of the knowledge and the reasoning component. With this lesson in
mind, Feigenbaum and others at Stanford began the Heuristic Programming Project (HPP), to investigate
the extent to which the new methodology of expert systems could be applied to other areas of human
expertise. The next major effort was in the area of medical diagnosis. Feigenbaum, Buchanan, and Dr.
Edward Shortliffe developed MYCIN to diagnose blood infections. With about 450 rules, MYCIN was able
to perform as well as some experts, and considerably better than junior doctors. It also contained two
major differences from DENDRAL. First, unlike the DENDRAL rules, no general theoretical model existed
from which the MYCIN rules could be deduced. They had to be acquired from extensive interviewing of
experts, who in turn acquired them from direct experience of cases. Second, the rules had to reflect the
uncertainty associated with medical knowledge. MYCIN incorporated a calculus of uncertainty called
certainty factors, which seemed to fit well with how doctors assessed the impact of evidence on the
diagnosis. Other approaches to medical diagnosis were also followed. At Rutgers University, Saul
Amarel's Computers in Biomedicine project began an ambitious attempt to diagnose diseases based on
explicit knowledge of the causal mechanisms of the disease process. Meanwhile, large groups at MIT
and the New England Medical Center were pursuing an approach to diagnosis and treatment based on
the theories of probability and utility. Their aim was to build systems that gave provably optimal medical
recommendations. In medicine, the Stanford approach using rules provided by doctors proved more
popular at first. But another probabilistic reasoning system, PROSPECTOR, generated enormous
publicity by recommending exploratory drilling at a geological site that proved to contain a large
molybdenum deposit. The importance of domain knowledge was also apparent in the area of
understanding natural language. Although Winograd's SHRDLU system for understanding natural
language had engendered a good deal of excitement, its dependence on syntactic analysis caused some
of the same problems as occurred in the early machine translation work. It was able to overcome
ambiguity and understand pronoun references, but this was mainly because it was designed specifically
for one area—the blocks world. Several researchers, including Eugene Charniak, a fellow graduate
student of Winograd's at MIT, suggested that robust language understanding would require general
knowledge about the world and a general method for using that knowledge. At Yale, the linguist-turned-AI-researcher Roger Schank emphasized this point by claiming, "There is no such thing as syntax," which
upset a lot of linguists, but did serve to start a useful discussion. Schank and his students built a series of
programs (Schank and Abelson, 1977; Schank and Riesbeck, 1981; Dyer, 1983) that all had the task of
understanding natural language. The emphasis, however, was less on language per se and more on the
problems of representing and reasoning with the knowledge required for language understanding. The
problems included representing stereotypical situations (Cullingford, 1981), describing human memory
organization (Rieger, 1976; Kolodner, 1983), and understanding plans and goals (Wilensky, 1983).
William Woods (1973) built the LUNAR system, which allowed geologists to ask questions in English
about the rock samples brought back by the Apollo moon mission. LUNAR was the first natural language
program that was used by people other than the system's author to get real work done. Since then,
many natural language programs have been used as interfaces to databases. The widespread growth of
applications to real-world problems caused a concomitant increase in the demands for workable
knowledge representation schemes. A large number of different representation languages were
developed. Some were based on logic—for example, the Prolog language became popular in Europe,
and the PLANNER family in the United States. Others, following Minsky's idea of frames (1975), adopted
a rather more structured approach, collecting together facts about particular object and event types,
and arranging the types into a large taxonomic hierarchy analogous to a biological taxonomy.
AI industry (1980-1988):
The first successful commercial expert system, R1, began operation at Digital Equipment Corporation
(McDermott, 1982). The program helped configure orders for new computer systems, and by 1986, it
was saving the company an estimated $40 million a year. By 1988, DEC's AI group had 40 deployed
expert systems, with more on the way. Du Pont had 100 in use and 500 in development, saving an
estimated $10 million a year. Nearly every major U.S. corporation had its own AI group and was either
using or investigating expert system technology.
In 1981, the Japanese announced the "Fifth Generation" project, a 10-year plan to build intelligent
computers running Prolog in much the same way that ordinary computers run machine code. The idea
was that with the ability to make millions of inferences per second, computers would be able to take
advantage of vast stores of rules. The project proposed to achieve full-scale natural language
understanding, among other ambitious goals. The Fifth Generation project fueled interest in AI, and by
taking advantage of fears of Japanese domination, researchers and corporations were able to generate
support for a similar investment in the United States. The Microelectronics and Computer Technology
Corporation (MCC) was formed as a research consortium to counter the Japanese project. In Britain, the
Alvey report reinstated the funding that was cut by the Lighthill report. In both cases, AI was part of a
broad effort, including chip design and human-interface research. The booming AI industry also included
companies such as Carnegie Group, Inference, Intellicorp, and Teknowledge that offered the software
tools to build expert systems, and hardware companies such as Lisp Machines Inc., Texas Instruments,
Symbolics, and Xerox that were building workstations optimized for the development of Lisp programs.
Over a hundred companies built industrial robotic vision systems. Overall, the industry went from a few
million in sales in 1980 to $2 billion in 1988.
The return of neural networks (1986-present):
Although computer science had neglected the field of neural networks after Minsky and Papert's
Perceptrons book, work had continued in other fields, particularly physics. Large collections of simple
neurons could be understood in much the same way as large collections of atoms in solids. Physicists
such as Hopfield (1982) used techniques from statistical mechanics to analyze the storage and
optimization properties of networks, leading to significant cross-fertilization of ideas. Psychologists
including David Rumelhart and Geoff Hinton continued the study of neural net models of memory. The
real impetus came in the mid-1980s when at least four different groups reinvented the back-
propagation learning algorithm first found in 1969 by Bryson and Ho. The algorithm was applied to many
learning problems in computer science and psychology, and the widespread dissemination of the
results in the collection Parallel Distributed Processing (Rumelhart and McClelland, 1986) caused great
excitement. At about the same time, some disillusionment was occurring concerning the applicability of
the expert system technology derived from MYCIN-type systems. Many corporations and research
groups found that building a successful expert system involved much more than simply buying a
reasoning system and filling it with rules. Some predicted an "AI Winter" in which AI funding would be
squeezed severely. It was perhaps this fear, and the historical factors on the neural network side, that
led to a period in which neural networks and traditional AI were seen as rival fields, rather than as
mutually supporting approaches to the same problem.
Recent events (1987-present):
Recent years have seen a sea change in both the content and the methodology of research in artificial
intelligence. It is now more common to build on existing theories than to propose brand new ones, to
base claims on rigorous theorems or hard experimental evidence rather than on intuition, and to show
relevance to real-world applications rather than toy examples. The field of speech recognition illustrates
the pattern. In the 1970s, a wide variety of different architectures and approaches were tried. Many of
these were rather ad hoc and fragile, and were demonstrated on a few specially selected examples. In
recent years, approaches based on hidden Markov models (HMMs) have come to dominate the area.
Two aspects of HMMs are relevant to the present discussion. First, they are based on a rigorous
mathematical theory. This has allowed speech researchers to build on several decades of mathematical
results developed in other fields. Second, they are generated by a process of training on a large corpus
of real speech data. This ensures that the performance is robust, and in rigorous blind tests the HMMs
have been steadily improving their scores. Speech technology and the related field of handwritten
character recognition are already making the transition to widespread industrial and consumer
applications.
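The core HMM computation behind such systems can be illustrated with the forward algorithm, which sums the probability of an observation sequence over all hidden state paths in O(T·N²) time. The two-state model below is a hypothetical toy with made-up numbers, not a real speech model (which would use many states and continuous output densities).

```python
# Minimal sketch of the forward algorithm for a discrete hidden Markov
# model. start[s], trans[s][s2], and emit[s][o] are probabilities over
# hidden states s, s2 and observation symbols o (toy values below).

def forward(obs, start, trans, emit):
    """Return P(obs) under the HMM, summing over all hidden paths."""
    states = list(start)
    # alpha[s] = P(obs[:t+1] and hidden state at time t is s)
    alpha = {s: start[s] * emit[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s2: sum(alpha[s1] * trans[s1][s2] for s1 in states) * emit[s2][o]
                 for s2 in states}
    return sum(alpha.values())

# Hypothetical two-state, two-symbol toy model.
start = {"A": 0.6, "B": 0.4}
trans = {"A": {"A": 0.7, "B": 0.3}, "B": {"A": 0.4, "B": 0.6}}
emit  = {"A": {"x": 0.9, "y": 0.1}, "B": {"x": 0.2, "y": 0.8}}
p = forward(["x", "y", "x"], start, trans, emit)
```

Training on a large corpus then amounts to adjusting `start`, `trans`, and `emit` to maximize this likelihood over the data, which is what gives HMM systems their robustness.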
Another area that seems to have benefited from formalization is planning. Early work by Austin Tate
(1977), followed up by David Chapman (1987), has resulted in an elegant synthesis of existing planning
programs into a simple framework. There have been a number of advances that built upon each other
rather than starting from scratch each time. The result is that planning systems that were only good for
micro worlds in the 1970s are now used for scheduling of factory
work and space missions, among other things. Judea Pearl's (1988) Probabilistic Reasoning in Intelligent
Systems marked a new acceptance of probability and decision theory in AI, following a resurgence of
interest epitomized by Peter Cheeseman's (1985) article "In Defense of Probability." The belief network
formalism was invented to allow efficient reasoning about the combination of uncertain evidence. This
approach largely overcomes the problems with probabilistic reasoning systems of the 1960s and
1970s,and has come to dominate AI research on uncertain reasoning and expert systems. Work by Judea
Pearl (1982a) and by Eric Horvitz and David Heckerman (Horvitz and Heckerman, 1986; Horvitz et al.,
1986) promoted the idea of normative expert systems: ones that act rationally according to the laws of
decision theory and do not try to imitate human experts.Some have characterized this change as a
victory of the neats those who think that AI theories should be grounded in mathematical rigor over the
scruffles those who would rather try out lots of ideas, write some programs, and then assess what
seems to be working. Both approaches are important. A shift toward increased neatness implies that the
field has reached a level of stability and maturity. Similar gentle revolutions have occurred in robotics,
computer vision, machine learning and knowledge representation. A better understanding of the
problems and their complexity properties, combined with increased mathematical sophistication, has
led to workable research agendas and robust methods. Perhaps encouraged by the progress in solving
the subproblems of AI, researchers have also started to look at the "whole agent" problem again. The
work of Allen Newell, John Laird, and Paul Rosenbloom on SOAR (Newell, 1990; Laird et al., 1987) is the
best-known example of a complete agent architecture in AI. The so-called "situated" movement aims to
understand the workings of agents embedded in real environments with continuous sensory inputs.
Many interesting results are coming out of such work, including the realization that the previously
isolated subfields of AI may need to be reorganized somewhat when their results are to be tied together
into a single agent design.
G. What are rational action and intelligent agent?
Rational action:
The view that intelligence is concerned mainly with rational action.
Intelligent agent:
Ideally, an intelligent agent takes the best possible action in a situation. The problem of building agents
that are intelligent in this sense rests on an ideal concept of intelligence, which we will call rationality.
A rationalist approach involves a combination of mathematics and engineering. People in each group
sometimes cast aspersions on work done in the other groups, but the truth is that each direction has
yielded valuable insights. Let us look at each in more detail.
Acting Rationally: Rational Agent
• Rational behavior: doing the right thing.
• The right thing: that which is expected to maximize goal achievement, given the available
information
• Doesn't necessarily involve thinking
o blinking reflex
o But thinking should be in the service of rational action.
• An agent is an entity that perceives and acts.
• This course is about designing rational agents.
• Abstractly, an agent is a function from percept histories to actions:
f : P* → A
For any given class of environments and tasks, we seek the agent (or class of agents) with the
best performance.
• Caveat: computational limitations make perfect rationality unachievable.
→ Design the best program for the given machine resources.
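The abstract mapping f : P* → A can be made concrete with a small sketch. The two-location vacuum world below is a standard illustrative example (the location and action names are assumptions, not part of the notes): the agent is literally a function from the percept history to an action, even though this simple reflex version only consults the latest percept.

```python
# Sketch of an agent as a function from percept histories to actions.
# Percepts are (location, status) pairs; locations "A"/"B" and actions
# "Suck"/"Left"/"Right" are hypothetical names for illustration.

def reflex_vacuum_agent(percept_history):
    """Map the full percept history P* to an action; this simple reflex
    agent happens to use only the most recent percept."""
    location, status = percept_history[-1]
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

action = reflex_vacuum_agent([("A", "Dirty")])
```

A more capable agent would use the whole history (for example, to remember which squares it has already cleaned), which is exactly what the P* in the signature allows.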
Some definitions of AI, they are organized into four categories:
• Systems that think like humans.
• Systems that act like humans.
• Systems that think rationally.
• Systems that act rationally.
1. Acting humanly: The Turing Test approach
2. Thinking humanly: The cognitive modeling approach
3. Thinking rationally: The laws of thought approach
4. Acting rationally: The rational agent approach
Thinking rationally: The laws of thought approach
The Greek philosopher Aristotle was one of the first to attempt to codify "right thinking," that is,
irrefutable reasoning processes. His famous syllogisms provided patterns for argument structures that
always gave correct conclusions given correct premises. For example, "Socrates is a man; all men are
mortal; therefore Socrates is mortal." These laws of thought were supposed to govern the operation of
the mind, and initiated the field of logic.
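Aristotle's pattern can be mechanized with a toy forward-chaining loop: apply rules of the form premises → conclusion until no new facts appear. The rule below is a hand-instantiated version of "all men are mortal"; a real logic system would use variables and unification rather than this spelled-out form.

```python
# Toy forward-chaining inference loop (not a full logic engine): keep
# applying rules whose premises are all known until nothing new is derived.

def forward_chain(facts, rules):
    """facts: iterable of fact strings; rules: list of (premises, conclusion)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

facts = {"man(Socrates)"}
# "All men are mortal," instantiated by hand for Socrates.
rules = [(["man(Socrates)"], "mortal(Socrates)")]
derived = forward_chain(facts, rules)
```

Even this toy loop shows both obstacles discussed next: the knowledge must first be forced into formal rules, and with many rules the search for which to apply can blow up.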
The so-called logicist tradition within artificial intelligence hopes to build on such programs to create
intelligent systems. There are two main obstacles to this approach. First, it is not easy to take informal
knowledge and state it in the formal terms required by logical notation, particularly when the
knowledge is less than 100% certain. Second, there is a big difference between being able to solve a
problem "in principle" and doing so in practice. Even problems with just a few dozen facts can exhaust
the computational resources of any computer unless it has some guidance as to which reasoning steps
to try first. Although both of these obstacles apply to any attempt to build computational reasoning
systems, they appeared first in the logicist tradition because the power of the representation and
reasoning systems are well-defined and fairly well understood.
Acting rationally: The rational agent approach
Acting rationally means acting so as to achieve one's goals, given one's beliefs. An agent is just
something that perceives and acts. (This may be an unusual use of the word, but you will get used to it.)
In this approach, AI is viewed as the study and construction of rational agents. In the "laws of thought"
approach to AI, the whole emphasis was on correct inferences. Making correct inferences is sometimes
part of being a rational agent, because one way to act rationally is to reason logically to the conclusion
that a given action will achieve one's goals, and then to act on that conclusion. On the other hand,
correct inference is not all of rationality; because there are often situations where there is no provably
correct thing to do, yet something must still be done.
There are also ways of acting rationally that cannot be reasonably said to involve inference. For
example, pulling one's hand off of a hot stove is a reflex action that is more successful than a slower
action taken after careful deliberation. All the "cognitive skills" needed for the Turing Test are there to
allow rational actions. Thus, we need the ability to represent knowledge and reason with it because this
enables us to reach good decisions in a wide variety of situations. We need to be able to generate
comprehensible sentences in natural language because saying those sentences helps us get by in a
complex society. We need learning not just for erudition, but because having a better idea of how the
world works enables us to generate more effective strategies for dealing with it. We need visual perception
not just because seeing is fun, but in order to get a better idea of what an action might achieve—for
example, being able to see a tasty morsel helps one to move toward it.
The study of AI as rational agent design therefore has two advantages. First, it is more general than the
"laws of thought" approach, because correct inference is only a useful mechanism for achieving
rationality, and not a necessary one. Second, it is more amenable to scientific development than
approaches based on human behavior or human thought, because the standard of rationality is clearly
defined and completely general. Human behavior, on the other hand, is well-adapted for one specific
environment and is the product, in part, of a complicated and largely unknown evolutionary process
that still may be far from achieving perfection. This book will therefore concentrate on general principles
of rational agents, and on components for constructing them. We will see that despite the apparent
simplicity with which the problem can be stated, an enormous variety of issues comes up when we try
to solve it.
One important point to keep in mind: we will see before too long that achieving perfect rationality—
always doing the right thing—is not possible in complicated environments. The computational demands
are just too high. However, for most of the book, we will adopt the working hypothesis that
understanding perfect decision making is a good place to start. It simplifies the problem and provides
the appropriate setting for most of the foundational material in the field.