The era of artificial intelligence


Transcript

  • 1. F.I.T Presentation by Prajjwal Kushwaha, Flyers 1 B
  • 2. Agenda: What is artificial intelligence; History of artificial intelligence; Problems; Approaches; Tools; Logics; Neural networks; Evaluating progress; Applications; Philosophy; Real-life applications; Thank you.
  • 3. Artificial intelligence (AI) is the intelligence of machines and the branch of computer science that aims to create it. AI textbooks define the field as "the study and design of intelligent agents", where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1955, defines it as "the science and engineering of making intelligent machines."
  • 4. Thinking machines and artificial beings appear in Greek myths, such as Talos of Crete, the bronze robot of Hephaestus, and Pygmalion's Galatea. Human likenesses believed to have intelligence were built in every major civilization: animated cult images were worshipped in Egypt and Greece, and humanoid automatons were built by Yan Shi, Hero of Alexandria and Al-Jazari. It was also widely believed that artificial beings had been created by Jābir ibn Hayyān, Judah Loew and Paracelsus. By the 19th and 20th centuries, artificial beings had become a common feature in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots). Pamela McCorduck argues that all of these are examples of an ancient urge, as she describes it, "to forge the gods". Stories of these creatures and their fates discuss many of the same hopes, fears and ethical concerns that are presented by artificial intelligence.
  • 5. By 1985 the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research in the field. However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting AI winter began. In the 1990s and early 21st century, AI achieved its greatest successes, albeit somewhat behind the scenes. Artificial intelligence is used for logistics, data mining, medical diagnosis and many other areas throughout the technology industry. The success was due to several factors: the increasing computational power of computers, a greater emphasis on solving specific sub-problems, the creation of new ties between AI and other fields working on similar problems, and a new commitment by researchers to solid mathematical methods and rigorous scientific standards. On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov. In 2005, a Stanford robot won the DARPA Grand Challenge by driving autonomously for 131 miles along an unrehearsed desert trail. Two years later, a team from CMU won the DARPA Urban Challenge when their vehicle autonomously navigated 55 miles in an urban environment while responding to traffic hazards and adhering to all traffic laws. In February 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.
  • 6. The general problem of simulating (or creating) intelligence has been broken down into a number of specific sub-problems. These consist of particular traits or capabilities that researchers would like an intelligent system to display. The traits described below have received the most attention.
  • 7. There is no established unifying theory or paradigm that guides AI research. Researchers disagree about many issues. A few of the most long-standing questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence by studying psychology or neurology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering? Can intelligent behavior be described using simple, elegant principles (such as logic or optimization)? Or does it necessarily require solving a large number of completely unrelated problems? Can intelligence be reproduced using high-level symbols, similar to words and ideas? Or does it require "sub-symbolic" processing? John Haugeland, who coined the term GOFAI (Good Old-Fashioned Artificial Intelligence), also proposed that AI should more properly be referred to as synthetic intelligence, a term which has since been adopted by some non-GOFAI researchers.
  • 8. Cybernetics and brain simulation: In the 1940s and 1950s, a number of researchers explored the connection between neurology, information theory, and cybernetics. Some of them built machines that used electronic networks to exhibit rudimentary intelligence, such as W. Grey Walter's turtles and the Johns Hopkins Beast. Many of these researchers gathered for meetings of the Teleological Society at Princeton University and the Ratio Club in England. By 1960, this approach was largely abandoned, although elements of it would be revived in the 1980s.
    Symbolic: When access to digital computers became possible in the middle 1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation. The research was centered in three institutions: CMU, Stanford and MIT, and each one developed its own style of research. John Haugeland named these approaches to AI "good old-fashioned AI" or "GOFAI". During the 1960s, symbolic approaches had achieved great success at simulating high-level thinking in small demonstration programs. Approaches based on cybernetics or neural networks were abandoned or pushed into the background.
  • 9. Statistical: In the 1990s, AI researchers developed sophisticated mathematical tools to solve specific subproblems. These tools are truly scientific, in the sense that their results are both measurable and verifiable, and they have been responsible for many of AI's recent successes. The shared mathematical language has also permitted a high level of collaboration with more established fields (like mathematics, economics or operations research). Stuart Russell and Peter Norvig describe this movement as nothing less than a "revolution" and "the victory of the neats." Critics argue that these techniques are too focused on particular problems and have failed to address the long-term goal of general intelligence.
    Intelligent agent paradigm: An intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. The simplest intelligent agents are programs that solve specific problems. More complicated agents include human beings and organizations of human beings. The paradigm gives researchers license to study isolated problems and find solutions that are both verifiable and useful, without agreeing on one single approach. An agent that solves a specific problem can use any approach that works – some agents are symbolic and logical, some are sub-symbolic neural networks and others may use new approaches. The paradigm also gives researchers a common language to communicate with other fields, such as decision theory and economics, that also use concepts of abstract agents.
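The intelligent agent paradigm described above can be sketched as a simple perceive–act loop. The thermostat domain and class below are hypothetical illustrations, not a standard API:

```python
# Minimal sketch of the intelligent-agent paradigm: an agent perceives
# its environment and chooses the action that moves it toward success.
# The thermostat example is a made-up illustration.

class ThermostatAgent:
    """A trivial reflex agent: perceives a temperature, acts to reach a target."""

    def __init__(self, target: float):
        self.target = target

    def act(self, percept: float) -> str:
        # Pick whichever action moves the environment toward the goal state.
        if percept < self.target:
            return "heat"
        if percept > self.target:
            return "cool"
        return "idle"

agent = ThermostatAgent(target=21.0)
print(agent.act(18.0))  # heat
print(agent.act(25.0))  # cool
```

The point of the paradigm is that nothing in the agent's interface dictates how `act` is implemented; a lookup table, a logical prover or a neural network would all be valid agents.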
  • 10. Many problems in AI can be solved in theory by intelligently searching through many possible solutions. Reasoning can be reduced to performing a search. For example, logical proof can be viewed as searching for a path that leads from premises to conclusions, where each step is the application of an inference rule. Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis. Robotics algorithms for moving limbs and grasping objects use local searches in configuration space. Many learning algorithms use search algorithms based on optimization.
    Simple exhaustive searches are rarely sufficient for most real-world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes. The solution, for many problems, is to use "heuristics" or "rules of thumb" that eliminate choices that are unlikely to lead to the goal (called "pruning the search tree"). Heuristics supply the program with a "best guess" for the path on which the solution lies.
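Heuristic search as described above can be illustrated with A*, a standard algorithm that orders the search by cost-so-far plus a heuristic "best guess" of the remaining cost. The graph and heuristic values here are a made-up toy example:

```python
import heapq

# A* search: a heuristic estimate of the remaining cost steers the search,
# and already-reached nodes are pruned, so most of the space is never explored.
def a_star(graph, h, start, goal):
    # Priority queue ordered by f = cost so far (g) + heuristic estimate (h).
    frontier = [(h[start], 0, start, [start])]
    best = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in best and best[node] <= g:
            continue  # prune: this node was already reached more cheaply
        best[node] = g
        for nbr, cost in graph.get(node, []):
            heapq.heappush(frontier, (g + cost + h[nbr], g + cost, nbr, path + [nbr]))
    return None, float("inf")

# Toy graph: edges with costs, plus admissible heuristic estimates to "D".
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)], "C": [("D", 1)]}
h = {"A": 3, "B": 2, "C": 1, "D": 0}
path, cost = a_star(graph, h, "A", "D")
print(path, cost)  # ['A', 'B', 'C', 'D'] 4
```

With a heuristic of zero everywhere this degenerates to uniform-cost (exhaustive) search; the better the estimate, the more of the tree gets pruned.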
  • 11. Logic is used for knowledge representation and problem solving, but it can be applied to other problems as well. For example, the satplan algorithm uses logic for planning, and inductive logic programming is a method for learning.
    Several different forms of logic are used in AI research. Propositional or sentential logic is the logic of statements which can be true or false. First-order logic also allows the use of quantifiers and predicates, and can express facts about objects, their properties, and their relations with each other. Fuzzy logic is a version of first-order logic which allows the truth of a statement to be represented as a value between 0 and 1, rather than simply True (1) or False (0). Fuzzy systems can be used for uncertain reasoning and have been widely used in modern industrial and consumer product control systems. Subjective logic models uncertainty in a different and more explicit manner than fuzzy logic: a given binomial opinion satisfies belief + disbelief + uncertainty = 1 within a Beta distribution. By this method, ignorance can be distinguished from probabilistic statements that an agent makes with high confidence.
    Default logics, non-monotonic logics and circumscription are forms of logic designed to help with default reasoning and the qualification problem. Several extensions of logic have been designed to handle specific domains of knowledge, such as: description logics; situation calculus, event calculus and fluent calculus (for representing events and time); causal calculus; belief calculus; and modal logics.
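The belief + disbelief + uncertainty = 1 constraint of a subjective-logic binomial opinion can be sketched in a few lines. The class below is illustrative only (not a standard library); the projected probability b + a·u, with base rate a, is how an opinion is reduced to a single probability:

```python
# Sketch of a subjective-logic binomial opinion: belief, disbelief and
# uncertainty must sum to 1, which lets total ignorance (u = 1) be
# distinguished from a confident probabilistic statement (u near 0).

class BinomialOpinion:
    def __init__(self, belief, disbelief, uncertainty, base_rate=0.5):
        assert abs(belief + disbelief + uncertainty - 1.0) < 1e-9, \
            "belief + disbelief + uncertainty must equal 1"
        self.b, self.d, self.u, self.a = belief, disbelief, uncertainty, base_rate

    def projected_probability(self):
        # Belief plus the base-rate share of the uncertainty mass.
        return self.b + self.a * self.u

confident = BinomialOpinion(0.7, 0.2, 0.1)  # low uncertainty
ignorant = BinomialOpinion(0.0, 0.0, 1.0)   # total ignorance
print(confident.projected_probability())  # 0.75
print(ignorant.projected_probability())   # 0.5
```

Note that the ignorant opinion and a plain probability statement of 0.5 project to the same number, yet the opinions themselves are clearly different, which is exactly the distinction the text describes.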
  • 12. The study of artificial neural networks began in the decade before the field of AI research was founded, in the work of Walter Pitts and Warren McCulloch. Other important early researchers were Frank Rosenblatt, who invented the perceptron, and Paul Werbos, who developed the backpropagation algorithm.
    The main categories of networks are acyclic or feedforward neural networks (where the signal passes in only one direction) and recurrent neural networks (which allow feedback). Among the most popular feedforward networks are perceptrons, multi-layer perceptrons and radial basis networks. Among recurrent networks, the most famous is the Hopfield net, a form of attractor network, which was first described by John Hopfield in 1982. Neural networks can be applied to the problem of intelligent control (for robotics) or learning, using such techniques as Hebbian learning and competitive learning. Hierarchical temporal memory is an approach that models some of the structural and algorithmic properties of the neocortex.
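The perceptron mentioned above is simple enough to show in full. This is a minimal sketch of Rosenblatt's model and its classic learning rule, trained here on the linearly separable AND function:

```python
# Minimal perceptron: a weighted sum plus threshold, trained with the
# classic perceptron learning rule (adjust weights toward the error).

def train_perceptron(samples, epochs=10, lr=0.1):
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias (threshold)
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            # Update weights in the direction that reduces the error.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The AND function: output 1 only when both inputs are 1.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in and_data])  # [0, 0, 0, 1]
```

A single perceptron can only learn linearly separable functions (it famously cannot learn XOR); that limitation is what the multi-layer perceptrons and backpropagation mentioned in the slide overcome.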
  • 13. In 1950, Alan Turing proposed a general procedure to test the intelligence of an agent, now known as the Turing test. This procedure allows almost all the major problems of artificial intelligence to be tested. However, it is a very difficult challenge and at present all agents fail.
    Artificial intelligence can also be evaluated on specific problems such as small problems in chemistry, handwriting recognition and game-playing. Such tests have been termed subject matter expert Turing tests. Smaller problems provide more achievable goals and there are an ever-increasing number of positive results. One classification for outcomes of an AI test is:
    Optimal: it is not possible to perform better.
    Strong super-human: performs better than all humans.
    Super-human: performs better than most humans.
    Sub-human: performs worse than most humans.
  • 14. Artificial intelligence techniques are pervasive and are too numerous to list. Frequently, when a technique reaches mainstream use, it is no longer considered artificial intelligence; this phenomenon is described as the AI effect. There are a number of competitions and prizes to promote research in artificial intelligence. The main areas promoted are: general machine intelligence, conversational behavior, data mining, driverless cars, robot soccer and games.
  • 15. A platform (or "computing platform") is defined as "some sort of hardware architecture or software framework (including application frameworks), that allows software to run." As Rodney Brooks pointed out many years ago, it is not just the artificial intelligence software that defines the AI features of the platform, but rather the actual platform itself that affects the AI that results, i.e., there needs to be work on AI problems on real-world platforms rather than in isolation. A wide variety of platforms has allowed different aspects of AI to develop, ranging from expert systems, albeit PC-based but still an entire real-world system, to various robot platforms such as the widely available Roomba with open interface.
  • 16. Artificial intelligence, by claiming to be able to recreate the capabilities of the human mind, is both a challenge and an inspiration for philosophy. Are there limits to how intelligent machines can be? Is there an essential difference between human intelligence and artificial intelligence? Can a machine have a mind and consciousness?
    Turing's "polite convention": We need not decide if a machine can "think"; we need only decide if a machine can act as intelligently as a human being. This approach to the philosophical problems associated with artificial intelligence forms the basis of the Turing test.
    The Dartmouth proposal: "Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it." This conjecture was printed in the proposal for the Dartmouth Conference of 1956, and represents the position of most working AI researchers.
    Newell and Simon's physical symbol system hypothesis: "A physical symbol system has the necessary and sufficient means of general intelligent action." Newell and Simon argue that intelligence consists of formal operations on symbols. Hubert Dreyfus argued that, on the contrary, human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a "feel" for the situation rather than explicit symbolic knowledge. (See Dreyfus' critique of AI.)
    Gödel's incompleteness theorem: A formal system (such as a computer program) cannot prove all true statements. Roger Penrose is among those who claim that Gödel's theorem limits what machines can do.
  • 17. Robot: In this movie, Rajinikanth makes a robot which has all the abilities of a human being, with the help of artificial intelligence.
  • 18. Ra.One: Shah Rukh Khan makes a game in which, with the help of artificial intelligence, he can perform various stunts.
  • 19. Terminator: Terminator is a movie which shows how, with the help of artificial intelligence, robots can rule the world.
  • 20. Wall-E: In this movie, robots are sent to other planets to find signs of life using their intelligence.
  • 21. AISHA: AISHA is artificial intelligence software which is installed on Micromax mobiles.