AI is used in many ways today, from terrorist detection to purchasing-trend analysis to spam filtering. Banks use artificial intelligence systems to organize operations, invest in stocks, and manage properties. In August 2001, robots beat humans in a simulated financial trading competition (BBC News, 2001). A medical clinic can use artificial intelligence systems to organize bed schedules, plan staff rotations, and provide medical information. Many practical applications depend on artificial neural networks, networks that pattern their organization in mimicry of a brain's neurons and have been found to excel at pattern recognition. Financial institutions have long used such systems to detect charges or claims outside the norm, flagging these for human investigation. Neural networks are also being widely deployed in homeland security, speech and text recognition, medical diagnosis (such as in Concept Processing technology in EMR software), data mining, and e-mail spam filtering. Robots have also become common in many industries. They are often given jobs that are considered dangerous to humans. Robots have also proven effective in jobs that are very repetitive, where a lapse in concentration may lead to mistakes or accidents, and in other jobs which humans may find degrading. General Motors uses around 16,000 robots for tasks such as painting, welding, and assembly. Japan is the world leader in the use of robots. In 1995, 700,000 robots were in use worldwide, over 500,000 of which were in Japan (Encarta, 2006).
Goal - help humans achieve their goals
Conventional AI and Computational Intelligence (CI). Conventional AI mostly involves methods now classified as machine learning, characterized by formalism and statistical analysis. This is also known as symbolic AI, logical AI, neat AI, and Good Old Fashioned Artificial Intelligence (GOFAI). Computational Intelligence involves iterative development or learning (e.g., parameter tuning in connectionist systems). Learning is based on empirical data and is associated with non-symbolic AI, scruffy AI, and soft computing. Hybrid intelligent systems attempt to combine these two groups. Expert inference rules can be generated through neural networks, or production rules can be derived from statistical learning, as in ACT-R. It is thought that the human brain uses multiple techniques to both formulate and cross-check results. Thus, integration is seen as promising and perhaps necessary for true AI.
Early in the 17th century, René Descartes envisioned the bodies of animals as complex but reducible machines, thus formulating the mechanistic theory, also known as the "clockwork paradigm". Bertrand Russell and Alfred North Whitehead published Principia Mathematica in 1910-1913, which revolutionized formal logic. The Turing Test is a proposal for a test of a machine's capability to perform human-like conversation. Described by Alan Turing in the 1950 paper "Computing Machinery and Intelligence," it proceeds as follows: a human judge engages in a natural language conversation with two other parties, one a human and the other a machine; if the judge cannot reliably tell which is which, then the machine is said to pass the test. It is assumed that both the human and the machine try to appear human. In order to keep the test setting simple and universal (to explicitly test the linguistic capability of the machine instead of its ability to render words into audio), the conversation is usually limited to a text-only channel such as a teletype machine, as Turing suggested, or, more recently, IRC or instant messaging. 1956 - Dartmouth College - John McCarthy coins the term AI to describe computers with the ability to mimic or duplicate the functions of the human brain - LISP language. ELIZA is a famous 1966 computer program by Joseph Weizenbaum, which parodied a Rogerian therapist, largely by rephrasing many of the patient's statements as questions and posing them to the patient. Thus, for example, the response to "My head hurts" might be "Why do you say your head hurts?" The response to "My mother hates me" might be "Who else in your family hates you?" In the 1980s, neural networks became widely used due to the backpropagation algorithm, first described by Paul Werbos in 1974. The 1990s marked major achievements in many areas of AI and demonstrations of various applications.
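ELIZA's rephrasing trick can be sketched in a few lines (an illustrative toy, not Weizenbaum's original program; the pattern table and pronoun map here are invented for the example):

```python
import re

# Toy ELIZA-style responder: match a keyword pattern, swap pronouns,
# and turn the patient's statement into a question.
PRONOUNS = {"me": "you", "my": "your", "i": "you", "am": "are"}

def swap_pronouns(text):
    return " ".join(PRONOUNS.get(w.lower(), w) for w in text.split())

RULES = [
    (re.compile(r"my (.+) hurts", re.I), "Why do you say your {0} hurts?"),
    (re.compile(r"my mother (.+)", re.I), "Who else in your family {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
]

def respond(statement):
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(swap_pronouns(match.group(1)))
    return "Please tell me more."  # default when no rule matches

print(respond("My head hurts"))       # Why do you say your head hurts?
print(respond("My mother hates me"))  # Who else in your family hates you?
```

Note that no understanding is involved: the program only rearranges the surface text, which is exactly why ELIZA is called a parody of a therapist.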
In 1995, one of Dickmanns' robot cars drove more than 1000 miles in traffic at up to 110 mph. Deep Blue, a chess-playing computer, beat Garry Kasparov in a famous six-game match in 1997. DARPA stated that the costs saved by implementing AI methods for scheduling units in the first Persian Gulf War have repaid the US government's entire investment in AI research since the 1950s. Honda built the first prototypes of humanoid robots. During the 1990s and 2000s, AI became heavily influenced by probability theory and statistics. Bayesian networks are the focus of this movement, providing links to more rigorous topics in statistics and engineering such as Markov models and Kalman filters, and bridging the divide between 'neat' and 'scruffy' approaches. The last few years have also seen growing interest in game theory applied to AI decision making. This new school of AI is sometimes called 'machine learning'. After the September 11, 2001 attacks there has been much renewed interest and funding for threat-detection AI systems, including machine vision research and data mining.
Learns to play a game - don't make bad moves
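"Don't make bad moves" can be made concrete with exhaustive game-tree search on a toy game (a sketch using a simple Nim variant, not any particular program from these notes):

```python
from functools import lru_cache

# Nim variant: players alternately remove 1-3 sticks; whoever takes
# the last stick wins. wins(n) asks whether the player to move from
# n sticks can force a win, by searching the full game tree.
@lru_cache(maxsize=None)
def wins(n):
    return any(not wins(n - k) for k in (1, 2, 3) if k <= n)

def best_move(sticks):
    # A "good" move is one that leaves the opponent in a losing position.
    for k in (1, 2, 3):
        if k <= sticks and not wins(sticks - k):
            return k
    return 1  # every move loses; take one stick anyway

print(best_move(21))  # 1 (leaves 20, a multiple of 4, which is losing)
```

The losing positions turn out to be exactly the multiples of 4, so the search rediscovers the game's known strategy without being told it.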
1. Controlling processes (e.g., an industrial robot or an autonomous vehicle).
2. Detecting events (e.g., for visual surveillance).
3. Organizing information (e.g., for indexing databases of images and image sequences).
4. Modeling objects or environments (e.g., industrial inspection, medical image analysis, or topographical modeling).
5. Interaction (e.g., as the input to a device for computer-human interaction).
The Justice Department uses such systems for fingerprint matching.
Jobs humans don't want - dangerous or boring. A robot is an electro-mechanical device that can perform autonomous or preprogrammed tasks. A robot may act under the direct control of a human (e.g., the robotic arm of the space shuttle) or autonomously under the control of a programmed computer.
An expert system, also known as a knowledge-based system, is a computer program that contains some of the subject-specific knowledge of one or more human experts. This class of program was first developed by researchers in artificial intelligence during the 1960s and 1970s and applied commercially throughout the 1980s. The most common form of expert system is a program made up of a set of rules that analyze information (usually supplied by the user of the system) about a specific class of problems, provide mathematical analysis of the problem(s), and, depending upon their design, recommend a course of user action in order to implement corrections. It is a system that utilizes what appear to be reasoning capabilities to reach conclusions.
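The rule-based form described above can be sketched as a few lines of forward chaining (the rules and fact names here are invented for illustration, not taken from any real expert system):

```python
# Each rule is (set of conditions, conclusion): if every condition is
# a known fact, the conclusion becomes a new fact. Keep firing rules
# until nothing new can be concluded.
RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "see_doctor"),
]

def infer(facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # rule fires: record its conclusion
                changed = True
    return facts

print(infer({"fever", "cough", "short_of_breath"}))
```

Note how the second rule can only fire after the first one has added `flu_suspected`: the "reasoning" is simply repeated rule application over the growing fact base.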
Natural language processing (NLP) is a subfield of artificial intelligence and linguistics. It studies the problems of automated generation and understanding of natural human languages. Natural language generation systems convert information from computer databases into normal-sounding human language, and natural language understanding systems convert samples of human language into more formal representations that are easier for computer programs to manipulate. Natural Language Generation (NLG) is the natural language processing task of generating natural language from a machine representation system such as a knowledge base or a logical form.
Information retrieval (IR) is the science of searching for information in documents, searching for documents themselves, searching for metadata which describe documents, or searching within databases, whether relational stand-alone databases or hypertext networked databases such as the Internet or intranets, for text, sound, images, or data. There is a common confusion, however, between data retrieval, document retrieval, information retrieval, and text retrieval, and each of these has its own bodies of literature, theory, praxis, and technologies.
A chatterbot is a computer program designed to simulate an intelligent conversation with one or more human users via auditory or textual methods. Though many appear to be intelligently interpreting the human input prior to providing a response, most chatterbots simply scan for keywords within the input and pull a reply with the most matching keywords or the most similar wording pattern from a local database. Chatterbots may also be referred to as talk bots, chat bots, or chatterboxes.
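The keyword-scanning behavior most chatterbots rely on can be sketched directly (the reply table is invented for the example; a real bot would have a much larger database):

```python
# Each canned reply is tagged with a set of trigger keywords. Score a
# reply by how many of its keywords appear in the input and return the
# best match; fall back to a default when nothing matches.
REPLIES = [
    ({"hello", "hi"}, "Hello! How can I help you?"),
    ({"price", "cost"}, "Our prices are listed on the website."),
    ({"open", "hours"}, "We are open 9am to 5pm."),
]

def reply(text):
    words = set(text.lower().split())
    score, best = max((len(kw & words), r) for kw, r in REPLIES)
    return best if score > 0 else "Sorry, I don't understand."

print(reply("what are your hours"))  # We are open 9am to 5pm.
```

As the text above notes, this is matching rather than interpretation: the program never builds any representation of what the user meant.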
Fuzzy logic is derived from fuzzy set theory, dealing with reasoning that is approximate rather than precisely deduced from classical predicate logic. It can be thought of as the application side of fuzzy set theory, dealing with carefully considered real-world expert values for complex problems (Klir 1997).
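The contrast with classical logic can be shown with a membership function (a minimal sketch; the predicate "tall" and the 160-190 cm thresholds are illustrative choices, not from Klir):

```python
def tall_membership(height_cm):
    # Degree to which a height counts as "tall", ramping linearly
    # from 0 at 160 cm to 1 at 190 cm.
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30

# Classical predicate logic forces "tall" to be true or false;
# fuzzy logic assigns a degree of truth in between.
print(tall_membership(175))  # 0.5
```

Fuzzy connectives then operate on these degrees, commonly taking min for AND and max for OR rather than the all-or-nothing truth tables of classical logic.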
A genetic algorithm is a search technique used in computing to find true or approximate solutions to optimization and search problems. The term "genetic algorithm" is often abbreviated as GA. Genetic algorithms find application in computer science, engineering, economics, physics, mathematics, and other fields. Genetic algorithms are categorized as global search heuristics. They are a particular class of evolutionary algorithms that use techniques inspired by evolutionary biology such as inheritance, mutation, selection, and crossover (also called recombination). Genetic algorithms are implemented as a computer simulation in which a population of abstract representations (called chromosomes or the genotype) of candidate solutions (called individuals, creatures, or phenotypes) to an optimization problem evolves toward better solutions. Traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible. The evolution usually starts from a population of randomly generated individuals and happens in generations. In each generation, the fitness of every individual in the population is evaluated, multiple individuals are stochastically selected from the current population (based on their fitness) and modified (mutated or recombined) to form a new population. The new population is then used in the next iteration of the algorithm.
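The generation loop described above (evaluate fitness, select, recombine, mutate) can be sketched on the classic OneMax toy problem, where fitness is simply the number of 1s in a bitstring (the parameter values are arbitrary choices for the example):

```python
import random

def onemax_ga(length=20, pop_size=30, generations=60, seed=0):
    # Evolve a population of bitstrings toward the all-ones string.
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # Selection: 2-way tournament, fitter individual wins.
            a, b = rng.sample(pop, 2)
            return a if sum(a) >= sum(b) else b
        new_pop = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, length)       # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(length):              # mutation: flip bits rarely
                if rng.random() < 1 / length:
                    child[i] ^= 1
            new_pop.append(child)
        pop = new_pop                            # next generation
    return max(pop, key=sum)

best = onemax_ga()
print(sum(best))  # fitness of the best individual found
```

OneMax is deliberately trivial so that every piece of the algorithm is visible; for real problems only the fitness function and the encoding change.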
In computer science, an intelligent agent (IA) is a software agent that exhibits some form of artificial intelligence, assisting the user and acting on their behalf in performing repetitive computer-related tasks. While the working of software agents used for operator assistance or data mining (sometimes referred to as bots) is often based on fixed pre-programmed rules, "intelligent" here implies the ability to adapt and learn. In some literature IAs are also referred to as autonomous intelligent agents, which means they act independently and will learn and adapt to changing circumstances. According to Nikola Kasabov, IA systems should exhibit the following characteristics:
- learn and improve through interaction with the environment (embodiment)
- adapt online and in real time
- learn quickly from large amounts of data
- accommodate new problem-solving rules incrementally
- have memory-based exemplar storage and retrieval capacities
- have parameters to represent short- and long-term memory, age, forgetting, etc.
- be able to analyze themselves in terms of behavior, error, and success
1. Buyer Agents. Buyer agents travel around a network (i.e., the internet) retrieving information about goods and services. These agents, also known as 'shopping bots', work very efficiently for commodity products such as CDs, books, electronic components, and other one-size-fits-all products. Amazon.com is a good example of a shopping bot: the website will offer you a list of books that you might like to buy on the basis of what you're buying now and what you have bought in the past.
2. User or Personal Agents. User agents, or personal agents, are intelligent agents that take action on your behalf.
In this category belong those intelligent agents that already perform, or will shortly perform, the following tasks:
- check your e-mail, sort it according to priority (your priority), and alert you when good stuff comes through, like college acceptance letters
- play computer games as your opponent or patrol game areas for you
- assemble customized news reports for you (there are several versions of these, CNN being a prime example)
- find information for you on the subject of your choice
- fill out forms on the Web automatically for you, storing your information for future reference
- scan Web pages looking for and highlighting text that constitutes the "important" part of the information there
- "discuss" topics with you, ranging from your deepest fears to sports
3. Monitoring-and-Surveillance Agents. These agents, also known as "predictive agents", are intelligent agents that observe and report on equipment. For example, NASA's Jet Propulsion Laboratory has an agent that monitors inventory, planning, and scheduling equipment ordering to keep costs down, as well as food storage facilities. These agents usually monitor complex computer networks and can keep track of the configuration of each computer connected to the network.
4. Data Mining Agents. A data mining agent operates in a data warehouse discovering information. A 'data warehouse' brings together information from many different sources. 'Data mining' is the process of looking through the data warehouse to find information that you can use to take action, such as ways to increase sales or keep customers who are considering defecting. 'Classification' is one of the most common types of data mining, which finds patterns in information and categorizes them into different classes. Data mining agents can also detect major shifts in trends or in a key indicator, and can detect the presence of new information and alert you to it.
Application areas include system identification and control (vehicle control, process control), game playing and decision making (backgammon, chess, racing), pattern recognition (radar systems, face identification, object recognition, and more), sequence recognition (gesture, speech, handwritten text recognition), medical diagnosis, financial applications, data mining (or knowledge discovery in databases, "KDD"), visualisation, and e-mail spam filtering.
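At its smallest scale, the pattern-recognition ability behind these applications can be illustrated with a single perceptron, the simplest neural network unit, learning the OR function (a sketch; real applications use far larger networks trained with backpropagation):

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    # Classic perceptron learning rule: nudge each weight in the
    # direction that reduces the error on each training example.
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Training data for logical OR: output is 1 unless both inputs are 0.
OR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(OR)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in OR])  # [0, 1, 1, 1]
```

A single perceptron can only learn linearly separable patterns (OR, but famously not XOR), which is why the practical systems listed above stack many such units into multi-layer networks.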
In science fiction AI (almost always strong AI) is commonly portrayed as an upcoming power trying to overthrow human authority, as in HAL 9000, Skynet, Colossus, and The Matrix, or as service humanoids like C-3PO, Marvin, Data, KITT and KARR, the Bicentennial Man, the Mechas in A.I., Cortana from the Halo series, or Sonny in I, Robot. A notable exception is Mycroft in Robert A. Heinlein's The Moon Is a Harsh Mistress: a supercomputer that becomes aware and aids in a local revolution. The inevitability of world domination by out-of-control AI is also argued by some writers, such as Kevin Warwick. In works such as the Japanese manga Ghost in the Shell, the existence of intelligent machines questions the definition of life as organisms rather than a broader category of autonomous entities, establishing a notional concept of systemic intelligence. See list of fictional computers and list of fictional robots and androids. Some science fiction writers, such as Vernor Vinge, have also speculated that the advent of strong AI is likely to cause abrupt and dramatic societal change. The period of abrupt change is sometimes referred to as "the Singularity". Author Frank Herbert explored the idea of a time when mankind might ban smart machines entirely. His Dune series mentions a rebellion called the Butlerian Jihad, in which mankind defeats the smart machines of the future and then imposes a death penalty against any who would again create thinking machines. The Orange Catholic Bible is often quoted: "Thou shalt not make a machine in the likeness of a human mind."
Artificial Intelligence, Vision, and Robotics: making computers think like humans
What is AI? A branch of computer science that deals with intelligent behavior, learning, and adaptation in machines.