Artificial intelligence research
Artificial Intelligence (AI)

Artificial Intelligence (AI) is the area of computer science focused on creating machines that can engage in behaviors that humans consider intelligent. The ability to create intelligent machines has intrigued humans since ancient times, and today, with the advent of the computer and 50 years of research into AI programming techniques, the dream of smart machines is becoming a reality. Researchers are creating systems that can mimic human thought, understand speech, beat the best human chess player, and accomplish countless other feats never before possible.

History of Artificial Intelligence

Evidence of artificial intelligence folklore can be traced back to ancient Egypt, but with the development of the electronic computer in 1941, the technology finally became available to create machine intelligence. The term "artificial intelligence" was first coined in 1956 at the Dartmouth conference, and since then the field has expanded because of the theories and principles developed by its dedicated researchers. Although advancement in AI has been slower than first estimated throughout its short modern history, progress continues to be made.
From its birth some four decades ago, there has been a variety of AI programs, and they have influenced other technological advancements.

Thinking machines and artificial beings appear in Greek myths, such as Talos of Crete, the golden robots of Hephaestus and Pygmalion's Galatea.[13] Human likenesses believed to have intelligence were built in every major civilization: animated statues were worshipped in Egypt and Greece[14] and humanoid automatons were built by Yan Shi,[15] Hero of Alexandria,[16] Al-Jazari[17] and Wolfgang von Kempelen.[18] It was also widely believed that artificial beings had been created by Jābir ibn Hayyān,[19] Judah Loew[20] and Paracelsus.[21] By the 19th and 20th centuries, artificial beings had become a common feature in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots).[22] Pamela McCorduck argues that all of these are examples of an ancient urge, as she describes it, "to forge the gods".[6] Stories of these creatures and their fates discuss many of the same hopes, fears and ethical concerns that are presented by artificial intelligence.

Mechanical or "formal" reasoning has been developed by philosophers and mathematicians since antiquity. The study of logic led directly to the invention of the programmable digital electronic computer, based on the work of mathematician Alan Turing and others.
Turing's theory of computation suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical deduction.[23] This, along with recent discoveries in neurology, information theory and cybernetics, inspired a small group of researchers to begin to seriously consider the possibility of building an electronic brain.[24]

The field of AI research was founded at a conference on the campus of Dartmouth College in the summer of 1956.[25] The attendees, including John McCarthy, Marvin Minsky, Allen Newell and Herbert Simon, became the leaders of AI research for many decades.[26] They and their students wrote programs that were, to most people, simply astonishing:[27] computers were solving word problems in algebra, proving logical theorems and speaking English.[28] By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense[29] and laboratories had been established around the world.[30] AI's founders were profoundly optimistic about the future of the new field: Herbert Simon predicted that "machines will be capable, within twenty years, of doing any work a man can do"[31] and Marvin Minsky agreed, writing that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved".[32]

They had failed to recognize the difficulty of some of the problems they faced.[33] In 1974, in response to the criticism of England's Sir James Lighthill and ongoing pressure from Congress to fund more productive projects, the U.S. and British governments cut off all undirected, exploratory research in AI.
The next few years, when funding for projects was hard to find, would later be called an "AI winter".[34]

In the early 1980s, AI research was revived by the commercial success of expert systems,[35] a form of AI program that simulated the knowledge and analytical skills of one or more human experts. By 1985 the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research in the field.[36] However, beginning with the collapse of the Lisp machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting AI winter began.[37]

In the 1990s and early 21st century, AI achieved its greatest successes, albeit somewhat behind the scenes. Artificial intelligence is used for logistics, data mining, medical diagnosis and many other areas throughout the technology industry.[9] The success was due to several factors: the incredible power of computers today (see Moore's law), a greater emphasis on solving specific subproblems, the creation of new ties between AI and other fields working on similar problems, and above all a new commitment by researchers to solid mathematical methods and rigorous scientific standards.[38]

Methods Used to Create Intelligence

In the quest to create intelligent machines, the field of artificial intelligence has split into several approaches based on differing opinions about the most promising methods and theories. These rival theories have led researchers down one of two basic paths: bottom-up and top-down. Bottom-up theorists believe the best way to achieve artificial intelligence is to build electronic replicas of the human brain's complex network of neurons, while the top-down approach attempts to mimic the brain's behavior with computer programs.
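The bottom-up idea, that simple neuron-like units become capable of logic when connected together, can be sketched with threshold units. The weights and thresholds below are illustrative choices for this sketch, not taken from any particular historical design:

```python
def threshold_neuron(inputs, weights, threshold):
    """Fire (output 1) when the weighted sum of inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Single units realize basic logic gates:
def AND(a, b):
    return threshold_neuron([a, b], [1, 1], threshold=2)

def OR(a, b):
    return threshold_neuron([a, b], [1, 1], threshold=1)

def NOT(a):
    return threshold_neuron([a], [-1], threshold=0)

# Wiring units into a small network computes functions
# that no single unit can, e.g. XOR:
def XOR(a, b):
    return AND(OR(a, b), NOT(AND(a, b)))
```

A single unit is not intelligent, but small networks of such units can compute any Boolean function, which gives a concrete sense in which grouped, signal-passing units generate logic.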
Neural Networks and Parallel Computation

The human brain is made up of a web of billions of cells called neurons, and understanding its complexities is seen as one of the last frontiers in scientific research. It is the aim of AI researchers who prefer the bottom-up approach to construct electronic circuits that act as neurons do in the human brain. Although much of the working of the brain remains unknown, the complex network of neurons is what gives humans intelligent characteristics. By itself, a neuron is not intelligent, but when grouped together, neurons are able to pass electrical signals through networks.

Building on this theory, McCulloch and Pitts designed electronic replicas of neural networks to show how electronic networks could generate logical processes. They also stated that neural networks may, in the future, be able to learn and recognize patterns. The results of their research, together with two of Wiener's books, served to increase enthusiasm, and laboratories of computer-simulated neurons were set up across the country.

Top-Down Approaches: Expert Systems

Because of the large storage capacity of computers, expert systems had the potential to interpret statistics in order to formulate rules. An expert system works much like a detective solving a mystery: using the available information together with logic or rules, it can solve the problem.

Two Types of Artificial Intelligence

Strong artificial intelligence

Strong artificial intelligence research deals with the creation of some form of computer-based artificial intelligence that can truly reason and solve problems; a strong form of AI is said to be sentient, or self-aware. In theory, there are two types of strong AI:

Human-like AI, in which the computer program thinks and reasons much like a human mind.

Non-human-like AI, in which the computer program develops a totally non-human sentience, and a non-human way of thinking and reasoning.
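The "detective" style of reasoning described above under Top-Down Approaches, repeatedly applying if-then rules to known facts until a conclusion emerges, can be sketched as a tiny forward-chaining engine. The rules and facts here are invented for illustration only:

```python
# Each rule pairs a set of premises with a conclusion:
# if all premises are already known facts, the conclusion is added.
RULES = [
    ({"has_fever", "has_cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "see_doctor"),
]

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"has_fever", "has_cough", "short_of_breath"}, RULES)
```

Starting from the three observed facts, the engine first derives `flu_suspected` and then, from that new fact, `see_doctor`, mirroring how an expert system chains rules over stored knowledge.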
Weak artificial intelligence

Weak artificial intelligence research deals with the creation of some form of computer-based artificial intelligence that can reason and solve problems only in a limited domain; such a machine would, in some ways, act as if it were intelligent, but it would not possess true intelligence or sentience. The classical test for such abilities is the Turing test.

There are several fields of weak AI, one of which is natural language. Many weak AI fields have specialised software or programming languages created for them. For example, the 'most-human' natural language chatterbot A.L.I.C.E. uses AIML, a programming language specific to it and its various clones, the Alicebots.

To date, much of the work in this field has been done with computer simulations of intelligence based on predefined sets of rules. Very little progress has been made in strong AI. Depending on how one defines one's goals, a moderate amount of progress has been made in weak AI.

When viewed with a moderate dose of cynicism, weak artificial intelligence can be seen as 'the set of computer science problems without good solutions at this point'. Once a sub-discipline results in useful work, it is carved out of artificial intelligence and given its own name. Examples of this are pattern recognition, image processing, neural networks, natural language processing, robotics and game theory. While the roots of each of these disciplines are firmly established as having been part of artificial intelligence, they are now thought of as somewhat separate.

Applications of Artificial Intelligence

We have been studying this issue of AI application for quite some time now and know all the terms and facts. But what we all really need to know is what we can do to get our hands on some AI today. How can we as individuals use our own technology?
We hope to discuss this in depth (but as briefly as possible) so that you, the consumer, can use AI as it is intended.

First, we should be prepared for change. Our conservative ways stand in the way of progress. AI is a new step that is very helpful to society. Machines can do jobs that require following detailed instructions and maintaining mental alertness. AI, with its learning capabilities, can accomplish those tasks, but only if the world's conservatives are ready to change and allow this to be a possibility. It makes us think about how early man finally accepted the wheel as a good invention, not something taking away from his heritage or tradition.

Secondly, we must be prepared to learn about the capabilities of AI. The more use we get out of the machines, the less work is required of us, and in turn there are fewer injuries and less stress for human beings. Human beings are a species that learns by trying, and we must be prepared to give AI a chance, seeing it as a blessing, not an inhibition.

Finally, we need to be prepared for the worst of AI. Something as revolutionary as AI is sure to have many kinks to work out. There is always the fear that if AI is learning-based, machines might learn that being rich and successful is a good thing, then wage war against economic powers and famous people. There are so many things that can go wrong with a new system, so we must be as prepared as we can be for this new technology.

However, even though the fear of the machines is there, their capabilities are vast. Whatever we teach AI systems, they will suggest in the future if a positive outcome arises from it. AI is like a child that needs to be taught to be kind, well-mannered, and intelligent. If such systems are to make important decisions, they should be wise. We as citizens need to make sure AI programmers are keeping things on the level. We should be sure they are doing the job correctly, so that no future accidents occur.
Emerging Trends and New Developments of Artificial Intelligence

Artificial Intelligence (AI) is the key technology in many of today's novel applications, ranging from banking systems that detect attempted credit card fraud, to telephone systems that understand speech, to software systems that notice when you're having problems and offer appropriate advice. These technologies would not exist today without the sustained federal support of fundamental AI research over the past three decades.

Although there are some fairly pure applications of AI, such as industrial robots or the Intellipath™ pathology diagnosis system recently approved by the American Medical Association and deployed in hundreds of hospitals worldwide, for the most part AI does not produce stand-alone systems. Instead, it adds knowledge and reasoning to existing applications, databases, and environments, to make them friendlier, smarter, and more sensitive to user behavior and changes in their environments. The AI portion of an application (e.g., a logical inference or learning module) is generally a large system, dependent on a substantial infrastructure. Industrial R&D, with its relatively short time horizons, could not have justified work of the type and scale that has been required to build the foundation for the civilian and military successes that AI enjoys today.
And beyond the myriad of currently deployed applications, ongoing efforts that draw upon these decades of federally sponsored fundamental research point towards even more impressive future capabilities:

Autonomous vehicles: A DARPA-funded onboard computer system from Carnegie Mellon University drove a van all but 52 of the 2,849 miles from Washington, DC to San Diego, averaging 63 miles per hour day and night, rain or shine.

Computer chess: Deep Blue, a chess computer built by IBM researchers, defeated world champion Garry Kasparov in a landmark performance.

Mathematical theorem proving: A computer system at Argonne National Laboratories proved a long-standing mathematical conjecture about algebra using a method that would be considered creative if done by humans.

Scientific classification: A NASA system learned to classify very faint signals as either stars or galaxies with superhuman accuracy, by studying examples classified by experts.

Advanced user interfaces: PEGASUS is a spoken language interface connected to the American Airlines EAASY SABRE reservation system, which allows subscribers to obtain flight information and make flight reservations via a large, on-line, dynamic database, accessed through their personal computer over the telephone.

Intellectually, AI depends on a broad intercourse with computing disciplines and with fields outside computer science, including logic, psychology, linguistics, philosophy, neuroscience, mechanical engineering, statistics, economics, and control theory, among others. This breadth has been necessitated by the grandness of the dual challenges facing AI: creating mechanical intelligence and understanding the information basis of its human counterpart. AI problems are extremely difficult, far more difficult than was imagined when the field was founded.
However, as much as AI has borrowed from many fields, it has returned the favor: through its interdisciplinary relationships, AI functions as a channel of ideas between computing and other fields, ideas that have profoundly changed those fields. For example, basic notions of computation such as memory and computational complexity play a critical role in cognitive psychology, and AI theories of knowledge representation and search have reshaped portions of philosophy, linguistics, mechanical engineering and control theory.

Commentary

Artificial intelligence has a great impact in helping people make wise decisions throughout their lives, but it has many disadvantages. Even though it has no need for biological urges such as sleeping, using the restroom, or eating, it has limited sensory input. Compared to a biological mind, an artificial mind is only capable of taking in a small amount of information, because of the need for individual input devices. The most important input that we humans take in is the condition of our own bodies. Because we feel what is going on with our bodies, we can maintain them much more efficiently than an artificial mind could; at this point, it is unclear whether that would even be possible for a computer system. Problem-solving for AI does not require emotional considerations, yet when people make decisions, those decisions are sometimes based on emotion rather than logic.

Life without emotion is death. A human who is not capable of considering emotion while making decisions may become coldly rational, making choices without weighing the consequences for family, friends, or loved ones. AI has limited sensory input; it may remember some memorable experiences, but it cannot feel their significance as it lives through them. Nor is AI capable of healing: where a biological system can heal with time and treatment, AI must be shut down for maintenance.
If robots start replacing human resources in every field, we will have to deal with serious issues like unemployment, in turn leading to mental depression, poverty and crime in society. Human beings deprived of their work life may not find any means to channel their energies and harness their expertise; they will be left with empty time. Secondly, replacing human beings with robots in every field may not be the right decision to make. There are many jobs that require the human touch. Intelligent machines will surely not be able to substitute for the caring behavior of hospital nurses or the reassuring voice of a doctor, and they may not be the right choice for customer service.

Apart from these concerns, there is the chance that intelligent machines overpower human beings. Machines may enslave human beings and start ruling the world. Imagine artificial intelligence taking over human intellect! The picture is definitely not rosy. Some thinkers consider it ethically wrong to create artificially intelligent machines. According to them, intelligence is God's gift to mankind; it is not correct even to try to recreate intelligence, and it is against ethics to create replicas of human beings.

Polytechnic University of the Philippines
Anonas St., Sta. Mesa, Manila

Introduction to ICT With Laboratory
Research Assignment

Submitted By:
Perez, Raphael Ray L.
BSCP / 3rd yr. – 3s

Submitted To:
Prof. Flordeliz Garcia
Professor of Intro to ICT with Lab
