  1. Intelligent Systems Colloquium 3: The Future of AI. Dangers and problems in the development of AI.
  2. Agenda <ul><li>Motivation for creating an artificial mind </li></ul><ul><li>The objective necessity of developing an artificial human-like mind </li></ul><ul><li>How to develop an artificial human-like mind </li></ul><ul><li>How to evaluate and test an artificial mind </li></ul><ul><li>How to preserve the controllability of an artificial mind </li></ul><ul><li>What the consequences of creating an artificial mind could be </li></ul><ul><li>The main centers of AI development in the world </li></ul>
  3. Motivation for creating a mind <ul><li>The wish to develop a helper for hard work </li></ul><ul><li>The wish to understand ourselves: how we are constructed and how we think </li></ul><ul><li>The wish to improve the human capacity to process and store information </li></ul><ul><li>Creating an artificial mind is part of a broader tendency toward creating alternative kinds of life: genetic engineering, artificial life (a branch of AI), virtual reality in computer games </li></ul><ul><li>The underlying reason: the human mind's desire to obtain new information from its environment </li></ul>
  4. The objective necessity of developing an artificial human-like mind <ul><li>The need for comfortable and safe (human-like) interaction with machines </li></ul><ul><li>The need to develop self-learning, self-repairing, and self-reproducing intelligent systems </li></ul>
  5. Kismet, an MIT project. Kismet is a robot designed to interact socially with humans. It has an active vision system and can display a variety of facial expressions.
  6. Example of a self-assembling and transforming robot
  7. How to develop an artificial human-like mind <ul><li>What is the sense (the semantics) of our concepts (signs, words)? </li></ul><ul><li>What are emotions, and what is their role in the mind? </li></ul><ul><li>What is consciousness? </li></ul><ul><li>How is our memory encoded? </li></ul><ul><li>Is there a connection between our memory and genetic memory? </li></ul><ul><li>Does free will exist? </li></ul>
  8. Emotions <ul><li>"The main question is whether non-intellective, that is affective and conative abilities, are admissible as factors of general intelligence. (My contention) has been that such factors are not only admissible but necessary. I have tried to show that in addition to intellective there are also definite non-intellective factors that determine intelligent behavior. If the foregoing observations are correct, it follows that we cannot expect to measure total intelligence until our tests also include some measures of the non-intellective factors" [Wechsler, 1943]. </li></ul>
  9. Emotions <ul><li>Recent brain research shows that emotions influence the quality of memorization </li></ul><ul><li>Emotions may be a tool for controlling the speed of decision making by changing the level (or degree of parallelism) of thinking </li></ul><ul><li>Emotions are connected with goal achievement: positive emotion signals successful progress toward a goal, and negative emotion signals failure </li></ul><ul><li>Emotions are closely connected with the body and are an older feature of the brain than the neocortex (they already appear in reptiles) </li></ul>
  10. Artificial AIBO dogs playing soccer
  11. Making a decision. [Diagram: Sensors → Classification (recognition) of the situation (task) → Associative link (inference) → Forming a reaction to the situation (solution) → Decision → Effectors]
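The decision pipeline on this slide (sensors, classification of the situation, inference of a reaction, effectors) can be sketched as a minimal sense–think–act loop. All situation labels, rules, and thresholds below are illustrative assumptions, not part of the original slides:

```python
# Minimal sense-think-act loop illustrating the slide's decision pipeline.
# All situation labels and rules here are hypothetical examples.

def classify(sensor_reading):
    """Classification (recognition) of the situation from raw sensor data."""
    if sensor_reading < 0.3:
        return "obstacle_near"
    return "path_clear"

def infer_reaction(situation):
    """Associative link: map a recognized situation to a reaction (decision)."""
    rules = {"obstacle_near": "turn_left", "path_clear": "move_forward"}
    return rules[situation]

def act(decision):
    """Effectors: execute the chosen reaction."""
    return f"executing {decision}"

# One pass through the pipeline: sensors -> classification -> inference -> effectors
reading = 0.2
decision = infer_reaction(classify(reading))
print(act(decision))  # executing turn_left
```

Real systems replace the lookup rules with learned classifiers and planners, but the sensor-to-effector flow is the same.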
  12. Architecture of the EGO of Sony's robots
  13. The human brain
  14. Objective difficulties in investigating how the brain works <ul><li>The brain is the most complex system known to humans (~10^11 neurons, ~10^4 synapses per neuron, an inhomogeneous structure) </li></ul><ul><li>This system must investigate itself by itself </li></ul><ul><li>Because of the brain's emergence and distributed nature, it cannot be investigated in parts (only to a limited extent) </li></ul><ul><li>It is impossible to investigate the human brain in parts without causing damage </li></ul>
  15. Approaches to investigating the human mind <ul><li>Philosophy (in particular, in religions) – investigates the place of the mind in the Universe and in society </li></ul><ul><li>Psychology – investigates the external manifestations of the mind (actions, emotions, communication abilities); main goals: investigation and correction of behavior </li></ul><ul><li>Neurophysiology – investigates the structure of the brain and the role of its components and processes in the mind; main goal: diagnosis and treatment of brain illnesses </li></ul><ul><li>Artificial intelligence – investigates the principles of information processing in the brain that give rise to its functionality, by inventing, implementing, and testing models; main goal: development of a human-like helper </li></ul>
  16. Approaches of AI <ul><li>Logical (computational) </li></ul><ul><ul><li>Based on symbolic information processing with various knowledge representations </li></ul></ul><ul><ul><li>Goal: modeling consistent reasoning and natural-language understanding </li></ul></ul><ul><li>Connectionist (neural networks) </li></ul><ul><ul><li>Based on signal information processing </li></ul></ul><ul><ul><li>Goal: modeling the deep processing of the brain with various neural-network models </li></ul></ul><ul><li>Hybrid </li></ul><ul><ul><li>Based on combinations of the models above </li></ul></ul><ul><ul><li>Goal: modeling a human-like mind in the fullest sense </li></ul></ul>
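To make the connectionist approach concrete, here is a minimal sketch (not from the slides): a single perceptron, the simplest neural-network unit, trained with the classic perceptron learning rule to compute logical AND. The learning rate and epoch count are arbitrary illustrative choices:

```python
# A minimal connectionist model: one perceptron learning logical AND.
# Weights start at zero; the perceptron rule nudges them after each error.
# Learning rate and epoch count are illustrative choices.

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]   # connection weights
b = 0.0          # bias
lr = 0.1         # learning rate

def predict(x):
    s = w[0] * x[0] + w[1] * x[1] + b
    return 1 if s > 0 else 0

for _ in range(20):                  # perceptron learning rule
    for x, target in data:
        err = target - predict(x)
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b += lr * err

print([predict(x) for x, _ in data])  # [0, 0, 0, 1]
```

A logical-approach counterpart would state the same function as a symbolic rule (`AND(a, b) = true iff a and b`); the connectionist version learns it from examples instead.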
  17. Example of knowledge representation in the logical approach
  18. Example of a neural network for diagnosing an underwater robot
  19. Architecture of the "hemisphere" expert system (NSTU, Novosibirsk) <ul><li>Level of knowledge storage </li></ul><ul><li>Level of data and knowledge processing </li></ul><ul><li>Level of data storage </li></ul><ul><li>Level of signal and event processing </li></ul> [Diagram components: Knowledge Base, Inference, Blackboard, Manager, Neural network]
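The Blackboard and Manager components in this architecture suggest the classic blackboard pattern: independent knowledge sources read from and write to a shared workspace under a manager's control. The sketch below shows that pattern only in outline; the specific sources, facts, and thresholds are hypothetical, not taken from the actual NSTU system:

```python
# Sketch of the blackboard pattern: knowledge sources post facts to a shared
# blackboard; a manager repeatedly fires any source that can contribute,
# until none can. All rules and fact names here are hypothetical.

blackboard = {"signal": 0.9}          # data level: a raw sensor signal

def signal_processor(bb):
    """Signal/event level: turn a raw signal into a symbolic event."""
    if "signal" in bb and "event" not in bb:
        bb["event"] = "high_reading" if bb["signal"] > 0.5 else "low_reading"
        return True                    # contributed something new
    return False

def inference_engine(bb):
    """Knowledge level: infer a diagnosis from the posted event."""
    if bb.get("event") == "high_reading" and "diagnosis" not in bb:
        bb["diagnosis"] = "overload"
        return True
    return False

def manager(bb, sources):
    """Manager: loop until no knowledge source makes progress."""
    progress = True
    while progress:
        progress = any(src(bb) for src in sources)

manager(blackboard, [signal_processor, inference_engine])
print(blackboard["diagnosis"])  # overload
```

The point of the pattern is that the signal-level and knowledge-level components never call each other directly; they cooperate only through the shared blackboard, which matches the layered levels listed on the slide.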
  20. Approaches of AI (2) <ul><li>Agent-based approach </li></ul><ul><ul><li>Based on the concept of multi-agent systems </li></ul></ul><ul><ul><li>Goal: modeling the inhomogeneous structure of the brain as a collective of interacting subsystems </li></ul></ul><ul><li>Evolutionary approach </li></ul><ul><ul><li>Based on genetic algorithms </li></ul></ul><ul><ul><li>Goal: modeling the building and learning of the brain's inhomogeneous structure during evolution </li></ul></ul><ul><li>Quantum approach </li></ul><ul><ul><li>Based on the view that wave processes underlie brain activity </li></ul></ul><ul><ul><li>Goal: modeling the wave activity of the brain during its operation </li></ul></ul>
  21. The agent-oriented approach is developed and tested in the "RoboCup" soccer championships
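A toy illustration of the agent-based idea, in the RoboCup spirit: several subsystems (agents) with different concerns vote on the next action, and the collective decision emerges from their interaction. The agents, actions, and vote weights below are invented for illustration:

```python
# Minimal multi-agent sketch: a collective of interacting subsystems (agents),
# each voting on an action from its own perspective. The decision emerges
# from the combined ballot. Agents and weights are hypothetical.

class Agent:
    def __init__(self, name, preferences):
        self.name = name
        self.preferences = preferences      # action -> vote weight

    def vote(self, ballot):
        for action, weight in self.preferences.items():
            ballot[action] = ballot.get(action, 0) + weight

agents = [
    Agent("vision",   {"track_ball": 2}),
    Agent("strategy", {"pass": 1, "track_ball": 1}),
    Agent("stamina",  {"rest": 1}),
]

ballot = {}
for agent in agents:
    agent.vote(ballot)

decision = max(ballot, key=ballot.get)      # collective choice
print(decision)  # track_ball
```

No single agent decides; the "track_ball" action wins only because two subsystems' votes combine, which is the collective-of-subsystems point the slide makes.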
  22. Example of using genetic algorithms to form the best gait for a robot
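A genetic algorithm of the kind used to evolve robot gaits can be sketched as follows. Here a "gait" is reduced to a vector of joint parameters, and the fitness function is a hypothetical stand-in for measured walking speed (with its optimum at all parameters equal to 0.5); everything is illustrative, not the slide's actual experiment:

```python
import random

# Toy genetic algorithm in the spirit of evolving a robot gait.
# A "gait" is a vector of joint parameters in [0, 1]; fitness is a
# hypothetical stand-in for walking speed, maximal at all 0.5.

random.seed(0)
GENES, POP, GENERATIONS = 6, 20, 40

def fitness(gait):
    return -sum((g - 0.5) ** 2 for g in gait)      # higher is better

def mutate(gait):
    return [min(1.0, max(0.0, g + random.gauss(0, 0.1))) for g in gait]

def crossover(a, b):
    cut = random.randrange(1, GENES)               # single-point crossover
    return a[:cut] + b[cut:]

pop = [[random.random() for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]                       # selection: keep best half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print(round(-fitness(best), 4))   # remaining squared error of the best gait
```

In a real gait-evolution setup, fitness would come from simulating or physically running the robot with each parameter vector, which is what makes the approach attractive when no analytic gait model exists.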
  23. How to evaluate and test an artificial mind <ul><li>The Turing test </li></ul><ul><li>Two approaches to evaluating a mind: </li></ul><ul><li>Deep testing </li></ul><ul><ul><li>Deals with understanding how we think, learn, and store knowledge </li></ul></ul><ul><li>Shallow testing </li></ul><ul><ul><li>Deals with similarity of behavior </li></ul></ul>
  24. Objective difficulties in testing an artificial mind <ul><li>An artificial mind, like a natural mind, is a complex emergent system </li></ul><ul><li>We do not know many features of the mind and brain, and sometimes do not even know what a normal mind is; for example, the difference between schizophrenia and genius can be very small </li></ul><ul><li>An artificial mind lacks many of the means of interacting with the environment that are tied to the human body </li></ul>
  25. How to preserve the controllability of an artificial mind. Asimov's Laws of Robotics: First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm. Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with the First Law. Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
  26. A deadlock problem was the key feature of the short story in which Asimov first introduced the laws. He constructed the type of stand-off commonly referred to as the "Buridan's ass" problem. It involved a balance between a strong third-law self-protection tendency, causing the robot to try to avoid a source of danger, and a weak second-law order to approach that danger. "The conflict between the various rules is [meant to be] ironed out by the different positronic potentials in the brain," but in this case the robot "follows a circle around [the source of danger], staying on the locus of all points of ... equilibrium." Deadlock is also possible within a single law. An example under the first law would be two humans threatened with equal danger and the robot unable to contrive a strategy to protect one without sacrificing the other. Under the second law, two humans might give contradictory orders of equivalent force. The later novels address this question with greater sophistication: What was troubling the robot was what roboticists called an equipotential of contradiction on the second level. Obedience was the Second Law and [the robot] was suffering from two roughly equal and contradictory orders. Robot-block was what the general population called it or, more frequently, roblock for short . . . [or] 'mental freeze-out.' No matter how subtle and intricate a brain might be, there is always some way of setting up a contradiction. This is a fundamental truth of mathematics.
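The "equipotential of contradiction" described on this slide can be sketched numerically: score each candidate action by the net potential the laws assign it, and deadlock ("roblock") occurs when the top two actions tie exactly. All the numeric potentials below are invented for illustration, not from Asimov:

```python
# Sketch of Asimov-style "positronic potentials": each law contributes a
# weighted potential for or against each candidate action; the robot picks
# the action with the highest total. An exact tie models "roblock".
# All numeric potentials here are invented for illustration.

def choose(actions):
    """Return the best action, or None on an equipotential deadlock."""
    scored = sorted(actions.items(), key=lambda kv: kv[1], reverse=True)
    if len(scored) > 1 and scored[0][1] == scored[1][1]:
        return None                       # roblock: equal contradictory pulls
    return scored[0][0]

# Buridan's-ass stand-off: a weak second-law order to approach the danger
# exactly balances a strong third-law tendency to retreat from it.
approach = 2.0 - 1.5   # second-law obedience minus third-law self-protection
retreat  = 1.5 - 1.0   # third-law self-protection minus disobedience penalty
print(choose({"approach": approach, "retreat": retreat}))  # None -> deadlock

# Unequal potentials resolve normally.
print(choose({"approach": 0.5, "retreat": 1.5}))  # retreat
```

The sketch also shows why such deadlocks are hard to rule out: for any fixed weighting, one can construct a pair of orders whose potentials cancel exactly, echoing the slide's closing point that some contradiction can always be set up.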
  27. <ul><li>There is a conflict between the rigidity (certainty) of Asimov's laws and the need to develop a human-like artificial mind </li></ul><ul><li>A human-like artificial mind would be exposed to the same dangers as the human mind if it adopted the unsafe moral principles found in various religions (these did not protect mankind from wars, crimes, and victims) </li></ul>
  28. What the consequences of creating an artificial mind could be <ul><li>We are only a step in the evolution of mind on Earth (N. Amosov, 1963); the film "AI" </li></ul><ul><li>A revolt of the machines (various films: The Matrix, Terminator) </li></ul><ul><li>A war between supporters and opponents of creating an artificial mind (de Garis, 2001) </li></ul><ul><li>The creation of cyborgs as a new generation of people (Warwick, c. 2000) </li></ul>
  29. Hanson Robotics: the robot "Eva" and the robot "Philip Dick"
  30. The Japanese robot Repliee Q1
  31. The robot Valerie (animatronics)
  32. A possible future – a planet of robots?
  33. Intelligent Robots
Humanoid Robotics Group, MIT: http://www.ai.mit.edu/projects/humanoid-robotics-group/
Stanford University: http://cs.stanford.edu/Research/
University of Edinburgh: http://www.informatics.ed.ac.uk
Aibo (Sony): http://www.aibo-europe.com/
ATR: http://www.sarcos.com/
USC: http://www-robotics.usc.edu/
Carnegie Mellon University: http://www.cs.cmu.edu
Androids of Hanson Robotics: http://www.human-robot.com
University of Manchester: http://www.cs.man.ac.uk/robotics/
