Why Robots may need to be self-aware, before we can really trust them - Alan Winfield

Alan Winfield's talk from AWASS 2013

1. Why Robots may need to be self-aware, before we can really trust them
   Alan FT Winfield, Bristol Robotics Laboratory
   Awareness Summer School, Lucca, 26 June 2013

2. Outline
   • The safety problem
   • The central proposition of this talk
   • Introducing Internal Models in robotics
   • A generic Internal Modelling architecture, for safety
     – worked example: a scenario with safety hazards
   • Towards an ethical robot
     – worked example: a hazardous scenario with a human and a robot
   • The major challenges
   • How self-aware would the robot be?
   • A hint of neuroscientific plausibility

3. The safety problem
   • For any engineered system to be trusted, it must be safe
     – We already have many examples of complex engineered systems that are trusted; passenger airliners, for instance
     – These systems are trusted because they are designed, built, verified and operated to very stringent design and safety standards
     – The same will need to apply to autonomous systems

4. The safety problem
   • The problem of safe autonomous systems in unstructured or unpredictable environments, i.e.
     – robots designed to share human workspaces and physically interact with humans must be safe,
     – yet guaranteeing safe behaviour is extremely difficult because the robot's human-centred working environment is, by definition, unpredictable
     – it becomes even more difficult if the robot is also capable of learning or adaptation

5. The proposition
   In unknown or unpredictable environments, safety cannot be achieved without self-awareness

6. What is an internal model?
   • It is an internal mechanism for representing both the system itself and its environment
     – example: a robot with a simulation of itself and its currently perceived environment, inside itself
   • The mechanism might be centralized, distributed, or emergent
   "..an internal model allows a system to look ahead to the future consequences of current actions, without actually committing itself to those actions"
   John Holland (1992), Complex Adaptive Systems, Daedalus.

7. Using internal models
   • Internal models can provide a minimal level of functional self-awareness
     – sufficient to allow complex systems to ask what-if questions about the consequences of their next possible actions, for safety
   • Following Dennett an internal model can generate and test what-if hypotheses:
     – what if I carry out action x..?
     – of several possible next actions xi, which should I choose?
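
A minimal sketch of this generate-and-test loop in Python (my illustration, not code from the talk; simulate and is_safe are assumed placeholders standing in for the internal model and its consequence evaluator):

    def generate_and_test(actions, world_state, simulate, is_safe):
        # For each candidate next action, ask the what-if question:
        # simulate the action's consequences, keep it only if safe.
        safe_actions = []
        for action in actions:
            predicted_state = simulate(world_state, action)  # generate
            if is_safe(predicted_state):                     # test
                safe_actions.append(action)
        return safe_actions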

8. Dennett's Tower of Generate and Test
   [Diagram] Darwinian Creatures: Natural Selection; Skinnerian Creatures: Individual (Reinforcement) Learning; Popperian Creatures: Internal Modelling
   Dennett, D. (1995). Darwin's Dangerous Idea, London, Penguin.

9. Examples 1
   • A robot using self-simulation to plan a safe route with incomplete knowledge
   Vaughan, R. T. and Zuluaga, M. (2006). Use your illusion: Sensorimotor self-simulation allows complex agents to plan with incomplete self-knowledge, in Proceedings of the International Conference on Simulation of Adaptive Behaviour (SAB), pp. 298–309.

10. Examples 2
   • A robot with an internal model that can learn how to control itself
   Bongard, J., Zykov, V. and Lipson, H. (2006). Resilient machines through continuous self-modeling. Science, 314: 1118-1121.

11. Examples 3
   • ECCE-Robot
     – A robot with a complex body uses an internal model as a 'functional imagination'
   Marques, H. and Holland, O. (2009). Architectures for functional imagination, Neurocomputing 72, 4-6, pp. 743–759.
   Diamond, A., Knight, R., Devereux, D. and Holland, O. (2012). Anthropomimetic robots: Concept, construction and modelling, International Journal of Advanced Robotic Systems 9, pp. 1–14.

12. Examples 4
   • A distributed system in which each robot has an internal model of itself and the whole system
     – Robot controllers and the internal simulator are co-evolved
   O'Dowd, P., Winfield, A. and Studley, M. (2011). The Distributed Co-Evolution of an Embodied Simulator and Controller for Swarm Robot Behaviours, in Proc IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2011), San Francisco, September 2011.

13. A Generic IM Architecture for Safety
   [Diagram] The Internal Model evaluates the consequences of each possible next action, running the loop of generate and test. The IM is initialized to match the current real situation, and moderates action-selection in the Robot Controller. Sense data flow in; actuator demands flow out.

14. A Generic IM Architecture for Safety
   [Diagram] Inside the Internal Model: a copy of the Robot Controller, a Robot Model, a World Model, a Consequence Evaluator, and an Object Tracker-Localiser, all within the loop of generate and test between sense data and actuator demands.

15. A Generic IM Architecture for Safety
   [Diagram] As slide 14, with the N-tuple of all possible actions (a1, a2, a3, a4) fed into the loop of generate and test.

16. A Generic IM Architecture for Safety
   [Diagram] As slide 15; the Consequence Evaluator reduces the N-tuple of all possible actions (a1, a2, a3, a4) to the S-tuple of safe actions (a3, a4).

17. A Generic IM Architecture for Safety
   [Diagram] The complete architecture: the N-tuple of all possible actions (a1, a2, a3, a4) enters the Internal Model, and the S-tuple of safe actions (a3, a4) is returned to moderate action-selection in the Robot Controller.
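
A structural sketch of this architecture in Python (my reading of the diagram; all class and method names are assumptions, and the collaborating robot/world models are left abstract):

    class InternalModel:
        # Wraps copies of the robot and its world, plus a consequence
        # evaluator, inside the loop of generate and test.
        def __init__(self, robot_model, world_model, is_safe):
            self.robot_model = robot_model
            self.world_model = world_model
            self.is_safe = is_safe

        def sync(self, sense_data):
            # Initialize the IM to match the current real situation
            self.world_model.update(sense_data)

        def safe_actions(self, all_actions):
            # Reduce the N-tuple of possible actions to the S-tuple of safe ones
            return tuple(a for a in all_actions
                         if self.is_safe(self.robot_model, self.world_model, a))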

18. A scenario with safety hazards
   Consider a robot that has four possible next actions:
   1. turn left
   2. move ahead
   3. turn right
   4. stand still
   [Diagram: Robot, Hole, Wall]

19. A scenario with safety hazards
   Consider a robot that has four possible next actions:
   1. turn left
   2. move ahead
   3. turn right
   4. stand still
   [Diagram: Robot, Hole, Wall]

   Robot action   Position change   Robot outcome   Consequence
   Ahead left     5 cm              Collision       Robot collides with wall
   Ahead          10 cm             Collision       Robot falls into hole
   Ahead right    20 cm             No-collision    Robot safe
   Stand still    0 cm              No-collision    Robot safe
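
Encoded as data, the internal model's job in this scenario is a lookup-and-filter; a minimal sketch (the consequence table is from the slide, the code and its names are my illustration):

    # Simulated consequences of each possible next action (from the slide)
    CONSEQUENCES = {
        "ahead left":  ("collision",    "robot collides with wall"),
        "ahead":       ("collision",    "robot falls into hole"),
        "ahead right": ("no-collision", "robot safe"),
        "stand still": ("no-collision", "robot safe"),
    }

    def s_tuple(consequences):
        # The S-tuple of safe actions: those with no predicted collision
        return tuple(action for action, (outcome, _) in consequences.items()
                     if outcome == "no-collision")

    print(s_tuple(CONSEQUENCES))  # ('ahead right', 'stand still')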

20. Towards an ethical robot
   Which robot action would lead to the least harm to the human?

21. Towards an ethical robot
   Which robot action would lead to the least harm to the human?

   Robot action   Robot outcome   Human outcome   Consequence
   Ahead left     0               10              Robot safe; human falls into hole
   Ahead          10              10              Both robot and human fall into hole
   Ahead right    4               4               Robot collides with human
   Stand still    0               10              Robot safe; human falls into hole

   Outcome scale 0:10, equivalent to Completely safe : Very dangerous

22. Combining safety and ethical rules

   IF for all robot actions, the human is equally safe
   THEN (* default safe actions *)
      output s-tuple of safe actions
   ELSE (* ethical actions *)
      output s-tuple of actions for least unsafe human outcomes

   Consider Asimov's 1st and 3rd laws of robotics:
   (1) A robot may not injure a human being or, through inaction, allow a human being to come to harm,
   (3) A robot must protect its own existence as long as such protection does not conflict with the First (or Second) Laws
   Isaac Asimov, I, ROBOT, 1950
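
Applying this rule to the outcome table of slide 21 gives a concrete trace; a sketch (my code, with an assumed danger threshold of 5 on the slide's 0:10 scale):

    # Outcome scores per action: (robot, human), where 0 = completely safe
    # and 10 = very dangerous (from slide 21)
    OUTCOMES = {
        "ahead left":  (0, 10),
        "ahead":       (10, 10),
        "ahead right": (4, 4),
        "stand still": (0, 10),
    }

    def choose_actions(outcomes, danger_threshold=5):
        human_scores = {human for (_, human) in outcomes.values()}
        if len(human_scores) == 1:
            # Default safe actions: the human is equally safe either way
            return tuple(a for a, (robot, _) in outcomes.items()
                         if robot < danger_threshold)
        # Ethical actions: the least unsafe outcomes for the human
        least_harm = min(human for (_, human) in outcomes.values())
        return tuple(a for a, (_, human) in outcomes.items()
                     if human == least_harm)

    print(choose_actions(OUTCOMES))  # ('ahead right',)

Note the trade-off this selects: 'ahead right' means the robot collides with the human, yet it is the action with the least harm to the human, exactly the ethical behaviour the slide argues for.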

23. Extending into Adaptivity
   [Diagram] The generic IM architecture of slide 17, extended with learned/adaptive behaviours in the Robot Controller.

24. Extending into Adaptivity
   [Diagram] As slide 23.
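
One way to read this extension, as a sketch only (my interpretation of the diagram, reusing the hypothetical InternalModel class above; not code from the talk): a newly learned behaviour is admitted to the robot's repertoire only after the internal model has tested it in simulation.

    def admit_learned_behaviour(behaviour, internal_model, repertoire):
        # Vet a learned/adaptive behaviour in the IM before trusting it
        if internal_model.safe_actions((behaviour,)):
            repertoire.append(behaviour)
            return True
        return False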

25. Challenges and open questions
   • Fidelity: to model both the system and its environment with sufficient fidelity;
   • To connect the IM with the system's real sensors and actuators (or equivalent);
   • Timing and data flows: to synchronize the internal model with both changing perceptual data, and efferent actuator data;
   • Validation, i.e. of the consequence rules.

26. Major challenges: performance
   • Example: imagine placing this Webots simulation inside each NAO robot
   [Image] Note the simulated robot's eye view of its world

27. A science of simulation: the CoSMoS approach
   The Complex Systems Modelling and Simulation (CoSMoS) process, from Susan Stepney, et al., Engineering Simulations as Scientific Instruments: a pattern language, Springer, in preparation.
   The CoSMoS Process Version 0.1: A Process for the Modelling and Simulation of Complex Systems, Paul S. Andrews, et al., Dept of Computer Science, University of York, Number YCS-2010-453

28. Major challenges: timing
   • When and how often do we need to initiate the generate-and-test loop (IM cycle)?
     – Maybe when the object tracker senses a nearby object starting to move..?
   • How far ahead should the IM simulate?
     – Let us call this time ts. If ts is too short the IM will not encounter the hazard; too long will slow down the robot.
     – Ideally ts and its upper limit should be adaptive.
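
The talk leaves the adaptation scheme open; as a purely illustrative sketch, ts could be adjusted according to where (and whether) the last simulation run encountered a hazard (all names and constants here are my assumptions):

    def adapt_horizon(ts, hazard_time, ts_min=0.5, ts_max=5.0):
        # hazard_time: simulated time (s) at which the first hazard appeared,
        # or None if the whole look-ahead window was hazard-free
        if hazard_time is None:
            ts *= 0.9       # hazard-free window: try a shorter, cheaper one
        elif hazard_time > 0.8 * ts:
            ts *= 1.5       # hazard near the window's edge: look further ahead
        return max(ts_min, min(ts, ts_max))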

29. How self-aware would this robot be?
   • The robot would not pass the mirror test
     – Haikonen (2007), Reflections of consciousness
   • However, I argue this robot would be minimally but sufficiently self-aware to merit the label
     – But this would have to be demonstrated by the robot behaving in interesting ways, that were not pre-programmed, in response to novel situations
     – Validating any claims to self-awareness would be very challenging

30. Some neuroscientific plausibility?
   • Libet's famous experimental result showed that initiation of action occurs before the conscious decision to take that action
     – Libet, B. (1985), Unconscious cerebral initiative and the role of conscious will in voluntary action, Behavioral and Brain Sciences, 8, 529-539.
   • Although controversial, there appears to be a growing body of opinion toward consciousness as a mechanism for vetoing actions
     – Libet coined the term: free won't

31. In conclusion
   • I strongly suspect that self-awareness via internal models might prove to be the only way to guarantee safety in robots, and by extension autonomous systems, in unknown and unpredictable environments
     – and just maybe provide ethical behaviours too
   Thank you!
   Reference for the work of this talk: Winfield, A. F. T., Robots with Internal Models: A Route to Self-Aware and Hence Safer Robots, accepted for The Computer After Me, eds. Jeremy Pitt and Julia Schaumeier, Imperial College Press, 2013.