PAGI World from RPI Licato and Bringsjord

Cognitive Systems Institute Speaker Series - presentation from RPI's John Licato and Selmer Bringsjord.



  1. PAGI World: A Simulation Environment to Challenge Cognitive Architectures. John Licato, Selmer Bringsjord. Rensselaer AI and Reasoning (RAIR) Lab.
  2. Last week • John Laird talked about “interactive task learning” • Today, we will present a simulator to facilitate such research.
  3. Developmental AI – Emerging field attempting to show how, using an agent endowed with minimal innate capacities embedded in a sufficiently rich environment, higher-level cognitive abilities can emerge. What makes an environment sufficiently rich? Guerin, Frank (2011). Learning like a baby: A survey of artificial intelligence approaches. The Knowledge Engineering Review, 26(2), 209-236.
  4. Guerin (2011)'s requirements. A sufficiently rich environment… C1 – is rich enough to provide knowledge which would bootstrap the understanding of concepts rooted in physical relationships, e.g.: inside vs. outside, large vs. small, above vs. below. C2 – can allow for the modeling and acquisition of spatial knowledge (widely regarded to be a foundational domain of knowledge acquisition) through interaction with the world. C3 – can support the creation and maintenance of knowledge which the agent can verify itself.
  5. Our additional requirements. A sufficiently rich environment… C4 – rich enough to provide much of the sensory-level information accessible to a real-world agent. C5 – allows for testing of a virtually unlimited number of tasks, whether they test low-level implicit knowledge, high-level explicit knowledge, or any of the other areas required by PAGI, ideally allowing for the creation of new tasks without substantial programming effort. C6 – allows a wide variety of AI systems based on vastly different theoretical approaches to attempt the same tasks, thus enabling these different approaches to be directly compared. CT – can support tasks capable of verifying AI able to pass the Tailorability Concern.
  6. Tailorability Concern – that [cognitive systems] deal almost exclusively with manually constructed knowledge representations, using toy examples and source knowledge often selected solely to display some particular ability. Gentner, Dedre & Forbus, Ken (2011). Computational models of analogy. Wiley Interdisciplinary Reviews: Cognitive Science, 2(3), 266-276.
  7. Licato, J., Bringsjord, S., & Govindarajulu, N.S. (2014). How models of creativity and analogy need to answer the tailorability concern. In Besold, T.R., Kühnberger, K.-U., Schorlemmer, M., & Smaill, A. (Eds.), Computational Creativity Research: Towards Creative Machines. Atlantis Press.
  8. Drescher (1991): A starting point • Cell-based world • Simple agent which occupied one cell • Agent had a “hand” which could grasp objects in the world • Visual field relative to the agent's “body”. Drescher, Gary L. (1991). Made-Up Minds: A Constructivist Approach to Artificial Intelligence. The MIT Press.
  9. Drescher (1991): A starting point • Used to show Piagetian (constructivist) bottom-up creation of knowledge • Simulation environment was tightly coupled with his schema mechanism • No realistic motion or physics • World did not provide rich source analogs for e.g. inside vs. outside. Drescher, Gary L. (1991). Made-Up Minds: A Constructivist Approach to Artificial Intelligence. The MIT Press.
  10. C5 – allows for testing of a virtually unlimited number of tasks, whether they test low-level implicit knowledge, high-level explicit knowledge, or any of the other areas required by PAGI, ideally allowing for the creation of new tasks without substantial programming effort.
  11. [Architecture diagram: components controlled by the PAGI-side and components controlled by the AI-side communicate over TCP/IP; labeled blocks include Reflex and State Machine, pyPAGI (optional), DCEC* extractor/convertor, Physics Engine, and a Task Editor configurable by an external user.] Can be written in almost any language!
  12. PAGI World can be run on: Windows, Mac OS, Linux (through Chrome browser), Android and iPhone (in theory). AI can be written in: ANY programming language which supports TCP/IP.
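Because the only requirement on the AI side is TCP/IP support, a controller can be as simple as a raw socket client. The Python sketch below shows the general shape of such a client, assuming the simulator is running locally; the host, port, and command string are placeholders invented for illustration and are not taken from the slides (pyPAGI, the optional helper shown in the slide 11 architecture, wraps this kind of socket handling).

    import socket

    # Hypothetical connection details: PAGI World listens for the AI-side
    # controller over TCP/IP. The host, port, and command format here are
    # assumptions for illustration, not a documented protocol.
    HOST, PORT = "127.0.0.1", 42209

    with socket.create_connection((HOST, PORT)) as sock:
        # Send one newline-terminated command to the simulated agent
        # (the command string itself is a placeholder).
        sock.sendall(b"sensorRequest,exampleSensor\n")
        # Read back whatever the simulator returns.
        reply = sock.recv(4096).decode("utf-8")
        print("PAGI World replied:", reply)

The same handful of lines could be written in Java, C++, Lisp, or any other language with a TCP/IP socket library, which is what makes criterion C6 (directly comparing very different AI systems on the same tasks) practical.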
  13. C1 – is rich enough to provide knowledge which would bootstrap the understanding of concepts rooted in physical relationships, e.g.: inside vs. outside, large vs. small. C2 – can allow for the modeling and acquisition of spatial knowledge (widely regarded to be a foundational domain of knowledge acquisition) through interaction with the world.
  14. C3 – can support the creation and maintenance of knowledge which the agent can verify itself. Warning: DCEC* is a highly expressive computational logic and therefore the cognition which it enables may or may not be within reach of a given cognitive architecture. But PAGI World allows us to test and see!
  15. C4 – rich enough to provide much of the sensory-level information accessible to a real-world agent. C6 – allows a wide variety of AI systems based on vastly different theoretical approaches to attempt the same tasks, thus enabling these different approaches to be directly compared. [Architecture diagram repeated from slide 11.]
  16. PAGI World allows super rapid demonstrations of cognitive abilities.
  17. “The Brilliant Boardroom”: Cognitive Computing with the DCEC* and ADR. John Licato, Selmer Bringsjord; Konner Atkin, Maggie Borkowski, Jack Cusick, Kainoa Eastlack, Nick Marton, James Pane-Joyce, Spencer Whitehead. Rensselaer AI and Reasoning (RAIR) Lab, Rensselaer Polytechnic Institute, Troy, NY.
      Abstract: This poster reports on research and development done by the Rensselaer AI and Reasoning (RAIR) Lab's team, in collaboration with IBM, on creating framework technologies that can be used in many areas of cognitive computing. We here focus on one such area - the Brilliant Boardroom (BB), in which a robot or set of robots, augmented with multimodal inputs such as speech recognition, synthesis, and basic vision processing, react and productively add to a meeting of corporate executives. We infuse the Brilliant Boardroom with two RAIR-lab-developed technologies: the Deontic Cognitive Event Calculus (DCEC*), a highly expressive computational framework intended to formally model and mechanize human-level reasoning, decision-making, problem-solving, and natural language communication; and Analogico-Deductive Reasoning (ADR), a type of reasoning which is central to higher-level human-like cognition.
      [The poster's Figure 1 gives the formal syntax and rules of inference of the DCEC*.]
      DCEC*: The Deontic Cognitive Event Calculus. The DCEC*, pictured in Figure 1, is a highly expressive framework that has been used for the mechanization of human-level reasoning, automated decision-making, natural language parsing and generation, and many other applications. Because it allows artificial agents to represent arbitrarily nested beliefs and knowledge (e.g. that the executive in chair 1 believes that the executive in chair 2 believes that the executive in chair 1 is lying; see the formula sketch after this slide), it can perform reasoning far beyond that of many other formalisms proposed to represent commonsense knowledge. This sort of ability is extremely important in situations where an artificial agent is asked to exist in a complex social environment, much less one that may require the agent to provide justifications for its conclusions (as the robotic agent in our demonstration was made to do).
      The DCEC* also lends itself to social environments because of its inherent capturing of deontic notions. It has operators such as O (for obligation), which is treated carefully by a set of inference rules (see Figure 1), themselves chosen to help ensure that commonsense notions of what it means to be obliged to do something can be captured through straightforward applications of deductive reasoning. These inference rules are constantly being refined and explored through RAIR lab R&D.
      Of course, deductive reasoning alone may be insufficient to capture the sort of reasoning expected of an artificial agent in a boardroom; therefore we augment our system with ADR, which is another major research focus of our lab.
      ADR: Analogico-Deductive Reasoning. Although analogical and deductive reasoning can interact in a myriad of different combinations, the particular intersection between hypothetico-deductive and analogical reasoning, which we call ADR, has been shown to be particularly useful to human reasoners, from young children performing Piagetian experiments to groundbreaking mathematical logicians like Gödel. In its simplest form, ADR involves using analogical processes to select potentially relevant source analogs, match them to the target domain, and produce hypotheses about the target domain. However, because these hypotheses are prone to error, deductive reasoning is invoked to verify, support, or refute the hypotheses before they are incorporated into a knowledge base.
      In our demonstration, the BB (personified by the Aldebaran NAO Bot pictured in Figure 2) utilized ADR to answer a question about how one of the boardroom meeting's participants might get access to some sales data - the correct answer was to ask Mr. Smith, which is knowledge that the robot did not previously have. It inferred this by drawing an analogy to a previous instance in which a meeting participant obtained similar sales data by asking Mr. Johnson, who at the time held the office currently held by Mr. Smith. The deductive step did not find any contradictions, and so the robot reported its findings.
      Conclusion / Future Work: The coming of Cognitive Computing raises many interesting questions about what it means to be cognitive in the first place. But we must also ask what we want our artificial cognitive companions to do, even when those things may not be cognitively plausible. Here we will see at least two concerns: First, that cognitive agents should be able to reason ethically; and second, that these agents should be able to provide justifications for their actions (in part to ensure that the first concern is met). Again, the DCEC* and ADR offer results in this direction. Although it may turn out that this pair of technologies is not all that is needed to ensure that our cognitive companions behave correctly, they represent a line of research that takes the concerns we have raised here seriously and constitute a larger effort that continues to be a focus of RAIR lab R&D.
      Figure 1. The Deontic Cognitive Event Calculus (DCEC*).
      Figure 2. The robot used as the personification of the BB.
      [Figure 3 diagram labels: RPI Suggestion and Justification Service, User, ADR Module, Local KB, DCEC* Reasoner, DBPedia.] Figure 3. DCEC* and ADR were recently used in a demonstration of another service, hosted in RPI's “red zone,” accessed from services hosted on IBM's “blue zone.”
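As a concrete rendering of the nested-belief example from the poster above, the DCEC* belief operator B(a, t, φ) shown in Figure 1 can simply be nested inside itself. The agent names exec1 and exec2 and the fluent lying below are invented purely for illustration; this is one possible reading, not a formula taken from the poster:

    B(exec1, t, B(exec2, t, holds(lying(exec1), t)))

i.e., at time t, exec1 (the executive in chair 1) believes that exec2 believes that the fluent lying(exec1) holds at t.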
  18. 18. Theorem 3: There is a way to satisfy both obligations.
  19. From the Licato presentation in IBM's Cognitive Systems Institute Lecture Series: “PAGI World: A Simulation Environment to Challenge Cognitive Architectures”. For more information, visit https://www.linkedin.com/groups/Cognitive-Systems-Institute-6729452
  20. PAGI World is a challenge to AI and cognitive architecture researchers. Let's create tasks, AI systems to solve them, compare the approaches, and repeat - and keep this field moving forward!
  21. How to access PAGI World (beta version): Email John Licato at licatj@rpi.edu
