DCLA meet CIDA: Collective Intelligence Deliberation Analytics


DCLA14: 2nd International Workshop on Discourse-Centric Learning Analytics at LAK14: http://dcla14.wordpress.com

Abstract: This discussion paper builds a bridge between Discourse-Centric Learning Analytics (DCLA), whose focus tends to be on student discourse in formal educational contexts, and research and practice in Collective Intelligence Deliberation Analytics (CIDA), which seeks to scaffold quality deliberation in teams/collectives devising solutions to complex problems. CIDA research aims to equip networked communities with deliberation platforms capable of hosting large-scale, reflective conversations, and of actively feeding back to participants and moderators the 'vital signs' of the community and the state of its deliberations. CIDA tends not to focus on formal educational communities, although many of the communities it serves would consider themselves learning communities in the broader sense, since they recognize the need to pool collective intelligence in order to understand, and co-evolve solutions to, complex dilemmas. We propose that the context and rationale behind CIDA efforts, and emerging CIDA implementations, contribute a research and technology stream to the DCLA community. The argument is twofold: (i) The context of CIDA work connects with the growing recognition in educational thinking that students from school age upwards should be given opportunities to engage in authentic learning challenges, wrestling with problems and engaging in practices increasingly close to the complexity they will confront when they graduate. (ii) In the contexts of both DCLA and CIDA, different kinds of users need feedback on the state of the debate and the quality of the conversation: the students and educators served by DCLA are mirrored by the citizens and facilitators served by CIDA. In principle, therefore, a fruitful dialogue could unfold between DCLA and CIDA researchers and practitioners, in order to better understand common and distinctive requirements.


DCLA meet CIDA: Collective Intelligence Deliberation Analytics

  1. DCLA meet CIDA: Collective Intelligence Deliberation Analytics. Simon Buckingham Shum & Anna De Liddo, Mark Klein. DCLA14: 2nd International Workshop on Discourse-Centric Learning Analytics at LAK14: http://dcla14.wordpress.com
  2. Complex societal challenges
  3. Interest in the potential of platforms to harness Social Innovation / Collective Intelligence
  4. EU Collective Awareness Platforms for Sustainability & Social Innovation: http://caps2020.eu
  5. CI means many things to many people…
  6. CATALYST Project: http://catalyst-fp7.eu
  7. Flat commenting / threaded discussion: what everyone uses now
  8. Ideation platforms. Intuitive for scalable brainstorming, but studies show that exploration of the problem space is poor, with a lot of repetition and weak knowledge building. Labour-intensive to sort through thousands of ideas. Facilitators play a key role in ensuring that ideas get connected. Hard for analytics to gauge quality of discourse. E.g. http://www.spigit.com, http://ideascale.com
  9. Ideation platforms: http://www2.mitre.org/public/jsmo/call-for-papers-lg-scale-ideation%20.html
  10. Pain Points in Social Innovation Platforms
  11. Pain Points prioritised by orgs who run social innovation platforms: hard to visualise the debate; poor summarisation; poor commitment to action; sustaining participation; shallow contributions and unsystematic coverage; poor idea evaluation. Effective visualisation of concepts, new ideas and deliberations is essential for shared understanding, but suffers both from a lack of efficient tools to create them and from a lack of ways to reuse them across platforms and debates. "As a user, visualisation is my biggest problem. It is often difficult to get into the discussion at the beginning. As a manager of these platforms, showing people what is going on is the biggest pain point."
  12. Pain Points prioritised by orgs who run social innovation platforms (list repeated from slide 11). Participants struggle to get a good overview of what is unfolding in an online community debate. Only the most motivated participants will commit a lot of time to reading the debate in order to identify the key members, the most relevant discussions, etc. The majority of participants tend to respond unsystematically to stimulus messages, and do not digest earlier contributions before making their own contribution to the debate, such is the cognitive overhead and limited time.
  13. Pain Points prioritised by orgs who run social innovation platforms (list repeated from slide 11). Bringing motivated audiences to commit to action is difficult. Enthusiasts, those who have an interest in a subject but have yet to commit to taking action, are left behind. There is a need to prompt action in community members. Reaching a consensus was considered less important than being enabled to act.
  14. Pain Points prioritised by orgs who run social innovation platforms (list repeated from slide 11). Motivating participants with widely differing levels of commitment, expertise and availability to contribute to an online debate is challenging and often unproductive. Sustaining participation is more important than enlarging participation. "It is better to have quality input from a small group than a lot of members but very little content."
  15. Pain Points prioritised by orgs who run social innovation platforms (list repeated from slide 11). Open innovation systems tend to generate a large number of relatively shallow ideas, with poor collaborative refinement of the kind that could allow more refined, deeply considered contributions to develop. There is no easy way to see which problem facets remain under-covered, and coverage of the solution space is very partial.
  16. Pain Points prioritised by orgs who run social innovation platforms (list repeated from slide 11). Patchy evaluation of ideas. Poor-quality justification for ideas. Hard to see why ratings have been given. Unclear which rationales are evidence-based.
  17. CI Deliberation Platforms: the addition of semantic structure
  18. ODET website: slides, movies, papers, tools: olnet.org/odet2010
  19. bCisive Online: product-grade argument mapping
  20. DebateGraph
  21. DebateGraph
  22. MIT's Deliberatorium
  23. OU's Evidence Hub
  24. OU's Evidence Hub
  25. OU's Cohere
  26. OU's Cohere
  27. OU's Cohere
  28. Consider.It
  29. Consider.It
  30. YourView
  31. Can we see such tools in education?
  32. CI Discourse and Formal Education. Discourse: a meeting of minds. Collective Intelligence for Social Innovation vs Formal Education: citizen vs student; moderator vs teacher; seeking strong voluntary participation vs voluntary/required participation; seeking good exploration of the problem, building on peers' ideas; seeking a collectively owned solution vs may also be seeking the correct solution; civil discourse, ideally well argued; ideas from all stakeholders.
  33. CI vs Educational Discourse Tools. CI Deliberation Platforms vs Educational Argumentation Platforms: simple, professional interfaces vs effortful, more amateur interfaces; authentic, complex problems vs artificial problems; untrained users (citizens) who choose to use the tools vs (possibly trained) students who are required to use the tools; multiple, engaging visualizations vs argument networks.
  34. Approaches to Discourse Analytics
  35. DCLA strategies from AIED/CSCL. Scheuer O, McLaren BM, Loll F and Pinkwart N. (2012) Automated Analysis and Feedback Techniques to Support Argumentation: A Survey. In: McLaren BM and Pinkwart N (eds) Educational Technologies for Teaching Argumentation Skills. Bentham Science Publishers, 71–124. Analysis approaches and descriptions: (1) Syntactic analysis: rule-based approaches that find syntactic patterns in argument diagrams (systems: Belvedere, LARGO). (2) Problem-specific analysis: use of a problem-specific knowledge base to analyze student arguments or synthesize new arguments (systems: Belvedere, LARGO, Rashi, CATO). (3) Simulation of reasoning and decision-making processes: qualitative and quantitative approaches to determine the believability/acceptability of statements in argument models (systems: Zeno, Hermes, ArguMed, Carneades, Convince Me, Yuan et al. 2008). (4) Assessment of content quality: collaborative filtering, a technique in which the views of a community of users are evaluated, to assess the quality of the contributions' textual content (systems: LARGO). (5) Classification of the current modeling phase: classification of the current phase a student is in according to a predefined process model (systems: Belvedere, LARGO). (A simplified code sketch of a rule-based syntactic check appears after this transcript.)
  36. Argunaut Moderator Tool
  37. Catalyst Project: CI Analytics Concept
  38. Or use a native IBIS platform
  39. Discourse Analytics: Visualization
  40. DCLA analytical questions
  41. CIDA Visualization storyboarding
  42. CIDA Visualization storyboarding
  43. CI Dashboard mockups
  44. Discourse Analytics: Rhetorical Parsing of Discussion Forum. Simsek D, Buckingham Shum S, Sándor Á, De Liddo A and Ferguson R. (2013) XIP Dashboard: Visual Analytics from Automated Rhetorical Parsing of Scientific Metadiscourse. 1st International Workshop on Discourse-Centric Learning Analytics, at 3rd International Conference on Learning Analytics & Knowledge. Leuven, BE (Apr. 8-12, 2013). Open Access Eprint: http://oro.open.ac.uk/37391
  45. Rhetorical discourse analytics: to what extent do comments display the hallmarks of reasoned writing which makes thinking visible? Example comment tagged <IMPORTANT SUMMARY>: "The argument is that the consumer has benefited because technology has increased consumer access to markets and has forced brands to become more open and transparent. Likewise, organisations benefit as technology allows them greater access to consumer information. So it seems that we have all gained from the impact of technology. The strongest arguments seemed to lean towards the consumer as benefiting most. I am not convinced. I think that, as brands become more sophisticated and knowledgeable in their approach, consumer resistance becomes more difficult." Example comment tagged <IMPORTANT SUMMARY CONTRAST>: "Really good thoughts - I hadn't considered the other stakeholders. I'm thinking of local brands, which are small now, but have ambition to get bigger. SMEs are not going to create huge brand value overnight, but I think lessons can be taken from what the big brands are doing and employed by SMEs."
  46. Rhetorical discourse analytics: to what extent do comments display the hallmarks of reasoned writing which makes thinking visible?
  47. Rhetorical discourse analytics: to what extent do comments display the hallmarks of reasoned writing which makes thinking visible? (A simplified cue-phrase sketch appears after this transcript.)
  48. Discourse Analytics: Process-Goal-Exception Analysis. Klein M. (2003) A Knowledge-Based Methodology for Designing Reliable Multi-Agent Systems. In: Giorgini P, Mueller JP and Odell J (eds) Agent-Oriented Software Engineering IV. Springer-Verlag, 85-95. Klein M. (2012) Enabling Large-Scale Deliberation Using Attention-Mediation Metrics. Computer Supported Cooperative Work 21: 449-473.
  49. Process-Goal-Exception analysis (workflow): identify the normative process model (process decomposition); identify ideal goals for each subtask (process model with goals); identify possible exceptions for each goal (process model with goals and exceptions); identify handlers for each exception.
  50. Process-Goal-Exception (PGE) ontology (diagram): processes, goals and exceptions linked by has-part, requires, is-violated-by, is-caused-by and is-handled-by relations. (A minimal data-model sketch appears after this transcript.)
  51. Deliberatorium PGE analytics modelling
  52. PGE analysis of author diversity
  53. PGE analysis of diversity of ideas (an illustrative diversity metric is sketched after this transcript)
  54. PGE analysis of IBIS syntax checking for impoverished argumentation (an illustrative structure check is sketched after this transcript)
  55. Implementing handlers using PQL graph queries. Currently hardwired to the Deliberatorium, but will work on data compliant with a new interchange format for cross-platform interoperability.
  56. Deliberatorium recommender agent prioritises areas for moderator attention (a simplified query-and-rank sketch appears after this transcript)
  57. CIDA-DCLA synergies (diagram). DCLA: new kinds of UX for structured argumentation; new kinds of visualization of argumentation + domain; authentic use contexts; moderator tools. CIDA: analytics taxonomies; AI techniques; small-scale prototypes; attention to quality discourse.
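
The sketches below illustrate, in simplified Python, several of the analytic techniques referenced in the transcript; each is a hedged illustration rather than the cited systems' actual implementations. First, for the "syntactic analysis" row of the Scheuer et al. survey on slide 35, a minimal rule-based pattern check over an argument diagram, in the spirit of Belvedere/LARGO-style rules. The graph encoding, node types and the specific rule (flag claims with no supporting evidence) are illustrative assumptions.

    # Minimal sketch: a rule-based syntactic pattern check on an argument diagram.
    # Nodes are (id, type) with type in {"claim", "evidence"}; edges are
    # (source_id, relation, target_id), e.g. ("e1", "supports", "c1").

    def unsupported_claims(nodes, edges):
        """Return ids of claim nodes with no incoming 'supports' edge (illustrative rule)."""
        supported = {target for (_, rel, target) in edges if rel == "supports"}
        return [nid for (nid, ntype) in nodes if ntype == "claim" and nid not in supported]

    nodes = [("c1", "claim"), ("c2", "claim"), ("e1", "evidence")]
    edges = [("e1", "supports", "c1")]
    print(unsupported_claims(nodes, edges))   # -> ['c2']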
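
For the rhetorical parsing on slides 44-47: the XIP parser performs deep linguistic analysis of metadiscourse, so the sketch below is only a crude cue-phrase approximation of the same idea, labelling sentences that look like summary or contrast moves. The cue lists and labels are assumptions for illustration, not the XIP rule set.

    # Crude cue-phrase tagger for 'summary' and 'contrast' rhetorical moves
    # (a stand-in for XIP's much richer metadiscourse analysis).
    import re

    SUMMARY_CUES = ("the argument is", "so it seems", "in summary", "overall")
    CONTRAST_CUES = ("i am not convinced", "however", "on the other hand", "but i think")

    def tag_sentences(text):
        tagged = []
        for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
            low = sentence.lower()
            labels = []
            if any(cue in low for cue in SUMMARY_CUES):
                labels.append("SUMMARY")
            if any(cue in low for cue in CONTRAST_CUES):
                labels.append("CONTRAST")
            tagged.append((labels or ["NONE"], sentence))
        return tagged

    comment = "The argument is that the consumer has benefited. I am not convinced."
    for labels, sentence in tag_sentences(comment):
        print(labels, "|", sentence)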
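
For the Process-Goal-Exception model on slides 49-50: a minimal sketch of the data model as nested records, assuming simple containment and linking fields (the field names are mine, not Klein's schema).

    # Minimal PGE data model: a process has sub-processes and goals,
    # a goal can be violated by exceptions, and an exception names handler processes.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class PGEException:
        name: str
        handlers: List[str] = field(default_factory=list)   # names of handler processes

    @dataclass
    class Goal:
        name: str
        exceptions: List[PGEException] = field(default_factory=list)

    @dataclass
    class Process:
        name: str
        goals: List[Goal] = field(default_factory=list)
        subprocesses: List["Process"] = field(default_factory=list)

    deliberate = Process(
        "run deliberation",
        goals=[Goal("broad authorship",
                    exceptions=[PGEException("few authors dominate",
                                             handlers=["invite inactive members"])])])
    print(deliberate.goals[0].exceptions[0].handlers)   # -> ['invite inactive members']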
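
For the diversity analyses on slides 52-53: one plausible way to operationalise an author-diversity (or idea-diversity) check is a normalised entropy over contribution counts. This particular metric and threshold are my illustration, not necessarily what the Deliberatorium computes.

    # Normalised Shannon entropy of contributions per author:
    # 1.0 = perfectly even participation; values near 0 = a few authors dominate.
    import math
    from collections import Counter

    def participation_evenness(post_authors):
        counts = Counter(post_authors).values()
        n = len(counts)
        if n <= 1:
            return 0.0
        total = sum(counts)
        entropy = -sum((c / total) * math.log(c / total) for c in counts)
        return entropy / math.log(n)

    posts = ["ann", "ann", "ann", "bob", "cli", "ann", "ann"]
    score = participation_evenness(posts)
    if score < 0.8:   # illustrative threshold for raising a 'low diversity' exception
        print(f"low author diversity: {score:.2f}")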
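
For the IBIS syntax checking on slide 54: a minimal sketch, assuming an IBIS map stored as typed nodes (issue, idea, pro, con) with parent links. The impoverishment rules (issues explored by fewer than two ideas, ideas with no pro or con arguments) are illustrative.

    # IBIS structure check: flag issues with fewer than two ideas, and ideas
    # that have attracted no pro or con arguments.
    def check_ibis(nodes):
        """nodes: dict id -> {"type": "issue"|"idea"|"pro"|"con", "parent": id or None}."""
        children = {}
        for nid, node in nodes.items():
            children.setdefault(node["parent"], []).append(nid)
        problems = []
        for nid, node in nodes.items():
            kid_types = [nodes[c]["type"] for c in children.get(nid, [])]
            if node["type"] == "issue" and kid_types.count("idea") < 2:
                problems.append((nid, "under-explored issue"))
            if node["type"] == "idea" and not any(t in ("pro", "con") for t in kid_types):
                problems.append((nid, "idea with no arguments"))
        return problems

    ibis = {
        "i1": {"type": "issue", "parent": None},
        "a1": {"type": "idea", "parent": "i1"},
        "p1": {"type": "pro", "parent": "a1"},
    }
    print(check_ibis(ibis))   # -> [('i1', 'under-explored issue')]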
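
For the handlers and recommender agent on slides 55-56: the Deliberatorium expresses handlers as PQL graph queries, whose syntax is not reproduced here. The sketch below simply runs plain Python predicates over the same kind of map and ranks matches by an assumed severity weight, to convey the shape of the query-then-prioritise pipeline rather than the actual code.

    # Run exception-detection queries over a deliberation map and rank the hits
    # so a moderator sees the most severe problems first (weights are illustrative).
    def moderator_queue(nodes, queries):
        hits = []
        for name, severity, predicate in queries:
            for nid in predicate(nodes):
                hits.append((severity, name, nid))
        return sorted(hits, reverse=True)

    def orphan_ideas(nodes):
        return [nid for nid, n in nodes.items() if n["type"] == "idea" and n["parent"] is None]

    queries = [("idea not attached to any issue", 3, orphan_ideas)]
    nodes = {"a9": {"type": "idea", "parent": None}}
    for severity, name, nid in moderator_queue(nodes, queries):
        print(f"[{severity}] {name}: node {nid}")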
