
Towards Contested Collective Intelligence



Towards Contested Collective Intelligence
Simon Buckingham Shum, Director Connected Intelligence Centre, University of Technology Sydney
This talk aims to open a dialogue with the important work of the SWARM project. I’ll introduce the key ideas that have shaped my work on interactive software tools to make thinking visible, shareable and contestable, some of the design prototypes, and some of the lessons we’ve learnt en route.


Towards Contested Collective Intelligence

  1. 1. Simon Buckingham Shum Connected Intelligence Centre • University of Technology Sydney @sbuckshum • http://utscic.edu.au • http://Simon.BuckinghamShum.net Towards Contested Collective Intelligence or… A tour of the CI design space for Hypermedia Discourse University of Melbourne • SWARM Project, 12th Sept. 2017
  2. 2. Contested Collective Intelligence... In wicked problems, there is no master worldview, ontology or logic, so disagreement is a necessary process and vital ingredient. We can disagree well or badly. CI tools should scaffold and improve this process (e.g. amplify awareness of how stakeholders are framing the problem, reading the signals, seeing connections, and judging success) 2 De Liddo, A., Sándor, Á. and Buckingham Shum, S. (2012). Contested Collective Intelligence: Rationale, Technologies, and a Human-Machine Annotation Study. Computer Supported Cooperative Work, 21, (4-5), pp. 417-448. http://doi.org/10.1007/s10606-011-9155-x
  3. 3. Dilemmas and (partial) Solutions
  4. 4. Dilemma If everyone just talks with no structure, it’s hard to synthesise CI
  5. 5. © Simon Buckingham Shum 5 Hypermedia Discourse the modelling of discourse / the discourse of modelling …reading and writing networks of documents, concepts, issues, ideas and arguments Buckingham Shum, S. (2006). Sensemaking on the Pragmatic Web: A Hypermedia Discourse Perspective. In: 1st International Conference on the Pragmatic Web, 21-22 Sept 2006, Stuttgart, Germany. ePrint: http://oro.open.ac.uk/6442
  6. 6. © Simon Buckingham Shum 6 Discourse § Dialogue § Deliberation § Argumentation § Reflection (Online & Face-to-Face Meetings)
  7. 7. © Simon Buckingham Shum 7 Hypermedia § Modelling discourse relations § Expressing different perspectives on a conceptual space § Supporting the incremental formalization of ideas § Rendering structural visualizations § Connecting heterogeneous content
  8. 8. © Simon Buckingham Shum 8 Discourse Model Key ingredients of a Hypermedia Discourse approach
  9. 9. © Simon Buckingham Shum 9 Notation / Visualisation Discourse Model Key ingredients of a Hypermedia Discourse approach
  10. 10. © Simon Buckingham Shum 10 Notation / Visualisation User Interface Discourse Model Key ingredients of a Hypermedia Discourse approach
  11. 11. © Simon Buckingham Shum 11 Notation / Visualisation User Interface Computational Services Discourse Model Key ingredients of a Hypermedia Discourse approach
  12. 12. © Simon Buckingham Shum 12 Notation / Visualisation User Interface Computational Services Literacy/ Fluency Discourse Model Key ingredients of a Hypermedia Discourse approach
  13. 13. Dilemma If users are required to structure their contributions to a CI repository, the effort must provide tangible benefit (not just potential benefits to future stakeholders)
  14. 14. Solution (in small synchronous settings) A skilled mapper resolves the cost-benefit tradeoff, adding immediate value to the sensemaking
  15. 15. Issue Mapping (or in a meeting real-time: Dialogue Mapping) based on Horst Rittel’s IBIS scheme Buckingham Shum, S. (2003). The roots of computer supported argument visualization. In P. Kirschner, S. Buckingham Shum, & C. Carr (Eds.), Visualizing Argumentation (pp. 3–24). London: Springer. ePrint: http://bit.ly/VizArgRoots
  16. 16. http://compendiuminstitute.net Issue Mapping (or in a meeting real-time: Dialogue Mapping) based on Horst Rittel’s IBIS scheme
  17. 17. https://www.youtube.com/watch?v=pxS5wUljfjE Issue Mapping (or in a meeting real-time: Dialogue Mapping) based on Horst Rittel’s IBIS scheme
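Rittel’s IBIS scheme is compact: Issues (questions), Positions (candidate answers) and Arguments (pros/cons), joined by a small set of link types. A minimal sketch of that data model follows, in illustrative Python; these names are my own and not Compendium’s actual API:

```python
from dataclasses import dataclass
from enum import Enum

class NodeType(Enum):
    ISSUE = "issue"        # a question to deliberate
    POSITION = "position"  # a candidate answer responding to an Issue
    ARGUMENT = "argument"  # a pro or con bearing on a Position

class LinkType(Enum):
    RESPONDS_TO = "responds-to"  # Position -> Issue
    SUPPORTS = "supports"        # Argument -> Position
    OBJECTS_TO = "objects-to"    # Argument -> Position

@dataclass
class Node:
    node_type: NodeType
    label: str

@dataclass
class Link:
    link_type: LinkType
    source: Node
    target: Node

# A tiny dialogue map: one Issue, one Position, one pro and one con
issue = Node(NodeType.ISSUE, "How should we power the new facility?")
position = Node(NodeType.POSITION, "Solar with battery storage")
pro = Node(NodeType.ARGUMENT, "Falling panel costs")
con = Node(NodeType.ARGUMENT, "Intermittency risk")

dialogue_map = [
    Link(LinkType.RESPONDS_TO, position, issue),
    Link(LinkType.SUPPORTS, pro, position),
    Link(LinkType.OBJECTS_TO, con, position),
]
```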
  18. 18. this simple set of moves — combined with hypertext and mapping fluency — goes a long way… UK Research Excellence Framework (REF) 2014 Impact Case
  19. 19. Compendium software (open source) visual hypermedia for managing the connections between ideas flexibly Deep acknowledgements: Jeff Conklin CogNexus Institute Al Selvin & Maarten Sierhuis NYNEX Science & Technology —> Bell Atlantic —> Verizon —> NASA http://compendiuminstitute.net
  20. 20. 20 Structure management in Compendium § Associative linking nodes in a shared context connected by graphical Map links § Categorical membership nodes in different contexts connected by common attributes via metadata Tags § Hypertextual Transclusion reuse of the same node in different views § Templates reuse of the same structure in different views § HTML, XML and RDF data exports for interoperability § Java and SQL interfaces to add services
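Transclusion is the structural move doing most of the work here: a view holds a reference to a node, not a copy, so the same node can appear in many maps and stay consistent. A sketch of the idea, using hypothetical structures rather than Compendium’s actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    tags: set = field(default_factory=set)      # categorical membership via Tags

@dataclass
class View:
    name: str
    nodes: list = field(default_factory=list)   # views hold references, not copies

idea = Node("Reuse waste heat from the data centre", tags={"energy", "cost"})

planning = View("Q3 planning map", [idea])
risks = View("Risk register map", [idea])       # transclusion: same node, second view

idea.label += " (approved)"                     # edit the node in one context...
assert risks.nodes[0].label.endswith("(approved)")  # ...visible in every view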
  21. 21. Compendium Institute: international community http://CompendiumInstitute.net (now archived)
  22. 22. Global Sensemaking Network (2008~2012) http://GlobalSensemaking.net
  23. 23. CogNexus consulting: Issue/Dialogue Mapping http://cognexus.org • http://cognexusgroup.com
  24. 24. Groupaya+CogNexus consulting: Issue/Dialogue Mapping http://delta.groupaya.net
  25. 25. Seven Sigma consulting: Issue/Dialogue Mapping http://www.sevensigma.com.au/what-we-do/sensemaking.html
  26. 26. “Knowledge Artistry” (Al Selvin) Selvin, A. & Buckingham Shum, S. (2014). Constructing Knowledge Art: An Experiential Perspective on Crafting Participatory Representations. Morgan & Claypool. http://doi.org/10.2200/S00593ED1V01Y201408HCI023 Hypermedia Discourse fluency at a high level
  27. 27. 27 Mapping with IBIS Issue-templates to harvest the firm’s collective intelligence on Y2K contingencies Selvin, A.M. and Buckingham Shum, S.J. (2002). Rapid Knowledge Construction: A Case Study in Corporate Contingency Planning Using Collaborative Hypermedia. Knowledge and Process Management, 9, (2), pp.119-128.
  28. 28. 28 Modelling organisational processes in Compendium using a Template
  29. 29. 29 Completing a Compendium template
  30. 30. 30 Generating Custom Documents and Diagrams from Compendium Templates Selvin, A.M. and Buckingham Shum, S.J. (2002). Rapid Knowledge Construction: A Case Study in Corporate Contingency Planning Using Collaborative Hypermedia. Knowledge and Process Management, 9, (2), pp.119-128.
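The template workflow is straightforward to picture in code: a template is an ordered set of Issue prompts, a completed template is a set of answers, and the document generator walks the template emitting structured text. A toy sketch with hypothetical field names and example content; the real exporters produced custom documents and diagrams:

```python
# A Compendium-style template is essentially a reusable set of Issue prompts;
# completing it yields answer nodes, from which a document can be generated.
template = [
    "What systems could be affected?",
    "What is the contingency plan?",
    "Who owns the recovery action?",
]

answers = {
    "What systems could be affected?": "Billing and call-routing",
    "What is the contingency plan?": "Fail over to the manual process",
    "Who owns the recovery action?": "Operations on-call team",
}

def render_report(template, answers):
    """Walk the template in order and emit a structured text document."""
    lines = ["CONTINGENCY REPORT", "=" * 18]
    for issue in template:
        lines.append(f"\n{issue}")
        lines.append(f"  -> {answers.get(issue, '[unanswered]')}")
    return "\n".join(lines)

print(render_report(template, answers))
```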
  31. 31. 31 Using Compendium for personnel recovery operations planning Conversational Modelling: real-time dialogue mapping combined with model-driven templates (AI+IA) DARPA Co-OPR Project (PI: Austin Tate, AIAI, U. Edinburgh) http://www.aiai.ed.ac.uk/project/co-opr
  32. 32. © Simon Buckingham Shum 32 Mission Briefing: Intent template Answers to template issues provided in the JTFC Briefing. Answers may be constrained by predefined options, as specified in the XML schema
  33. 33. © Simon Buckingham Shum 33 Capturing political deliberation/rationale Dialogue Map capturing the planners’ discussion of this option
  34. 34. © Simon Buckingham Shum 34 Planning Engine input to Compendium Issues on which the I-X planning engine provided candidate Options
  35. 35. 35 Mapping with IBIS to build a NASA science team’s collective intelligence for planetary geological exploration Clancey, William J.; Sierhuis, Maarten; Alena, Richard L.; Graham, Jeffrey S.; Tyree, Kim S.; Hirsh, Robert L.; Garry, W. Brent; Semple, Abigail; Buckingham Shum, Simon J.; Shadbolt, Nigel and Rupert, Shannon M. (2007). Automating CapCom Using Mobile Agents and Robotic Assistants. In: 1st Space Exploration Conference: Continuing the Voyage of Discovery, 30 Jan-1 Feb 2005, Orlando, FL, US. http://eprints.aktors.org/375
  36. 36. NASA: Mars Habitat field trials in Utah desert
  37. 37. NASA remote science team tools [Diagram: Earth-based scientists and the ‘Mars’ crew’s scientists linked through a Software Agent Architecture on ‘Mars’.] Compendium used as a collaboration medium at all intersections: humans+agents reading+writing IBIS maps
  38. 38. Geology dialogue map between Earth-based scientists and ‘Mars’ Copyright, 2004, RIACS/NASA Ames, Open University, Southampton University Not to be used without permission
  39. 39. Compendium activity plans for surface exploration, constructed by scientists, interpreted by software agents
  40. 40. Compendium science data map, generated by software agents, for interpretation by Mars+Earth scientists
  41. 41. Meeting Replay tool: Earth scientists can browse a (simulated) Mars crew’s planning meeting using Compendium
  42. 42. this simple set of moves — combined with hypertext and mapping fluency — goes a long way… BUT…
  43. 43. Dilemma While co-located mapping is fine for ‘micro-CI’, can we scale this to support asynch. ‘macro-CI’?
  44. 44. Solution Web-based IBIS mapping
  45. 45. Numerous IBIS-based web apps http://oystr.co http://debatemapper.net http://evidence-hub.net http://litemap.net http://cci.mit.edu/klein/deliberatorium.html
  46. 46. Where our tools fit… Given a wealth of documents… 46
  47. 47. Where our tools fit… and tools to detect and render potentially significant patterns… 47
  48. 48. Where our tools fit… and tools to detect and render potentially significant patterns… 48
  49. 49. Where our tools fit: we need ways to express interpretations 49
  50. 50. 50 interpretation interpretation interpretation interpretation Where our tools fit: we need ways to express interpretations
  51. 51. 51 interpretation interpretation interpretation interpretation interpretation (a hunch – no grounding evidence yet) interpretation Where our tools fit: we need ways to express interpretations
  52. 52. …and optionally make meaningful connections 52 predicts causes interpretation interpretation interpretation interpretation interpretation (a hunch – no grounding evidence yet) interpretation Is pre-requisite for
  53. 53. 53 prevents predicts causes interpretation interpretation interpretation interpretation interpretation (a hunch – no grounding evidence yet) Is inconsistent with interpretation challenges Is pre-requisite for …and optionally make meaningful connections
  54. 54. Potentially moving towards stories that make sense of the evidence… i.e. plausible narratives / arguments 54 Question Answer Supporting Argument… Challenging Argument… challenges supports responds to Assumption motivates
  55. 55. Potentially moving towards stories that make sense of the evidence… i.e. plausible narratives / arguments 55 Question Answer Supporting Argument… Challenging Argument… challenges supports responds to Hunch motivates
  56. 56. 56 Question Answer Supporting Argument… Challenging Argument… challenges supports responds to Data motivates Potentially moving towards stories that make sense of the evidence… i.e. plausible narratives / arguments
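The connections are what make these narratives computable: each one is a typed triple whose link type carries a polarity, so software can, for instance, surface which interpretations are contested. A sketch under that assumption, with an illustrative link vocabulary:

```python
# Cohere-style connections are triples: (node) -[semantic link]-> (node),
# where each link type carries a polarity. Illustrative values only.
LINK_POLARITY = {
    "supports": +1, "causes": +1, "predicts": +1, "is-prerequisite-for": +1,
    "challenges": -1, "prevents": -1, "is-inconsistent-with": -1,
}

triples = [
    ("Interpretation A", "predicts", "Interpretation B"),
    ("Interpretation C", "challenges", "Interpretation B"),
    ("Interpretation D", "is-prerequisite-for", "Interpretation A"),
]

def contested(triples):
    """Nodes receiving at least one negative-polarity link are contested."""
    return {target for _, link, target in triples if LINK_POLARITY[link] < 0}

print(contested(triples))  # {'Interpretation B'}
```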
  57. 57. 57 Convergence of… web annotation, social bookmarking, concept mapping, structured debate: a prototype platform for collective intelligence. Opening demo (2:30-10:30): https://www.youtube.com/watch?v=hxI5jPGScoU
  58. 58. Cohere demo (2011): web annotations with discourse connections
  59. 59. Structured deliberation and debate in which Questions, Evidence and Connections are first class entities (linkable, addressable, embeddable, contestable…) 59
  60. 60. 60 Structured deliberation and debate in which Questions, Evidence and Connections are first class entities (linkable, addressable, embeddable, contestable…)
  61. 61. — web annotation of document (Firefox extension)
  62. 62. User/community-defined visual language 62
  63. 63. 63 Structured deliberation and debate in which Questions, Evidence and Connections are first class entities (linkable, addressable, embeddable, contestable…)
  64. 64. Comparison of one’s own ideas to others’ De Liddo, A., Buckingham Shum, S., Quinto, I., Bachler, M. and Cannavacciuolo, L. (2011). Discourse-Centric Learning Analytics. Proc. 1st Int. Conf. Learning Analytics & Knowledge. Feb 27-Mar 1, 2011, Banff. Does the learner compare their own ideas to those of peers, and if so, in what ways?
  65. 65. De Liddo, A., Buckingham Shum, S., Quinto, I., Bachler, M. and Cannavacciuolo, L. (2011). Discourse-centric learning analytics. 1st Int. Conf. Learning Analytics & Knowledge (Banff, Feb 27-Mar 1, 2011). ACM: New York. Eprint: http://oro.open.ac.uk/25829 What epistemic contributions are learners making in the community? 65 Rebecca is playing the role of broker, connecting different peers’ contributions in meaningful ways We now have the basis for recommending that you engage with people NOT like you…
  66. 66. Evidence Many users can make reasonable contributions to IBIS web apps without training BUT…
  67. 67. Dilemma Asynchronous online mapping is tougher to curate: no on-the-spot sensemaking from a mapper
  68. 68. Solution Familiar looking web interfaces that guide users on how to contribute good IBIS
  69. 69. Evidence Hub: structured storytelling for students, practitioners and researchers Systems Learning & Leadership Evidence Hub: http://sysll.evidence-hub.net A wizard guides the user through the submission of a structured story: • What’s the Issue? • What claim are you making/addressing? • What kind of evidence supports/challenges this? • Link it to papers/data • Index it against the core themes
  70. 70. Evidence Hub: Argument Maps Systems Learning & Leadership Evidence Hub: http://sysll.evidence-hub.net The wizard then generates a structured IBIS tree showing evidence-based claims (and disagreements)
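In other words, the wizard’s form fields map one-to-one onto an IBIS tree. A sketch of that mapping, with hypothetical field names and example content:

```python
# A structured story as the wizard might capture it (illustrative fields only)
story = {
    "issue": "How do we sustain teacher engagement with analytics?",
    "claim": "Co-design workshops increase adoption",
    "evidence": {"kind": "supports", "summary": "Uptake doubled at pilot schools"},
    "sources": ["http://example.org/pilot-report"],   # placeholder link
    "themes": ["professional development"],
}

def to_ibis(story):
    """Issue -> Claim -> Evidence, as (type, label, children) tuples."""
    ev = story["evidence"]
    return ("Issue", story["issue"], [
        ("Claim", story["claim"], [
            (ev["kind"].capitalize(), ev["summary"], []),
        ]),
    ])

print(to_ibis(story))
```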
  71. 71. Evidence Hub: professional development http://learningemergence.net/2013/07/17/deed-elli-ai-ci-systemic-school-learning Issue Potential Solution Supporting Evidence (practitioner story)
  72. 72. Dilemma: Unstructured deliberation platforms provide no scalable assistance in making sense of the collective’s progress
  73. 73. Pain Points in Social Innovation Platforms Catalyst Project Deliverable: http://catalyst-fp7.eu/wp-content/uploads/2014/02/CATALYST-Analysis-of-pain-points-and-user-feedback.pdf
  74. 74. Pain Points prioritised by orgs who run social innovation platforms Hard to visualise the debate Poor summarisation Poor commitment to action Sustaining participation Shallow contributions and unsystematic coverage Poor idea evaluation
  75. 75. Pain Points prioritised by orgs who run social innovation platforms Hard to visualise the debate Poor summarisation Poor commitment to action Sustaining participation Shallow contributions and unsystematic coverage Poor idea evaluation Effective visualisation of concepts, new ideas and deliberations is essential for shared understanding, but suffers both from a lack of efficient tools to create them and from a lack of ways to reuse them across platforms and debates “As a user, visualisation is my biggest problem. It is often difficult to get into the discussion at the beginning. As a manager of these platforms, showing people what is going on is the biggest pain point.”
  76. 76. Pain Points prioritised by orgs who run social innovation platforms Hard to visualise the debate Poor summarisation Poor commitment to action Sustaining participation Shallow contributions and unsystematic coverage Poor idea evaluation Participants struggle to get a good overview of what is unfolding in an online community debate. Only the most motivated participants will commit a lot of time to reading the debate in order to identify the key members, the most relevant discussions, etc. The majority of participants tend to respond unsystematically to stimulus messages, and do not digest earlier contributions before they make their own contribution to the debate, such is the cognitive overhead and limited time.
  77. 77. Pain Points prioritised by orgs who run social innovation platforms Hard to visualise the debate Poor summarisation Poor commitment to action Sustaining participation Shallow contributions and unsystematic coverage Poor idea evaluation Bringing motivated audiences to commit to action is difficult. Enthusiasts, those who have an interest in a subject but have yet to commit to taking action, are left behind. Need to prompt action in community members Reaching a consensus was considered less important than being enabled to act.
  78. 78. Pain Points prioritised by orgs who run social innovation platforms Hard to visualise the debate Poor summarisation Poor commitment to action Sustaining participation Shallow contributions and unsystematic coverage Poor idea evaluation Motivating participants with widely differing levels of commitment, expertise and availability to contribute to an online debate is challenging and often unproductive. Sustaining participation is more important than enlarging participation. “It is better to have quality input from a small group than a lot of members but very little content”.
  79. 79. Pain Points prioritised by orgs who run social innovation platforms Hard to visualise the debate Poor summarisation Poor commitment to action Sustaining participation Shallow contributions and unsystematic coverage Poor idea evaluation Open innovation systems tend to generate a large number of relatively shallow ideas. Poor collaborative refinement of ideas that could allow the development of more refined, deeply considered contributions. No easy way to see which problem facets remain under-covered. Very partial coverage of the solution space.
  80. 80. Pain Points prioritised by orgs who run social innovation platforms Hard to visualise the debate Poor summarisation Poor commitment to action Sustaining participation Shallow contributions and unsystematic coverage Poor idea evaluation Patchy evaluation of ideas Poor quality justification for ideas. Hard to see why ratings have been given. Unclear which rationales are evidence based.
  81. 81. Solution Activity analytics + IBIS semantics permit automated checking of the ‘health’ of a conversation
  82. 82. CI in Organisations (CSCW journal special issue) See article by Mark Klein on attention metrics
  83. 83. Crowd-scale deliberation quality metrics + alerts Lead: Mark Klein (MIT/Zurich) https://www.youtube.com/watch?v=UZMJ9mti8h0
  84. 84. Problem-Goal-Exception (PGE) analysis using IBIS syntax checking for potential weaknesses in reasoning http://catalyst-fp7.eu/wp-content/uploads/2016/01/CATALYST_WP4_D4.2b.pdf
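Because every contribution carries IBIS semantics, structural ‘health’ checks of this kind reduce to simple graph queries: open issues with no candidate ideas, ideas nobody has supported or challenged, and so on. A minimal reconstruction of the idea, with an illustrative node/link encoding; this is not the CATALYST implementation:

```python
def health_report(nodes, links):
    """Flag structural weaknesses in an IBIS map.
    nodes: {id: type}, type in {"issue", "idea", "pro", "con"}
    links: (source_id, target_id), meaning source responds to / argues about target
    """
    inbound = {}
    for src, tgt in links:
        inbound.setdefault(tgt, []).append(src)

    alerts = []
    for nid, ntype in nodes.items():
        replies = [nodes[s] for s in inbound.get(nid, [])]
        if ntype == "issue" and not replies:
            alerts.append(f"{nid}: open issue with no candidate ideas")
        if ntype == "idea":
            if "con" not in replies:
                alerts.append(f"{nid}: idea has never been challenged")
            if "pro" not in replies:
                alerts.append(f"{nid}: idea has no supporting argument")
    return alerts

nodes = {"Q1": "issue", "I1": "idea", "A1": "pro", "Q2": "issue"}
links = [("I1", "Q1"), ("A1", "I1")]
print(health_report(nodes, links))
# ['I1: idea has never been challenged', 'Q2: open issue with no candidate ideas']
```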
  85. 85. Integrating deliberation metrics in the CI dashboard http://catalyst-fp7.eu/wp-content/uploads/2016/01/CATALYST_WP4_D4.2b.pdf
  86. 86. Integrating deliberation metrics in DebateHub http://catalyst-fp7.eu/wp-content/uploads/2016/01/CATALYST_WP4_D4.2b.pdf
  87. 87. 87 “Semantic Google Scholar” — ClaimFinder Uren, V., Buckingham Shum, S., Bachler, M. and Li, G. (2006). Sensemaking Tools for Understanding Research Literatures: Design, Implementation and User Evaluation. International Journal of Human Computer Studies, 64 (5), pp. 420-445.
  88. 88. 88 ClaiMaker returns a Lineage tree (the roots of a concept)
  89. 89. Dilemma: Deliberation schemas focus attention on cold rationality, at the expense of social warmth
  90. 90. Solution Addition of social channels in an IBIS mapping web app can restore a sense of connectedness
  91. 91. L. Iandoli, I. Quinto, S. Buckingham Shum, A. De Liddo (2015), On Online Collaboration and Construction of Shared Knowledge: Assessing Mediation Capability in Computer Supported Argument Visualization Tools, Journal of the Association for Information Science and Technology, 75 (5), pp.1052-1067 Async online IBIS Mapping + Social Cues is better than IBIS alone in some respects
  92. 92. Async online IBIS Mapping + Social Cues is better than IBIS alone in some respects
  93. 93. Async online IBIS Mapping + Social Cues is better than IBIS alone in some respects
  94. 94. Solution Addition of social channels in an IBIS mapping web app can restore a sense of connectedness BUT…
  95. 95. But the group using a Ning discussion forum still outperforms Social-IBIS and Plain-IBIS [Charts: Mutual Understanding; Perceived Effectiveness of Communication. Conditions: Debate Dashboard (socially augmented Cohere mapping), Ning discussion forum, Cohere.] L. Iandoli, I. Quinto, S. Buckingham Shum, A. De Liddo (2015), On Online Collaboration and Construction of Shared Knowledge: Assessing Mediation Capability in Computer Supported Argument Visualization Tools, Journal of the Association for Information Science and Technology, 75 (5), pp.1052-1067
  96. 96. But the group using a Ning discussion forum still outperforms Social-IBIS and Plain-IBIS [Charts: Perceived Ease of Use; Accuracy of Prediction (commodity prices).] L. Iandoli, I. Quinto, S. Buckingham Shum, A. De Liddo (2015), On Online Collaboration and Construction of Shared Knowledge: Assessing Mediation Capability in Computer Supported Argument Visualization Tools, Journal of the Association for Information Science and Technology, 75 (5), pp.1052-1067
  97. 97. Writing is endlessly expressive and hard to improve on as a medium for collective reflection/argumentation (also a social process)
  98. 98. Dilemma: But we would still like the machine to do some work for us in making sense of the state of the CI process or product
  99. 99. Solution NLP could move us beyond simple forum metrics, and help make sense of the quality of contribution
  100. 100. Academic Writing Analytics: feedback on analytical/argumentative or reflective writing Info https://utscic.edu.au/tools/awa
  101. 101. 101 Highlighted sentences are colour-coded according to their broad type Sentences with Function Keys have more precise functions (e.g. Novelty) CIC’s automated feedback tool: analytical writing
  102. 102. CIC’s automated feedback tool: reflective writing An early paragraph which is simply setting the scene:
  103. 103. CIC’s automated feedback tool: reflective writing A concluding paragraph moving into professional reflection:
  104. 104. CIC’s Text Analytics Pipeline (TAP) A set of linguistic analysis modules + AWA UI → OSS release Preparation of texts: text cleaning → de-identification → indexing → metadata management Analysis of texts: • Metrics: lengths of words, sentences, paragraphs, and statistics of these • Syllables: metrics at the word level based on syllables • Named Entities: e.g. names of People, Places • Statistics: e.g. noun-verb ratio • Vocabulary: compound words, occurrences at sentence, paragraph and document level • Expressions: epistemic, self-critique and affective compound words • Spelling: feedback on spelling and basic grammar • Rhetorical moves: in analytical and reflective writing • Complexity: measures of the complexity of words, sentences and paragraphs
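A few of these modules are easy to sketch; the point is the pipeline shape (preparation stages feeding independent analysis modules), not any individual metric. Illustrative Python, not TAP’s actual code:

```python
import re

def clean(text):
    """Preparation stage: normalise whitespace (de-identification, indexing
    and metadata management are omitted from this sketch)."""
    return re.sub(r"\s+", " ", text).strip()

def metrics(text):
    """Word/sentence counts of the kind the Metrics module reports."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.split()
    return {"words": len(words), "sentences": len(sentences),
            "avg_sentence_length": len(words) / max(len(sentences), 1)}

# Toy cue list standing in for the Expressions module's epistemic lexicon
EPISTEMIC_CUES = {"perhaps", "might", "arguably", "i think"}

def expressions(text):
    """Flag epistemic (hedging) expressions in the text."""
    lower = text.lower()
    return sorted(cue for cue in EPISTEMIC_CUES if cue in lower)

def analyse(text):
    """Run preparation, then independent analysis modules, TAP-style."""
    text = clean(text)
    return {"metrics": metrics(text), "expressions": expressions(text)}

print(analyse("I think the evidence is mixed.  Perhaps we need more data!"))
# {'metrics': {'words': 11, 'sentences': 2, ...}, 'expressions': ['i think', 'perhaps']}
```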
  105. 105. Disputational talk: characterised by disagreement and individualised decision making. Few attempts to pool resources, to offer constructive criticism or make suggestions. Disputational talk also has some characteristic discourse features - short exchanges consisting of assertions and challenges or counter-assertions ('Yes, it is.' 'No it's not!'). Cumulative talk: speakers build positively but uncritically on what the others have said. Partners use talk to construct a 'common knowledge' by accumulation. Cumulative discourse is characterised by repetitions, confirmations and elaborations. Mercer, N. (2004). Sociocultural discourse analysis: analysing classroom talk as a social mode of thinking. Journal of Applied Linguistics, 1(2), 137-168. Disputational/Cumulative/Exploratory talk
  106. 106. Exploratory talk • Partners engage critically but constructively with each other's ideas. • Statements and suggestions are offered for joint consideration. • These may be challenged and counter-challenged, but challenges are justified and alternative hypotheses are offered. • Partners all actively participate and opinions are sought and considered before decisions are jointly made. • Compared with the other two types, in Exploratory talk knowledge is made more publicly accountable and reasoning is more visible in the talk. Disputational/Cumulative/Exploratory talk Mercer, N. (2004). Sociocultural discourse analysis: analysing classroom talk as a social mode of thinking. Journal of Applied Linguistics, 1(2), 137-168.
  107. 107. Discourse analytics on webinar textchat [Timeline chart, 9:28-12:05: average ‘exploratory talk’ score across the session, annotated with social chat excerpts such as “Greetings from Hong Kong”, “Morning from Wiltshire, sunny here!” and “bye for now!”] Given a 2.5 hour webinar, where in the live textchat were the most effective learning conversations? Not at the start and end of a webinar… Ferguson, R., Wei, Z., He, Y. and Buckingham Shum, S., An Evaluation of Learning Analytics to Identify Exploratory Dialogue in Online Discussions. In: Proc. 3rd International Conference on Learning Analytics & Knowledge (Leuven, BE, 8-12 April, 2013). ACM. http://oro.open.ac.uk/36664
  108. 108. Discourse analytics on webinar textchat [Same timeline chart, 9:28-12:05, with a peak highlighted.] …but if we zoom in on a peak… Ferguson, R., Wei, Z., He, Y. and Buckingham Shum, S., An Evaluation of Learning Analytics to Identify Exploratory Dialogue in Online Discussions. In: Proc. 3rd International Conference on Learning Analytics & Knowledge (Leuven, BE, 8-12 April, 2013). ACM. http://oro.open.ac.uk/36664
  109. 109. Discourse analytics on webinar textchat [Zoomed chart: contributions classified as “exploratory talk” (more substantive for learning) vs “non-exploratory”.] …language is used in a manner more akin to “Exploratory Talk” (Neil Mercer) Ferguson, R., Wei, Z., He, Y. and Buckingham Shum, S., An Evaluation of Learning Analytics to Identify Exploratory Dialogue in Online Discussions. In: Proc. 3rd International Conference on Learning Analytics & Knowledge (Leuven, BE, 8-12 April, 2013). ACM. http://oro.open.ac.uk/36664
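The published classifier was trained from annotated data, but a crude cue-phrase baseline conveys the idea: score each chat turn for markers of reasoning made visible, then average over consecutive windows to get a timeline like the one plotted above. A toy sketch with a hand-picked cue list, not the paper’s model:

```python
# Cue phrases characteristic of Mercer's exploratory talk: justification,
# challenge, hypothesis. Hand-picked toy list for illustration only.
EXPLORATORY_CUES = ["because", "but if", "i think", "what if",
                    "however", "agree", "for example"]

def score(utterance):
    """Count exploratory cues present; greetings and farewells score 0."""
    lower = utterance.lower()
    return sum(cue in lower for cue in EXPLORATORY_CUES)

def timeline(chat, window=5):
    """Average score over consecutive windows of chat turns, approximating
    the exploratory-talk timeline plotted on the slide."""
    scores = [score(turn) for turn in chat]
    return [sum(scores[i:i + window]) / window
            for i in range(0, len(scores), window)]

chat = ["Morning from Wiltshire, sunny here!",
        "I think that works because the cohort is small",
        "But if we scale it, what if the chat floods?",
        "bye for now!"]
print([score(turn) for turn in chat])  # [0, 2, 2, 0]
```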
  110. 110. © Simon Buckingham Shum 110 Notation / Visualisation User Interface Computational Services Literacy/ Fluency Discourse Model So, this is the Hypermedia Discourse design space…
  111. 111. Helpful evaluation criteria for CI platforms? Consolidation of the previous elements into 3 classes of evaluation criteria: Practitioner Fluency, Modelling Frameworks and Computing Platform, spanning criteria such as Learning Curve, Mastery, Domain, Services, Interoperability, Discourse, Interaction Design, Effectiveness and Experience. How does the Hypermedia Discourse design space and its tradeoffs compare to the SWARM platform? What can we learn from each other?
