Conversational Sensemaking (Preece and Braines)

Alun Preece from Cardiff University and Dave Braines from IBM, presenting "Conversational Sensemaking" on the weekly Cognitive Systems Institute Speaker Series call, January 14, 2016.

  1. Conversational Sensemaking
     Alun Preece, Will Webberley (Cardiff)
     Dave Braines (IBM UK)
  2. The International Technology Alliance
     2006–2016: Fundamental US/UK research into Network and Information Science to support
     coalition operations. Our ongoing research is funded by the US Army Research Laboratory
     and the UK Ministry of Defence. See http://usukita.org
  3. Introduction
  4. The story so far
     • Human-centric sensing (2012)
       Srivastava, M., Abdelzaher, T., & Szymanski, B. (2012). Human-centric sensing. Philosophical
       Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences,
       370(1958), 176-197.
     • CE-SAM: a conversational interface for ISR mission support (2013)
       Pizzocaro, D., Parizas, C., Preece, A., Braines, D., Mott, D., & Bakdash, J. Z. (2013, May).
       CE-SAM: a conversational interface for ISR mission support. In SPIE Defense, Security, and
       Sensing (pp. 87580I-87580I). International Society for Optics and Photonics.
     • Human-machine conversations to support multi-agency missions (2014)
       Preece, A., Braines, D., Pizzocaro, D., & Parizas, C. (2014). Human-machine conversations to
       support multi-agency missions. ACM SIGMOBILE Mobile Computing and Communications Review,
       18(1), 75-84.
     • Conversational sensing (2014)
       Preece, A., Gwilliams, C., Parizas, C., Pizzocaro, D., Bakdash, J. Z., & Braines, D. (2014,
       May). Conversational sensing. In SPIE Sensing Technology + Applications (pp. 91220I-91220I).
       International Society for Optics and Photonics.
     • Conversational sensemaking (2015)
  5. Pirolli & Card: "The sensemaking process for intelligence analysis"
     Foraging loop
     • Gather and assemble data, present as evidence
     • Less focus on structure and formality
     Sensemaking loop
     • Schematize evidence, connect to hypotheses
     • Inform decision making
     • Support sharing and
  6. Reimagining Data-to-Decision
     • Data sources are increasingly "smart" and communicative
     • Decision-makers can operate much nearer to the tactical edge
     • Humans can be sensors too; and effectors when appropriate
     [Figure: data sources, analytic services, decision maker]
     The traditional data-to-decision pipeline can be re-thought as peer-to-peer interactions
     between human and machine agents with different specialisms and focus areas.
  7. Back to Pirolli & Card
     We envisage the sensemaking process underpinned by a conversational interaction between
     teams of human and machine agents.
     • Supports forward and backward flows
     • Provides some structure from the start
     • A less segmented view of the world?
     • Enables co-construction of information artifacts
     • Structure can increase as the conversation evolves
  8. Human-Machine Conversational Model
  9. Background: Format for conversation
     An appropriate form for human-machine interaction is a challenge:
     • humans prefer natural language (NL) or images
     • these forms are difficult for machines to process, leading to ambiguity and miscommunication
     Compromise: controlled natural language (CNL), here ITA Controlled English (CE): low complexity,
     no ambiguity. For example:
       there is a person named p1 that is known as 'John Smith' and is a person of interest.
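
As an aside not in the original deck: a minimal sketch, assuming a single "there is a ... named ..." sentence shape, of how such a CE fact might be mapped into a structured form. The pattern, function name and field layout are illustrative assumptions, not the ITA CE toolchain.

```python
import re

# Illustrative pattern for one CE fact shape:
#   "there is a <concept> named <id> that <phrase> and <phrase> ... ."
CE_FACT = re.compile(r"there is an? ([\w ]+?) named (\S+) that (.+)\.$")

def parse_ce_fact(sentence: str):
    """Map a simple CE fact sentence to (concept, instance id, list of property phrases)."""
    match = CE_FACT.match(sentence.strip())
    if match is None:
        return None
    concept, instance, rest = match.groups()
    properties = [phrase.strip() for phrase in rest.split(" and ")]
    return concept, instance, properties

example = ("there is a person named p1 that is known as 'John Smith' "
           "and is a person of interest.")
print(parse_ce_fact(example))
# -> ('person', 'p1', ["is known as 'John Smith'", 'is a person of interest'])
```
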
  10. Our conversational model
     • Draws on research in agent communication languages and philosophical linguistics (speech acts)
     • We envisage valuable conversations between:
       – Human and machine, with mediation between natural language (NL) and CE to allow unambiguous
         but human-friendly exchanges
       – Machine and human, asking the human for more information or informing them of relevant
         details as appropriate; often "gist" (computed NL) form is useful here
       – Machine and machine*, exchanging information between software agents and/or pre-existing
         systems; use of CNL enables easier human oversight
     Interaction types: ask/tell, confirm, why, gist/expand
     * Also human and human, but that is not covered here
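
Purely as an illustration (the class and field names below are assumptions, not the deck's implementation), the ask/tell, confirm, why and gist/expand interaction types could be represented as typed messages exchanged between agents:

```python
from dataclasses import dataclass
from enum import Enum

class Act(Enum):
    """Speech-act style interaction types from the conversational model."""
    ASK = "ask"          # request information
    TELL = "tell"        # provide information (NL or CE)
    CONFIRM = "confirm"  # echo the machine's CE interpretation back to the human
    WHY = "why"          # request rationale for a statement
    GIST = "gist"        # computed-NL summary of CE content
    EXPAND = "expand"    # full CE behind a gist

@dataclass
class Message:
    sender: str    # human or machine agent identifier
    receiver: str
    act: Act
    content: str   # NL or CE payload

# Example: a human tells the machine something in NL; the machine confirms its CE interpretation.
human_turn = Message("analyst1", "moira", Act.TELL,
                     "a protest is happening in Central Square")
machine_turn = Message("moira", "analyst1", Act.CONFIRM,
                       "there is a protest named p1 that is located at the place 'Central Square'.")
print(machine_turn.act.value, "->", machine_turn.content)
```
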
  11. Bag-of-words NLP
     • The purpose of the conversational interaction is to allow humans to use natural language (NL)
     • NL is converted to CE through simple "bag of words" NL processing
       – Consult the knowledge base for matches and synonyms
       – Covering the model (concepts, relations, rules) and the "facts"
     • Confirmation of interpretation can (optionally) be sent to the user
       – Confirmation is in CE; the machine format, but human readable
       – Not always appropriate to share
     • The model can be expanded through the conversation too
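
The bag-of-words matching step might look roughly like the sketch below. The toy term/synonym table is invented for the example; the actual system consults its CE knowledge base for concepts, relations, rules and facts.

```python
# Toy knowledge-base vocabulary: surface terms (including synonyms) mapped to known concepts.
KNOWN_TERMS = {
    "protest": "protest",
    "demonstration": "protest",     # synonym -> concept
    "square": "Central Square",     # matches a known place instance
    "crowd": "group",
}

def nl_to_matches(nl_input: str):
    """Return knowledge-base concepts whose terms (or synonyms) occur in the NL input."""
    words = {word.strip(".,!?'\"").lower() for word in nl_input.split()}
    return sorted({concept for term, concept in KNOWN_TERMS.items() if term in words})

print(nl_to_matches("There is a demonstration near the square"))
# -> ['Central Square', 'protest']
```
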
  12. Examples of conversation
     • In our ongoing research we have applied our conversational interactions to the following
       scenarios:
       – "SPOT" reporting
       – Crowd-sourced information gathering
       – Asset tasking
       – Hard/soft information fusion
     • The potential benefits could include:
       – Improved agility
       – Reduced training
       – Improved effectiveness for human/machine hybrid teams
  13. Conversational Foraging
  14. Introducing MOIRA
     • "Moira" – Mobile Intelligence Reporting Application
     • A machine agent able to engage in conversation
     • Access to the CE knowledge base
       – Can read all available knowledge, explore and answer questions
       – Can help the human user contribute new knowledge (model, fact, rule)
     • Contextual operation
       – Aware of the user's role, location and status
       – Able to alert the user to "interesting" information
  15. Three initial experiments
     • 20 untrained student participants viewed a series of scenes and described them to Moira via
       confirm interactions
       – 137 NL scene descriptions in 15 min
       – Median CE elements per NL input = 2
     • 39 untrained student participants crowdsourced answers to 54 questions about synthetic and
       natural situations in multiple locations
       – 718 NL inputs yielding 479 CE inputs in 30 min
       – 69% of users had > 1 point
     • 18 members of the public crowdsourced answers to 30 "television trivia" questions at a BBC
       festival event
       – 101 NL answers yielding 62 CE confirms
     [Figure: histogram of score frequencies]
  16. Enriching the shoebox
     • The shoebox is central to the foraging loop
     • A "messy" store of information drawn from external data
     • Our "semantic" shoebox:
       – Contains data from multiple sources – NL and CE
       – Some low-level schema exist
       – Able to extend the schema at run-time
       – Human (or machine) users can add new data or new sources
       – Inferences can be made
       – Rationale and provenance can be available
     • This semantic shoebox can be iteratively refined
       – From low -> high value CE
       – Serving the sensemaking loop too
       – Can store hypotheses, presentation models and much more
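
A minimal sketch of a single shoebox entry, assuming one record per item. The field names (source, NL text, optional CE form, provenance links) are illustrative rather than the deck's actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ShoeboxItem:
    """One item in the semantic shoebox: raw NL, an optional CE form, source and provenance."""
    item_id: str
    source: str                          # e.g. a human reporter, a tweet feed, a sensor agent
    nl_text: str                         # original natural-language content
    ce_text: Optional[str] = None        # CE form, once confirmed or inferred
    derived_from: List[str] = field(default_factory=list)  # ids of items this was inferred from

shoebox: List[ShoeboxItem] = [
    ShoeboxItem("i1", "analyst1", "big crowd gathering in Central Square",
                ce_text="there is a protest named p2 that is located at the place 'Central Square'."),
]
# Later refinement (low -> high value CE) can add items whose derived_from records the provenance.
shoebox.append(ShoeboxItem("i2", "inference-agent", "possible unexpected protest",
                           derived_from=["i1"]))
```
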
  17. A sensemaking blackboard
     • This "semantic shoebox" is actually a sensemaking blackboard
       – An open "sandpit" blackboard; not task/solution specific
     • The agents:
       – Human users
         • Define/extend the model
         • Capture local knowledge & insight
         • Direct agent activities
       – Machine agents
         • Execute logical inference rules (general)
         • Existing software algorithms (specialised)
         • Control through triggers, alerts, commands etc
     • The single language is ITA CE, with "rationale" for explanation
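
The control style described here, machine agents reacting to triggers on a shared store, could be sketched as below. The trigger mechanism and agent are simplified assumptions for illustration, not the ITA blackboard implementation.

```python
# A shared fact store plus machine agents that fire when a newly posted fact matches a trigger.
facts = set()
agents = []   # list of (trigger substring, handler) pairs

def register_agent(trigger: str, handler):
    """Register a machine agent to run whenever a new fact contains the trigger text."""
    agents.append((trigger, handler))

def post(fact: str):
    """Any agent (human or machine) posts a fact; matching machine agents may post derived facts."""
    facts.add(fact)
    for trigger, handler in agents:
        if trigger in fact:
            derived = handler(fact)
            if derived and derived not in facts:
                post(derived)

register_agent("is located at the place 'Central Square'",
               lambda fact: "alert: there is activity of interest at the place 'Central Square'.")

post("there is a protest named p2 that is located at the place 'Central Square'.")
print(sorted(facts))
```
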
  18. NATO protest example
     • Prior to the event we modeled protests and events
     • Instances can be added by any agent
         conceptualise an ~ event ~ E that
           has the time ST as ~ start time ~ and
           has the time ET as ~ end time ~ and
           ~ involves ~ the agent A and
           ~ is located at ~ the place P.
         conceptualise a ~ protest ~ P that is an event.
         there is a protest named 'Central Square protest' that
           has the time 4-9-2014-12:00 as start time and
           involves the group 'Blue Group' and
           is located at the place 'Central Square'.
     • During the event we unearthed the important difference between expected and unexpected
       protests
       – Real-time model update was made
       – Rule was written to detect unexpected protests
       – Alerting of unexpected protests
     • Sometimes they can be detected from text analysis of tweets
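
One way the "unexpected protest" check could behave, sketched in plain Python rather than CE. The pre-event set and the second place name below are invented for the illustration.

```python
# Protests modelled before the event, keyed by (place, date); anything reported that is not in
# this set would be flagged as unexpected and could trigger an alert.
expected_protests = {("Central Square", "4-9-2014")}

def is_unexpected(place: str, date: str) -> bool:
    """A reported protest is unexpected if no matching protest was modelled prior to the event."""
    return (place, date) not in expected_protests

print(is_unexpected("Central Square", "4-9-2014"))  # False: modelled in advance
print(is_unexpected("Station Plaza", "4-9-2014"))   # True: would raise an "unexpected protest" alert
```
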
  19. Conversational Sensemaking
  20. Blurring the boundaries
     • In Pirolli & Card the distinction between foraging and sense-making is clear
     • Distinct interactions between the loops are possible
     • Human and machine tasks are acknowledged but separate
     • Through conversation and our "blackboard" approach we:
       – Support rich multi-agent integration
       – Enable flows between different loops and phases
       – Grow the shoebox upwards
       – Drive (some) schema downwards
     • Agile models & human-friendly formats to encourage more active
  21. Adding context
     During our field exercise we noted that:
     • Key influencers can be identified
     • Data relating to events can be found
     • A range of possible values may be presented (e.g. for crowd size)
     • Conscious and subconscious biases may be present
     Approaches to identify (and potentially quantify) biases exist
     • We modeled "stance" for key influencers
     • Pro-NATO and Anti-NATO
     • Knowledge of this "stance" is important contextual knowledge for human observers and
       machine agents
     This is a good example of closing the
  22. Moving to richer models
     • We can "grow the shoebox" as we progress to higher levels
     • Rather than "increased schematization" we introduce richer models, or refine models through
       conversation
     • Hypotheses can be modeled, subjectivity can be captured (or computed)
       – Including propagation through inference or other computation
     • Rationale (asking "why?") can link higher level models to lower level information
     • Related work:
       – "Collaborative human-machine analysis using a controlled natural language" – Mott et al.
       – Argumentation, trust and subjective logic
  23. Presenting through storytelling
     • Apply narrative framings to the body of knowledge
     • Also expressible in CE
       – A generalised abstraction of storytelling that can be applied to any domain
       – Organising the domain into an episodic sequence
       – Applying additional multi-modal information
     • Using connected hypotheses, evidence and data to tell a story
     • Asking "why?" to uncover rationale for information
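
As a rough illustration of the episodic framing (the episode titles and items are invented for the example, not content from the deck):

```python
# Organise connected hypotheses, evidence and data into an episodic sequence for presentation.
episodes = [
    ("Before the event", ["protest modelled for Central Square", "key influencers identified"]),
    ("During the event", ["crowd-size estimates reported", "unexpected protest alert raised"]),
    ("After the event",  ["hypotheses reviewed", "rationale captured by asking 'why?'"]),
]

for number, (title, items) in enumerate(episodes, start=1):
    print(f"Episode {number}: {title}")
    for item in items:
        print(f"  - {item}")
```
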
  24. Wrapping up
  25. Summary
     • Envisage the Pirolli & Card feedback loops as a series of human-machine conversations
     • Helping to harness each agent's strengths?
       – Humans: interpreting & hypothesising
       – Machines: large-scale data, pattern collection
     • Rationale to promote transparency and trust
     • Enabling debate and argument
       – Reveal conflicts (and agreements)
       – Explore (and maybe reconcile) differences
     • Current focus is on text communications
     • Future experiments:
       – Mix of human and machine-based sensing
       – Grow links with argumentation research
  26. Conversational Sensemaking
     Originally presented at: SPIE DSS 2015 – Next Generation Analyst III (Human Machine Interaction)
     Any questions?
     Research was sponsored by US Army Research Laboratory and the UK Ministry of Defence and was
     accomplished under Agreement Number W911NF-06-3-0001. The views and conclusions contained in
     this document are those of the authors and should not be interpreted as representing the
     official policies, either expressed or implied, of the US Army Research Laboratory, the U.S.
     Government, the UK Ministry of Defence, or the UK Government. The US and UK Governments are
     authorized to reproduce and distribute reprints for Government purposes notwithstanding any
     copyright notation hereon.
     Many of the examples in this paper were informed by collaborative work between the authors and
     members of Cardiff University's Police Science Institute, http://www.upsi.org.uk. We especially
     thank Martin Innes, Colin Roberts, and Sarah Tucker for their valuable insights on policing and
     community reaction.
