Proposed Working Memory Measures for Evaluating Information Visualization Tools
Laura Matzen, Laura McNamara, Kerstan Cole, Alisa Bandlow, Courtney Dornburg & Travis Bauer
Sandia National Laboratories, Albuquerque, NM 87185

This work was funded by Sandia's Laboratory Directed Research and Development Program as part of the Networks Grand Challenge (10-119351). Sandia National Laboratories is a multi-program laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Evaluation of information visualization tools
- Evaluations are typically developed for a single, specific task and tool
- They are time-consuming and expensive, and their results cannot be generalized
- We propose using measures of cognitive resources to create standardized evaluation metrics
Why assess cognitive resources?
- All analysis tasks are cognitively demanding
- Human cognitive resources are finite
- A well-designed interface should free cognitive resources for making sense of the data by reducing the cognitive burden of searching for and manipulating it
Working memory
- The mental workspace underlying all complex cognition
- Has a limited, measurable capacity
- Often used as a performance metric in other domains
Proposed methodology
Evaluate visual analytics interfaces using a dual-task methodology:
- Primary task: interaction with the interface
- Secondary task: a test of working memory capacity
Performance on the secondary task should correspond to the cognitive resources that would be available for sensemaking in a real-world analysis task.
Example working memory task
The Sternberg task (Sternberg, 1969): memory set → delay → probe items
- Low-load memory set: M G J
- High-load memory set: D K H Y R Q
- Probe items: M G J F M W
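The trial structure described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration of the Sternberg task logic, not the authors' actual experimental software; the function names, set sizes, and delay handling are assumptions for the sketch.

```python
import random
import string

def make_memory_set(load):
    """Draw `load` distinct consonants for the participant to memorize
    (e.g. 3 letters for low load, 6 for high load)."""
    consonants = [c for c in string.ascii_uppercase if c not in "AEIOU"]
    return random.sample(consonants, load)

def run_trial(memory_set, probe):
    """Return the correct response for one probe: was it in the memory set?

    In the real task the set is displayed, removed for a delay interval,
    and then the probe appears; accuracy and response time are recorded.
    In the proposed dual-task setup, these trials would run as the
    secondary task while the participant works with the interface.
    """
    return probe in memory_set

# Example trial under low load, mirroring the set on the slide:
low_set = ["M", "G", "J"]
assert run_trial(low_set, "M") is True   # probe present in the set
assert run_trial(low_set, "W") is False  # probe absent from the set
```

Under the dual-task logic, poorer accuracy or slower responses on these probes while using one interface would suggest that interface leaves fewer spare working memory resources.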
Comparisons of different interface designs
[Diagram: the same working memory probes (M, W, F) administered while participants use different interface designs]

Later, compare different visualizations of the same dataset
[Diagram: the same probes used to compare visualizations of a single dataset]
By this time next year…
- Pilot Study 1: compare two interface designs for a simple video player with a tagging feature
- Pilot Study 2: compare two interface designs for a visual text analytics application developed at Sandia
- Use the NASA TLX to develop convergent evidence
Both studies should provide insight into the use of working-memory-based metrics for interface assessment. …we should have data!
Comments, please!
