Developing Qualitative Metrics for Visual Analytic Environments

Presenter: Jean Scholtz
BELIV 2010 Workshop Presentation
http://www.beliv.org/beliv2010/

Transcript

  • 1. Developing Qualitative Metrics for Visual Analytic Environments. Jean Scholtz, Pacific Northwest National Laboratory. BELIV 2010
  • 2. Why Are Qualitative Metrics Needed?
    • Quantitative metrics (see the sketch at the end of this slide)
      • Time to accomplish a task
      • Accuracy with which a task is done (requires ground truth)
      • Percentage of tasks that the user is able to complete
    • Qualitative information
      • Provides more knowledge about what in the software actually helps the user, and how it helps
    • Problems with obtaining qualitative information
      • It is subjective and varies by individual, so generalizations are often difficult
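    • A minimal sketch of how these quantitative metrics could be computed from per-task session logs; the TaskResult record and its field names are assumptions for illustration, not part of the VAST Challenge materials:

      from dataclasses import dataclass
      from statistics import mean

      @dataclass
      class TaskResult:
          """One user's attempt at one benchmark task (hypothetical record layout)."""
          seconds: float        # time taken to accomplish the task
          answer: str           # the user's answer
          ground_truth: str     # correct answer embedded in the data set
          completed: bool       # whether the user finished the task at all

      def quantitative_metrics(results: list[TaskResult]) -> dict[str, float]:
          """Compute the three quantitative metrics listed above."""
          finished = [r for r in results if r.completed]
          accuracy = (sum(r.answer == r.ground_truth for r in finished) / len(finished)) if finished else 0.0
          return {
              "mean_time_s": mean(r.seconds for r in finished) if finished else float("nan"),
              "accuracy": accuracy,
              "completion_rate": len(finished) / len(results) if results else 0.0,
          }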
  • 3. Qualitative Assessment and the VAST Challenge
    • In the VAST Challenge we are able to produce quantitative metrics easily (as ground truth is embedded in the data set)
    • But what we really want to know is how well the tool, particularly the visualizations, helps the user arrive at the ground truth.
    • While we have always used reviewers (both analysts and visualization researchers), the reviews were conducted informally until 2009.
    • In 2009 we used a review system, so we now have a body of reviews that we can analyze to see what is important to reviewers.
  • 4. Evaluating the Reviews
    • Our study asked the following questions:
      • What materials should be provided to evaluators to ensure that they can adequately assess the visual analytic systems?
      • What aspects of visual analytic systems are important to reviewers?
      • Is there an advantage to selecting evaluators from different domains of expertise (visualization researchers and professional analysts)?
    • We analyzed the following information to answer these questions:
      • Reviews (2-3 per entry) of 42 entries
      • Each reviewer provided a clarity rating, plus ratings for the usefulness, efficiency, and intuitiveness of the visualizations, of the analytic process, and of the interactions. Reviewers were also asked to rate the novelty of the submission.
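    • To make the rating structure concrete, a minimal sketch of one review record as just described; the field names and the 1-5 scale are assumptions for illustration, not from the challenge materials:

      from dataclasses import dataclass

      RATED_ASPECTS = ("visualizations", "analytic_process", "interactions")
      RATED_QUALITIES = ("usefulness", "efficiency", "intuitiveness")

      @dataclass
      class Review:
          """One reviewer's assessment of one VAST Challenge entry (hypothetical layout)."""
          entry_id: str
          reviewer_type: str    # "analyst" or "researcher"
          clarity: int          # clarity rating for the submission
          novelty: int          # novelty rating for the submission
          aspect_ratings: dict[str, dict[str, int]]   # aspect -> quality -> rating (1-5)
          comments: list[str]   # free-text comments

      # Example record (the entry ID and values are made up):
      review = Review(
          entry_id="entry-07",
          reviewer_type="analyst",
          clarity=4,
          novelty=3,
          aspect_ratings={a: {q: 3 for q in RATED_QUALITIES} for a in RATED_ASPECTS},
          comments=["Too much scrolling", "Need to be able to drill down to actual data"],
      )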
  • 5. What materials should be provided to evaluators to ensure that they can adequately assess the visual analytic systems?
    • Because of the number of teams, the research nature of the tools and the number of reviewers, it is not possible for reviewers to actually use the system. They rely on:
      • A textual description, including screen shots
      • A video showing how a certain answer was achieved (the process)
      • The accuracy of the team’s answers
    • And we found…
      • The materials are sufficient
      • The clarity of the submission definitely affects the score
      • Accuracy metrics impact reviewers’ scores
    • Conclusions
      • Emphasize to participants that they need to make sure their descriptions (text and video) are understandable
      • Reconsider the decision to show reviewers the accuracy scores
  • 6. What aspects of visual analytic systems are important to reviewers?
    • We analyzed the reviews to see what comments reviewers made
    • We classified these comments into three categories:
      • Analytic process
      • Visualizations
      • Interactions
    • Notes
      • There is obviously overlap, so there may be disagreements about which category a comment belongs in
      • Most comments are stated negatively; to state them positively we would need to generalize further or describe the actual situation (visualization, interaction, or process)
  • 7. Comments on Analytic Process
    • Highly manual processes
    • Repetitive steps in process
    • Large amount of data that analysts have to visually inspect
    • Automation that might cause an analyst not to see an important piece of data
    • Need for analyst to remember previously seen information
    • Too many steps
    • For automatic identification of patterns and/or behaviors, users need an explanation of what the software is programmed to identify
    • Analysts need to document their rationale for assumptions in their reports
    • Document the selection of a particular visualization if several are available
    • Show filters and transformations applied
    • Participants need to explain how the visualizations helped the analysis process
  • 8. Comments on Visualizations
    • Complexity of visualizations
    • Misleading color coding, inconsistent use of color
    • Lack of labels; non-intuitive labels
    • Non-intuitive symbols
    • Using tooltips instead of labels causes analysts to mouse over too many items
    • No coordination or linking between visualizations
    • Line thickness used to represent strength of association is difficult to differentiate
    • Difficult to compare visualizations if they can only be viewed serially
    • Is the visualization useful for analysis or is it a reporting tool?
    • Use of different scales in visualizations is confusing
    • Need to relate anomalies seen in visualizations to analysis
  • 9. Comments on Interactions
    • Too much scrolling
    • Interactions embedded in menus
    • Too many levels of menus and options to check
    • Need to be able to filter complex visualizations
    • Need to be able to drill down to actual data
  • 10. Conclusions
    • Reviewers were asked to comment on the categories of process, visualization and interaction
      • They provided many comments but the comments were not always in the appropriate category
    • Reviewers were asked to comment on efficiency, usefulness and intuitiveness
      • They did mention many issues impacting each of these qualities, but we did not use the individual ratings as we had expected
    • For VAST 2010 Challenge we are:
      • Providing the guidelines for teams (from this study plus another analyst study)
      • Providing definitions of process, visualization and interaction
      • Looking forward to identifying more guidelines (pertaining to new types of data)
  • 11. Is there an advantage to selecting evaluators from different domains of expertise?
    • Are there differences between what visualization researchers say and what analysts say?
    • We looked at entries where there were large differences in scores between the analyst and the visualization researchers (see the sketch at the end of this slide)
    • Eight entries (1 analyst review, 2 researcher reviews each)
      • The analyst's ratings were lower in 2 instances
      • The visualization researchers' ratings were lower in 5 instances
      • In 1 instance, the analyst and one visualization researcher rated lower than the other visualization researcher
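    • A rough sketch of how such disagreements could be flagged from per-review overall scores; the score triples and the 1-point threshold are assumptions for illustration, not from the presentation:

      from statistics import mean

      # (entry_id, reviewer_type, overall_score); reviewer_type is "analyst" or "researcher".
      ReviewScore = tuple[str, str, float]

      def large_disagreements(scores: list[ReviewScore], threshold: float = 1.0) -> list[str]:
          """Entry IDs where analyst and researcher mean scores differ by more than `threshold`."""
          by_entry: dict[str, dict[str, list[float]]] = {}
          for entry_id, reviewer_type, score in scores:
              by_entry.setdefault(entry_id, {}).setdefault(reviewer_type, []).append(score)
          flagged = []
          for entry_id, groups in by_entry.items():
              if "analyst" in groups and "researcher" in groups:
                  gap = abs(mean(groups["analyst"]) - mean(groups["researcher"]))
                  if gap > threshold:
                      flagged.append(entry_id)
          return flagged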
  • 12. Why Visualization Researchers Gave Lower Ratings
    • Comments
      • The tool does not seem flexible enough to investigate other scenarios
      • Should a way to quickly browse through video snippets be considered a visualization?
      • One visualization is provided. More are needed to look at other data.
      • The analytic process was not well described
      • Suspicious events were highlighted in the visualization but it was unclear how they were found
      • The visualization is distracting and not useful for analysis
      • The analytic process is described in terms of using this tool to detect certain kinds of events, without justifying whether this type of event is related to the mini challenge question
      • The visualization was too compressed. It was difficult to see groupings.
      • Tool required an iterative process and it was difficult to remember what had been done.
    • Conclusions
      • Visualization researchers are gaining a good understanding of what users need
  • 13. Overall Conclusions
    • Reviewers have the appropriate material to assess the submission, assuming the material is clear and understandable
    • Analysts and visualization researchers provide excellent comments on aspects of the system (analytic process, visualizations, and interactions), although the comments are not always classified correctly
    • There is little or no difference between comments provided by analysts and those provided by visualization researchers