Evaluating What Really Counts:
Expert Users Assess Data Access Mechanisms and Interfaces

Peter Sommer
Director of Education
Center for New Media Teaching and Learning
Columbia University

Linda Catalano
Doctoral Candidate
Department of Sociology
Columbia University
A panel of expert users (social science data librarians with a minimum of
four years of professional experience in the field) was interviewed to:

- Identify what elements make for a good information querying and
  retrieval system
- Assess the OGIRS and AskCal systems according to the criteria
  identified above.
The panel cited the following elements as contributing to a good
information querying and retrieval system:

- An ability to search a number of related terms simultaneously (as
  opposed to separately, in consecutive searches)
- An ability to “see where you are” in the system
- Explanations of and/or background information on both the data sought
  and the data found, e.g.:
  - Definitions of terms
  - Sources of data
  - How the data are computed
- Helpful feedback related to user actions
- Indication of required search parameters (a minimal sketch of such a
  check follows this list), e.g.:
  - Geographic area
  - Time frame
  - Unit of measure
- A consistent user-system interface
- A record of user actions and corresponding system responses
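
To make the “required search parameters” criterion concrete, here is a
minimal sketch (the parameter names are our own assumptions, not drawn
from OGIRS or AskCal) of a check that tells the user which required
parameters a query still lacks:

    # Hypothetical sketch: checking a query against the search parameters
    # the panel identified as required (geography, time frame, unit of
    # measure). Parameter names are invented for this illustration.
    REQUIRED_PARAMETERS = ("geographic_area", "time_frame", "unit_of_measure")

    def missing_parameters(query: dict) -> list:
        """Return the required parameters the user has not yet supplied."""
        return [p for p in REQUIRED_PARAMETERS if not query.get(p)]

    if __name__ == "__main__":
        # A partially specified query: the time frame is still missing.
        query = {"geographic_area": "United States",
                 "unit_of_measure": "cents per gallon"}
        print("Please specify:", ", ".join(missing_parameters(query)))

A system that behaves this way can prompt for exactly the missing piece
rather than failing silently, which is the kind of feedback the panel
asked for.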
The panel then developed a query* based on the New York Times editorial
page of that day** and sought to answer it using two different energy
data access mechanisms:

- OGIRS (Oil and Gas, Extracting Data), an Access database that presents
  an interface of “peepholed” hierarchical categories; and
- AskCal (Philpot et al.), in which data are mediated by a set of
  conceptual categories linked into an ontology, with both a natural
  language and a structured menu interface (a toy illustration of this
  kind of category mapping follows the footnotes below).
*“When did the price of gas go over 75 cents per gallon?”
**5/19/04.
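
AskCal’s internals were not examined in this evaluation; purely as a
hedged illustration of what “data mediated by a set of conceptual
categories linked into an ontology” can mean in practice, the toy sketch
below maps words in the panel’s question onto a small, invented set of
categories, the way an ontology-backed front end might seed its
structured menus. The categories and keywords are made up for this
example and are not AskCal’s.

    # Illustrative only: a toy keyword-to-category mapping, not AskCal's
    # actual ontology. Category and keyword lists are invented.
    CATEGORY_KEYWORDS = {
        "product":   {"gas", "gasoline", "oil", "crude"},
        "measure":   {"price", "cost"},
        "geography": {"u.s.", "united states", "national"},
        "time":      {"when", "year", "month"},
    }

    def categorize(question):
        """Map words in the question onto the conceptual categories they match."""
        words = set(question.lower().replace("?", "").split())
        return {cat: words & kws
                for cat, kws in CATEGORY_KEYWORDS.items() if words & kws}

    if __name__ == "__main__":
        print(categorize("When did the price of gas go over 75 cents per gallon?"))
        # -> {'product': {'gas'}, 'measure': {'price'}, 'time': {'when'}}

Each matched category could then pre-populate the corresponding
structured menu, leaving the user to confirm or refine the choices.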
The panel was presented with the opening query screen of each system.
Panelists were given no additional directions; rather, they were asked
to tell us how they might best proceed.

The OGIRS search was soon abandoned. The
AskCal search was successfully completed, returning
satisfactory answers in two series.
The panel attributed several advantages to the AskCal system,
specifically:

- An obvious starting point
- An intuitive transition between natural language and menu-driven query
  formulation and refinement
- An automated text response to the natural language query, which
  provided helpful feedback as to the required search parameters
- Immediate and continuous visibility of all system components, i.e.,
  the natural language text field, category menus, user inputs, and
  corresponding search results
- The ability to search for related terms simultaneously from a single
  screen
- Once results were narrowed down, information on data sources
- Notation of changes (feedback) in the natural language query box in
  response to menu selections (sketched after this list), leading to...
- A strong ability to learn how to query the system to the greatest
  advantage.
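
The feedback loop described in the last two bullets can be pictured with
a small, hypothetical sketch: each menu selection re-renders the
natural-language query box so the user can see how their choices reshape
the question. This is not AskCal’s actual implementation, only an
illustration of the interaction; the class and method names are ours.

    # Hypothetical sketch of the feedback the panel valued: whenever a menu
    # selection changes, the natural-language query box is re-rendered so
    # the user sees how menu choices reshape the question.
    class QueryBox:
        def __init__(self, question):
            self.question = question
            self.selections = {}

        def select(self, menu, value):
            """Record a menu selection and return the updated query text."""
            self.selections[menu] = value
            return self.render()

        def render(self):
            if not self.selections:
                return self.question
            detail = "; ".join(f"{k} = {v}" for k, v in self.selections.items())
            return f"{self.question} [{detail}]"

    if __name__ == "__main__":
        box = QueryBox("When did the price of gas go over 75 cents per gallon?")
        print(box.select("geography", "United States"))
        print(box.select("unit of measure", "cents per gallon"))

Because every selection is echoed back into the query text, users can
learn by inspection how their menu choices translate into the system’s
query language.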
Panelists were next introduced to a third, wholly different yet
complementary system, GetGloss (Co et al.), a meta-glossary of energy
terms pooled from sources across the internet, and were asked to imagine
whether using it in conjunction with a system like AskCal would assist
information retrieval.
The response to GetGloss, as an appendix to AskCal, was overwhelmingly
positive. Among the benefits panelists envisioned it providing were:

- Identification and/or specification of search parameters
- Demonstration of variations in terminology across data sources, which,
  together with AskCal’s features, would provide
- A bridge between user/consumers’ “everyday” language and experts’
  generally more specific language (a toy glossary lookup illustrating
  this bridge follows this list).
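
As a hedged illustration of that bridge, the toy lookup below maps an
everyday term to more specific expert variants and records the source of
each. The glossary entries and source names are invented for this
example, not taken from GetGloss.

    # Toy meta-glossary sketch (terms, variants, and sources are invented)
    # showing how a GetGloss-style lookup could bridge everyday wording and
    # the more specific terminology used by individual data sources.
    GLOSSARY = {
        "gas": [
            {"term": "motor gasoline, retail price", "source": "Source A (hypothetical)"},
            {"term": "regular conventional gasoline", "source": "Source B (hypothetical)"},
        ],
    }

    def lookup(everyday_term):
        """Return expert variants of an everyday term, with the source of each."""
        entries = GLOSSARY.get(everyday_term.lower(), [])
        return [f"{e['term']}  ({e['source']})" for e in entries]

    if __name__ == "__main__":
        for line in lookup("gas"):
            print(line)

Hyperlinking each variant back to its source, as the panel recommends
below, would let users move from everyday wording to the exact series a
given agency publishes.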
Panelists were eager to see a system developed that unites the features
and functions of AskCal and GetGloss. This evaluation suggests a number
of recommendations for doing so:

- Open with the natural language query string only
- Respond to user inputs with bold changes in format, e.g., new text
  marked in red
- Maintain a record of user actions and system responses (feedback) as a
  scrollable “chat” (a minimal sketch of such a transcript follows this
  list)
- Hyperlink GetGloss terms to descriptions of information source
  agencies and the means of computation for particular statistical
  results
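
A minimal sketch of the recommended “chat”-style record, assuming
nothing about how the combined system would actually be built: every
user action and system response is appended to a single transcript that
the user can scroll back through.

    # Minimal sketch of the recommended "chat"-style history: each user
    # action and system response is appended to one scrollable transcript.
    from dataclasses import dataclass, field

    @dataclass
    class Transcript:
        entries: list = field(default_factory=list)

        def log(self, actor, text):
            self.entries.append(f"{actor}: {text}")

        def render(self, last=10):
            """Return the most recent entries, oldest first, like a chat window."""
            return "\n".join(self.entries[-last:])

    if __name__ == "__main__":
        t = Transcript()
        t.log("user", "When did the price of gas go over 75 cents per gallon?")
        t.log("system", "Please specify a geographic area and a unit of measure.")
        t.log("user", "geography = United States")
        print(t.render())

Keeping the history in one place gives the user both the “see where you
are” orientation and the record of actions and responses that the panel
listed among its criteria.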
