Evaluating E-Reference: An Evidence-Based Approach
Elaine Lasda Bergman and Irina I. Holden
University at Albany
Presentation for Reference Renaissance, Denver, CO, August 10, 2010
Overview
- What is Evidence-Based Librarianship?
- Methods
  - What constitutes “evidence”?
  - Systematic reviews and analyses
- Systematic Review Process
  - Research question
  - Database search
  - Article review
  - Critical appraisal
  - Synthesize, analyze, discuss
Overview (continued)
- Results of our review
  - Methods of determining user satisfaction
  - Comparison of variables
  - Range of results
- Conclusions, lessons learned
  - About evidence-based librarianship
  - About research quality
  - About user satisfaction with electronic reference
What is Evidence-Based Librarianship?
Booth and Brice’s definition of Evidence-Based Information Practice:
“The retrieval of rigorous and reliable evidence to inform… decision making” (Booth and Brice, ix)
What is Evidence-Based Librarianship (EBL)?
History:
- Gained traction in medical fields in the 1990s and spread to the social sciences after that
- Medical librarians were the first to bring this approach to LIS research
- Increasingly used in the social sciences and information/library science
Source: Booth and Brice, ix.
Don’t we ALREADY use “evidence”?
- Evidence is “out there, somewhere,” in disparate locations: many different journals, many different researchers
- Evidence is not summarized, readily available, or synthesized
- There has been no formal, systematized, concerted effort to quantify whether there is a real pattern or just our general sense of things
Hierarchy of “Evidence”
Source: http://ebp.lib.uic.edu/applied_health/?q=node/12
Systematic Reviews vs. Literature Reviews
Systematic Reviews: When Are They Useful?
- When there is too much information in disparate sources
- When there is too little information and it is hard to find all of the research
- To help achieve consensus on debatable issues
- To plan for new research
- To provide teaching/learning materials
Process of Systematic Review
1. Formulate research question
2. Database search
3. Review results
4. Critical appraisal
5. Analysis
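Purely as an illustration of how these five stages chain together, here is a minimal Python sketch; every function, field, and record in it is a hypothetical stand-in, not the authors’ actual workflow.

```python
# Hypothetical sketch of the five-stage pipeline; all names and data are stand-ins.

def database_search(question: str) -> list[dict]:
    """Stage 2: stand-in for querying LISTA, LISA, and ERIC."""
    return [
        {"title": "User satisfaction with chat reference", "peer_reviewed": True},
        {"title": "Implementing an IM reference service", "peer_reviewed": True},
    ]

def review_results(records: list[dict]) -> list[dict]:
    """Stage 3: screen abstracts against inclusion/exclusion criteria (simplified)."""
    return [r for r in records if "satisfaction" in r["title"].lower()]

def critical_appraisal(records: list[dict]) -> list[dict]:
    """Stage 4: keep only studies passing a quality checklist (stubbed)."""
    return [r for r in records if r["peer_reviewed"]]

def analyze(records: list[dict]) -> None:
    """Stage 5: synthesize whatever survives screening and appraisal."""
    print(f"{len(records)} studies enter the synthesis")

# Stage 1: formulate the research question.
question = "What is the level of satisfaction of patrons who utilize digital reference?"
analyze(critical_appraisal(review_results(database_search(question))))
```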
Research Questions
Research question formulation includes:
- A description of the parties involved in the studies (e.g., librarians and patrons)
- What was being studied (e.g., effectiveness of instructional mode)
- The outcomes and how they can be compared
- What data should be collected for this purpose (e.g., student surveys or pre/post tests)
Our Research Questions
1. What is the level of satisfaction of patrons who utilize digital reference?
2. What are the measures researchers use to quantify user satisfaction, and how do they compare?
Database Search
- LISTA (EBSCO platform): 123 articles retrieved
- LISA (CSA platform): 209 articles retrieved
- ERIC: no unique studies retrieved
Working with Results
- 279 results after de-duplication
- Only format retrieved: journal articles
- Abstracts were reviewed applying inclusion and exclusion criteria
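As a sketch of what the de-duplication step can look like in practice (assuming each database export is a list of records with a title field; the field names and records here are hypothetical), overlapping hits from the databases can be collapsed on a normalized title key:

```python
import re

def normalize(title: str) -> str:
    """Lowercase and strip punctuation so near-identical titles compare equal."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def dedupe(*result_sets: list[dict]) -> list[dict]:
    """Merge database exports, keeping the first record seen for each title."""
    seen: dict[str, dict] = {}
    for records in result_sets:
        for record in records:
            seen.setdefault(normalize(record["title"]), record)
    return list(seen.values())

lista = [{"title": "Evaluating Chat Reference!", "db": "LISTA"}]
lisa = [{"title": "Evaluating chat reference", "db": "LISA"}]
print(len(dedupe(lista, lisa)))  # 1 -- the two hits collapse to one record
```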
Inclusion/Exclusion Criteria
- Should be predetermined at the beginning of the study
- Minimizes bias
- Allows outside verification of why studies were included/excluded
Sample Inclusion/Exclusion Criteria
Inclusion:
- Peer-reviewed journals
- Articles comparing e-reference with face-to-face reference
- Articles on academic, public, and special libraries
- Articles on e-mail, IM, and “chat” reference
Exclusion:
- Articles describing how to implement digital reference programs
- Articles discussing quantitative or demographic data only
- Reviews, editorials, and commentary
- Non-English articles
Working with Results
- 93 articles were selected based on the inclusion/exclusion criteria
- Full text was obtained and read by both authors independently to determine whether at least one variable pertaining to user satisfaction was present; the two sets of judgments were then compared
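To make the screening logic concrete, here is a hedged sketch that encodes criteria like those above as predicates over a record; the field names and the sample record are hypothetical, since the actual screening was done by reading abstracts:

```python
# Hypothetical encoding of the screening criteria; the real screening was manual.
INCLUDE = [
    lambda r: r["peer_reviewed"],
    lambda r: r["language"] == "English",
    lambda r: r["mode"] in {"e-mail", "IM", "chat"},
]
EXCLUDE = [
    lambda r: r["type"] in {"review", "editorial", "commentary"},
    lambda r: r["focus"] == "implementation how-to",
]

def passes_screen(record: dict) -> bool:
    """Include only records meeting every inclusion rule and no exclusion rule."""
    return (all(rule(record) for rule in INCLUDE)
            and not any(rule(record) for rule in EXCLUDE))

record = {"peer_reviewed": True, "language": "English", "mode": "chat",
          "type": "research article", "focus": "user satisfaction"}
print(passes_screen(record))  # True -> moves on to full-text review
```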
Results of Full Text Review
Critical Appraisal Tools
- QUOROM (The Lancet, 1999, vol. 354, 1896-1900)
- Downs-Black scale (“Checklist for study quality”)
- CriSTAL (Critical Skills Training in Appraisal for Librarians) (Andrew Booth)
Glynn’s Critical Appraisal Tool
Four sections:
- Population
- Data collection
- Study design
- Results
Critical Appraisal Process
- 24 articles were subjected to critical appraisal
- Each question from Glynn’s tool was answered (yes, no, unclear, or N/A) and the results were calculated
- 12 research papers were selected and subjected to the systematic review
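The slide does not spell out how “the results were calculated,” so the sketch below shows only one plausible scoring scheme: the share of “yes” answers among the applicable (non-N/A) questions, with a 75% validity cutoff that we are assuming from Glynn’s published tool rather than from this talk. The question keys are invented for illustration.

```python
def appraisal_score(answers: dict[str, str]) -> float:
    """Share of 'yes' answers among applicable (non-N/A) checklist questions."""
    applicable = [a for a in answers.values() if a != "N/A"]
    return sum(a == "yes" for a in applicable) / len(applicable)

# Invented answer set for one hypothetical article:
answers = {
    "population_representative": "yes",
    "criteria_predetermined": "yes",
    "sample_size_justified": "no",
    "data_collection_method_valid": "yes",
    "results_answer_research_question": "unclear",
}
score = appraisal_score(answers)
print(f"{score:.0%} yes")                      # 60% yes
print("passes" if score >= 0.75 else "fails")  # assumed 75% cutoff -> fails
```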
Analysis (Findings of Review)
Settings and general characteristics:
- Multiple instruments in a single article
- 9 unique journals
- US-based
Methods and timing of data collection:
- 7 paper surveys
- 3 pop-up surveys
- 3 transcript analyses
Similar Variables in Surveys
- “Willingness to return”: appeared in 11 of all the instruments (Nilsen); asked about the staff person vs. the service
- “Have you used it before?”: responses ranged from 30%-69% (email)
- Positivity of experience: 7-, 4-, and 3-point scales; 65%-98.2% (email, small group); 14-417 respondents
- Staff quality: 7-, 4-, and 3-point scales; 68%-92.8% (14 respondents)
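Because these instruments mix 7-, 4-, and 3-point scales, the percentages are not directly comparable across studies. One illustrative way to put ratings on a common footing (our own example, not a method used in the reviewed studies) is min-max rescaling to 0-100:

```python
def rescale(score: float, points: int) -> float:
    """Map a mean rating on a 1..points scale onto a 0-100 range."""
    return (score - 1) / (points - 1) * 100

# A mean of 5.8 on a 7-point scale and 3.4 on a 4-point scale land in the
# same place once rescaled, despite the different instruments:
print(round(rescale(5.8, 7), 1))  # 80.0
print(round(rescale(3.4, 4), 1))  # 80.0
```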
Analysis
Other questions in obtrusive studies: “Were you satisfied?” and “Would you recommend to a colleague?” were each asked in only 1 of the studies
Analysis: Reason for Variation
The nature of the questions asked is contingent on the context in which satisfaction was measured (correlated to guidelines, librarian behaviors, reference interviews, etc.)
Unobtrusive Studies: Transcript Analysis
2 basic methods:
- Transcript analysis by the person asking the question (proxy patron) (Shachaf and Horowitz, 2008; Sugimoto, 2008): 75% “complete,” 68% “mostly incomplete”
- Transcripts independently assessed for quality and coded (Marsteller and Mizzy, 2003; Shachaf and Horowitz, 2008): 3-point scale or “+ or -” scale; 2.24 out of 3 (level of quality); 5 negatives/200 transactions
Research question: efficacy of third-party assessors vs. user surveys
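For a sense of scale behind those unobtrusive numbers, the arithmetic is worth making explicit (the figures come from the slide; the rescaling step is our own illustration):

```python
# 5 negatively coded interactions out of 200 transactions:
print(f"{5 / 200:.1%}")  # 2.5% of transactions coded negative

# Mean quality of 2.24 on a 3-point scale, min-max rescaled to 0-100
# so it can sit alongside the survey percentages above:
print(round((2.24 - 1) / (3 - 1) * 100, 1))  # 62.0
```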
Lessons Learned
Lessons about user satisfaction with electronic reference:
- Overall pattern of users being satisfied, regardless of methodology or questions asked
- Measurement of user satisfaction is contingent upon context
- Researchers most often try to connect user satisfaction to another variable; satisfaction was the sole focus of only one article

Lessons Learned
Lessons about library research:
- The extensive amount of qualitative research makes performing systematic reviews challenging
- Inconsistency of methodologies used in the original research makes the systematic review challenging; meta-analysis is more often than not impossible
- Common pitfalls in LIS research affect the quality of the published articles

Lessons Learned
Benefits of undertaking a systematic review:
- Sharpens literature searching skills, benefiting both librarians and their patrons who need this kind of research
- The researcher gains the ability to critically appraise research
- The practice of librarianship is strengthened by basing decisions on a methodological assessment of evidence

Systematic Reviews and EBL: Impact on the Profession
Formal gathering and synthesis of evidence may:
- Affirm our intuitive sense about the patterns in current research
- Refine, clarify, and enhance a more robust understanding of a current problem in librarianship
- On occasion, provide surprising results!

Questions?
http://www.slideshare.net/librarian68
Elaine M. Lasda Bergman: ebergman@uamail.albany.edu
Irina I. Holden: iholden@uamail.albany.edu

Editor's Notes

  • #16 Important to determine the criteria at the beginning of the study to provide an organized structure for future work. It helps to minimize bias in the inclusion of articles, and it can be verified by readers who would like to make sure that the authors adhered to the selected criteria. There is also something called “publication bias”: studies with positive results tend to be published more often than studies with negative results, or such studies end up in journals of lesser importance that are not indexed properly in major databases.
  • #17 In our example we knew right away that we wanted to steer clear of articles that talked about implementing electronic reference or establishing such a service. We wanted to avoid reviews and book reviews, as they were not original studies. Some articles examined the demographic parameters of their users, for example how many female vs. male patrons used the IM services, or what their ages were. We felt that this was not important to the main idea of our study of user satisfaction. We also knew that we would not be able to read articles in non-English languages, so those were excluded as well. With the inclusion criteria we tried to come up with clear parameters that helped us identify the initial group of articles to examine. It helped a lot, in fact.
  • #20 For this part of the process a tool was needed. Different researchers approach this in different ways: some look for existing tools, some come up with their own questions that better suit their topics.
  • #21 Each of the four sections contains 5 to 8 questions. For example, the population section asks whether the study population is representative of all the users, actual and eligible; whether the inclusion/exclusion criteria are definitively outlined; whether the sample size is adequate; whether the population choice is bias-free; etc. Answering these questions can be difficult: we spent a lot of time doing it first on our own and then together, discussing the articles over and over.