Evaluating E-Reference: An Evidence Based Approach
Speaker notes
  • It is important to determine criteria at the beginning of the study to provide an organized structure for future work. Explicit criteria help minimize bias in the inclusion of articles, and they can be verified by readers who want to confirm that the authors adhered to them. There is also a phenomenon called "publication bias": studies with positive results are published more often than studies with negative results, or such studies end up in less prominent journals that are not properly indexed in major databases.
  • In our example, we knew right away that we wanted to steer clear of articles about implementing or establishing an electronic reference service. We also wanted to avoid reviews and book reviews, as they are not original studies. Some articles examined only the demographic parameters of their users, for example, how many female vs. male patrons used the IM services, or their ages; we felt this was not relevant to the main focus of our study, user satisfaction. Finally, we knew we would not be able to read articles in languages other than English, so those were excluded as well.

    With the inclusion criteria, we tried to set clear parameters that would identify the initial group of articles to examine. It helped a lot, in fact.
  • For this part of the process, a tool was needed. Different researchers approach this in different ways: some look for existing tools, while others come up with their own questions that better suit their topics.
  • Each of the four sections contains from 5 to 8 questions. For example, the population section asks whether the study population is representative of all users, actual and eligible; whether the inclusion/exclusion criteria are clearly outlined; whether the sample size is adequate; whether the choice of population is free of bias; etc. Answering these questions can be difficult: we spent a lot of time doing it, first on our own and then together, discussing the articles over and over.
  • Transcript

    • 1. Evaluating E-Reference: An Evidence Based Approach
      Elaine Lasda Bergman and Irina I. Holden, University at Albany
      Presentation for Reference Renaissance, Denver, CO, August 10, 2010
    • 2. Overview
      - What is Evidence Based Librarianship?
      - Methods
        - What constitutes "evidence?"
        - Systematic reviews and analyses
      - Systematic Review Process
        - Research question
        - Database search
        - Article review
        - Critical appraisal
        - Synthesize, analyze, discuss
    • 3. Overview
      - Results of our review
        - Methods of determining user satisfaction
        - Comparison of variables
        - Range of results
      - Conclusions, lessons learned
        - About evidence based librarianship
        - About research quality
        - About user satisfaction with electronic reference
    • 4. What is Evidence Based Librarianship?
      - Booth and Brice's definition of Evidence Based Information Practice: "the retrieval of rigorous and reliable evidence to inform... decision making" (Booth and Brice, ix)
    • 5. What is Evidence Based Librarianship (EBL)?
      - History:
        - Gained traction in medical fields in the 1990s and spread to the social sciences after that
        - Medical librarians were the first to bring this approach to LIS research
        - Increasingly used in social sciences and information/library science
      Source: Booth and Brice, ix.
    • 6. Don't we ALREADY use "evidence"?
      - Evidence is "out there, somewhere"
      - Disparate locations: many different journals, many different researchers
      - Evidence is not summarized, readily available, and synthesized
      - No formal, systematized, concerted effort to quantify and understand whether there is a pattern or just our general sense of things
    • 7. Hierarchy of "Evidence"
      Source: http://ebp.lib.uic.edu/applied_health/?q=node/12
    • 8. Systematic Reviews vs. Literature Reviews
      Literature Review                               | Systematic Review
      Narrative text                                  | Research methodology/process
      Evaluation: author's opinion                    | Evaluation: formal critical appraisal process
      Usually single evaluator                        | Best if multiple evaluators
      Studies categorized but separately summarized   | Variables compared across studies, synthesized and analyzed
      General sense of a pattern                      | Quantified, identified patterns and comparisons
    • 9. Systematic Reviews: When Are They Useful?
      - Too much information in disparate sources
      - Too little information; hard to find all of the research
      - Help achieve consensus on debatable issues
      - Plan for new research
      - Provide teaching/learning materials
    • 10. Process of Systematic Review
      Formulate Research Question → Database Search → Review Results → Critical Appraisal → Analysis
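
As a rough illustration of how these stages chain together, here is a minimal Python skeleton. The real stages are manual research work; the function names and stub bodies are hypothetical, not the authors' tooling.

```python
# Illustrative skeleton of the five-stage review process named on this
# slide. Each stage narrows the candidate set; the stubs only show how
# each stage's output feeds the next, not an automated process.
def database_search(question):
    """Stage 2: search databases for candidate records."""
    return ["record A", "record B"]

def review_results(records):
    """Stage 3: screen abstracts against inclusion/exclusion criteria."""
    return records

def critical_appraisal(articles):
    """Stage 4: keep only articles that pass a quality checklist."""
    return articles

def analysis(studies):
    """Stage 5: synthesize and compare the surviving studies."""
    return {"studies_analyzed": len(studies)}

question = "How satisfied are users of digital reference?"
print(analysis(critical_appraisal(review_results(database_search(question)))))
# -> {'studies_analyzed': 2}
```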
    • 11. Research Questions
      - Research question formulation:
        - Description of the parties involved in the studies (librarians and patrons, for example)
        - What was being studied (effectiveness of instructional mode, for example)
        - The outcomes and how they can be compared
        - What data should be collected for this purpose (student surveys or pre/post tests, etc.)
    • 12. Our Research Questions
      1. What is the level of satisfaction of patrons who utilize digital reference?
      2. What are the measures researchers use to quantify user satisfaction, and how do they compare?
    • 13. Database Search
      - LISTA (EBSCO platform): 123 articles retrieved
      - LISA (CSA platform): 209 articles retrieved
      - ERIC: no unique studies retrieved
    • 14. Working with Results
      - 279 results after de-duplication
      - Only format retrieved: journal articles
      - Abstracts were reviewed applying the inclusion and exclusion criteria
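
The de-duplication step can be illustrated with a short sketch; the record fields and the title-normalization rule below are assumptions for illustration, not the authors' actual workflow.

```python
# Hypothetical sketch: merging search results from several databases
# and dropping duplicates by a normalized title key. The field names
# and normalization rule are assumptions, not the authors' actual code.
import re

def normalize(title: str) -> str:
    """Lowercase and strip punctuation/whitespace so near-identical
    titles from different databases compare equal."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def deduplicate(*result_sets):
    """Merge result sets (lists of dicts with a 'title' key),
    keeping the first occurrence of each normalized title."""
    seen, merged = set(), []
    for results in result_sets:
        for record in results:
            key = normalize(record["title"])
            if key not in seen:
                seen.add(key)
                merged.append(record)
    return merged

lista = [{"title": "Evaluating Chat Reference", "db": "LISTA"}]
lisa = [{"title": "Evaluating chat reference.", "db": "LISA"}]
print(len(deduplicate(lista, lisa)))  # 1: the two records collapse
```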
    • 15. Sample Inclusion/Exclusion Criteria
      Inclusion:
      - Peer reviewed journals
      - Articles comparing e-reference with face-to-face reference
      - Articles on academic, public and special libraries
      - Articles on e-mail, IM, and "chat" reference
      Exclusion:
      - Articles describing how to implement digital reference programs
      - Articles discussing quantitative or demographic data only
      - Reviews, editorials and commentary
      - Non-English articles
    • 16. Working with Results
      - 93 articles were selected based on the inclusion/exclusion criteria
      - Full text was obtained and read by both authors independently to determine whether at least one variable pertaining to user satisfaction was present; the results were then compared
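
A minimal sketch of the independent double-screening step, assuming each reviewer records an include/exclude decision per article; the article IDs and decisions are invented for illustration.

```python
# Hypothetical sketch of the double-screening step: two reviewers
# independently mark each article, and disagreements are flagged
# for joint discussion. Data values are invented for illustration.
reviewer_a = {"art1": True, "art2": False, "art3": True}
reviewer_b = {"art1": True, "art2": True, "art3": True}

agreements = {aid for aid in reviewer_a if reviewer_a[aid] == reviewer_b[aid]}
disagreements = set(reviewer_a) - agreements

print(f"Agreed on {len(agreements)} of {len(reviewer_a)} articles")
print("Discuss:", sorted(disagreements))  # Discuss: ['art2']
```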
    • 17. Results of Full Text Review
      Reason for Exclusion               # of Articles
      No variable on user satisfaction   32
      Advice or commentary               18
      Not about e-ref transactions        5
      About resources used                5
      Showcased library experience        3
      Literature review                   2
      Review of another study             1
      Not scholarly                       1
      Not about electronic reference      1
      Selected for critical appraisal    23
      Found during citation search        1
      Total                              94
    • 18. Critical Appraisal Tools
      - QUOROM (The Lancet, 1999, vol. 354, 1896-1900)
      - Downs-Black scale ("Checklist for study quality")
      - CriSTAL: Critical Skills Training in Appraisal for Librarians (Andrew Booth)
    • 19. Glynn's Critical Appraisal Tool
      Four sections: Population, Data collection, Study design, Results
    • 20. Critical Appraisal Process
      - 24 articles were subjected to critical appraisal
      - Each question from Glynn's tool was answered (yes, no, unclear, or N/A) and the results were calculated
      - 12 research papers were selected and subjected to the systematic review
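
A hedged sketch of how the checklist answers might be tallied: Glynn's tool uses yes/no/unclear/N/A answers, but the scoring convention below (percent "yes" with N/A excluded, and a 75% cutoff) is an illustrative assumption rather than the authors' documented calculation.

```python
# Hedged sketch of tallying critical-appraisal answers. The scoring
# convention (percent "yes" with N/A excluded, 75% cutoff) is an
# illustrative assumption, not necessarily the exact calculation used.
def appraisal_score(answers):
    """answers: dict mapping question id -> 'yes'|'no'|'unclear'|'na'."""
    scored = [a for a in answers.values() if a != "na"]
    return sum(a == "yes" for a in scored) / len(scored)

article = {"pop1": "yes", "pop2": "yes", "dc1": "no",
           "sd1": "yes", "sd2": "unclear", "res1": "yes", "res2": "na"}
score = appraisal_score(article)
print(f"{score:.0%} yes -> {'include' if score >= 0.75 else 'exclude'}")
```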
    • 21. Analysis (Findings of Review)
      - Settings and general characteristics:
        - Multiple instruments in a single article
        - 9 unique journals
        - US based
      - Methods and timing of data collection:
        - 7 paper surveys
        - 3 pop-up surveys
        - 3 transcript analyses
    • 22. Similar Variables in Surveys
      - "Willingness to return": 11 surveys of all instruments (Nilsen); staff person vs. service
      - "Have you used it before?": ranged from 30%-69% (email)
      - Positivity of experience: 7-point, 4-point, and 3-point scales; 65%-98.2% (email, small group); 14-417 respondents
      - Staff quality: 7-point, 4-point, and 3-point scales; 68%-92.8% (14 respondents)
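
Because these instruments use 7-, 4-, and 3-point scales, comparing them requires putting scores on a common footing. The linear rescaling below is one common convention, shown as an assumption rather than the authors' documented method.

```python
# Illustrative sketch: putting satisfaction scores from different
# Likert scales on a common 0-100% footing so they can be compared.
# The linear rescaling convention is an assumption.
def to_percent(mean_score: float, scale_points: int) -> float:
    """Map a mean score on a 1..k scale linearly onto 0..100%."""
    return 100 * (mean_score - 1) / (scale_points - 1)

print(to_percent(6.2, 7))  # ~86.7% on a 7-point scale
print(to_percent(3.5, 4))  # ~83.3% on a 4-point scale
```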
    • 23. Analysis
      - Other questions in obtrusive studies: "Were you satisfied?" and "Would you recommend to a colleague?"
      - Each was asked in only 1 of the studies
    • 24. Analysis
      - Reason for variation: the nature of the questions asked is contingent on the context in which satisfaction was measured
        [correlated to guidelines, librarian behaviors, reference interviews, etc.]
    • 25. Unobtrusive Studies: Transcript Analysis
      - 2 basic methods:
        - Transcript analysis by the person asking the question (proxy patron) (Shachaf and Horowitz, 2008; Sugimoto, 2008)
          - 75% "complete", 68% "mostly incomplete"
        - Transcripts independently assessed for quality and coded (Marsteller and Mizzy, 2003; Shachaf and Horowitz, 2008)
          - 3-point scale, "+ or -" scale
          - 2.24 out of 3 (level of quality); 5 negatives/200 transactions
      - Research question: efficacy of third-party assessors vs. user surveys
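
A small sketch of how coded transcripts might be aggregated into figures like those above (a mean rating on a 3-point scale; a share of negatively coded transactions); the code values are invented for illustration.

```python
# Hypothetical sketch of aggregating coded transcripts: a mean quality
# rating on a 3-point scale and the share of negatively coded
# transactions. The codes below are invented for illustration.
quality_codes = [3, 2, 2, 3, 1, 3, 2]       # 1..3 quality ratings
polarity_codes = ["+", "+", "-", "+", "+"]  # "+ or -" coding

mean_quality = sum(quality_codes) / len(quality_codes)
negative_rate = polarity_codes.count("-") / len(polarity_codes)

print(f"Mean quality: {mean_quality:.2f} out of 3")
print(f"Negatives: {negative_rate:.1%} of transactions")
```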
    • 26. Lessons Learned
      - Lessons about user satisfaction with electronic reference:
        - Overall pattern of users being satisfied, regardless of methodology or questions asked
        - Measurement of user satisfaction is contingent upon context
        - Researchers most often try to connect user satisfaction to another variable; satisfaction was the sole focus of only one article
    • 27. Lessons Learned
      - Lessons about library research:
        - The extensive amount of qualitative research makes performing systematic reviews challenging
        - Inconsistent methodologies in the original research make systematic review challenging; meta-analysis is more often than not impossible
        - Common pitfalls in LIS research affect the quality of published articles
    • 28. Lessons Learned
      - Benefits of undertaking a systematic review:
        - Sharpens literature searching skills, benefiting both librarians and the patrons who need this kind of research
        - The researcher gains the ability to critically appraise research
        - The practice of librarianship is strengthened by basing decisions on a methodological assessment of evidence
    • 29. Systematic Reviews and EBL: Impact on the Profession
      - Formal gathering and synthesis of evidence may:
        - Affirm our intuitive sense of the patterns in current research
        - Refine, clarify, and enhance a more robust understanding of a current problem in librarianship
        - On occasion, provide surprising results!
    • 30. Questions?
      http://www.slideshare.net/librarian68
      Elaine M. Lasda Bergman, ebergman@uamail.albany.edu
      Irina I. Holden, iholden@uamail.albany.edu