Mining Virtual Reference Data for an Iterative Assessment Cycle
My presentation at the 17th Annual Reference Research Forum at #ala11.

Published in: Education, Technology

Speaker notes
  • MSU information: 20,000 students, Very High Research Carnegie Classification. My background: I have managed the VR program since 2007 in conjunction with the Reference Coordinator, and I am responsible for supervising 13 chat librarians covering 56 hours at the Reference Desk, including training and review of chat transactions. We have been in a standalone implementation of Altarama's Docutek Chat since September 2008 (before that, in a consortium with ASERL on QuestionPoint, then Docutek); our data begins there. Today's presentation (method for assessment, past, now/Discovery, results, takeaways): a methodology in which review of chat transcripts can drive an iterative library assessment program. In 2010 we used chat transcripts to inform the web redesign; now we ask whether there are changes with the redesign and the concurrent Discovery implementation. Then results from our study, future plans, and the takeaways: this isn't hard, it can assess the whole library (not just VR), and it is worth committing to an assessment cycle using VR.
  • Chat is the only publicly visible online tool managed outside of the Systems department. Managing it requires cooperation between Reference and Systems; Systems provides the tech support for Docutek troubleshooting.
  • As with everyone, we are invested in finding ways to assess our services. Qualitative data is generally difficult and time-consuming to gather, requiring surveys and the like. Chat transcripts are a passive and readily available source of data: patrons come to us every day and tell us their problems with our systems.
  • Because of the established relationship between the VR Manager and Web Services, when the redesign began I went to the head of Web Services and asked if data from chat would be helpful for the redesign. He said maybe… yes, let's see a demo. I did one month of transcripts, which was initially more time-consuming than I expected; finding a methodology was challenging, and figuring out parameters that I could identify and that would interest them was difficult. He liked what he saw, but I had no time to produce more of the same (he wanted two years), so he asked for all the entry questions from chat and email, which we segregated by patron type and department and analyzed by word frequency (a minimal word-frequency sketch appears after these notes). To give an idea of what we were working with:
  • Before: the search box goes to the online catalog as a keyword search, the database tab searches for names of databases, and the same goes for the e-journal page. Heavy on pictures, lots of stuff.
  • After: a Discovery-driven search box, fewer pictures, a cleaner layout.
  • This is actually what you see when you come to the screen. Discovery is huge.
  • We wrote it up. I used a modified Grounded Theory model to look at the questions; they used Wordle to analyze word frequency.
  • An example of the Wordle for faculty chat (not email) entry questions and department affiliations. Entry questions may not capture what people actually want, but they do reflect the language patrons use. What if we mirrored their language?
  • Research questions left open at the end of the paper:
  • Suggestion for answering the questions
  • Based on this idea of a cycle from Flynn, Gilchrist and Olson’s paper. Evaluate what you’ve done and make changes based on that. Evaluate again.
  • We chose to follow up one year later, after the August 2010 relaunch of the web redesign and the concurrent release of Discovery in that design.
  • This is how we did it: pull short descriptions out, look for patterns, trust the data rather than external research, use the patterns to create codes, and use the codes to quantify (see the code-and-count sketch after these notes).
  • To set it up: there were fewer chat transactions in April 2011. When we code, the terms shift a bit. Questions asked do not equal transactions; they equal unique types of questions within a transaction. Answers, likewise, are unique answers within a transaction.
  • Why are we lower this April? Is it because of the Discovery implementation and web redesign?
  • Nope: we are increasing each month, as has been our trend for several years. March and April were aberrant. Why? All we could come up with that was different about those months was…
  • Tornado season… we spent significant amounts of time in the basement and/or without power. That doesn't seem like enough to cause a 25% drop, but we are not sure what else it could be.
  • Getting to the analysis. Looking for Patterns in the questions…
  • Coming up with the codes and counting: summing up the questions that map to these codes in the spreadsheet.
  • Analysis of changes: even remembering that transactions are down 25%, Topic Searches and, to some extent, Finding Known Articles (citations) are also down. This needs further study. Online catalog questions are not down at all; in fact, they are up significantly.
  • How about the answers? These are the tools used in chat. We had three categories and will look at one today, databases, since we were primarily interested in how Discovery was changing database use. We also have data on service referrals and other resources like ILL that we are comparing to the April 2010 data.
  • We used fewer databases in 2011. We used the online catalog (OC) more. We used Academic Search Premier (ASP) less. We are on board with using Discovery in chat.
  • Representation of the OC, Discovery and ASP changes. ASP down 75%, OC up 25%. Discovery seems to replace ASP.
  • Decreases in Topic Searches and Finding Known Articles need to be tracked; maybe find another set of months to compare (February 2010 and February 2011?). The OC has some issues that aren't being addressed by Discovery. One remarkable thing about our OC is that we act as a consortium for 42 other libraries, 17 of which were added in February 2011. That is a possible factor in the increase in questions about the OC and about holds within the OC.
  • Looking forward: there were things I didn't look at in April 2010, so I didn't really have a basis for comparison. They may not have been there, which would be remarkable, or more likely I somehow didn't think of them as relevant to what we needed. We'd like to track these going forward, in an iterative assessment process.
  • What we can say…
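
A side note on the word-frequency step mentioned above: the kind of count that feeds a Wordle can be reproduced with a few lines of Python. The sketch below is illustrative only, not the actual workflow behind the paper; the file name entry_questions.csv, its column names, and the stop-word list are assumptions made for the example.

    import csv
    from collections import Counter

    # Words too common to be interesting in a frequency picture (illustrative list).
    STOP_WORDS = {"the", "a", "an", "to", "of", "for", "and", "i", "is", "in", "on", "how", "do"}

    def word_frequencies(path, patron_type):
        """Count words in entry questions submitted by one patron type."""
        counts = Counter()
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                if row["patron_type"].strip().lower() != patron_type:
                    continue
                for word in row["entry_question"].lower().split():
                    word = word.strip(".,?!\"'()")
                    if word and word not in STOP_WORDS:
                        counts[word] += 1
        return counts

    # Example: the 25 most frequent words in faculty entry questions.
    for word, n in word_frequencies("entry_questions.csv", "faculty").most_common(25):
        print(word, n)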
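
Similarly, the code-and-count step described in these notes (short descriptions, patterns, codes, tallies) can be sketched in a few lines. The descriptions and the keyword-to-code mapping below are invented for illustration; they are not the actual codes from the April 2010/2011 analysis, which came out of the Grounded Theory review of the transcripts.

    from collections import Counter

    # Short descriptions written while reviewing transcripts (one per question); invented examples.
    descriptions = [
        "needs articles on childhood obesity",        # reads like a topic search
        "has a citation, wants the full text",        # finding a known article
        "asking how to place a hold in the catalog",  # online catalog / holds
        "looking for books on soil science",          # another topic search
    ]

    # Codes created from patterns observed in the data (hypothetical keyword map).
    code_keywords = {
        "Topic Search": ["articles on", "books on", "sources about"],
        "Finding Known Article": ["citation", "full text"],
        "Online Catalog/Holds": ["catalog", "hold"],
    }

    def assign_codes(description):
        """Return every code whose keywords appear in the description."""
        text = description.lower()
        return [code for code, words in code_keywords.items()
                if any(w in text for w in words)]

    # Quantify: tally how many questions map to each code.
    tally = Counter()
    for d in descriptions:
        tally.update(assign_codes(d) or ["Uncoded"])

    for code, count in tally.most_common():
        print(code, count)
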
  • Transcript

    • 1. Mining Virtual Reference Data for an Iterative Assessment Cycle
      Amanda Clay Powers
      Virtual Reference Project Manager
      Ag & Life Sciences Librarian / Assistant Professor
      Mississippi State University Libraries
      http://amandaclaypowers.com
      @amandaclay
      17th Annual Reference Research Forum
      June 26, 2011
    • 2. Managed from within Reference
      Requires Systems/Reference Work
      Ongoing transcript review in place
      Chat at the MSU Libraries
      @amandaclay
    • 3. ACRL Value of Academic Libraries: A Comprehensive Research Review and Report (2010)
      SACS Accreditation Review
      MSU Libraries Measurement & Evaluation Committee formed
      Assessment on the Mind
      @amandaclay
    • 4. March 2010 – Web site redesign begins
      April 2010 – Transcript analysis proposed
      May 2010 – Analysis delivered
      August 2010 – Web site and Discovery launched
      Chronology
      @amandaclay
    • 5. @amandaclay
    • 6. @amandaclay
    • 7. @amandaclay
    • 8. with Web Services Manager Clay Hill and Digital Projects Web Services Specialist Julie Shedd
      Write-up of April 2010 data analysis and results published in the Journal of Web Librarianship
      Includes Wordle analysis of 1800+ email and chat questions
      “The Role of Virtual Reference in Library Web Site Design: A Qualitative Source for Usage Data”
      @amandaclay
    • 9. Faculty Chat Wordle: Entry Questions and Departments
      @amandaclay
    • 10. Do “Topic Searches,” “Finding Known Article Searches,” and “Online Catalog Help” searches decrease in frequency?
      Will reluctant librarians, burned by federated searching, adopt Discovery?
      Lingering Questions
      @amandaclay
    • 11. “Integrating the sampling of virtual reference data into an iterative assessment cycle….can allow for ongoing measures to be taken of the library’s effectiveness.”
      (Powers et al., 111)
      The paper recommends twice yearly assessments using the methodology.
      Taking Your Own Medicine
      @amandaclay
    • 12. Based on Flynn, Gilchrist and Olson. 2004. “Using the Assessment Cycle as a Tool for Collaboration.” Resource Sharing & Information Networks 17(1):187-203.
      Cycle of Assessment
      Goal:
      To evaluate the Web Site redesign and Discovery Implementation
      @amandaclay
    • 13. Using a modified Grounded Theory (GT) model
      Evaluate chat transcripts from April 2010
      Follow up with an evaluation of April 2011 using the same model
      Methodology
      @amandaclay
    • 14. IRB approval
      Excel Spreadsheet—identifier, entry question, “Questions,” “Answers”
      Review transcripts and create short descriptions of transactions
      Refer to the data to find patterns
      Use patterns to create codes
      Code data to quantify
      Breaking It Down
      @amandaclay
    • 15. Lower number of chat transactions in April 2011
      For coding, types of questions asked = questions
      For coding, tools used in answering questions = answers
      Constructs
      @amandaclay
    • 16. Number of Chats in April (2008-2011)
      -25% ?
      @amandaclay
    • 17. @amandaclay
    • 18. Photo credit: Ryan Hoke, MSU Meteorology Student and Storm Chaser
      http://ryanhoke.com/
    • 19. Identifying Patterns in Question Types
      TOPIC SEARCH
    • 20.
    • 21. @amandaclay
      38%
      28%
      44%
    • 22. Question Analysis Results
      Decrease in “Topic Search” questions
      Slight decrease in “Finding Known Articles”
      Increase in questions related to the Online Catalog/Holds
      @amandaclay
    • 23. But What About the Chat Librarians?
      Tools used in chat are the “answers” counted as defined:
      Databases from Libraries’ Database Portal list
      Service Referrals to Specialists (internal or external)
      Other Resources not covered above include ILL, librarian answered (RR), guides, print sources, etc.
      @amandaclay
    • 24. Databases Used to Answer Chat Questions
      April 2010
      April 2011
      @amandaclay
    • 25. @amandaclay
    • 26. Discussion
      Overall chat transactions decrease by 25%
      Topic Search questions decrease
      Discovery supplants use of Academic Search Premier for chat librarians
      Online catalog/holds questions and references increase
      @amandaclay
    • 27. Discovery is not taking the place of the Online Catalog, even though the catalog is included in the Discovery search
      In February 2011, 17 libraries were added for a total of 8 systems with 42 libraries in our Online Catalog
      What’s Up with the Online Catalog?
      @amandaclay
    • 28. Difficult to compare 120 questions and 90 questions
      Assessment needs change over time—past data analysis may need to be revisited
      Limit yourself to what you really need to know
      Challenges
      @amandaclay
    • 29. Looking Forward: Modifications for Future Assessment
      @amandaclay
    • 30. Need more data points to confirm the Topic Search reduction and the possible Finding Known Articles reduction
      Discovery has been adopted by chat librarians in lieu of Academic Search Premier
      Something is going on with the online catalog worth investigating
      Conclusions
      @amandaclay
    • 31. Questions?