Mining Virtual Reference Data for an Iterative Assessment Cycle
Amanda Clay Powers
Virtual Reference Project Manager
Ag & Life Sciences Librarian / Assistant Professor
Mississippi State University Libraries
http://amandaclaypowers.com | @amandaclay
17th Annual Reference Research Forum, June 26, 2011
Chat at the MSU Libraries
Managed from within Reference
Requires Systems/Reference work
Ongoing transcript review in place
Assessment on the Mind
ACRL, Value of Academic Libraries: A Comprehensive Research Review and Report (2010)
SACS Accreditation Review
MSU Libraries Measurement & Evaluation Committee formed
Chronology
March 2010 – Web site redesign begins
April 2010 – Transcript analysis proposed
May 2010 – Analysis delivered
August 2010 – Web site and Discovery launched
[Screenshots: the library Web site before the redesign, the redesigned layout, and the launched homepage with the Discovery search box]
“The Role of Virtual Reference in Library Web Site Design: A Qualitative Source for Usage Data”
Write-up of the April 2010 data analysis and results, published in the Journal of Web Librarianship with Web Services Manager Clay Hill and Digital Projects Web Services Specialist Julie Shedd
Includes Wordle analysis of 1,800+ email and chat questions
Faculty Chat Wordle: Entry Questions and Departments
Lingering Questions
Do “Topic Searches,” “Finding Known Article Searches,” and “Online Catalog Help” searches decrease in frequency?
Will reluctant librarians, burned by federated searching, adopt Discovery?
Taking Your Own Medicine
“Integrating the sampling of virtual reference data into an iterative assessment cycle … can allow for ongoing measures to be taken of the library’s effectiveness.” (Powers et al., 111)
The paper recommends twice-yearly assessments using the methodology.
Cycle of Assessment
Goal: to evaluate the Web site redesign and Discovery implementation
Based on Flynn, Gilchrist, and Olson. 2004. “Using the Assessment Cycle as a Tool for Collaboration.” Resource Sharing & Information Networks 17(1): 187-203.
Methodology
Using a modified Grounded Theory (GT) model
Evaluate chat transcripts from April 2010
Follow up with an evaluation of April 2011 using the same model
Breaking It Down
IRB approval
Excel spreadsheet: identifier, entry question, “Questions,” “Answers”
Review transcripts and create short descriptions of transactions
Refer to the data to find patterns
Use patterns to create codes
Code data to quantify
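A minimal sketch of the final quantifying step, assuming the coding spreadsheet is exported to CSV. The file name, the column names (question_codes, answer_codes), and the semicolon-separated code format are illustrative assumptions, not the actual workbook used in the study.

```python
import csv
from collections import Counter

# Hypothetical CSV export of the coding spreadsheet (one row per chat
# transaction); column names and the semicolon-separated code lists are
# assumptions for illustration only.
question_counts = Counter()
answer_counts = Counter()

with open("april_2011_chats_coded.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        # Count each code at most once per transaction, since "questions" and
        # "answers" are defined as unique types within a transaction.
        for code in set(filter(None, (c.strip() for c in row["question_codes"].split(";")))):
            question_counts[code] += 1
        for code in set(filter(None, (c.strip() for c in row["answer_codes"].split(";")))):
            answer_counts[code] += 1

print("Question codes:", question_counts.most_common())
print("Answer codes:", answer_counts.most_common())
```

Counting each code once per transaction mirrors the constructs on the next slide, where questions and answers are unique types within a transaction rather than transaction counts.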
Constructs
Lower number of chat transactions in April 2011
For coding, types of questions asked = questions
For coding, tools used in answering questions = answers
Number of Chats in April (2008-2011)
[Chart annotation: -25%?]
[Chart: monthly chat transactions over several years; the overall trend is upward, with March and April 2011 as outliers]
Photo credit: Ryan Hoke, MSU Meteorology Student and Storm Chaser (http://ryanhoke.com/)
Identifying Patterns in Question Types
TOPIC SEARCH
[Chart values: 38%, 28%, 44%]
Question Analysis Results
Decrease in “Topic Search” questions
Slight decrease in “Finding Known Articles”
Increase in questions related to the Online Catalog/Holds
But What About the Chat Librarians?
Tools used in chat are the “answers,” counted as defined:
Databases from the Libraries’ Database Portal list
Service Referrals to Specialists (internal or external)
Other Resources not above, including ILL, librarian answered (RR), guides, print sources, etc.
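To make the three answer categories concrete, here is a small, hypothetical bucketing function. The category labels come from the slide, but the sample database list, the referral keywords, and the function itself are assumptions for illustration only.

```python
# Hypothetical mapping of a tool noted in a chat answer to one of the three
# answer categories; the database set and referral keywords are placeholders,
# not the Libraries' actual Database Portal list.
DATABASE_PORTAL = {"Academic Search Premier", "Discovery", "Web of Science"}
REFERRAL_KEYWORDS = ("referral", "specialist", "subject librarian")

def answer_category(tool: str) -> str:
    if tool in DATABASE_PORTAL:
        return "Database"
    if any(keyword in tool.lower() for keyword in REFERRAL_KEYWORDS):
        return "Service Referral"
    return "Other Resource"  # ILL, librarian answered (RR), guides, print sources, etc.

print(answer_category("Discovery"))               # Database
print(answer_category("Referral to specialist"))  # Service Referral
print(answer_category("ILL request"))             # Other Resource
```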
Databases Used to Answer Chat Questions
[Chart: April 2010 vs. April 2011]
[Chart: change in use of the Online Catalog, Discovery, and Academic Search Premier]
Discussion
Overall chat transactions decrease by 25%
Topic Search questions decrease
Discovery supplants use of Academic Search Premier for chat librarians
Online catalog/holds questions and references increase
What’s Up with the Online Catalog?
Discovery is not taking the place of the Online Catalog, despite the catalog being included in Discovery search
In February 2011, 17 libraries were added, for a total of 8 systems with 42 libraries in our Online Catalog
Challenges
Difficult to compare 120 questions and 90 questions
Assessment needs change over time; past data analysis may need to be revisited
Limit yourself to what you really need to know
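Comparing months with different totals (for example, 120 coded questions in April 2010 versus 90 in April 2011) can be eased by expressing each code as a share of its month’s total rather than as a raw count. The sketch below uses made-up counts purely to illustrate that normalization; these are not the study’s figures.

```python
# Placeholder counts only, not the study's data: express each question code as
# a share of its month's total so months of different sizes can be compared.
april_2010 = {"Topic Search": 40, "Finding Known Articles": 20, "Online Catalog/Holds": 10}
april_2011 = {"Topic Search": 22, "Finding Known Articles": 15, "Online Catalog/Holds": 16}

def shares(counts):
    total = sum(counts.values())
    return {code: n / total for code, n in counts.items()}

before, after = shares(april_2010), shares(april_2011)
for code in april_2010:
    print(f"{code}: {before[code]:.0%} -> {after[code]:.0%}")
```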
Looking Forward: Modifications for Future Assessment
Conclusions
Need more data points to confirm the Topic Search and possible Finding Known Articles reduction
Discovery has been adopted by chat librarians in lieu of Academic Search Premier
Something is going on with the online catalog worth investigating
Questions?

Editor's Notes

  • #2 MSU information: 20k students; Very High Research Carnegie Classification. My background: managing the VR program since 2007 in conjunction with the Reference Coordinator; responsible for supervising 13 chat librarians on 56 hours at the Ref Desk, including training and review of chat transactions. We have been in a standalone implementation of Altarama’s Docutek Chat since September 2008 (before that, in a consortium with ASERL on QuestionPoint, then Docutek); our data begins there. Today’s presentation (method for assessment, past, now/Discovery, results, takeaways): looking at a methodology in which review of chat transcripts can be used for an iterative library assessment program; in 2010 we used chat transcripts to inform the web redesign; now asking whether there are changes with the redesign and concurrent Discovery implementation; results from our study; future plans. Takeaways: this isn’t hard, it can assess the whole library (not just VR), and it is worth committing to an assessment cycle using VR.
  • #3 Chat is the only publicly visible online tool managed outside of the Systems department. It requires cooperation between Reference and Systems to manage; Systems provides the tech support for Docutek troubleshooting.
  • #4 As with everyone, we are invested in finding ways to assess our services. Qualitative data is, in general, difficult and time-consuming to get, requiring surveys and the like. Chat transcripts are a passive and readily available source of data: patrons come to us every day and tell us their problems with our systems.
  • #5 Due to the established relationship between the VR Manager and Web Services, when the redesign began I went to the head of Web Services and asked if data from chat would be helpful for the redesign. He said, maybe… yes, let’s see a demo. I did a month of transcripts, which was initially more time-consuming than I expected. Finding a methodology was challenging, and figuring out parameters that would be interesting for them and that I could identify was difficult. He liked what he saw, but I had no time to produce more of the same (he wanted two years). He asked instead for all the entry questions from chat and email, which we had segregated by patron type and department, to analyze by word frequency. An idea of what we were working with:
  • #6 Before: the search box goes to the online catalog as a keyword search. The database tab searches for names of databases; the same goes for the e-journal page. Heavy pictures, lots of stuff.
  • #7 Discovery-driven search box, fewer pictures, cleaner layout.
  • #8 This is actually what you see when you come to the screen. Discovery is huge.
  • #9 We wrote it up. I used a modified Grounded Theory model to look at the questions; they used Wordle to analyze word frequency.
  • #10 An example of the Wordle for faculty chat (not email) entry questions and department affiliation. Entry questions may not be what people actually want, but they do reflect the language patrons use. What if we mirrored their language?
  • #11 Research questions proposed at the end of the paper and left open:
  • #12 Suggestion for answering the questions
  • #13 Based on this idea of a cycle from Flynn, Gilchrist and Olson’s paper. Evaluate what you’ve done and make changes based on that. Evaluate again.
  • #14 We chose to follow up one year later, after the August 2010 relaunch of the web redesign and the concurrent release of Discovery in that design.
  • #15 This is how we did it: pulling short descriptions out, looking for patterns, trusting the data rather than external research, using patterns to create codes, and using codes to quantify.
  • #16 To set it up: there was a lower number of chat transactions in April 2011. When we code, the terms shift a bit. Questions asked don’t equal transactions; they equal unique types of questions within a transaction. Answers, likewise, are unique answers within a transaction.
  • #17 Why are we lower this April? Is it because of the Discovery implementation and web redesign?
  • #18 Nope: we are on the increase each month (as has been our trend for several years). March and April were aberrant. Why? All we could come up with that was different about these months was:
  • #19 Tornado season…we had significant amounts of time spent in the basement and/or without power. Doesn’t seem like enough to drop 25%, but not sure what else it could be.
  • #20 Getting to the analysis. Looking for Patterns in the questions…
  • #21 Coming up with the Codes and counting—summing up the questions that map to these codes in this spreadsheet.
  • #22 Analysis of changes. Even remembering that transactions are down 25%, Topic Searches and, to some extent, Finding Known Articles (citations) are also down. This needs further study. The online catalog is not down at all; in fact, it is up significantly.
  • #24 How about the answers part? These are the tools used in chat. We had three parts; we are going to look at one part today, databases, since we were primarily analyzing how Discovery was changing database use. We also have data on service referrals and other resources, like ILL, that we are comparing to the April 2010 data.
  • #25 We used fewer databases in 2011. We used the OC more. We used ASP less. We are on board with using Discovery in chat.
  • #26 Representation of the OC, Discovery and ASP changes. ASP down 75%, OC up 25%. Discovery seems to replace ASP.
  • #28 Decreases in Topic Searches and Finding Known Articles need to be tracked; maybe find another set of months to compare (Feb 2010 and Feb 2011?). The OC has some issues that aren’t being addressed by Discovery. One remarkable thing about our OC is that we are acting as a consortium for 42 other libraries, 17 of which were added in February 2011. This is a possible factor in the increase in questions about the OC and holds within the OC.
  • #30 For us, looking forward: there were things I didn’t look at in April 2010, so I didn’t really have a basis for comparison. They may not have been there (remarkable), or, more likely, I didn’t think of them (somehow) as relevant for what we needed. We’d like to track these going forward in an iterative assessment process.
  • #31 What we can say…