1. LEARNING THROUGH CONVERSATION: TRANSCRIPT ANALYSIS OF L-NET’S CHAT REFERENCE SERVICE
2. http://sites.google.com/site/stacyjohnsmlscapstoneportfolio/home/featured-project
3. Purpose of Project
   1. What types of questions are being asked?
   2. How effectively are they being answered?
   3. What resources are being used to answer them?
4. Scrubbed Transcript
5. Coding Spreadsheet
6. Katz (1997) Classification Scheme, modified by Arnold & Kaske (2005) and Seeking Synchronicity (2007), as interpreted and used for this L-net study.
7. Number of Questions by Type
8. Categories of Chat Reference Answers: coding procedures derived from Arnold & Kaske (2005), developed and revised by Torsney & Radford for Seeking Synchronicity (2007), and further adapted for the purposes of this study.
9. Correct & Incorrect Answers
10. Direct vs. Indirect Answers
11. Top 10 ‘Frequently Used’ Sites
12. Breakdown by Domain
13. Types of Resources Used
14. Reference Interviews
15. Average time per chat
16. High variation in reaction to prank-type chats. Use of automated messages:
    "That's an excellent question. Please hold while I check some sources."
    "Hello. You've connected to your 24x7 online reference service staffed by librarians across the state. Please wait one moment while I take a look at your question."
    "I’m working with another person right now. I’ll be with you as soon as possible. Thanks for waiting..."
    L-net answers for reference-type questions: 94% correct!
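Slides 5, 7, and 9 describe tallying the coded transcripts into question types and correct/incorrect answers. Below is a minimal sketch of how such a tally might be produced from the coding spreadsheet; the file name, column names, and code values ("question_type", "answer_category", "reference", "correct") are assumptions for illustration, since the actual spreadsheet layout is not reproduced here.

```python
# Hypothetical sketch: tally question types (slide 7) and the share of
# reference-type answers judged correct (slide 9) from the coding spreadsheet.
# Column and code names are assumptions, not the study's actual layout.
import csv
from collections import Counter

def tally_codes(path="coding_spreadsheet.csv"):
    type_counts = Counter()   # number of questions by type
    correct = incorrect = 0   # correct vs. incorrect answers

    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            qtype = row["question_type"].strip().lower()
            type_counts[qtype] += 1
            # Per the slide notes, only reference-type questions were judged
            # correct/incorrect; non-reference answers were coded 'other'.
            if qtype == "reference":
                answer = row["answer_category"].strip().lower()
                if answer == "correct":
                    correct += 1
                elif answer == "incorrect":
                    incorrect += 1

    judged = correct + incorrect
    pct_correct = 100 * correct / judged if judged else 0.0
    return type_counts, pct_correct

if __name__ == "__main__":
    counts, pct = tally_codes()
    for qtype, n in counts.most_common():
        print(f"{qtype}: {n}")
    print(f"Reference-type answers judged correct: {pct:.0f}%")  # slide 16 reports 94%
```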
Slide notes:
  • Introduction: name, recent MLIS graduate from SCSU, Newport Library, L-net for 1 1/2 years
  • Special project address; switch to special project page.
  • Read through 500 scrubbed transcripts . . .
  • Worked from a list of randomized and anonymized transcripts
  • Codes to categorize questions
  • The high number of pranks includes people who have misunderstood the term “chat” and are looking for a social chatroom. It might be worthwhile to state more clearly on the log-in page that the service is intended for people who would like help finding answers to genuine questions. It is important to be as welcoming as possible, so that people with odd or embarrassing questions still feel able to ask for help. A note setting an expectation of inoffensive language might also be worthwhile, although it too must be carefully worded so as not to make anyone feel unwelcome.
  • The answer rubric was less effective, as the described categories of answers did not always apply well to non-reference-type questions. For example, what is the ‘correct’ response to a complaint about another staff member? To be consistent, all answers to non-reference-type questions were considered ‘other’.
  • Most questions are answered only with a link; L-net staff do not find and paraphrase the answers.
  • Wikipedia was used with a caveat; the Google category includes images, translator, maps, etc.; LOC = Library of Congress. A total of 612 sites were recommended for the 450 questions (roughly 1.4 sites per question). The relatively low percentage of Wikipedia use, or of repeated use of any single site, points to a broad use of the internet.
  • There is no oversight of .com and .org sites; .edu and .gov sites are more authoritative (see the domain-count sketch after these notes).
  • Some reference interview attempts were ignored or elicited annoyed responses, and time lag also complicates attempts. In most cases where a reference interview was appropriate, one was attempted, but in nearly a quarter of the cases where clarification would have been helpful (56 of 239, about 23%), no attempt was made. It is difficult to hold a linear conversation in online chat, where both parties may be typing about different issues at the same time. Experienced staff may have become accustomed to this, and to the fact that many patrons seem most satisfied with a quick answer rather than more questions. However, reading many transcripts shows that chats without a reference interview tend to have one or more false starts before the patron’s needs are understood.
  • Overall average: 12.5 minutes per chat. The numbers are distorted by patrons signing off without the librarian noticing, by multitasking, and by follow-up time (a small timing sketch follows these notes).
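Slides 11 and 12, and the note above on .com/.org versus .edu/.gov sites, suggest counting the recommended sites by top-level domain. A minimal sketch of such a count is below, assuming the recommended URLs have already been extracted from the transcripts into a list; the example URLs are illustrative only, not the study’s data.

```python
# Hypothetical sketch of the "Breakdown by Domain" count (slides 11-12).
# Assumes recommended URLs have been pulled from the transcripts into a list.
from collections import Counter
from urllib.parse import urlparse

def domain_breakdown(urls):
    tld_counts = Counter()
    for url in urls:
        host = urlparse(url).netloc.lower()
        # Take the last label of the hostname as the top-level domain.
        tld = ("." + host.rsplit(".", 1)[-1]) if "." in host else "(no domain)"
        tld_counts[tld] += 1
    return tld_counts

# Illustrative URLs only; the study's recommended sites are not reproduced here.
recommended = [
    "http://en.wikipedia.org/wiki/Main_Page",
    "http://www.loc.gov/",
    "http://catalog.example-library.edu/",
]
for tld, n in domain_breakdown(recommended).most_common():
    print(tld, n)
```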
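The 12.5-minute figure in the last note could be computed as a simple mean of per-chat durations. The sketch below uses made-up durations only to show how a few long sessions (for example, a patron signing off without the librarian noticing) pull the mean upward; it does not reproduce the study’s figures.

```python
# Illustrative only: a mean chat length, and how one very long session inflates it.
from statistics import mean, median

durations_minutes = [3, 5, 7, 8, 9, 10, 12, 50]  # hypothetical per-chat durations

print(f"Mean:   {mean(durations_minutes):.1f} minutes")    # pulled up by the 50-minute outlier
print(f"Median: {median(durations_minutes):.1f} minutes")  # less affected by outliers
```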