Online Virtual Chat Library Reference Service: A Quantitative and Qualitative Analysis by Dave Harmeyer
 

This is a March 12, 2008, PowerPoint presentation I gave at UCLA’s Graduate School of Education and Information Studies (GSE&IS) on the results of my dissertation, “Online Virtual Chat Library Reference Service: A Quantitative and Qualitative Analysis.” I was invited by Dr. John V. Richardson, my mentor, to speak to one of his graduate library classes. My Doctor of Education in Educational Technology is from Pepperdine University, class of 2007.

Usage rights: CC Attribution-NonCommercial License

  • During today’s presentation I will be discussing the findings of my dissertation, “Online Virtual Chat Library Reference Service: A Quantitative and Qualitative Analysis,” which I wrote for an Ed.D. in Educational Technology at Pepperdine University in 2007.
  • I will follow this seven-point outline.
  • A growing and popular technology in library reference service is virtual chat reference. This technology has come to augment the traditional face-to-face reference interview, much as the telephone did in past decades and email in more recent years. Through the Internet and web browsers, library patrons chat live with virtual librarians, who then co-browse with the patron toward web resources that provide answers to their questions. There is extensive professional literature about library reference dating back to Samuel S. Green’s work in 1876 [titled “The Desirableness of Establishing Personal Intercourse and Relations Between Librarians and Readers”]. However, according to Richardson (2002) and others, professional reference writings through the decades are all too rich in anecdotal narratives with little or no research to back up recommended practices. In addition, the profession of librarianship lacks agreement on how to measure the quality of the reference transaction. The purpose of this study is to address this void by suggesting a valid and original methodology designed to provide a theoretical conceptual model of best practices for the reference interview, based on an empirical study of chat reference transactions.
  • The dissertation revolves around answering these three research questions.
  • Drawing on two-and-a-half years of archived academic library chat transcripts, this research uses Krippendorff’s (2004) content analysis methodology to study 333 randomly selected chat transcripts from a population of 2,500, analyzing 16 independent variables and their relationship with the one dependent variable of an accurate reference answer using Pearson correlations and ANOVA variance tests. Because of the networked nature of the virtual reference service, the 333 transcripts were answered by 120 virtual librarians at 43 American institutions, who conducted reference interviews with over 320 remote patrons accessing the service through the library web site of one Southern California undergraduate and masters-granting university.
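A minimal sketch of this sampling-and-testing workflow in Python with pandas and SciPy. The file name and column names (coded_transcripts.csv, service_time, accuracy) are illustrative placeholders, not the study’s actual data set or variable names.

```python
# Hypothetical reproduction of the workflow: sample 333 transcripts from the
# population of 2,500, then relate one numeric independent variable to the
# dependent variable (answer accuracy) with a Pearson correlation and a
# one-way ANOVA across quartiles. File and column names are placeholders.
import pandas as pd
from scipy import stats

population = pd.read_csv("coded_transcripts.csv")      # hypothetical coded data
sample = population.sample(n=333, random_state=42)     # random sample of 333

# Pearson correlation between a numeric IV and answer accuracy.
r, p = stats.pearsonr(sample["service_time"], sample["accuracy"])
print(f"r = {r:.3f}, r^2 = {r * r:.2f}, p = {p:.3f}")

# One-way ANOVA of accuracy across quartiles of the same IV.
sample["quartile"] = pd.qcut(sample["service_time"], q=4, labels=False)
groups = [g["accuracy"].values for _, g in sample.groupby("quartile")]
f_stat, p = stats.f_oneway(*groups)
print(f"F = {f_stat:.3f}, p = {p:.3f}")
```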
  • Here is an example of a chat transcript. All 333 transcripts were copied from the original web-based archived database and pasted into one Word document. For all transcripts, personal information such as names, phone numbers, and email addresses was de-identified in compliance with Institutional Review Board directions.
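As a rough illustration of the de-identification step (not the study’s actual IRB procedure), a regex-based scrub of email addresses and phone numbers might look like the following; the patterns and placeholder tags are assumptions.

```python
# Illustrative de-identification: mask e-mail addresses and US-style phone
# numbers in transcript text before analysis. Patterns are examples only.
import re

def deidentify(text: str) -> str:
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[EMAIL]", text)
    text = re.sub(r"\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}", "[PHONE]", text)
    return text

print(deidentify("You can reach me at (626) 555-1234 or jdoe@example.edu"))
# -> "You can reach me at [PHONE] or [EMAIL]"
```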
  • For the first research question (What measurable indicators are found for virtual chat reference transactions, looking exclusively at data created from the chat reference transcripts?), 16 independent variables were chosen based on their possible influence on the one dependent variable of question accuracy and their ability to be observed in the content analysis of the chat transcripts. Most of the variables were derived from the Reference and User Services Association’s (RUSA) behavioral guidelines and from the review of relevant literature.
  • Of the 16 independent variables, seven are quantitative; these were coded by student library assistants and are listed on the slide.
  • Of the 16 independent variables, nine are qualitative; these were coded by three professional academic reference librarians and are listed on the slide.
  • The measure of the one dependent variable, accuracy of answer, was adapted from Richardson and Reyes’ (1995) 8-point scale, with eight representing high accuracy and one representing low accuracy. Here are the top four definitions and their corresponding service quality labels. The three reference librarians coded the dependent variable for each of the 333 transcripts based on these definitions of the eight categories and the document “Guidelines for Scoring the Transaction Assessment Instrument.”
  • And here are the bottom four definitions and their service quality labels. This scale allows for a good range of possible accuracy outcomes, including referral and “I don’t know” type answers. Since the scale was originally interpreted as interval (the analysis used by Richardson and Reyes, 1995, was Pearson correlations), the current study also treats this 8-point accuracy scale as interval. So the range of responses chosen by the coders is recognized as continuous, and the distance between points on the scale is approximately equal.
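For reference, the eight points and their service-quality labels (shown on the two slides that follow) can be expressed as a simple lookup; the label helper below, which reports half-point averages as falling between two labels, is my own illustrative addition.

```python
# The 8-point accuracy scale (adapted from Richardson and Reyes, 1995) as a
# lookup table. Half-point scores occur when two coders' ratings are averaged.
import math

ACCURACY_LABELS = {
    8: "Excellent", 7: "Very good", 6: "Good", 5: "Satisfactory",
    4: "Fair / poor", 3: "Failure", 2: "Unsatisfactory", 1: "Most unsatisfactory",
}

def label(score: float) -> str:
    if score == int(score):
        return ACCURACY_LABELS[int(score)]
    lo, hi = math.floor(score), math.ceil(score)
    return f"between '{ACCURACY_LABELS[lo]}' and '{ACCURACY_LABELS[hi]}'"

print(label(7.0))   # Very good
print(label(7.5))   # between 'Very good' and 'Excellent'
```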
  • The answer to the second research question (Do published reference interview guidelines from RUSA, a set of other strategies and the nature of the query contribute to an accurate answer?) is yes. The inferential statistical methods used for this study resulted in 30 significant relationships (p < .05) between nine of the original 16 independent variables and the one dependent variable of answer accuracy. Five of the nine variables can be found in the RUSA behavioral guidelines, and the remaining four originated in other strategies or the nature of the online chat environment.
  • Looking at how the dependent variable was scored, the three pairs of coders disagreed on the category for answer accuracy 74 times; most coder pairs agreed on their scoring or differed by only 1 or 2 points. These 74 disagreeing scores were averaged, with a mean point difference of M = .38 (SD = .794). This averaging technique accounts for all the half-point data and the corresponding totals. N = 331 and not 333 because two of the transcripts were removed from analysis (one contained a patron determined to be a minor and one was a duplicate record). To answer the question “What percentage of reference transactions ended with librarians providing patrons an accurate answer?” the data reveal three things: 1) if the threshold is set at a relatively liberal level of 5 and above (as scored by the coders), librarians answered questions accurately about 76.7% of the time, or roughly three-quarters (service quality satisfactory or better); 2) if the threshold is set at a more conservative level of 7 or above, librarians answered questions accurately only 51.3% of the time, or about half (service quality very good or better), which interestingly closely reflects the 55% rule; 3) it is worth noting that in a relatively large share of transactions, 20.5% (68 occurrences, or about one-fifth), the librarian provided no direct accurate answer and instead referred the patron to another person, institution, or service.
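A small sketch of the two calculations just described: averaging the two coders’ scores for each transcript and then computing the share of transactions at or above a chosen accuracy threshold. The column names and the tiny example data set are invented for illustration.

```python
# Average paired coder scores, then report the percentage of transactions at
# or above a threshold (e.g., 5.0 = "satisfactory or better",
# 7.0 = "very good or better"). Example data are invented.
import pandas as pd

scores = pd.DataFrame({
    "coder_a": [8.0, 7.0, 5.0, 4.0, 8.0, 6.0],
    "coder_b": [8.0, 6.0, 5.0, 5.0, 8.0, 7.0],
})
scores["accuracy"] = scores[["coder_a", "coder_b"]].mean(axis=1)  # half points appear here

def pct_at_or_above(s: pd.Series, threshold: float) -> float:
    return 100 * (s >= threshold).mean()

print(pct_at_or_above(scores["accuracy"], 5.0))  # liberal threshold
print(pct_at_or_above(scores["accuracy"], 7.0))  # conservative threshold
```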
  • The answer to the third research question (What conceptual model of best practices can be suggested by an analysis of the data?) is summarized by the following eight-point rubric. The essence of the conceptual model is captured by the Latin phrase minor plus est, “less is more.”
  • With the remaining time I’d like us to look at each of these eight findings and discuss some of the statistics that back them up. The first of the eight best practices to improve virtual reference answer accuracy is to keep gaps between sending responses to patrons to no more than one-and-a-half minutes. The shorter the time gaps between sending responses to patrons, the better the answer accuracy. This principle reinforces RUSA’s behavioral guideline of librarian interest, particularly section 2.6: keep the librarian’s time away from the patron short and maintain “word contact” (RUSA, 2004, June). Although extreme outliers for the gap variable may have skewed the quartile results for this study, it is useful to remember to keep gaps to no more than one-and-a-half minutes, because anything near two minutes or longer is likely to decrease answer accuracy.
  • At the .05 level of significance, post hoc tests (Tukey HSD) revealed that the mean accuracy level when the librarian’s longest gap was less than 1.85 minutes (M = 6.68) was higher than when gaps fell between 1.87 and 2.83 minutes (M = 5.97) or exceeded 4.47 minutes (M = 6.03). In other words (using the labels corresponding to accuracy levels), accuracy was at the high good level for librarians who kept gaps small (below 1.85 minutes), dropped to satisfactory for librarians who allowed larger gaps (1.87–2.83 minutes), and was at low good for those with even longer gaps (over 4.47 minutes).
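A minimal sketch of this kind of quartile-based post hoc comparison, using statsmodels’ Tukey HSD; the file and column names (longest_gap, accuracy) are assumptions, not the study’s actual variable names.

```python
# Bin the longest-gap variable into quartiles, then run Tukey HSD pairwise
# comparisons of mean accuracy across the quartile groups.
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("coded_transcripts.csv")              # hypothetical coded data
df["gap_quartile"] = pd.qcut(df["longest_gap"], q=4,
                             labels=["Q1", "Q2", "Q3", "Q4"])

result = pairwise_tukeyhsd(endog=df["accuracy"],
                           groups=df["gap_quartile"],
                           alpha=0.05)
print(result)   # table of pairwise mean differences and which are significant
```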
  • The second variable to show a significant relationship with answer accuracy is service time, the total time of each chat transaction. The mean was 16.0 minutes (n = 331), about 7 minutes longer than Richardson’s (2002) mean of 8.9 minutes (n = 20,000). However, the mean of 16 minutes does fit data from six face-to-face reference studies, with mean service times ranging from 10 to 20 minutes. When service time was treated numerically, there was a significant negative relationship between service time and answer accuracy (Pearson correlation r = -0.143, r² = .02, p = .009): a shorter service time is related to a higher accuracy score. Only 2% of the variance in answer accuracy can be attributed to service time, so the relationship is fairly weak. When service time was treated categorically, there was a statistically significant difference in average service time based on the level of accuracy (one-way ANOVA F = 5.126, p = .002).
  • At the .05 level of significance, post hoc tests (Tukey HSD) revealed that the mean accuracy level for service times of less than 8.3 minutes (M = 6.82) was higher than for service times of 8.32–13.08 minutes, 13.1–20.75 minutes, and 20.77 minutes and above (M = 6.02, 6.04, and 6.12, respectively). In other words (using the labels corresponding to accuracy levels), accuracy was at the high good level for reference transactions with short service times (below 8.3 minutes) and dropped to low good for transactions with longer service times (over 8.32 minutes). Maintaining a shorter total transaction time will increase answer accuracy. One is reminded of Ranganathan’s fourth of five laws: save the time of the reader (Ranganathan, 1963). This part of the model also reflects another RUSA behavioral guideline, 2.7, finishing questions in a timely manner (RUSA, 2004, June). An interesting phenomenon occurs at a particular point along the time continuum of the reference transaction: once this point is reached, the librarian is less likely to get an accurate answer and is wasting the patron’s time. The wise librarian who does not find the answer within a reasonable time refers the patron to another person, institution, or service that may lead to an accurate answer. Findings from this study suggest that chat librarians need to keep transactions within eight minutes. Transactions beyond the eight-minute threshold not only decrease answer accuracy but do so at a very quick rate. The average time of 16 minutes per transaction found in this study is therefore too long.
  • Librarians who keep keystrokes to a minimum are more likely to increase answer accuracy than librarians who type a great deal. Keeping things brief is a trend found throughout this study. Virtual reference software vendors may want to test this principle by adapting the virtual reference interface to record not only the number of keystrokes but also time. Using the standard of 74 monospace characters per line of email text, this study suggests the virtual librarian keep keystrokes per transaction within six-and-a-half lines of text (or 480 characters). Anything over 15 lines of text will decrease accuracy. Text copied and pasted into the transaction would not count as keystrokes, nor would some scripted messages.
  • Using the electronic text standard of 74 monospace characters per line of email text, I’ve included line equivalents for each keystroke number (for example, 480 keystrokes = 6.5 lines of text). At the .05 level of significance, post hoc tests (Tukey HSD) revealed that the mean accuracy level for librarians who typed fewer than 6.5 lines (480 keystrokes) (M = 6.58) was higher than for librarians who typed more than 15 lines (1,128 keystrokes) (M = 5.93); for patrons who typed fewer than 2.5 lines (188 keystrokes) (M = 6.65) it was higher than for patrons who typed more than 7.5 lines (545 keystrokes) (M = 5.96); and for combined librarian-and-patron totals under 9 lines (690 keystrokes) (M = 6.63) it was higher than for combined totals over 22.5 lines (1,668 keystrokes) (M = 5.99). In other words (using the labels corresponding to accuracy levels), accuracy was at the good or high good level for transactions with fewer keystrokes (librarians below 6.5 lines, patrons below 2.5 lines, and librarian plus patron below 9 lines) and decreased to the satisfactory or good level for transactions with more keystrokes (librarians over 15 lines, patrons over 7.5 lines, and librarian plus patron over 22.5 lines).
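The keystroke-to-lines conversion used throughout this section is simply a division by the assumed 74-character email line; a trivial helper makes the thresholds concrete.

```python
# Convert raw keystroke counts to line-of-text equivalents, assuming the
# 74-monospace-characters-per-line convention described above.
CHARS_PER_LINE = 74

def lines_of_text(keystrokes: int) -> float:
    return keystrokes / CHARS_PER_LINE

print(round(lines_of_text(480), 1))    # about 6.5 lines: suggested librarian ceiling
print(round(lines_of_text(1128), 1))   # about 15 lines: where accuracy begins to drop
```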
  • Expect to type twice as many characters as the patron. A phenomenon in this study was the observation that for every character typed by the patron the librarian returned twice as many. This kind of dance online appeared across all four quartile segments between librarian and patron.
  • One important technique in the RUSA behavioral guidelines under the Listening/Inquiry area is the librarian’s use of open-ended questions (section 3.7). An example would be: “Please tell me more about your topic.” In this study, open-ended questions were coded as present 33% of the time, absent 22.5% of the time (meaning the coders felt the librarian should have used the technique but did not), and not applicable 22.8% of the time (meaning the librarian did not ask an open-ended question, but that was acceptable because one was not appropriate for that reference interview). When coders disagreed about assigning a score (present, absent, not applicable), the case was recorded as ambiguous, which occurred in 67 cases, or 20.1% (statistical analyses were run with and without these ambiguous data to determine whether the outcomes changed). It is interesting to note that in almost half of the cases (45.3%), either the librarian did not use an open-ended question (22.5%) or the coders determined the patron’s question did not require one (22.8%). More study might determine that questions posted online are more complete than face-to-face questions, reducing the need to train and assess reference librarians in asking open-ended questions.
  • At the .05 level of significance, post hoc tests (Tukey HSD) revealed that the mean accuracy level for transactions the coders judged not applicable for open-ended questions (M = 6.72) was higher than for transactions where the librarian did use an open-ended question (M = 5.97). In other words (using the labels corresponding to accuracy levels), accuracy was at the high good level for transactions that did not require an open-ended question and dropped to good for transactions where the librarian did ask one. I know it sounds odd that accuracy is lower when the suggested open-ended question technique is used, but that is what the data show.
  • A second important technique in the RUSA behavioral guidelines under the Listening/Inquiry area is the librarian’s use of closed-ended questions (section 3.8). An example would be: “What type of information do you need (books, articles, etc.)?” In this study, closed-ended questions were coded as present 55% of the time, absent 14.4% of the time (meaning the coders felt the librarian should have used the technique but did not), and not applicable 16.8% of the time (meaning the librarian did not ask a closed-ended question, but that was acceptable because one was not appropriate for that reference interview). When coders disagreed about assigning a score (present, absent, not applicable), the case was recorded as ambiguous, which occurred in 44 cases, or 13.2% (again, analyses were run with and without these ambiguous data). ANOVA tests found no statistically significant difference in average closed-ended question responses based on the level of accuracy (F = 2.378, p = .070), but when the ambiguous data were removed there was a significant difference (F = 3.314, p = .038).
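To illustrate running the same test with and without the ambiguous codes, a sketch along these lines could be used; closed_ended, accuracy, and the file name are placeholder names, not the study’s actual variables.

```python
# One-way ANOVA of accuracy across the coded categories (present / absent /
# not applicable / ambiguous), run on all data and again with the ambiguous
# codes filtered out. Column and file names are placeholders.
import pandas as pd
from scipy import stats

df = pd.read_csv("coded_transcripts.csv")      # hypothetical coded data

def anova_by_category(data: pd.DataFrame, col: str):
    groups = [g["accuracy"].values for _, g in data.groupby(col)]
    return stats.f_oneway(*groups)

print(anova_by_category(df, "closed_ended"))                                      # all data
print(anova_by_category(df[df["closed_ended"] != "ambiguous"], "closed_ended"))   # filtered
```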
  • After the ambiguous data are filtered out, at the .05 level of significance, post hoc tests (Tukey HSD) revealed that the mean accuracy level for transactions the coders judged not applicable for closed-ended questions (M = 6.69) was higher than for transactions where the librarian did not use a closed-ended question (M = 5.91). In other words, librarians got less accuracy when they did not use the suggested closed-ended question technique (and the coders thought they should have) versus when the coders determined the reference interview did not need it (and the librarian did not use it). Practically, this means that if the reference interview does not call for a closed-ended question, do not use one; there is a better chance of giving the patron an accurate answer. If the librarian fails to ask a closed-ended question when one should have been asked, the librarian risks giving a less accurate answer.
  • A third important technique in the RUSA behavioral guidelines, under the Follow-up area, is the librarian asking the patron “Does this completely answer your question?” (section 5.1), called the follow-up question in this study. The follow-up question was coded as present 37.5% of the time, absent 12.6% of the time (meaning the coders felt the librarian should have asked it but did not), and not applicable 32.4% of the time (meaning the librarian did not ask a follow-up question, but that was acceptable because one was not appropriate for that reference interview). When coders disagreed about assigning a score (present, absent, not applicable), the case was recorded as ambiguous, which occurred in 56 cases, or 16.8%. It is interesting that one-third of the transactions were coded as not applicable; further study could determine whether these questions were referred or were so clearly answered that a follow-up question was unnecessary. ANOVA tests found a statistically significant difference in average follow-up question responses by librarians based on the level of accuracy (F = 11.169, p = .000), and when the ambiguous data were removed there continued to be a significant difference (F = 16.482, p = .000).
  • Question difficulty is the last of the variables to show a significant relationship with answer accuracy. It was scored by the coders along a seven-point scale, with seven being high difficulty and one being low difficulty. As with the answer accuracy variable, the 186 scores on which coders disagreed were averaged, with a mean point difference of M = .82 (SD = .916). This averaging technique accounts for all the half-point data and the corresponding totals. Half of the patron questions were judged to be of low difficulty (1.0–2.0), three-quarters of the questions fell below medium difficulty (3.5), and only a small percentage (6.6%) were judged to be of high difficulty (5.0–7.0). There was a significant negative relationship between question difficulty and answer accuracy (r = -0.403, r² = .16, p = .000): a lower-difficulty question is related to a higher accuracy score. A total of 16% of the variance in answer accuracy can be attributed to question difficulty, so this is a moderately strong relationship.
  • At the .05 level of significance, post hoc tests (Tukey HSD) revealed that the mean accuracy level for the low-difficulty question categories of 1.0 and 1.5 (M = 7.24 and M = 6.95) was higher than for transactions with slightly more difficult questions in categories 2.0, 2.5, and above (M = 6.36, M = 5.94, etc.). In other words (using the labels corresponding to accuracy levels), accuracy was at the very good or high good level for reference interviews containing questions of very low difficulty. Interestingly, this means that very easy questions have high accuracy (which one would expect), but accuracy begins to suffer even for questions below medium difficulty, not just for the medium- to high-difficulty questions.

Presentation Transcript

  • Online Virtual Chat Library Reference Service: A Quantitative and Qualitative Analysis
    Dave Harmeyer, M.L.S., Ed.D.
    Director of Research & Development, University Libraries, Azusa Pacific University
    March 12, 2008, UCLA, GSE&IS
  • Outline
    1. Purpose of the Study
    2. Research Questions
    3. Methodology
    4. Variables (I.V., D.V.)
    5. Significant Findings
    6. Conclusions
    7. Questions & Answers
  • Purpose of the Study
    - Virtual chat reference augments the face-to-face reference interview
    - Library reference literature lacks research-based findings to back up recommended practices
    - This study fills the void with a theoretical conceptual model based on an empirical study of chat reference transactions
  • Research Questions
    1. What measurable indicators are found for virtual chat reference transactions, looking exclusively at data created from the chat reference transcripts?
    2. Do published reference interview guidelines from RUSA, a set of other strategies and the nature of the query contribute to an accurate answer?
    3. What conceptual model of best practices can be suggested by an analysis of the data?
  • Methodology
    - Two-and-a-half years of archived academic library chat transcripts using Krippendorff’s (2004) content analysis
    - 333 random transcripts from 2,500
    - Analyzing 16 independent variables and their relationship with one dependent variable of an accurate reference answer
    - Pearson correlations and ANOVA variance tests
    - 120 virtual librarians at 43 American institutions
    - 320 remote patrons accessing the service through one Southern California undergraduate/masters university
  • Methodology (cont.)
  • Variables
    Research Question 1 answered:
    - 16 independent variables
    - 1 dependent variable: question accuracy
    - Influence on question accuracy
    - Observed in content analysis of chat transcripts
    - Derived from RUSA guidelines
    - Derived from literature review
  • Quantitative IVs
    1. Librarian’s initial contact time (hold time, in seconds)
    2. Total time of transaction (service time, in seconds)
    3. Longest time gap by librarian (in seconds)
    4. Number of URLs co-browsed with the patron
    5. Keystrokes by librarian
    6. Keystrokes by patron
    7. Keystrokes by both
  • Qualitative IVs
    1. The question’s difficulty (seven-point scale)
    2. Response to a patron’s “are you there” statements (scored as present, not present, not applicable, or ambiguous when coders disagreed)
    3. Librarian’s friendliness
    4. Lack of jargon
    5. Use of open-ended questions
    6. Use of closed and/or clarifying questions
    7. Librarian maintains objectivity
    8. Asking if the question was answered completely
    9. The type of question (seven categories: ready reference, research question, library technology, request for materials, bibliographic verification, other, and ambiguous for disagreements among coders)
  • Dependent Variable
    Point | Coders’ Qualitative Judgment | Service Quality
    8 | Librarian gave (or referred) patron to a single source with an accurate answer | Excellent
    7 | Librarian gave (or referred) patron to more than one source, one of which provided an accurate answer | Very good
    6 | Librarian gave (or referred) patron to a single source which does not lead directly to an accurate answer but did serve as a preliminary source | Good
    5 | Librarian gave (or referred) patron to more than one source, none of which leads directly to an accurate answer but one which served as a preliminary source | Satisfactory
  • Dependent Variable (cont.)
    Point | Coders’ Qualitative Judgment | Service Quality
    4 | No direct accurate answer given, referred to another person or institution | Fair / poor
    3 | No accurate answer (or referral) given (e.g., “I don’t know”) | Failure
    2 | Librarian gave (or referred) patron to a single source which did not answer the question | Unsatisfactory
    1 | Librarian gave (or referred) patron to more than one source, none of which answered the question | Most unsatisfactory
    (Richardson and Reyes, 1995)
  • Significant Findings: Summary
    Research Question 2 answered: yes
    - 30 significant relationships (p < .05)
    - From 9 of 16 variables
    - 5 found in RUSA guidelines
    - 4 found in other strategies or nature of online chat
  • Significant Findings: Answer Accuracy
    Answer Accuracy as Judged by Coders (N = 331)
    Criteria | Point | Frequency | % | Cum. %
    Accurate Answer (single source) / Excellent | 8.0 | 88 | 26.6 | 26.6 (1/4)
    Accurate Answer (mult. sources) | 7.5 | 18 | 5.4 | 32.0
    Very good | 7.0 | 64 | 19.3 | 51.3 (1/2)
    Preliminary Source (single source) | 6.5 | 9 | 2.7 | 54.0
    Good | 6.0 | 33 | 10.0 | 64.0 (2/3)
    Preliminary Source (mult. sources) | 5.5 | 7 | 2.1 | 66.1
    Satisfactory | 5.0 | 35 | 10.6 | 76.7 (3/4)
    No Accurate Answer, referred | 4.5 | 11 | 3.3 | 80.0
    Fair / poor | 4.0 | 57 | 17.2 | 97.2
    “I don’t know,” no referral | 3.5 | 2 | 0.6 | 97.8
    Failure | 3.0 | 3 | 0.9 | 98.7
    Not Accurate (single source) | 2.5 | 2 | 0.6 | 99.3
    Unsatisfactory | 2.0 | 2 | 0.6 | 99.9
    Not Accurate (multiple sources) | 1.5 | 0 | 0.0 | 99.9
    Most unsatisfactory | 1.0 | 0 | 0.0 | 99.9
  • Significant Findings: Best Practices
    Research Question 3 answered: yes
    A Conceptual Model for Reference Chat Accuracy: minor plus est (less is more)
    1. Keep time gaps between sending responses to patrons to no more than one-and-a-half minutes
    2. Maintain a total chat transaction time of eight minutes or less
    3. Keep total keystrokes per transaction to within six-and-a-half lines of text (or 480 characters)
    4. Expect to type twice as many characters as the patron
  • Significant Findings: Best Practices (cont.)
    5. Be careful about beginning the question negotiation segment of the reference interview with an open question unless the nature of the patron’s question explicitly calls for one
    6. Ask closed or clarifying questions when appropriate
    7. At the end of the reference transaction, ask “Does this completely answer your question?”
    8. Even moderately difficult questions decrease answer accuracy, not just the medium to high difficult questions
  • Significant Findings: 1. Gaps
    - Keep time gaps between sending responses to patrons to not much more than one-and-a-half minutes
    - Reinforces RUSA’s interest guideline (2.6): keep time away from the patron short, maintain “word contact” (RUSA, June 2004)
    - Anything nearing two minutes or higher is likely to decrease answer accuracy
  • Significant Findings: 1. Gaps
    Longest Librarian Gap
    Quartile (min.) | Accuracy Mean | Sig. of Diff. (p)
    1st: 0 – 1.85 | 6.68 |
    2nd: 1.87 – 2.83 | 5.97 | .016 (1st & 2nd), diff = .71
    3rd: 2.85 – 4.45 | 6.31 | .403 (1st & 3rd, not sig.)
    4th: 4.47 and up | 6.03 | .036 (1st & 4th), diff = .65
  • Significant Findings: 2. Service Time
    - Maintain a total chat transaction time of eight minutes or less
    - Average = 16.0 minutes (n = 331)
    - 7 minutes more than Richardson’s (2002) 8.9 minutes (n = 20,000)
    - However, similar to six f2f studies with mean service time ranging from 10 to 20 minutes
  • Significant Findings: 2. Service Time
    Service Time of Transactions
    Quartile (min.) | Accuracy Mean | Sig. of Diff. (p)
    1st: 0 – 8.3 | 6.82 |
    2nd: 8.32 – 13.08 | 6.02 | .005 (1st & 2nd), diff = .80
    3rd: 13.1 – 20.75 | 6.04 | .007 (1st & 3rd), diff = .78
    4th: 20.77 and up | 6.12 | .020 (1st & 4th), diff = .70
  • Significant Findings: 3. Keystrokes
    - Keep total keystrokes per transaction to within six-and-a-half lines of text (or 480 characters)
    - Application to virtual software vendors (add a timer)
    - Anything over 15 lines of text will decrease accuracy
  • Significant Findings: 3. Keystrokes
    Keystroke Quartiles | Accuracy Mean | Sig. of Diff. (p)
    Librarian, 1st: 0 – 480 (6.5 lines)* | 6.58 |
    Librarian, 4th: 1128 (15 lines) and up | 5.93 | .041 (1st & 4th)
    Patron, 1st: 0 – 188 (2.5 lines) | 6.65 |
    Patron, 4th: 545 (7.5 lines) and up | 5.96 | .023 (1st & 4th)
    Both Librarian & Patron, 1st: 0 – 690 (9 lines) | 6.63 |
    Both Librarian & Patron, 4th: 1668 (22.5 lines) and up | 5.99 | .041 (1st & 4th)
    *measured at 74 keystrokes per line of text
  • Significant Findings: 4. Twice the Typing
    - Expect to type twice as many characters as the patron
    - Appeared across all four quartile segments between librarian and patron
  • Significant Findings: 5. Open-ended Questions
    Be careful about beginning the question negotiation segment of the reference interview with an open question unless the nature of the patron’s question explicitly calls for one
    Frequency of Open-ended Questions
    Category | Frequency | Percent
    Present | 112 | 33.0
    Absent (but should) | 75 | 22.5
    Not Applicable | 76 | 22.8
    Ambiguous | 67 | 20.1
  • Significant Findings: 5. Open-ended Questions
    Open-ended Questions
    Category | Accuracy Mean | Sig. of Diff. (p)
    Not Applicable | 6.72 |
    Present | 5.97 | .008 (3 & 1)
  • Significant Findings: 6. Closed-ended Questions
    Ask closed or clarifying questions when appropriate
    Frequency of Closed-ended and/or Clarifying Questions
    Category | Frequency | Percent
    Present | 183 | 55.0
    Absent | 48 | 14.4
    Not Applicable | 56 | 16.8
    Ambiguous | 44 | 13.2
  • Significant Findings: 6. Closed-ended Questions
    Closed and/or Clarifying Questions
    Category | Accuracy Mean | Sig. of Diff. (p)
    3. Not Applicable | 6.69 |
    2. Absent | 5.91 | .065 (3 & 2, not sig.)
    Ambiguous filtered:
    3. Not Applicable | 6.69 |
    2. Absent | 5.91 | .040 (3 & 2)
  • Significant Findings: 7. Follow-up Question
    At the end of the reference transaction, ask “Does this completely answer your question?”
    Frequency of the Librarian Asking If the Question Had Been Answered Completely
    Category | Frequency | Percent
    Present | 125 | 37.5
    Absent | 42 | 12.6
    Not Applicable | 108 | 32.4
    Ambiguous | 56 | 16.8
  • Significant Findings: 8. Question Difficulty
    Question Difficulty
    Criteria | Point | Frequency | % | Cum. %
    Low | 1.0 | 59 | 17.7 | 17.8
        | 1.5 | 41 | 12.3 | 30.2
        | 2.0 | 66 | 19.8 | 50.2 (1/2)
        | 2.5 | 55 | 16.5 | 66.8
        | 3.0 | 27 | 8.1 | 74.9 (3/4)
    Medium | 3.5 | 26 | 7.8 | 82.8
        | 4.0 | 17 | 5.1 | 87.9
        | 4.5 | 18 | 5.4 | 93.4
        | 5.0 | 6 | 1.8 | 95.2
        | 5.5 | 4 | 1.2 | 96.4
        | 6.0 | 7 | 2.1 | 98.5
        | 6.5 | 3 | 0.9 | 99.4
    High | 7.0 | 2 | 0.6 | 100.0
    (Points 5.0 and above together account for 6.6% of questions.)
  • Significant Findings: 8. Question Difficulty
    Question Difficulty and Accuracy (reporting only significance)
    Criteria | Points | Accuracy Mean | Sig. of Diff. (p)
    Low | 1.0 | 7.24 |
        | 2.0 | 6.36 | .035 (1.0 & 2.0)
        | 2.5 | 5.94 | .000 (1.0 & 2.5)
    Medium | 3.5 | 5.42 | .000 (1.0 & 3.5)
        | 4.0 | 5.53 | .001 (1.0 & 4.0)
        | 4.5 | 5.22 | .000 (1.0 & 4.5)
        | 5.0 | 4.92 | .009 (1.0 & 5.0)
        | 5.5 | 4.0 | .001 (1.0 & 5.5)
    High | 6.0 | 4.83 | .006 (1.0 & 6.0)
    Low | 1.5 | 6.95 |
        | 2.5 | 5.94 | .041 (1.5 & 2.5)
    Medium | 3.5 | 5.42 | .002 (1.5 & 3.5)
        | 4.0 | 5.53 | .032 (1.5 & 4.0)
        | 4.5 | 5.22 | .001 (1.5 & 4.5)
        | 5.5 | 4.0 | .005 (1.5 & 5.5)
    High | 6.0 | 4.83 | .038 (1.5 & 6.0)
  • 5. Conclusions
    - Virtual reference lacks a statistically sound conceptual model to guide the library profession toward improving the reference interview through empirical studies that inform best practices in professional training and assessment.
    - This study addresses that knowledge void by its discovery of several statistical relationships between nine behavioral factors and an accurate answer in the reference interview.
    - It is hoped that the suggested eight-point rubric and other results of this project can be a catalyst for practical application toward improving the practice of the global community of professionals and stakeholders in the field of library and information studies.
  • 6. Questions & Answers