Using an Availability Study to Assess Access to Electronic Articles

Presentation given at the 2010 Library Assessment Conference. Study results published in the October 2011 issue of the Journal of the Medical Library Association.

Speaker Notes
  • Introduce self. Note: This work was done at my prior library, Oregon Health & Science University, with help from my colleague, Carla Pealer.
  • Including this information for context but will go through it quickly in the interest of time.
  • Focus of today’s presentation will be methodology rather than results, though I will summarize our major findings [Note: cut that part if time is short]. Want to share methodology that you can apply in your own setting. Detailed results can be found in the conference paper.
  • Like most libraries, OHSU uses a suite of tools to manage electronic resources and provide access to them. Most of these were from Innovative Interfaces, except for the proxy server. Again, just context for our findings.
  • Now that we’ve covered the context, let’s talk about the problem we were trying to solve. We had lots of data related to use of our electronic collections, but none of it measured the complete experience of retrieving an article. Usage data capture quantity only: we can’t see what users want but don’t get, whether because we don’t have it or because something isn’t working, and we can’t see whether they encounter problems trying to access an article. Support requests provide anecdotal data on problems, but it’s hard to know whether a problem is widespread and how much time and energy we should spend trying to solve it. Also, users may not submit requests because they eventually found a way to get the article, or because they gave up. Usability tests are excellent for identifying problems with web interfaces, but they involve a short list of tasks created by library staff rather than actual user requests, so they don’t reflect user demand and are unlikely to include items for which there are problems; they are designed to identify problems with user interfaces rather than with holdings or retrieval tools. In short, we didn’t know how successful users are at getting the articles they want, and we didn’t know all the factors that get in their way or how often those problems occur. We needed more solid data to make decisions about collections and access, and another way to measure the quality of our electronic collections and access to them.
  • In a nutshell, when you do an availability study, you gather actual user requests, try to fill them, and see what happens. You record the outcomes, noting the number and nature of any problems you encounter, and analyze the data to learn (1) how often your library is able to satisfy a user’s request for an item (in our case, a journal article) and (2) what barriers users encounter when trying to retrieve items. The method has been around a while; it was first described by Paul Kantor in 1976.
  • And now for the world’s shortest literature review. If you want to do an availability study, I recommend the two articles by Nisonger; that’s where I started, and they were very helpful. Full citations are in the conference paper. There have been very few published availability studies involving electronic articles, and I couldn’t find any that involved a link resolver, as this one did.
  • So, now that we’ve covered the background, let’s take a look at what we did. This slide summarizes the steps. We’ll look at each step in more detail. Basically, we used link resolver log data to get a sample of articles users had tried to retrieve via the link resolver. We used the resolver and the library catalog to try to retrieve the articles, recorded the results of our testing plus some basic info about the article, and analyzed the results.
  • Innovative, our link resolver vendor, was able to send us log files that included each OpenURL received and processed by the resolver for a given period of time, as well as which options in the link resolver menu the user clicked. In other words, the logs told us which articles users tried to access, data we’d never had before. Our log files covered parts of days during two three-week periods (fall 2009 and spring 2010). We had to clean the data a bit, removing entries for web page elements like images and CSS, as sketched below. Then we were ready to test a sample.
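A minimal sketch of that log-cleaning step in Python, assuming a plain-text log with one entry per line. The actual WebBridge log format isn’t described here, so the file name and asset pattern are illustrative:

    import re

    # Filter raw resolver log lines down to candidate article requests,
    # dropping entries for page elements such as images, CSS, and JavaScript.
    ASSET_PATTERN = re.compile(r"\.(gif|jpe?g|png|css|js|ico)($|\?)", re.IGNORECASE)

    def clean_log(path):
        kept = []
        with open(path, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if not line or ASSET_PATTERN.search(line):
                    continue  # skip blanks and web page elements
                kept.append(line)
        return kept

    entries = clean_log("webbridge_log.txt")  # hypothetical file name
    print(len(entries), "candidate entries after cleaning")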
  • We tested every 3rd entry, skipping duplicates (where a user clicked the same item more than once in rapid succession) and entries for anything that wasn’t a journal article. A sketch of this sampling step follows.
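Here’s one way that sampling could look in Python. The genre check is an assumption (OpenURL 0.1 uses a genre parameter, 1.0 uses rft.genre), and in practice ambiguous entries would still need a manual look:

    from urllib.parse import parse_qs, urlsplit

    def is_article(openurl):
        # Crude genre check; treat unlabeled entries as articles by default.
        params = parse_qs(urlsplit(openurl).query)
        genre = params.get("rft.genre", params.get("genre", ["article"]))[0]
        return genre == "article"

    def sample(entries, step=3):
        # Drop immediate duplicates (same item clicked repeatedly in quick
        # succession), keep journal articles only, then take every step-th entry.
        deduped = [e for i, e in enumerate(entries) if i == 0 or e != entries[i - 1]]
        articles = [e for e in deduped if is_article(e)]
        return articles[::step]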
  • For testing, we tried to replicate what a user would do to retrieve full text using our two primary tools, the link resolver and the catalog. For the link resolver, we pasted the OpenURL from the log into a web browser. That took us to a menu of options in our link resolver (as shown in the screenshot on this slide), from which we attempted to retrieve full text.
  • After we attempted to retrieve the article via the link resolver, we tried to retrieve it via the catalog, which includes all the OHSU Library’s journal holdings, print and electronic. This screenshot shows part of a journal record display in the OHSU Library catalog—the part that shows electronic holdings.
  • As we tested, we recorded our results, documenting whether or not the article could be retrieved by each method and the nature of any problems encountered. When using the catalog, if an article was not available electronically, but the catalog indicated that it was available in print, we noted that. We didn’t go to the shelf to verify that it was really there.
  • In addition to documenting the results of our testing, we recorded basic information about each article tested, so we could see if and how article characteristics correlated with availability. We noted the database from which the link resolver request originated and whether or not the user clicked any links in the link resolver menu. We also noted the journal title and publication year of the article. A sketch of a possible record layout follows.
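As an illustration, the per-article record might be written out with Python’s csv module like this; the field names are mine, not the study’s exact spreadsheet columns:

    import csv

    # Hypothetical column layout combining testing results and article info.
    FIELDS = [
        "date_tested", "tester", "origin_database", "user_clicked_link",
        "journal_title", "pub_year",
        "resolver_available", "resolver_problem",
        "catalog_available_electronic", "catalog_available_print",
        "catalog_problem",
    ]

    def write_results(rows, path="availability_results.csv"):
        # rows: one dict per tested article, keyed by FIELDS
        with open(path, "w", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            writer.writeheader()
            writer.writerows(rows)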
  • We analyzed the results to determine the percentage of articles that were available electronically or in print, breaking down the results by publication date and origin. We also analyzed the nature and frequency of the problems we encountered. Now I’ll show you a few selected results to give you a sense of the kind of information you can glean from an availability study; complete results are in the paper. I’m skipping over most of the results because, while they are interesting to OHSU, results at other libraries are likely to be quite different. The methodology is what is likely to be useful to you.
  • This chart shows availability via the link resolver and catalog by format. Note especially that 27 items were available electronically via the catalog but could not be retrieved with the link resolver. If you’re interested, results are discussed in depth in the paper.
  • Here’s a list of the problems encountered when trying to retrieve electronic articles via the catalog. Lack of holdings was the biggest problem, accounting for 3 of the top 4 problems. While that result isn’t surprising, it can be useful to have the problem broken out into components (no holdings at all vs. print only, etc.), and the study provides data on which to base collection and resource allocation decisions. Note that I’ve arranged the problems in order from most to least common. One of the most useful things about an availability study is that it can help you allocate and prioritize resources (staff and money) to fix the biggest problems. The next slide shows another way to look at this data, illustrating where you will get the most value for your time and money.
  • A Pareto chart arranges problems in order from most to least common and plots the cumulative percentage as you move down the list, showing graphically where to devote resources to get the most results. The line across the top shows the cumulative percent of problems; for example, the first three categories represent 81% of the problems encountered. A sketch of how such a chart can be generated follows.
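A minimal sketch of generating such a chart with Python and matplotlib. The function is hypothetical; the example counts are the top categories from the catalog-barriers table later in this presentation:

    import matplotlib.pyplot as plt

    def pareto_chart(problem_counts, title):
        # Bars: problem counts, most to least common. Line: cumulative percent.
        items = sorted(problem_counts.items(), key=lambda kv: kv[1], reverse=True)
        labels = [name for name, _ in items]
        counts = [count for _, count in items]
        total = sum(counts)
        cumulative, running = [], 0
        for count in counts:
            running += count
            cumulative.append(100 * running / total)

        fig, ax = plt.subplots()
        ax.bar(labels, counts)
        ax.set_ylabel("Problem count")
        ax.set_title(title)
        line_ax = ax.twinx()  # second y-axis for the cumulative-percent line
        line_ax.plot(labels, cumulative, marker="o", color="tab:red")
        line_ax.set_ylim(0, 105)
        line_ax.set_ylabel("Cumulative percent")
        plt.setp(ax.get_xticklabels(), rotation=45, ha="right")
        fig.tight_layout()
        plt.show()

    pareto_chart(
        {"No holdings for title": 42, "Available in print only": 21,
         "Newer than most recent holdings": 21, "Older than oldest holdings": 9},
        "Barriers to accessing articles via the catalog")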
  • The other major category of results came from trying to retrieve articles via our link resolver. This chart shows the problems encountered. Note that this list includes only problems specific to the link resolver; if we had no holdings, we weren’t able to retrieve the item via the link resolver either. Also note that in some cases there was more than one problem with a single article. Interestingly, over half of the problems could be traced back to the metadata transmitted from the origin database, through the link resolver, to the full-text provider. There are lots of possible points of failure in that chain. Many of those points of failure are out of the library’s control, but findings like this can help us lobby for higher-quality metadata to improve access for patrons.
  • And here’s the Pareto chart for the link resolver problems. The rest of the findings are in the paper; as I mentioned, I wanted to spend the presentation time on methodology, given the audience, and while my findings are unique to one institution, the methodology can be useful nearly anywhere. Also, since there are so many problem categories, it was hard to fit them all on a slide and keep it readable.
  • We learned a lot about what gets in the way of users trying to access full text, which should make it easier to focus resources in the areas that will make the most difference to users. We now have data rather than just anecdotal information, and it tells us a lot about how well our collections, and the tools to access them, meet the needs of our users. We also discovered that link resolver logs are a rich source of information about what users are seeking. I don’t know whether other link resolver products generate logs or whether it’s possible to get access to them, but if so, I highly recommend it. In addition to an availability study, you could mine the logs to see which journals are most in demand, especially ones the library does not own, and identify which materials owned in print only are in high demand and therefore a top priority to acquire electronically (see the sketch below). The study also provides a baseline: after making changes based on the results, you could re-evaluate and show that the changes led to improvements for users. Finally, WebBridge doesn’t have logs that are accessible to the library; we only got access because we asked the product manager, who arranged for us to get the data.
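If you can get OpenURL log data, here is a sketch of the journal-demand mining suggested above, again assuming rft.jtitle/title parameters in the logged OpenURLs:

    from collections import Counter
    from urllib.parse import parse_qs, urlsplit

    def journal_demand(openurls):
        # Count requests per journal title; comparing the top titles against
        # holdings surfaces high-demand journals the library lacks or holds
        # in print only.
        counts = Counter()
        for url in openurls:
            params = parse_qs(urlsplit(url).query)
            title = params.get("rft.jtitle", params.get("title", [""]))[0]
            if title:
                counts[title] += 1
        return counts.most_common()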
  • I’d be glad to take questions now, or feel free to contact me after the conference. I’d love to hear from you.

Presentation Transcript

  • Using an Availability Study to Assess Access to Electronic Articles Presented at the 2010 Library Assessment Conference by Janet Crum, Director of Library Services, City of Hope With special thanks to Carla Pealer, Oregon Health & Science University Library, Portland
  • The setting: Oregon Health & Science University Library, Portland, OR  Freestanding academic health sciences center: medicine, dentistry, nursing, allied health, basic sciences, biomedical engineering  Library serves faculty, staff, students, patients, unaffiliated health practitioners, and walk-in users
  • What I’ll talk about today Availability study – what and why Methodology Summary of findings Conclusions and take-aways
  • Products/tools used Catalog: Millennium from Innovative Interfaces  Integrated with Innovative’s Electronic Resources Management (ERM) module Link Resolver: WebBridge from Innovative Interfaces Remote access: EZProxy Most electronic holdings maintained by library rather than purchased
  • The problem – incomplete information Available data provided incomplete picture  Usage data measures quantity, not quality, of access  User support requests -> anecdotal data  Usability tests contrived, don’t use actual user requests  LibQUAL+ data -> know there’s a problem but need more information to fix it How often are users able to get full text of desired articles? What gets in their way? And how often?
  • The solution – an availability study Oversimplified summary of method  Gather actual user requests (or simulate them)  Try to fill them the way a user would  Record and analyze results Measures how well library satisfies user requests Identifies and quantifies barriers to satisfying requests First described by Kantor in 1976
  • Very short review of literature on availability studies Lots of studies of print materials summarized in review articles by Mansbridge (1986) and Nisonger (2007). Nisonger (2007 and 2009) provides excellent introduction to availability studies Very few published availability studies involve electronic articles None include link resolver
  • Summary of our methodology Get sample of user requests from link resolver log Try to retrieve article via link resolver and catalog Record results + information about article Analyze results
  • The data Link resolver logs each user request  Date/time  OpenURL  Which link(s) the user clicks Requested log files from vendor  Parts of selected days during two 3-week periods (fall 2009 and spring 2010) Removed extraneous entries  Web page elements (e.g. images)
  • Sampling Tested random sample of 414 entries  Every 3rd entry  Skipped obvious duplicates  Skipped entries for items other than articles
  • Testing – link resolver Paste openURL in browser Attempt to retrieve full text using menu provided by resolver Test links in order they appear Stop when successful or when run out of links to test
  • Testing - catalog Search for journal Review holdings information If catalog indicates electronic availability, attempt to retrieve full text using catalog link(s)
  • Recording results in Excel Link resolver availability  Whether or not article could be retrieved electronically via article- or journal-level links  Nature of any problems encountered Catalog availability  Whether or not article could be retrieved electronically  If not, is it available in print?  Nature of any problems encountered
  • Other data recorded Link resolver info  Origin of request (e.g. PubMed, Scopus)  Whether or not user clicked any links Article info  Journal title  Year of publication Testing info  Date tested  Initials of tester
  • Analyzing results Availability  Via catalog and resolver  By publication date  By origin Problems  Nature  Frequency  Used Pareto charts
  • Results: Availability (rows: availability via link resolver; columns: availability via catalog)

      Availability via link resolver                 Available electronically   Available in print only   Not available
      Available with no problems                     261                        0                         0
      Available with problems                        19                         0                         0
      Not available                                  27                         21                        83
      Availability unclear due to incomplete data    3                          0                         0
      Total                                          310                        21                        83
  • Results: Barriers to accessing articles via the catalog

      Reason article was unavailable             Problem count   % of total problems
      No holdings for title                      42              40.38%
      Available in print only                    21              20.19%
      Newer than most recent holdings            21              20.19%
      Older than oldest holdings                 9               8.65%
      Article missing from target site           4               3.85%
      Gap in holdings                            2               1.92%
      Subscription/payment problem               2               1.92%
      Supplement/special issue not available     1               0.96%
      Problem with proxy configuration           1               0.96%
      Unknown error in source citation           1               0.96%
      Total                                      104             100.00%
  • [Pareto chart: Barriers to Accessing Articles via the Catalog. Bars show % of total for each problem; the cumulative-percent line rises from 40% through 61%, 81%, 89%, 93%, 95%, 97%, 98%, and 99% to 100%.]
  • Results: Barriers to accessing articles via link resolver

      Problem                                       Problem count   Percent of total problems
      Incomplete or inaccurate metadata             38              57.58%
      Article missing from provider site            6               9.09%
      CrossRef down or unable to process request    4               6.06%
      Subscription/payment problem                  3               4.55%
      Holdings incorrect in knowledge base          3               4.55%
      Resolver configured incorrectly               2               3.03%
      Concurrent user limit reached                 2               3.03%
      Article-level link led to journal page        2               3.03%
      Unknown problem                               2               3.03%
      Broken link in knowledge base                 1               1.52%
      Target site down                              1               1.52%
      Target not set up in resolver                 1               1.52%
      Incorrect or incomplete citation              1               1.52%
      Total                                         66              100%
  • [Pareto chart: Barriers to accessing articles via link resolver. Bars show percent of total for each problem; the cumulative-percent line rises from 58% through 67%, 73%, 77%, 82%, 85%, 88%, 91%, 94%, 95%, 97%, and 98% to 100%.]
  • Conclusions and take-aways Availability studies provide  Useful data to support decisions re: allocating resources  A powerful way to assess the quality of collections and access to them Link resolver logs are a gold mine of information about what users are trying to access Don’t know if you can get resolver log data? Ask!
  • Questions? Please get in touch. Thank you! Janet Crum, Director, Library Services, City of Hope, jcrum@coh.org, 626-256-4673 x68614