The collection and meaningful analysis of accurate usage statistics has become increasingly important as library budgets continue to shrink.
In-house: circulation and re-shelving statistics
Replaced by vendor-provided data for e-resources
Very valuable, but difficult to collect, manage and assess
Where to begin: locate where to go to get usage reports.
Over time, attempts to gather e-resource usage statistics had been somewhat erratic and shared among a number of different people. As a result, the library had a lot of incomplete and incorrect information stored in a variety of places.
We gathered and corrected the existing information, tested it to see if it worked, and contacted vendors for missing information.
We then stored it in one place where it could be both secure and appropriately available: directly in the spreadsheet.
With some more enhancements, our spreadsheet became a sort of home-grown ERM.
Discontinued: it hasn't turned out to be what we thought.
- Cannot set up a logging system (with reminders/tasks) that is not check-in prompted; our current system of emailing works just fine.
- Will not accept locally created vendor records for non-EBSCO-associated vendors.
- Will not accept attachments (i.e. scanned licenses, etc.).
- Clunky and slow.
- Made mostly redundant by EBSCONET and A-to-Z.
- Seems to be more than we really need.
Librarians are able to compare usage statistics from different vendors; derive useful metrics such as cost-per-use; make better-informed purchasing decisions; and plan infrastructure more effectively. Tools are needed to help librarians make sense of the data.
ARL New Measures Initiative: http://www.arl.org/stats/newmeas/newmeas.html
The ARL (Association of Research Libraries) New Measures Initiative was set up in response to two needs: increasing demand for libraries to demonstrate outcomes/impacts in areas important to the institution, and increasing pressure to maximise use of resources.
Of particular interest is the E-metrics portion of this initiative, an effort to explore the feasibility of defining and collecting data on the use and value of electronic resources, part of the new discipline of usage bibliometrics.
SUSHI was developed as an attempt to address this difficulty. It acts as an enabling technology by allowing Usage Consolidation modules to automate the harvesting of COUNTER reports.
COUNTER has worked with NISO on SUSHI (Standardized Usage Statistics Harvesting Initiative) to develop a protocol that facilitates the automated harvesting and consolidation of usage statistics from different vendors. The protocol may be found on the NISO website at http://www.niso.org/workrooms/sushi/
NISO: the National Information Standards Organization
Links:
http://www.niso.org/apps/group_public/project/details.php?project_id=97
http://www.niso.org/workrooms/sushi/
http://www.projectcounter.org/
http://www.projectcounter.org/documents/newsrelease_apr2012.doc
SUSHI is not a stand-alone application; it works with another system to retrieve COUNTER usage reports. The COUNTER reports are in XML format, so they are not easily readable by humans; they need to be loaded into another system for processing and reporting. For SUSHI to be effective, a usage management system must be in place.
Checklist of general features:
- Database to store usage data and relate it to titles in databases and packages, and to the platforms where they are hosted
- Data structure that allows usage to be consolidated (able to find all usage for a particular title)
- A mechanism to load COUNTER reports
- A method of matching variants of titles and variant ISSNs
- An interface to generate reports
Checklist of features for a SUSHI implementation:
- Develop a SUSHI client
- Store SUSHI-related data about the hosts of content: Are they SUSHI compliant? The URL of their SUSHI server and login credentials; the day of the month to retrieve reports
- Create a scheduler to detect when it's time to harvest the usage and automatically start the SUSHI client
- Create a mechanism to process and import COUNTER data in XML format
- Add some error handling
Providing tools and support to developers: web site, webinars, implementer listserv. Recruiting experts, the 寿司職人 (sushi shokunin, "sushi artisans").
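The implementation checklist above begins with "develop a SUSHI client." As a rough illustration only (not the presenters' code), the sketch below builds the SOAP ReportRequest body such a client would POST to a vendor's SUSHI server. The element names approximate the Z39.93 schema from memory and should be checked against the NISO documentation; the requestor/customer IDs, report name, and dates are invented.

```python
# Hypothetical sketch of one piece of a SUSHI client: building the SOAP
# request body that asks a vendor's SUSHI server for a COUNTER report.
# Element names approximate the Z39.93 schema; IDs and dates are invented.

SOAP_TEMPLATE = """\
<SOAP-ENV:Envelope
    xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:sus="http://www.niso.org/schemas/sushi">
  <SOAP-ENV:Body>
    <sus:ReportRequest>
      <sus:Requestor><sus:ID>{requestor}</sus:ID></sus:Requestor>
      <sus:CustomerReference><sus:ID>{customer}</sus:ID></sus:CustomerReference>
      <sus:ReportDefinition Name="{report}" Release="{release}">
        <sus:Filters>
          <sus:UsageDateRange>
            <sus:Begin>{begin}</sus:Begin>
            <sus:End>{end}</sus:End>
          </sus:UsageDateRange>
        </sus:Filters>
      </sus:ReportDefinition>
    </sus:ReportRequest>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>
"""

def build_report_request(requestor, customer, report="JR1",
                         release="4", begin="2013-01-01", end="2013-01-31"):
    """Return the XML body for one month's report request."""
    return SOAP_TEMPLATE.format(requestor=requestor, customer=customer,
                                report=report, release=release,
                                begin=begin, end=end)

# A scheduler (cron or similar) would call this on each vendor's configured
# day of the month and POST the result to the stored SUSHI server URL.
print(build_report_request("wcu-requestor", "wcu-customer"))
```

The point of the sketch is the division of labour on the checklist: the client only builds and sends requests; the scheduler, the stored server URLs and credentials, and the XML import mechanism are separate pieces.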
A great article: Fry, A. (2013). "A Hybrid Model for Managing Standard Usage Data: Principles for e-Resource Statistics Workflows." Serials Review 39(1): 21-28.
The great value will be in having your key platforms loaded into Usage Consolidation so you can see cost-per-use integration for your e-journals and e-journal packages handled by EBSCO, including the Package Dashboard in EBSCONET Analytics.
Strengths/weaknesses: Usage Consolidation cannot total usage data across all platforms for reports such as ACRL's annual survey.
Beta tester: The University of Alabama at Birmingham (UAB), fall 2011. If a vendor is SUSHI compliant: JR1, DB1, DB2.
Wake Forest: UC is basically their online platform; you pay an annual fee and can use their service to download and store usage statistics. ULS is when you pay EBSCO extra to do all the work for you of setting up the platform, downloading the stats, and reconciling the title differences.
The real time-killer is reconciling the downloaded reports. Whether you have UC import the reports using SUSHI or you download COUNTER reports manually and upload them into UC, you still have to manually reconcile the titles in the report against EBSCO's knowledgebase. UC gives you a list of all the journals from the COUNTER report that it couldn't automatically match to the knowledgebase (usually because there are too many possibilities), and you have to go one by one and select which knowledgebase title each journal matches. This can be a huge task at first. I have heard EBSCO reps and other librarians using UC say that the number of titles that must be reconciled decreases with each report, because the system remembers how you matched them before. But plan on lots of time up front.
One other criticism of EBSCO Usage Consolidation: right now UC only supports COUNTER reports JR1, BR1, BR2, DB1, and DB2. We need DB3 (which becomes PR1 with COUNTER Release 4) for annual stats reporting.
But really, I think UC is designed for compiling and consolidating journal usage stats, which makes sense for EBSCO anyway. Based on what I've seen so far, having them load the stats is the way to go.
Gather particular statistics for IPEDS or ARL, using the suggested metrics.
Journal statistics only gathered when a renewal decision was needed.
Compiling stats into a single spreadsheet takes a long time; looking at large sets of data, like Big Deal packages, becomes a full-time job.
Generally we look at use for just one fiscal/subscription year. Use or CPU alone doesn't tell you how a resource measures up to the rest of your collection, or compared to last year's numbers, etc.
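Cost-per-use itself is trivial arithmetic; the work is in assembling the inputs. A toy example (titles, costs, and counts are all invented for illustration):

```python
# Toy cost-per-use calculation of the kind usually done in a spreadsheet.
# Costs and use counts are invented for illustration.

annual_cost = {"Journal A": 1200.00, "Journal B": 300.00}
annual_use = {"Journal A": 480, "Journal B": 12}  # JR1 full-text requests

def cost_per_use(cost, use):
    """Cost divided by use; None when there was no recorded use."""
    return None if use == 0 else round(cost / use, 2)

for title, cost in sorted(annual_cost.items()):
    print(title, cost_per_use(cost, annual_use.get(title, 0)))
# Journal A works out to 2.50 per use and Journal B to 25.00 -- but as the
# notes say, a single year's CPU means little without comparison points.
```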
By automating the gathering of statistics, we can consider which statistics would actually be useful to us, rather than spending what little time we have collecting only what we are required to.
Valid COUNTER reports: JR1, DB1, DB2, BR1, BR2 (both Release 3 and 4).
A journal's combined use shows where patrons access journal content: e.g. the publisher's site, JSTOR, and EBSCO databases.
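"Combined use" is just a sum over platforms keyed on a common identifier. A minimal sketch, with all rows invented:

```python
# Sketch of "combined use": totalling one journal's JR1 counts across the
# platforms that host it, keyed on ISSN. All rows are invented examples.
from collections import defaultdict

jr1_rows = [
    # (platform, ISSN, full-text requests)
    ("Publisher Site", "0028-0836", 310),
    ("JSTOR",          "0028-0836", 55),
    ("EBSCOhost",      "0028-0836", 120),
    ("Publisher Site", "1476-4687", 90),
]

combined = defaultdict(int)
for platform, issn, requests in jr1_rows:
    combined[issn] += requests

print(dict(combined))  # {'0028-0836': 485, '1476-4687': 90}
```

Keying on ISSN rather than title string is what makes the consolidation reliable; title variants across platforms are what generate exception reports.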
Usage Consolidation set-up is done at the vendor level and applies to all journals and resources on the platform (for instance, HighWire).
Includes a feature called Usage Loading Service. We received a certain number of these included in our UC subscription, with the option to pay for additional ones. EBSCO will manually load some historical and all current statistics twice a year, including all troubleshooting and clean-up.
EBSCO-loaded vendors: we gave them the vendors with large numbers of resources and the database-heavy vendors (EBSCO, Gale, etc.).
Configuring vendors: data entry to copy and paste URLs, usernames, and passwords into the system, then selecting which reports are valid. No batch uploading.
Additional set-up is needed for SUSHI harvesting: many of these providers require you to contact the publisher to activate permissions at the publisher site.
Fewer providers support SUSHI than expected. I expected most COUNTER-compliant providers to also be SUSHI compliant, but some do not support SUSHI, and others do but have XML issues which cause the loads to fail in UC. We can't use SUSHI for Elsevier, Wiley, or IEEE.
If the exceptions are not worked, those statistics are not loaded and are unavailable for reports.
This is the estimated amount of time WCU spent on set-up and a breakdown of the ongoing time commitment. Before we began this process we had already gathered the login credentials for our providers; there were maybe 3 or 4 we had to add. The set-up time includes the basic data entry as well as configuring SUSHI for the vendors, which includes contacting publishers to enable SUSHI permissions and troubleshooting with EBSCO when we had errors.

The portion of the ongoing process which takes staff time is the exceptions report from the last slide. These are titles where UC couldn't find a match, or where there was a match against more than one title. Once you go through this clean-up, the system remembers your choices for the disambiguation and applies them to future loads. As you go on, the number of exceptions shrinks; in fact, for us, many of the publishers now have 0 exceptions to clean up. But at the start it can be a few hundred.

We've chosen to load statistics for the resources which aren't pulled in with SUSHI on a quarterly basis. This can be done on your own schedule; it takes just as much time to load 3 months of data as it does a year. If you don't need the statistics available for reports on a regular basis, and you know you always review resources over the summer, you could easily do it once a year. The data EBSCO loads for us, for instance, is loaded twice a year.
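The "system remembers your choices" behaviour described above can be sketched as a simple lookup chain: match by identifier first, then by a previously recorded staff decision, otherwise flag an exception. The knowledge base and matching rules here are simplified inventions, not UC's actual logic:

```python
# Simplified sketch of the exceptions workflow: match an incoming report
# title against a knowledge base automatically (by ISSN, then by a
# remembered staff decision); anything else becomes an exception.

knowledge_base = {"0028-0836": "Nature", "0036-8075": "Science"}
remembered = {}  # report title -> ISSN chosen by staff during clean-up

def match(report_title, report_issn):
    if report_issn in knowledge_base:
        return report_issn
    if report_title in remembered:
        return remembered[report_title]
    return None  # exception: needs human review

assert match("Nature", "0028-0836") == "0028-0836"   # clean ISSN match
assert match("Nature (London)", "") is None          # first load: exception
remembered["Nature (London)"] = "0028-0836"          # staff resolve it once
assert match("Nature (London)", "") == "0028-0836"   # later loads: automatic
```

This is why the exception reports shrink over time: each resolved title moves from the manual pile into the remembered matches.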
Report types: Title (includes book titles and journal titles), Database, or COUNTER format.
Choose a date range, and either totals or monthly breakdowns.
Title and Database reports can only be run for one metric at a time.
Usage projection appears to use the previous year's statistics to estimate future use: it calculates the total amount of use and then divides it out evenly over the months.
Reports include title, ISSNs, publisher, platform, and use data.
We are interested in the top- and bottom-use titles we subscribe to. The same is true when looking for zero-use titles: there are a ton of journals in aggregators which would be included, so you'd have to export the report and filter them out first.
The same goes for cost and CPU calculations: these features are available in Subscription Management (which I will be showing off in a few minutes); otherwise you need to do them in Excel.
Enhancements of the SUSHI (Standardised Usage Statistics Harvesting Initiative) protocol are designed to facilitate its implementation by vendors and its use by librarians. The schema covers all the usage reports listed below. COUNTER reports in XML must be downloadable using the SUSHI protocol, manually, or both.
The JUSP portal contains COUNTER-compliant usage statistics (JR1 and JR1a). JUSP is a shared service that aims to save academic libraries time and duplicated effort by providing a single gateway through which to access their usage statistics from all publishers, gateways and host intermediaries participating in the NESLi2 consortium agreements, and from other publisher deals.
"SUSHI: Delivering Major Benefits to JUSP." Ariadne: Web Magazine for Information Professionals, 5 December 2012. http://www.ariadne.ac.uk/issue70/meehan-et-al
Transcript of "Shared Tools for Analysis of E-resources in Higher Education Libraries in Sweden" (NordLIC)
Background: Swedish Higher Education (HE) libraries spend on average 70% of their media budget on e-resources. Harvesting is easy; analysis is hard! Are budgets being spent on the right resources?
Aim: "facilitate analysis of usage in relation to economic factors for HE libraries in Sweden."
Project description: a joint project between Karolinska Institutet University Library and Stockholm University Library, funded by the National Library of Sweden during 2011. The project group consisted of two librarians from each institution and one statistician; we also had contact with a bibliometrician.
Outcomes: a template for the workflow for harvesting and collating usage stats during the year; practical tools to facilitate easier, more effortless evaluation of e-resource package deals; KPIs for e-journals, e-books and databases; instructions for a bibliometric analysis (cited by the university / in library collection; Web of Knowledge, BibExcel, abbreviated title to ISSN, search strategy); identify trends!
What now? The National Library administers the tools, freely available online. Community?
Key Performance Indicators: Selena Killick.
Demonstration: Henrik Åkerfelt, Karolinska Institutet, University Library.
http://www.lib-stats.org.uk/
New data elements: DOI and proprietary identifier. These improve identification of the items listed in the reports and the matching of usage to the correct items in a knowledge base. This involves coordination with KBART so the same identifier is on both the vendor's title list and their COUNTER report.
The difference between COUNTER and many other standards initiatives is that a content provider must pass a formal audit in order to be recognized as compliant. The goal is for auditors to use the COUNTER-SUSHI Implementation Profile as a guide and thus help eliminate the sometimes seemingly minor inconsistencies that cause major headaches.
Adoption of SUSHI and the required COUNTER XML reports is increasing; however, for those of us who work with these technologies, inconsistencies between implementations can be a problem. To address that, the SUSHI Standing Committee has released an Implementation Profile for COUNTER and SUSHI; we will review what it includes.
SUSHI is an XML-based standard that requires the COUNTER report also to be delivered as XML. A new Code of Practice means the XML schemas need to be brought in line.
Many of the challenges faced by developers of usage consolidation tools are caused not by technical errors in the implementation of the SUSHI standard or the COUNTER XML it delivers, but by interpretation of the underlying standards. The COUNTER-SUSHI Implementation Profile has been created to set clear expectations as to how the standards should be interpreted and implemented, improving consistency.
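To see why the new identifiers matter to a consolidation tool, here is a sketch of pulling identifiers out of a COUNTER-style report item. The XML fragment is hand-made and only loosely modeled on the COUNTER schema (it is not a schema-valid report, and the proprietary ID value is invented):

```python
# Sketch: extracting identifiers from a COUNTER-style XML report item so a
# loader can match on them instead of guessing from the title string.
import xml.etree.ElementTree as ET

SAMPLE = """<Report>
  <ReportItems>
    <ItemIdentifier><Type>Print_ISSN</Type><Value>0028-0836</Value></ItemIdentifier>
    <ItemIdentifier><Type>Proprietary</Type><Value>NPG:nature</Value></ItemIdentifier>
    <ItemName>Nature</ItemName>
    <ItemPlatform>Publisher Site</ItemPlatform>
  </ReportItems>
</Report>"""

root = ET.fromstring(SAMPLE)
for item in root.iter("ReportItems"):
    ids = {ident.findtext("Type"): ident.findtext("Value")
           for ident in item.iter("ItemIdentifier")}
    print(item.findtext("ItemName"), ids)
# With the same stable identifier in both the KBART title list and the
# COUNTER report, title-matching exceptions largely disappear.
```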
From Spreadsheets to SUSHI: Five Years of Assessing E-Resources
From Spreadsheets to SUSHI
Five years of assessing use
Leslie Farison: Appalachian State University
Kristin Calvert: Western Carolina University
In the beginning
Acquisitions: Title, order record, publisher
Collections: usage statistics, cost per use,
vendor site access information (URL; login; password)
• COUNTER provides a somewhat comparable model for providing usage data
• COUNTER reports are the most standardized, so use those where available
What measure is meaningful?
• Sessions and searches not meaningful
• Full text for full text resources
• Abstracts for index/abstract resources
• Added fields to spreadsheet:
One time $
Notes & cancellation restrictions
Added in 2009
Usage module not ready
Other problems encountered
Discontinued late 2010
Preparing for CY 2013
• Many changes to take into
consideration due to COUNTER R4.
• Released April 2012
• Implementation date December 2013
COUNTER Release 4
• A single, integrated Code of Practice
covering journals, databases, books,
reference works and multimedia
• No more sessions; result clicks & record views
• Increased learning curve.
R4: Journal Reports
Journal Report 1
Number of Successful Full-Text Article Requests by Month and Journal
Journal Report 1 GOA
Number of Successful Gold Open Access Full-Text Article Requests by Month and Journal
Journal Report 1a
Number of Successful Full-Text Article Requests from an Archive by Month and Journal
Journal Report 2
Access Denied to Full-Text Articles by Month, Journal and Category
Journal Report 3
Number of Successful Item Requests by Month, Journal and Page-type
Journal Report 3 Mobile
Number of Successful Item Requests by Month, Journal and Page-type for
usage on a mobile device
Journal Report 4
Total Searches Run By Month and Collection
Journal Report 5
Number of Successful Full-Text Article Requests by Year-of-Publication (YOP) and Journal
Database & Book Reports
Database Report 1
Total Searches, Result Clicks and Record Views by Month and Database
Database Report 2
Access Denied by Month, Database and Category
Platform Report 1
(formerly Database Report 3)
Total Searches, Result Clicks and Record Views by Month and Platform
Book Report 1
Number of Successful Title Requests by Month and Title
Book Report 2
Number of Successful Section Requests by Month and Title
Book Report 3
Access Denied to Content Items by Month, Title and Category
Book Report 4
Access Denied to Content items by Month, Platform and Category
Book Report 5
Total Searches by Month and Title
Multimedia Report 1
Number of Successful Full Multimedia Content Unit Requests by Month and Collection
Multimedia Report 2
Number of Successful Full Multimedia Content Unit Requests by Month,
Collection and Item Type
Title Report 1 (formerly
Journal/Book Report 1)
Number of Successful Requests for Journal Full-Text
Articles and Book Sections by Month and Title
Title Report 1 Mobile
Number of Successful Requests for Journal Full-Text Articles and Book Sections by Month and Title (formatted for normal browsers/delivered to mobile devices AND formatted for mobile devices/delivered to mobile devices)
Title Report 2
Access Denied to Full-Text Items by Month, Title and Category
Title Report 3
Number of Successful Item Requests by Month, Title and Page Type
Title Report 3 Mobile
Number of Successful Item Requests by Month, Title and Page Type (formatted for normal browsers/delivered to mobile devices AND formatted for mobile devices/delivered to mobile devices)
Librarians must download spreadsheets one file at a time from each vendor site and then compile them manually.
Librarians need a more efficient method for getting the data.
Not a stand alone application
Works with another system
Retrieve COUNTER reports
Not a panacea
To date, SUSHI has failed to live up to its promise, largely because of the lack of systems available to take advantage of the protocol and the ongoing irregularity of vendors' applications of the COUNTER standard.
• Many universities are adding ERMs
and/or third party assessment products
and are using SUSHI to gather some
data, mostly for journals.
• Continuing to use other methods, such as spreadsheets, for others.
Fry, A. (2013). "A Hybrid Model for Managing Standard Usage Data: Principles
for e-Resource Statistics Workflows." Serials Review 39(1): 21-28.
EBSCO Usage Consolidation
• EBSCONET Usage Consolidation
Product launched Jan 2012
• Reviewed in April 2012
• Revisited June 2013
• Added in July
at Western Carolina University
• Statistics we collected were…
Driven by reporting agencies
Goals for Western Carolina Univ
• Reduce time spent maintaining database
• Make cost-per-use figures more accessible to librarians
• Be able to use journal statistics holistically and
move away from title-by-title requests
▫ “Do we have use data for Nature?”
EBSCO Usage Consolidation
COUNTER-compliant statistics are loaded from all sources
and matched against EBSCO’s AtoZ Knowledgebase
SUSHI enabled for harvesting statistics automatically
Report tools look at use by platform, title, publisher, etc.
Quickly identify most- and least-used resources
Data is fed into EBSCONET Subscription Management to combine use and cost data for incredibly powerful collection analysis.
Selected vendors where EBSCO
loads/manages statistics for
you. These vendors cannot be
Configured remaining e-resource providers, selecting
SUSHI harvesting as often as
Manually load statistics for
Load historical statistics.
Requires staff to touch each
exception and ignore a title or
find a match.
* 1 hour or less
Process & Clean-up
Check resources have
successfully matched against
the EBSCO AtoZ
Requires some staff time to deal with the exceptions report, though the size of these reports shrinks each month as the system learns from you.
Decide how often to load
• Report types
• Date range
• Title, Database or
• Single metric per report
• Sessions or FT requests
• Limit to one platform
• Limit to top/bottom use
• 10 titles with most use
• Limit to use amount
• All titles with use <= 5
• Estimate missing usage
• Project use for the
remainder of the year
• Export to Excel, csv, tsv
• Unable to exclude
aggregator usage (as a
group) from title reports
▫ Standard COUNTER
▫ Title sort field
▫ EBSCO A-Z Holdings Y/N
• Must export reports to include cost information, OR look up the title in Subscription Management
Journals in UC
• With integration with EBSCO AtoZ, journal statistics
work really well.
• COUNTER reports from publishers can include non-subscribed titles. UC lets you limit your reports to titles in your collection.
• Spreadsheet formatting from publisher can require
tweaking before loading in UC.
• Publishers can be slow to respond to requests for SUSHI activation
Databases in UC
• WCU still using spreadsheets
• Many, many database providers are not COUNTER compliant
• Usage loading service is great for EBSCOhost databases
– there are a LOT of titles in the DB1 and JR1 reports.
• Database title matching is problematic and there is no
standard number to use.
▫ We cannot load stats for some because UC can't match the name of the database to the entry in the knowledgebase
E-Books in UC
• WCU testing this now
• WCU doesn't use EBSCO AtoZ to manage our e-books
• No way to integrate cost/order information
▫ Also true for databases and journals not acquired through EBSCO
▫ Must export report and calculate in Excel
If there are some functional issues, we have not eliminated spreadsheets, and some calculations still need to be done in Excel, why do we want to use Usage Consolidation?
The small amount of work it takes to load data for journal subscriptions allows us to spend more time using the data to do our jobs, rather than having the data collection become our entire job.
Data for decision-making at Western Carolina
2011 process versus 2013 process
More than usage data
• Publisher site
2011 = Access Database
3 solid months
of work and
A Vision for the Future
• SUSHI implementation seems to be
better suited for a group of libraries.
• A shared portal for some group of
libraries in North Carolina.
• Perhaps the UNC System.
• Libraries participating in the Carolina Consortium
• Samples of portals include:
SUSHI: Delivering Major
Benefits to JUSP
The use of SUSHI has demonstrably saved JUSP and
the UK HE community hundreds of thousands of
pounds of staff costs since its inception; add in an
estimated saving of over 97% of the time spent on data collection and
processing every month and the dual benefits are
enormous; as more publishers join, this efficiency will
continue to increase. In an age of funding cuts and
budget restrictions, the combination of JUSP and
SUSHI thus affords an economical, high-quality
alternative to the previously onerous and unending task
of journal statistics gathering and management.
Shared Tools for analysis of e-resources in
Higher Education Libraries in Sweden
The perfect storm
COUNTER R4: December 2013
New data elements
More stringent auditing
Increasing # vendors SUSHI compliant
New implementation protocol
A single portal