Applying Usage Data to Collection Management in Libraries & Consortia
Jason S. Price, Ph.D.
Science & Electronic Resources Librarian, Claremont Colleges’ Libraries; SCELC
Better Data, Better Business: Applying Usage Data in Scholarly Publishing
June 6, 2007, Society for Scholarly Publishing Annual Meeting, San Francisco, CA
Which titles should we keep or cut?
Highest use/value vs. Lowest use/value
Which titles should we add?
Most requested (Traditional & Web)
Which package provides highest or lowest value?
Use and Value
Determining cost per use
Comparison with ILL cost
‘By-title’ data
Comparison across Publishers
HTML? PDF? Combined full-text article requests (FTAR)?
Need for context
Overall cost per use
= 1 year’s cost / 1 year’s requests
e.g. $58,600 publisher e-access fee / 35,700 article views = $1.64 cost per use
* Plus $420,000 mandatory cost of subscriptions (paid to agent) for a subset of these same titles
($420,000 + $58,600) = $478,600; $478,600 / 35,700 views = $13.40 per use
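The arithmetic above can be sketched in a few lines. The dollar figures and view count come from the slide; note that the combined figure works out to about $13.41 at two decimals (the deck rounds to $13.40):

```python
# Cost per use (CPU): one year's cost divided by one year's full-text requests.
# Figures are from the slide above.

e_access_fee = 58_600      # publisher e-access fee ($)
mandatory_subs = 420_000   # mandatory subscription cost paid to agent ($)
article_views = 35_700     # full-text article requests in the same year

# Naive CPU: e-access fee only
naive_cpu = e_access_fee / article_views                    # ≈ $1.64

# True CPU: include the mandatory subscription spend for the same titles
true_cpu = (e_access_fee + mandatory_subs) / article_views  # ≈ $13.41

print(f"Naive cost per use: ${naive_cpu:.2f}")
print(f"True cost per use:  ${true_cpu:.2f}")
```

The point of the sketch: leaving the mandatory print/subscription spend out of the numerator understates cost per use by roughly a factor of eight.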
Determining Cost Per Use?
Overall cost per view by Subs Type
Comparison with ILL cost
Package CPV = $13.40
What does this tell us?
Is it High? Low?
Better than ILL?
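One way to answer "better than ILL?" is a break-even sketch: at what level of demand does per-request borrowing beat a flat package fee? The package figures are from the slides; the ILL cost per transaction below is a placeholder assumption, since actual ILL costs vary widely by institution:

```python
# Break-even sketch: package subscription vs. interlibrary loan (ILL).
# Package figures from the slides; ILL cost is a placeholder assumption.

package_cost = 478_600        # total annual package cost ($)
package_views = 35_700        # full-text requests actually made
ill_cost_per_request = 25.00  # hypothetical cost per ILL transaction ($)

package_cpv = package_cost / package_views

# ILL is cheaper only while (requests x per-request cost) < package cost
break_even_requests = package_cost / ill_cost_per_request

print(f"Package cost per view: ${package_cpv:.2f}")
print(f"ILL wins only if demand stays below "
      f"{break_even_requests:,.0f} requests/year")
```

With demand at 35,700 requests against a break-even near 19,000, the package wins under this assumed ILL cost; a library with far lower demand could reach the opposite conclusion.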
Which titles to keep/cut: by-title data, combined 3-year use
Challenge: Usage report granularity
Title-level use statistics
Currently reported per title
Can’t separate frontfile use from backfile use
Backfile may be previously purchased or freely available
Ultimately cost per use should be based on what customers are paying for
Issue of repository / free backfile use
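One way to act on by-title data, granularity caveats aside, is to rank titles by cost per use over a combined three-year window. The titles and figures below are invented purely for illustration:

```python
# Keep/cut sketch: rank titles by cost per use over a 3-year window.
# All titles and numbers here are invented for illustration.

titles = {
    # title: (annual_price, combined uses over 3 years)
    "Journal A": (1_200, 2_400),
    "Journal B": (3_500, 180),
    "Journal C": (800, 950),
}

def cost_per_use(price, combined_uses, years=3):
    """Three years of price against three years of combined use."""
    return (price * years) / combined_uses

ranked = sorted(titles, key=lambda t: cost_per_use(*titles[t]))
for t in ranked:
    print(f"{t}: ${cost_per_use(*titles[t]):.2f} per use")
# Cheapest-per-use titles are keep candidates; most expensive are cut candidates.
```

The caveat from the slides still applies: if the use counts mix frontfile and free or previously purchased backfile use, the denominator is inflated and the ranking can mislead.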
Which titles should be added?
Still largely determined by faculty request
Journal Report 2 (turnaways) is under-utilized by libraries and publishers
(see example for Consortia near the end)
So Pkg 1 is a better value than Pkg 3?
It might not be…
Variation in use by format (Davis and Price, 2006, JASIST)
HTML-to-PDF ratios vary widely across these packages
How many PDFs in Pkg 1 are duplicates of HTML views?
Package value revisited
PDF requests alone tell a different story!
CPP vs. CPU
Publisher interface/content types can have major influence on cost per view
Linking tools can have significant disruptive effects on CPV calculations
It is unclear whether PDFs vs. FTARs are ‘better’ for cost comparisons
THUS: within-publisher comparisons are more valid than between-publisher comparisons
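The format problem can be made concrete: the same spend yields a very different cost per view depending on whether you count PDF requests only or combined full-text requests, and the ranking between packages can flip. Package names and figures below are invented:

```python
# Why between-publisher CPV comparisons can mislead: format mix differs.
# All package names and figures are invented for illustration.

packages = {
    # name: (annual_cost, html_views, pdf_views)
    "Pkg 1": (100_000, 60_000, 20_000),  # HTML-heavy interface
    "Pkg 3": (100_000, 10_000, 40_000),  # PDF-heavy interface
}

results = {}
for name, (cost, html, pdf) in packages.items():
    results[name] = {
        "ftar": cost / (html + pdf),  # combined full-text article requests
        "pdf_only": cost / pdf,       # PDF requests only
    }
    print(f"{name}: CPV(FTAR)=${results[name]['ftar']:.2f}  "
          f"CPV(PDF)=${results[name]['pdf_only']:.2f}")

# Pkg 1 looks cheaper on FTAR; Pkg 3 looks cheaper on PDF-only.
# Same spend, same readers: the choice of metric flips the ranking.
```

This is exactly why within-publisher comparisons, where interface and format mix are held constant, are on firmer ground than between-publisher ones.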
Benchmarks for context
Consortium 16: Low value?
Consortium 16: High value?
Consortium 16: The value of context
Questions for Consortia
Is cost/value fairly distributed among members/participants?
How can value/attractiveness be maintained or increased for current and future participants?
License renewals: do pricing terms need to be renegotiated?
Equitable distribution of value?
Consortium year-to-year comparison
Year/year by institution
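A year-over-year comparison by member institution can flag where value is growing or eroding ahead of renewal talks. Member names and figures below are invented:

```python
# Year-over-year use by consortium member; all figures invented.
use_by_institution = {
    "Member A": {"2005": 8_000, "2006": 9_600},
    "Member B": {"2005": 5_000, "2006": 4_200},
}

# Percent change from 2005 to 2006 for each member
changes = {
    member: (yrs["2006"] - yrs["2005"]) / yrs["2005"] * 100
    for member, yrs in use_by_institution.items()
}

for member, pct in changes.items():
    print(f"{member}: {pct:+.1f}% year over year")
```

A member whose use is climbing is evidence of increasing value at renewal; one whose use is falling may need outreach, or a renegotiated share of the cost.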
JR2 Turnaways by Journal: Collaborative Collection Management

2004 | 2005 | 2006
ChemInform | ChemInform | ChemInform
Journal of Molecular Recognition | Journal of Molecular Recognition | Journal of Molecular Recognition
Journal of Peptide Science | Journal of Peptide Science | Journal of Peptide Science
Ment Retard and Dev Disab | Ment Retard and Dev Disab | Ment Retard and Dev Disab
Electroanalysis | Electroanalysis | ChemMedChem
Reviews in Medical Virology | Chemical Engineering & Technology - CET | Reviews in Medical Virology
Int J Robust and Nonlinear Control | Prenatal Diagnosis | Prenatal Diagnosis
The Chemical Record | Human Psychopharmacology | International Journal of Cancer
Chirality | Polymer Engineering and Science | Journal of Cellular Biochemistry
Single Molecules | Progress in Photovoltaics: Research and Applications | Ultrasound in Obstetrics and Gynecology
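A simple way to put JR2 data to collaborative use: titles turned away year after year represent persistent unmet demand and are the strongest add candidates. The sketch below uses titles from the slide; the yearly sets are abbreviated for illustration:

```python
# Titles denied (JR2 turnaways) in every year signal persistent unmet demand.
# Yearly sets abbreviated from the slide's table for illustration.
turnaways = {
    "2004": {"ChemInform", "Journal of Molecular Recognition",
             "Journal of Peptide Science", "Electroanalysis"},
    "2005": {"ChemInform", "Journal of Molecular Recognition",
             "Journal of Peptide Science", "Electroanalysis"},
    "2006": {"ChemInform", "Journal of Molecular Recognition",
             "Journal of Peptide Science", "ChemMedChem"},
}

# Intersection across all years -> titles denied every single year
persistent = set.intersection(*turnaways.values())
print(sorted(persistent))
# → ['ChemInform', 'Journal of Molecular Recognition', 'Journal of Peptide Science']
```

For a consortium, that recurring list doubles as a shared shopping list: if several members see the same title in their intersection, a group purchase spreads the cost of meeting the demand.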
Price is an integral part of use evaluation
Recognize that libraries need to base expenditures on actual return on cost
Meaningful cross-publisher comparisons are difficult (but will still be sought)
Context is crucial for within-publisher comparison
Consortial statistics provide valuable benchmarks
Cross-year comparisons can indicate increasing value
Use consortial context to identify ideal institutions for additional sales (added titles, backfiles, etc.)