Kelly Marie Blanchat, presenter
The need to continually evaluate electronic resources should not be limited to metrics for how resources perform. The reporting tools that monitor and collect e-resource usage need to have their performance evaluated as well. This presentation will cover how vendor-provided systems, designed to aid decision making across the e-resources lifecycle, can be assessed for reporting accuracy. Following this session, participants will have an understanding of what data points to review when assessing vendor-provided usage statistics tools, and will have a method to begin evaluating their own systems. In summer 2015, Yale Library brought up ProQuest’s 360 COUNTER Data Retrieval Service (DRS), a service in which COUNTER-compliant usage statistics are uploaded, archived, and normalized into consolidated reports twice per year. To date, 360 COUNTER has freed up a significant amount of time for Yale's E-Resources Group, allowing staff resources to be allocated elsewhere in the e-resources lifecycle. That extra time also made it possible to “kick the tires” of the system, which resulted in an assessment workflow using Microsoft Excel to compare how raw COUNTER data uploaded to the system was affected by title normalization in the knowledgebase. This assessment workflow helped to identify the volume of data available in the system, and also gave clarity to how the 360 COUNTER system works and what steps need to be taken, by both ProQuest and Yale Library, to improve reporting accuracy. Please note that this presentation will touch on issues found within the system, how ProQuest worked with Yale to trace them to title normalization decisions, and how errors were corrected when possible. The primary purpose is to raise awareness of the need for reporting tool assessment, which can be applied to any reporting tool, not just 360 COUNTER.
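The comparison the workflow performed in Excel can be sketched in a few lines of Python. This is a hedged illustration, not the actual Yale workbook: the titles and counts are invented, and the idea is simply to check raw COUNTER rows against knowledgebase-normalized totals so that normalization losses surface.

```python
# Hypothetical raw COUNTER rows, including a title variant that the
# knowledgebase would merge during normalization.
raw_rows = [
    ("Journal of Serials Studies", 120),
    ("Journal of Serials Studies (Online)", 30),   # title variant
    ("Metadata Quarterly", 85),
]

# Hypothetical normalized output: variants merged, and 5 uses of
# "Metadata Quarterly" silently lost in normalization.
normalized_rows = [
    ("Journal of Serials Studies", 150),
    ("Metadata Quarterly", 80),
]

def usage_by_title(rows):
    """Sum usage counts per title."""
    totals = {}
    for title, count in rows:
        totals[title] = totals.get(title, 0) + count
    return totals

raw_total = sum(usage_by_title(raw_rows).values())
norm_total = sum(usage_by_title(normalized_rows).values())

# A grand-total mismatch flags normalization errors worth reporting
# back to the vendor; per-title totals show where the loss occurred.
print(f"raw: {raw_total}  normalized: {norm_total}  lost: {raw_total - norm_total}")
```

The same check (grand totals first, then per-title drill-down) is easy to reproduce in Excel with SUM and a pivot table, which is presumably why the workflow stayed in a spreadsheet.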
Turning the Corner at High Speed: How Collections Metrics Are Changing in a H... (NASIG)
Collections metrics have always been an important component of effectively managing libraries. But today they are more important than ever before as user-focused libraries and information centers attempt to adjust their collections to current and future library user needs. Frequently this requires sharp turns, smart traffic control, and even drafting behind other libraries who might be in the lead at any given stretch in order to achieve ultimate success. In this presentation, perspectives from a corporate library context and a liberal arts college library will be presented. What are the key metrics today vs. five years ago? What factors are at work that create changes in metrics value over time? What changes might we expect to see in the future? These and other questions will be addressed.
Speakers:
Marija Markovic, Independent Consultant
Steve Oberg, Wheaton College (IL)
Data Stories: Using Narratives to Reflect on a Data Purchase Pilot Program (NASIG)
Anita Foster and Gene R. Springs, presenters
The Ohio State University Libraries, driven by campus demand, developed and implemented a data resource purchase pilot program that took place over one fiscal year. Having previously prioritized only small-scale purchases of subject-related data resources, the Libraries expanded this initiative to include large data resources, most of which can meet the research and teaching needs of a variety of academic disciplines. Beginning the pilot with very few criteria for selection and potential acquisition, the Collections Strategist and Electronic Resources Officer encountered various challenges along the way, each requiring additional exploration, research, and eventual resolution. As the pilot program proceeded, other criteria emerged as important considerations when examining data resources, particularly for content and licensing.
To best develop an understanding of what was learned over the year of this pilot program, the Collections Strategist and Electronic Resources Officer collaborated in writing "data stories," or narratives about each of the data resource options investigated for acquisition. Each narrative is structured similarly, from the requestor and initial stated need through the end result. Any pertinent details regarding content, access, or licensing were incorporated to complete the narratives. The data stories will be further analyzed to track commonalities among both the successful and unsuccessful acquisitions, with the proposed outcome of developing tested criteria for future acquisition of data resources.
Datavi$: Negotiate Resource Pricing Using Data Visualization (NASIG)
Stephanie J. Spratt, presenter
Ready to ask for a reduction in the annual increase of an e-resource product but unclear on how to make your case? Want to try some innovative strategies to avoid spending more than your budget? Want to reduce the amount of heavy renewal work falling right at fiscal close? Attend this presentation to learn techniques on all of that and more!
The speaker will use commonly collected data to show how to combine and visualize metrics to help make a library’s case for requesting reductions in pricing, adjusting service fees, and asking for changes to subscription periods to balance out the renewal workload. Attendees will learn which data to analyze and combine as it relates to pricing negotiations along with the steps involved to make that data come alive in Excel graphs and charts. Alternate data visualization products will also be discussed. The data visualization techniques, not outcomes, will be the focus of this presentation with the goal of attendees taking back which techniques might be worthwhile endeavors at their own institutions. Attendees will also learn about negotiation strategies and internal and external considerations when preparing to negotiate.
Growing an awareness of negotiation techniques and factors in play both inside and outside the library will help librarians make their cases for equitable pricing and models for library resources. The data visualization techniques shown in this presentation will serve as a stepping-off point for any librarian who wishes to use honesty, directness, and real-world scenarios to negotiate pricing for content and other library expenditures.
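One metric commonly combined from invoice and COUNTER data for this kind of pricing conversation is cost per use. The sketch below is a minimal illustration, not the presenter's actual workbook, and all figures are invented; the same calculation is a single column formula in Excel, ready to chart.

```python
# Hypothetical resources: (name, annual cost in USD, annual COUNTER uses).
resources = [
    ("Database A", 12000.00, 48000),
    ("Journal Package B", 30000.00, 6000),
    ("Index C", 5000.00, 250),
]

# Cost per use: the workhorse figure in renewal and cancellation talks.
cost_per_use = {name: cost / uses for name, cost, uses in resources}

for name, cpu in sorted(cost_per_use.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${cpu:.2f} per use")
```

Sorting by cost per use, as above, is the usual first step toward a bar chart that makes an outlier (here the invented "Index C") visually obvious to a vendor or budget committee.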
Promoting Open Access and Open Educational Resources to Faculty (NASIG)
Heather Crozier, presenter
Student debt is a compelling issue, and many institutions are investigating solutions to ease the financial burdens of their students. Increasing the use of open educational resources benefits students by reducing course costs. Adopting OER in the classroom allows faculty more freedom in choosing instructional tools. Faculty also benefit from open access publishing by increasing their exposure. However, on the campus of a small, private institution, attendance at workshops to spread awareness and increase the use of these materials was minimal. Faculty had the perception that free resources could not match the quality of traditional resources. In order to dispel this myth, the Electronic Resources Librarian and Educational Technology Manager collaborated to create custom one-hour sessions for individual departments, leveraging library/faculty liaison relationships and the expertise of the office of educational technology. In the session, faculty learn more about open access publishing options, the value of open educational resources, the quality of many open educational resources, and where to find these resources. The session uses the course management system to both disseminate the information shared in the session and create a forum for departments to share resources with each other. Through the CMS, faculty gain access to vetted resources. All attendees have editing privileges within the site after the workshop, allowing them to curate course-specific lists for sharing and future reference. Pilot sessions have been well received, and wider implementation is planned for the next academic year.
This presentation was provided by Corey Harper of Elsevier Labs during the NISO webinar, Using Analytics to Extract Value from the Library's Data, held on September 12, 2018.
A snake, a planet, and a bear ditching spreadsheets for quick, reproducible r... (NASIG)
Presenter: Andrew Kelly, Cataloging & E-Resources Librarian, Paul Smith's College
This poster has two accompanying handouts: https://www.slideshare.net/NASIG/a-snake-a-planet-and-a-bear-ditching-spreadsheets-handout1 and https://www.slideshare.net/NASIG/a-snake-a-planet-and-a-bear-ditching-spreadsheets-handout2slides.
The Kaleidoscope of Impact: same data, different perspectives, constantly cha... (Kudos)
Scholars, scientists, academic institutions, publishers, and funders are all interested in impact. We have different roles and goals, and therefore different reasons for needing to understand impact; we are therefore asking different questions about impact, and those questions continue to evolve, much as the concept of impact itself is evolving. To answer our different questions, do we need different data, in separate silos, or are we looking at the same data from different angles? This session gathered researcher, library, publisher, and metrics provider perspectives to consider who has an interest in impact, what data they are interested in, how they use it, and how the situation is evolving as, for example, business models and technical infrastructures shift.
Presented at the 2010 Electronic Resources & Libraries Conference.
Mary Feeney, Jim Martin, Ping Situ, University of Arizona
Abstract: Searches, sessions, article requests: you have access to data, but what's the next step? Learn how the University of Arizona Libraries' Spending Reductions Project analyzed usage of different types of resources to assess them against quality standards and make cancellation decisions. Tools, challenges, and organizational approaches will also be discussed.
Research evaluation: is it our business?: Librarians in the brave new world of research evaluation, by Andria McGrath, Senior Information Specialist, Research Support, King’s College London. Presented at "Research Evaluation: Is It Our Business? The Role of Librarians in the Brave New World of Research Evaluation," 29 June 2011, University of Birmingham, Edgbaston Campus.
The SciELO database stores and provides metadata and full-text records for approximately 700,000 articles across different disciplines and languages, drawn from the national journal collections of the SciELO Network in 16 countries. The metadata records contain the data fields of the articles' bibliographic references (title, author, source journal, date, abstract, and keywords) and of the references cited in the article texts (title, author, source, date), which SciELO makes available in open access under a CC-BY attribution license.
At the same time, the metadata for all articles indexed in SciELO and published in the last 10 years are also stored and made available in the SciELO Citation Index database on the WoS platform and in the Dimensions database. Likewise, the metadata of articles from journals indexed by Scopus and WoS are available in those databases.
SciELO therefore constitutes a remarkable source of bibliometric data for studying the scientific output of the countries of the SciELO Network.
The scope of this working group / workshop is to share methodologies and technologies for accessing and exploring data from the SciELO database, with emphasis on an introduction to data science techniques using the Python language, data access with the R language for statistical analysis, and techniques for accessing and using data from the SciELO Citation Index and Dimensions databases.
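As a flavor of the workshop's Python data-science angle, the sketch below tallies articles per collection from SciELO-style metadata records. The records here are invented stand-ins; a real session would load metadata from an actual SciELO data source rather than a literal list.

```python
from collections import Counter

# Invented example records shaped like minimal SciELO article metadata.
articles = [
    {"title": "Estudo A", "collection": "scl", "year": 2018},
    {"title": "Estudio B", "collection": "arg", "year": 2019},
    {"title": "Estudo C", "collection": "scl", "year": 2019},
]

# Count articles per national collection code.
by_collection = Counter(a["collection"] for a in articles)
print(by_collection.most_common())
```

The same grouping-and-counting pattern scales from a three-record toy list to the full corpus once the records are streamed from a real metadata source.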
A tool for librarians to select metrics across the research lifecycle (Library_Connect)
These slides introduce a range of research impact metrics. They were presented at the ER&L Conference (April 2017) by Chris James, Product Manager Research Metrics, Elsevier.
Matthew J Jabaily
Electronic Resources and Serials Librarian, Kraemer Family Library, University of Colorado Colorado Springs
When assessing the value of electronic serials, librarians are typically limited to looking at the usage of serials to which their library already subscribes. While this is useful for making renewal decisions, librarians are often flying blind when considering new subscriptions. Librarians often look at interlibrary loan requests to gauge interest in unsubscribed materials, but we know that these requests don’t tell the full story. Without other available data, it is difficult for librarians to make informed decisions about what subscriptions to add.
This presentation will look beyond interlibrary loan data to discuss other methods for predicting future use, including usage numbers of similar materials, turnaway statistics, and data from failed link resolver requests. Each of these methods has strengths and weaknesses, and each can tell librarians something different about how users are discovering and attempting to access materials.
I will discuss some of the recent literature that discusses the association of the data from these sources with usage numbers. I will also share preliminary data from my institution, attempting to correlate prior year indicators of interest in electronic serials with first year use of new acquisitions.
Shows the use of Excel and Esri ArcGIS Desktop 10.1 to produce statistical reports from Innovative Interfaces circulation data. Originally presented at IUG 2014 as part of the panel "Slinging statistics and dicing data in the public library."
Walk Before You Run: Prerequisites to Linked Data (Kenning Arlitsch)
Presentation on April 23, 2015 at the Amigos Library Services online conference: "Linked Data & RDF: New Frontiers in Metadata and Access"
Covers traditional SEO and Semantic Web Optimization, including Semantic Web Identity and a Schema.org project at Montana State University Library.
Quick reference cards for research impact metrics (Library_Connect)
When meeting with students, researchers, deans, or department heads, the metrics on these quick reference cards can serve as a jumping-off point in conversations about where to publish, adding to researcher profiles, enriching promotion and tenure files, and benchmarking research outputs. The cards were co-developed by librarian Jenny Delasalle and Elsevier's Library Connect program. Learn more and download poster versions as well at: https://libraryconnect.elsevier.com/articles/librarian-quick-reference-cards-research-impact-metrics
Commercial Serials Decision Support Systems (Robin Paynter)
Slides to accompany an Oregon Library Association (OLA) / Washington Library Association (WLA) 2008 Joint Conference presentation on Commercial Serials Decision Support Systems.
Commercial Serials Decision Support Systems (Robin Paynter)
Slides accompanied an Oregon Library Association (OLA) / Washington Library Association (WLA) 2008 Joint Conference presentation on commercial serials decision support databases.
Improving the reported use and impact of institutional repositories (Kenning Arlitsch)
This presentation describes the problems of accurately counting file downloads from institutional repositories using commonly applied web analytics methods: page tagging and log file analysis. The presentation introduces a new prototype web service called RAMP (Repository Analytics and Metrics Portal) that provides a much more accurate method of counting file downloads.
Presentation at the ALA Annual 2016 ALCTS/LITA Electronic Resources Management Interest Group panel “Making it count: Usage statistics and electronic resources management.”
This presentation was provided by Oliver Pesch, Chief Strategist, E-Resources, EBSCO Information Services, during the ALA Midwinter Meeting, held on January 22, 2012.
With ever-shrinking library budgets it is more essential than ever to ensure that the library collection is targeted, relevant and well-used. Return on Investment (ROI) has become the mantra of library management and libraries need to show accountability for collection decisions. This webinar will focus on speakers who have successfully implemented assessment metrics (such as COUNTER 3, Eigenfactor and impact factors) as one determining factor of collection development decisions.
Data Visualization: Analyzing Your Library Data provides tips on using Access crosstab queries, Excel pivot tables and pivot charts, and Tableau Public. A presentation at ELUNA, 2015. A supplemental file is also available on SlideShare.
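The crosstab/pivot step that the presentation demonstrates in Access and Excel can be mimicked with a few lines of standard-library Python. This is a hedged analogue, not the presentation's actual queries, and the circulation rows are invented examples.

```python
from collections import defaultdict

# Invented circulation rows: (branch, item type) per checkout event.
checkouts = [
    ("Main", "book"), ("Main", "dvd"), ("East", "book"),
    ("Main", "book"), ("East", "dvd"), ("East", "dvd"),
]

# Build a crosstab: rows are branches, columns are item types,
# cells are checkout counts, exactly what a pivot table produces.
pivot = defaultdict(lambda: defaultdict(int))
for branch, item_type in checkouts:
    pivot[branch][item_type] += 1

for branch in sorted(pivot):
    print(branch, dict(pivot[branch]))
```

The nested-dict shape maps directly onto an Excel pivot table or an Access crosstab query, which is why the three tools in the presentation are largely interchangeable for this task.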
How to Build Fast Data Applications: Evaluating the Top Contenders (VoltDB)
Massive amounts of data generated by mobile devices, M2M communications, sensors, and other IoT devices are redefining the world. What kind of applications will you build to take advantage of this data and provide value to your customers? What technologies are out there to help you? This deck illustrates the differences between fast OLAP, stream-processing, and OLTP database solutions. You will also learn the importance of state, real-time analytics, and real-time decisions when building applications on streaming data, and how streaming applications deliver more value when built on a super-fast in-memory SQL database. To view the webinar in its entirety, click here: http://learn.voltdb.com/WRFastDataAppsTopContenders.html
Designing a high-performance data warehouse (Uday Kothari)
Just when the world of “Data 1.0” showed some signs of maturing; the “Outside In” driven demands seem to have already initiated some the disruptive changes to the data landscape. Parallel growth in volume, velocity and variety of data coupled with incessant war on finding newer insights and value from data has posed a Big Question: Is Your Data Warehouse Relevant?
In short, the surrounding changes happening real time is the new “Data 2.0”. It is characterized by feeding the ever hungry minds with sharper insights whether it is related to regulation, finance, corporate action, risk management or purely aimed at improving operational efficiencies. The source in this new “Data 2.0” has to be commensurate to the outside in demands from customers, regulators, stakeholders and business users; and hence, you would need a high relformance (relevance + performance) data warehouse which will be relevant to your business eco-system and will have the power to scale exponentially.
The webinar starts by giving the audience a sneak preview of what happened in the Data 1.0 world and which characteristics are shaping the new Data 2.0 world. It then delves deep into the challenges that growing data volumes have posed to data warehouse teams, and presents some of the practical and proven methodologies to address these performance challenges. Finally, it highlights some thought-provoking ways to turbocharge your data warehouse initiatives by leveraging newer technologies like Hadoop. Overall, the webinar will educate audiences on building high-performance, relevant data warehouses capable of meeting newer demands while significantly driving down the total cost of ownership.
Similar to Beyond COUNTER Compliant: Ways to Assess E-Resources Reporting Tools (20)
Ctrl + Alt + Repeat: Strategies for Regaining Authority Control after a Migra...NASIG
Speaker: Jamie Carlstone
This presentation is on how to regain authority control in a large research library catalog: first, dealing with a backlog of problems from years without authority control and second, creating a process for ongoing workflows to realistically maintain authority control when new records are added to the collection.
The Serial Cohort: A Confederacy of CatalogersNASIG
Speaker: Mandy Hurt
In 2018, at a time when our department was shrinking through attrition, the decision was made to further leverage the particular skill sets of a select group of monographic catalogers by training them to also undertake the complex copy cataloging of serials.
This presentation concerns the assumptions underlying how this decision was originally made, the initial plan for how this would be accomplished by CONSER Bridge Training, the eventual formation of the Serials Cohort with a view to creating an iterative process I would design and manage, and the problems, obstacles and time constraints faced and addressed along the way.
Calculating how much your University spends on Open Access and what to do abo...NASIG
Librarians are working hard to understand how much money their university is spending on open access article processing fees (APCs), and how much of what they subscribe to is available as OA. This information is useful when making subscription decisions, considering Read and Publish agreements, rethinking library open access budgets, and designing Institution-wide OA policies.
This session will talk concretely about how to calculate the impact of Open Access on *your* university. It will provide an overview on how to estimate the amount of money spent across a university on Open Access fees: we will discuss underlying concepts behind calculating OA article-processing fee (APC) spend and give an overview of useful data sources, including:
FlourishOA
Microsoft Academic Graph
PLOS API
Unpaywall Journals
We will also talk about Open Access on the subscription side, including how much of what you subscribe to is available as open access and how you can use that in your subscription decisions and negotiations.
The presenters are the cofounders of Our Research, the nonprofit company behind Unpaywall, the primary source of Open Access data worldwide.
Heather Piwowar, Co-founder, Our Research
Jason Priem, Co-founder, Our Research
Measure Twice and Cut Once: How a Budget Cut Impacted Subscription Renewals f...NASIG
Speakers: Ilda Cardenas, Keri Prelitz, Greg Yorba
The process of looking at subscriptions with the goal of proactively downsizing revealed that the library’s existing renewal workflows were outdated and in need of regular analysis to identify underused resources. Additionally, this project uncovered shortcomings of analysis that is reliant on usage data, the unexpected ramifications of large-scale subscription cancellations, as well as the need for improved communication within and between the many library departments affected by subscription cancellations.
Analyzing workflows and improving communication across departments NASIG
Presented by Jharina Pascual and Sarah Wallbank.
The presentation provides people with simple techniques for analyzing their local workflow and information-sharing practices, some ideas for interrogating and improving intra-technical services communication, and ideas for simple changes that can improve communication and build a sense of community/joint purpose within or across departments.
Supporting Students: OER and Textbook Affordability Initiatives at a Mid-Size...NASIG
Presented by Jennifer L. Pate.
With support from the president and provost of the university, Collier Library adopted strategic purchasing initiatives, including database purchases to support specific courses as well as purchasing reserve copies of textbooks for high-enrollment, required classes. In addition, the scholarly communications librarian became a founding member of the OER workgroup on campus. This group’s mission is to direct efforts for increasing faculty awareness and adoption of OER. This presentation discusses the structure of the each of these programs from initial idea to implementation. Included will be discussions of assessment of faculty and student awareness, development of an OER grant program, starting a textbook purchasing program, promotion of efforts, funding, and future goals.
Access to Supplemental Journal Article Materials NASIG
Presented by Electra Enslow, Suzanne Fricke, Susan Shipman
The use of supplemental journal article materials is increasing in all disciplines. These materials may be datasets, source code, tables/figures, multimedia or other materials that previously went unpublished, were attached as appendices, or were included within the body of the work. Current emphasis on critical appraisal and reproducibility demands that researchers have access to the complete shared life cycle in order to fully evaluate research. As more libraries become dependent on secondary aggregators and interlibrary loan, we questioned if access to these materials is equitable and sustainable.
Communications and context: strategies for onboarding new e-resources librari...NASIG
Presented by Bonnie Thornton.
This presentation details onboarding strategies institutions can utilize to help acclimate new e-resources librarians with an emphasis on strategies for effectively establishing and perpetuating communications with stakeholders.
Full Text Coverage Ratios: A Simple Method of Article-Level Collections Analy...NASIG
Presented by Matthew Goddard.
This presentation describes a simple and efficient method of using a discovery layer to evaluate periodicals holdings at the article level, and suggests a variety of applications.
Web accessibility in the institutional repository crafting user centered sub...NASIG
Presented by Jenny Hoops and Margaret McLaughlin.
As web accessibility initiatives increase across institutions, it is important not only to reframe and rethink policies, but also to develop sustainable and tenable methods for enforcing accessibility efforts. For institutional repositories, it is imperative to determine the extent to which both the repository manager and the user are responsible for depositing accessible content. This presentation allows us to share our accessibility framework and help repository and content managers craft sustainable, long-term goals for accessible content in institutional repositories, while also providing openly available resources for short-term benefit.
Linked Data is exploding in the library world, but the biggest problems libraries have are coming up with the time or money involved in converting their records, looking into Linked Data programs, finding community support, and all the various other issues that arise as part of developing new methods. Likewise, one of the biggest hurdles for libraries and linked data is that they do not know what to do to get involved. As we have fewer people available and smaller budgets each year, we would like to explore ways in which libraries can get involved in the process without expending an undue amount of their already dwindling resources. To see how linked data can be applied, we will look at the example of the Smithsonian Libraries (SIL). Over the past 18 months, SIL has been preparing for the transition from MARC to linked open data. This session will talk about various SIL projects and initiatives (such as the FAST headings project and the introduction of Wikidata and WikiBase); how to incorporate linked data elements into MARC records; and how to develop staff and give them proficiency with new tools and workflows.
Heidy Berthoud, Head, Resource Description, Smithsonian Libraries
Walk this way: Online content platform migration experiences and collaboration NASIG
In this session, a librarian and a publisher share their perspectives on content platform migrations, and the Working Group Co-chairs will describe the group’s efforts to-date and expected outcomes. Our publisher-side speaker will describe issues they must consider when their content migrates, such as providing continuous access, persistent linking, communicating with stakeholders, and working with vendors. Our librarian speaker will describe their experience and steps they take during migrations, such as receiving notifications about migrations, identifying affected e-resources, updating local systems to ensure continuous access, and communicating with their front-line staff and patrons.
Read & Publish – What It Takes to Implement a Seamless Model?NASIG
PANELISTS
Adam Chesler
Director of Global Sales
AIP Publishing
Sara Rotjan
Assistant Marketing Director, AIP Publishing
Keith Webster
Dean of Libraries and Director of Emerging and Integrative Media Initiatives
Carnegie Mellon University
Andre Anders
Director, Leibniz Institute of Surface Engineering (IOM)
Editor in Chief of Journal of Applied Physics
Professor of Applied Physics, Leipzig University
“Read & Publish” agreements continue to gain global attention. What’s rarely discussed when these new access and article processing models are introduced is the paperwork, back-end technology and overall management required to implement the new program that works for all involved. This panel, comprised of a librarian, publisher, and researcher, will focus on the complexities of developing, implementing and using the infrastructures of different Read & Publish models and the challenges of developing a seamless experience for everyone.
From article submission to publication to final reporting, the panel will discuss the “hidden” impact that new workflows will have on stakeholders in scholarly communications. Time will be allotted for Q&A and attendee participation is encouraged.
When to hold them when to fold them: reassessing big deals in 2020NASIG
This presentation goes into details for each of the publishers’ big deals that we examined and presents reasons as to why we cancelled them, with concrete examples from our experiences (four cancellations and two restructurings).
Getting on the Same Page: Aligning ERM and LibGuides ContentNASIG
This presentation gives background on the development of the initial processes, the review and revision of the processes, and the issues encountered in developing a workflow for importing data from one system to the other.
A multi-institutional model for advancing open access journals and reclaiming...NASIG
The presenters will provide brief overviews of CIL and PDXScholar, and they will detail the challenges and ultimate successes of this multi-institutional model for advancing open access journals and reclaiming control of the scholarly record.
Knowledge Bases: The Heart of Resource ManagementNASIG
This session will discuss the knowledge base metadata lifecycle, current and upcoming metadata standards, and the effect that knowledge bases have on discovery and e-resource management. The presenters will look at ways knowledge bases can be leveraged to create downstream tools for resource management and discovery. The session will also provide different perspectives on knowledge bases, including from librarians and product managers, as well as a discussion of NISO's KBART Automation recommended practice and what this could mean for knowledge bases in the future. The session will also include a conversation regarding how leveraging knowledge bases can aid librarians in improving resource discovery within their own libraries and ultimately decrease the amount of time spent on metadata workflows. Through this presentation, we also aim to improve communication between the library community and metadata providers and creators.
Elizabeth Levkoff Derouchie, Metadata Librarian for Serials & Electronic Resources, Samford University Library
Beth Ashmore, Associate Head, Acquisitions & Discovery (Serials), North Carolina State University
Eric Van Gorden, Product Manager, EBSCO
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...Levi Shapiro
Letter from the Congress of the United States regarding Anti-Semitism sent June 3rd to MIT President Sally Kornbluth, MIT Corp Chair, Mark Gorenberg
Dear Dr. Kornbluth and Mr. Gorenberg,
The US House of Representatives is deeply concerned by ongoing and pervasive acts of antisemitic harassment and intimidation at the Massachusetts Institute of Technology (MIT). Failing to act decisively to ensure a safe learning environment for all students would be a grave dereliction of your responsibilities as President of MIT and Chair of the MIT Corporation.
This Congress will not stand idly by and allow an environment hostile to Jewish students to persist. The House believes that your institution is in violation of Title VI of the Civil Rights Act, and the inability or unwillingness to rectify this violation through action requires accountability.
Postsecondary education is a unique opportunity for students to learn and have their ideas and beliefs challenged. However, universities receiving hundreds of millions of federal funds annually have denied students that opportunity and have been hijacked to become venues for the promotion of terrorism, antisemitic harassment and intimidation, unlawful encampments, and in some cases, assaults and riots.
The House of Representatives will not countenance the use of federal funds to indoctrinate students into hateful, antisemitic, anti-American supporters of terrorism. Investigations into campus antisemitism by the Committee on Education and the Workforce and the Committee on Ways and Means have been expanded into a Congress-wide probe across all relevant jurisdictions to address this national crisis. The undersigned Committees will conduct oversight into the use of federal funds at MIT and its learning environment under authorities granted to each Committee.
• The Committee on Education and the Workforce has been investigating your institution since December 7, 2023. The Committee has broad jurisdiction over postsecondary education, including its compliance with Title VI of the Civil Rights Act, campus safety concerns over disruptions to the learning environment, and the awarding of federal student aid under the Higher Education Act.
• The Committee on Oversight and Accountability is investigating the sources of funding and other support flowing to groups espousing pro-Hamas propaganda and engaged in antisemitic harassment and intimidation of students. The Committee on Oversight and Accountability is the principal oversight committee of the US House of Representatives and has broad authority to investigate “any matter” at “any time” under House Rule X.
• The Committee on Ways and Means has been investigating several universities since November 15, 2023, when the Committee held a hearing entitled From Ivory Towers to Dark Corners: Investigating the Nexus Between Antisemitism, Tax-Exempt Universities, and Terror Financing. The Committee followed the hearing with letters to those institutions on January 10, 202
Operation “Blue Star” is the only event in the history of independent India where the state went to war with its own people. Even after about 40 years, it is not clear whether it was the culmination of the state’s anger toward the people of the region, a political game of power, or the start of a dictatorial chapter in the democratic setup.
The people of Punjab felt alienated from the mainstream due to the denial of their just demands during a long democratic struggle since independence. As happens all over the world, this led to a militant struggle with great loss of life among military, police, and civilian personnel. The killing of Indira Gandhi and the massacre of innocent Sikhs in Delhi and other Indian cities were also associated with this movement.
Honest Reviews of Tim Han LMA Course Program.pptxtimhan337
Personal development courses are widely available today, with each one promising life-changing outcomes. Tim Han’s Life Mastery Achievers (LMA) Course has drawn a lot of interest. In addition to offering my frank assessment of Success Insider’s LMA Course, this piece examines the course’s effects via a variety of Tim Han LMA course reviews and Success Insider comments.
Normal Labour/ Stages of Labour/ Mechanism of LabourWasim Ak
Normal labor is also termed spontaneous labor, defined as the natural physiological process through which the fetus, placenta, and membranes are expelled from the uterus through the birth canal at term (37 to 42 weeks).
Macroeconomics- Movie Location
This will be used as part of your Personal Professional Portfolio once graded.
Objective:
Prepare a presentation or a paper using research, basic comparative analysis, data organization and application of economic information. You will make an informed assessment of an economic climate outside of the United States to accomplish an entertainment industry objective.
Beyond COUNTER Compliant: Ways to Assess E-Resources Reporting Tools
1. BEYOND COUNTER-COMPLIANT
THE IMPORTANCE OF ASSESSING E-RESOURCES REPORTING TOOLS
University Library
Kelly Blanchat
Electronic Resources Support Librarian
Excel Handout: https://tinyurl.com/y7bvlg27
2. 360 COUNTER @ YALE
Out-source usage statistics harvesting
To allocate staff resources elsewhere
Consolidate usage across time and providers
To enhance reporting
NOT to calculate Cost-Per-Use, not using SUSHI
3. 360 COUNTER: HOW IT WORKS
Part 1: 360 COUNTER
A. Collection of usage statistics
• “Data Retrieval Service”
• SUSHI
B. Archives usage data
Part 2: INTOTA ASSESSMENT
A. Consolidates usage
• Across time
• Across providers
B. Assessment (i.e.: CPU)
[Figure: four small provider usage tables (Jan/Feb counts per title) merging into a single consolidated table, illustrating consolidation across providers.]
4. IMPLEMENTATION TIMELINE
Prior to 2015: YUL staff manually retrieves usage statistics every 6 months; YUL staff uploads statistics to a local website.
Spring / Summer 2015: Administrative credentials added to 360 COUNTER for Data Retrieval Service (DRS); YUL staff manually uploads historical statistics to 360 COUNTER.
September 2015: The DRS team finalizes the first retrieval of Yale’s usage statistics, for January – June 2015.
October 2015: Begin exploring the consolidated system; open tickets for data corrections; pull sample reports from Intota Assessment (IA).
November 2015: Collection Development retrieves usage statistics for 2015 OUP BR2 within IA; Oxford English Dictionary is MISSING from consolidated report, as are 342 titles, and 29,170 total uses.
5. …WHAT DID THAT SAY?
Oxford English Dictionary is MISSING from consolidated report, as are 342 other titles, and 29,170 total uses.
November 2015
6.
7. PHASE 1: TITLE-LEVEL ANALYSIS
Excel V-LOOKUP on ISSN/ISBN and title
between COUNTER report and consolidated report
8. PHASE 1: SAMPLE FINDINGS (2015 Oxford BR2)
360 COUNTER finding → INTOTA ASSESSMENT reason
Missing Titles (20 total titles) – REASON: duplicate entries / editions scrubbed
• A Dictionary of Geography
• The Oxford Dictionary of Plays
Variant Usage (34 total titles) – REASON: usage from distinct editions merged
• A Dictionary of Economics
• The Oxford Classical Dictionary
Variant ISBN (63 total titles) – REASON: ISBN changed from raw COUNTER
• Between Two Empires: Race, History, and Transnationalism in Japanese America
9. PHASE 1: OUTCOME
Title-level example errors submitted to ProQuest for the 2015 Oxford BR2.
After response and fix, only 2 titles and 20 uses remained outstanding.
10. PHASE 1: SUCCESSFUL, NOT SUSTAINABLE
Favorable results!
Time consuming and really not fun at all
Yale has over 100 providers in 360 COUNTER
11. PHASE 2: SIMPLIFIED TO COLLECT TOTALS
Collection Date: June 2016
VERY SIMPLE SUBTRACT FORMULA BUILT-IN
12. PHASE 2: IN PRACTICE
CONSTANT: REPORTS
VARIABLE: TIME
RESULT: ACCURACY, OVER TIME
13. PHASE 2: OUTCOME
High-level data collection from 8 content providers for JR1 & BR2 submitted to ProQuest’s 360 COUNTER team.
Received granular information on HOW data is consolidated through normalization to the Authority Title (360 COUNTER → INTOTA ASSESSMENT).
[Figure: the same four provider usage tables, again merging into a single consolidated table after normalization.]
14. CONSOLIDATION = NORMALIZATION
Normalization to the Authority Title will affect overall title count
because…
• When duplicates have matching ISSN and title (i.e.: full match),
usage is merged onto 1 entry
• When a title has variant data points (DOI, ISSN) over time,
titles may display multiple times
• When duplicate titles have the same ISSN but distinct titles,
usage is picked from 1 version
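As an illustration, two of these rules can be simulated with a toy script. This is a simplified model only, not ProQuest's actual normalization algorithm, and every ISSN, title, and usage count below is invented:

```python
from collections import defaultdict

# Toy rows as they might appear in raw COUNTER reports across providers/time.
rows = [
    # (issn, title, uses)
    ("1234-5678", "Journal of Examples", 10),          # full match with...
    ("1234-5678", "Journal of Examples", 5),           # ...this row: merged
    ("2222-0000", "Annals of Samples", 7),             # same ISSN, distinct
    ("2222-0000", "Annals of Samples (Online)", 847),  # titles: one is picked
]

# Rule: entries with matching ISSN AND title (full match) merge their usage.
merged = defaultdict(int)
for issn, title, uses in rows:
    merged[(issn, title)] += uses

# Rule: for duplicate ISSNs with distinct titles, only one version's usage
# survives. Which version "wins" is the black-box part; this toy simply
# keeps the first version seen.
picked = {}
for (issn, title), uses in merged.items():
    picked.setdefault(issn, (title, uses))
```

In this toy model the full-match merge preserves total usage while shrinking the title count, but the same-ISSN rule silently drops one version's usage entirely, which is the same shape of loss as the Springer example in this deck.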
15. EXAMPLE: SAME ISSN, VARIANT NAMES
2015 Springer JR1: When duplicate titles have the same ISSN but
distinct titles, usage is picked from 1 version
16. EVALUATION
Yale has so much data. High-level analysis has helped us better understand WHAT exactly was happening to our data behind the scenes.
And… it is complex, because it is connected to the knowledgebase (and the knowledgebase is complicated).
17. EVALUATION
1) Do the “rules” for title normalization/consolidation make sense?
• How does it affect potential CPU reporting?
2) Which results…
• should trigger a bug fix?
• are a result of incorrect COUNTER data?
• are a result of over title normalization?
19. WHERE WE ARE IN 2017
Prior to 2015: YUL staff manually retrieves usage statistics every 6 months; YUL staff uploads statistics to a local website.
Spring – Summer 2015: Administrative credentials added to 360 COUNTER for Data Retrieval Service (DRS); YUL staff manually uploads historical statistics to 360 COUNTER.
Sept – Nov 2015: The DRS team finalizes the first retrieval of Yale’s usage statistics, for January – June 2015; OED is MISSING from consolidated report, as are 342 titles, and 29,170 total uses.
Dec 2015 – March 2016: Phase 1 title-level analysis comparing raw COUNTER reports to consolidated reports in Intota Assessment.
April 2016: Phase 2 begins with high-level analysis, gathering totals between reports; lots of conference calls with 360 COUNTER.
August 2016: Planning for Phase 3 begins, using ARL statistics as a pilot project; transforming COUNTER as data source in Tableau; lots more conference calls with 360 COUNTER.
January 2017: Pilot with ARL stats is complete; internal testing begins in Access, Tableau.
March 2017: Add more stats for ARL providers to Tableau; discuss more robust data solutions (MySQL, Python).
June 2017: NASIG – HELLO!
The Future….
20. YOU BET THERE’S A PHASE 3!
August 2016
Planning for Phase 3 begins
Transforming COUNTER reports as data source for Tableau
Yay!
21. PHASE 3: “LET COUNTER BE COUNTER”
• We’re putting ourselves in charge, removing guess-work and assuming the “burden” of accepting all data
• Still leveraging the use of 360 COUNTER’s data retrieval – YAY!
• More easily import ILS $$$ data into Tableau to merge with usage
22. PHASE 3: COUNTER AS DATA SOURCE
STANDARD COUNTER REPORT, JR1 → COUNTER REPORT TRANSFORMED AS A DATA SOURCE WITH EXCEL TABLEAU PLUG-IN
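The deck did this transformation with the Excel Tableau plug-in; the same crosstab-to-long reshaping can be sketched in pandas. The report fragment below is made up, and real JR1 reports carry more columns:

```python
import pandas as pd

# A made-up fragment of a JR1-style crosstab: one column per month.
jr1 = pd.DataFrame({
    "Title": ["Journal A", "Journal B"],
    "ISSN": ["1111-1111", "2222-2222"],
    "Jan-2015": [12, 0],
    "Feb-2015": [7, 3],
})

# Tableau works best with a long ("tidy") layout: one row per
# title-month. melt() un-pivots the month columns into rows.
long_form = jr1.melt(id_vars=["Title", "ISSN"],
                     var_name="Month", value_name="Uses")
```

Once in long form, month, title, and uses become ordinary fields that Tableau (or SQL) can filter and aggregate without caring how many months the original report spanned.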
24. PHASE 3: TABLEAU, PROOF OF CONCEPT
2015 Springer JR1: When duplicate titles have the same ISSN but
distinct titles, usage is picked from 1 version – now fixed!
25. PHASE 3: PILOTING
Problem: We collect data on calendar years, but ARL needs fiscal years… (UGH)
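One way to handle the calendar-to-fiscal mismatch is a small helper that relabels each calendar month with a fiscal year. This is a sketch only: it assumes a July–June fiscal year named for the year it ends, which is not necessarily the convention Yale or ARL used:

```python
# Relabel calendar months with a fiscal year for ARL-style reporting.
def fiscal_year(year: int, month: int, fy_start_month: int = 7) -> int:
    """Fiscal year containing the given calendar (year, month).

    Months at or after fy_start_month roll forward into the next
    fiscal year label (e.g. July 2015 -> FY2016)."""
    return year + 1 if month >= fy_start_month else year
```

Grouping monthly COUNTER totals by `fiscal_year(y, m)` instead of by calendar year then produces the fiscal-year buckets the ARL statistics expect.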
26. PHASE 3: PILOT WITH ARL STATS
• It’s self-service! We can set up standard views in Tableau for subject librarians to retrieve
27. NEXT GOALS…
• Move from less manual (Excel, Access) to more automated and robust (Python, SQL)
• Data visualization!
• Self-service for renewals
• NO QUESTIONS about data, built on COUNTER standards
My name is Kelly Blanchat, and my talk today, titled “Beyond Counter-compliant: the importance of assessing e-resources reporting tools” will discuss Yale’s experience bringing up an ERMS for usage statistics, the methods we used to assess the output from the tool, and how we have modified our use of it over time.
This talk focuses on ProQuest’s 360 COUNTER tool and how we worked with ProQuest to uncover how title normalization factored into the usage consolidation. Though this talk is specific to one tool, my hope is that it will give each of you who have an ERMS for usage statistics a method and a purpose for replicating this assessment within your own systems.
For Yale, 360 COUNTER was implemented in order to OUTSOURCE the harvesting of usage statistics and thereby ALLOCATE staff resources elsewhere.
We also wanted to leverage it to consolidate usage in order to enhance and expand our reporting capabilities.
360 COUNTER was NEVER intended to calculate CPU at Yale. At Yale, adding in usage data to another system would be unnecessarily replicating work happening in our ILS. The time it would take to add that information into 360 COUNTER didn’t seem worth it.
We’re also only using the Data Retrieval Service option, which involves team members from ProQuest manually retrieving our stats for us, and not using an automated retrieval like SUSHI. One thing that I’ll talk about later is the approaching iceberg that is both SUSHI and COUNTER R5.
Now that I’ve talked a bit about our intentions with the usage ERM, I want to clarify a bit about how the system works, since I’ll be using some language specific to 360 COUNTER in this presentation.
What I’ll be referring to as “360 COUNTER” is actually made up of two parts: 360 COUNTER itself and the reporting piece, Intota Assessment. For any ProQuest ExLibris customers out there, I should clarify that Yale is only an Intota customer for this one piece – Intota Assessment – and we don’t subscribe to Intota as a whole.
Essentially, how the two pieces break down is that 360 COUNTER is the AUTOMATED COLLECTION and STORAGE of usage data. Once the data is available in 360 COUNTER, it is brought into Intota Assessment, part 2, where usage is CONSOLIDATED over both TIME and PROVIDER. In short: 360 COUNTER does the collection, and Intota Assessment does the consolidated reporting, and it can generate cost-per-use calculations as well.
Prior to implementing 360 COUNTER, usage statistics were retrieved and uploaded to a local server twice per year by a staff member in e-resources. This process was time consuming.
In Spring / Summer 2015 we began implementing 360 COUNTER (which incidentally was also around the time when I began at Yale). Implementation involved entering all of our administrative data into the system, as well as uploading our historical usage reports. This, like our former processes, was time consuming, but was intended to streamline our workflows down the line. The ultimate payoff.
In September / October 2015 we received our first set of outsourced statistics from the 360 COUNTER Data Retrieval Service, and we began exploring both parts of the system, 360 COUNTER and Intota Assessment.
Then in November, Collection Development went in to review the usage statistics for the 2015 Oxford University Press BR2.
It was in November 2015 that we really started kicking the tires in the system, because while retrieving these Oxford statistics, the OED was completely missing from the consolidated report in Intota Assessment, while it was present in the raw data available in 360 COUNTER.
Remember, Intota Assessment is the consolidated reporting piece – and the consolidated report was also 342 titles short, and was missing 29,170 uses from the original COUNTER report.
And if anyone asks WHY we started assessing our reporting tools in e-resources, the missing Oxford English Dictionary was a big reason why.
Up until this point we had been finding smaller errors and had submitted individual tickets for ProQuest to correct. It is important to note that these tickets had been resolved quickly, and we stayed in touch with ProQuest the entire time we were implementing and testing. Our support team here was invaluable for the next parts of my presentation.
However, once the OED went missing, we started doing a much more detailed analysis.
In order to actually get the numbers that I rattled off about the Oxford BR2 – again that was 342 total missing titles from the consolidated report and 29K uses -- we had to find a way to compare the raw COUNTER report with the consolidated usage report from Intota Assessment.
To do this, we did a title-level analysis using Excel’s VLOOKUP function, first matching ISSNs between raw COUNTER reports and the consolidated report, and then matching titles. A VLOOKUP can identify where data points match EXACTLY, and therefore it also isolates data points that do not have an exact match.
The result was akin to the “spot the differences” comic strips from kids’ magazines. If you’ve ever smashed data together in this way, you’ll know it’s not a perfect method, but it does the trick when you’re trying to do high-level comparisons.
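For anyone replicating this outside Excel, an analogous "spot the differences" comparison can be sketched with a pandas outer merge. The ISBNs and usage numbers here are hypothetical, and real COUNTER reports carry more columns:

```python
import pandas as pd

# Hypothetical raw COUNTER report and consolidated Intota Assessment
# report, keyed on ISBN.
raw = pd.DataFrame({
    "isbn": ["978-1", "978-2", "978-3"],
    "title": ["A Dictionary of Geography",
              "The Oxford Dictionary of Plays",
              "Oxford English Dictionary"],
    "uses": [10, 5, 120],
})
consolidated = pd.DataFrame({
    "isbn": ["978-1", "978-2"],
    "title": ["A Dictionary of Geography",
              "The Oxford Dictionary of Plays"],
    "uses": [10, 4],
})

# An outer merge with indicator=True flags rows present in only one
# report, mirroring what the VLOOKUP's failed matches revealed.
diff = raw.merge(consolidated, on="isbn", how="outer",
                 suffixes=("_raw", "_consolidated"), indicator=True)

# Titles dropped during consolidation:
missing = diff[diff["_merge"] == "left_only"]
# Titles present in both reports but with variant usage:
variant = diff[(diff["_merge"] == "both")
               & (diff["uses_raw"] != diff["uses_consolidated"])]
```

As with the VLOOKUP, this only catches exact-key mismatches; titles that changed both ISBN and name between reports still need eyeballing.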
The title-level analysis helped us identify 3 major categories of what had happened to the consolidated 2015 Oxford BR2 –
--duplicate titles had been scrubbed
--distinct editions of books with unique ISBNs had been merged into a single entry
--ISBNs had changed from the raw COUNTER report to the consolidated report.
Though this information was useful, it still didn’t answer why OED was missing. You’ll see that the totals I have here – 20, 34, 63 – don’t add up to the full 342 title difference between the reports.
Again -- VLOOKUP isn’t perfect -- but what we discovered AND what was still missing led us to the next steps in our exploration.
Though they weren’t quite complete, we submitted our title-level findings to ProQuest for review. After about a month fixes were put into place and only 2 titles and 20 uses remained outstanding.
The results from ProQuest were favorable, and for us the remaining 2 titles and 20 uses was nominal and likely wouldn't affect decision making. It’s still not perfect though, and we weren’t sure why.
We did know that embarking on a title-level process of examining ALL consolidated reports would be EXTREMELY time consuming, and not sustainable to evaluate all providers and all reports.
Figure we have 100+ providers in 360 COUNTER, times 1-8 different types of COUNTER R4 reports. This process was simply not sustainable, but it was a good exercise in understanding the system and pinpointing areas for improvement and correction.
Enter Phase 2, where we moved to high-level data collection, focusing on totals and using an Excel spreadsheet to keep track of the data over time.
This image is a snippet of what the working assessment tool looks like: an Excel spreadsheet with a very simple SUBTRACT formula built in. You can see here that I have a collection date noted in my header, followed by the provider, report type, and year, and then distinct areas for the TOTALS collected from both the 360 COUNTER original report and the consolidated report from Intota Assessment.
It’s simple math, with a big impact.
During testing, we updated the collection data whenever there was a big system update or when a new batch of statistics became available.
The worksheet builds as time passes, with new sections created for each distinct collection date. Therefore, we always have CONSTANTS – provider, report type, and year – as well as a variable, which is the DATE OF COLLECTION. Combined, these figures give us a rate of change over time. Remember: collection over time is intended for the SAME COUNTER REPORT, and those numbers shouldn’t be changing. With each assessment submitted to ProQuest, our goal was to see these totals getting closer to zero, where 0 equals exact data matches between 360 COUNTER and Intota Assessment.
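The worksheet’s SUBTRACT logic is simple enough to sketch in a few lines; the function and field names below are illustrative, not taken from the actual spreadsheet:

```python
def discrepancy(original_total, consolidated_total):
    """The worksheet's SUBTRACT formula: 360 COUNTER original total
    minus the Intota Assessment consolidated total.
    0 means the two systems agree exactly."""
    return original_total - consolidated_total

def rate_of_change(snapshots):
    """snapshots: list of (collection_date, discrepancy) pairs for the
    SAME COUNTER report over time. Since the underlying report never
    changes, any movement here reflects the consolidation system,
    not actual usage. Negative deltas mean the gap is closing."""
    return [(later_date, later_val - earlier_val)
            for (_, earlier_val), (later_date, later_val)
            in zip(snapshots, snapshots[1:])]
```

The goal stated above maps directly onto this: with each assessment submitted to ProQuest, `discrepancy` should trend toward zero.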
I do want to quickly note the COUNTER validation tool from Usus here. I have tested it on these reports, both the native COUNTER reports and the re-formatted consolidated reports from Intota Assessment. The tool cannot pick up what I am describing here; technically, both are COUNTER-compliant.
When we moved to submitting high-level error reports to ProQuest, we were able to provide MORE data on a variety of different report types and providers, and the result gave us insight on what happens behind the scenes as data moves from 360 COUNTER collection to consolidated reports within Intota Assessment.
Animated slide.
A look at other things we’ve learned from ProQuest after submitting our high-level assessment for review.
Here’s an example of what ProQuest found from our high-level assessment for normalization when “duplicate titles have the same ISSN and distinct titles, usage pulled from 1 version”. Since we didn’t build this system, and because the normalization process occurs in a ‘black box’, knowing this information helps us understand the system better, and also helps in our decision making about whether the system totals are valuable for our assessment.
In this case, while ~900 uses might be nominal compared to the total uses of the Springer report, losing 847 uses to a normalized title showing 0 uses is significant. For a provider smaller than Springer, this kind of normalization would have an even bigger impact; and for a provider such as JSTOR, these types of occurrences will be more prevalent.
The system ADDS usage by month instead of copying the Reporting Period Total (RPT) from the COUNTER report
Titles without ISSN data cannot be normalized
Titles with the same ISSN for print and online cannot be normalized
Though there are many different usage consolidation systems, this template can work to evaluate them due to its simplicity.
With ProQuest’s help, phases 1 & 2 challenged the assumption that our data would do what we expected. By this point, 360 COUNTER had been fully implemented at Yale for 1.5 years.
After phase 2, we paused on our workflows assessing the 360 COUNTER system, due to the following:
In the process we provided ProQuest with a great deal of data, in addition to enhancement requests. That data helped us evaluate the system, and also helped us re-evaluate what we wanted from the system.
We can’t rebuild their system, but we can make recommendations
We needed to get going on consolidating usage so that we could enhance our assessment metrics
360 COUNTER was already freeing up our time in collecting reports – which is great!
We’re putting ourselves in charge: in Intota Assessment we couldn’t see what was happening, and had to guess or submit a ticket to ask.
The burden is now on us, but we have a clearer view of what’s happening.
We’re now transforming our raw COUNTER data so that it is formatted as a data source, which can be lined up with our ILS data and matched behind the scenes in Tableau. Tableau is a data visualization tool, and with COUNTER as a data source we are essentially able to consolidate our own usage, on our terms.
This method lets us leverage 360 COUNTER by outsourcing the retrieval, while avoiding that pesky title normalization piece.
This new process means that we’re accepting every single title already represented in the COUNTER report. Knowing that we are not over-working the report or over-normalizing our data -- and potentially creating misleading or inaccurate information -- is more important to us than pulling apart or merging title history.
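The reshaping step behind this -- turning a wide COUNTER report (one column per month) into a long data source Tableau can join against ILS data -- can be sketched as follows. The column names are illustrative; real JR1 headers vary by provider, and our actual workflow uses an Excel plug-in and Access rather than this code:

```python
import csv

def counter_to_long(path, id_fields=("Journal", "Print ISSN", "Online ISSN")):
    """Reshape a wide COUNTER-style CSV into long format:
    one row per title per month, with no title normalization applied --
    every title in the report is kept exactly as delivered."""
    rows = []
    with open(path, newline="", encoding="utf-8") as f:
        for record in csv.DictReader(f):
            for column, value in record.items():
                if column not in id_fields:  # remaining columns are months
                    row = {k: record[k] for k in id_fields}
                    row["Month"] = column
                    row["Usage"] = int(value or 0)
                    rows.append(row)
    return rows
```

Because nothing is merged or dropped, the 847 uses in the Springer example above would survive this transformation intact.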
Remember: when consolidating usage using 360 COUNTER and Intota Assessment, this journal title had been represented as having 0 uses, when in fact it had 847 uses.
Now, with our home-grown usage consolidation, we can consolidate the usage for this title, see its title history (even with its flaws), and still see its total usage count for the year in question (2015).
Now that we knew phase 3 could work, we wanted a proof of concept. Because we struggle to collect ARL statistics every year – since the switch from calendar year reporting to fiscal year reporting can be cumbersome – using ARL providers was a clear choice.
To create the proof of concept for ARL providers, we went through the process of turning the COUNTER reports into a data source for each provider, for 2013 to present. Remember: since we’re still using the retrieval service from ProQuest, we had extra time to go through this process, and because the transformation into a data source happens with the click of a button, it was quick!
The ARL statistics were imported into Tableau, which enabled us to easily switch from calendar year reporting to fiscal year reporting, for FY14 to FY16. The picture here is the result – success!
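The calendar-to-fiscal mapping that makes this switch easy is a one-liner once monthly usage is in long format. This sketch assumes a July–June fiscal year (adjust `fy_start_month` for a different one); the function name is ours, not Tableau’s:

```python
def fiscal_year(year, month, fy_start_month=7):
    """Map a calendar year/month to a fiscal year label such as 'FY16'.
    Assumes the fiscal year is named for the calendar year it ends in,
    e.g. Jul 2015 - Jun 2016 is FY16."""
    fy = year + 1 if month >= fy_start_month else year
    return f"FY{fy % 100:02d}"
```

Applied to each monthly row in the data source, this lets the same COUNTER data roll up as either calendar-year or fiscal-year totals without re-collecting anything.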
Right now the process of transforming COUNTER reports into a data source is manual, requiring an Excel plug-in and an Access database. Moving forward, we’re looking into making this process more automatic by having Python pick up the reports from 360 COUNTER and transform them with a script. Additionally, when we have more and more reports as part of this process, we’ll need to switch to a database structure more robust than Microsoft Access, which we are in the process of testing with SQL.
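The planned Python step might look something like the batch loop below; this is a hypothetical sketch of the pipeline shape, with `transform` standing in for whatever replaces the Excel plug-in and Access database:

```python
from pathlib import Path

def process_reports(download_dir, transform, out_dir):
    """Hypothetical batch step for the planned automation: pick up
    every COUNTER CSV retrieved from 360 COUNTER and write a
    transformed (long-format) copy for the data source.
    `transform` is any text-to-text function supplied by the caller."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    processed = []
    for report in sorted(Path(download_dir).glob("*.csv")):
        (out / report.name).write_text(transform(report.read_text()))
        processed.append(report.name)
    return processed
```

The point of the sketch is the shape, not the details: once retrieval, transformation, and loading are each a function, the whole pipeline can run unattended on a schedule.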
Tableau also provides our subject librarians with more opportunities for self-service usage retrieval. Right now we’re still doing some degree of emailing usage to librarians. Tableau can allow us to set up standard and highly-requested views (or analysis types) so that any librarian can enter the system and retrieve what they want.
So far we have tackled bringing up a new system, testing it, assessing it, and starting our own home-grown version. We’re currently working within COUNTER R4 and with manual usage harvesting.
On the horizon, we see potential road blocks: COUNTER R5 (reports that may be formatted in a different way, with different or revised metrics) and SUSHI. To overcome these potential blocks, we see automation as part of the solution – for both SUSHI and COUNTER R5. Python can help us create a pipeline so that harvesting is handled in the background and we don’t have to worry about it.
Once we move fully to SUSHI we’ll be re-using the assessment worksheet to evaluate the delivery of SUSHI reports and the consistency of the data.