An early experimenter with Zepheira's Linked Data for libraries discusses their experience with converting their MARC records to BIBFRAME/Linked Data and trying to measure the impact of this service on circulation, new borrower registrations, traffic counts, and Inter-Library Loans in 2016.
Linked Data for Libraries: Experiments between Cornell, Harvard and Stanford, by Simeon Warner
The Linked Data for Libraries (LD4L) project aims to connect bibliographic, person, and usage data from Cornell, Harvard, and Stanford using linked open data. The project is developing an extensible LD4L ontology based on existing standards like BIBFRAME and VIVO. It is working to transform over 30 million bibliographic records into linked data and demonstrate cross-institutional search. The goals are to provide richer discovery and context for scholarly resources by connecting previously isolated library data.
This document summarizes Rob Sanderson's presentation on linked data best practices and BibFrame. It finds that while BibFrame 2.0 shows some improvement, it still does not fully conform to linked data best practices. Specifically, it does not sufficiently reuse existing vocabularies, relate terms outside its namespace, or drop remaining non-URI identifiers. It also finds that the MARC to BibFrame conversion tools are insufficient for production use and need to be more openly developed and documented to support implementation by the linked data community.
The document discusses marketing and assessment in libraries. It covers topics such as the four P's of marketing (product, price, place, promotion), advocacy, branding, outreach, using social media and new technologies, conducting library assessments using tools like surveys and usability studies, analyzing LibQual+ survey results, identifying areas for improvement, and challenges to effective assessment. The main purpose is to help libraries better understand user needs and priorities in order to improve services, allocate resources, and advocate for funding.
Elevate the status of your library with data visualizations and multimedia me..., by Library_Connect
Webinar slides from:
- Todd Bruns, Institutional Repository Librarian, Eastern Illinois University
- Dudee Chiang, Senior Technical Librarian, NASA Jet Propulsion Laboratory
- Jean Shipman, Vice President of Global Library Relations, Elsevier
See the recorded webinar at: http://libraryconnect.elsevier.com/library-connect-webinars?commid=279911
Radicalize Your Library Catalog with Ebooks Your Patrons Can Keep Forever, by loriayre
Presentation about how to find and select ebooks from the Internet Archive and create clickable links from within your library catalog so patrons can access them without having to leave your catalog.
Discussion of some of the reasons libraries might collaborate in consortia. Includes data from the forthcoming book, “Library Consortia: Models for Collaboration and Sustainability” (editors Greg Pronevitz and Valerie Horton). Presentation was on April 29, 2014.
NCompass Live - January 2, 2014.
http://nlc.nebraska.gov/ncompasslive/
The Bibliographic Framework Initiative, or BIBFRAME, is intended to provide a replacement for the MARC format as an encoding standard for library catalogs. Its aim is to move library data into a Linked Data format, allowing it to interact with other data on the Web. In this session, Emily Nimsakont, the NLC’s Cataloging Librarian, will cover the basics of BIBFRAME, describe what it can provide for users of library catalogs that MARC can’t, and outline what librarians should be aware of regarding this change in the cataloging landscape.
What Is Linked Data, and What Does it Mean for Libraries? ALAO TEDSIG Spring ..., by Emily Nimsakont
This document provides an overview of Linked Data and what it means for libraries. It defines Linked Data as a method of publishing structured data on the web so it can be interlinked and more useful. Linked Data uses URIs and RDF to make relationships between data explicit. This allows data to be queried and customized in new ways. Examples of Linked Data include DBpedia and Freebase. For libraries, Linked Data could eliminate data silos by breaking down traditional bibliographic records into linked data. This would allow library data to interact more openly on the web. It may change cataloging workflows and require new skills from librarians. However, challenges include needing to develop new software and standards, as well as ensuring reliable data.
This webinar is about the Open Source software that is available to supplement your library system, regardless of whether you are using an Open Source Library System like Koha or Evergreen or a proprietary system like Millennium, CARL, or Horizon.
Software that dramatically extends and expands the capabilities of your library system falls into two main categories: discovery interface and metasearch. While other products (e.g. content management systems) may integrate with your ILS to some degree, we will focus our attention on discovery and metasearch tools, how they work, and who is using them.
BIBFRAME as a Library Linked Data Standard, by Thomas Meehan
BIBFRAME is a proposed standard for encoding bibliographic metadata as linked data to replace the MARC format. It was developed by the Library of Congress to address MARC's limitations in a linked data environment. BIBFRAME models bibliographic information as works, instances of works, and authority data. It defines a vocabulary and encoding guidelines to structure data according to the FRBR conceptual model and enable linking between related metadata. Several projects are experimenting with implementing BIBFRAME to demonstrate its utility for library linked data applications.
The Impact of Linked Data in Digital Curation and Application to the Catalogu..., by Ian Bigelow
Presented by Ian Bigelow, Danielle Emon, Stacey Boileau, Jenny Jing and Erin Tripp at ACCESSYYZ in Toronto on September 10th, 2015.
Abstract:
Information organization and systems in libraries are in a state of significant flux. In systems there is a shift to XML and RDF-based schemas and ontologies while resource description content standards have changed from AACR2 to RDA. A move from MARC to BIBFRAME and other linked data applications is on the horizon. Linked data and the semantic web have become buzzwords, but what is linked data and why it is important for librarians? How can we use it in digital curation? What can libraries do now to “prepare” for this change in their current practice?
In light of these questions, the panel presentation will discuss two projects. First, there will be coverage of a sample project using Islandora, the Fedora-based open source framework, to demonstrate the concepts of connecting related data across the Web with URIs, HTTP and RDF. The second half of the presentation will describe how a consortium has taken a holistic approach to writing an RDA workflow to help front-line cataloguers develop a wider perspective on resource description (creating more structured, future-compatible metadata). Up for discussion: the current state and future possibilities of library metadata, with a focus on the implications of linked data.
Building a Better Knowledgebase: An Investigation of Current Practical Uses a..., by NASIG
While knowledgebases have become essential tools for electronic resources management, little research has been done on how practitioners have integrated them into their everyday workflows. Inspired by a partnership with the GOKb project, which aims to build an open source knowledgebase, librarians at North Carolina State University set out to investigate the practical requirements, areas of improvement, and desired enhancements that librarians have for their knowledgebases. During this program, the presenters will describe the results of a survey about knowledgebase use sent to electronic resources managers across the country. The survey results will be supplemented by individual points of view gathered from in-depth interviews with selected respondents. The program will conclude with a look at how the findings of the investigation can be applied to the GOKb project. At the end of the session, the attendee should walk away with an understanding of trends in knowledgebase management, areas where the greatest improvement is needed, and ideas for enhancing knowledgebase functionality in an open source setting.
Maria Collins
Head of Acquisitions and Discovery, North Carolina State University
Maria Collins is the head of Acquisitions and Discovery at North Carolina State University Libraries. The Acquisitions & Discovery department was formed through the merger of acquisitions and cataloging in June 2012. Her other positions held at NCSU since 2005 include serials librarian, associate head of Acquisitions and the head of Content Acquisitions and Licensing. She previously worked as serials librarian and serials coordinator at Mississippi State University Libraries. Maria is editor of Serials Review and was the column editor for SR's Electronic Journal Forum. She also chairs the team developing NCSU's locally developed electronic resource management system, E-Matrix, and participates in the Kuali OLE and Global Open KnowledgeBase (GOKb) projects.
Katherine Hill
North Carolina State University
Katherine Hill is a library fellow in Acquisitions and Discovery at North Carolina State University Libraries. In that role, she has been involved in planning and designing the open source knowledgebase GOKb, as well as e-acquisitions workflows for the open source ILS Kuali OLE.
The document summarizes a NISO/UKSG webinar on KBART (Knowledge Bases And Related Tools) and improving access to electronic resources through the OpenURL standard. Peter McCracken of Serials Solutions discussed KBART's goals of ensuring accurate and timely transfer of metadata between publishers, link resolvers, and libraries to improve access. Thomas Ventimiglia of Princeton University explained how following KBART guidelines could reduce the challenges link resolver vendors face in processing metadata from numerous sources. Christine Noonan of Pacific Northwest National Laboratory provided the perspective of how libraries rely on accurate knowledge base data to link users to appropriate resources. Jenny Walker of CredoReference discussed Credo's participation in KBART to help users
The Power of Sharing Linked Data - ELAG 2014 Workshop, by Richard Wallis
Presentation to set the scene and stimulate discussion in the workshop "The Power of Sharing Linked Data" at ELAG 2014, Bath University, UK, June 10-11, 2014.
ERM Maintenance: Mapping, Maximizing and Marketing Multiple User Access Methods, by Susan Massey
The proliferation of discovery layers to access library materials poses challenges for electronic resource record loading, maintenance, display, and marketing of multiple user portals. Centralized discovery systems promise to reduce data silos but may complicate electronic resource management activities and mask the importance of the library as a data provider. Electronic resource maintenance must include cross-departmental cooperation for tracking statistics, making purchased titles available to the user, updating subscription changes, verifying metadata display and access in multiple user interfaces, storing metadata files for potential future system migrations, deduplication of processes, and crosswalks between discovery layers. This presentation uses the University of North Florida’s discovery interfaces to map the maze of multiple maintenance workflows and open discussion about best practices for the future.
The Progress of BIBFRAME, by Angela Kroeger
Presentation given at the OLAC-MOUG 2014 conference. Abstract: BIBFRAME is the Library of Congress's current effort to develop a linked data replacement for MARC. BIBFRAME is a work in progress, not yet ready for implementation. In this two-hour session, we will examine how BIBFRAME works, what it is intended to accomplish, and the progress that has been made toward that goal. We'll take a look at the BIBFRAME tools that are under development, including the prototype editor for creating new records. And we'll share a glimpse of what the future holds for library catalogs and cataloging. NOTE: SlideShare seems to have garbled the formatting of some of my slides. To receive a clean copy via email, contact me at angelajkroeger [at] gmail [dot] com.
When There Is No Magic Bullet: an Interlocking Approach of Managing Ebooks, by NASIG
Presenter:
Xiaoyan Song, Electronic Resource Librarian, NC State University Libraries
As the academic ebook business grows rapidly, opportunities and challenges arise from this change. A wide range of systems and tools have sprung up to help librarians manage ebooks in an efficient and streamlined fashion. Proprietary vendors are acquiring new technologies and products to integrate into their existing product lines. Some community-developed open source systems and tools have become rising stars due to economic and budget pressures. Specific local needs result in home-grown tools. Nevertheless, librarians often find themselves frustrated with the variety of choices before them, realizing that there is no single magic bullet that can solve all their problems. Creative and critical thinking has become the norm as libraries seek an optimal solution that combines these options. And that is what's essential to Lego play!
This session demonstrates how an interlocking approach was developed that integrates the ILS, ERM, open source tools, and a locally developed database to manage ebooks. It starts with an examination of the Lego building process from a Lego workshop the presenter recently attended, followed by an analogy between Lego building and ebook management. It provides a quick overview of the mainstream systems that the presenter's home libraries are using, discussing the pain points within these systems. It elaborates on how open source and locally developed tools are brought into the "Lego building" process.
Ebooks are dynamic in nature. Paired with creative thinking and problem-solving skills, the interlocking approach allows us to embrace change with the same fun we find in Lego play.
Forging New Links: Libraries in the Semantic Web, by Gillian Byrne
This document discusses the potential benefits of applying Semantic Web and Linked Data technologies to libraries. It describes how structured data, controlled vocabularies, and linking of data across systems can help address current issues with library discovery like siloed data and lack of connections between related resources. The document outlines key Semantic Web concepts like RDF, ontologies, and reasoning and provides examples of how libraries can publish and interconnect their metadata as Linked Open Data to enhance discovery and personalization for users. However, it also notes obstacles to library adoption of these approaches like competing vocabularies, issues with identity, trust, preservation and licensing.
The document provides an overview of current developments and future trends relating to the library website and online services at University College Dublin Library. It discusses statistics on usage of the current website, planned enhancements including a new institutional repository, moving some content to subject guides and using new platforms like LibGuides. It notes that the website will need to be restructured and packaged differently across various hosted platforms going forward to better meet user needs in an evolving information environment. A consultancy will help develop a new vision and roadmap for online library services over the next 2-3 years.
This presentation was delivered by Carolyn Hansen of the University of Cincinnati during the NISO Virtual Conference, BIBFRAME & Real World Applications of Linked Bibliographic Data, held on June 15, 2016.
This document summarizes a CrossRef workshop held in South Africa in September 2015. It discusses managing CrossRef DOIs and metadata, including depositing DOIs and metadata, updating changes, and handling content moves between publishers. It also covers using the same DOI for content in multiple languages or on multiple sites through CrossRef's multiple resolution feature.
Level Up Web: Modern Web Development and Management Practices for Libraries, by Nina McHale
This document discusses content strategy, workflow, and governance for websites. It defines content strategy as planning for the creation, publication, and governance of useful, usable content. It emphasizes the importance of defining roles, responsibilities, and processes for developing, maintaining, and updating content. Examples are provided for defining different types of content and creating workflow matrices. The importance of governance structures, policies, and guidelines is discussed to ensure consistency across websites. Overall, the document provides guidance on taking a strategic approach to managing website content through definition, collaboration, and oversight.
Comprehensive Database Assessment with a Small Staff, by Courtney F
This document summarizes Courtney Fuson's process for conducting a comprehensive database assessment with a small library staff at Belmont University. It outlines Fuson's initial timeline which had to be adjusted due to changes at the library and availability of staff. It also describes Fuson's literature review on best practices, development of assessment criteria, creation of a team to gather usage data, and initial scoring results. The next steps will be to complete data collection, determine weighting of criteria, create a core list of resources, and discuss cancellation decisions with faculty. Key lessons learned were that a master list of resources is still needed and gathering complete information on all databases takes significant time.
This document discusses MINES for Libraries, a methodology developed by the Association of Research Libraries (ARL) to assess usage of electronic resources. MINES uses web surveys to randomly sample users and collect data on demographics, location, and purpose of use without being obtrusive. It addresses questions about how sponsored researchers, instructors, students, and other users access electronic resources. Over 150,000 usage instances have been surveyed using MINES. The presentation outlines the history and framework of MINES, how it differs from other usage metrics, and challenges in assessing digital resources as users, devices, and expectations change.
NCompass Live - January 2, 2014.
http://nlc.nebraska.gov/ncompasslive/
The Bibliographic Framework Initiative, or BIBFRAME, is intended to provide a replacement to the MARC format as an encoding standard for library catalogs. Its aim is to move library data into a Linked Data format, allowing it to interact with other data on the Web. In this session, Emily Nimsakont, the NLC’s Cataloging Librarian, will cover the basics of BIBFRAME, describe what it can provide for users of library catalogs that MARC can’t, and outline what librarians should be aware of regarding this change in the cataloging landscape.
What Is Linked Data, and What Does it Mean for Libraries? ALAO TEDSIG Spring ...Emily Nimsakont
This document provides an overview of Linked Data and what it means for libraries. It defines Linked Data as a method of publishing structured data on the web so it can be interlinked and more useful. Linked Data uses URIs and RDF to make relationships between data explicit. This allows data to be queried and customized in new ways. Examples of Linked Data include DBpedia and Freebase. For libraries, Linked Data could eliminate data silos by breaking down traditional bibliographic records into linked data. This would allow library data to interact more openly on the web. It may change cataloging workflows and require new skills from librarians. However, challenges include needing to develop new software and standards, as well as ensuring reliable data.
This webinar is about the Open Source software that is available to supplement your library system, regardless of whether you are using an Open Source Library System like Koha or Evergreen or a proprietary system like Millennium, CARL, or Horizon.
Software that dramatically extends and expands the capabilities of your library system software fall into two main categories: discovery interface and metasearch. While other products (e.g. content management systems) may integrate with your ILS to some degree, we will focus our attention on discovery and metasearch tools, how they work and who is using them.
BIBFRAME as a Library Linked Data StandardThomas Meehan
BIBFRAME is a proposed standard for encoding bibliographic metadata as linked data to replace the MARC format. It was developed by the Library of Congress to address MARC's limitations in a linked data environment. BIBFRAME models bibliographic information as works, instances of works, and authority data. It defines a vocabulary and encoding guidelines to structure data according to the FRBR conceptual model and enable linking between related metadata. Several projects are experimenting with implementing BIBFRAME to demonstrate its utility for library linked data applications.
The Impact of Linked Data in Digital Curation and Application to the Catalogu...Ian Bigelow
Presented by Ian Bigelow, Danielle Emon, Stacey Boileau, Jenny Jing and Erin Tripp at ACCESSYYZ in Toronto on September 10th, 2015.
Abstract:
Information organization and systems in libraries are in a state of significant flux. In systems there is a shift to XML and RDF-based schemas and ontologies while resource description content standards have changed from AACR2 to RDA. A move from MARC to BIBFRAME and other linked data applications is on the horizon. Linked data and the semantic web have become buzzwords, but what is linked data and why it is important for librarians? How can we use it in digital curation? What can libraries do now to “prepare” for this change in their current practice?
In light of these questions, the panel presentation will discuss two projects. First, there will be coverage of a sample project using the Fedora-based open source framework, Islandora to demonstrate the concepts of connecting related data across the Web with URIs, HTTP and RDF. The second half of the presentation will describe how a consortia has taken a holistic approach to writing an RDA workflow to help front-line cataloguers develop a wider perspective when it comes to resource description (creating more structured, future compatible metadata). Up for discussion: the current state and future possibilities of library metadata with a focus on the implications of linked data.
Building a Better Knowledgebase: An Investigation of Current Practical Uses a...NASIG
While knowledgebases have become essential tools for electronic resources management, little research has been done about how practitioners have integrated them into their everyday workflows. Inspired by a partnership with the GOKb project, which aims to build an open source knowledgebase, librarians at North Carolina State University set out to investigate the practical requirements, areas of improvement, and desired enhancements that librarians have for their knowledgebases. During this program, the presenters will describe the results of a survey about knowledgebase use sent to electronic resources managers across the country. The survey results will be supplemented by individual points of view gathered from in-depth interviews with selected respondents.The program will conclude with a look at how the findings of the investigation can be applied to the GOKb project. At the end of the session, the attendee should walk away with an understanding of trends in knowledgebase management, areas where the greatest improvement is needed, and ideas for enhancing knowledgebase functionality in an open source setting.
Maria Collins
Head of Acquisitions and Discovery, North Carolina State University
Maria Collins is the head of Acquisitions and Discovery at North Carolina State University Libraries. The Acquisitions & Discovery department was formed through the merger of acquisitions and cataloging in June 2012. Her other positions held at NCSU since 2005 include serials librarian, associate head of Acquisitions and the head of Content Acquisitions and Licensing. She previously worked as serials librarian and serials coordinator at Mississippi State University Libraries. Maria is editor of Serials Review and was the column editor for SR's Electronic Journal Forum. She also chairs the team developing NCSU's locally developed electronic resource management system, E-Matrix, and participates in the Kuali OLE and Global Open KnowledgeBase (GOKb) projects.
Katherine Hill
North Carolina State University
Katherine Hill is a library fellow in Acquisitions and Discovery, at North Carolina State University Libraries. In that role, she has been involved in planning and designing the open source knowledge base GOKb as well as e-acquisitions workflows for the open source ILS, Kuali OLE.
The document summarizes a NISO/UKSG webinar on KBART (Knowledge Bases And Related Tools) and improving access to electronic resources through the OpenURL standard. Peter McCracken of Serials Solutions discussed KBART's goals of ensuring accurate and timely transfer of metadata between publishers, link resolvers, and libraries to improve access. Thomas Ventimiglia of Princeton University explained how following KBART guidelines could reduce the challenges link resolver vendors face in processing metadata from numerous sources. Christine Noonan of Pacific Northwest National Laboratory provided the perspective of how libraries rely on accurate knowledge base data to link users to appropriate resources. Jenny Walker of CredoReference discussed Credo's participation in KBART to help users
The Power of Sharing Linked Data - ELAG 2014 WorkshopRichard Wallis
Presentation to set the scene and stimulate discussion in the Workshop "The Power of Sharing Linked Data" at ELAG 2014 - Bath University, UK June 10/11 2014
ERM Maintenance: Mapping, Maximizing and Marketing Multiple User Access MethodsSusan Massey
The proliferation of discovery layers to access library materials poses challenges for electronic resource record loading, maintenance, display, and marketing of multiple user portals. Centralized discovery systems promise to reduce data silos but may complicate electronic resource management activities and mask the importance of the library as a data provider. Electronic resource maintenance must include cross-departmental cooperation for tracking statistics, making purchased titles available to the user, updating subscription changes, verifying metadata display and access in multiple user interfaces, storing metadata files for potential future system migrations, deduplication of processes, and crosswalks between discovery layers. This presentation uses the University of North Florida’s discovery interfaces to map the maze of multiple maintenance workflows and open discussion about best practices for the future.
The Progress of BIBFRAME, by Angela KroegerAngela Kroeger
Presentation given at the OLAC-MOUG 2014 conference. Abstract: BIBFRAME is the Library of Congress's current effort to develop a linked data replacement for MARC. BIBFRAME is a work in progress, not yet ready for implementation. In this two-hour session, we will examine how BIBFRAME works, what it is intended to accomplish, and the progress that has been made toward that goal. We'll take a look at the BIBFRAME tools that are under development, including the prototype editor for creating new records. And we'll share a glimpse of what the future holds for library catalogs and cataloging. NOTE: SlideShare seems to have garbled the formatting of some of my slides. To receive a clean copy via email, contact me at angelajkroeger [at] gmail [dot] com.
When There Is no Magic Bullet: an Interlocking Approach of Managing EbooksNASIG
Presenter:
Xiaoyan Song, Electronic Resource Librarian, NC State University Libraries
As academic ebook business grows rapidly, opportunities and challenges arise out of this change. A wide range of systems and tools spring up aiming to assist librarians to manage ebooks in an efficient and streamlined fashion. Proprietary vendors are acquiring new technologies and products to integrate into their existing product line. Some community developed open source systems and tools become the rising stars due to the economic and budget pressures. Specific local needs result home-grown tools. Nevertheless, Librarians often find themselves get frustrated with the variety of choices presented in front of them, realizing that there is not a single magic bullet that can solve all their problems. Creative and critical thinking has become the norm as libraries seek an optimizing solution to mingle these options. And that is what’s essential to lego play!
This session demonstrates how an interlocking approach is developed that integrates ILS, ERM, open source tools and a locally developed database to manage ebooks. It starts with an examination of the lego building process from a lego workshop that the presenter has recently attended, followed by the analogy between lego building and ebooks management. It provides a quick overview of the mainstream systems that the presenter’s home libraries are using, discussing the pain points within these mainstream systems. It elaborates on how open source tools and local developed tools are brought into the “lego building” process.
Ebooks are dynamic in nature. Paired with creative thinking and problem-solving skills, the interlocking approach allows us to embrace change with the same fun we find in lego play.
Forging New Links: Libraries in the Semantic Web (Gillian Byrne)
This document discusses the potential benefits of applying Semantic Web and Linked Data technologies to libraries. It describes how structured data, controlled vocabularies, and linking of data across systems can help address current issues with library discovery like siloed data and lack of connections between related resources. The document outlines key Semantic Web concepts like RDF, ontologies, and reasoning and provides examples of how libraries can publish and interconnect their metadata as Linked Open Data to enhance discovery and personalization for users. However, it also notes obstacles to library adoption of these approaches like competing vocabularies, issues with identity, trust, preservation and licensing.
The document provides an overview of current developments and future trends relating to the library website and online services at University College Dublin Library. It discusses statistics on usage of the current website, planned enhancements including a new institutional repository, moving some content to subject guides and using new platforms like LibGuides. It notes that the website will need to be restructured and packaged differently across various hosted platforms going forward to better meet user needs in an evolving information environment. A consultancy will help develop a new vision and roadmap for online library services over the next 2-3 years.
This presentation was delivered by Carolyn Hansen of the University of Cincinnati during the NISO Virtual Conference, BIBFRAME & Real World Applications of Linked Bibliographic Data, held on June 15, 2016.
This document summarizes a CrossRef workshop held in South Africa in September 2015. It discusses managing CrossRef DOIs and metadata, including depositing DOIs and metadata, updating changes, and handling content moves between publishers. It also covers using the same DOI for content in multiple languages or on multiple sites through CrossRef's multiple resolution feature.
Level Up Web: Modern Web Development and Management Practices for Libraries (Nina McHale)
This document discusses content strategy, workflow, and governance for websites. It defines content strategy as planning for the creation, publication, and governance of useful, usable content. It emphasizes the importance of defining roles, responsibilities, and processes for developing, maintaining, and updating content. Examples are provided for defining different types of content and creating workflow matrices. The importance of governance structures, policies, and guidelines is discussed to ensure consistency across websites. Overall, the document provides guidance on taking a strategic approach to managing website content through definition, collaboration, and oversight.
Comprehensive Database Assessment with a Small Staff (Courtney F)
This document summarizes Courtney Fuson's process for conducting a comprehensive database assessment with a small library staff at Belmont University. It outlines Fuson's initial timeline which had to be adjusted due to changes at the library and availability of staff. It also describes Fuson's literature review on best practices, development of assessment criteria, creation of a team to gather usage data, and initial scoring results. The next steps will be to complete data collection, determine weighting of criteria, create a core list of resources, and discuss cancellation decisions with faculty. Key lessons learned were that a master list of resources is still needed and gathering complete information on all databases takes significant time.
This document discusses MINES for Libraries, a methodology developed by the Association of Research Libraries (ARL) to assess usage of electronic resources. MINES uses web surveys to randomly sample users and collect data on demographics, location, and purpose of use without being obtrusive. It addresses questions about how sponsored researchers, instructors, students, and other users access electronic resources. Over 150,000 usage instances have been surveyed using MINES. The presentation outlines the history and framework of MINES, how it differs from other usage metrics, and challenges in assessing digital resources as users, devices, and expectations change.
Beyond MARC: BIBFRAME and the Future of Bibliographic Data (Emily Nimsakont)
The Bibliographic Framework Initiative, or BIBFRAME, is intended to provide a replacement to the MARC format as an encoding standard for library catalogs. Its aim is to move library data into a Linked Data format, allowing it to interact with other data on the Web. In this session, Emily Nimsakont, the NLC’s Cataloging Librarian, will cover the basics of BIBFRAME, describe what it can provide for users of library catalogs that MARC can’t, and outline what librarians should be aware of regarding this change in the cataloging landscape.
The document discusses three options for libraries to adopt linked data: BIBFRAME 2.0, Schema.org, and Linky MARC. BIBFRAME 2.0 is a library standard that allows standardized RDF interchange but is not recognized outside libraries. Schema.org is the de facto web standard that improves discovery on the web but lacks detail for library needs. Linky MARC adds URIs to MARC without changing its format. The document evaluates the pros and cons of each and who may want to adopt each standard.
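The "Linky MARC" option, adding URIs to MARC without changing the format, can be sketched in a few lines. This is a minimal illustration, not a real MARC library: the dict layout, helper name, and authority URI are all placeholders invented for the example.

```python
# Sketch of the "Linky MARC" idea: enrich an existing MARC heading with a
# linked-data URI in subfield $0, leaving the rest of the record untouched.
# The dict layout stands in for a real MARC structure; the URI is a placeholder.

def add_authority_uri(field, uri):
    """Attach a $0 (authority URI) subfield if the field lacks one."""
    field["subfields"].setdefault("0", uri)
    return field

record = {
    # Simplified 100 field: main entry, personal name.
    "100": {"subfields": {"a": "Tennant, Roy."}},
}

add_authority_uri(record["100"], "https://example.org/authorities/tennant-roy")
print(record["100"]["subfields"]["0"])
```

The appeal of this route, as the summary notes, is that existing MARC workflows keep working while each heading gains a machine-actionable identifier.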
The document summarizes a panel discussion on the future of libraries held at SUNY Potsdam College. The 6 panelists discussed how user behaviors and technologies are changing libraries. Users now expect instant access to information anywhere through mobile devices. Libraries are providing more digital resources and collaborative spaces while print collections decline. New models like purchase-on-demand and e-books are shaping library collections. Discovery tools aim to improve search across resources but challenges remain regarding evaluation, serendipity and supporting different user levels.
The document summarizes the development and assessment of mobile apps by the libraries at Cedarville University and Ohio Northern University. It provides details on:
- The student population and academic programs at each university.
- How each library developed their mobile app, including deciding on features through a user survey, promoting the app, and using an outside vendor (Boopsie) to develop it.
- Usage statistics and most/least popular features for each app, which are informing revisions.
- Usability testing conducted at ONU to qualitatively assess students' experience with the app and identify areas for improvement.
@WebSciDL PhD Student Project Reviews, August 5 & 6, 2015 (Michael Nelson)
Herbert Van de Sompel (LANL) visited the Web Science & Digital Libraries Group @ ODU on August 5–7, 2015. The seven PhD students who were in town at that time reviewed their current status for him.
This document discusses transforming open government data from Romania into linked open data. It begins with background on linked data and open data initiatives. Then it describes efforts to model, transform, link, and publish Romanian open data as linked open data. This includes identifying common vocabularies and properties, creating URIs, linking to external datasets like DBPedia, and publishing the linked data for use in applications via a SPARQL endpoint. Overall the goal is to make this data more accessible and interoperable through semantic web standards.
Delivered by Peter Burnhill, Director of EDINA, at the PRELIDA Consolidation and Dissemination workshop on 17/18 October 2014 (http://prelida.eu/consolidation-workshop).
Summary: The web changes over time, and significant reference rot inevitably occurs. Web archiving delivers only a 50% chance of success. So in addition to the original URI, the link should be augmented with temporal context to increase robustness.
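The augmentation described above can be sketched as decorating a hyperlink with an archived snapshot and the date of reference, in the spirit of the Robust Links convention. The attribute names below follow that convention as I understand it; the URLs are placeholders.

```python
# Sketch: augment a plain hyperlink with temporal context so the page can be
# recovered as it existed when cited, mitigating reference rot.
# Attribute names follow the Robust Links convention; URLs are placeholders.

def robust_link(href, version_url, version_date, text):
    """Build an <a> tag carrying the original URI plus an archived snapshot."""
    return (
        f'<a href="{href}" '
        f'data-versionurl="{version_url}" '
        f'data-versiondate="{version_date}">{text}</a>'
    )

link = robust_link(
    "http://example.org/article",
    "https://web.archive.org/web/20141017000000/http://example.org/article",
    "2014-10-17",
    "the article",
)
print(link)
```

If the original URI later rots, a reader (or tool) can still follow the snapshot URL or query a web archive for the stated date.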
Linked Data and the Future of Libraries (Regan Harper)
The document discusses a presentation given by OCLC and LYRASIS on linked data and what it means for the future of libraries. It provides an overview of linked data concepts, including defining linked data as using the web to connect related data and lower barriers to linking data. It outlines some of the key principles of linked data, and discusses how linked data can benefit libraries by making data more reusable, efficient to maintain and discoverable. It also notes some of the challenges libraries may face in changing workflows and maintaining information provenance with linked data.
1) BIBFRAME is a new bibliographic framework developed by the Library of Congress to replace MARC standards and better integrate library data with the semantic web.
2) BIBFRAME uses linked data principles and RDF to make library data more extensible and interconnected on the web.
3) The main benefits of BIBFRAME are that it allows library data to be more discoverable online, integrates better with web standards, and is more flexible and reusable than MARC records. However, transforming existing data and training catalogers will be challenges in adopting BIBFRAME.
Webscale Discovery with the End User in Mind (Debra Kolah)
The document summarizes a presentation given at the 2012 SLA Annual Conference in Chicago. It discusses the history of discovery tools in libraries, from cataloging to federated search to web-scale discovery. It provides biographies of three speakers: Harry Kaplanian of EBSCO Publishing, Debra Kolah of Rice University, and Rafal Kasprowski of Rice University. The presentation covered topics like the development of discovery services, lessons learned from a discovery tool selection process at Rice University, and best practices for customizing and implementing discovery systems.
OCLC Research Update at ALA Chicago, June 26, 2017 (OCLC)
Rachel Frick, Executive Director of the OCLC Research Library Partnership, reviews some of the broad agenda items and recent publications related to the work of OCLC Research. Rachel is then joined for two presentations on specific research topics. First, Sharon Streams (OCLC Director of WebJunction) and Monika Sengul-Jones (OCLC Wikipedian-in-Residence) present on “Public Libraries and Wikipedia.” Next, Kenning Arlitsch (Dean, Montana State University Library) and Jeff Mixter (OCLC Senior Software Engineer) share their findings on “Accurate Institutional Repository Download Measurement using RAMP, the Repository Analytics and Metrics Portal.”
Walk Before You Run: Prerequisites to Linked Data (Kenning Arlitsch)
Presentation on April 23, 2015 at the Amigos Library Services online conference: "Linked Data & RDF: New Frontiers in Metadata and Access"
Covers traditional SEO and Semantic Web Optimization, including Semantic Web Identity and a Schema.org project at Montana State University Library.
The Impact of Linked Data in Digital Curation and Application to the Catalogu... (Hong (Jenny) Jing)
(Full version of the presentation: https://www.youtube.com/watch?v=WS9Svbmp-YY)
Information organization and systems in libraries are in a state of significant flux. In systems there is a shift to XML and RDF-based schemas and ontologies while resource description content standards have changed from AACR2 to RDA. A move from MARC to BIBFRAME and other linked data applications is on the horizon. Linked data and the semantic web have become buzzwords, but what is linked data and why it is important for librarians? How can we use it in digital curation? What can libraries do now to “prepare” for this change in their current practice?
In light of these questions, the panel presentation will discuss two projects. First, there will be coverage of a sample project using the Fedora-based open source framework Islandora to demonstrate the concepts of connecting related data across the Web with URIs, HTTP and RDF. The second half of the presentation will describe how a consortium has taken a holistic approach to writing an RDA workflow to help front-line cataloguers develop a wider perspective when it comes to resource description (creating more structured, future-compatible metadata). Up for discussion: the current state and future possibilities of library metadata with a focus on the implications of linked data.
The document discusses the concept of linked data and its implications for libraries. It provides an overview of linked data, describing its key principles and how it represents data using RDF. The document then discusses how linked data can connect the world's libraries by linking their metadata and other resources on the web. It notes challenges for libraries in transitioning to linked data but also opportunities to better integrate library data on the web and reassert their role as discoverable sources of information for all materials.
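The linked-data model these summaries keep returning to reduces every statement to a (subject, predicate, object) triple, and "connecting the world's libraries" is pattern matching across those triples. A minimal self-contained sketch, using made-up `ex:` names rather than any real vocabulary:

```python
# Minimal illustration of RDF-style triples: every statement is a
# (subject, predicate, object) tuple; None in a pattern acts as a wildcard.
# The ex: names are invented for the example, not a real vocabulary.

triples = {
    ("ex:book1", "ex:title", "Moby-Dick"),
    ("ex:book1", "ex:creator", "ex:melville"),
    ("ex:melville", "ex:name", "Herman Melville"),
}

def match(s=None, p=None, o=None):
    """Return all triples matching the pattern."""
    return [
        t for t in triples
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    ]

# Follow the link from the book to its creator, then look up the name.
creator = match("ex:book1", "ex:creator")[0][2]
print(match(creator, "ex:name")[0][2])
```

In real linked data the `ex:` tokens would be dereferenceable HTTP URIs, so the second lookup could cross institutional boundaries instead of staying in one in-memory set.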
The document discusses the evolution of subject guides at the University of Bolton library from 2008 to 2017. It describes how the guides have become more comprehensive over time, providing detailed descriptions of databases and links to additional help resources. Usage statistics show the guides are popular with over 31,000 views of 97 guides. The most viewed guides cover subjects like law, health, and business. The number of questions received about electronic resources has decreased as the guides have improved. Future work includes usability testing and expanding guide content for researchers.
Breaking Up with MARC: Are We There Yet? 2017 MCLS Linked Data Summit (03.16.2017)
1. Breaking Up with MARC
Are We There Yet?
MCLS Linked Data Summit
March 16, 2017
Andrea Kappler
Cataloging Manager
Evansville Vanderburgh Public Library
2. Disclaimers
• I’m not a BIBFRAME expert
• I’m not a webmaster or CS major
• I’m older than MARC format
• I didn’t sleep at a Holiday Inn Express
3. Why Dump MARC Format?
• It’s old and inflexible
• It’s designed for data strings, not things
• It’s anti-social
• It’s not the only game in town
• It’s a proprietary format
4. Invisible Libraries
• Data silos
• Search engine optimization (SEO)
• Library website + Online catalog ≠ exposing library collections on the Web
5. Invisible Libraries (cont.)
• User search behavior (OCLC, “Perceptions of Libraries…” 2005)
– 84% of information searches begin on the Internet
– 1% of information searches begin on a library website
• In 2010, 0% of users began their searches on a library website (“Perceptions” 2010)
• In 2012, Google claimed it processed 1.2 trillion searches
• In 2015, mobile Google searches overtook desktop searches for the first time ever (smartphones only)
6. Linked Data
• Structured data & shared vocabulary (Schema.org)
• Shows relationships between data elements (people, places & things)
• Google Knowledge Graphs, location-specific information (e.g., movie times for your neighborhood, restaurants near you, targeted advertisements)
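The "structured data & shared vocabulary" on this slide is typically published by embedding Schema.org terms in a web page as JSON-LD, which is what crawlers read to build Knowledge Graph style results. A minimal sketch with invented values:

```python
import json

# Sketch of Schema.org markup (as JSON-LD) that a library page might embed so
# web crawlers understand a catalog record. All values here are illustrative.
book = {
    "@context": "https://schema.org",
    "@type": "Book",
    "name": "Moby-Dick",
    "author": {"@type": "Person", "name": "Herman Melville"},
}

# Embedded in HTML as a script tag of type application/ld+json.
markup = f'<script type="application/ld+json">{json.dumps(book)}</script>'
print(markup)
```

Because `Book`, `Person`, `name`, and `author` come from the shared Schema.org vocabulary, a search engine needs no knowledge of the library's internal catalog format to interpret the record.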
7. What is BIBFRAME?
• New bibliographic initiative
• Standardized biblio-centric vocabulary
• Flexible and extensible
8. What is BIBFRAME? (cont.)
• Utilizes Linked Data
– URIs vs. URLs
• Emphasizes relationships
• MARC Format replacement
9. BIBFRAME 2.0 Model
Source: United States Library of Congress. Overview of the BIBFRAME 2.0 Model. https://www.loc.gov/bibframe/docs/bibframe2-model.html. Accessed February 9, 2017.
10. BIBFRAME for Civilians
Cans of cat food analogy:
• Cat food = Bibliographic & authority data
• Sealed metal cans = MARC format
• Cans in your cupboard = Traditional library catalog
• Feed the neighborhood cat(s) = Get your library’s information out to patrons using the Web
12. BIBFRAME for Civilians (cont.)
• Open the cans = MARC format transformed into BIBFRAME/Linked Data (Zepheira)
• Put the cat food on a plate = LD hosted on servers (Zepheira)
• Cats smell the food & come running = Web crawlers find & index our Linked Data
• Cats (and library directors) are happy!
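The "open the cans" step can be pictured as transforming a flat MARC record into BIBFRAME-style Work and Instance resources identified by URIs. The sketch below is drastically simplified: the field mapping, property names, URIs, and values are illustrative only, and real conversion tooling (such as Zepheira's or the Library of Congress's) does far more.

```python
# Rough sketch of the MARC-to-linked-data transformation: a flat record
# becomes Work and Instance resources with URIs, echoing BIBFRAME's
# Work/Instance split. Field mapping, property names, and URIs are
# illustrative only; real converters do far more.

def marc_to_bibframe(marc, base="https://example.org/"):
    work, inst = base + "work/1", base + "instance/1"
    return [
        (work, "rdf:type", "bf:Work"),
        (work, "bf:title", marc["245a"]),              # title statement
        (inst, "rdf:type", "bf:Instance"),
        (inst, "bf:instanceOf", work),                 # links Instance to Work
        (inst, "bf:provisionActivity", marc["260b"]),  # publisher
        (inst, "bf:identifiedBy", marc["020a"]),       # ISBN
    ]

marc = {"245a": "Example title", "260b": "Example Press", "020a": "0000000000"}
for triple in marc_to_bibframe(marc):
    print(triple)
```

The point of the split is that once the record is triples with URIs, web crawlers can index it and other datasets can link to the Work, which is exactly the "cats come running" step of the analogy.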
26. A Direct Correlation?

Title | Format | Date Out | Branch | Date In | Branch
Frommer’s Kauai | Book | 1/20/2017 | Central Library | 2/6/2017 | North Park
The lost girls / Heather Young | Book | 1/27/2017 | North Park | 2/10/2017 | North Park
The pharaoh’s secret / Marissa Moss | Book | 1/7/2017 | McSelf Check | 1/25/2017 | McCollough
Human impact / Carole Garbunny Vogel | Book | 2/13/2017 | Internet (renewal) | 3/6/2017 | Red Bank
The readers of Broken Wheel recommend / Katarina Bivald | E-book | 1/29/2017 | OverDrive | 2/19/2017 | OverDrive

Between 1/1/17 and 3/1/17, 18 titles were referred from link.evpl.org to encore.evpl.org. These five were checked out (28%). Did users find them on the Web? Is this evidence of Linked Data working for us?
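The 28% on the slide is just the checkout rate among referred titles, reproduced here from the stated 5-of-18 figures:

```python
# Reproduce the slide's conversion figure: 5 of the 18 titles referred from
# link.evpl.org to encore.evpl.org were checked out.
referred, checked_out = 18, 5
rate = checked_out / referred * 100
print(f"{rate:.0f}%")  # 28%
```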
27. Circulation Statistics

Year | Total Circulation | Difference | % Change
2015 | 2,313,259 | N/A | N/A
*2016 | 2,314,534 | +1,275 | +.05%

*First year of Linked Data with Zepheira

Should we call them “Circumstantial” Statistics?
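The Difference and % Change columns of these tables can be reproduced with a small helper; +1,275 on a base of 2,313,259 works out to roughly +0.055%, which the slide displays as +.05%:

```python
# Helper reproducing the Difference and % Change columns of the tables.
def year_over_year(prev, curr):
    diff = curr - prev
    return diff, diff / prev * 100

diff, pct = year_over_year(2_313_259, 2_314_534)
print(f"{diff:+}", f"{pct:+.3f}%")  # +1275 +0.055%
```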
28. Patron Registration Statistics

Patron Category | 2015 Registrations | *2016 Registrations | Difference | % Change
Adult (18+) | 51,398 | 48,503 | -2,895 | -5.6%
YA (ages 15-17) | 6,026 | 5,340 | -686 | -11.4%
Juvenile (ages 0-14) | 6,096 | 5,444 | -652 | -10.7%
New Borrower | 3,869 | 3,513 | -356 | -9.2%
ILL | 2,495 | 1,247 | -1,248 | -50%
Online Registrant | 79 | 120 | +41 | +34.2%

*First year of Linked Data with Zepheira
29. Foot Traffic

Location Name | 2015 | *2016 | Difference | % Change
Central Library | 787,561 | **500,861 | -286,700 | -36.4%
East | 28,694 | 25,690 | -3,004 | -10.5%
McCollough | 196,812 | 194,615 | -2,197 | -1.1%
North Park | 219,073 | 140,980 | -78,093 | -35.6%
Oaklyn | 177,927 | 122,931 | -54,996 | -30.9%
Red Bank | 120,816 | 119,483 | -1,333 | -1.1%
Stringtown | 37,388 | 35,041 | -2,347 | -6.3%
West | 32,076 | 30,052 | -2,024 | -6.3%

*First year of Linked Data with Zepheira
**Major construction project all year long, blocking access to the main entrance, meeting rooms and parking lot at Central Library
30. ILL Lending

Month/Year | # of ILL Requests
February 2015 | 449
*February 2016 | 558
February 2017 | 341

*First year of Linked Data with Zepheira
Data compiled from OCLC Statistics (WorldShare Interlibrary Loan) on 3/13/17
31. Have We Fed the Cats?
• Jury is still out
• Market Linked Data
• Time will tell
32. What Hasn’t Changed @EVPL
• Still using MARC format
• Still doing authority control
• Still using same ILS software
• No Linked Data in our catalog
33. Moving Forward
• Continue with Zepheira
• Watch BIBFRAME development
• Communicate with ILS vendor
• Evaluate Linked Data’s impact
34. “If libraries cling to outdated standards, they
will find it increasingly difficult to serve their
clients as they expect and deserve.”
– Roy Tennant, “MARC Must Die”, 2002
35. Resources
• “MARC Must Die” – Roy Tennant, Library Journal, Oct. 15, 2002
– http://lj.libraryjournal.com/2002/10/ljarchives/marc-must-die/#_
• “OCLC Works Toward Linked Data Environment” – Matt Enis, Library Journal, Feb. 17, 2015
– http://lj.libraryjournal.com/2015/02/technology/oclc-works-toward-linked-data-environment-ala-midwinter-2015/
• “Ending the Invisible Library” – Matt Enis, Library Journal, Feb. 24, 2015
– http://lj.libraryjournal.com/2015/02/technology/ending-the-invisible-library-linked-data/
• Googlebot (explains how Google’s web crawling bot works)
– https://support.google.com/webmasters/answer/182072?hl=en
• “Google Launches Knowledge Graph to Provide Answers, Not Just Links”
– http://searchengineland.com/google-launches-knowledge-graph-121585
• “Google Now Handles at Least 2 Trillion Searches Per Year” – Danny Sullivan, May 24, 2016
– http://searchengineland.com/google-now-handles-2-999-trillion-searches-per-year-250247
• “It’s Official: Google Says More Searches Now On Mobile Than On Desktop” – Greg Sterling, Search Engine Land, May 5, 2015
– http://searchengineland.com/its-official-google-says-more-searches-now-on-mobile-than-on-desktop-220369
36. Resources
• Linked Data
– https://en.wikipedia.org/wiki/Linked_data
• Schema.org
– http://schema.org/
• RDF (Resource Description Framework)
– https://en.wikipedia.org/wiki/Resource_Description_Framework
• “The ILS and Linked Data: A White Paper” – Georgia Fujikawa, Innovative Interfaces (Aug. 2015)
– http://t.co/L4Nw3GFyeB
• BIBFLOW at UC Davis
– https://bibflow.library.ucdavis.edu/
37. Resources
• BIBFRAME home page
– http://bibframe.org/
• LC’s BIBFRAME page
– https://www.loc.gov/bibframe/
• “MARC21 to BIBFRAME: Outcomes, Possibilities, and New
Directions” (New Zealand Library and Information Management
Journal, v. 55, no. 1, Dec. 2014):
– http://www.lianza.org.nz/sites/default/files/NZLIMJ%20Vol%2055%20Issue%201%20Dec%202014%20-%20Rollitt.pdf
• Denver PL’s BIBFRAME pilot and conversion of 840,000 MARC
records to BIBFRAME resources:
– http://copia.posthaven.com/denver-public-library-data-pilot-release
– http://copia.posthaven.com/early-progress-on-denver-public-library-slash-number-visiblelibrary
– https://www.denverlibrary.org/blog/rachel-f/dpl-announces-linked-data-launch
38. OCLC Research Publications
• “Perceptions of Libraries and Information Resources: A Report to the OCLC Membership” (290 pages, 2005)
– https://www.oclc.org/content/dam/oclc/reports/pdfs/Percept_all.pdf
• “Perceptions of Libraries, 2010: Context and Community”
• “The Library in the Life of the User: Engaging with People Where They Live and Learn” (2015)
– http://www.oclc.org/research/publications/2015/oclcresearch-library-in-life-of-user.html
• “Shaping the Library to the Life of the User: Adapting, Empowering, Partnering, Engaging” (2015)
– http://www.oclc.org/research/publications/2015/oclcresearch-shaping-library-to-life-of-user-2015.html
• “The Relationship between BIBFRAME and OCLC’s Linked-Data Model of Bibliographic Description: A Working Paper” – Carol Jean Godby, Senior Research Scientist, OCLC Research (2013)
– http://www.oclc.org/content/dam/research/publications/library/2013/2013-05.pdf
• “Common Ground: Exploring Compatibilities Between the Linked Data Models of the Library of Congress and OCLC” – Carol Jean Godby, OCLC Research; Ray Denenberg, Library of Congress (2015)
– http://www.oclc.org/research/publications/2015/oclcresearch-loc-linked-data-2015.html
• Library Linked Data in the Cloud: OCLC’s Experiments with New Models of Resource Description – Carol Jean Godby, Shenghui Wang, Jeffery K. Mixter (140 pages, 2015)
– http://www.oclc.org/research/publications/2015/oclcresearch-library-linked-data-in-the-cloud.html
I’m still learning about BIBFRAME and many of the other things I’ll be presenting on today. What I will tell you about BIBFRAME will be a high-level overview, with some examples of what it can do and who is using it or experimenting with it. I’ll also discuss some of BIBFRAME’s potential and what libraries need to know about preparing for BIBFRAME.
My educational background is English literature and History. I love to read, I love learning about history and I have an MLS. I’m not a webmaster, I’ve never created a web page, and I don’t have a CS degree.
I’m just a little older than MARC format. I’ve been using it since I began working in libraries in 1989 (28 years).
I hope everyone got a chuckle out of this last one! I stayed at my brother’s house, because he lives just 20 minutes away from here.
There are quite a few reasons why MARC format has become antiquated and libraries should consider dumping it. Refer to Roy Tennant’s 2002 article, “MARC Must Die”
Old & inflexible – MARC was created in the mid-to-late 1960s for the sole purpose of speeding up the creation of card catalog data and enabling the efficient distribution of that data, a job it did well for almost 50 years. But nobody produces cards from it anymore, and even OCLC finally stopped producing cards last year. (OCLC printed cards from 1971 to Oct. 21, 2015, peaking at 141 million cards per year and totaling 1.9 billion cards; the last card printed was for Walt Disney’s Sleeping Beauty.)
Inflexible – Although MARC21 format has been updated to accommodate many elements from the new RDA cataloging guidelines, it’s still not suitable for full use with RDA, nor does it work with the current Web, which is built on individual pieces of data about things. After completing RDA testing, the Library of Congress declared that MARC format does not allow for full implementation of RDA. Communities outside of libraries cannot add their own descriptive elements to MARC format, and updating MARC is done through a committee process within the library community.
Data strings, not things – MARC format was designed to identify entities with strings of data. Those strings make reliable data, but they cannot easily show relationships between entities. The current Web is about small bits of data which can be assembled as needed by computers and joined together to show relationships between those pieces of data. Imagine Lego blocks and the things you can build with them, compared to a bag of string and what you can’t build with that. MARC is machine readable only within library systems and by human eyes; it’s not machine actionable.
Anti-social – It’s not web-friendly or discoverable by web-crawling bots or spiders, and it can’t integrate with or be connected to anything on the Web. Search engines such as Google, Yahoo, and Bing can’t crawl the Internet and index MARC catalog records, so those records aren’t searchable and therefore aren’t findable.
Other formats – Digital repositories use Dublin Core and other metadata standards and protocols (OAI-PMH, EAD), which can’t communicate with MARC. Some libraries (like ours) have their resources described both with MARC format and with Dublin Core (e.g., CONTENTdm), but those are two different systems for storing and sharing bibliographic data. Publishers use ONIX for their information. None of those metadata systems can communicate with one another.
Proprietary format – MARC format is used exclusively by libraries. No other memory institution (museums, archives) or business/organization uses it to organize and store their data. MARC records can only be read by specific software provided by ILS vendors, which is a highly specialized market. MARC format keeps our data in silos which can’t be penetrated by modern search engine crawlers.
Libraries have rich stores of carefully curated bibliographic and authoritative data about all kinds of wonderful resources, from physical to digital collections. But with the exception of our websites, our collections are virtually invisible on the Web. You can’t Google the name of a book or a movie or an e-book and see that your local public library has a copy of it for you to borrow. Why are our libraries invisible?
Data silos – Since MARC format is a proprietary format used only by libraries, we’re essentially locking away our collections in impenetrable data silos. MARC format cannot be discovered by web crawlers and it cannot be found by Internet search software.
SEO – Libraries have traditionally avoided having search engine web crawlers index their pages, mostly due to concerns about patron privacy and later, of hacking. But being so cautious about the way the web used to be has kept us and our collections from being found by search engines. We need to get them optimized for search engines and for discovery on the open web.
Website + online catalog – Just because your library has a website and an online catalog does NOT mean your collections are visible to your users, especially those who start their searches on Google or other search engine pages. It assumes they know to come to your library’s website, find the search box for your online catalog, then start their search there. I’m not saying nobody does this, but there’s plenty of data gathered by OCLC and others saying almost everybody starts their search for information with a search engine, such as Google, Yahoo, or Bing. When they do that, they’re bypassing your online catalog and they’re never seeing that your library may have what they want.
Libraries have rich stores of carefully curated bibliographic and authoritative data about all kinds of wonderful resources, from physical to digital collections. But with the exceptions of our websites and some federated searching on our catalogs that includes our journals, our collections are virtually invisible on the Web. You can’t Google the name of a book or a movie or an e-book and see that your local public library has a copy of it for you to borrow. Why are our libraries invisible?
User search behavior – OCLC’s Perceptions of Libraries report in 2005 revealed some interesting, if not shocking data about user behavior when people start their searches for information.
84% of information searches begin on the Internet – this is from the 2005 Perceptions of Libraries report compiled by OCLC
1% of information searches begin on the library website – this is also from the 2005 Perceptions report
2010 – 0% of users start their searches on library websites, according to OCLC’s Perceptions of Libraries report compiled in 2010
2012 – 1.2 trillion Google searches; Google is probably now processing more than 2 trillion searches per year
2015 – Mobile Google searches surpass desktop searches for first time
How do we make libraries more visible on the Web? We need to understand and start using Linked Data, because that’s what a growing proportion of the web is starting to use.
Structured data & shared vocabulary (Schema.org) – In 2011, Google, Yahoo, and Microsoft’s Bing joined an alliance to create Schema.org, a collection of standard data schemas with a shared vocabulary.
Relationships – Allows web designers to include information in HTML pages that identifies entities and relationships between entities. It also makes the data recognizable by search engine spiders, which can consume and index the data.
Google Knowledge Graphs, etc. – You’re already seeing and using Linked Data on the Web. Google Knowledge Graphs are one example. Other examples are location-specific information, which have context and meaning with regards to your search, your location, and your interests.
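To make the structured-data idea concrete, here is a minimal sketch of schema.org markup for a book, expressed as JSON-LD, the form search engine crawlers consume. The title, author, and URL below are hypothetical placeholders, not records from any real catalog.

```python
import json

# A minimal schema.org description of a book as JSON-LD.
# All values here are illustrative placeholders.
book = {
    "@context": "https://schema.org",
    "@type": "Book",
    "name": "Example Title",
    "author": {"@type": "Person", "name": "Jane Example"},
    "bookFormat": "https://schema.org/EBook",
    "url": "https://example.org/catalog/record/123",
}

# Embedded in a web page inside <script type="application/ld+json">…</script>,
# this is the kind of data a crawler like Googlebot can index and connect.
print(json.dumps(book, indent=2))
```

Because the vocabulary is shared, a crawler that encounters `"@type": "Book"` knows it is looking at a book entity with an author relationship, rather than an undifferentiated string of text.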
New bibliographic initiative – A web-friendly format for library bibliographic information that’s focused on bibliographic data and built on Linked Data principles. It was created to help convert legacy MARC records to Linked Data, and it can also be used to create new bibliographic information as Linked Data. Because it is based on MARC format, BIBFRAME will make use of existing authority records and string-based labels.
Standardized biblio-centric vocabulary – BIBFRAME’s standardized vocabulary is built upon concepts which are unique to the bibliographic world, yet it also includes vocabulary terms from schema.org, which is used by the wider Web community.
Flexible and extensible – Unlike MARC format, BIBFRAME and schema.org are flexible and are built to be changeable, as both knowledge and technology evolve. This will help the diverse worlds of libraries, museums, and archives to develop their own flavors of BIBFRAME, yet still be able to share their bibliographic data on the Web.
Utilizes Linked Data – Linked Data is another web standard; it has been a concept for about 10 years, ever since Tim Berners-Lee coined the term in a design note about the Semantic Web project. Linked Data uses URIs (Uniform Resource Identifiers), stable HTTP identifiers assigned to pieces of information, to denote things. Linked Data allows information to be exposed, shared, and connected to other pieces of data, information, and knowledge. Linked Data is everywhere; it allows us to use the Web like a single global database, performing complex queries over multiple pages and data sources. We’re already using it every day.
URIs vs. URLs – URIs identify real-world objects (people, cars, books, unicorns); URLs are links to documents on the Web. The URIs are in HTTP format, so they can be looked up by computers.
Emphasizes relationships – It emphasizes relationships between creators and their works, between works, and between derivatives of works. It can eventually show relationships between library data and other data on the web, such as your physical location with respect to the user’s physical location (e.g., location-based search results).
MARC format replacement – Yes, you heard me right. It is meant to replace MARC Format- eventually, as we’ll discuss later.
This is also called the Semantic Web, which is built on bits of data that are searchable and can be connected together with stable HTTP links called Uniform Resource Identifiers (URIs). This is called Linked Data and you use it every day, even if you don’t realize it.
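The triple model behind the Semantic Web can be sketched in a few lines of Python. Every URI and property name below is a hypothetical placeholder, not a real vocabulary; the point is how links between URIs let a computer traverse relationships that MARC strings cannot express.

```python
# A toy illustration of Linked Data: statements are subject-predicate-object
# "triples" whose parts are HTTP URIs. All URIs below are hypothetical.
triples = {
    ("http://example.org/work/1", "http://example.org/prop/title", "Example Title"),
    ("http://example.org/work/1", "http://example.org/prop/creator", "http://example.org/person/7"),
    ("http://example.org/person/7", "http://example.org/prop/name", "Jane Example"),
}

def objects(subject, predicate):
    """Return all objects for a given subject and predicate."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# Follow a link from a work to its creator, then to the creator's name --
# the kind of relationship traversal a bag of MARC strings cannot support.
creator = objects("http://example.org/work/1", "http://example.org/prop/creator")[0]
print(objects(creator, "http://example.org/prop/name"))  # ['Jane Example']
```

In real Linked Data the triples live on many different servers, and the URIs themselves are resolvable, so the same traversal works across the whole Web rather than one in-memory set.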
Here’s an illustration of the current model of BIBFRAME.
In case that was too abstract, here’s an analogy I used with our library’s lawyer when explaining what BIBFRAME/Linked Data does for our library’s catalog data. He mistakenly thought that having our catalog on the Web and using the Dewey Decimal Classification meant our collection was findable on the Internet. In my frustration, I came up with the analogy of cans of cat food.
BIBFRAME @ EVPL
MCLS online class – In 2015, I convinced our then-director to let me take the Linked Data for the Practical Practitioner online class from Zepheira at the discounted price offered through MCLS.
LibHub Initiative – This 2014 partnership of 12 public libraries and Zepheira intrigued me because it included so many public libraries, many of which our library looks up to as leaders in the public library world.
ILS vendor – Throughout the summer and fall of 2015, I asked our ILS vendor specific questions about their plans for incorporating BIBFRAME and Linked Data into future versions of their software. I got the runaround from them, and they told me to read up on BIBFRAME before asking any more questions (I told them the reason I had all the questions was that I had taken the class from Zepheira, the same one their staff were taking that same year). Since then, they’ve had two CEO changes, the person working as the liaison to Zepheira quit, and numerous other staff have quit too. They’re offering a service for their customers to extract their data and send it to Zepheira, but without any of the follow-up feedback and ongoing tweaking of the data Zepheira provides to libraries working directly with them.
Early Experimenters Program – At the end of 2015, we joined Zepheira’s Early Experimenters Program and we signed a one-year contract with them. I extracted our MARC records, converted them to MARC21XML, then uploaded them to Zepheira, who converted them to Linked Data and placed that data on their servers for Web search engine spiders to crawl and index.
Here’s what we’ve noticed since starting with Zepheira:
Older vs. newer titles – Older titles are more easily found, as long as searches contain enough of the title and maybe even the subtitle (e.g., not just one-word searches) and as long as they’re available in one format. Newer titles are more hit and miss, because I update our catalog on a monthly basis (I’m now trying to update it every two weeks). For this reason, new content is not going to show up on the Web as fast as it does in our online catalog. Newer titles are also more likely to have more Linked Data about them already in existence on the Web, so they may be drowned out by authors’ websites, media attention, and other non-book formats related to the title.
High publicity titles – Titles in our collection which have received a high degree of media publicity or attention (local or national) don’t show up at the top of the screen, at least not now. We’re hoping this changes over time, especially as more libraries publish their bibliographic data as BIBFRAME/Linked Data.
Qualifier required – We need to add the words “Evansville Vanderburgh Public Library” or “evpl” as a qualifier to the search string, at least for now. In the case of Orange is the New Black, which was our One Book One Community title in 2015, we might also have to use the word ‘portal’ that is part of the URL with the Library.Link Network. This is due to the fact the title has had so much publicity, especially in our own community. The hope is that someday, the need to use these qualifiers will be taken care of by the geo-location data we’ve given Zepheira, which will combine our bibliographic data with the user’s desktop or mobile IP address.
Not all formats represented together – We’ve noticed that although we may have a title in multiple different formats (e.g., book, e-book, audiobook, e-audiobook), those formats are not all represented together on the search results page on the Web. This is true of the example for the title Predictably Irrational, which we have in 4 separate formats in our catalog, yet just two of them appear on the first page of the search results you’ll see on the next slide.
There are two results for this title searched with the qualifier ‘evpl’, although they don’t represent the totality of our holdings for all formats we own or for which we provide access, such as e-books and e-audiobooks. These two examples are for an e-audiobook (1st result) and an e-book (2nd result).
When you click on a link from the search results, you arrive at this landing page on the Library.Link Network created by Zepheira. It isn’t meant to replace our catalog, but it does let the user know they’ve found a library resource on the Web. There’s a blue Get it at the Library button users can click on to access this title in our catalog and check on availability. There’s a link taking them to an online registration screen on our website, where they can begin the process of getting a library card. They can also see all of our library’s branch locations in a Google map, as well as see additional links on our website.
Here’s that title in our Encore catalog, where you can see it’s an e-book from OverDrive, which has a link where you can borrow it from OverDrive, without searching separately for it in that platform.
OverDrive – Hits for the same title in our OverDrive collection appear in the results as well, regardless of whether we use “Evansville” or “evpl”, because both of those words are in our OverDrive name and URL. These results have been there for a while, but we hadn’t been looking for our catalog entries on the Internet, so they weren’t a problem. However, they tend to crowd out the results from our Linked Data project with Zepheira, especially if the title is available as a print book in our collection. Zepheira is working with OverDrive to see what they can do about this.
Books are “MIA” – Not every entry that floats to the top of the results list is the book version of a title, even if the book format is the only version in our catalog. It’s most likely the e-audiobook or e-book version of the title, which isn’t easily discerned from the initial results screen. You can add the words ‘book,’ ‘dvd,’ etc., to the search string, but you risk adding results for non-EVPL resources, like Amazon, to your list. The Linked Data resources on the Web also don’t exactly match what’s in our catalog, possibly because they’re scattered about the Internet, rather than collated neatly together as they are/would be in our catalog.
No timeline for indexing – When I upload a copy of our database to Zepheira, there’s no timeline as to when we can expect those BIBFRAME resources to appear in search engine results. It’s out of our control and out of Zepheira’s control as to when and how the search bots determine which sites to crawl, how often they crawl them, and how many pages to fetch from the site, as well as indexing the data they find.
Other libraries’ links displaying first- For some search results, the results of other libraries in the Library.Link Network are showing up before ours in the search results screen. I have a ticket open with Zepheira on this one, since I just began noticing it this year. Hopefully we can get that fixed soon.
What type of impact has Linked Data made at EVPL? Has it increased…
• Web traffic?
• Circulation?
• New borrowers?
• Foot traffic?
Google Analytics is the only way we can track web traffic into our Encore catalog. Lucky for me, this site was already set up in our GA account and all I have to do is look at referrals from the link.evpl.org site.
Trends, not specific data – GA is good for showing trends, but not for helping you analyze specific data. There’s no way to tell how a searcher found a resource on the web and followed that link to our catalog. Did they do a title search, an author search, a subject or genre search? And without painstaking manual research one title at a time, there’s no way to see if they borrowed or downloaded or streamed it.
User behavior or search bots? – When we first went live with Linked Data in January of 2016, it appeared to really take off with lots of hits to our Encore catalog page. But over time, it turns out these hits were either from search bots or they may have been from the redirect button on Zepheira’s landing page for each title. The redirect button hit the catalog after 10 seconds of the page being called up by a human (or a search bot). Now that the automatic redirect has been removed from that button, hits to our catalog have dropped drastically.
Different interpretations – As with all statistics, they’re open to interpretation, depending on what you want them to say and what you know about the data used to compile them. As we’ll see in the upcoming slides, it’s easy to say Linked Data is either doing great or doing nothing for our library.
A comparison of 2016 and 2015 external traffic coming from link.evpl.org to encore.evpl.org, which is our discovery layer. In 2015, we did not have the link.evpl.org URL set up with Zepheira, so the flat line for that year accurately measured zero referrals coming from that link.
I uploaded our first dataset to Zepheira at the end of December 2015. The dataset was published as Linked Data after the start of the year. Although it’s hard to see from this yearlong overview of the data, our first external traffic referred from link.evpl.org started hitting our Encore server on Monday, January 25, 2016, with 5 sessions. It peaked on Tuesday, February 16, 2016, with 143 sessions. It basically bottomed out when I was out of town for a business trip in early June, from which I returned with major lower back pain and leg numbness. I didn’t return to work for 6 weeks, and I was unable to send updated datasets to Zepheira during that time.
I returned to work part-time in mid-July and I eventually returned to full-time on Aug. 8. Shortly after returning to work, I underwent surgery and was diagnosed with breast cancer on Aug. 30. I underwent yet another surgery in mid-Sept. and I began 6 weeks of radiation therapy in early October, finishing up the week of Thanksgiving. During this entire time, I was battling fatigue and trying to take unused vacation time, so I missed a lot of work through the end of the year. Also in that time frame of July-December, Zepheira made changes to the Take Me to the Library button, removing the automatic redirect to our catalog. But since I didn’t refresh our dataset during that time, I don’t know if that change affected the landing page for our data. There was a gap between May 26 and Dec. 15 when I didn’t refresh our dataset with Zepheira. In short, our external traffic to our Encore server basically flat-lined in that time period, making us pretty much invisible again.
The lesson I learned from this was to do monthly or bi-monthly extracts of our catalog records and send them to Zepheira to refresh our data on their servers and on the Web.
A comparison of Jan. 1-Mar. 1, for the years 2016-2017, both time periods for which we had linked data on the Web, so we had referrals to our Encore catalog coming from link.evpl.org.
In contrast to the previous slide, the orange line is 2016. It looks great, because it reflects a period of time when the landing pages of our resources had an automatic redirect button to our catalog on them. Those redirects automatically looked up the resource in our catalog, even if the user didn’t want to do that and may have quickly exited the screen once getting to it.
The blue line is 2017, which looks really bad in comparison to 2016, until you understand that the automatic redirect from the landing page to our catalog has been removed. The button is still there to take the user to our catalog, but they have to click it if they want to see the resource in our catalog. Maybe this is a slightly more accurate number, even if it’s not as high as it was a year ago.
In the three columns where we saw increases between the two years, it appears the Bounce Rate, the percentage of visitors who leave a page without viewing any others, has improved over a year ago. This could be because the automatic redirect is gone now, and users who click through to our catalog may actually want to go there.
The number of Pages per Session has gone up, perhaps because users are browsing a bit longer in our catalog, once they get there.
And the Average Session Duration has gone up, from less than a minute to more than a minute, again, perhaps because the user is actually there intentionally, rather than accidentally.
Do referrals from link.evpl.org equal circulation?
It’s really time-consuming to look for one-to-one ratios of relevance between referrals from link.evpl.org to circulation of materials, but it can be done.
Physical materials – These are fairly easy to track, because I can look at the item record in Sierra and see if it was checked out in the same timeframe in which the link.evpl.org referral came to encore.evpl.org
E-resources – These are a little harder to pin down, because we don’t track their circulation in our Sierra system. I can search either OverDrive or hoopla digital to see if they’ve circulated in the time period covered by the referral from link.evpl.org.
Websites & databases – We have MARC records for these resources in our catalog, but we have no way to know whether, once users have found the record in Encore, they clicked the link in that catalog record and followed it back out to the online resource.
If you look at Referrals traffic, you can see the individual titles on the link.evpl.org site, as referenced from the /portal/ portion of the URL. It still doesn’t tell me whether a human or a search bot hit the link, nor does it tell me whether the material was borrowed from our library, but at least I know which titles were referred from link.evpl.org to our Encore discovery layer. Those are the titles I can search in Sierra to see if they’re checked out, or I can look for them in OverDrive or hoopla if they’re e-resources from those collections.
Out of 18 searches referred from link.evpl.org to encore.evpl.org between Jan. 1, 2017 and Feb. 28, 2017, these five books were checked out. That’s 28% of searches in that period resulting in checkouts of physical or virtual materials. Is this proof of Linked Data working for us? We don’t really know for sure. I have not compared direct searches of encore.evpl.org to see how many of those resulted in checkouts in the same time period, because there are far more of them to research this way. Plus, Linked Data is so new that it’s understandable it doesn’t yet represent the bulk of searches coming into our Encore catalog.
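The 28% figure is simple arithmetic on the numbers above; as a quick sketch, using the five titles from the slide:

```python
# Referral-to-checkout rate from the slide: 18 titles referred from
# link.evpl.org between 1/1/17 and 3/1/17, of which these five were
# checked out. Dates and formats are omitted here for brevity.
referred = 18
checked_out = [
    "Frommer's Kauai",
    "The lost girls",
    "The pharaoh's secret",
    "Human impact",
    "The readers of Broken Wheel recommend",
]

rate = len(checked_out) / referred
print(f"{rate:.0%}")  # → 28%
```

A baseline comparison would apply the same calculation to checkouts following direct searches of encore.evpl.org, but as noted, that set is far too large to research title by title.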
There was a tiny increase in our total circulation from 2015 to 2016. Was this attributable to Linked Data and more people finding our stuff on the Web, then checking it out? It’s hard to say for sure. Should we call them “Circumstantial” Statistics, instead of Circulation Statistics?
I combined our 52 distinct patron types into these 6 main categories to look for any trends in the numbers. The only real increase was for Online Registrant. Was this increase due to people finding our Linked Data online, then creating new registrations online, so they can borrow our materials? Or is it because more people are online now than ever, so starting an online registration is a convenient process for them?
One thing to note- New Borrower cards were suspended by our director on 2/27/2017, so we won’t have that data to compare going forward.
So it looks like foot traffic didn’t go up as a result of our catalog being converted to Linked Data on the Web. Our Central Library location experienced a drop in foot traffic there, due to a year-long major construction project, which blocked access to the library’s parking lot and main entrance (to build a hotel and parking garage next to the library).
There appears to be a bump in ILL lending requests for us in February of 2016, the first full month of having our Linked Data out on the Web. Was it due to Linked Data? Or just coincidence? That bump up has been erased by the February 2017 lending statistics.
So, have we fed the cats? Are they finding our MARC data as Linked Data on the web?
Jury is still out – It’s still hard to say conclusively that Linked Data is working for us, any more than we can say conclusively that it’s not.
Market Linked Data – We’re still unsure of how to market our Linked Data experiment with Zepheira. Just doing searches on the Web still requires some kind of qualifier, which we’d need to train our staff and patrons to use. However, in the Library.Link Network, we now have the ability to find unique links we can use to publicize our titles on Twitter, Facebook, and blogs, but we don’t have marketing staff to do that for us right now.
Time will tell – We’re still so new at this that we need to have our Linked Data out there on the Web for a much longer time than just the past year or so. Search engine spiders are still discovering and indexing our rich bibliographic and authority data. BIBFRAME still has a long way to go before it’s considered a metadata modeling standard and a replacement for MARC format.
Now that we’ve converted our catalog to BIBFRAME/Linked Data, many day-to-day activities and workflows haven’t changed.
MARC format – We’re still using MARC format, because that’s the format still used by OCLC, EBSCO, OverDrive, Baker & Taylor, Brodart, etc., for bibliographic data. That’s the only format allowed in our ILS and it’s the only format we receive from our various MARC records vendors.
Authority control – We’re still doing authority control as we’ve always done it, verifying headings in each bibliographic record and using Marcive as our main source of authority records.
ILS software – We’re still using Sierra for staff functions, and our public catalogs are available in our Encore and Classic Catalog versions. Like all libraries, we’re at the mercy of our ILS vendor when it comes to moving beyond MARC format and when, if ever, we’ll get to use Linked Data in our catalogs.
No Linked Data in our catalog – We’re not adding any Linked Data to our catalog, because it’s structurally impossible to do so: our ILS has no triple-store database to hold it. Zepheira is housing that data for us, and if we ever decide to cancel our contract with Zepheira, they’ll give us our data back (not that we have anywhere to put it right now!).
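To make the triple-store point concrete: a triple store holds statements as (subject, predicate, object) triples rather than as MARC records. The sketch below is a minimal in-memory illustration only; real triple stores (Fuseki, Blazegraph, etc.) add SPARQL querying, persistence, and indexing, and the URIs here are made-up examples, not identifiers from our converted data.

```python
# Minimal illustration of what a triple store holds. The BIBFRAME namespace
# is real; the example.org URIs and the flattened literal values are
# hypothetical, chosen only to show the (subject, predicate, object) shape.

BF = "http://id.loc.gov/ontologies/bibframe/"
RDFS_LABEL = "http://www.w3.org/2000/01/rdf-schema#label"

triples = [
    ("http://example.org/work/1", BF + "title", "Moby-Dick"),
    ("http://example.org/work/1", BF + "contribution",
     "http://example.org/agent/melville"),
    ("http://example.org/agent/melville", RDFS_LABEL,
     "Melville, Herman, 1819-1891"),
]

def objects(subject, predicate):
    """Return all objects matching a (subject, predicate, ?) pattern."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects("http://example.org/work/1", BF + "title"))  # ['Moby-Dick']
```

The point is that this data shape has no home inside a MARC-only ILS, which is why the converted data has to live on Zepheira's infrastructure rather than in our catalog.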
Continue with Zepheira – For the rest of this year, we’ll continue sending Zepheira copies of our database, converted to MARC21XML first, for them to convert to BIBFRAME and place on the Web for search bots to crawl and index. We’ll continue working with them to fine-tune any part of the process over which they have control.
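For readers unfamiliar with the MARC21XML intermediate format mentioned above: in practice the conversion is done with a tool such as MarcEdit or the pymarc library, not by hand. Purely to show what the target format looks like, here is a standard-library-only sketch that builds one MARCXML record; the leader, control number, and title values are invented for the example.

```python
# Illustrative sketch of the MARC21XML (MARCXML) format, built with only
# the standard library. Real conversions would use MarcEdit or pymarc;
# all field values below are hypothetical.
import xml.etree.ElementTree as ET

MARCXML_NS = "http://www.loc.gov/MARC21/slim"

def record_to_marcxml(leader, fields):
    """Build a MARCXML <record> element.

    `fields` holds (tag, value) pairs for control fields (tags 001-009)
    and (tag, ind1, ind2, [(code, value), ...]) tuples for data fields.
    """
    record = ET.Element(f"{{{MARCXML_NS}}}record")
    ET.SubElement(record, f"{{{MARCXML_NS}}}leader").text = leader
    for field in fields:
        if len(field) == 2:  # control field: tag + value, no indicators
            tag, value = field
            cf = ET.SubElement(record, f"{{{MARCXML_NS}}}controlfield",
                               {"tag": tag})
            cf.text = value
        else:  # data field: tag, indicators, and subfields
            tag, ind1, ind2, subfields = field
            df = ET.SubElement(record, f"{{{MARCXML_NS}}}datafield",
                               {"tag": tag, "ind1": ind1, "ind2": ind2})
            for code, value in subfields:
                sf = ET.SubElement(df, f"{{{MARCXML_NS}}}subfield",
                                   {"code": code})
                sf.text = value
    return record

rec = record_to_marcxml(
    "00000nam a2200000 a 4500",
    [("001", "ocm00000001"),
     ("245", "1", "0", [("a", "Example title /"), ("c", "Jane Author.")])],
)
xml = ET.tostring(rec, encoding="unicode")
```

A batch export would wrap many such `<record>` elements in a `<collection>` root; that wrapped file is the kind of thing we ship to Zepheira for BIBFRAME conversion.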
Watch BIBFRAME development – We’ll continue to watch BIBFRAME’s development as LC, OCLC, Zepheira, and other major players continue to develop BIBFRAME vocabularies.
Communicate with ILS vendor – We’ll continue talking with Innovative’s upper management and telling them how important BIBFRAME is to our library and our visibility on the Web. We know Zepheira’s solution isn’t the final endpoint for BIBFRAME, but if ILS vendors don’t invest in developing a software architecture that can handle the triple-store database BIBFRAME requires, it very well may be the best solution available. I’m attending the Innovative Users Group meeting April 2-5, 2017, and there are several BIBFRAME/Linked Data sessions scheduled.
Evaluate Linked Data’s impact – I’ll continue working with Google Analytics to see what information it gives me. I’ll also continue looking at things like patron registrations, ILL borrowing requests, and foot traffic to our buildings, where we can measure those things.
I want to leave you with this one last quote to ponder.