Kevin Balster presented on current and future cataloging practices, focusing on the transition from MARC to BIBFRAME. Some key points:
- MARC has limitations like being library-specific and not supporting linked data. BIBFRAME is being developed to replace MARC using linked data standards like RDF.
- BIBFRAME models bibliographic data using RDF triples and allows linking to other datasets on the semantic web. It also better supports serials modeling than MARC.
- The transition will require mapping legacy MARC data to BIBFRAME and new cataloging workflows. Standards and tools are still in development and the change will not happen overnight.
Seminar presentation; the entire work was conducted at Technical University Kaiserslautern. The seminar work involved understanding Semantic Web technology along with RDF and its querying mechanisms. It also involved looking at technologies used for data storage, data management, and data querying.
NISO Webinar:
Experimenting with BIBFRAME: Reports from Early Adopters
About the Webinar
In May 2011, the Library of Congress officially launched a new modeling initiative, the Bibliographic Framework Initiative, as a linked data alternative to MARC. The Library then announced the proposed model, called BIBFRAME, in November 2012. Since then, the library world has been moving from mainly theorizing about the BIBFRAME model to practical experimentation and testing. This experimentation is iterative and continues to shape the model so that it becomes stable and broadly acceptable enough for adoption.
In this webinar, several institutions will share their progress in experimenting with BIBFRAME within their library system. They will discuss the existing, developing, and planned projects happening at their institutions. Challenges and opportunities in exploring and implementing BIBFRAME in their institutions will be discussed as well.
Agenda
Introduction
Todd Carpenter, Executive Director, NISO
Experimental Mode: The National Library of Medicine and experiences with BIBFRAME
Nancy Fallgren, Metadata Specialist Librarian, National Library of Medicine, National Institutes of Health, US Department of Health and Human Services (DHHS)
Exploring BIBFRAME at a Small Academic Library
Jeremy Nelson, Metadata and Systems Librarian, Colorado College
Working with BIBFRAME for discovery and production: Linked data for Libraries/Linked Data for Production
Nancy Lorimer, Head, Metadata Dept, Stanford University Libraries
This presentation was given by Melanie Wacker of Columbia University during the NISO Virtual Conference, BIBFRAME and Real World Applications of Linked Bibliographic Data, held on June 15, 2016
This presentation by Shana McDanold of Georgetown University was presented during the NISO Virtual Conference, BIBFRAME & Real World Applications of Linked Bibliographic Data, held on June 15, 2016
This presentation was given by Michael Lauruhn of Elsevier Labs during the NISO Virtual Conference, BIBFRAME & Real World Applications of Linked Bibliographic Data, held on June 15, 2016.
This document summarizes a webinar on deploying Resource Description and Access (RDA) cataloging and expressing it as linked data. The webinar speaker, Alan Danskin from the British Library, discussed RDA as a cataloging standard that provides guidelines for describing resources to support discovery. He explained how RDA works with linked data by using entities, relationships, and attributes expressed as URIs. Challenges in applying RDA as linked data include the complexity of the FRBR model and publishing RDA vocabularies as linked open data. Application profiles help apply RDA by defining the metadata elements, policies, and guidelines for a specific domain or community.
The Library of Congress engaged in linked data efforts starting in 2009 and created its Linked Data Service. It contracted with Zepheira to develop the initial BIBFRAME model and vocabulary 1.0 with input from early experimenters. The Library of Congress conducted a pilot of BIBFRAME from October 2015 to March 2016 with 40 staff cataloging in both MARC and BIBFRAME. The pilot helped develop BIBFRAME and identified areas for improvement. The Library of Congress will continue to refine BIBFRAME 2.0 and conduct additional testing.
The document discusses the Research and Education Space (RES) project, which aims to create a web-based platform called Acropolis that aggregates and interconnects cultural heritage resources from various institutions like the British Library, British Museum, BBC archive, and others. It describes Acropolis' technical approach of using crawlers, indexes, and APIs to make these resources searchable. It also outlines challenges around standardizing heterogeneous metadata, reliably linking entities, and usability issues regarding tools, licensing, and stakeholder engagement. The author is looking to provide guidance on publishing cultural data as linked open data to help address these challenges.
Future directions for RDA / Gordon Dunsire (CILIP MDG)
The document discusses future directions for the Resource Description and Access (RDA) standard. It outlines strategies for expanding RDA communities internationally and in cultural heritage domains. It also describes plans to consolidate FRBR models, develop related standards, and reorganize the RDA Toolkit. Additionally, it discusses developing a new RDA reference data infrastructure to support multiple services and applying the Linked Data approach. The impact of the new FRBR-LRM conceptual model on RDA is also addressed.
This presentation was given by Tim Thompson of Princeton University during the NISO Virtual Conference, BIBFRAME & Real World Applications for Linked Bibliographic Data, held on June 15, 2016.
This document discusses approaches to developing globally interoperable metadata standards like RDA. It describes the failure of top-down approaches and issues with both top-down and bottom-up mapping strategies. Bottom-up risks multiple overlapping element sets while top-down may not fully represent local practices. The author advocates balancing global needs with flexibility for local implementation.
This presentation was delivered by Carolyn Hansen of the University of Cincinnati during the NISO VIrtual Conference, BIBFRAME & Real World Applications of Linked Bibliographic Data, held on June 15, 2016
The document discusses the requirements for representing provenance in RDF. It outlines the work of the W3C Provenance Incubator Group to understand provenance needs. Key requirements include: (1) enabling identity management of RDF statements and resources, (2) representing how resources evolve over time, and (3) distinguishing between asserted and inferred provenance information. The document also examines existing techniques for modeling provenance and calls for standardized vocabularies to improve interoperability.
Linked Data for Libraries: Experiments between Cornell, Harvard and Stanford / Simeon Warner
The Linked Data for Libraries (LD4L) project aims to connect bibliographic, person, and usage data from Cornell, Harvard, and Stanford using linked open data. The project is developing an extensible LD4L ontology based on existing standards like BIBFRAME and VIVO. It is working to transform over 30 million bibliographic records into linked data and demonstrate cross-institutional search. The goals are to provide richer discovery and context for scholarly resources by connecting previously isolated library data.
Open data is a crucial prerequisite for inventing and disseminating the innovative practices needed for agricultural development. To be usable, data must not just be open in principle—i.e., covered by licenses that allow re-use. Data must also be published in a technical form that allows it to be integrated into a wide range of applications. The webinar will be of interest to any institution seeking ways to publish and curate data in the Linked Data cloud.
This webinar describes the technical solutions adopted by a widely diverse global network of agricultural research institutes for publishing research results. The talk focuses on AGRIS, a central and widely-used resource linking agricultural datasets for easy consumption, and AgriDrupal, an adaptation of the popular, open-source content management system Drupal optimized for producing and consuming linked datasets.
Agricultural research institutes in developing countries share many of the constraints faced by libraries and other documentation centers, and not just in developing countries: institutions are expected to expose their information on the Web in a re-usable form with shoestring budgets and with technical staff working in local languages and continually lured by higher-paying work in the private sector. Technical solutions must be easy to adopt and freely available.
This document summarizes a presentation on trends in technical services for cataloging and metadata librarians. It discusses how the role of catalogers is expanding beyond bibliographic description to include tasks like metadata application, data sharing, and standard development. The document also covers transitions in the field, such as moving from AACR2 to RDA rules and the potential role of linked data. Challenges discussed include implementing RDA, training staff, and maintaining shared catalogs as new approaches are developed.
RDA development and implementation overview / Gordon Dunsire (CIGScotland)
Presented at the RDA for Implementers Conference, 27 May 2015 at the National Library of Scotland, Edinburgh. Organised by the Cataloguing & Indexing Group in Scotland
RDF and RDF Schema for Ontology Specification / chenjennan
The document discusses RDF (Resource Description Framework) and RDF Schema for ontology specification on the Semantic Web. It provides an introduction to RDF and how it uses URIs to identify resources and assertions. It then discusses RDF applications for mobile terminals, RDF graph models, RDF/XML syntax, RDF vocabularies and schemas, and the RDF Schema language. It concludes with an overview of how OWL (Web Ontology Language) and OWL-S (Web Service Ontology) build upon RDF Schema to facilitate ontology specification and automation of web services.
A Brief Overview of BIBFRAME, by Angela Kroeger
Short presentation given at the ALCTS CaMMS Forum on Bibframe: Notes From the Field, at ALA Midwinter, February 1, 2015. ABSTRACT: Overview of the current status of BIBFRAME development, including a brief introduction to what BIBFRAME is and what it does, which tools are available or under development, a glimpse of what fully implemented linked data looks like, a closer look at the four core classes of the BIBFRAME model, and a dab of philosophy.
The document provides an overview of the work done at DERI Galway, including developing technologies like SIOC, ActiveRDF, and BrowseRDF to interconnect online communities and enable semantic applications. It also describes JeromeDL, a digital library system that uses semantic metadata and services to allow users to collaboratively browse and share knowledge.
The document discusses leveraging library authority control and controlled vocabularies on the semantic web. It describes converting existing metadata like Library of Congress Subject Headings (LCSH) into semantic web standards like SKOS to make the data accessible and linkable on the web. This would allow libraries to publish and share authority and classification data using common web technologies, enabling new applications and discovery across systems.
This document discusses creating a knowledge graph for Irish history as part of the Beyond 2022 project. It will include digitized records from core partners documenting seven centuries of Irish history. Entities like people, places, and organizations will be extracted from source documents and related in a knowledge graph using semantic web technologies. An ontology was created to provide historical context and meaning to the relationships between entities in Irish history. Tools will be developed to explore and search the knowledge graph to advance historical research.
The document provides statistics on the FAST (Faceted Application of Subject Terminology) thesaurus as of June 2017, including the number of records and types of headings in FAST. It also lists links from FAST to other datasets like Library of Congress Subject Headings and Wikidata. The FAST team is continuing to synchronize and refine processing rules for FAST and developing an import tool. Information is also provided on the FACETVOC-L discussion list focused on faceted controlled vocabularies.
RDA Toolkit Essentials is a presentation about the RDA Toolkit, which is an online product that allows users to interact with cataloging documents and resources including RDA. It summarizes that RDA is a standard for bibliographic description based on FRBR concepts, and that the RDA Toolkit contains RDA instructions, examples, and policy statements from various national libraries. It also outlines how to access free and subscription content, create profiles, navigate the interface, search, and use user-created content in the RDA Toolkit.
Sherif Metadata Talk - London (June 25th 2018) / Getaneh Alemu
This document summarizes the existing challenges and opportunities in the cataloguing and metadata function of Southampton Solent University. It discusses how the university has shifted to primarily electronic resources and moved to enrich metadata through standards like RDA. It also touches on balancing metadata quality with completeness while avoiding duplication through techniques like WEMI and FRBRization. The future of metadata is discussed as being enriched, linked, open and filtered.
Getaneh Alemu (Southampton Solent) - The existing challenges and opportunitie... / sherif user group
This document summarizes the existing challenges and opportunities in the cataloguing and metadata function of Southampton Solent University. It discusses efforts to catalog print and electronic resources using standards like RDA and WebDewey. It also covers the implementation of discovery services like Primo and efforts to meet user needs through continuous metadata enrichment. This includes importing controlled vocabularies, standardizing records, and avoiding duplication through techniques like WEMI and FRBRization. The goal is to provide rich, high quality, and interoperable metadata to improve resource discovery.
Lauri Roine - New directions in bibliographic control - BOBCATSSS 2017
RDA cataloguing code and the upcoming Bibframe cataloguing format aim to improve on current standards by better implementing FRBR concepts and moving to a linked data model. The study found that RDA cataloguing adheres closely to FRBR by distinguishing between recording and presenting data and relationships between entities. RDA also supports different content types and prioritizes user needs. Bibframe would provide benefits like seamless metadata sharing between libraries using linked data. However, both RDA and Bibframe face challenges to widespread adoption from lack of system support and need for further development. The new approaches ultimately depend on compatible cataloguing systems to realize their full potential.
The International Federation of Library Associations and Institutions (IFLA) is responsible for the development and maintenance of International Standard Bibliographic Description (ISBD), UNIMARC, and the "Functional Requirements" family for bibliographic records (FRBR), authority data (FRAD), and subject authority data (FRSAD). ISBD underpins the MARC family of formats used by libraries world-wide for many millions of catalog records, while FRBR is a relatively new model optimized for users and the digital environment. These metadata models, schemas, and content rules are now being expressed in the Resource Description Framework language for use in the Semantic Web.
This webinar provides a general update on the work being undertaken. It describes the development of an Application Profile for ISBD to specify the sequence, repeatability, and mandatory status of its elements. It discusses issues involved in deriving linked data from legacy catalogue records based on monolithic and multi-part schemas following ISBD and FRBR, such as the duplication which arises from copy cataloging and FRBRization. The webinar provides practical examples of deriving high-quality linked data from the vast numbers of records created by libraries, and demonstrates how a shift of focus from records to linked-data triples can provide more efficient and effective user-centered resource discovery services.
Quick intro to RDA for my staff; includes a basic overview of how RDA differs from AACR2, MARC, FRBR, and the Semantic Web. Includes examples. By Robin Fay for UGA Libraries/DBM, georgiawebgurl@gmail.com
Linked data and the future of libraries / Regan Harper
The document discusses a presentation given by OCLC and LYRASIS on linked data and what it means for the future of libraries. It provides an overview of linked data concepts, including defining linked data as using the web to connect related data and lower barriers to linking data. It outlines some of the key principles of linked data, and discusses how linked data can benefit libraries by making data more reusable, efficient to maintain and discoverable. It also notes some of the challenges libraries may face in changing workflows and maintaining information provenance with linked data.
MarcEdit is a free metadata editing suite that supports MARC and other formats. It introduced automated RDA processing in 2013 through its RDA Helper tool. The RDA Helper handles converting 336/337/338 and 344/345/346/347 fields, evaluates 260/264 fields, makes 040 modifications, and processes abbreviations based on regular expressions. It also handles GMD processing and allows customizing RDA workflows through general record automation and task automations. MarcEdit assists with RDA implementation by processing AACR2, hybrid, and early RDA records according to RDA rules.
Cataloging with RDA - Western New York Library Resources Council / Emily Nimsakont
RDA is the new cataloging code that will replace AACR2. It is based on FRBR and FRAD conceptual models which are entity-relationship models that focus on user tasks. RDA differs significantly from AACR2 in its structure, terminology, transcription practices, and categorization of resources using media, carrier, and content types instead of GMDs. Testing of RDA by national libraries began in 2010 with full implementation planned after the testing period. Libraries need to prepare for RDA by learning the new terminology and monitoring developments during the testing process.
First Steps in Semantic Data Modelling and Search & Analytics in the Cloud / Ontotext
This webinar will break the roadblocks that prevent many from reaping the benefits of heavyweight Semantic Technology in small scale projects. We will show you how to build Semantic Search & Analytics proof of concepts by using managed services in the Cloud.
Robin Fay presented an update on the Bibliographic Framework Initiative (BIBFRAME). The presentation covered the need for BIBFRAME as MARC records have limitations for machine processing. FRBR and RDA were discussed as models that focus on relationships between works, expressions, manifestations and items. XML was presented as a way to encode bibliographic data in a machine-readable format using elements rather than character strings. The semantic web and linked data were discussed as ways to make metadata shareable on the web. BIBFRAME was introduced as a new bibliographic framework to replace MARC that would use RDF to encode bibliographic data.
The document discusses the concepts and implementation of linked data and the semantic web. It describes Cambridge University Library's COMET project which converted bibliographic records from MARC21 format to RDF triples and published them as linked open data with HTTP URIs. The project aimed to release data for open use and gain experience working with semantic web technologies like RDF, SPARQL and triplestores. Key challenges included dealing with IPR issues in MARC21 records and developing tools to transform and link the data.
Selecting the right database type for your knowledge management needs / Synaptica, LLC
This presentation looks at relational vs. graph databases and their advantages and disadvantages in storing semantic data for taxonomies and ontologies.
A very basic overview of RDA, updated. This presentation is appropriate for all library staff including those outside of cataloging, library science students, and others.
The document summarizes a panel discussion on BIBFRAME and linked data. It discusses how BIBFRAME aims to replace MARC with a more network-friendly format, distinguishing works from manifestations. Panelists discussed projects involving linked data and increased collaboration across institutions. Specific projects at Cornell and Columbia were mentioned. Questions were asked about controlled access points, vocabularies, and cataloging's role in the semantic web.
RDA was developed to address limitations in AACR2 for cataloging resources in the digital environment. It is based on FRBR and FRAD conceptual models and aims to improve resource discovery. RDA is more flexible, international, and expanded in scope compared to AACR2. It is developed through international collaboration and is designed to accommodate new formats and user needs on the web.
I presented ALA's Annual Conference in Anaheim (#ALA12) to the AVIAC meeting (URL to ALA Connect). My topic was RDA Toolkit and how it relates to Library System vendors and other software and service developers. I included some background on RDA: Resource Description and Access and RDA Toolkit. I described and demo RDA Toolkit's free MARC based linking service. I invited vendors to read our RDA Toolkit Development blog and to participate in our regular Virtual User Group meetings. Finally I will describe our current plans and seek input from vendors on developing and distributing an RDA - Application Profile as a free part of RDA Toolkit
This document provides an introduction and overview of Resource Description and Access (RDA), the new cataloging standard that replaces Anglo-American Cataloguing Rules (AACR). RDA is designed for the digital age and is based on Functional Requirements for Bibliographic Records (FRBR) and Functional Requirements for Authority Data (FRAD). RDA provides more flexibility and is compatible with current metadata standards and encoding formats like MARC. While RDA has some advantages, there are also ongoing considerations and discussions around its implementation.
This document summarizes the Cambridge Open Metadata project. The project aims to release Cambridge University Library's bibliographic records as open data in various formats like XML, RDF, and JSON. The goals are to drive innovation, provide value for taxpayer money, and promote the library's collections. Key activities include converting records to RDF, adding subject headings from external sources, and determining appropriate open licenses for records from different vendors. The project hopes to make more of the library's data reusable and help non-library developers build new tools and services.
Similar to BIBFRAMEing for Non-BIBFRAMErs: An Introduction to Current and Future Cataloging Practices (20)
Ctrl + Alt + Repeat: Strategies for Regaining Authority Control after a Migra... / NASIG
Speaker: Jamie Carlstone
This presentation is on how to regain authority control in a large research library catalog: first, dealing with a backlog of problems from years without authority control and second, creating a process for ongoing workflows to realistically maintain authority control when new records are added to the collection.
The Serial Cohort: A Confederacy of Catalogers / NASIG
Speaker: Mandy Hurt
In 2018, at a time when our department was shrinking through attrition, the decision was made to further leverage the particular skill sets of a select group of monographic catalogers by training them to also undertake the complex copy cataloging of serials.
This presentation concerns the assumptions underlying how this decision was originally made, the initial plan for how this would be accomplished by CONSER Bridge Training, the eventual formation of the Serials Cohort with a view to creating an iterative process I would design and manage, and the problems, obstacles and time constraints faced and addressed along the way.
Calculating how much your University spends on Open Access and what to do abo... / NASIG
Librarians are working hard to understand how much money their university is spending on open access article processing fees (APCs), and how much of what they subscribe to is available as OA. This information is useful when making subscription decisions, considering Read and Publish agreements, rethinking library open access budgets, and designing Institution-wide OA policies.
This session will talk concretely about how to calculate the impact of Open Access on *your* university. It will provide an overview on how to estimate the amount of money spent across a university on Open Access fees: we will discuss underlying concepts behind calculating OA article-processing fee (APC) spend and give an overview of useful data sources, including:
FlourishOA
Microsoft Academic Graph
PLOS API
Unpaywall Journals
We will also talk about Open Access on the subscription side, including how much of what you subscribe to is available as open access and how you can use that in your subscription decisions and negotiations.
The presenters are the cofounders of Our Research, the nonprofit company behind Unpaywall, the primary source of Open Access data worldwide.
Heather Piwowar, Co-founder, Our Research
Jason Priem, Co-founder, Our Research
Measure Twice and Cut Once: How a Budget Cut Impacted Subscription Renewals f... / NASIG
Speakers: Ilda Cardenas, Keri Prelitz, Greg Yorba
The process of looking at subscriptions with the goal of proactively downsizing revealed that the library’s existing renewal workflows were outdated and in need of regular analysis to identify underused resources. Additionally, this project uncovered shortcomings of analysis that is reliant on usage data, the unexpected ramifications of large-scale subscription cancellations, as well as the need for improved communication within and between the many library departments affected by subscription cancellations.
Analyzing workflows and improving communication across departments NASIG
Presented by Jharina Pascual and Sarah Wallbank.
The presentation provides people with simple techniques for analyzing their local workflow and information-sharing practices, some ideas for interrogating and improving intra-technical services communication, and ideas for simple changes that can improve communication and build a sense of community/joint purpose within or across departments.
Supporting Students: OER and Textbook Affordability Initiatives at a Mid-Size... / NASIG
Presented by Jennifer L. Pate.
With support from the president and provost of the university, Collier Library adopted strategic purchasing initiatives, including database purchases to support specific courses as well as purchasing reserve copies of textbooks for high-enrollment, required classes. In addition, the scholarly communications librarian became a founding member of the OER workgroup on campus. This group’s mission is to direct efforts for increasing faculty awareness and adoption of OER. This presentation discusses the structure of each of these programs from initial idea to implementation. Included will be discussions of assessment of faculty and student awareness, development of an OER grant program, starting a textbook purchasing program, promotion of efforts, funding, and future goals.
Access to Supplemental Journal Article Materials NASIG
Presented by Electra Enslow, Suzanne Fricke, Susan Shipman
The use of supplemental journal article materials is increasing in all disciplines. These materials may be datasets, source code, tables/figures, multimedia or other materials that previously went unpublished, were attached as appendices, or were included within the body of the work. Current emphasis on critical appraisal and reproducibility demands that researchers have access to the complete shared life cycle in order to fully evaluate research. As more libraries become dependent on secondary aggregators and interlibrary loan, we questioned if access to these materials is equitable and sustainable.
Communications and context: strategies for onboarding new e-resources librari... / NASIG
Presented by Bonnie Thornton.
This presentation details onboarding strategies institutions can utilize to help acclimate new e-resources librarians with an emphasis on strategies for effectively establishing and perpetuating communications with stakeholders.
Full Text Coverage Ratios: A Simple Method of Article-Level Collections Analy... / NASIG
Presented by Matthew Goddard.
This presentation describes a simple and efficient method of using a discovery layer to evaluate periodicals holdings at the article level, and suggests a variety of applications.
This document provides information about Bloomsbury Digital Resources. It highlights that Bloomsbury won awards in 2018 from the Independent Publishers Guild and The Bookseller for its digital publishing and website. It offers over 10,000 titles across many subject areas from various imprints. Titles are available through GOBI and OASIS. The platform provides perpetual access to DRM-free titles with unlimited concurrent users and downloading/printing. It also offers over 220 open access titles and features like related content links and personalization options. Specific collections are mentioned like Drama Online Library which includes playtexts, scholarly books, audio plays and video plays.
Web accessibility in the institutional repository crafting user centered sub... / NASIG
Presented by Jenny Hoops and Margaret McLaughlin.
As web accessibility initiatives increase across institutions, it is important not only to reframe and rethink policies, but also to develop sustainable and tenable methods for enforcing accessibility efforts. For institutional repositories, it is imperative to determine the extent to which both the repository manager and the user are responsible for depositing accessible content. This presentation allows us to share our accessibility framework and help repository and content managers craft sustainable, long-term goals for accessible content in institutional repositories, while also providing openly available resources for short-term benefit.
Linked Data is exploding in the library world, but the biggest problems libraries have are coming up with the time or money involved in converting their records, looking into Linked Data programs, finding community support, and all the various other issues that arise as part of developing new methods. Likewise, one of the biggest hurdles for libraries and linked data is that they do not know what to do to get involved. As we have fewer people available and smaller budgets each year, we would like to explore ways in which libraries can get involved in the process without expending an undue amount of their already dwindling resources. To see how linked data can be applied, we will look at the example of the Smithsonian Libraries (SIL). Over the past 18 months, SIL has been preparing for the transition from MARC to linked open data. This session will talk about various SIL projects and initiatives (such as the FAST headings project and the introduction of Wikidata and WikiBase); how to incorporate linked data elements into MARC records; and how to develop staff and give them proficiency with new tools and workflows.
Heidy Berthoud, Head, Resource Description, Smithsonian Libraries
Walk this way: Online content platform migration experiences and collaboration NASIG
In this session, a librarian and a publisher share their perspectives on content platform migrations, and the Working Group Co-chairs will describe the group’s efforts to-date and expected outcomes. Our publisher-side speaker will describe issues they must consider when their content migrates, such as providing continuous access, persistent linking, communicating with stakeholders, and working with vendors. Our librarian speaker will describe their experience and steps they take during migrations, such as receiving notifications about migrations, identifying affected e-resources, updating local systems to ensure continuous access, and communicating with their front-line staff and patrons.
Read & Publish – What It Takes to Implement a Seamless Model?NASIG
PANELISTS
Adam Chesler
Director of Global Sales
AIP Publishing
Sara Rotjan
Assistant Marketing Director, AIP Publishing
Keith Webster
Dean of Libraries and Director of Emerging and Integrative Media Initiatives
Carnegie Mellon University
Andre Anders
Director, Leibniz Institute of Surface Engineering (IOM)
Editor in Chief of Journal of Applied Physics
Professor of Applied Physics, Leipzig University
“Read & Publish” agreements continue to gain global attention. What’s rarely discussed when these new access and article processing models are introduced is the paperwork, back-end technology and overall management required to implement the new program that works for all involved. This panel, comprised of a librarian, publisher, and researcher, will focus on the complexities of developing, implementing and using the infrastructures of different Read & Publish models and the challenges of developing a seamless experience for everyone.
From article submission to publication to final reporting, the panel will discuss the “hidden” impact that new workflows will have on stakeholders in scholarly communications. Time will be allotted for Q&A and attendee participation is encouraged.
Mapping Domain Knowledge for Leading and Managing Change / NASIG
This document provides an overview of domain knowledge related to leading and managing change in library services. It discusses key concepts in management, leadership, change management, emotional intelligence, social intelligence, and project management. Specific models and theories are mapped to each domain, such as Kotter's 8 steps of change, Lewin's change model, and Goleman's leadership styles. Competencies within each domain like communication, problem-solving, and relationship building are also outlined. The document aims to equip leaders in library services with the knowledge to successfully lead and manage organizational change efforts.
When to hold them when to fold them: reassessing big deals in 2020 / NASIG
This presentation goes into details for each of the publishers’ big deals that we examined and present reasons as to why we cancelled them, with concrete examples from our experiences (four cancellations and two restructurings).
Getting on the Same Page: Aligning ERM and LibGuides Content / NASIG
The document discusses efforts at the University of North Texas libraries to align their electronic resource management (ERM) system data with their LibGuides subject and course guides. This included cleaning up subject headings, migrating data from their ERM to populate the LibGuides A-Z database list, and using spreadsheets to match records and import fields to enhance the A-Z list entries. The goals were to centralize electronic resource information management, improve the user experience of finding resources, and establish workflows for regular synchronization between the ERM and LibGuides systems.
A multi-institutional model for advancing open access journals and reclaiming... / NASIG
The presenters will provide brief overviews of CIL and PDXScholar, and they will detail the challenges and ultimate successes of this multi-institutional model for advancing open access journals and reclaiming control of the scholarly record.
Knowledge Bases: The Heart of Resource Management / NASIG
This session will discuss the knowledge base metadata lifecycle, current and upcoming metadata standards, and the effect that knowledge bases have on discovery and e-resource management. The presenters will look at ways knowledge bases can be leveraged to create downstream tools for resource management and discovery. The session will also provide different perspectives on knowledge bases, including from librarians and product managers, as well as a discussion of NISO's KBART Automation recommended practice and what this could mean for knowledge bases in the future. The session will also include a conversation regarding how leveraging knowledge bases can aid librarians in improving resource discovery within their own libraries and ultimately decrease the amount of time spent on metadata workflows. Through this presentation, we also aim to improve communication between the library community and metadata providers and creators.
Elizabeth Levkoff Derouchie, Metadata Librarian for Serials & Electronic Resources, Samford University Library
Beth Ashmore, Associate Head, Acquisitions & Discovery (Serials), North Carolina State University
Eric Van Gorden, Product Manager, EBSCO
2. Current Cataloging Environment
•Content Standards: RDA, AACR2, DACS, DCRM, other community guidelines (e.g., OLAC, MLA, CONSER), etc.
•Encoding Standards: MARC 21, EAD, Dublin Core, MODS, MADS, VRA Core, schema.org, etc.
•Exchange Formats: MARC 21, RDF, etc.
3. What’s Wrong with MARC?
•Library specific format
•Stuck data
•Repetitive entry of shared metadata
•MARC Must Die – Roy Tennant
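To make the "stuck data" and "repetitive entry" points concrete, here is a small, hypothetical MARC 21 fragment for the title used as an example later in this deck; the author and title live only as text strings inside the record, so every agency that catalogs the book re-keys or copies the same character strings rather than pointing at a shared identifier.

100 1# $a Bradbury, Ray, $d 1920-2012.
245 10 $a Fahrenheit 451 / $c Ray Bradbury.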
5. What is BIBFRAME?
•Vocabulary built on the Resource Description Framework (RDF)
•Terms describing the Libraries/Archives/Museum world
•Allows our data to live in a Linked Data environment
6. Down the Rabbit Hole (RDF)
•Simple statements to express relationships (AKA Triples)
•Best used with Uniform Resource Identifiers (URIs)
•The essential piece of Linked Data
(Subject) (Predicate) (Object)
Fahrenheit 451 written by Ray Bradbury
<http://worldcat.org/entity/work/id/268886> <http://rdaregistry.info/Elements/w/author> <http://dbpedia.org/resource/Ray_Bradbury>
7. Further Down the Rabbit Hole (Linked Data)
•Data encoded in RDF, using global identifiers
•Published online, supporting use with data from other sources
[Diagram: the library's statement "Fahrenheit 451 written by Ray Bradbury" and Wikipedia's entry for Ray Bradbury both use the identifier <http://dbpedia.org/resource/Ray_Bradbury>, linking Library Land data to Wikipedia data]
9. BIBFRAME Developments
•Version 2.0
•Library of Congress Tools: MARC to BIBFRAME Conversion Specifications/Comparison Viewer
•Upcoming Tool: BIBFRAME Editor
•Library of Congress BIBFRAME Pilot Phase 2
•Zepheira: bibfra.me
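For a feel of what the vocabulary looks like in practice, here is a minimal, hand-written sketch of a BIBFRAME 2.0 Work and Instance in Turtle. The local URIs and values are invented for illustration; an actual conversion from a MARC record via the LC specifications would produce a much fuller description.

@prefix bf: <http://id.loc.gov/ontologies/bibframe/> .

# Hypothetical local URIs for the Work and one Instance of it
<#work1> a bf:Work ;
    bf:title [ a bf:Title ; bf:mainTitle "Fahrenheit 451" ] .

<#instance1> a bf:Instance ;
    bf:instanceOf <#work1> ;
    bf:title [ a bf:Title ; bf:mainTitle "Fahrenheit 451" ] ;
    bf:provisionActivity [ a bf:Publication ; bf:date "1953" ] .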
10. Future Cataloging
•No direct cataloging in encoding format (i.e., none of this)
•Browser interface with material profiles
12. The Very Near Future of MARC 21
•Necessary to map legacy data
•Not all data will be turned into linked data
•Not all libraries will (be able to) make the switch
•Workflows depend on it!
13. The FRBR Family
•IFLA Library Reference Model (LRM)
• Resolves inconsistencies between FR models
• Drastic changes to serials
• RDA to be frozen August 2017-April 2018 (3R Project)
14. FRBR vs. LRM
•FRBR: a serial work can have multiple expressions, and each expression can have multiple manifestations
•LRM: each serial work has exactly one expression and exactly one manifestation
15. Exchange Formats in Action
•MARC 21 & RDF:
• Easily exchanged and interpreted
• Elements have universal and (relatively) persistent meaning
•MARC 21:
• Natural state: Record
• Innate mechanisms for version control and provenance
• “Over-shares”
•RDF (i.e., BIBFRAME)
• Natural state: Triple
• Lacks self-describing metadata
• “Tight lipped”
16. RDF Questioning
•Who created the statement about the thing? (provenance)
• Based on what information? (authoritativeness)
•Did anybody ever make a different statement about the thing? (version control)
18. Next Steps
•In need of specialized cataloging communities
• BIBFRAME still being updated
• Best practices for BIBFRAME & LRM
•In need of vendors, programmers, and IT folks
• Linked Data technological infrastructure
20. Sources & Other Readings
•Roy Tennant, “MARC Must Die,” Library Journal October 15, 2002,
http://lj.libraryjournal.com/2002/10/ljarchives/marc-must-die/#_
•“Semantic Web,” World Wide Web Consortium (W3C), https://www.w3.org/standards/semanticweb/
•“MARC 21 to BIBFRAME 2.0 Conversion Specifications,” Library of Congress,
http://www.loc.gov/bibframe/mtbf/
•“BIBFRAME Comparison Tool,” Library of Congress, http://id.loc.gov/tools/bibframe/compare-id/full-ttl
•MacKenzie Smith, Carl G. Stahmer, Xiaoli Li, and Gloria Gonzalez, “BIBFLOW: A Roadmap for Library
Linked Data Transition,” https://bibflow.library.ucdavis.edu/roadmap/
•Pat Riva, Patrick Le Bœuf, Maja Žumer, editors, “IFLA Library Reference Model,” March 2017 version,
International Federation of Library Associations and Institutions,
https://www.ifla.org/files/assets/cataloguing/frbr-lrm/ifla_lrm_2017-03.pdf
•James Hennelly and Judy Kuhagen, “3R Project,” Presentation for the RDA Steering Committee, May
16, 2017, http://www.rda-rsc.org/sites/all/files/3R%20Update%20Hennelly%20and%20Kuhagen.pdf
21. Image Sources
•RDA Toolkit: http://access.rdatoolkit.org/
•MARC 21 Format for Bibliographic Data (245 field):
https://www.loc.gov/marc/bibliographic/bd245.html
•Overview of the BIBFRAME 2.0 Model: http://www.loc.gov/bibframe/docs/bibframe2-model.html
•Functional Requirements for Bibliographic Records, Final Report:
https://www.ifla.org/files/assets/cataloguing/frbr/frbr_2008.pdf
•Zepheira BIBFRAME Scribe: http://editor.bibframe.zepheira.com/static/
•OCLC Connexion
•BIBFLOW: A Roadmap for Library Linked Data Transition:
https://bibflow.library.ucdavis.edu/roadmap/
Editor's Notes
Hello and welcome. As you can probably guess from the title, you are going to be in for some BIBFRAME for the next hour. However, this will not be an overview of the nuts and bolts of BIBFRAME. Rather, I aim to provide a higher level overview of the BIBFRAME model, and show some of the ways that cataloging in a BIBFRAME environment may be different than in our current environment. And since BIBFRAME is not the only change on the horizon, I will also attempt to cover a few other upcoming changes.
So before we look to see where we’re going, it will be useful to see where we currently stand. Within the current cataloging environment, catalogers essentially work within the scope of a small set of standards. Obviously, this list does not include any of the tools or technology that catalogers work with – things like cataloging utilities, integrated library systems, digital asset management systems, or library service platforms, but I would like to focus on the small number of schemas we work with that dictate much of how we catalog.
No cataloger would need to deal with all of these standards, but all of these standards are likely used by at least a handful of catalogers. This presentation will mostly focus on the bolded standards.
First on our list are the content standards. These provide rules and instructions for describing the materials we catalog. They provide guidance on what constitutes title information, what to capitalize, etc. There are a variety of these standards, and they are often community specific. RDA and AACR2 are the general rules, while DACS is used for archives, DCRM is for rare materials, and so on. There are also a number of community-specific guidelines that often accompany the official standards. For example, in RDA, and previously in AACR2, the Program for Cooperative Cataloging and the Library of Congress often issue their own interpretations of instructions and state their policies when they differ from the explicit instructions. And the Cooperative Online Serials Program, or CONSER, publishes several supplemental documents to help serials catalogers.
Next up are encoding standards. These standards dictate how to encode the metadata created when following a content standard. MARC 21 is probably the most common example, and is made up of a number of numeric fields and alphanumeric subfields that correspond to the various metadata elements where information is entered. Once again, there are a number of other encoding standards that are used by different communities.
Last up are the exchange formats. These are stable schemas that allow data to be exchanged between different parties. MARC 21 is once again the most common exchange format used in cataloging. By having strictly defined fields and subfields, metadata encoded in MARC 21 can easily be moved around and can be deciphered by anybody who knows what the heck MARC 21 is.
While MARC has been an extremely successful tool for catalogers, it is really showing its age. In the age of the internet, MARC stands out by being virtually unknown or ignored by the “regular” community, and is used solely within the library community. The structure of MARC is also extremely rigid. Metadata recorded in MARC records is extremely difficult to pull out in an automated fashion, and it cannot easily be integrated with data from other sources.
Another issue with using MARC when cataloging in RDA is that it is completely self-contained. The RDA model, based on FRBR, has a structure that allows common metadata to be shared amongst several resources, but MARC records are single, flat files, so common metadata must be re-entered for every record.
The limitations of MARC have been well documented, but up until recently, there have been no good options for replacement.
Depending on your opinion, the beginning of the end of MARC came in 2011 when the Library of Congress announced the Bibliographic Framework Initiative (BIBFRAME). They tapped Zepheira, a private company founded by Eric Miller, who was formerly part of the World Wide Web Consortium (W3C), to formulate the first draft of the BIBFRAME data model. The intention of BIBFRAME is to serve as the replacement for MARC 21.
Even though BIBFRAME is intended to replace MARC, its structure is vastly different.
BIBFRAME is built on the Resource Description Framework, or (RDF), which we will cover in just a minute.
The scope of BIBFRAME is the Libraries/Archives/Museum world, but it is being built in such a way that it should be interoperable with outside schemas and vocabularies, and would allow our data to live on the web in a linked data environment.
RDF: Resource Description Framework. The central piece of Linked Data. RDF is a data model for making simple statements about resources and the relationships between them. This model is expressed as triples, which follow a subject-predicate-object structure. The subject and object normally represent the entities, and the predicate represents the relationship between them. For example, the statement “Fahrenheit 451 was written by Ray Bradbury” can be translated into a triple with Fahrenheit 451 as the subject, Ray Bradbury as the object, and the relationship “written by” as the predicate.
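To make that concrete, here is a minimal sketch, assuming the Python rdflib library, that records the slide’s Fahrenheit 451 triple using the same URIs shown earlier:

    from rdflib import Graph, URIRef

    g = Graph()
    work = URIRef("http://worldcat.org/entity/work/id/268886")        # Fahrenheit 451 (subject)
    written_by = URIRef("http://rdaregistry.info/Elements/w/author")  # "author" relationship (predicate)
    bradbury = URIRef("http://dbpedia.org/resource/Ray_Bradbury")     # Ray Bradbury (object)

    g.add((work, written_by, bradbury))  # one statement = one triple
    print(g.serialize(format="nt"))      # print the graph as N-Triples

The entire statement is just those three URIs in order, which is what makes triples so easy for machines to store and exchange.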
By describing entities and relationships in triples, and describing everything using a URI, we can add semantic meaning to the statement. This semantic meaning is incredibly important for machine “understanding.” If a machine comes across a statement coded in HTML, it cannot parse out the meaning of the statement. It’s just a collection of text. By providing a dereferenceable URI which can stand in for the entity, we can provide a contextual anchor that is useful if the machine comes across other entities tied to the same URI.
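As a rough illustration of what a dereferenceable URI buys us, a machine can simply request an RDF description of the entity behind the URI. This sketch assumes the DBpedia server still offers Turtle via content negotiation, which may change over time:

    import requests

    # Ask the server for a machine-readable (RDF/Turtle) description of the entity
    resp = requests.get(
        "http://dbpedia.org/resource/Ray_Bradbury",
        headers={"Accept": "text/turtle"},
    )
    print(resp.status_code)
    print(resp.text[:300])  # the start of the RDF describing Ray Bradbury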
Linked Data: describing data using HTTP URIs/IRIs in an RDF framework. This allows machines to “understand” the semantics of the triples.
Here we see an example of how assigning a URI to an entity in a triple can facilitate connections to other resources. By using a “global” URI for Ray Bradbury, in this case a URI tied to Wikipedia, any other statements that include references to Ray Bradbury that are tied to the same URI can be linked to our triple.
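Here is a small sketch of that linking, again using rdflib; the foaf:name statement stands in for whatever outside data might exist about the same URI, and is only illustrative:

    from rdflib import Graph, URIRef, Literal, Namespace

    FOAF = Namespace("http://xmlns.com/foaf/0.1/")
    bradbury = URIRef("http://dbpedia.org/resource/Ray_Bradbury")

    # Library land's statement about Fahrenheit 451
    library = Graph()
    library.add((
        URIRef("http://worldcat.org/entity/work/id/268886"),
        URIRef("http://rdaregistry.info/Elements/w/author"),
        bradbury,
    ))

    # A statement from an outside source about the very same URI
    outside = Graph()
    outside.add((bradbury, FOAF.name, Literal("Ray Bradbury")))

    # Because both graphs point at the same URI, merging them connects the
    # book description to the outside data with no extra work
    merged = library + outside
    for triple in merged:
        print(triple)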
Another important piece of linked data is publishing RDF triples online and making them available for querying using standard query tools such as SPARQL. This is what allows RDF triples from different sources to be linked together.
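For example, a SPARQL query like the one below asks “which works have Ray Bradbury as their author?” This sketch runs the query over a local rdflib graph, but the same query could be sent to a published SPARQL endpoint:

    from rdflib import Graph, URIRef

    g = Graph()
    g.add((
        URIRef("http://worldcat.org/entity/work/id/268886"),
        URIRef("http://rdaregistry.info/Elements/w/author"),
        URIRef("http://dbpedia.org/resource/Ray_Bradbury"),
    ))

    results = g.query("""
        SELECT ?work WHERE {
          ?work <http://rdaregistry.info/Elements/w/author>
                <http://dbpedia.org/resource/Ray_Bradbury> .
        }
    """)
    for row in results:
        print(row.work)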
Work – Instance – Item
This differs from RDA/FRBR’s WEMI model, but the idea is similar. The reasoning is that BIBFRAME should be content-model agnostic, since RDA is just one of many content models used by the library/archives/museum community. In theory, the RDA manifestation and item correspond one-to-one with the BIBFRAME instance and item, while the RDA work and expression both correspond to the BIBFRAME work. This means that a translation of a work, which would be treated as an expression of the same work in RDA, would be treated as a different work in BIBFRAME. In either case, there would be a relationship established between the two entities.
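Here is a minimal sketch of the Work – Instance – Item pattern in Turtle, loaded with rdflib. The ex: URIs are made up, and while the bf: class and property names follow the published BIBFRAME 2.0 vocabulary, the official ontology at id.loc.gov is the authority:

    from rdflib import Graph

    ttl = """
    @prefix bf: <http://id.loc.gov/ontologies/bibframe/> .
    @prefix ex: <http://example.org/> .

    ex:work1 a bf:Work ;                 # the conceptual work
        bf:title [ a bf:Title ; bf:mainTitle "Fahrenheit 451" ] ;
        bf:hasInstance ex:instance1 .

    ex:instance1 a bf:Instance ;         # a published embodiment of the work
        bf:instanceOf ex:work1 ;
        bf:hasItem ex:item1 .

    ex:item1 a bf:Item ;                 # the physical or electronic copy
        bf:itemOf ex:instance1 .
    """

    g = Graph()
    g.parse(data=ttl, format="turtle")
    print(len(g), "triples")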
The Library of Congress released version 2.0 of BIBFRAME in April 2016. Now available on the Library of Congress BIBFRAME site are tools that allow for the investigation of LC’s mapping from MARC to BIBFRAME. The conversion specifications contain instructions on how to perform the conversions, and the Comparison Viewer provides a way of seeing an individual LC record get transformed into BIBFRAME.
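As a heavily simplified, purely illustrative sketch of what such a conversion does (the file name and work URIs are made up, and the real specifications cover hundreds of field mappings), a MARC 245 $a can be lifted into a BIBFRAME-style title like this, assuming the pymarc and rdflib libraries:

    from pymarc import MARCReader
    from rdflib import Graph, URIRef, Literal, Namespace, BNode, RDF

    BF = Namespace("http://id.loc.gov/ontologies/bibframe/")

    g = Graph()
    with open("legacy_records.mrc", "rb") as fh:  # hypothetical file of MARC 21 records
        for i, record in enumerate(MARCReader(fh)):
            work = URIRef(f"http://example.org/work/{i}")
            g.add((work, RDF.type, BF.Work))
            for field in record.get_fields("245"):     # title statement
                for title in field.get_subfields("a"):
                    t = BNode()
                    g.add((work, BF.title, t))
                    g.add((t, RDF.type, BF.Title))
                    g.add((t, BF.mainTitle, Literal(title)))

    print(g.serialize(format="turtle"))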
Coming soon is the BIBFRAME Editor which is being built to support the 2nd phase of LC’s BIBFRAME pilot which was scheduled to begin in early June 2017.
And Zepheira has since moved on to further develop its own version of BIBFRAME that is still being updated.
In our current environment, catalogers frequently work directly in an encoding schema. When cataloging in MARC, catalogers almost universally work directly in MARC, either in a cataloging utility, or an ILS, or some other system. This cataloging interface will most likely change dramatically in a linked data environment.
Both the Library of Congress Editor that is being finalized and Zepheira’s Scribe tool are browser-based cataloging tools where catalogers follow prompts that are customized for particular format profiles. Browser-based cataloging using profiles allows for varied options on the cataloging prompts. When cataloging directly into a MARC record, the metadata we record is stringently based on how MARC 21 defines the various fields. When cataloging outside of a specific format, prompts can be based on content standards, such as RDA, or built in-house using non-standard terms. All that matters is that the recorded metadata is granular enough to move between various encoding standards.
It is also important to note that with these new browser-based interfaces, the mapping from the description happens behind the scenes, so catalogers will not be working directly in RDF.
Having a granular description is important because it allows for mapping between the “generic” description and any number of encoding standards. By avoiding mapping between the various encoding standards themselves, we avoid the issues that arise when metadata is lost in a mapping to a less granular schema.
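As a toy sketch of that idea (all of the labels here are invented for illustration, not taken from any actual profile), one granular description can be projected into different encodings without ever mapping the encodings to each other:

    # A format-neutral description captured from a profile-driven form
    description = {
        "title_proper": "Fahrenheit 451",
        "creator_name": "Bradbury, Ray",
    }

    # Projection into a MARC 21-flavored view
    marc_view = {
        "245 $a": description["title_proper"],
        "100 $a": description["creator_name"],
    }

    # Projection into a BIBFRAME-flavored view (property labels illustrative)
    bibframe_view = {
        "bf:mainTitle": description["title_proper"],
        "bf:agent": description["creator_name"],
    }

    print(marc_view)
    print(bibframe_view)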
That being said, MARC is not going away any time soon.
First, direct mapping of MARC data to BIBFRAME will be necessary for legacy data. And even after legacy data is mapped, the MARC records will not just disappear.
Second, it is not known whether all MARC data will be transformed into linked data. Things like payment or invoice information may be better off not being converted.
Next, there are still a large number of institutions that are not investigating linked data. Many may not feel that they will be able to make the switch for a variety of reasons, and others simply may not want to make the switch.
Finally, as seen in the illustration from the BIBFLOW project conducted by UC Davis and Zepheira, MARC data is used extensively throughout the library environment. It will be extremely difficult to pry MARC out of these workflows.
There has been much discussion about the IFLA Library Reference Model (formerly FRBR LRM), and the conclusions it draws. The document itself was created to resolve inconsistencies between the various FR models. Since the previous models varied in a number of ways, it is not surprising that the LRM model differs from the FRBR model that many catalogers know about.
One area that has been drastically changed is serials. Within the FRBR model, a serial work can have a number of expressions. In practice, these expressions usually involve language editions. And each expression can have a number of manifestations, often corresponding to print and online versions. Within the LRM, each serial work has exactly one expression and exactly one manifestation.
It remains to be seen how those changes will filter down to RDA. There is currently a project underway to update RDA, and part of this project involves restructuring elements and instructions to better align with the LRM. This is the RDA Toolkit Restructure and Redesign Project, or 3R.
Now that we have a basic overview of the serials model within the LRM, we can get at least a minimum understanding of how these changes may impact our cataloging practices.
Within our current MARC environment, RDA instructions based on the LRM could mean drastic changes to the “single record approach” and the “provider neutral record.” Following the single record approach allows for including two or more different kinds of manifestations (usually print and online) of the same expression on a single record. The provider neutral guidelines are used for creating single descriptions for serials that have multiple online manifestations available from different providers by leaving out metadata that is specific to the providers. In both situations, we would likely need to create separate records for each manifestation.
While MARC has some problems, it has served admirably as an exchange format for the library community. RDF is a more recent exchange format, but the two have some common characteristics. Both are easily exchanged and interpreted (remember, RDF is intended to be interpreted by machines), and both are made up of elements (classes and properties in the case of RDF) which have universal and relatively persistent meaning.
However, there are also some differences which may cause some problems for libraries in the future.
Metadata encoded in MARC is normally transferred as a complete record, and contains several fields which describe the record itself. These fields allow for robust version control (i.e., when the record was updated, and who updated it) and provenance (who created the record, how authoritative the metadata is, etc.). Metadata encoded in RDF triples does not have these same benefits. A triple may provide information about an entity, but it does not provide information about itself.
When attempting to discover what kind of information we need to be able to draw from our metadata, it may be useful to consider asking questions of the metadata.
Since RDF by itself may not be able to answer these questions, we need to look elsewhere.
Without going into detail about the specific tools, the answers to our questions may lie in the technical infrastructure that is necessary to support a linked data environment. While catalogers have gotten used to understanding the tools that provide the answers to questions we have about our metadata, it may be necessary to look outside of our community to the IT and programming community for help.
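One approach from that infrastructure, offered here only as a hedged sketch, is to store triples in named graphs and then make statements about the graphs themselves; the Dublin Core properties and example URIs below are one plausible way to record who made a statement and when, not a prescribed solution:

    from rdflib import Dataset, URIRef, Literal, Namespace

    DCT = Namespace("http://purl.org/dc/terms/")

    ds = Dataset()

    # The bibliographic statement lives in its own named graph...
    stmt = ds.graph(URIRef("http://example.org/graph/stmt1"))
    stmt.add((
        URIRef("http://worldcat.org/entity/work/id/268886"),
        URIRef("http://rdaregistry.info/Elements/w/author"),
        URIRef("http://dbpedia.org/resource/Ray_Bradbury"),
    ))

    # ...and a second graph records who asserted it and when it last changed
    prov = ds.graph(URIRef("http://example.org/graph/provenance"))
    prov.add((URIRef("http://example.org/graph/stmt1"), DCT.creator,
              Literal("Example cataloging agency")))
    prov.add((URIRef("http://example.org/graph/stmt1"), DCT.modified,
              Literal("2017-06-01")))

    print(ds.serialize(format="nquads"))

Whatever the specific mechanism, the point is that provenance and version control live in the surrounding infrastructure rather than in the individual triple.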