Lightning talks by
Gordon Dunsire on library standards and linked data
Gill Hamilton on recent initiatives with open and linked open data at National Library of Scotland
Semantic Web special interest group meeting - IFLA WLIC 2012 - Figoblog
The document discusses the 2nd open session of the IFLA Semantic Web Special Interest Group (SWSIG) being held in Helsinki. It provides an introduction to semantic web concepts including the semantic web, linked data, RDF triples, and ontologies. It also discusses applications of semantic web standards and namespaces from organizations such as IFLA and W3C, and from models such as FRBR, in areas such as library linked open data projects, element sets, value vocabularies, and dataset applications. Presentations will be given on topics like the Bibliographic Framework Update, licensing issues in linked data projects, and practical linked library data applications.
The Digital Pompidou Centre project aims to create a new website for the Centre Pompidou using semantic web and linked data principles. This will replace the current website and create a central digital library. The project involves linking cultural data from the museum, libraries, and archives into a unified data model. Key challenges include improving scalability, updating data daily, and gaining institutional support for opening the data.
1) Linked data is a set of best practices for publishing structured data on the web so that both humans and machines can access and link related data across different sources. It realizes Tim Berners-Lee's vision of a Semantic Web.
2) The key principles of linked data are using URIs to identify things, providing HTTP URIs so that URIs can be looked up, and including links to other URIs to allow for discovery of related data on the web.
3) By following these principles, data sources on the web have been connected into a large Web of Data, with over 31 billion RDF triples organized into domains such as media, geography, life sciences, and libraries. This enables new applications built on top of the connected data.
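These three principles are compact enough to demonstrate in a few lines. Below is a minimal sketch using Python's rdflib; the example.org namespace and the book record are hypothetical, while the DBpedia URIs follow DBpedia's public resource-naming scheme.

```python
# Minimal sketch of the linked data principles with rdflib (pip install rdflib).
# The example.org URIs are hypothetical; the DBpedia URIs are real resources.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, OWL

EX = Namespace("http://example.org/books/")  # hypothetical namespace

g = Graph()
g.bind("dcterms", DCTERMS)
g.bind("owl", OWL)

book = EX["treasure-island"]  # 1) use a URI to identify the thing

# 2) it is an HTTP URI, so clients can look it up; attach a literal for humans
g.add((book, DCTERMS.title, Literal("Treasure Island")))

# 3) link to other URIs so related data elsewhere can be discovered
g.add((book, DCTERMS.creator,
       URIRef("http://dbpedia.org/resource/Robert_Louis_Stevenson")))
g.add((book, OWL.sameAs,
       URIRef("http://dbpedia.org/resource/Treasure_Island")))

print(g.serialize(format="turtle"))
```

The Turtle output is one small patch of the Web of Data: a resource that names itself with an HTTP URI, carries a human-readable literal, and links outward to an external dataset.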
Presentation at the Online Information Conference, London 20th November 2013. Taking a look at the drivers behind the emerging Web of Data and how libraries need to be and can be part of it in the future.
Wednesday 6 May: Hand me the data! What you should know as a humanities researcher before asking for data from a web archive - WARCnet
Wednesday 6 May: Hand me the data! What you should know as a humanities researcher before asking for data from a web archive, Ulrich Have, NetLab/DIGHUMLAB, Aarhus University
This document summarizes a presentation on recent developments in cataloging standards and practices, including RDA, Bibframe, and linked data. The presentation discusses how standards like RDA and FRBR are moving cataloging towards a more entity-centric model based on semantic web principles. It also outlines proposals to encode library metadata as linked open data using the Resource Description Framework (RDF) to represent bibliographic records as sets of semantic triples and link them to external datasets. The goal is to transform library data into a true "Web of data" rather than just making it available on the traditional document-based web.
- The document discusses connecting museums through linked open data (LOD). It outlines the LODAC Museum project which aims to aggregate and associate over 1.4 billion collection objects from over 1,000 Japanese museums and cultural organizations.
- The project gathers data from various sources, standardizes the data, integrates it by identifying and associating the same data points, and publishes the integrated museum data as LOD.
- By connecting museum data to other types of data through LOD, it can provide new value like connecting works to local information, events, and enabling user-generated contributions about cultural collections.
The document discusses big data and linked data. It presents the three V's of big data - volume, velocity, and variety. It shows the semantic web layer cake and how linked data provides a lingua franca for data integration. It provides examples of using linked data for sensor data, supply chain data, and as a bridge between online and offline systems. Finally, it discusses adding a linked data layer to the existing internet architecture and engaging more stakeholders with the technology.
A presentation by Gordon Dunsire.
Delivered at the Cataloguing and Indexing Group Scotland (CIGS) Linked Open Data (LOD) Conference which took place Fri 21 September 2012 at the Edinburgh Centre for Carbon Innovation.
Linked Open Data projects aim to extend the web of documents to a web of linked data by adding semantics through standards like RDF and ontologies. The Linked Open Data cloud has grown significantly since 2007 and contains billions of RDF triples and links between data sources. Projects like LOD2 build on this by developing technologies and linking more open datasets to enable new applications. For Linked Data to achieve its full potential, openness and allowing free access and reuse is important, though it does mean losing some control over data usage.
Harvesting Repositories: DPLA, Europeana, & Other Case Studies - eohallor
Join this discussion on the benefits and process of harvesting to aggregators such as DPLA and Europeana. Through case studies we'll outline three stages of the process: 1) mapping, migrating, and normalizing data in open source digital repositories, 2) making use of the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH), and 3) reaping the benefits of increased exposure. Presenters welcome lively discussion and questions from participants of all technical backgrounds and skill levels.
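As a sketch of stage 2, the snippet below performs one OAI-PMH ListRecords harvest, following resumption tokens until the repository is exhausted. Only the protocol itself is assumed; the endpoint URL is a placeholder for a real repository's OAI-PMH base URL.

```python
# Sketch of an OAI-PMH harvest using requests and the standard library's
# XML parser. The endpoint URL is a hypothetical placeholder.
import requests
import xml.etree.ElementTree as ET

BASE_URL = "https://repository.example.edu/oai"  # hypothetical endpoint
OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

def harvest(base_url):
    """Yield (identifier, titles) for Dublin Core records, page by page."""
    params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
    while True:
        response = requests.get(base_url, params=params, timeout=30)
        root = ET.fromstring(response.content)
        for record in root.iter(f"{OAI}record"):
            identifier = record.find(f"{OAI}header/{OAI}identifier")
            titles = [t.text for t in record.iter(f"{DC}title")]
            yield (identifier.text if identifier is not None else None), titles
        token = root.find(f"{OAI}ListRecords/{OAI}resumptionToken")
        if token is None or not token.text:
            break  # no more pages
        # per the protocol, a resumption request carries only the token
        params = {"verb": "ListRecords", "resumptionToken": token.text}

for oai_id, titles in harvest(BASE_URL):
    print(oai_id, titles)
```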
Development of a Semantic Web-based Disaster Management System - NIT Durgapur
A Semantic Web model for disaster management that structures data so that any information needed during an emergency is readily available.
This document discusses standardizing data on the web. It notes that data exists in many formats, from informal to curated, and machine to human readable. W3C has focused on integrating data at web scale using standards like RDF, SPARQL, and Linked Data principles. However, converting all data to RDF has challenges. Much data exists as CSV, JSON, XML and does not need full integration. The reality is data on the web is messy with many formats. Developers see converting data as too complex. The document discusses providing tools to publish Linked Data easily, or focusing on raw data without RDF. It notes different approaches can coexist and discusses a workshop on open data formats.
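To illustrate the "make publishing easy" side of this argument, here is a sketch that lifts a small CSV file into RDF with rdflib; the CSV columns and the example.org namespace are hypothetical stand-ins for a real dataset.

```python
# Sketch of lifting plain CSV into RDF -- the kind of lightweight tooling
# the document argues publishers need. Columns and namespace are hypothetical.
import csv
import io
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DCTERMS, XSD

CSV_DATA = """id,title,year
b1,Treasure Island,1883
b2,Kidnapped,1886
"""

EX = Namespace("http://example.org/books/")  # hypothetical namespace
g = Graph()
g.bind("dcterms", DCTERMS)

for row in csv.DictReader(io.StringIO(CSV_DATA)):
    subject = EX[row["id"]]  # mint one URI per row
    g.add((subject, DCTERMS.title, Literal(row["title"])))
    g.add((subject, DCTERMS.date, Literal(row["year"], datatype=XSD.gYear)))

print(g.serialize(format="turtle"))
```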
The document is a presentation by Prof. Dr. Stefan Gradmann from KU Leuven given on July 11, 2013 at Universidad Carlos III de Madrid titled "From Records to Graphs: Linked Data and Libraries". It discusses how libraries are moving from traditional catalog records to linked data graphs by embracing semantic web technologies like RDF. Specifically, it covers the Europeana Data Model (EDM) and how it enables libraries to publish linked cultural heritage data and support new types of context-driven research services.
The document discusses the Research and Education Space (RES) project, which aims to create a web-based platform called Acropolis that aggregates and interconnects cultural heritage resources from various institutions like the British Library, British Museum, BBC archive, and others. It describes Acropolis' technical approach of using crawlers, indexes, and APIs to make these resources searchable. It also outlines challenges around standardizing heterogeneous metadata, reliably linking entities, and usability issues regarding tools, licensing, and stakeholder engagement. The author is looking to provide guidance on publishing cultural data as linked open data to help address these challenges.
1) Ontologies play a key role in semantic digital libraries by supporting bibliographic descriptions, extensible resource structures, and community-aware features.
2) Semantic digital libraries integrate information from various metadata sources and provide interoperability between systems using semantics.
3) Key ontologies for digital libraries include bibliographic ontologies, structure description ontologies, and community-aware ontologies that model folksonomies and social semantic collaborative filtering.
NISO Webinar:
Experimenting with BIBFRAME: Reports from Early Adopters
About the Webinar
In May 2011, the Library of Congress officially launched a new modeling initiative, the Bibliographic Framework Initiative, as a linked data alternative to MARC. The Library then announced the proposed model, called BIBFRAME, in November 2012. Since then, the library world has been moving from mainly theorizing about BIBFRAME to practical experimentation and testing. This experimentation is iterative and continues to shape the model so that it becomes stable and broadly acceptable enough for adoption; a minimal sketch of the core Work/Instance/Item pattern follows the agenda below.
In this webinar, several institutions will share their progress in experimenting with BIBFRAME within their library system. They will discuss the existing, developing, and planned projects happening at their institutions. Challenges and opportunities in exploring and implementing BIBFRAME in their institutions will be discussed as well.
Agenda
Introduction
Todd Carpenter, Executive Director, NISO
Experimental Mode: The National Library of Medicine and experiences with BIBFRAME
Nancy Fallgren, Metadata Specialist Librarian, National Library of Medicine, National Institutes of Health, US Department of Health and Human Services (DHHS)
Exploring BIBFRAME at a Small Academic Library
Jeremy Nelson, Metadata and Systems Librarian, Colorado College
Working with BIBFRAME for discovery and production: Linked Data for Libraries / Linked Data for Production
Nancy Lorimer, Head, Metadata Dept, Stanford University Libraries
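As promised above, here is a minimal sketch of the BIBFRAME Work/Instance/Item pattern in rdflib. The bf: namespace is the Library of Congress BIBFRAME 2.0 ontology; the example.org resource URIs are hypothetical.

```python
# Minimal sketch of BIBFRAME's Work/Instance/Item pattern with rdflib.
# bf: is the real LoC BIBFRAME 2.0 namespace; example.org URIs are hypothetical.
from rdflib import BNode, Graph, Literal, Namespace
from rdflib.namespace import RDF

BF = Namespace("http://id.loc.gov/ontologies/bibframe/")
EX = Namespace("http://example.org/bibframe/")  # hypothetical

g = Graph()
g.bind("bf", BF)

work, instance, item = EX["work1"], EX["instance1"], EX["item1"]

g.add((work, RDF.type, BF.Work))          # the conceptual work
g.add((instance, RDF.type, BF.Instance))  # a published embodiment
g.add((item, RDF.type, BF.Item))          # one held copy

title = BNode()
g.add((title, RDF.type, BF.Title))
g.add((title, BF.mainTitle, Literal("Treasure Island")))
g.add((instance, BF.title, title))

g.add((instance, BF.instanceOf, work))    # Instance -> Work
g.add((item, BF.itemOf, instance))        # Item -> Instance

print(g.serialize(format="turtle"))
```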
This document summarizes recent approaches to web data management including Fusion Tables, XML, and Linked Open Data (LOD). It discusses properties of web data like lack of schema, volatility, and scale. LOD uses RDF, global identifiers (URIs), and data links to query and integrate data from multiple sources while maintaining source autonomy. The LOD cloud has grown rapidly, currently consisting of over 3000 datasets with more than 84 billion triples.
The importance of metadata for datasets: The DCAT-AP European standard - Giorgia Lodi
The document discusses metadata standards for datasets, including DCAT, DCAT-AP, and related standards. It provides 3 key points:
1. DCAT and DCAT-AP are metadata standards that provide models for describing datasets and their distributions in order to improve discoverability, interoperability, and reuse. DCAT-AP adds constraints to DCAT for use by European data portals.
2. DCAT-AP_IT is the Italian implementation of DCAT-AP, which extends it with additional mandatory properties and controlled vocabularies. It defines core classes and properties for catalogs, datasets, and distributions in RDF.
3. Future developments include DCAT version 2, which introduces new classes and properties, such as support for describing data services.
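A minimal sketch of the catalog/dataset/distribution pattern at the heart of DCAT, built with rdflib; the data.example.eu URIs and the sample metadata are hypothetical.

```python
# Sketch of a DCAT catalog -> dataset -> distribution description.
# The dcat: terms are the real W3C vocabulary; all URIs below are hypothetical.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

DCAT = Namespace("http://www.w3.org/ns/dcat#")
EX = Namespace("https://data.example.eu/")  # hypothetical portal

g = Graph()
g.bind("dcat", DCAT)
g.bind("dcterms", DCTERMS)

catalog, dataset, dist = EX["catalog"], EX["dataset/1"], EX["dataset/1/csv"]

g.add((catalog, RDF.type, DCAT.Catalog))
g.add((catalog, DCAT.dataset, dataset))    # the catalog lists the dataset

g.add((dataset, RDF.type, DCAT.Dataset))
g.add((dataset, DCTERMS.title, Literal("Air quality measurements", lang="en")))
g.add((dataset, DCAT.distribution, dist))  # the dataset has a distribution

g.add((dist, RDF.type, DCAT.Distribution))
g.add((dist, DCAT.mediaType,
       URIRef("https://www.iana.org/assignments/media-types/text/csv")))
g.add((dist, DCAT.downloadURL, URIRef("https://data.example.eu/files/air.csv")))

print(g.serialize(format="turtle"))
```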
Smart Data Applications powered by the Wikidata Knowledge Graph - Peter Haase
This document discusses Wikidata and how it can power smart data applications. Wikidata is a large, structured, collaborative knowledge graph containing over 15 million entities. It holds structured data connected to Wikipedia pages and can be queried like a database using the Wikidata Query Service. The document promotes metaphacts, an enterprise knowledge graph platform that can be used to build applications on top of Wikidata, enrich Wikidata with private data, and enable companies to build and leverage their own knowledge graphs for domains such as cultural heritage and pharma.
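As a sketch of "queried like a database", the snippet below asks the Wikidata Query Service for paintings by Rembrandt using SPARQLWrapper (pip install sparqlwrapper). The endpoint and the identifiers (Q3305213 painting, Q5598 Rembrandt, P31 instance of, P170 creator) are real Wikidata terms; results depend on the live data.

```python
# Sketch of a Wikidata Query Service lookup via SPARQLWrapper.
# WDQS predefines the wd:/wdt:/wikibase:/bd: prefixes used below.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://query.wikidata.org/sparql",
                       agent="example-client/0.1")  # WDQS asks for a user agent
sparql.setQuery("""
SELECT ?painting ?paintingLabel WHERE {
  ?painting wdt:P31 wd:Q3305213 ;   # instance of: painting
            wdt:P170 wd:Q5598 .     # creator: Rembrandt
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 10
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["painting"]["value"], "-", row["paintingLabel"]["value"])
```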
The document provides an overview of knowledge graphs and the metaphactory knowledge graph platform. It defines knowledge graphs as semantic descriptions of entities and relationships using formal knowledge representation languages like RDF, RDFS and OWL. It discusses how knowledge graphs can power intelligent applications and gives examples like Google Knowledge Graph, Wikidata, and knowledge graphs in cultural heritage and life sciences. It also provides an introduction to key standards like SKOS, SPARQL, and Linked Data principles. Finally, it describes the main features and architecture of the metaphactory platform for creating and utilizing enterprise knowledge graphs.
Islandora Webinar: Research Data Repositories - eohallor
The July 2015 Islandora Webinar, highlighting Islandora Research Data Repositories, discusses data repositories spearheaded by individual researchers, academic libraries, and research centers.
Big Linked Data - Creating Training Curricula - EUCLID project
This presentation includes an overview of the basic rules to follow when developing training and education curricula for Linked Data and Big Linked Data
The document discusses Japan Link Center's (JaLC) experiment to register DOIs for research data. The experiment aims to establish workflows for registering DOIs for research data using JaLC's system. It involves 9 projects with 14 organizations testing DOI registration for research data. The document outlines several issues in registering DOIs for data, including operations flow, persistent access, granularity, dynamics of data, and quantity of data. It also provides examples of how projects can involve multiple institutions and how data lifecycles differ from literature.
This document discusses approaches to developing globally interoperable metadata standards like RDA. It describes the failure of top-down approaches and issues with both top-down and bottom-up mapping strategies. Bottom-up risks multiple overlapping element sets while top-down may not fully represent local practices. The author advocates balancing global needs with flexibility for local implementation.
An RDA record is a set of machine-readable identifiers and human-readable text that represents an entity such as a work, expression, manifestation, or item. It includes attributes like titles and identifiers, and relationships to other entities. The specific attributes and relationships included in a record depend on the needs of the application using it. RDA aims to support linked data applications by providing structured, linkable descriptions of cultural works and their relationships.
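To make the entity structure concrete, here is a sketch of the WEMI (work/expression/manifestation/item) chain an RDA record encodes. The real RDA element sets are published at rdaregistry.info; to keep the example self-contained, Dublin Core terms and a hypothetical ex: namespace (whose realizes/embodies/exemplifies properties are named after the FRBR relationships) stand in for them.

```python
# Sketch of the WEMI chain behind an RDA record. The ex: properties are
# hypothetical stand-ins named after the FRBR relationships; Dublin Core
# supplies the attribute properties.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DCTERMS

EX = Namespace("http://example.org/rda/")  # hypothetical namespace

g = Graph()
g.bind("dcterms", DCTERMS)
g.bind("ex", EX)

work = EX["work/1"]                    # the abstract creation
expression = EX["expression/1"]        # e.g. the English text
manifestation = EX["manifestation/1"]  # e.g. a particular edition
item = EX["item/1"]                    # one physical or digital copy

# relationships between the entities (hypothetical IRIs, FRBR-style names)
g.add((expression, EX.realizes, work))
g.add((manifestation, EX.embodies, expression))
g.add((item, EX.exemplifies, manifestation))

# attributes attach at the level where they belong
g.add((work, DCTERMS.title, Literal("Treasure Island")))
g.add((manifestation, DCTERMS.identifier, Literal("urn:example:isbn")))

print(g.serialize(format="turtle"))
```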
The document discusses the concept of granularity as it relates to linked open data from libraries, noting that data can have different levels of granularity depending on the level of description and complexity of the schema. It provides examples of how granularity applies to different aspects of bibliographic records and semantic mappings between properties from different schemas. The document also demonstrates how semantic reasoning can be used to infer additional statements based on sub-properties and property domains.
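The sub-property reasoning described above reduces to one entailment rule: if p is a sub-property of q and (s, p, o) holds, then (s, q, o) holds. Below is a sketch that applies a single pass of that rule by hand with rdflib; the ex: schema namespace is hypothetical.

```python
# Sketch of sub-property inference: one manual pass of the RDFS rule
# (p rdfs:subPropertyOf q) + (s p o) => (s q o). ex: is hypothetical.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDFS

EX = Namespace("http://example.org/schema/")  # hypothetical
g = Graph()
g.bind("dcterms", DCTERMS)
g.bind("ex", EX)

# schema-level mapping: the local property is narrower than the global one
g.add((EX.hasAuthor, RDFS.subPropertyOf, DCTERMS.contributor))

# instance data expressed with the fine-grained local property
book = URIRef("http://example.org/books/1")
g.add((book, EX.hasAuthor, Literal("Stevenson, Robert Louis")))

# single inference pass (a full reasoner would iterate to a fixed point)
for p, q in list(g.subject_objects(RDFS.subPropertyOf)):
    for s, o in list(g.subject_objects(p)):
        g.add((s, q, o))

# the graph now also contains: book dcterms:contributor "Stevenson, ..."
print(g.serialize(format="turtle"))
```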
Multilingual issues in the representation of international bibliographic standards - Gordon Dunsire
The document discusses multilingual issues in representing international bibliographic standards for the semantic web. It outlines IFLA's standards for bibliographic data and its namespace used to represent the standards and vocabularies as RDF. Translating the standards into different languages exposed challenges regarding scope, style, source documentation, disambiguation, and language inflection. The presentation calls for authoritative translations of cataloguing standards and related documents in 26+ languages.
This document provides a summary of a talk given by Tope Omitola on using linked data for world sense-making. The talk discussed EnAKTing, a project focused on building ontologies from large-scale user participation and querying linked data. It also covered publishing and consuming public sector datasets as linked data, including challenges around data integration, normalization and alignment. The talk concluded with a discussion of linked data services and applications developed by the project to enhance findability, search, and visualization of linked data.
SPARQL 1.1 Tutorial, given at UChile by Axel Polleres (DERI) - net2-project
This document provides an introduction to SPARQL 1.1. It begins by explaining that SPARQL is a query language for the semantic web that allows users to query RDF data stores similarly to how SQL queries relational databases. It then describes SPARQL 1.0, the initial standard version, and the new features being added in SPARQL 1.1, including aggregate functions, subqueries, property paths and federated querying. The document concludes by discussing SPARQL implementations and the status of the 1.1 specification.
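Two of the SPARQL 1.1 additions named above, aggregates and property paths, can be seen in a single query. The sketch below runs one against the public DBpedia endpoint via SPARQLWrapper; the category chosen is illustrative, and results depend on the live data.

```python
# Sketch of SPARQL 1.1 aggregates and property paths in one query,
# executed against the public DBpedia endpoint (pip install sparqlwrapper).
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
PREFIX dbc:  <http://dbpedia.org/resource/Category:>

SELECT ?grandparent (COUNT(DISTINCT ?cat) AS ?n)
WHERE {
  ?cat skos:broader dbc:Libraries .             # subcategories of Libraries
  ?cat skos:broader/skos:broader ?grandparent . # property path: two hops up
}
GROUP BY ?grandparent                           # aggregation: new in 1.1
ORDER BY DESC(?n)
LIMIT 5
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["grandparent"]["value"], row["n"]["value"])
```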
Presentation at ELAG 2011, European Library Automation Group Conference, Prague, Czech Republic. 25th May 2011
http://elag2011.techlib.cz/en/815-lifting-the-lid-on-linked-data/
This document summarizes Marin Dimitrov's presentation on linked data management at the 3rd GATE training course in Montreal in August 2010. The presentation covered linked data principles, key vocabularies and datasets, open government data initiatives, and tools for working with linked data. Some open issues discussed were the diversity of linked data schemas, data quality issues, reliability of endpoints, licensing concerns, and challenges of querying distributed data.
This document discusses linking open data with Drupal. It begins with an introduction to open data and the semantic web. It then explains how to transform open data into linked data by using ontologies, semantic metadata, and publishing semantic data on the web. Several Drupal modules are presented that allow consuming, publishing, and building applications with linked open data, including RDFx, Schema.org, and SPARQL. Finally, it proposes a hackathon event to work on projects connecting open data and Drupal through linked data approaches.
Building a linked data based content discovery service for the RTÉ Archives - MediaMixerCommunity
The DRI is a digital repository for humanities and social sciences data in Ireland that links and preserves data from Irish institutions through a central access point. The DRI platform provides preservation, access, and discovery of federated archives and storage. The DRI is working to grow digital preservation and access policies in Ireland through various initiatives and aims to adopt global best practices around digital preservation, data citation, and metadata. The presentation describes the DRI's mission and various technical aspects and initiatives.
Sandra Collins - Building a linked data based content discovery service for the RTÉ Archives - dri_ireland
The document summarizes the mission and activities of the Digital Repository of Ireland (DRI). DRI aims to preserve and provide access to humanities and social sciences data from Irish institutions through a central online access point. It links and preserves various cultural and social heritage data. DRI utilizes an open-source platform with federated archives and storage to allow for digital preservation, access, and discovery of data. It is working to establish national guidelines and policies around research data management in Ireland.
This document describes VoID (Vocabulary of Interlinked Datasets), which is a metadata vocabulary for describing linked datasets and linksets between datasets. VoID allows datasets to provide information about structural metadata, access points, statistics, and interlinking between other datasets. It has been adopted by many datasets in the Linked Open Data cloud.
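A sketch of a VoID description built with rdflib: a dataset with an endpoint and a triple count, plus a linkset pointing at a second dataset. The void: terms are the real VoID vocabulary; all dataset URIs here are hypothetical.

```python
# Sketch of a VoID dataset + linkset description. The void: vocabulary is
# real; every dataset URI below is a hypothetical placeholder.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import OWL, RDF, XSD

VOID = Namespace("http://rdfs.org/ns/void#")
EX = Namespace("http://example.org/void/")  # hypothetical

g = Graph()
g.bind("void", VOID)

ds = EX["myDataset"]
g.add((ds, RDF.type, VOID.Dataset))
g.add((ds, VOID.sparqlEndpoint, URIRef("http://example.org/sparql")))
g.add((ds, VOID.triples, Literal(1200000, datatype=XSD.integer)))

ls = EX["myDataset-links"]
g.add((ls, RDF.type, VOID.Linkset))
g.add((ls, VOID.subjectsTarget, ds))  # links originate in our dataset
g.add((ls, VOID.objectsTarget, EX["someOtherDataset"]))  # stand-in target
g.add((ls, VOID.linkPredicate, OWL.sameAs))

print(g.serialize(format="turtle"))
```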
This document discusses linking open data with Drupal. It begins with an introduction to open data and the semantic web. It explains how to transform open data into linked data using ontologies and semantic metadata. Several Drupal modules are presented for importing, publishing, and querying linked data. The document concludes by proposing a hackathon where participants could consume, publish, and build applications with linked open government data and the Drupal framework.
Contributing to the Smart City Through Linked Library Data - Marcia Zeng
The document discusses how linked library data can contribute to smart cities through two research projects. The first project develops recommendations for preparing bibliographic data as linked open data by unifying metadata from different sources and selecting an appropriate metadata standard. The second project connects library data to unfamiliar datasets available as linked open data by analyzing music-related datasets and aligning library data constructs with those from linked open data sources. The projects lay the technological groundwork for libraries and users to benefit from linked data through discovery and reuse of datasets and by enhancing information services.
RO-Crate: packaging metadata love notes into FAIR Digital Objects - Carole Goble
Abstract
slides available at: https://zenodo.org/record/7147703
The Helmholtz Metadata Collaboration aims to make the research data [and software] produced by Helmholtz Centres FAIR for their own and the wider science community by means of metadata enrichment [1]. Why metadata enrichment and why FAIR? Because the whole scientific enterprise depends on a cycle of finding, exchanging, understanding, validating, reproducing, integrating and reusing research entities across a dispersed community of researchers.
Metadata is not just “a love note to the future” [2], it is a love note to today’s collaborators and peers. Moreover, a FAIR Commons must cater for the metadata of all the entities of research – data, software, workflows, protocols, instruments, geo-spatial locations, specimens, samples, people (as well as traditional articles) – and their interconnectivity. That is a lot of metadata love notes to manage, bundle up and move around. Notes written in different languages at different times by different folks, produced and hosted by different platforms, yet referring to each other, and building an integrated picture of a multi-part and multi-party investigation. We need a crate!
RO-Crate [3] is an open, community-driven, and lightweight approach to packaging research entities along with their metadata in a machine-readable manner. Following key principles - “just enough” and “developer and legacy friendliness” - RO-Crate simplifies the process of making research outputs FAIR while also enhancing research reproducibility and citability. As a self-describing and unbounded “metadata middleware” framework, RO-Crate shows that a little bit of packaging goes a long way to realise the goals of FAIR Digital Objects (FDO) [4], and to not just overcome platform diversity but celebrate it while retaining investigation contextual integrity.
In this talk I will present the why, and how Research Object packaging eases Metadata Collaboration using examples in big data and mixed object exchange, mixed object archiving and publishing, mass citation, and reproducibility. Some examples come from the HMC, others from EOSC, USA and Australia, and from different disciplines.
Metadata is a love note to the future, RO-Crate is the delivery package.
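As a sketch of what that packaging looks like on disk, the snippet below writes a minimal ro-crate-metadata.json. The @context and conformsTo URIs are the real RO-Crate 1.1 identifiers; the crate name and file entries are hypothetical.

```python
# Sketch of a minimal ro-crate-metadata.json: a JSON-LD graph whose root
# dataset describes the packaged files. File names are hypothetical.
import json

crate = {
    "@context": "https://w3id.org/ro/crate/1.1/context",
    "@graph": [
        {   # the metadata file descriptor
            "@id": "ro-crate-metadata.json",
            "@type": "CreativeWork",
            "conformsTo": {"@id": "https://w3id.org/ro/crate/1.1"},
            "about": {"@id": "./"},
        },
        {   # the root data entity: the crate itself
            "@id": "./",
            "@type": "Dataset",
            "name": "Example analysis bundle",
            "datePublished": "2024-01-01",
            "hasPart": [{"@id": "results.csv"}],
        },
        {   # one packaged data file
            "@id": "results.csv",
            "@type": "File",
            "name": "Tabular results",
        },
    ],
}

with open("ro-crate-metadata.json", "w") as f:
    json.dump(crate, f, indent=2)
```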
[1] https://helmholtz-metadaten.de/en
[2] Scott, Jason The Metadata Mania, http://ascii.textfiles.com/archives/3181, June 2011
[3] Soiland-Reyes, Stian et al. “Packaging Research Artefacts with RO-Crate”. Data Science, 2022; 5(2):97-138, DOI: 10.3233/DS-210053
[4] De Smedt K, Koureas D, Wittenburg P. “FAIR Digital Objects for Science: From Data Pieces to Actionable Knowledge Units”. Publications. 2020; 8(2):21. https://doi.org/10.3390/publications8020021
A presentation by Gill Hamilton, Digital Access Manager at the National Library of Scotland (NLS).
Delivered at the Cataloguing and Indexing Group Scotland (CIGS) Linked Open Data (LOD) Conference which took place Fri 21 September 2012 at the Edinburgh Centre for Carbon Innovation.
The document provides an overview of how the LOCAH project is applying Linked Data concepts to expose archival and bibliographic data from the Archives Hub and Copac as Linked Open Data. It describes the process of (1) modeling the data as RDF triples, (2) transforming existing XML data to RDF, (3) enhancing the data by linking to external vocabularies and datasets, (4) loading the RDF into a triplestore, and (5) creating Linked Data views to expose the data on the web. The goal is to publish structured data that can be interconnected across domains to enable new uses by both humans and machines.
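Steps (1)-(3) of that pipeline can be sketched in a few lines: parse existing XML, mint a URI per record, and emit RDF triples. The XML shape and the example.org URIs below are hypothetical simplifications of archival data.

```python
# Sketch of transforming existing XML into RDF with the stdlib parser
# and rdflib. The XML shape and example.org URIs are hypothetical.
import xml.etree.ElementTree as ET
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DCTERMS

XML = """<records>
  <record id="gb1234-abc">
    <title>Papers of an Edinburgh printing house</title>
    <date>1850-1920</date>
  </record>
</records>"""

EX = Namespace("http://example.org/archives/")  # hypothetical namespace
g = Graph()
g.bind("dcterms", DCTERMS)

for rec in ET.fromstring(XML).iter("record"):
    subject = EX[rec.get("id")]  # model: one URI per archival record
    g.add((subject, DCTERMS.title, Literal(rec.findtext("title"))))
    g.add((subject, DCTERMS.date, Literal(rec.findtext("date"))))

# the remaining steps would load this graph into a triplestore and
# expose Linked Data views; here we just print the Turtle
print(g.serialize(format="turtle"))
```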
This document discusses creating a knowledge graph for Irish history as part of the Beyond 2022 project. It will include digitized records from core partners documenting seven centuries of Irish history. Entities like people, places, and organizations will be extracted from source documents and related in a knowledge graph using semantic web technologies. An ontology was created to provide historical context and meaning to the relationships between entities in Irish history. Tools will be developed to explore and search the knowledge graph to advance historical research.
This document discusses metadata and its importance for digital libraries and humanities. It defines metadata as "data about data" that describes resources to help users find, identify and select them. Metadata plays a crucial role in managing the huge amount of digital information and data available. The document advocates for an approach of enriching metadata by allowing both experts and users to contribute, and filtering it through customizable interfaces to meet diverse user needs.
LIDO is an XML schema that allows museums to share metadata about their collections in a standardized way. It was created to deliver descriptive information about museum objects, such as art, culture, technology, and natural science, to various online services like collection databases and aggregation portals. LIDO supports multilingual environments and individual data providers can decide how much or how little metadata to contribute for each record. It also allows linking contributed metadata back to the original collection records.
The document discusses the transition from the traditional web (Web 1.0) to the semantic web (Web 3.0) through Web 2.0. It outlines the key principles of linking data on the web in a way that is machine-readable and outlines progress made in publishing linked open government data through the UK's data.gov.uk portal, which has released over 1500 datasets from government departments. The document argues that linked open data can drive transparency, economic and social value, and improvements to public services.
Linked data demystified: Practical efforts to transform CONTENTdm metadata into linked data - Cory Lampert
This document outlines a presentation about transforming metadata from a CONTENTdm digital collection into linked data. It discusses the concepts of linked data, including defining linked data, linked data principles, technologies and standards. It then explains how these concepts can be applied to digital collection records, including anticipated challenges working with CONTENTdm. The document describes a linked data project at UNLV Libraries to transform collection records into linked data and publish it on the linked data cloud. It provides tips for creating metadata that is more suitable for linked data.
Our collections, our memory - National Library of Scotland at Kelvin Hall pre... - Gill Hamilton
The National Library of Scotland traces its origins to 1689 and is the legal deposit library for Scotland, originally based in Edinburgh but now also serving Glasgow from its new Kelvin Hall location. The document describes the library's collections, including over 10 million digital resources and a Moving Image Archive, and details the complex technical work that went into transforming the building site into a state-of-the-art library space that aims to inspire learning and foster a sense of community.
Europeana: we transform the world with culture - Gill Hamilton
Presentation given at Link and Linkage, the International Digital Culture Forum in Taichung, Taiwan on 12 and 13 August by Gill Hamilton, Digital Access Manager at National Library of Scotland. The presentation explains the work of Europeana, and the services it supports to ensure access to pan-European cultural services. It explains the history of governance of Europeana, its campaigns, advocacy and services, and looks at issues and benefits.
The document discusses the Europeana IIIF Task Force. It notes that Europeana has adopted IIIF but many of its content providers are unaware or unsure of IIIF. It reports on an initial survey, with 69 responses from collections across Europe, on IIIF implementation and awareness. The task force aims to identify trends in IIIF adoption among Europeana content providers and encourage more involvement from partners.
Presentation given at IIIF Showcase seminar on 17 March 2017 at National Library of Scotland outlining the Library's use of IIIF and its plans for further development and adoption of the Framework
Presentation given at the University of Edinburgh inaugural Open Knowledge Network meeting on 17 March 2017 in the School of Informatics. Covers: the National Library of Scotland; Gill Hamilton, Digital Access Manager; the open definition and associated licensing tools; the history of the open movement; and the implementation of open initiatives at the Library, including Wikipedians, an open licensing policy, and licensing of digitized collections.
Open for learning: Gaelic Digital Assistant and Gaelic Collections - Gill Hamilton
Presentation given to the Open Education Resources 2016 conference in Edinburgh on the Library's plans to employ a Gaelic Digital Assistant to work with the Gaelic collections to create new educational resources
The reality of linked data in libraries: presented at CILIP linked data execu... - Gill Hamilton
The document is a presentation by Gill Hamilton from the National Library of Scotland about linked open data and their experiments with it. It discusses three main tips for preparing data for linked open data: 1) using URIs to identify resources rather than strings, 2) not simplifying data structures when converting to linked data, and 3) focusing on making unique contributions by working with distinctive parts of the collection. The presentation also advocates for openly licensing metadata and using open vocabularies.
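Tip (1) is easy to see side by side: the sketch below records the same creator once as an opaque string and once as a URI that other datasets can link to. The example.org record URI is hypothetical; the DBpedia URI follows DBpedia's resource-naming scheme.

```python
# Sketch of tip (1): identify things with URIs, not strings.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS

EX = Namespace("http://example.org/records/")  # hypothetical namespace
g = Graph()
g.bind("dcterms", DCTERMS)

# a string literal: only humans can tell who this is
g.add((EX["r1"], DCTERMS.creator,
       Literal("Stevenson, Robert Louis, 1850-1894")))

# a URI: machines can follow the link and fetch more data about the person
g.add((EX["r1"], DCTERMS.creator,
       URIRef("http://dbpedia.org/resource/Robert_Louis_Stevenson")))

print(g.serialize(format="turtle"))
```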
RLS-athon: the challenges at National Library of Scotland and opportunities o... - Gill Hamilton
Presentation given at the RLS-athon on 9 November 2015 at the University of Edinburgh Carbon Centre. The RLS-athon looked at using the RDA (Resource Description and Access) content standard to describe the works of Scottish author Robert Louis Stevenson. This presentation outlined the issues associated with delivering on the Library's priority to describe its collections and make a third of them available digitally in the next 10 years, and how RDA is part of the solution (POTS).
Deus Ex Machina: is linked data the answer? - Gill Hamilton
Presentation given by Gill Hamilton, National Library of Scotland, at the OCLC seminar "Is there a library-shaped black hole in the web?" on 16 October 2015 at the Royal College of Surgeons, Edinburgh.
Gill's presentation explains the experiments undertaken at the Library into linked open data. She suggests several practical tips to help libraries prepare for linked open data, including recording URIs, not dumbing down your metadata, concentrating on your unique collections, openly licensing your metadata, using open vocabularies, and demanding better systems to manage linked data components and requirements.
Giving culture a helping hand: National Library of Scotland metadata and digi... - Gill Hamilton
Presentation given at "Museums working with Wiki: engaging audiences through open knowledge" seminar at Kelvin Grove Museum, Glasgow on 4 September 2015. Gill Hamilton, Digital Access Manager at National Library of Scotland discusses the drivers, issues and challenges in developing a metadata and digital content licence policy to encourage open use and re-use of the Library's collections.
The long and winding road: implementation of electronic legal deposit at National Library of Scotland - Gill Hamilton
Presentation made to The Scottish Working Group on Official Publications (SWOP) about the implementation of electronic legal deposit at National Library of Scotland. The presentation was given by Gill Hamilton, Digital Access Manager at National Library of Scotland on 26 February 2014 in Edinburgh.
Digitised collections and services at National Library of Scotland - Gill Hamilton
Presentation given at the National Collections and the Digital Humanities seminar at The University of Edinburgh, 14 February 2014. Presented by Gill Hamilton, Digital Access Manager, on behalf of National Library of Scotland.
LO(D) and behold: issues, tips and techniques for extending to the giant global graph - Gill Hamilton
Presentation given to the Cataloguing and Indexing Group Scotland seminar on Linked Open Data practices in archives and libraries, 18 November 2013. I explained the issues associated with discovering vocabulary URIs from literals, and tips and techniques that could be employed to help discovery of URIs.
Making mapping real: experience and thoughts from National Library of Scotland - Gill Hamilton
The document discusses challenges in mapping local data instances to a global graph. It describes extracting triples from a local database and assigning URIs using templates. Mappings from local to global identifiers are stored separately and can be added over time. String and statistical matching are used to match local instances to global concepts, with the goal of unique matches but humans sometimes needed for resolution.
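A sketch of that workflow: URIs for local rows are minted from a template, curated local-to-global mappings are kept in a separate table, and normalised string matching is used as a fallback, accepting only unique matches. All identifiers and labels below are hypothetical.

```python
# Sketch of local-to-global instance matching: template-minted URIs,
# a separately stored mapping table, and string matching as a fallback.
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import SKOS

LOCAL = Namespace("http://example.org/local/subject/")  # URI template

curated = {  # local id -> global URI, maintained separately, added over time
    "s17": "http://example.org/global/concept/printing",
}
global_labels = {  # candidate global concepts and their labels
    "http://example.org/global/concept/libraries-scotland": "libraries scotland",
}

def normalise(label: str) -> str:
    return " ".join(label.lower().replace("--", " ").split())

def match(local_id: str, label: str):
    """Return the global URI for a local instance, or None if unresolved."""
    if local_id in curated:            # a curated mapping always wins
        return URIRef(curated[local_id])
    wanted = normalise(label)
    hits = [uri for uri, glabel in global_labels.items()
            if normalise(glabel) == wanted]
    # accept only unique matches; humans resolve the ambiguous cases
    return URIRef(hits[0]) if len(hits) == 1 else None

g = Graph()
target = match("s42", "Libraries -- Scotland")
if target is not None:
    g.add((LOCAL["s42"], SKOS.exactMatch, target))
print(g.serialize(format="turtle"))
```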
Frankly my dear: National Library of Scotland's approach to open and linked data - Gill Hamilton
Presentation from lightning talk at 2nd UK Ontology Network Workshop, 11 April 2013. About the Library's development in open and linked open data and the challenges faced and addressed.
Open Knowledge Foundation Meetup 4: 24 January 2013 - Gill Hamilton
My lightning talk from the Open Knowledge Foundation Edinburgh meet-up on 24 January 2013. It's about our work at National Library of Scotland in securing a Wikipedian in Residence, our commitment to open data, and progress with linked open data.
Unlocking doors: recent initiatives in open and linked data at National Library of Scotland - Gill Hamilton
Presentation given to "Data publication and linked data in the humanities" workshop at National Library of Wales, 12 November 2012. This presentation has developed from previous as it explains how and why the Library modelled its database structure in to RDF rather than use pre-existing schemas
Unlocking doors: recent initiatives in open and linked data at National Library of Scotland - Gill Hamilton
Presentation given on 21 Sept 2012 at the Cataloguing and Indexing Group (Scotland) seminar on "Opening Library Linked Data to National Heritage: Perspectives on International Practice" http://www.slainte.org.uk/events/EvntShow.cfm?uEventID=2999
Open Knowledge Foundation Edinburgh meet-up #3 - Gill Hamilton
My lightning talk from Open Knowledge Foundation Edinburgh meet-up on 30 August. It's about recent initiatives with open and linked open data at National Library of Scotland.
Opening new doors: recent initiatives in open data at National Library of Scotland - Gill Hamilton
Presentation given at IFLA 2012 (Helsinki) on National Library of Scotland's low-cost initiatives and developments with open data and linked open data. Includes loading of data and resources to Flickr and YouTube, work with the Open Knowledge Foundation on how to publish open data, licensing open data as CC0, work with freeyourmetadata.org to learn how to use Google Refine for URI resolution, and work with Metadata Management Associates to model the structure of the Library's Digital Object Database as RDF.