The DM2E project developed a data model to standardize metadata for digitized manuscripts. It specialises the Europeana Data Model (EDM), adding more than 50 new properties and 23 new or reused classes to better represent the physical and conceptual aspects of manuscripts. The DM2E model is documented in PDF and OWL and published online for both humans and machines. Future work includes handling uncertain statements about timespans and creators.
Specialising the EDM for Digitised Manuscripts (SWIB13)
1. Specialising the EDM for Digitised Manuscripts
Kai Eckert¹, Steffen Hennicke², Evelyn Dröge², Julia Iwanowa², Violeta Trkulja²
¹Universität Mannheim, ²Humboldt-Universität zu Berlin
Semantic Web in Libraries - Hamburg, 27.11.2013
co-funded by the European Union
2. Digitised Manuscripts to Europeana
• EU-funded Europeana satellite project
• Duration: three years (2012–2015)
• Partners from Germany, Austria, Norway, Greece, the UK and Italy
• DM2E works on:
– a tool-chain for data migration to Europeana and the LOD Web (OMNOM),
– a digital research environment for the Digital Humanities (PUNDIT),
– an open community of cultural heritage professionals (OPENGLAM)
27.11.2013
Kai Eckert: Specialising the EDM for Digitised Manuscripts
2
4. DM2E: Provided Content
• Metadata about manuscripts:
– described by: TEI, MAB2, MARC, EAD, METS/MODS, and database content
– in different languages (DE, EN, HEB, AR)
– 118,000+ items, 20,006,930+ pages
• Content: full text, facsimiles, transcriptions
[Diagram: the source formats, languages and content types mapped to the DM2E Model]
5. DM2E: Data Model
• Semantically and structurally heterogeneous data
– e.g. EAD, METS, TEI, MARCXML and MAB2, relational databases, proprietary schemas
• The Europeana Data Model (EDM) is made for this scenario!
– provides a generic semantic interoperability layer
– enables the definition of “application profiles” which may address the needs of specific communities
• The DM2E Data Model (DM2E)
– is an “application profile” of the EDM for the domain of handwritten manuscripts
– retains rich descriptions by specialising the EDM
6. DM2E: Specialisation approach
• RDF(S) allows the specialisation of EDM classes and properties
– use of rdfs:subClassOf
– use of rdfs:subPropertyOf
• An “application profile” typically also includes
– additional ontological restrictions
– documentation
[Diagram: subproperty chain dm2e:writer → dcterms:creator → dcterms:contributor → dc:contributor → edm:hasMet, each arrow labelled rdfs:subPropertyOf]
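The subproperty chain above translates directly into Turtle. The following is a minimal sketch, with the chain read off the slide's diagram and the edm:/dm2e: namespace URIs taken from later slides; the published DM2E OWL file may declare additional axioms:

```turtle
@prefix rdfs:    <http://www.w3.org/2000/01/rdf-schema#> .
@prefix dc:      <http://purl.org/dc/elements/1.1/> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix edm:     <http://www.europeana.eu/schemas/edm/> .
@prefix dm2e:    <http://onto.dm2e.eu/schemas/dm2e/1.0/> .

# Each property specialises the one to its right in the chain:
dc:contributor      rdfs:subPropertyOf edm:hasMet .
dcterms:contributor rdfs:subPropertyOf dc:contributor .
dcterms:creator     rdfs:subPropertyOf dcterms:contributor .
dm2e:writer         rdfs:subPropertyOf dcterms:creator .
```

Because rdfs:subPropertyOf is transitive under RDFS entailment, any triple using dm2e:writer also implies the corresponding dcterms:creator and edm:hasMet triples, which is exactly what keeps specialised data interoperable with plain EDM consumers.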
7. DM2E: Specialisation Guidelines
• Empirical analysis of provided source metadata
• Iterative mappings to the EDM
• Close cooperation with data providers
– agree on shared conceptualisations
• Create rich and connected representations
– retain original semantics as much as possible
– use existing URIs of resources
– assign a class to the resources (rdf:type)
8. DM2E: Interoperability approach
• Create new classes or properties in the DM2E namespace only if there is no other suitable option available
– reuse existing namespaces (ontologies)
– mind existing semantics (scope notes, domains, ranges)
• Types, roles and relations between agents
– Friend-of-a-Friend (FOAF) [FOAF] (types of agents)
– Publishing Roles Ontology (PRO) [SPAR] (roles of agents in the publication process)
– VIVO [VIVO] (types of agents)
• Detailed semantics on bibliographic entities
– FRBR-aligned Bibliographic Ontology (FaBiO) [SPAR]
– Citation Typing Ontology (CiTO) [SPAR]
– Bibliographic Ontology (BIBO) [BIBO]
9. DM2E Model: Class-Specialisation
• 23 new or reused classes, mainly for
– physical and conceptual parts of handwritten manuscripts
– as found in our source metadata
– different types of Agents
[Diagram: class overview — edm:NonInformationResource, edm:Place, edm:PhysicalThing, edm:Event, edm:Agent, edm:TimeSpan, skos:Concept, dm2e:Book, dm2e:Work, dm2e:Page, dm2e:Institution, dm2e:Person, …]
10. edm:PhysicalThing
Physical and tangible aspects of handwritten manuscripts.
[Diagram: is-a hierarchy — edm:PhysicalThing under edm:NonInformationResource, with the subclasses dm2e:Cover, dm2e:Document, dm2e:Photo, dm2e:File, dm2e:Page, dm2e:Manuscript, bibo:Journal, bibo:Book and bibo:Letter]
Namespaces:
http://www.europeana.eu/schemas/edm/
http://onto.dm2e.eu/schemas/dm2e/1.0/
http://purl.org/ontology/bibo/
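The class hierarchy above can be sketched in Turtle with the namespace URIs listed on the slide. This assumes each listed class is a direct subclass of edm:PhysicalThing, since the slide does not show intermediate levels:

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix edm:  <http://www.europeana.eu/schemas/edm/> .
@prefix dm2e: <http://onto.dm2e.eu/schemas/dm2e/1.0/> .
@prefix bibo: <http://purl.org/ontology/bibo/> .

edm:PhysicalThing rdfs:subClassOf edm:NonInformationResource .

# Reused and newly minted classes for tangible aspects of manuscripts:
dm2e:Manuscript rdfs:subClassOf edm:PhysicalThing .
dm2e:Page       rdfs:subClassOf edm:PhysicalThing .
dm2e:Cover      rdfs:subClassOf edm:PhysicalThing .
bibo:Book       rdfs:subClassOf edm:PhysicalThing .
bibo:Letter     rdfs:subClassOf edm:PhysicalThing .
```

Note how the interoperability approach of slide 8 shows here: bibo:Book and bibo:Letter are reused from the Bibliographic Ontology rather than re-minted in the DM2E namespace.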
11. Contextual Resources: Agent
Different types of agents.
[Diagram: is-a hierarchy under edm:Agent with foaf:Person, foaf:Organisation, vivo:University, vivo:Library, vivo:Museum and dm2e:Archive]
Namespaces:
http://www.europeana.eu/schemas/edm/
http://xmlns.com/foaf/0.1/
http://onto.dm2e.eu/schemas/dm2e/1.0/
http://vivoweb.org/ontology/core#
12. DM2E Model: Properties-Specialisation
• Property-centric modelling
– more than 50 new properties
• The documentation for the DM2E Data Model contains only the EDM properties which are utilized
– to keep the documentation clear
– e.g. dcterms:replaces, dc:source and dc:conformsTo are not used
• Domain and range restrictions
– some OWL restrictions on properties, in order to encourage the use of resources of a specific type, e.g.
• CHO hasPart CHO
• WebResource hasPart WebResource
• Some EDM properties are mandatory in DM2E
– dc:type: at least one of the physical (e.g. dm2e:Page) or logical (e.g. dm2e:Paragraph) aspects
– dc:subject: ideally a URI from a controlled vocabulary
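A range restriction like "CHO hasPart CHO" can be expressed in OWL as a universal (owl:allValuesFrom) restriction on the class. The following is only a sketch of one way to phrase it, not necessarily how the published DM2E OWL file formulates it; it assumes the part-whole link is dcterms:hasPart, which EDM reuses:

```turtle
@prefix rdfs:    <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:     <http://www.w3.org/2002/07/owl#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix edm:     <http://www.europeana.eu/schemas/edm/> .

# "CHO hasPart CHO": on a ProvidedCHO, values of dcterms:hasPart
# should themselves be ProvidedCHOs.
edm:ProvidedCHO rdfs:subClassOf [
    a owl:Restriction ;
    owl:onProperty dcterms:hasPart ;
    owl:allValuesFrom edm:ProvidedCHO
] .
```

Under OWL semantics such a restriction does not reject other values; it infers that they are ProvidedCHOs, which is why the slide says the restrictions "encourage" rather than enforce the use of specific types.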
13. DM2E Model: Property Extensions
Example: adding new properties as subproperties of dcterms:creator.
[Diagram: dcterms:creator with the subproperties pro:author, pro:illustrator, dm2e:artist, dm2e:composer, dm2e:painter and dm2e:writer]
14. Outlook: Uncertain Statements
Part of the next model version: how to deal with uncertain timespans and presumed creators?
• Problem: confidence declarations for individual RDF statements require named graphs or reification
• Solution: introduce intermediate resources for agents and timespans, so that the uncertainty is attached to a resource rather than to a statement:

“The creator of the CHO is presumably Goethe.”
“The timespan was somewhere in the 1920s and lasted 2 years.”

timeSpan1 a edm:TimeSpan ;
    uncertainBegin 1920 ;
    uncertainEnd 1929 ;
    duration 2 .              (duration is optional)

presumableAgent1 a PresumableAgent ;
    isPresumably goethe ;
    confidence 0.8 .          (confidence is optional)

res1 dc:creator presumableAgent1 .
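For contrast, the reification route mentioned under “Problem” would attach the confidence to a description of the statement itself, using the standard rdf:Statement vocabulary. A sketch (the ex: resources and the ex:confidence property are hypothetical illustrations, not DM2E terms):

```turtle
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix dc:  <http://purl.org/dc/elements/1.1/> .
@prefix ex:  <http://example.org/> .

# Reify "res1 dc:creator goethe" so a confidence value can be
# attached to the statement:
ex:stmt1 a rdf:Statement ;
    rdf:subject   ex:res1 ;
    rdf:predicate dc:creator ;
    rdf:object    ex:goethe ;
    ex:confidence 0.8 .
```

The drawback, and the reason the slide prefers intermediate resources, is that the reified statement is only a description: ex:res1 dc:creator ex:goethe is not itself asserted, so plain consumers see no creator at all.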
15. Documentation: PDF and OWL
The PDF and OWL representations can be accessed via the project's website:
dm2e.eu/document/#DM2EModelSpecification
16. Documentation: Online
• Human- and machine-readable
• Version 1.0
onto.dm2e.eu
17. Summary
• The DM2E Data Model is an application profile of the EDM for the domain of manuscripts
• DM2E v1.0: the first operational version and the current release
• DM2E v1.1: the next version, under development
• Work is ongoing and feedback is welcome!
18. Thank you for your attention!
Questions and feedback:
Steffen Hennicke, Julia Iwanowa, Evelyn Dröge
vorname.nachname@ibi.hu-berlin.de (firstname.lastname)