Dr. Kepa Rodriguez, Data and Content Specialist, Archives Division, Yad Vashem
Integration and Retrieval of Heterogeneous Archival Metadata
2016 EVA/Minerva Jerusalem International Conference on Digitisation of Cultural Heritage
http://2016.minervaisrael.org.il
http://www.digital-heritage.org.il
1. EVA/Minerva 2016
Integration and Retrieval of Heterogeneous Archival Metadata
CONNECTING COLLECTIONS
Kepa J. Rodriguez – Archives Yad Vashem
09/11/2016
2. Outline
● Data integration in the first phase of the project
● Our current integration approach
● Retrieval of data using controlled vocabularies
● Development of the EHRI controlled vocabularies
3. Data integration in the first phase of the project
● Holding institutions delivered data in very different formats:
● XML, text files, CSV, JSON, etc.
● Ingestion into the portal was done case by case
● We interpreted the institution's data model and mapped it to our model
● Sometimes without help from the institution
● A lot of data was entered by hand
● The process is not sustainable; it cannot be repeated
● No automatic updates are possible
● If an institution updates its content, the data has to be updated by hand
● Other problems: infrastructure, persistent identifiers, etc.
4. Proposal for the second phase of the project
● Data conversion
● Data publication and synchronization
● Data ingestion
5. Data conversion
● Conversion tool: transforms different data formats into EAD:
● XML, JSON, CSV...
● Generic transformation
● Useful for a relevant number of institutions
● Reusable functions, such as mappings of specific fields of an institution's export format into EAD
● Utilities to configure specific transformations
● Validation of the output (see the validation sketch after this list):
● Machine validation: XML validation protocols
● Schematron, RNG
● Human validation: HTML preview including mark-up for validation errors
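A minimal sketch of the machine-validation step described above, assuming Python with lxml and a locally available RELAX NG schema; all file names are placeholders and this is not the EHRI tool itself:

from lxml import etree

# Placeholders: "ead.rng" stands for a local RELAX NG schema for EAD,
# "converted_finding_aid.xml" for the output of the conversion tool.
schema = etree.RelaxNG(etree.parse("ead.rng"))
document = etree.parse("converted_finding_aid.xml")

if schema.validate(document):
    print("EAD output is valid")
else:
    # The error log is the kind of information a human-readable
    # HTML preview with validation mark-up could be built from.
    for error in schema.error_log:
        print(f"line {error.line}: {error.message}")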
6. EAD File sample (1)
<archdesc level="subgrp">
  <did>
    <unitid>M.49.E</unitid>
    <unittitle encodinganalog="3.1.2">Testimonies of Holocaust Survivors collected by the
      Central Jewish Historical Commission in Poland, 1944-1947</unittitle>
    <physdesc encodinganalog="3.1.5">6845 files</physdesc>
    <langmaterial>
      <language langcode="deu" encodinganalog="3.4.3">German</language>
      <language langcode="pol" encodinganalog="3.4.3">Polish</language>
      <language langcode="yid" encodinganalog="3.4.3">Yiddish</language>
    </langmaterial>
    <repository>
      <corpname>ארכיון יד ושם / Yad Vashem Archives</corpname>
    </repository>
  </did>
  <scopecontent encodinganalog="3.3.1">
    <p>The collection consists of approximately 7,200 testimonies collected by the
      Centralna Żydowska Komisja Historyczna (Central Jewish Historical Committee) in
      Poland during its active years, 1944-1947.
      …..
      as well as testimonies from survivors who fought in partisan units and survivors who
      were in hiding.</p>
  </scopecontent>
…....
8. Data publication and synchronization
● We plan to use two data publication protocols (see the harvesting sketch after this list):
● OAI-PMH: one of the first protocols for the publication of data
● Publication of data in different formats: Dublin Core (default), EAD, etc.
● OAI-PMH servers are not easy to implement and maintain for small archives
● But we want to implement a client for institutions that already use it
● ResourceSync: a newer protocol
● Based on Sitemaps
● Data can be published on the web page of the institution
● Higher security
● Uses sitemaps to expose changes and updates
● Only modified and new data will be transferred to the portal
● Both are standard protocols of the Open Archives Initiative
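A minimal sketch of an OAI-PMH harvesting client of the kind described above, using only the standard protocol verbs; the endpoint URL is a placeholder, and "oai_dc" is the Dublin Core prefix every OAI-PMH repository must support (EAD prefixes vary per archive):

import requests
from xml.etree import ElementTree as ET

# Placeholder endpoint; every OAI-PMH repository exposes the same verbs.
BASE_URL = "https://archive.example.org/oai"
OAI = "{http://www.openarchives.org/OAI/2.0/}"

params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
while True:
    root = ET.fromstring(requests.get(BASE_URL, params=params, timeout=30).content)
    for record in root.iter(OAI + "record"):
        header = record.find(OAI + "header")
        identifier = header.findtext(OAI + "identifier")
        print("harvested", identifier)
    # An empty or missing resumptionToken means the harvest is complete;
    # otherwise request the next page of records.
    token = root.find(f"./{OAI}ListRecords/{OAI}resumptionToken")
    if token is None or not (token.text or "").strip():
        break
    params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}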
9. Data ingestion
● After data is ingested into the portal, it will receive a permanent URL:
● Formal protocol is in progress
● Necessary to publish our data in the Linked Open Data cloud
● Updates: data will be overwritten
● But the portal keeps the user-generated data
● But... is it enough for the user just to have all information in a single infrastructure?
10. Data retrieval
● The user needs to be able to retrieve information related to selected topics, places, people, organizations, creators...
● Regardless of which institution holds it
● Regardless of the language in which the metadata is written
11. EHRI controlled vocabularies
● EHRI Thesaurus (see the SKOS sketch after this list)
● Concepts: hierarchy of concepts formalized in SKOS
● A first set translated into 10 languages
● Made by historians and content specialists
● Authority lists:
● Named entities or instances of the concepts
● Proposed by historians and specialists: not really useful for indexing and retrieval of data
● During import a lot were added by hand to address the necessities of the real data
● Domain-specific authorities: Ghettos, Camps, Administrative Districts
● Vocabularies created for applications in the portal:
● Two research guides
● Linked to the EHRI Thesaurus
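A minimal sketch of how a multilingual thesaurus concept can be expressed in SKOS, using Python's rdflib; the namespace, identifiers and labels here are illustrative, not actual EHRI vocabulary entries:

from rdflib import Graph, Namespace, Literal
from rdflib.namespace import SKOS

# Illustrative namespace, not the real EHRI vocabulary base URI.
EHRI = Namespace("https://example.org/ehri/keywords/")

g = Graph()
concept = EHRI["persecution-of-jews"]
g.add((concept, SKOS.prefLabel, Literal("Persecution of Jews", lang="en")))
g.add((concept, SKOS.prefLabel, Literal("Judenverfolgung", lang="de")))
g.add((concept, SKOS.broader, EHRI["persecution"]))  # hierarchy of concepts

print(g.serialize(format="turtle"))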
12. Problems of the first approach of the project
● A vocabulary built with knowledge about the Shoah can be helpful to represent the history, but not necessarily the documentation:
● The compilation of an encyclopedia and the implementation of an engine for cataloguing and retrieval are two very different things and require different strategies and kinds of expertise.
● The vocabularies should be able to retrieve the real existing data:
● Vocabularies should be able to describe the data, not only the content... e.g. types of documents, physical format of the data...
● A strategy to extend the datasets when new data brings new necessities has to be implemented.
13. The reality of the data
● Different institutions use different systems to assign keywords (or no system at all)
● Keywords can have different relevance in different systems
● In a national archive “holocaust” can be a relevant keyword, but it is not relevant for the EHRI portal.
● The same keyword can have different meanings in different knowledge bases
● e.g. “labor” in one set of imported data corresponds to “forced labor”, in another set to “trade unions”
● Relevant information is often given as free text:
● Natural Language Processing is necessary to extract this information, but within the project we can only do this at an experimental level.
14. EHRI's data driven approach (1)
● Extraction of access points from the EAD files during import (see the extraction sketch after this sample)
<controlaccess>
  <geogname>Poland</geogname>
  <geogname>Warsaw</geogname>
</controlaccess>
<controlaccess>
  <subject>Persecution of Jews</subject>
  <subject>Testimonies, Biographies</subject>
  <subject>Holocaust survivors</subject>
</controlaccess>
<controlaccess>
  <corpname>Centralna Żydowska Komisja Historyczna</corpname>
</controlaccess>
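A minimal sketch (assuming Python with lxml and a local, non-namespaced EAD file; the file name is a placeholder) of pulling the access points shown above out of <controlaccess> during import:

from collections import defaultdict
from lxml import etree

# Placeholder file name; real EAD files may also use a namespace,
# which this sketch does not handle.
doc = etree.parse("finding_aid.xml")

access_points = defaultdict(list)
for controlaccess in doc.iter("controlaccess"):
    for child in controlaccess:
        # child.tag is e.g. "geogname", "subject" or "corpname"
        access_points[child.tag].append((child.text or "").strip())

print(dict(access_points))
# e.g. {"geogname": ["Poland", "Warsaw"], "subject": [...], "corpname": [...]}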
15. EHRI's data driven approach (2)
● Persons, corporate bodies:
● Check whether we have corresponding authority files
● If we have: link the description unit with the corresponding authority file
● If we don't: create a new authority file
● Priority of EHRI: creators of archival collections
● Places (see the GeoNames linking sketch after this list):
● Link the places with the geographical database GeoNames
● Problematic for historical places; some of them will be added as an extra vocabulary.
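A minimal sketch of linking a place access point to GeoNames via its public search web service; a registered GeoNames username is required, and "demo" is only a rate-limited placeholder:

import requests

def link_place(name, username="demo"):
    # GeoNames search web service; returns the best match or None.
    response = requests.get(
        "http://api.geonames.org/searchJSON",
        params={"q": name, "maxRows": 1, "username": username},
        timeout=10,
    )
    results = response.json().get("geonames", [])
    if not results:
        return None  # e.g. historical places that need an extra vocabulary
    top = results[0]
    return {"name": top["name"], "geonameId": top["geonameId"]}

print(link_place("Warsaw"))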
16. EHRI's data driven approach (3)
● Concepts/terms: the most complicated case
● Archives have used very different strategies for concepts:
● Some institutions compose terms using different rules (or no rules at all)
● Subject: “Jews--Persecution--France” (data from USHMM)
● EHRI has an atomic approach
● Subject: “Persecution of Jews”
● Place: “France”
● Steps to process concepts/terms (see the normalization sketch after this list):
● Terms are normalized and de-duplicated
● If there is an equivalent term in the thesaurus we establish a link
● If there is no equivalent term the concept goes to further analysis
● If necessary, a board of experts will consider accommodating a new concept in our concept hierarchy.
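A minimal sketch (with placeholder concept identifiers, not the EHRI pipeline) of splitting a pre-coordinated subject heading into atomic terms, normalizing and de-duplicating them, and routing unmatched terms to further analysis:

# Placeholder thesaurus lookup: normalized label -> concept identifier.
THESAURUS = {
    "persecution of jews": "ehri-term-0001",
    "france": "geonames-3017382",
}

def normalize(heading):
    # "Jews--Persecution--France" -> ["jews", "persecution", "france"]
    return [part.strip().lower() for part in heading.split("--") if part.strip()]

def classify(terms):
    matched, further_analysis = {}, []
    for term in sorted(set(terms)):          # de-duplicate
        if term in THESAURUS:
            matched[term] = THESAURUS[term]  # establish a link
        else:
            further_analysis.append(term)    # candidate for the expert board
    return matched, further_analysis

print(classify(normalize("Jews--Persecution--France")))
# ({'france': 'geonames-3017382'}, ['jews', 'persecution'])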
17. Ghettos and Concentration Camps
● We are evaluating starting a Wikidata project for ghettos and concentration camps
● Strategy:
● Extract information from the current thesaurus and alternative sources
● Encyclopedic knowledge
● Data from project partners
● Integration of all this data in the Wikidata platform
● Enrichment with the help of the community
● Multilingual labels and no controversial information
● Finally, the data in Wikidata and in the portal should be synchronized
18. NIOD Institute for War, Holocaust and Genocide Studies (NL)
CEGESOMA Centre for Historical Research and Documentation on War and Contemporary Society (BE)
Jewish Museum in Prague (CZ)
Center for Holocaust Studies at the Institute for Contemporary History in Munich (DE)
YAD VASHEM The Holocaust Martyrs’ and Heroes’ Remembrance Authority (IL)
United States Holocaust Memorial Museum (USA)
Bundesarchiv (DE)
The Wiener Library Institute for the Study of the Holocaust & Genocide (UK)
Holocaust Documentation Centre (SK)
Polish Center for Holocaust Research (PL)
The Jewish Museum of Greece (GR)
Jewish Historical Institute (PL)
King’s College London (UK)
Ontotext AD (BG)
Elie Wiesel National Institute for the Study of Holocaust in Romania (RO)
DANS Data Archiving and Networked Services (NL)
Shoah Memorial, Museum, Center for Contemporary Jewish Documentation (FR)
ITS International Tracing Service (DE)
Hungarian Jewish Archives (HU)
INRIA Institute for Research in Computer Science and Automation (FR)
Vilna Gaon State Jewish Museum (LT)
VWI Vienna Wiesenthal Institute for Holocaust Studies (AT)
Foundation Jewish Contemporary Documentation Center (IT)
CONNECTING KNOWLEDGE