Europeana and schema.org
Presentation at the Dublin Core conference, special session on Schema.org, Sept 5, 2013.
Conference site: http://dcevents.dublincore.org/index.php/IntConf/dc-2013/
Presentation at the Education Session of the American Art Collaborative (AAC) Linked Open Data Initiative, 31 March 2015. http://americanartcollaborative.org/
Wikidata, a target for Europeana's semantic strategy - GLAM-WIKI 2015 - Antoine Isaac
"Wikidata, a target for Europeana's semantic strategy". Presentation at the GLAM-Wiki 2015 conference with Valentine Charles, Hugo Manguinhas, Antoine Isaac and Vladimir Alexiev. http://nl.wikimedia.org/wiki/GLAM-WIKI_2015/
Slides for Culture Hack panel @SXSW2013 : http://schedule.sxsw.com/2013/events/event_IAP4580
Some slides re-used from Harry Verwayen (http://www.slideshare.net/hverwayen/business-model-innovation-open-data) and Julia Fallon
Europeana is the European Union's digital platform for cultural heritage, currently providing access to over 4.6 million digitized items from European libraries, archives, and museums. The document discusses efforts to enhance Europeana by developing a semantic layer and semantics-enabled search capabilities that can better connect related concepts and expand queries using metadata and controlled vocabularies. It describes prototypes developed by the Europeana Thought Lab that cluster search results and autocomplete queries using semantic relationships between concepts. Key challenges mentioned include converting legacy metadata into semantic formats, aligning different descriptive ontologies and vocabularies, and ensuring the semantic features can be scaled for Europeana's production environment.
Europeana - American Art Collaborative LOD Meeting - Antoine Isaac
Presentation at a seminar on linked data and art museums at the Smithsonian Institution, April 29, 2013.
Other presentations at http://lodlam.net/2013/05/07/linked-open-data-in-art/
Europeana and the relevance of the DM2E results - Antoine Isaac
Presentation on the value of results of the DM2E project, from the Europeana perspective.
Presented at the DM2E final event, Pisa, Dec 11 2014
http://dm2e.eu/dm2e-final-event-registration-and-agenda/
Europeana is a digital platform that aggregates over 30 million cultural heritage objects from various European institutions. It aims to make this content openly accessible online through its website, apps, and APIs. The Europeana Data Model was created to better structure metadata and link objects to related entities and multilingual descriptions. Europeana seeks to facilitate reuse of this content through its linked open data approach and by distinguishing between rights for metadata and digital objects. It also works on innovations like semantic search and annotation to help users discover and interact with the cultural heritage materials.
Data modelling at Europeana and DM2E - SMW13 - Antoine Isaac
Presentation on how the Europeana Data Model is used and extended in the Europeana and DM2E projects.
Made for the Semantic Media Web innovation day, Berlin, Sept 27, 2013: http://semantic-media-web.de/innovationsforum/metadaten/
EuropeanaTech update - Europeana AGM 2015 - Antoine Isaac
Update on the EuropeanaTech community activities. Presentation with Greg Markus, Sound and Vision. Europeana Annual General Meeting 2015, November 2-4, 2015. http://pro.europeana.eu/event/europeana-annual-general-meeting-2015
Presentation at the H2020-CEF Infoday, 16 January 2014 http://ec.europa.eu/digital-agenda/en/news/information-and-networking-days-h2020-work-programme-2014-2015-connecting-europe-facility
Multilingual challenges for accessing digitized culture online - Riga Summit 15 - Antoine Isaac
"Multilingual challenges for accessing digitized culture online". Presentation at the Riga Summit on the Multilingual Digital Single Market, April 27-29 2015.
http://www.rigasummit2015.eu/
Achieving Interoperability between the CARARE Schema for Monuments and Sites ... - Antoine Isaac
This document discusses mapping the metadata schema for the CARARE project to the Europeana Data Model (EDM). CARARE aggregates cultural heritage content for archaeology and historic buildings and provides it to Europeana. The mapping identifies correspondences between elements in the two models so CARARE can submit good metadata to Europeana. It examines different scenarios for how CARARE heritage assets and digital resources map to EDM classes like ProvidedCulturalHeritageObject and WebResource. The mapping provides better metadata for 2 million CARARE objects in Europeana and prompted updates to schemas. It confirms EDM is relevant for aggregations and shows metadata mapping requires human supervision.
Designing a multilingual knowledge graph - DCMI2018 - Antoine Isaac
Presentation for the paper "Designing a multilingual knowledge graph as a service for cultural heritage" at the DCMI 2018 conference. https://www.dublincore.org/conferences/2018/abstracts/#559
December 2, 2015: NISO/NFAIS Virtual Conference: Semantic Web: What's New and... - DeVonne Parks, CEM
This document discusses Europeana's use of semantic web technologies and linked data to improve access to cultural heritage collections. It summarizes that Europeana aggregates metadata from various cultural institutions to provide access to over 48 million digitized objects. It has implemented the Europeana Data Model to represent metadata in a more granular, semantically linked way using vocabularies like GeoNames, DBpedia, and AAT. This has enabled automatic enrichment of metadata as well as multilingual and conceptual searching. Linked open data approaches provide technical and strategic benefits to Europeana by facilitating data sharing and enrichment across domains.
Linked Data for Cultural Heritage: the Europeana approach - Valentine Charles
Presentation given on April 28th in Paris at an international conference organised by the ISSN IC.
http://www.issn.org/international-conference-organised-by-issn-ic-bibliographic-metadata-getting-linked/
Presenter: Stuart Macdonald
Presentation first given at Open Knowledge Scotland event at Inspace in Edinburgh, 13 May 2010.
An EDINA project to create an online crowdsourcing tool that will combine data from digitised Scottish Post Office Directories (PODs) with contemporaneous historical maps.
This document summarizes a presentation on using digital audio archives to promote performance studies. It discusses two projects - the Baudelaire Song Project and Visualising Voice. The Baudelaire Song Project analyzes French art songs set to the poetry of Baudelaire over four years with AHRC funding. Visualising Voice uses a Europeana Research Award to create a public-facing web interface for digital audio analysis. Both projects use open-access digital archives but face challenges regarding language barriers, audio quality, copyright and data storage.
This document discusses Europeana, a digital library that provides access to Europe's cultural heritage collections. It describes Europeana's vision of being a single access point to digital content from libraries, archives and museums across Europe. It also discusses linking Europeana data to external datasets using semantic web technologies like SKOS and Linked Open Data to enable new scholarly and eLearning applications by connecting related concepts and making new discoveries.
Challenges for the Language Technology Industry - Antoine Isaac
This document summarizes challenges for the language technology industry in Europe related to Europeana, a platform providing access to cultural heritage collections across Europe. It notes that Europeana provides access to over 33 million objects from over 2,300 contributors in 36 countries, with metadata in 33 languages. However, it faces challenges in facilitating re-use and access across languages due to the diversity of languages and domains in its collections. It discusses the need for automatic translation and natural language processing tools to address multilingual search and access issues at Europeana's scale. The document also outlines resource constraints for libraries, archives, and museums in developing language technologies, and their role in providing open data and use cases to the industry.
Linked Open Data Principles, Technologies and Examples - Open Data Support
A theoretical and practical introduction to linked data, covering the value proposition, the theory and foundations, and practical examples. The material is tailored to the context of the EU institutions.
Nelson Piedra, Janneth Chicaiza and Jorge López (Universidad Técnica Particular de Loja), Edmundo Tovar (Universidad Politécnica de Madrid), and Oscar Martínez (Universitas Miguel Hernández)
Explore the advantages of using linked data with OERs.
Challenges in implementing open data policies (LOD Brasil 2014 panel) - Augusto Herrmann Batista
The document discusses the challenges of implementing open data policies, including the need for commitment from the whole organization, prioritization under budget constraints, and the difficulty of retaining knowledge. It also presents the ODRA method for assessing open data readiness, along with examples of results such as federal budget data published as LOD and ministries' open data plans.
The document discusses querying Linked Data using Büchi automata. It introduces Linked Data and SPARQL queries, and notes the infinite nature of social networking applications and Linked Open Numbers. It then discusses using Büchi automata to verify webs of Linked Data by modeling their infinite behavior. The authors propose representing SPARQL queries on infinite webs of Linked Data using Büchi automata with infinite input to check for eventual computability.
Master's dissertation presentation - Automatic verification of a BIM model - Ricardo Moço
This document describes a master's dissertation on the automatic verification of BIM models. The main objective was to develop a tool that automates the quality assessment of BIM projects by applying a set of rules. A Portuguese assessment method was tested on a case study, and the results were analysed to evaluate the capabilities and limitations of automatic verification. The conclusion was that fully automatic verification is not yet feasible, but the tool developed...
Roberto Navigli gave a presentation on BabelNet, Babelfy, and related projects. BabelNet is a multilingual semantic network created by merging various knowledge resources, including WordNet, Wikipedia, Wiktionary, and other wordnets. It contains over 14 million entries in 271 languages and integrates both encyclopedic and lexicographic knowledge. BabelNet aims to provide a unified semantic representation of concepts across languages to support multilingual natural language processing applications.
What is pattern recognition (lecture 4 of 6) - Randa Elanwar
In this series I intend to simplify a beautiful branch of computer science that we as humans use in everyday life without knowing it. Pattern recognition is a sub-branch of computer vision research and is tightly related to digital signal processing research as well as machine learning and artificial intelligence.
Principles for Developing Projects Using BIM Tools ... - João Poças Martins
The document discusses principles for developing projects using BIM tools. It presents BIM modelling best practices from other countries and proposes modelling rules for Portuguese structural projects. It also explains the different levels of development of a BIM model across the phases of a project.
This document discusses two presentations on cognition for the semantic web. The first presentation discusses methods for involving humans in semantic data management, including crowdsourcing, citizen science, and games with a purpose. It provides examples of how these techniques can be used for tasks like data linking and validation. The second presentation discusses building cognitive and semantic systems to support understanding data and phenomena through visual examples. It aims to explain why and how these systems can make sense of data and foster understanding.
This document discusses Linked Open Data and how to publish open government data. It explains that publishing data in open, machine-readable formats and linking it to other external data sources increases its value. It provides examples of published open government data and outlines best practices for making data open through licensing, standard formats like CSV and XML, using URIs as identifiers, and linking to related external data. The key benefits outlined are empowering others to build upon the data and improving transparency, competition and innovation.
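The practices this summary lists (stable URIs as identifiers, machine-readable formats, links between datasets) can be made concrete with a small sketch: turning a CSV of government records into N-Triples statements by minting a URI per row. The CSV contents, column names, and base URI below are all invented for illustration.

```python
import csv
import io

# Hypothetical sample of open government data (values are made up).
raw = "id,name\n42,Springfield High\n43,Shelbyville Elementary\n"
BASE = "http://data.example.gov/school/"

def csv_to_ntriples(text):
    """Turn each CSV row into an N-Triples line: mint a URI from the
    stable 'id' column so other datasets can link to the same resource."""
    lines = []
    for row in csv.DictReader(io.StringIO(text)):
        subject = f"<{BASE}{row['id']}>"
        lines.append(f'{subject} <http://purl.org/dc/terms/title> "{row["name"]}" .')
    return lines

for triple in csv_to_ntriples(raw):
    print(triple)
```

The key design point is the identifier: because each row gets a dereferenceable URI rather than a bare local ID, external publishers can assert their own triples about the same school, which is exactly how the linking described above increases a dataset's value.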
Linked Open Data-enabled Strategies for Top-N Recommendations - Cataldo Musto
Linked Open Data-enabled Strategies for Top-N Recommendations - Cataldo Musto, Pierpaolo Basile, Pasquale Lops, Marco De Gemmis and Giovanni Semeraro - 1st Workshop on New Trends in Content-based Recommender Systems, co-located with ACM Recommender Systems 2014
The document discusses pattern recognition and face recognition systems. It describes how face recognition systems work by measuring nodal points on faces to create a unique "face print." The process involves face detection, extraction of features, comparison to other faces in a database, and a match or non-match determination. Key components are data acquisition, preprocessing, classification, and decision making. Advantages are convenience and low cost, while disadvantages include inability to distinguish identical twins. The document concludes that face recognition technology is now economical, reliable and accurate enough for widespread use.
Within the course, we will present Linked Data as a set of best practices for publishing and connecting structured data on the Web. These best practices have been adopted by an increasing number of data providers over the past years, leading to the creation of a global data space that contains many billions of assertions – the Web of Linked Data.
What is #LODLAM?! Understanding linked open data in libraries, archives [and ... - Alison Hitchens
This document provides an overview of linked open data (LOD) and the Resource Description Framework (RDF) and their applications in libraries, archives, and museums (LODLAM). It begins by defining linked data and how it extends standard web technologies to share structured data between computers. The document then discusses using structured, machine-readable data to describe resources like people, and how to structure this data using RDF. It provides examples of libraries and archives sharing controlled vocabularies, unique resources and holdings data as linked open data. The document concludes by reviewing current LODLAM projects and the potential for libraries and archives to both contribute and consume linked open data.
Intro to Linked Open Data in Libraries, Archives & Museums - Jon Voss
This document discusses a presentation on Linked Open Data in libraries, archives, and museums. The presentation introduces Linked Open Data and how it is being used in cultural heritage institutions. It discusses representing data as graphs using triples and RDF, important vocabularies and ontologies, and following Tim Berners-Lee's principles of Linked Data. The presentation also covers legal and licensing considerations for publishing open cultural data on the web.
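The triple model the presentation describes can be sketched in a few lines: statements are (subject, predicate, object) tuples, and a query is a pattern in which a wildcard matches anything, much as variables do in SPARQL. The data and prefix names below are toy examples, not real vocabulary terms.

```python
# Toy graph: each statement is a (subject, predicate, object) tuple.
# The "ex:", "dc:", "rdf:" and "foaf:" prefixes are illustrative only.
triples = [
    ("ex:MonaLisa", "dc:creator", "ex:LeonardoDaVinci"),
    ("ex:MonaLisa", "dc:title", "Mona Lisa"),
    ("ex:LeonardoDaVinci", "rdf:type", "foaf:Person"),
]

def match(pattern, data):
    """Return every triple consistent with the pattern;
    None in any position acts as a wildcard."""
    return [t for t in data
            if all(p is None or p == v for p, v in zip(pattern, t))]

# "What do we know about the Mona Lisa?"
for triple in match(("ex:MonaLisa", None, None), triples):
    print(triple)
```

Pattern matching over triples like this is the core operation behind the graph queries mentioned above; real RDF stores add URIs, typed literals, and indexes, but the shape of the data is the same.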
Museum Collections Management: Possibilities for Access and Use with Linked D... - cbogen
Carly Bogen completed a practicum at a museum where she worked on their collections management system, wrote grants, and assisted with strategic planning. She discusses how modern collection management systems can link objects and their associated data. However, much of this data is not publicly available due to issues like copyright and data sensitivity. Linked open data could help make more museum collection data accessible by standardizing its structure and linking it across institutions. However, barriers include a lack of resources and fears about data accuracy and control. Integrating linked open data with collection management software could help lower these barriers. A few museums have begun publishing linked open data to make their collections more discoverable and connectable with others.
FAIR Computational Workflows
Computational workflows capture precise descriptions of the steps and data dependencies needed to carry out computational data pipelines, analysis and simulations in many areas of Science, including the Life Sciences. The use of computational workflows to manage these multi-step computational processes has accelerated in the past few years driven by the need for scalable data processing, the exchange of processing know-how, and the desire for more reproducible (or at least transparent) and quality assured processing methods. The SARS-CoV-2 pandemic has significantly highlighted the value of workflows.
This increased interest in workflows has been matched by the number of workflow management systems available to scientists (Galaxy, Snakemake, Nextflow and 270+ more) and the number of workflow services like registries and monitors. There is also recognition that workflows are first-class, publishable Research Objects just as data are. They deserve their own FAIR (Findable, Accessible, Interoperable, Reusable) principles and services that cater for their dual roles as explicit method description and software method execution [1]. To promote long-term usability and uptake by the scientific community, workflows (as well as the tools that integrate them) should become FAIR+R(eproducible) and citable, so that authors' credit is attributed fairly and accurately.
The work on improving the FAIRness of workflows has already started and a whole ecosystem of tools, guidelines and best practices has been under development to reduce the time needed to adapt, reuse and extend existing scientific workflows. An example is the EOSC-Life Cluster of 13 European Biomedical Research Infrastructures which is developing a FAIR Workflow Collaboratory based on the ELIXIR Research Infrastructure for Life Science Data Tools ecosystem. While there are many tools for addressing different aspects of FAIR workflows, many challenges remain for describing, annotating, and exposing scientific workflows so that they can be found, understood and reused by other scientists.
This keynote will explore the FAIR principles for computational workflows in the Life Science using the EOSC-Life Workflow Collaboratory as an example.
[1] Carole Goble, Sarah Cohen-Boulakia, Stian Soiland-Reyes, Daniel Garijo, Yolanda Gil, Michael R. Crusoe, Kristian Peters, and Daniel Schober. FAIR Computational Workflows. Data Intelligence 2020; 2(1-2): 108-121. https://doi.org/10.1162/dint_a_00033
Creating knowledge out of interlinked data - Sören Auer
This document discusses creating knowledge from interlinked data. It notes that while reasoning over large datasets does not currently scale well, linked data approaches are more feasible as they allow for incremental improvement. The document outlines the linked data lifecycle including extraction, storage and querying, authoring, linking, and enrichment of semantic data. It provides examples of projects that extract, store, author and link diverse datasets including DBpedia, LinkedGeoData, and statistical data. Challenges discussed include improving query performance, developing standardized interfaces, and increasing the amount of interlinking between datasets.
This document discusses computational workflows and FAIR principles. It begins by providing background on computational workflows and their increasing importance. It then discusses challenges around finding, accessing, and sharing workflows. Next, it explores how applying FAIR principles to workflows could help address these challenges by making workflows and their associated objects findable, accessible, interoperable, and reusable. This includes discussing applying metadata standards, using persistent identifiers, and developing principles for FAIR workflows and FAIR software. The document concludes by examining the roles and responsibilities of different stakeholders in working towards FAIR workflows.
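To make the metadata-and-identifier idea above concrete, here is a minimal, FAIR-leaning metadata record for a workflow, built as a plain JSON object. The field names and values are invented for illustration and do not follow any particular standard's schema.

```python
import json

# Illustrative workflow metadata record (all values are made up).
# Each field maps loosely onto one of the FAIR letters.
record = {
    "identifier": "https://doi.org/10.0000/example-workflow",  # persistent ID (Findable)
    "name": "Variant-calling pipeline",
    "access_url": "https://workflows.example.org/42",          # retrieval point (Accessible)
    "input_formats": ["FASTQ"],                                # declared interfaces (Interoperable)
    "license": "https://spdx.org/licenses/Apache-2.0",         # reuse terms (Reusable)
}
print(json.dumps(record, indent=2))
```

Registries can index records like this so a workflow is findable by name and citable by its persistent identifier, independently of where the executable definition is hosted.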
The document discusses the semantic web and its potential uses for liberal arts campuses. It provides an overview of semantic web technologies like RDF, OWL, and SPARQL. Examples are given of how semantic web tools could be used for campus projects, pedagogy, and research by exposing metadata and linking data. Challenges mentioned include complexity, lack of visible applications, and the ecological growth needed for widespread adoption.
Data management plans – EUDAT best practices and case study | www.eudat.eu - EUDAT
Presentation given by Stéphane Coutin during the PRACE 2017 Spring School, a joint training event with the EU H2020 VI-SEEM project (https://vi-seem.eu/) organised by CaSToRC at The Cyprus Institute. Science, and more specifically projects using HPC, is facing a digital data explosion. Instruments and simulations are producing ever more volume; data can be shared, mined, cited, preserved... Data are a great asset, but they face risks: storage can run short, data can be lost, and they can be misused. To start this session, we will review why it is important to manage research data and how to do so by maintaining a Data Management Plan. This will be based on best practices from the EUDAT H2020 project and European Commission recommendations. During the second part we will interactively draft a DMP for a given use case.
German Conference on Bioinformatics 2021
https://gcb2021.de/
FAIR Computational Workflows
Computational workflows capture precise descriptions of the steps and data dependencies needed to carry out computational data pipelines, analysis and simulations in many areas of Science, including the Life Sciences. The use of computational workflows to manage these multi-step computational processes has accelerated in the past few years driven by the need for scalable data processing, the exchange of processing know-how, and the desire for more reproducible (or at least transparent) and quality assured processing methods. The SARS-CoV-2 pandemic has significantly highlighted the value of workflows.
This increased interest in workflows has been matched by the number of workflow management systems available to scientists (Galaxy, Snakemake, Nextflow and 270+ more) and the number of workflow services like registries and monitors. There is also recognition that workflows are first class, publishable Research Objects just as data are. They deserve their own FAIR (Findable, Accessible, Interoperable, Reusable) principles and services that cater for their dual roles as explicit method description and software method execution [1]. To promote long-term usability and uptake by the scientific community, workflows (as well as the tools that integrate them) should become FAIR+R(eproducible), and citable so that author’s credit is attributed fairly and accurately.
The work on improving the FAIRness of workflows has already started and a whole ecosystem of tools, guidelines and best practices has been under development to reduce the time needed to adapt, reuse and extend existing scientific workflows. An example is the EOSC-Life Cluster of 13 European Biomedical Research Infrastructures which is developing a FAIR Workflow Collaboratory based on the ELIXIR Research Infrastructure for Life Science Data Tools ecosystem. While there are many tools for addressing different aspects of FAIR workflows, many challenges remain for describing, annotating, and exposing scientific workflows so that they can be found, understood and reused by other scientists.
This keynote will explore the FAIR principles for computational workflows in the Life Sciences, using the EOSC-Life Workflow Collaboratory as an example.
[1] Carole Goble, Sarah Cohen-Boulakia, Stian Soiland-Reyes, Daniel Garijo, Yolanda Gil, Michael R. Crusoe, Kristian Peters, and Daniel Schober. FAIR Computational Workflows. Data Intelligence 2020, 2:1-2, 108-121. https://doi.org/10.1162/dint_a_00033
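The explicit, dependency-driven pipelines described above can be sketched in miniature as a dependency graph whose steps run in topological order. This is an illustrative toy, not any particular workflow management system; all step names, functions, and data are invented.

```python
# Toy sketch of a computational workflow: each step declares the steps it
# depends on, and a tiny runner executes them in dependency order, feeding
# each step the outputs of its dependencies. Real systems (Galaxy,
# Snakemake, Nextflow) add scheduling, caching, and provenance on top.
from graphlib import TopologicalSorter

steps = {
    "fetch":  ([],                  lambda: "  raw sequence data  "),
    "clean":  (["fetch"],           lambda raw: raw.strip()),
    "tokens": (["clean"],           lambda text: text.split()),
    "report": (["clean", "tokens"], lambda text, toks: f"{len(toks)} tokens in {text!r}"),
}

def run_workflow(steps):
    # Map each step to its predecessors; static_order() yields deps first.
    order = TopologicalSorter({name: deps for name, (deps, _) in steps.items()})
    results = {}
    for name in order.static_order():
        deps, fn = steps[name]
        results[name] = fn(*(results[d] for d in deps))
    return results

print(run_workflow(steps)["report"])  # → 3 tokens in 'raw sequence data'
```

Because the description is explicit data rather than ad-hoc scripting, the same graph can be inspected, shared, and re-run, which is the crux of the transparency argument above.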
IRJET - A Workflow Management System for Scalable Data Mining on Clouds (IRJET Journal)
1. The document discusses a workflow management system for scalable data mining on clouds. It proposes using MapReduce and Hadoop frameworks to parallelize k-means clustering of large datasets on cloud infrastructure.
2. The system aims to improve efficiency, security, and transmission speed over existing cloud systems by generating hash codes for files before classification and storage on cloud. It uses deduplication to avoid redundant uploads.
3. The document outlines the system implementation, including user modules for registration, login, profile editing, training data upload, file upload and download with redundancy avoidance, changing passwords, and logging out. It also discusses testing the system functionality using unit testing libraries.
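The MapReduce-style k-means parallelization summarized above can be illustrated with one map/reduce iteration in plain Python. This is a toy sketch, not the paper's actual Hadoop implementation; the data and function names are invented, and in a real deployment the map and reduce phases would run across many nodes.

```python
# One k-means iteration in map/reduce style: the "map" phase assigns each
# point to its nearest centroid, the "reduce" phase averages each group.

def assign(point, centroids):
    """Map phase: emit (index of nearest centroid, point)."""
    dists = [sum((p - c) ** 2 for p, c in zip(point, centroid))
             for centroid in centroids]
    return dists.index(min(dists)), point

def recompute(groups):
    """Reduce phase: average the points assigned to each centroid."""
    return [tuple(sum(xs) / len(pts) for xs in zip(*pts)) for pts in groups]

def kmeans_step(points, centroids):
    groups = [[] for _ in centroids]
    for point in points:
        idx, p = assign(point, centroids)   # "map"
        groups[idx].append(p)
    return recompute(groups)                # "reduce"

points = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.8)]
centroids = [(0.0, 0.0), (5.0, 5.0)]
print(kmeans_step(points, centroids))
```

Iterating `kmeans_step` until the centroids stop moving completes the clustering; the scalability claim rests on each phase being embarrassingly parallel over points and groups.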
Presentation by Maria Isabel Gandia, Head of Communications at CSUC, given during the BoF session "Orchestration, Automation and Virtualisation: Focusing on the user" at Géant's TNC21 Networking Conference on 25 June 2021.
This presentation gives a brief overview on achievements and challenges of the Data Web and describes different aspects of using the Semantic Data Wiki OntoWiki for Linked Data management.
Connecting the Dots: Linking Digitized Collections Across Metadata Silos (OCLC)
This document summarizes a presentation about linking digitized collections across metadata silos. It discusses how projects like Europeana and the Digital Public Library of America have struggled to rationalize aggregated data. To better share data within and across organizations, standards and best practices need to be applied universally to connect related items and allow data to be consumed by both humans and machines. The presentation advocates for publishing data as linked open data using identifiers and schemas like Schema.org to form a knowledge graph and improve discoverability on the web.
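Publishing a collection item as Schema.org JSON-LD, as the presentation advocates, can look roughly like this. All identifiers and values below are invented for illustration; a real record would use the institution's own persistent URIs and a genuine `sameAs` target.

```python
# Hedged sketch: a digitized item described with Schema.org JSON-LD so it
# can be consumed by both humans and machines, and linked into a knowledge
# graph via stable identifiers. Every URL here is a placeholder.
import json

item = {
    "@context": "https://schema.org",
    "@type": "CreativeWork",
    "@id": "https://example.org/collection/item/42",  # persistent identifier
    "name": "Portrait of a Lady",
    "creator": {"@type": "Person", "name": "Jane Painter"},
    "dateCreated": "1884",
    "sameAs": "https://example.org/authority/painter-jane",  # link to an authority
    "isPartOf": {"@type": "Collection", "name": "Example Digitized Collection"},
}

print(json.dumps(item, indent=2))
```

Embedding such a block in an item's web page is what lets search engines and aggregators connect it to related items, which is the discoverability gain the presentation describes.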
Presentation I gave at OGF 28 in Munich (Mar. 15-18, 2010). It is about challenges and achievements to date in the GeoChronos project, which is aimed at the development of an on-line collaborative environment for earth observation scientists.
Technologie Proche: Imagining the Archival Systems of Tomorrow With the Tools... (Artefactual Systems - AtoM)
These slides accompanied a June 4th, 2016 presentation made by Dan Gillean of Artefactual Systems at the Association of Canadian Archivists' 2016 Conference in Montreal, QC, Canada.
This presentation aims to examine several existing or emerging computing paradigms, with specific examples, to imagine how they might inform next-generation archival systems to support digital preservation, description, and access. Topics covered include:
- Distributed Version Control and git
- P2P architectures and the BitTorrent protocol
- Linked Open Data and RDF
- Blockchain technology
The session is part of an attempt by the ACA to create interactive "working sessions" at its conferences. Accompanying notes can be found at: http://bit.ly/tech-Proche
Participants were also asked to use the Twitter hashtag of #techProche for online interaction during the session.
The document discusses the need for greater coherence and interoperability across agricultural information systems. It proposes establishing an Alliance for Coherence in Agricultural Information Systems to facilitate the coherent use of standards and tools across distributed databases and information services. The Alliance would register common standards and documentation systems, and act as a clearinghouse for agreeing on standards and procedures to improve connectivity between agricultural information resources on the web. Proposed next steps include further work on developing common profiles and standards, piloting a shared events calendar, establishing the legal framework and governance structure for the Alliance, and continued advocacy and outreach.
Presented by: Mandy Chessell, IBM
Presented at All Things Open 2020
Abstract: I am one of the leaders of the open metadata and governance initiative, which seeks to develop standards and a reference implementation through an open source project called ODPi Egeria. Egeria enables organizations to manage data as an asset even when they use tools and platforms from multiple vendors. This type of problem is extremely complex, and it needs the collaboration of multiple organizations to make it happen. In this talk I will go through the technical challenges we face and how they are being overcome.
Mark Hughes Annual Seminar Presentation on Open Source (Tracy Kent)
VuFind was chosen as the discovery system to integrate the catalogs of three different library management systems used by the academic libraries in South West Wales. It required overcoming challenges like hosting multiple instances, merging data from different sources and standards, designing a dual language interface, and developing drivers to connect to each library system. Lessons learned include that open source solutions can work well but require significant staff time and resources, and collaboration is key to success. Future plans include sustaining and mainstreaming the system, exploring additional shared services, and investigating other open source library systems like Evergreen.
The document discusses using the Semantic Web as a knowledge base for artificial intelligence applications. It describes how the Semantic Web publishes data on the web in a standardized, linked format. This vast amount of distributed knowledge could be mined by AI in various ways, such as linking data mining to find patterns, using reasoning to analyze and understand raw data, and assessing agreement between ontologies. The Semantic Web represents a large, collaborative base of formally represented knowledge that provides many opportunities for future AI research and applications.
The ELIXIR FAIR Knowledge Ecosystem for practical know-how: RDMkit and FAIRCo... (Carole Goble)
Presented at the FAIR Data in Practice Symposium, 16 May 2023, at BioITWorld Boston. https://www.bio-itworldexpo.com/fair-data. ELIXIR, the European Research Infrastructure for life science data, is an inter-governmental organization coordinating, integrating and sustaining FAIR data and software resources across its 23 nations. To help advise users, data stewards, project managers and service providers, ELIXIR has developed complementary community-driven, open knowledge resources for guiding FAIR Research Data Management (RDMkit) and providing FAIRification recipes (FAIRCookbook). 150+ people have contributed content so far, including representatives of the pharmaceutical industry.
Presentation given at the "Vos collections sur Europeana – Panorama des voies d'agrégation" event organised by the French Ministry of Culture on 27 November 2018 in Paris.
The Europeana Data Model: Principles, community and innovation (Antoine Isaac)
This document summarizes the Europeana Data Model (EDM), which provides principles for representing metadata from cultural heritage institutions in a connected way on the web. EDM follows linked data best practices like using existing vocabularies and minimizing formalization. It represents metadata elements like full text, rights, and quality. Developing EDM involves experts from different domains and adopting a collaborative approach. Flexibility is needed to avoid overcommitment to formal semantics while reusing standards.
Europeana as a Linked Data (Quality) case (Antoine Isaac)
Presentation for the 3rd Workshop on Humanities in the Semantic Web (WHiSe), co-located with the 15th Extended Semantic Web Conference (ESWC 2020)
June 2, 2020, online
http://whise.cc/2020/
Presentation at the panel "Interoperable Platforms and CLIR Initiatives: A Global Perspective" at the 2019 IIIF Conference
Göttingen, Thursday 26 June 2019
https://iiif.io/event/2019/goettingen/program/30/
Multilingual challenges and ongoing work to tackle them at Europeana (Antoine Isaac)
Europeana is a digital platform that provides access to over 57 million digitized cultural heritage objects from 3,700 institutions across 44 countries. It faces challenges in being multilingual due to the large amount of metadata in over 400 languages. Europeana is working to tackle these issues through data modeling to allow for richer multilingual data, enriching metadata by linking it to external multilingual vocabularies, and exploring automatic translation of search results and content.
Semantic Interoperability at Europeana - Multilingual DSIs 2018 (Antoine Isaac)
Europeana is a digital platform containing over 58 million digitized cultural heritage objects from 3,700 institutions across 44 countries. The document discusses Europeana's efforts to improve semantic interoperability between these diverse datasets by developing the Europeana Data Model, enriching metadata by linking to external vocabularies, and building an Entity Collection and API to provide centralized access to contextual information about places, people, concepts, and organizations. The goal is to enable richer discovery, exploration, and reuse of Europeana's cultural heritage data on the web.
Lightweight rights modeling and linked data publication for online cultural h... (Antoine Isaac)
Presentation for the special session "Lightweight rights modeling and linked data publication for online cultural heritage" at the DCMI 2018 conference.
http://dublincore.org/conference/2018/abstracts/#a2
Presentation for the IIIF Biblissima day "Innover pour redécouvrir le patrimoine écrit", 15 March 2018, Paris
http://www.biblissima-condorcet.fr/fr/actualites/innover-redecouvrir-patrimoine-ecrit-evenement-biblissima-iiif
Isaac - W3C Data on the Web Best Practices - Data Vocabularies (Antoine Isaac)
The document discusses best practices for using data vocabularies on the web as developed by the W3C Data on the Web Best Practices Working Group. It recommends reusing existing standardized vocabularies when possible and choosing the appropriate formalization level for data, avoiding both over-commitment to semantics and replication of existing vocabulary terms. It also describes Europeana's experience developing its data model EDM, which reuses many existing vocabularies while requiring significant effort to research, discuss, and maintain flexibility.
This document summarizes the Europeana APIs for accessing metadata and media from the Europeana digital collection. It describes the Search API and Record API, including how to perform basic searches and get search result profiles. It also provides examples of searching, getting search fields, and accessing record metadata in different formats. The document introduces the Europeana Data Model and how digital objects and representations are submitted and stored as proxies in Europeana.
The document discusses modelling and exchanging annotations for Europeana projects. It proposes adopting the W3C Web Annotation Data Model to represent annotations in RDF using JSON-LD serialization. An Annotations API based on the W3C Web Annotation Protocol allows exchanging annotations between Europeana and platforms like HistoryPin.org and Pundit. Representing metadata annotations is also discussed to make them machine-readable and shareable across interfaces. Overall, modelling annotations interoperably and exchanging them across platforms is still a work in progress.
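An annotation following the W3C Web Annotation Data Model, serialized as JSON-LD, has the general shape below. The `@context` value and the `Annotation`/`TextualBody` structure come from the W3C specification; the identifier, body value, and target are invented for illustration.

```python
# Sketch of a W3C Web Annotation in JSON-LD: a body (here a simple tag)
# attached to a target resource. This is the kind of machine-readable,
# shareable record exchanged over a Web Annotation Protocol endpoint.
import json

annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",  # per the W3C spec
    "id": "https://example.org/anno/1",              # placeholder identifier
    "type": "Annotation",
    "motivation": "tagging",
    "body": {
        "type": "TextualBody",
        "value": "windmill",
        "language": "en",
    },
    "target": "https://example.org/items/42",        # the annotated resource
}

print(json.dumps(annotation, indent=2))
```

Because both sides agree on this model, an annotation created in one platform can be stored, retrieved, and displayed by another without bespoke conversion, which is the interoperability goal described above.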
Modelling annotations for Europeana and related projects - DARIAH-EU WS (Antoine Isaac)
"Modelling annotations for Europeana and related projects" by Hugo Manguinhas, Antoine Isaac. DARIAH-EU Workshop on Practices and Context in Contemporary Annotation Activities, Hamburg, October 29-30, 2015.
Classification schemes, thesauri and other Knowledge Organization Systems - a... (Antoine Isaac)
"Classification schemes, thesauri and other Knowledge Organization Systems - a Linked Data perspective".
Presentation at the Pelagios Linked Pasts event, July 20-21, 2015.
http://pelagios-project.blogspot.co.uk/2015/03/linked-pasts.html
This document discusses semantic enrichment of metadata in Europeana. It defines semantic enrichment as linking metadata to controlled vocabularies or other datasets to add context. The document outlines the key stages of semantic enrichment as analysis, linking, and augmentation. It also discusses where enrichment can occur in Europeana's systems and considerations for developing APIs and services to enable enrichment of Europeana records by third parties.
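The analysis, linking, and augmentation stages described above can be made concrete with a toy enrichment pass: scan a metadata value for known terms, link matches to a controlled vocabulary, and add the resulting URIs to the record. The vocabulary, the record, and the use of `edm:hasMet` as the target property are all illustrative assumptions, not Europeana's actual enrichment pipeline.

```python
# Toy semantic enrichment: analysis (find candidate terms in the text),
# linking (map them to vocabulary URIs), augmentation (extend the record).
# Vocabulary entries and the chosen property are illustrative only.

VOCAB = {
    "amsterdam": "http://sws.geonames.org/2759794/",   # a place vocabulary entry
    "rembrandt": "http://viaf.org/viaf/64013650",      # an agent vocabulary entry
}

def enrich(record):
    text = record["dc:description"].lower()
    links = [uri for term, uri in VOCAB.items() if term in text]  # analysis + linking
    enriched = dict(record)
    enriched["edm:hasMet"] = links                                # augmentation
    return enriched

record = {"dc:description": "A Rembrandt etching printed in Amsterdam."}
print(enrich(record)["edm:hasMet"])
```

Real enrichment replaces the naive substring match with proper named-entity analysis and disambiguation, but the three-stage shape is the same.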
"Frontline Battles with DDoS: Best practices and Lessons Learned", Igor IvaniukFwdays
In this talk we will discuss DDoS protection tools and best practices, review network architectures, and look at what AWS has to offer. We will also examine one of the largest DDoS attacks on Ukrainian infrastructure, which happened in February 2022, and see what techniques helped keep web resources available for Ukrainians and how AWS improved DDoS protection for all customers based on the Ukraine experience.
Programming Foundation Models with DSPy - Meetup Slides (Zilliz)
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application... (Alex Pruden)
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
Conversational agents, or chatbots, are increasingly used to access all sorts of services using natural language. While open-domain chatbots - like ChatGPT - can converse on any topic, task-oriented chatbots - the focus of this paper - are designed for specific tasks, like booking a flight, obtaining customer support, or setting an appointment. Like any other software, task-oriented chatbots need to be properly tested, usually by defining and executing test scenarios (i.e., sequences of user-chatbot interactions). However, there is currently a lack of methods to quantify the completeness and strength of such test scenarios, which can lead to low-quality tests, and hence to buggy chatbots.
To fill this gap, we propose adapting mutation testing (MuT) for task-oriented chatbots. To this end, we introduce a set of mutation operators that emulate faults in chatbot designs, an architecture that enables MuT on chatbots built using heterogeneous technologies, and a practical realisation as an Eclipse plugin. Moreover, we evaluate the applicability, effectiveness and efficiency of our approach on open-source chatbots, with promising results.
Skybuffer SAM4U tool for SAP license adoption (Tatiana Kojar)
Manage and optimize your license adoption and consumption with SAM4U, a complimentary SAP software asset management tool for customers.
SAM4U delivers a detailed and well-structured overview of license inventory and usage through a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring a fixed Total Cost of Ownership (TCO) and exceptional service through the SAP Fiori interface.
The Microsoft 365 Migration Tutorial For Beginner.pptx (operationspcvita)
This presentation will help you understand the power of Microsoft 365. It covers every productivity app included in Office 365, and outlines common Office 365 migration scenarios and how we can help you.
You can also read: https://www.systoolsgroup.com/updates/office-365-tenant-to-tenant-migration-step-by-step-complete-guide/
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
AppSec PNW: Android and iOS Application Security with MobSF (Ajin Abraham)
Mobile Security Framework - MobSF is a free and open source automated mobile application security testing environment designed to help security engineers, researchers, developers, and penetration testers to identify security vulnerabilities, malicious behaviours and privacy concerns in mobile applications using static and dynamic analysis. It supports all the popular mobile application binaries and source code formats built for Android and iOS devices. In addition to automated security assessment, it also offers an interactive testing environment to build and execute scenario based test/fuzz cases against the application.
This talk covers:
Using MobSF for static analysis of mobile applications.
Interactive dynamic security assessment of Android and iOS applications.
Solving Mobile app CTF challenges.
Reverse engineering and runtime analysis of Mobile malware.
How to shift left and integrate MobSF/mobsfscan SAST and DAST in your build pipeline.
Essentials of Automations: Exploring Attributes & Automation Parameters (Safe Software)
Building automations in FME Flow can save time, money, and help businesses scale by eliminating data silos and providing data to stakeholders in real-time. One essential component to orchestrating complex automations is the use of attributes & automation parameters (both formerly known as “keys”). In fact, it’s unlikely you’ll ever build an Automation without using these components, but what exactly are they?
Attributes & automation parameters enable the automation author to pass data values from one automation component to the next. During this webinar, our FME Flow Specialists will cover leveraging the three types of these output attributes & parameters in FME Flow: Event, Custom, and Automation. As a bonus, they’ll also be making use of the Split-Merge Block functionality.
You’ll leave this webinar with a better understanding of how to maximize the potential of automations by making use of attributes & automation parameters, with the ultimate goal of setting your enterprise integration workflows up on autopilot.
HCL Notes and Domino license cost reduction in the world of DLAU (panagenda)
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and the licenses under the CCB and CCX model have been a hot topic for many in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new kind of licensing works and what benefits it brings you. Above all, you certainly want to stay within your budget and save costs wherever possible. We understand that, and we want to help!
We explain how to resolve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also approaches that can lead to unnecessary expenses, for example using a person document instead of a mail-in database for shared mailboxes. We show you such cases and their solutions. And of course we explain the new license model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It gives you the tools and the know-how to keep track of everything. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
Topics covered:
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Real-world examples and best practices you can apply immediately
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers (akankshawande)
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Northern Engraving | Nameplate Manufacturing Process - 2024 (Northern Engraving)
Manufacturing custom quality metal nameplates and badges involves several standard operations. Processes include sheet prep, lithography, screening, coating, punch press and inspection. All decoration is completed in the flat sheet with adhesive and tooling operations following. The possibilities for creating unique durable nameplates are endless. How will you create your brand identity? We can help!
Fueling AI with Great Data with Airbyte Webinar (Zilliz)
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
Monitoring and Managing Anomaly Detection on OpenShift.pdf (Tosin Akinosho)
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
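A minimal example of the kind of anomaly detection the tutorial's notebooks walk through is a z-score check on sensor readings: flag any value whose distance from the mean exceeds a threshold number of standard deviations. The data and threshold below are invented for illustration; production systems on edge devices typically use trained models rather than a fixed statistical rule.

```python
# Toy z-score anomaly detector: flag readings far from the mean.
# Readings and threshold are illustrative placeholders.
import statistics

def detect_anomalies(readings, threshold=3.0):
    mean = statistics.fmean(readings)
    stdev = statistics.pstdev(readings)
    # Guard against zero stdev (all readings identical).
    return [x for x in readings if stdev and abs(x - mean) / stdev > threshold]

readings = [20.1, 20.3, 19.9, 20.2, 20.0, 35.7, 20.1]
print(detect_anomalies(readings, threshold=2.0))  # → [35.7]
```

In the pipeline sketched above, such a detector would consume Kafka messages and expose a counter of flagged readings as a Prometheus metric.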
EIFL 2014 - Linked Open Data
1. Linked Open Data
best practices for publishing, sharing, and interlinking structured data on the Web
Antoine Isaac
EIFL General Assembly, Nov 10, 2014
12. General benefits of linked data
Structured data
URIs and links, not just strings
Good for internationalization
Shareable data
Fits and completes open data strategies
Extensible and mashable
"Open world" - anybody can add descriptive information and annotations
about the same thing
Standard protocols/techniques
http://www.w3.org/2005/Incubator/lld/XGR-lld/
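The "URIs and links, not just strings" point from slide 12 can be made concrete: when records reference a concept by URI rather than by a language-specific label, they can be grouped together and displayed in any language the vocabulary covers. The URI, labels, and records below are invented for illustration.

```python
# One concept identified by a URI, with labels in several languages.
# Records link to the URI, so the link itself is language-neutral.
concept = {
    "uri": "http://example.org/concept/windmill",  # placeholder URI
    "prefLabel": {"en": "windmill", "fr": "moulin à vent", "nl": "windmolen"},
}

records = [
    {"title": "Moulin en Hollande",     "subject": concept["uri"]},
    {"title": "Dutch windmill at dusk", "subject": concept["uri"]},
]

# Records about the same subject can be grouped regardless of language:
same_subject = [r["title"] for r in records if r["subject"] == concept["uri"]]
print(same_subject)
print(concept["prefLabel"]["nl"])  # label shown in the user's language
```

With plain strings ("windmill" vs "moulin à vent"), these two records could not be connected without translation; the shared URI is what makes the data internationalization-friendly.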
13. Benefits to researchers, students and patrons
Information seekers can extract and re-mix the parts of the data they need, and add their own annotations
Library items and data can be fully integrated into research documents and bibliographies
Greater discovery and use, across library and non-library resources
14. Benefits to developers
Use of standard protocols and models
Web-based identifiers make resources immediately available and up-to-date
Freely mix or mash-up data from libraries with other sources
15. Modeling, linked data style
Cross-community re-use of data models
Models that re-use existing models
Semantic Web technology allows mixing them!
Collaborative, softer form of standardization
16. Benefits to librarians, archivists, curators and their institutions
Pull together data from outside their direct environment
Concentrate on their domain of local expertise rather than re-creating existing descriptions
Less duplication of effort, lower infrastructure costs
19. Benefits to librarians, archivists, curators and their institutions
Use of mainstream technologies rather than systems specific to libraries
Clarification of metadata licensing
Greater visibility on the web and reuse
21. Challenges and opportunities
Vision works better if data is Open
Some parts of the technology still in maturation
Adaptation to business processes still in progress
Full potential not reached yet
It does not replace librarian work of creating metadata!
But it makes it better focused and more valuable…
22. Thank you!
Antoine Isaac
antoine.isaac@europeana.eu
Thanks to Agnes Simon (BnF) for the RAMEAU example
Relevant past and ongoing activities
Library Linked Data W3C Group http://www.w3.org/2005/Incubator/lld/XGR-lld/
LOD-LAM community http://lod-lam.net
IFLA Semantic Web group http://www.ifla.org/en/swsig
Editor's Notes
Getty illustrates cross-domain benefit
It's not only about re-use of concepts.
Persons, places, objects are also involved.
Links -> quite good web data -> better ranking.
data.bnf.fr acting as hub for specialized subjects