Presentation on STARLab's research and the GOSPL method and prototype, presented at the second IFIP WG12.7 "Social Networking Semantics and Collective Intelligence" workshop in Amsterdam (26-27 April 2012).
Deep Content Learning in Traffic Prediction and Text Classification - HPCC Systems
As part of the 2018 HPCC Systems Community Day event:
In this talk, Jingqing will introduce recent advances at the Data Science Institute, Imperial College London, focusing on a general framework named Deep Content Learning. Two recent projects will be discussed as examples. In the traffic prediction project, we released a new large-scale traffic dataset with auxiliary information, including search queries from the Baidu Maps app, and proposed hybrid models that achieve state-of-the-art prediction accuracy. The other project, on zero-shot text classification, integrated semantic knowledge and used a two-phase architecture to tackle the challenge of zero-shot learning on textual data. The talk will also discuss the integration of TensorLayer and HPCC Systems.
Jingqing Zhang is a first-year PhD student (HiPEDS) at the Data Science Institute, Imperial College London, under the supervision of Prof. Yi-Ke Guo. His research interests include Text Mining, Data Mining, Deep Learning, and their applications. He received his MRes degree in Computing from Imperial College with Distinction in 2017 and his BEng in Computer Science and Technology from Tsinghua University in 2016.
In this talk I intend to review some basic and high-level concepts: formal languages, grammars, and ontologies. Languages transmit knowledge from a sender to a receiver; grammars formally specify languages; ontologies are formal specifications of specific knowledge domains. After this introductory review, emphasizing the role of each of those elements in the context of computer-based problem solving (programming), I will talk about a project aimed at automatically inferring and generating a grammar for a Domain Specific Language (DSL) from a given ontology that describes the specific domain. The transformation rules will be presented, and Onto2Gra, the system that fully implements this "Ontological approach for DSL development", will be introduced.
GOSPL: A Method and Tool for Fact-Oriented Hybrid Ontology Engineering - Christophe Debruyne
In this paper we present GOSPL, which stands for Grounding Ontologies with Social Processes and Natural Language. GOSPL is a method and tool that supports stakeholders in iteratively interpreting and modeling their common hybrid ontologies using their own terminology, enabling semantic interoperability between autonomously developed and maintained information systems. Hybrid ontologies are ontologies in which concepts are described both formally and informally with the help of a special linguistic resource called a glossary. Social interactions between community members drive the ontology evolution process and result in more stable and agreed-upon ontologies.
Christophe Debruyne, Robert Meersman: GOSPL: A Method and Tool for Fact-Oriented Hybrid Ontology Engineering. ADBIS 2012: 153-166
Business Semantics as an Interface between Enterprise Information Management and the Web of Data - Christophe Debruyne
Business Semantics as an Interface between Enterprise Information Management and the Web of Data: A Case Study in the Flemish Public Administration
Christophe Debruyne and Pieter De Leenheer. eBISS, July 2012
Using a Reputation Framework to Identify Community Leaders in Ontology Engineering - Christophe Debruyne
Using a Reputation Framework to Identify Community Leaders in Ontology Engineering
C. Debruyne and N. Nijs
LNCS 8185, p. 677 ff.
Presented at ODBASE 2013, part of On the Move to Meaningful Internet Systems: OTM 2013 Conferences
This presentation is the introduction to the basic LinkedIn workshop at De Eigen Zaak. During the workshop, we work on filling in your profile, inviting people, and figuring out your own way of getting started with LinkedIn as an independent entrepreneur.
Exploiting Natural Language Definitions and (Legacy) Data for Facilitating Agreement Processes - Christophe Debruyne
Debruyne, C. and Vasquez, C. (2013) Exploiting Natural Language Definitions and (Legacy) Data for Facilitating Agreement Processes. In Proc. of Software Quality. Increasing Value in Software and Systems Development 2013 (SWQD 2013), LNBIP, Springer
In IT, using ontologies to enable semantic interoperability is only one of the areas in which agreement among a heterogeneous group of stakeholders is of vital importance. As agreements are the result of interactions, appropriate methods should take into account the natural language used by the community. In this paper, we extend a method for reaching consensus on a conceptualization within a community of stakeholders, exploiting the natural language communication between the stakeholders. We describe how agreements on informal and formal descriptions are complementary and how they interplay. To this end, we introduce, describe, and motivate the nature of some of the agreements and the two distinct levels of commitment. We furthermore show how these commitments can be exploited to steer the agreement processes. The concepts introduced in this paper have been implemented in a tool for collaborative ontology engineering, called GOSPL, which can also be adopted for other purposes, e.g., the construction of a lexicon for larger software projects.
Extraction of common conceptual components from multiple ontologies - Valentina Carriero
Understanding large ontologies, with diverse semantics and modelling practices, is still an issue and has an impact on many ontology engineering tasks. While existing methods summarise ontologies by extracting the most important nodes or subgraphs, a complete overview of an ontology, and a comparison between multiple ontologies, are not supported. Based on the hypothesis that ontologies are designed as compositions of patterns, these slides present a method that extracts conceptual components from multiple ontologies, together with the ontology design patterns observed to implement them.
related paper: https://arxiv.org/abs/2106.12831
Metrics for Evaluating Quality of Embeddings for Ontological Concepts - Saeedeh Shekarpour
Although there is an emerging trend towards generating embeddings for primarily unstructured data and, recently, for structured data, no systematic suite for measuring the quality of embeddings has been proposed yet.
This deficiency is further sensed with respect to embeddings generated for structured data because there are no concrete evaluation metrics measuring the quality of the encoded structure as well as semantic patterns in the embedding space.
In this paper, we introduce a framework containing three distinct tasks concerned with the individual aspects of ontological concepts: (i) the categorization aspect, (ii) the hierarchical aspect, and (iii) the relational aspect.
Then, in the scope of each task, a number of intrinsic metrics are proposed for evaluating the quality of the embeddings.
Furthermore, with respect to this framework, multiple experimental studies were run to compare the quality of the available embedding models.
Employing this framework in future research can reduce misjudgment and provide greater insight about quality comparisons of embeddings for ontological concepts.
Our sample data and code are available at https://github.com/alshargi/Concept2vec under the GNU General Public License v3.0.
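The categorization aspect, for example, can be made concrete with a minimal sketch (illustrative only: the toy vectors and the centroid-based scoring below are our own simplification, not the paper's actual metric suite):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def centroid(vectors):
    """Component-wise mean of a list of vectors."""
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def categorization_score(concept_vec, instance_vecs):
    """How well a concept embedding summarizes its instances:
    cosine similarity between the concept vector and the centroid
    of the instances' vectors (1.0 = perfectly aligned)."""
    return cosine(concept_vec, centroid(instance_vecs))

# Toy example: a concept "City" and two instance embeddings.
city = [1.0, 0.0]
instances = [[0.9, 0.1], [1.0, -0.1]]
score = categorization_score(city, instances)
```

A well-trained embedding would place the concept close to the centroid of its instances, so the score approaches 1.0; a poor one drifts toward 0.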
Semantic Interoperation of Information Systems by Evolving Ontologies through Formalized Social Processes - Christophe Debruyne
Presentation of Debruyne, C., and Meersman, R. (2011) Semantic Interoperation of Information Systems by Evolving Ontologies through Formalized Social Processes. In Proc. of Advances in Databases and Information Systems 2011 (ADBIS 2011) - September 2011
Abstract: For autonomously developed information systems to interoperate in a meaningful manner, ontologies capturing the intended semantics of that interoperation have to be developed by a community of stakeholders in those information systems. As the requirements of the ontology and the ontology itself evolve, so in general will the community, and vice versa. Ontology construction should thus be viewed as a complex activity leading to formalized semantic agreement involving various social processes within the community, and that may translate into a number of ontology evolution operators to be implemented. The hybrid ontologies that emerge in this way indeed need to support both the social agreement processes in the stakeholder communities and the eventual reasoning implemented in the information systems that are governed by these ontologies. In this paper, we discuss formal aspects of the social processes involved, a so-called fact-oriented methodology and formalism to structure and describe these, as well as certain relevant aspects of the communities in which they occur. We also report on a prototypical tool set that supports such a methodology, and on examples of some early experiments.
How to model digital objects within the Semantic Web - Angelica Lo Duca
These slides describe the general concepts of the Semantic Web and Linked Data, then illustrate the concept of a digital object, and conclude with a use case.
Fueling the future with Semantic Web patterns - Keynote at WOP2014@ISWC - Valentina Presutti
I will claim that Semantic Web Patterns can drive the next technological breakthrough: they can be key for providing intelligent applications with sophisticated ways of interpreting data. I will picture scenarios of a possible not-so-distant future in order to support my claim. I will argue that current Semantic Web Patterns are not sufficient for addressing the envisioned requirements, and I will suggest a research direction for fixing the problem, which includes the hybridisation of existing computer science pattern-based approaches, and human computing.
public conference
Design in progress 2009.
Design research for sharing actions and fostering dialogue
designinprogress.org
paper available at http://urijoe.org/designblog/archives/34
PhD defense: Multi-points of view semantic enrichment of folksonomies - Freddy Limpens
This thesis, set at the crossroads of the Social Web and the Semantic Web, is an attempt to bridge social-tagging-based systems with structured representations such as thesauri or ontologies (in the informatics sense). Folksonomies resulting from the use of social tagging systems suffer from a lack of precision that hinders their potential for retrieving or exchanging information. This thesis proposes supporting the use of folksonomies with formal languages and ontologies from the Semantic Web. Automatic processing of tags bootstraps the process by combining a custom method analyzing tags' labels with adapted methods analyzing the structure of folksonomies. The contributions of users are described with our model SRTag, which supports diverging points of view, and captured through our user-friendly interface that lets users structure tags while searching the folksonomy. Conflicts between individual points of view are detected, solved, and then exploited to help a referent user maintain a global and coherent structuring of the folksonomy, which is in turn used to guarantee coherence while enriching individual contributions with those of others. The result of our method enhances navigation within tag-based knowledge systems, but can also serve as a basis for building thesauri fed by a truly bottom-up process.
Generating domain-specific sentiment lexicons using the Web Directory - acijjournal
In this paper we propose a method to automatically build a domain-based sentiment lexicon. There has been demand for the construction of generated and labeled sentiment lexicons. For data on the social web (e.g., tweets), methods that rely on the synonymy relation do not work well, since they ignore the significance of terms belonging to specific domains. We propose to generate a sentiment lexicon for any specified domain using a twofold method: first, we build sentiment scores from micro-blogging data; then we apply these scores to the ontological structure provided by the Open Directory Project [1] to build a custom sentiment lexicon for analyzing domain-specific micro-blogging data.
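A minimal sketch of such a twofold pipeline might look as follows (the seed words, the scoring rule, and the flat category tree are all invented for illustration; the paper's actual method differs in detail):

```python
# Hypothetical seed lexicons; real systems would use a larger curated set.
POS_SEEDS = {"good", "great", "love"}
NEG_SEEDS = {"bad", "awful", "hate"}

def term_scores(posts):
    """Phase 1: score each non-seed term by co-occurrence with seed words.
    score(term) = (#posts with a positive seed - #posts with a negative seed)
                  / #posts containing the term, in [-1, 1]."""
    counts = {}
    for post in posts:
        words = set(post.lower().split())
        pos = bool(words & POS_SEEDS)
        neg = bool(words & NEG_SEEDS)
        for w in words - POS_SEEDS - NEG_SEEDS:
            total, score = counts.get(w, (0, 0))
            counts[w] = (total + 1, score + (1 if pos else 0) - (1 if neg else 0))
    return {w: s / t for w, (t, s) in counts.items()}

def propagate(category, tree, scores):
    """Phase 2: a category's sentiment is the mean of its terms' scores,
    with the directory hierarchy given as {category: [terms...]}."""
    vals = [scores[t] for t in tree.get(category, []) if t in scores]
    return sum(vals) / len(vals) if vals else 0.0

# Toy run over three micro-blog posts and one directory category.
posts = ["great phone battery", "bad phone screen", "love this camera"]
scores = term_scores(posts)
tree = {"Electronics": ["phone", "battery", "camera"]}
electronics_score = propagate("Electronics", tree, scores)
```

Here "phone" co-occurs once with a positive and once with a negative seed, so it scores 0, while "battery" and "camera" score positively; the category score averages over its member terms.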
Different Semantic Perspectives for Question Answering Systems - Andre Freitas
Question Answering systems define one of the most complex tasks in computational semantics. The intrinsic complexity of the QA task allows researchers of QA systems to investigate and explore different perspectives of semantics. However, this complexity also induces a bias towards a systems perspective, where researchers are alienated from a deeper reasoning on the semantic principles that are in place within the different components of the system. In this talk we will explore the semantic challenges, principles and perspectives behind the components of QA systems, aiming at providing a principled map and overview on the contribution of each component within the QA semantic interpretation goal.
BURPing Through RML Test Cases (presented at KGC Workshop @ ESWC 2024) - Christophe Debruyne
Recently, the W3C Community Group on Knowledge Graph Construction created a suite of test cases for all RML modules developed in the Community Group to verify implementations' compliance with the new RML specifications. However, these RML test cases could not be tested because no existing RML processor supports them. In this paper, we report on our process of testing the new RML test cases while at the same time implementing support for the new RML modules in a reference implementation, which we call 'BURP' (Basic and Unassuming RML Processor), to investigate the feasibility and possible mistakes of the new RML test cases and specifications. We found several problems in the RML modules, ranging from mismatches between the test cases and their specification and invalid SHACL shapes to edge cases not covered by the specifications. Through this work, we improve the quality of the RML test cases and the coverage of their corresponding specifications to increase adoption of and conformance among RML processors.
One year of DALIDA Data Literacy Workshops for Adults: a Report - Christophe Debruyne
Christophe Debruyne, Laura Grehan, Mairéad Hurley, Anne Kearns, Ciaran O'Neill. One year of DALIDA Data Literacy Workshops for Adults: a Report. In Frédérique Laforest, Raphaël Troncy, Elena Simperl, Deepak Agarwal, Aristides Gionis, Ivan Herman, and Lionel Médini, editors, Companion of The Web Conference 2022, Virtual Event / Lyon, France, April 25 - 29, 2022, pages 403-407. ACM, 2022
Projet TOXIN: Des graphes de connaissances pour la recherche en toxicologie (TOXIN Project: Knowledge graphs for toxicology research) - Christophe Debruyne
Christophe Debruyne. Projet TOXIN : Des graphes de connaissances pour la recherche en toxicologie. INRS Symposium on "L'informatique au service de l'évaluation du risque chimique" (10 November 2022, Nancy, France)
Knowledge Graphs: Concept, opportunities, and points of attention - Christophe Debruyne
Knowledge and information in a business-organizational context are typically fragmented, spread across databases, spreadsheets, documents, etc. In addition, knowledge workers possess domain expertise that is not stored in any system. But what if one wishes to integrate that knowledge and information in order to, for example, automate processes or gain new insights?
Knowledge graphs offer a solution. In this presentation, Christophe Debruyne sheds light on the concept of knowledge graphs and their possibilities, covering the following points:
What is a knowledge graph?
Knowledge graphs versus other initiatives
Knowledge graphs versus other AI techniques
Application areas of knowledge graphs
Building and maintaining a knowledge graph
SAI.be evening seminar of 16 November 2021
Reusable SHACL Constraint Components for Validating Geospatial Linked Data - Christophe Debruyne
Reusable SHACL Constraint Components for Validating Geospatial Linked Data. Paper presented at the 4th International Workshop on Geospatial Linked Data (GeoLD 2021)
Dr Christophe Debruyne and Dr Lynn Kilgallon showcase this exciting Computer Science research strand in Beyond 2022’s work, demonstrating its potential for changing the questions we can ask of the recovered records, and the hidden stories it can reveal.
Facilitating Data Curation: a Solution Developed in the Toxicology Domain - Christophe Debruyne
Christophe Debruyne, Jonathan Riggio, Emma Gustafson, Declan O'Sullivan, Mathieu Vinken, Tamara Vanhaecke, Olga De Troyer.
Presented at the 2020 IEEE 14th International Conference on Semantic Computing, San Diego, California, 3-5 February 2020
Toxicology aims to understand the adverse effects of chemical compounds or physical agents on living organisms. For chemicals, much information regarding safety testing of cosmetic ingredients is now scattered in a plethora of safety evaluation reports. Toxicologists in our university intend to collect this information into a single repository. Their current approach uses spreadsheets, does not scale well, and makes data curation and querying cumbersome. Semantic technologies (e.g., RDF, OWL, and Linked Data principles) would be more appropriate for this purpose. However, this technology is not very accessible to toxicologists without extensive training. In this paper, we report on a tool that supports subject matter experts in the construction of an RDF-based knowledge base for the toxicology domain. The tool uses the jigsaw metaphor for guiding the subject matter experts. We demonstrate that the jigsaw metaphor is a viable option for generating RDF. Future work includes investigating appropriate methods and tools for knowledge evolution and data analysis.
Linked Data Publication and Interlinking Research within the SFI funded ADAPT Centre - Christophe Debruyne
Linked Data Publication and Interlinking Research within the SFI funded ADAPT Centre. This presentation was given at the LIBER LOD workshop during the 48th LIBER Annual Conference in Dublin, 26-28 June 2019.
"Towards GeneratingPolicy-compliant Datasets" by Christophe Debruyne, Harshvardhan J. Pandit, Dave Lewis, Declan O’Sullivan. Presented at the The 13th IEEE International Conference on SEMANTIC COMPUTING
Jan 30 - Feb 1, 2019, Newport Beach, California
"Towards GeneratingPolicy-compliant Datasets" by Christophe Debruyne, Harshvardhan J. Pandit, Dave Lewis, Declan O’Sullivan. Presented at the The 13th IEEE International Conference on SEMANTIC COMPUTING
Jan 30 - Feb 1, 2019, Newport Beach, California
Generating Executable Mappings from RDF Data Cube Data Structure Definitions - Christophe Debruyne
Data processing is increasingly the subject of various internal and external regulations, such as GDPR which has recently come into effect. Instead of assuming that such processes avail of data sources (such as files and relational databases), we approach the problem in a more abstract manner and view these processes as taking datasets as input. These datasets are then created by pulling data from various data sources. Taking a W3C Recommendation for prescribing the structure of and for describing datasets, we investigate an extension of that vocabulary for the generation of executable R2RML mappings. This results in a top-down approach where one prescribes the dataset to be used by a data process and where to find the data, and where that prescription is subsequently used to retrieve the data for the creation of the dataset “just in time”. We argue that this approach to the generation of an R2RML mapping from a dataset description is the first step towards policy-aware mappings, where the generation takes into account regulations to generate mappings that are compliant. In this paper, we describe how one can obtain an R2RML mapping from a data structure definition in a declarative manner using SPARQL CONSTRUCT queries, and demonstrate it using a running example. Some of the more technical aspects are also described.
Reference: Christophe Debruyne, Dave Lewis, Declan O'Sullivan: Generating Executable Mappings from RDF Data Cube Data Structure Definitions. OTM Conferences (2) 2018: 333-350
A Lightweight Approach to Explore, Enrich and Use Data with a Geospatial Dimension (Christophe Debruyne)
Paper presentation: Christophe Debruyne, Kris McGlinn, Lorraine McNerney and Declan O'Sullivan: A Lightweight Approach to Explore, Enrich and Use Data with a Geospatial Dimension with Semantic Web Technologies. Presented at the Fourth International ACM SIGMOD Workshop on Managing and Mining Enriched Geo-Spatial Data GeoRich 2017 Co-located with SIGMOD/PODS 2017 in Chicago, IL, USA
Client-side Processing of GeoSPARQL Functions with Triple Pattern Fragments (Christophe Debruyne)
Christophe Debruyne, Éamonn Clinton, Declan O'Sullivan: Client-side Processing of GeoSPARQL Functions with Triple Pattern Fragments. Presented at the Linked Data on the Web (LDOW 2017), colocated with the 26th International World Wide Web Conference, 2017 (WWW 2017)
Available at: http://events.linkeddata.org/ldow2017/papers/LDOW_2017_paper_8.pdf
Presentation about the collaboration between ADAPT and Ordnance Survey Ireland at the Linked Data Seminar -- Culture, Base Registries & Visualisations, held in Amsterdam, The Netherlands, on the 2nd of December 2016
Serving Ireland's Geospatial Information as Linked Data (ISWC 2016 Poster) (Christophe Debruyne)
Christophe Debruyne, Eamonn Clinton, Lorraine McNerney, Atul Nautiyal, Declan O'Sullivan:
Serving Ireland's Geospatial Information as Linked Data. International Semantic Web Conference (Posters & Demos) 2016
We present data.geohive.ie, which aims to provide an authoritative platform for serving Ireland's national geospatial data, including Linked Data. Currently, the platform provides information on Irish administrative boundaries and was designed to support two use cases: serving boundary data of geographic features at various levels of detail and capturing the evolution of administrative boundaries. We report on the decisions taken for modeling and serving the data, such as the adoption of an appropriate URI strategy, the development of necessary ontologies, and the use of (named) graphs to support the aforementioned use cases.
http://ceur-ws.org/Vol-1690/paper14.pdf
R2RML-F: Towards Sharing and Executing Domain Logic in R2RML Mappings (Christophe Debruyne)
Christophe Debruyne and Declan O'Sullivan: R2RML-F: Towards Sharing and Executing Domain Logic in R2RML Mappings
Paper presented at Linked Data on the Web (LDOW2016, collocated with WWW2016)
http://events.linkeddata.org/ldow2016/papers/LDOW2016_paper_14.pdf
Towards a Project Centric Metadata Model and Lifecycle for Ontology Mapping Governance (Christophe Debruyne)
Christophe Debruyne, Brian Walshe, Declan O'Sullivan: Towards a Project Centric Metadata Model and Lifecycle for Ontology Mapping Governance. Paper presented at iiWAS 2015 on the 13th of December 2015, Brussels, Belgium.
Creating and Consuming Metadata from Transcribed Historical Vital Records for Ingestion in a Long-Term Digital Preservation Platform (Christophe Debruyne)
Dolores Grant, Christophe Debruyne, Rebecca Grant, Sandra Collins:
Creating and Consuming Metadata from Transcribed Historical Vital Records for Ingestion in a Long-Term Digital Preservation Platform - (Short Paper). OTM Workshops 2015: 445-450
1. +
Grounding Ontologies
with Social Processes and
Natural Language
2012-04-26
IFIP WG 12.7 Workshop #2
2. +
Definition of Ontology in
Computer Science
n A conceptualization is a mathematical construct that contains
abstract references to (1) objects, (2) relations, (3) functions,
and (4) events as may be observed in a given real world.
n An ontology is a shared, [first order] logical, computer-
stored, specification of such an agreed explicit
conceptualization.
n [Tarski 1908, Gruber 1993, Studer 2000, et al.].
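As a toy illustration of the definition above (not from the slides; all example members are made up), the four parts a conceptualization is said to contain can be written down as plain data:

```python
# A hypothetical conceptualization of a tiny "art" world, holding abstract
# references to (1) objects, (2) relations, (3) functions, and (4) events.
conceptualization = {
    "objects": {"artist1", "artwork1"},
    "relations": {("creates", ("artist1", "artwork1"))},
    "functions": {"gender_of": (("artist1", "F"),)},
    "events": {("exhibition_opening", "2012-04-26")},
}

# An ontology, in the slide's sense, would be a shared, logic-based,
# computer-stored specification agreed over such a conceptualization.
print(sorted(conceptualization))
```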
3. +
Definition of Ontologies in
Computer Science
n In summary: Semantics = Agreed Meaning
n Links symbols in autonomously developed systems to shared
reality
n Agreed among humans as cognitive agents
n Stored in ontologies
n key technology for interoperability
n ontologies ≠ data models, but provide annotation for them
n support both human- and system-based reasoning
5. +
Interoperation ≠ Integration
n The autonomous nature of actors needs to be respected
n Interoperation stems from a need or wish to communicate,
and collaborate
n → Motivates the need for agreements, contracts and the meaningful exchange of concepts
6. +
The need for dual perspectives
n Human perspective: high level reasoning about “shared”
concepts
n put humans “in the loop”
n natural language contexts
n System perspective : vocabulary agreements, lexons
n large volume data access
n low level reasoning
7. +
Ontology Engineering Methods:
Learning from Databases
n Technology matures: involve the less IT-gifted domain experts
n Natural language discourse analysis (NIAM, ORM) as used for
databases
n Use legacy data / output reports / interviews, abstraction
into fact types
n Lift data models into ontologies, remove application-specific
context
8. +
Developing Ontology-Grounded
Methods and Applications
n Communities of users / domain experts own the ontology.
Make use of discourse, social process and “legacy” resources
n Ontologies as approximations of perceived reality at type level! As ontologies evolve, they approximate the real world more closely
n Users / domain experts rule at every step
n Facts holding in a certain context (the community, see later)
9. +
DOGMA
“Double Articulation”: Ontological Commitments in DOGMA
Lexon Base
Commitment Layer
Applications
10. +
Commitments in DOGMA
n Commitment = < Selection, Encoding, Constraints >
n Where Selection = set of lexons with various Context-ids
n Encoding = reference mapping: Application symbols to lexon
terms
n Constraints = set of Ω-RIDL* statements (expressed in lexon
terms)
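The commitment structure above can be sketched as plain data structures. This is a hypothetical illustration, not STARLab's implementation: lexons are modeled as the DOGMA 5-tuples ⟨context, term1, role, co-role, term2⟩, and the table/column names and the Ω-RIDL-style constraint string are invented for the example.

```python
from typing import NamedTuple

class Lexon(NamedTuple):
    """A DOGMA lexon: <context, term1, role, co-role, term2>."""
    context: str   # context identifier (here: a community)
    term1: str
    role: str
    co_role: str
    term2: str

class Commitment(NamedTuple):
    """Commitment = <Selection, Encoding, Constraints>."""
    selection: frozenset   # set of lexons drawn from the lexon base
    encoding: dict         # application symbols -> lexon terms
    constraints: tuple     # Omega-RIDL statements, kept here as strings

lexon = Lexon("ArtCommunity", "Artist", "has", "is_of", "Gender")
commitment = Commitment(
    selection=frozenset({lexon}),
    encoding={"tbl_artist.gender_cd": "Gender"},        # hypothetical DB column
    constraints=("EACH Artist has AT MOST 1 Gender",),  # illustrative only
)
print(commitment.encoding["tbl_artist.gender_cd"])
```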
11. +
Towards Hybrid Ontology
Engineering
n Revisit discourse analysis, pragmatics, semiotics
n Model communities as 1st class citizens
n Formalize methodologies based on NL involvement of domain experts → revisit discourse analysis, pragmatics, semiotics
n Upgrading role of legacy systems in enterprises
n Scalable semantic re-exploitation of RDF and LOD resources
12. +
Grounding Ontologies with Social
Processes and Natural Language
n Hybrid Ontology Description (HOD) HΩ=<Ω,G>
n Ω is a DOGMA Ontology Description (Lexon base, commitments
and a mapping from terms to concepts)
n The contexts in hybrid ontology descriptions are communities
n G is a glossary, a triple with components
n Gloss, a set of linguistic, human-interpretable glosses; mappings from community-term pairs or lexons to glosses
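Under the slide's definition HΩ = ⟨Ω, G⟩, a hybrid ontology description can be sketched minimally as follows. All community names, terms, and glosses here are invented for illustration:

```python
# Omega: a DOGMA ontology description -- lexon base, commitments, and a
# mapping from (community, term) pairs to concepts.
omega = {
    "lexon_base": [("ArtCommunity", "Artist", "has", "is_of", "Gender")],
    "commitments": [],
    "term_to_concept": {("ArtCommunity", "Artist"): "concept:Artist"},
}

# G: a glossary mapping community-term pairs to human-interpretable glosses,
# grounding the formal terms in natural language.
glossary = {
    ("ArtCommunity", "Artist"): "A person who creates works of art.",
}

hybrid_ontology = (omega, glossary)

# Look up the gloss that grounds the term "Artist" for this community:
print(hybrid_ontology[1][("ArtCommunity", "Artist")])
```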
13. +
Method
Implementation of the ontology
OWL, RDF(S), …
E.g., with tools offered by the RDB2RDF community such as D2R Server.
Semantic Interoperation of IS through
Formalized Social Processes
19. +
Joint work with CVC on Ω and MTB
Co-evolution
20. +
Exploiting RDF thanks to Hybrid
Ontology Implementations
n Augmenting RDB2RDF
Mappings by means of Ω-RIDL
Commitments
n Adding semantics to the
database structure
21. +
Exploiting RDF thanks to Hybrid
Ontology Implementations
n Fact-oriented querying of RDF.
n LIST Artist NOT with Gender with Code = ‘M’
n In SPARQL:
SELECT DISTINCT ?a WHERE {
  ?a a myOnto0:Artist .
  OPTIONAL {
    ?g myOnto0:Gender_of_Artist ?a .
    ?g myOnto0:Gender_with_Code ?c .
  }
  FILTER(?c != "M" || !bound(?c))
}
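The logic of that query can be imitated over plain in-memory triples. This is a minimal sketch, not STARLab code: the data is made up, and the myOnto0 predicates are collapsed to bare strings. It shows the same pattern of an optional gender match plus a filter that also admits artists with no gender at all (the !bound case):

```python
triples = {
    # (subject, predicate, object) -- hypothetical example data
    ("artist1", "a", "Artist"),
    ("artist2", "a", "Artist"),
    ("artist3", "a", "Artist"),
    ("gender_f", "Gender_of_Artist", "artist1"),
    ("gender_f", "Gender_with_Code", "F"),
    ("gender_m", "Gender_of_Artist", "artist2"),
    ("gender_m", "Gender_with_Code", "M"),
    # artist3 has no gender triple at all (the OPTIONAL / !bound case)
}

def artists_not_male(triples):
    """Artists whose gender code differs from 'M', or who have no gender."""
    artists = {s for (s, p, o) in triples if p == "a" and o == "Artist"}
    result = set()
    for a in artists:
        # OPTIONAL part: gender nodes attached to this artist, if any
        genders = {g for (g, p, o) in triples
                   if p == "Gender_of_Artist" and o == a}
        codes = {c for (g, p, c) in triples
                 if p == "Gender_with_Code" and g in genders}
        # FILTER(?c != "M" || !bound(?c))
        if not codes or any(c != "M" for c in codes):
            result.add(a)
    return result

print(sorted(artists_not_male(triples)))  # prints ['artist1', 'artist3']
```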