Presentation about the collaboration between ADAPT and Ordnance Survey Ireland, given at the Linked Data Seminar -- Culture, Base Registries & Visualisations, held in Amsterdam, The Netherlands, on 2 December 2016
The eXtensible Business Reporting Language (XBRL) is a standard for business and financial information reporting. It is based on XML, so instance documents based on XBRL, e.g. a quarterly report, are highly constrained by XML's document-oriented nature. This makes it more difficult to perform queries that mix information from filings from different dates, companies, or accounting principles than with a formalism based on a graph model instead of a tree model. Semantic Web technologies provide a graph model that facilitates mashing up different XBRL sources. We have put this approach into practice by mapping the XBRL filings available from the SEC's EDGAR program to the Resource Description Framework (RDF) and the XML Schema taxonomies these filings are based on to the Web Ontology Language (OWL). The resulting semantic metadata, though highly tied to the XML structure it is mapped from, benefits from Semantic Web technologies and tools that facilitate integration and cross-querying, even together with other parts of the Web of Linked Data.
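To make the mapping idea concrete, here is a minimal stdlib-only Python sketch that turns one XBRL fact into N-Triples. The instance snippet, namespaces and property URIs are invented for illustration; this is not the actual EDGAR mapping described above.

```python
# Minimal sketch: lift an XBRL fact element into RDF triples (N-Triples).
# All URIs, the us-gaap namespace year, and the property names are
# illustrative assumptions, not the mapping used in the talk.
import xml.etree.ElementTree as ET

XBRL_INSTANCE = """<xbrl xmlns:us-gaap="http://fasb.org/us-gaap/2017">
  <us-gaap:Revenues contextRef="FY2016" unitRef="USD">1250000</us-gaap:Revenues>
</xbrl>"""

BASE = "http://example.org/filing/ACME-2016#"

def xbrl_to_ntriples(xml_text):
    """Turn each top-level XBRL fact element into RDF triples."""
    root = ET.fromstring(xml_text)
    triples = []
    for i, fact in enumerate(root):
        subj = f"<{BASE}fact{i}>"
        # The element's namespace-qualified name becomes the concept URI.
        concept = fact.tag.replace("{", "").replace("}", "#")
        triples.append(f"{subj} <http://example.org/xbrl#concept> <{concept}> .")
        triples.append(f'{subj} <http://example.org/xbrl#context> "{fact.get("contextRef")}" .')
        triples.append(f'{subj} <http://example.org/xbrl#value> "{fact.text}" .')
    return triples

for t in xbrl_to_ntriples(XBRL_INSTANCE):
    print(t)
```

Once the facts are triples, a SPARQL engine can join filings across dates and companies in a way the tree-shaped XML does not allow.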
Big Data Europe SC6 WS #3: Big Data Europe Platform: Apps, challenges, goals ... (BigData_Europe)
Talk at the Big Data Europe SC6 workshop #3, held on 11 September 2017 in Amsterdam, co-located with the SEMANTiCS 2017 conference: The Big Data Europe Platform: Apps, challenges, goals, by Aad Versteden, TenForce.
Health Sciences Research Informatics, Powered by Globus (Globus)
This presentation was given at the 2019 GlobusWorld Conference in Chicago, IL by Jonathan Silverstein and Mike Davis, both from University of Pittsburgh.
Providing geospatial information as Linked Open Data (Pat Kenny)
ADAPT is revolutionising the way people can seamlessly interact with digital content, systems and each other and enabling users to achieve unprecedented levels of access and efficiency. - Prof. Declan O'Sullivan, Trinity College Dublin. Address given at Ordnance Survey Ireland GI R&D Initiatives, Tuesday, 22 March 2016, 13:00 to 20:30 (GMT), Maynooth University.
Integration and Exploration of Financial Data using Semantics and Ontologies (Roberto García)
Keynote at the Eurofiling XBRL Week, Academic Track, 6-9 June 2017, hosted by the European Central Bank, Frankfurt, Germany. The keynote reported on one of the first attempts to move a significant amount of XBRL to the Semantic Web, modelling XBRL XML with RDF and XBRL taxonomies with OWL.
A Web-scale Study of the Adoption and Evolution of the schema.org Vocabulary ... (Robert Meusel)
Promoted by major search engines, schema.org has become a widely adopted standard for marking up structured data in HTML web pages. In this paper, we use a series of large-scale Web crawls to analyze the evolution and adoption of schema.org over time. The availability of data from different points in time, for both the schema and the websites deploying data, allows for a new kind of empirical analysis of standards adoption, which has not been possible before. To conduct our analysis, we compare different versions of the schema.org vocabulary to the data that was deployed on hundreds of thousands of Web pages at different points in time. We measure both top-down adoption (i.e., the extent to which changes in the schema are adopted by data providers) as well as bottom-up evolution (i.e., the extent to which the actually deployed data drives changes in the schema). Our empirical analysis shows that both processes can be observed.
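On invented term sets, the two measures can be sketched with plain set operations; the vocabulary versions and deployed terms below are illustrative, not the paper's data.

```python
# Toy sketch of the study's two measures, on invented term sets:
#  - top-down adoption: deployed terms that a newer schema version introduced
#  - bottom-up evolution: deployed terms not (yet) in the schema, which may
#    drive future changes to it
schema_v1 = {"Person", "Place", "Product"}
schema_v2 = {"Person", "Place", "Product", "Recipe"}
deployed  = {"Person", "Recipe", "review"}  # "review" never standardised

new_terms         = schema_v2 - schema_v1
top_down_adopted  = deployed & new_terms   # providers picked up "Recipe"
bottom_up_pending = deployed - schema_v2   # "review" could drive evolution

print(sorted(top_down_adopted), sorted(bottom_up_pending))
```

The real study applies this kind of comparison to vocabulary releases and crawl snapshots at multiple points in time.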
On-the-fly Integration of Static and Dynamic Linked Data (aharth)
Slides of COLD 2013 Paper "On-the-fly Integration of Static and Dynamic Linked Data", Andreas Harth (KIT), Craig Knoblock (USC), Steffen Stadtmüller (KIT), Rudi Studer (KIT), Pedro Szekely (USC)
Search Joins with the Web - ICDT2014 Invited Lecture (Chris Bizer)
The talk will discuss the concept of Search Joins. A Search Join is a join operation which extends a local table with additional attributes based on the large corpus of structured data that is published on the Web in various formats. The challenge for Search Joins is to decide which Web tables to join with the local table in order to deliver high-quality results. Search joins are useful in various application scenarios. They allow for example a local table about cities to be extended with an attribute containing the average temperature of each city for manual inspection. They also allow tables to be extended with large sets of additional attributes as a basis for data mining, for instance to identify factors that might explain why the inhabitants of one city claim to be happier than the inhabitants of another.
In the talk, Christian Bizer will draw a theoretical framework for Search Joins and will highlight how recent developments in the context of Linked Data, RDFa and Microdata publishing, public data repositories as well as crowd-sourcing integration knowledge contribute to the feasibility of Search Joins in an increasing number of topical domains.
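As a toy illustration of the concept (not Bizer's implementation), a Search Join over small in-memory tables might look like the sketch below; the data and the overlap heuristic are invented.

```python
# Toy Search Join: extend a local table with an attribute pulled from
# whichever "web table" best covers the local key column. The city and
# temperature data, and the coverage heuristic, are illustrative only.

local = [{"city": "Berlin"}, {"city": "Rome"}, {"city": "Oslo"}]

web_tables = [
    {"key": "country", "rows": {"Germany": 83, "Italy": 59}},               # wrong key
    {"key": "city", "rows": {"Berlin": 9.7, "Rome": 15.5, "Paris": 12.3}},  # avg. temp (degrees C)
]

def search_join(local, tables, key, new_attr):
    """Pick the web table with the highest overlap on `key`, then left-join."""
    keys = {row[key] for row in local}
    best = max(tables, key=lambda t: len(keys & set(t["rows"])) if t["key"] == key else -1)
    return [dict(row, **{new_attr: best["rows"].get(row[key])}) for row in local]

print(search_join(local, web_tables, "city", "avg_temp"))
```

The hard part at Web scale, as the talk stresses, is exactly this table-selection step: deciding which of millions of candidate Web tables yields a high-quality join.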
Generating Executable Mappings from RDF Data Cube Data Structure Definitions (Christophe Debruyne)
Data processing is increasingly the subject of various internal and external regulations, such as GDPR which has recently come into effect. Instead of assuming that such processes avail of data sources (such as files and relational databases), we approach the problem in a more abstract manner and view these processes as taking datasets as input. These datasets are then created by pulling data from various data sources. Taking a W3C Recommendation for prescribing the structure of and for describing datasets, we investigate an extension of that vocabulary for the generation of executable R2RML mappings. This results in a top-down approach where one prescribes the dataset to be used by a data process and where to find the data, and where that prescription is subsequently used to retrieve the data for the creation of the dataset “just in time”. We argue that this approach to the generation of an R2RML mapping from a dataset description is the first step towards policy-aware mappings, where the generation takes into account regulations to generate mappings that are compliant. In this paper, we describe how one can obtain an R2RML mapping from a data structure definition in a declarative manner using SPARQL CONSTRUCT queries, and demonstrate it using a running example. Some of the more technical aspects are also described.
Reference: Christophe Debruyne, Dave Lewis, Declan O'Sullivan: Generating Executable Mappings from RDF Data Cube Data Structure Definitions. OTM Conferences (2) 2018: 333-350
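The paper performs the derivation declaratively with SPARQL CONSTRUCT queries; as a rough plain-Python stand-in for that idea, the sketch below generates an R2RML TriplesMap (in Turtle) from a toy data structure definition. The DSD shape, table and column names are invented for illustration.

```python
# Sketch: derive an R2RML mapping from a dataset's structure definition.
# The paper does this with SPARQL CONSTRUCT over an RDF Data Cube DSD;
# here a simple dict stands in for the DSD, and all names are invented.

dsd = {
    "table": "OBSERVATIONS",  # where to find the data
    "components": [           # what the dataset looks like
        {"property": "http://example.org/ns#refArea",    "column": "AREA"},
        {"property": "http://example.org/ns#population", "column": "POP"},
    ],
}

def dsd_to_r2rml(dsd):
    """Emit one R2RML TriplesMap (Turtle) per data structure definition."""
    lines = [
        "@prefix rr: <http://www.w3.org/ns/r2rml#> .",
        "<#ObsMap> a rr:TriplesMap ;",
        f'  rr:logicalTable [ rr:tableName "{dsd["table"]}" ] ;',
        '  rr:subjectMap [ rr:template "http://example.org/obs/{ID}" ] ;',
    ]
    for c in dsd["components"]:
        lines.append(
            f'  rr:predicateObjectMap [ rr:predicate <{c["property"]}> ; '
            f'rr:objectMap [ rr:column "{c["column"]}" ] ] ;'
        )
    # Close the TriplesMap: final statement ends with "." rather than ";".
    lines[-1] = lines[-1].rstrip(";").rstrip() + " ."
    return "\n".join(lines)

print(dsd_to_r2rml(dsd))
```

An R2RML processor can then execute the generated mapping against the named table to materialise the dataset "just in time", as the abstract describes.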
Adoption of the Linked Data Best Practices in Different Topical Domains (Chris Bizer)
Slides from the presentation of the following paper:
Max Schmachtenberg, Christian Bizer, Heiko Paulheim: Adoption of the Linked Data Best Practices in Different Topical Domains. 13th International Semantic Web Conference (ISWC2014) - RDB Track, pp. 245-260, Riva del Garda, Italy, October 2014.
Paper URL:
http://dws.informatik.uni-mannheim.de/fileadmin/lehrstuehle/ki/pub/SchmachtenbergBizerPaulheim-AdoptionOfLinkedDataBestPractices.pdf
Abstract:
The central idea of Linked Data is that data publishers support applications in discovering and integrating data by complying with a set of best practices in the areas of linking, vocabulary usage, and metadata provision. In 2011, the State of the LOD Cloud report analyzed the adoption of these best practices by linked datasets within different topical domains. The report was based on information that was provided by the dataset publishers themselves via the datahub.io Linked Data catalog. In this paper, we revisit and update the findings of the 2011 State of the LOD Cloud report based on a crawl of the Web of Linked Data conducted in April 2014. We analyze how the adoption of the different best practices has changed and present an overview of the linkage relationships between datasets in the form of an updated LOD cloud diagram, this time not based on information from dataset providers, but on data that can actually be retrieved by a Linked Data crawler. Among others, we find that the number of linked datasets has approximately doubled between 2011 and 2014, that there is increased agreement on common vocabularies for describing certain types of entities, and that provenance and license metadata is still rarely provided by the data sources.
balloon Fusion: SPARQL Rewriting Based on Unified Co-Reference Information (Kai Schlegel)
Presentation for the 5th International Workshop on Data Engineering meets the Semantic Web (DESWeb), in conjunction with ICDE 2014, Chicago, IL, USA, March 31, 2014, given by Kai Schlegel.
Field Data Collecting, Processing and Sharing: Using Web Service Technologies (Niroshan Sanjaya)
Collecting, distributing and analyzing field data is a crucial part of any geospatial study. Field data collection tools and methods have developed significantly thanks to technologies such as Global Navigation Satellite Systems (GNSS) and the spread of smartphones. Accurate field data collection is also necessary for broad spatial data analysis and proper decision making. The development of Web technologies has made it possible to share data and information effectively. This study develops a framework based on Geospatial Semantic Web technologies for disseminating and processing field data. Experimental results from an implemented prototype show that the proposed framework allows field data to be visualized and processed in any context. The system is capable of distributing and processing field data through a web application. Moreover, the study demonstrates the importance and capabilities of web services for spatial data gathering and processing. The system has been built on Free and Open Source Software (FOSS) packages such as the ZOO-Project and Open Data Kit, and users can further improve or deploy it for a variety of studies.
BioIT 2018 'Easier integration and enrichment of your data by making public d... (Hans Constandt)
Joint presentation, Hans Constandt, CEO, ONTOFORCE and Chris Evelo, Ph.D., Maastricht University and ELIXIR
Public data has different levels of FAIRness. The higher the FAIRness level of a data source, the easier it is to use this source for data integration and linking. One of the goals of the intergovernmental organization ELIXIR is to facilitate the improvement of finding and sharing data and exchange of expertise in life science. ONTOFORCE focusses on integrating and linking public and private data by bringing data to a higher level of FAIRness. In this joint presentation, we will discuss what ELIXIR is doing to make public data more FAIR and combine this with showing examples of what the direct benefits are for data searching, browsing and visual analytics on the DISQOVER platform by making and using more FAIR internal, private or third party data.
Integration and Management of Diverse Environmental Data Sets (Cameron Kiddle)
Presentation I gave as part of the New Frontiers in Data Integration session at Summit 09 in Banff on Oct. 14, 2009. It discusses some current work that the Grid Research Centre is doing in relation to data management and integration.
BioSHaRE: Opal and Mica: a software suite for data harmonization and federati... (Lisette Giepmans)
BioSHaRE conference July 28th, 2015, Milan - Latest tools and services for data sharing
Stream 1: Tools for data sharing analysis and enhancement
Opal is a software application to manage study data, and includes a feature enabling data harmonisation and data integration across studies. As such, Opal supports the development and implementation of processing algorithms required to transform study-specific data into a common harmonised format. Moreover, when connected to a Mica web interface, Opal allows users to seamlessly and securely search distributed datasets across several Opal instances.
Opal is freely available for download at www.obiba.org and is provided under the GPL3 open source licence. All studies or networks of studies using the Opal software for data storage, data management or data harmonisation must mention Opal in manuscripts, presentations, or other works made public and include a web link to the Maelstrom Research website (www.maelstrom-research.org).
Mica is a software application developed to create web portals for individual epidemiological studies or for study consortia. Features supported by Mica include a standardised study catalogue, study-specific and harmonised variable data dictionary browsers, online data access request forms, and communication tools (e.g. forums, events, news).
When used in conjunction with the Opal software, Mica also allows authenticated users (i.e. with username and password) to perform distributed queries on the content of study databases hosted on remote servers, and retrieve summary statistics of that content.
Mica is a Java-based, cross-platform, client-server application that comes with two clients: the administrators' user interface and a content management system (Drupal) used to render the catalogue content on the study or consortium website.
Mica is freely available for download at www.obiba.org and is provided under the GPL3 open source license.
2015 FOSS4G Track: Open Specifications for the Storage, Transport and Process... (GIS in the Rockies)
This talk presents an overview of some of the most important Open Specifications (OS) for the storage, transport and processing of geospatial data and why they matter for the development of the next generation of geospatial systems and data infrastructures. What is the importance of being Open? What is the relationship of OS and geospatial software (both FOSS4G and private/proprietary software)? A Web-based system architecture based on OS and FOSS4G will be presented.
RO-Crate: A framework for packaging research products into FAIR Research Objects (Carole Goble)
RO-Crate: A framework for packaging research products into FAIR Research Objects, presented to the Research Data Alliance (RDA) Data Fabric / GEDE FAIR Digital Object meeting, 2021-02-25.
PaNOSC Overview - ExPaNDS kick-off meeting - September 2019 (PaNOSC)
This presentation gives an overview of the H2020 INFRAEOSC PaNOSC project, showcasing its activities and expected results, as well as its vision: to create a PaN scientific commons.
The title of this talk is a crass attempt to be catchy and topical, by referring to the recent victory of Watson in Jeopardy.
My point (perhaps confusingly) is not that new computer capabilities are a bad thing. On the contrary, these capabilities represent a tremendous opportunity for science. The challenge that I speak to is how we leverage these capabilities without computers and computation overwhelming the research community in terms of both human and financial resources. The solution, I suggest, is to get computation out of the lab—to outsource it to third party providers.
Abstract follows:
We have made much progress over the past decade toward effective distributed cyberinfrastructure. In big-science fields such as high energy physics, astronomy, and climate, thousands benefit daily from tools that enable the distributed management and analysis of vast quantities of data. But we now face a far greater challenge. Exploding data volumes and new research methodologies mean that many more--ultimately most?--researchers will soon require similar capabilities. How can we possibly supply information technology (IT) at this scale, given constrained budgets? Must every lab become filled with computers, and every researcher an IT specialist?
I propose that the answer is to take a leaf from industry, which is slashing both the costs and complexity of consumer and business IT by moving it out of homes and offices to so-called cloud providers. I suggest that by similarly moving research IT out of the lab, we can realize comparable economies of scale and reductions in complexity, empowering investigators with new capabilities and freeing them to focus on their research.
I describe work we are doing to realize this approach, focusing initially on research data lifecycle management. I present promising results obtained to date, and suggest a path towards large-scale delivery of these capabilities. I also suggest that these developments are part of a larger "revolution in scientific affairs," as profound in its implications as the much-discussed "revolution in military affairs" resulting from more capable, low-cost IT. I conclude with some thoughts on how researchers, educators, and institutions may want to prepare for this revolution.
Wide access to spatial Citizen Science data - ECSA Berlin 2016 (COBWEB Project)
Authors: Paul van Genuchten, Lieke Verhelst, Clemens Portele
Presented at the European Citizen Science Association conference Berlin, May 2016.
One of the objectives of COBWEB is to publish citizen science data to GEOSS, the Global Earth Observation System of Systems. GEOSS has a focus on spatial standards (CSW, SensorWeb, WMS/WFS). However, a major part of citizen science community is not aware of these standards, and average users use search engines to discover data and common formats to analyse data. So how do we bridge the gap between services in GEOSS and search engines?
Jana Parvanova, Vladimir Alexiev and Stanislav Kostadinov. In workshop Collaborative Annotations in Shared Environments: metadata, vocabularies and techniques in the Digital Humanities (DH-CASE 2013). Collocated with DocEng 2013. Florence, Italy, Sep 2013.
Science Services and Science Platforms: Using the Cloud to Accelerate and Dem...Ian Foster
Ever more data- and compute-intensive science makes computing increasingly important for research. But for advanced computing infrastructure to benefit more than the scientific 1%, we need new delivery methods that slash access costs, new sustainability models beyond direct research funding, and new platform capabilities to accelerate the development of new, interoperable tools and services.
The Globus team has been working towards these goals since 2010. We have developed software-as-a-service methods that move complex and time-consuming research IT tasks out of the lab and into the cloud, thus greatly reducing the expertise and resources required to use them. We have demonstrated a subscription-based funding model that engages research institutions in supporting service operations. And we are now also showing how the platform services that underpin Globus applications can accelerate the development and use of an integrated ecosystem of advanced science applications, such as NCAR’s Research Data Archive and OSG Connect, thus enabling access to powerful data and compute resources by many more people than is possible today.
In this talk, I introduce Globus services and the underlying Globus platform. I present representative applications and discuss opportunities that this platform presents for both small science and large facilities.
Keynote presentation delivered at ELAG 2013 in Gent, Belgium, on May 29 2013. Discusses Research Objects and the relationship to work my team has been involved in during the past couple of years: OAI-ORE, Open Annotation, Memento.
PROV-O: The W3C Provenance Ontology. Provenance is key for describing the evolution of a resource, the entity responsible for its changes and how these changes affect its final state. A proper description of the provenance of a resource shows who has its attribution and can help resolving whether it can be trusted or not. This tutorial will provide an overview of the W3C PROV data model and its serialization as an OWL ontology. The tutorial will incrementally explain the features of the PROV data model, from the core starting terms to the most complex concepts. Finally, the tutorial will show the relation between PROV-O and the Dublin Core Metadata terms.
Similar to Serving Ireland's Geospatial Information as Linked Data (20)
Serving Ireland's Geospatial Information as Linked Data
1. Serving Ireland's Geospatial Information as Linked Data on the Web
Dr. Christophe Debruyne
ADAPT @ Trinity College Dublin
The ADAPT Centre is funded under the SFI Research Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development Fund.
2. www.adaptcentre.ie
What is Linked Data?
The Web of Documents was created by humans for humans; the links between documents bore little meaning for machines, and documents provided little structured information.
Structured information can be found on the Web, such as XML, CSV, etc., but how do we link data rather than documents, and create a global “database” of information?
Linked Data is a global initiative to publish and interlink structured (open) data on the Web using a combination of standardized technologies (HTTP, URI, RDF) such that…
• any agent can explore the data and links…
• that is fit for the agent (human or computer based)…
• via a “protocol” and…
• allowing one to build innovative applications.
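As a minimal sketch of these ingredients (the URIs are illustrative, not taken from an actual dataset): an HTTP-dereferenceable URI identifies a thing, RDF describes it, and a typed link connects it to another dataset.

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .

# A hypothetical resource; dereferencing this URI over HTTP returns the RDF below.
<http://data.example.com/city/Dublin>
    rdfs:label "Dublin"@en ;
    # A typed link into another dataset is what turns isolated data
    # into a global “database”.
    owl:sameAs <http://dbpedia.org/resource/Dublin> .
```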
3. Context
• In 2014, Ordnance Survey Ireland (OSi) delivered a newly developed spatial data storage model known as Prime2.
• With Prime2, OSi moved from a traditional map-centric model towards an object-oriented model from which various types of mapping and data services can be produced.
• OSi furthermore aims to adopt Linked Data to enable third parties to explore and consume some of OSi's authoritative datasets. But how? Can Prime2 form the basis for that?
4. OSi Linked Data Projects
Goal: To lay the foundations of a semantic architecture and Linked Data platform for OSi, taking into account OSi's current technology stack as well as best practices and guidelines from the geospatial information domain and industry.
Starting from the boundaries dataset: these data are open and already available on http://data.gov.ie/, but not as Linked Data.
5. Requirements Analysis
• Requirements analysis included engagement with the Central Statistics Office and the Department of Public Expenditure and Reform as stakeholders.
• Formulation of two use case scenarios from which requirements were distilled:
1. Accessing the same features with different geometric representations, i.e., different generalizations or “resolutions”.
2. Capturing the provenance and evolution of features and their geometric representations, e.g., Statutory Instruments that change boundaries.
6. Knowledge Representation and Organization
Ontologies
• Features and Geometries based on GeoSPARQL
• Provenance using Statutory Instruments based on PROV-O
• Static and dynamic boundaries (and their relationships)
• Necessary ontologies developed and published
Workshops with DPER and CSO on a URI Strategy
• Information Resources vs. Non-Information Resources
• Using Prime2’s GUIDs and a hint of the instance’s nature
Cleverly using (named) graphs to support both use cases
Mapping the Prime2 database to RDF with R2RML
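As a sketch of what such a mapping can look like (the table and column names here are hypothetical, not OSi's actual Prime2 schema), an R2RML triples map turns each database row into a geo:Feature and places the triples in a named graph:

```turtle
@prefix rr:   <http://www.w3.org/ns/r2rml#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix geo:  <http://www.opengis.net/ont/geosparql#> .

<#BoundaryMapping>
    rr:logicalTable [ rr:tableName "BOUNDARY" ] ;   # hypothetical table
    rr:subjectMap [
        # Prime2's GUIDs become the stable part of each feature URI.
        rr:template "http://data.example.com/feature/{GUID}" ;
        rr:class geo:Feature ;
        # Named graphs keep the two use cases apart.
        rr:graphMap [ rr:constant <http://data.example.com/graph/default> ]
    ] ;
    rr:predicateObjectMap [
        rr:predicate rdfs:label ;
        rr:objectMap [ rr:column "NAME" ]           # hypothetical column
    ] .
```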
7. Knowledge Representation and Organization
[Diagram] An R2RML processor executes the R2RML mapping over the Prime2 database, populating the triplestore in accordance with the ontologies.
Graphs for Use Case 1:
• default: types, labels, links, 100m resolution
• 50 meters: 50m resolution
• 20 meters: 20m resolution, default geometry
• links: links with the LOD cloud
Graphs for Use Case 2:
• default: Activities [PROV-O], Entities [PROV-O], history of the 100m resolution
• 50 meters: history of the 50m resolution
• 20 meters: history of the 20m resolution
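With the resolutions partitioned into named graphs like this, a client selects the level of detail simply by naming a graph. A sketch (the graph and feature URIs are hypothetical):

```sparql
PREFIX geo:  <http://www.opengis.net/ont/geosparql#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

# Labels and types come from the default graph; the geometry is taken
# from the named graph holding the 20m generalization.
SELECT ?label ?wkt WHERE {
  <http://data.example.com/feature/A> rdfs:label ?label .
  GRAPH <http://data.example.com/graph/20m> {
    <http://data.example.com/feature/A> geo:hasGeometry/geo:asWKT ?wkt .
  }
}
```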
8. Example evolution of boundaries
[Diagram: features A, B, C, and D before and after a change to A's boundary]
# Before the change:
<http://data.example.com/feature/A>
    a geo:Feature ;
    rdfs:label "A" ;
    geo:hasGeometry [
        a geo:Geometry ;
        geo:asWKT "MULTIPOLYGON (((0 1, 0 2, 3 2, 3 1, 0 1)))"^^geo:wktLiteral
    ] .
<http://data.example.com/feature/B> … .
<http://data.example.com/feature/C> … .
<http://data.example.com/feature/D> … .

# After the change:
<http://data.example.com/feature/A>
    a geo:Feature ;
    rdfs:label "A" ;
    geo:hasGeometry [
        a geo:Geometry ;
        geo:asWKT "MULTIPOLYGON (((1 1, 1 2, 3 2, 3 1, 1 1)))"^^geo:wktLiteral ;
        prov:wasGeneratedBy <http://data.example.com/change/1> ;
        prov:wasRevisionOf [
            a geo:Geometry ;
            geo:asWKT "MULTIPOLYGON (((0 1, 0 2, 3 2, 3 1, 0 1)))"^^geo:wktLiteral
        ]
    ] .
<http://data.example.com/feature/B> … .
<http://data.example.com/feature/C> … .
<http://data.example.com/feature/D> … .

<http://data.example.com/change/1>
    a prov:Activity ;
    prov:startedAtTime "2000-01-01T12:00:00"^^xsd:dateTime ;
    prov:endedAtTime "2000-01-01T12:00:00"^^xsd:dateTime ;
    prov:used <http://data.example.com/instrument/1> .

<http://data.example.com/instrument/1>
    a prov:Entity ;
    <http://purl.org/dc/elements/1.1/date> "2000-01-01" ;
    <http://purl.org/dc/elements/1.1/identifier> "1" ;
    <http://purl.org/dc/elements/1.1/title> "Change A" .
Legal Instrument ordering the change of A's boundary on 2000/01/01.
Simplified example: different graphs are used for different resolutions; types for activities and entities omitted.
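Given such provenance statements, the second use case becomes a straightforward query; a sketch over the example data above:

```sparql
PREFIX geo:  <http://www.opengis.net/ont/geosparql#>
PREFIX prov: <http://www.w3.org/ns/prov#>
PREFIX dc:   <http://purl.org/dc/elements/1.1/>

# Which legal instrument ordered the change that produced A's
# current geometry, and when did the change take effect?
SELECT ?title ?when WHERE {
  <http://data.example.com/feature/A> geo:hasGeometry ?geom .
  ?geom prov:wasGeneratedBy ?activity .
  ?activity prov:used ?instrument ;
            prov:endedAtTime ?when .
  ?instrument dc:title ?title .
}
```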
9. Conceptual Architecture of the LD Platform
[Diagram] Components: Proxy Server; TPF Server and TPF Web Client; Linked Data Frontend; Website (dumps); Ontologies; SPARQL Endpoint; Triplestore; TPF Client.
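The TPF (Triple Pattern Fragments) server in this architecture answers only single triple patterns over plain HTTP, leaving query planning and execution to the client. As an illustration only: the exact request form is advertised by each server's hypermedia controls, and this URL is hypothetical.

```
GET /fragments?subject=&predicate=rdfs%3Alabel HTTP/1.1
Host: data.example.com
Accept: text/turtle
```

The response contains the triples matching the pattern plus paging and count metadata, which the TPF client uses to plan and evaluate the rest of the SPARQL query itself, keeping the load on the server low and predictable.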
11. Ongoing work…
• Publication of boundary data used by CENSUS 2011
• EDs, Towns, Settlements, etc. published as Linked Data and linked with http://data.cso.ie/
• Dynamic vs. static boundary datasets
• Merge of North and South Tipperary
• Creation of links with DBpedia, GeoNames, TCD Library, etc.
• Creation of a spatial component to a pollution dataset
12. Conclusion and Future Directions
We have used OSi's Prime2 dataset to publish their authoritative geospatial data as Linked Data on the Web, by creating R2RML mappings and using ontologies that extend GeoSPARQL and PROV-O.
Future directions include:
• Transforming the geometries of buildings, the evolution thereof, and how to link these with the documents that inform OSi of such changes (“closed” Linked Data)
• Access control mechanisms for such “closed” Linked Data