Overview of recent developments in the RDA Vocabulary Services Interest Group, the W3C/OGC Spatial Data on the Web Working Group, and the ICSU/CODATA Commission on Standards.
Presented at the National Oceanography Centre, Liverpool, UK, 2017-11-17
PROV ontology supports alignment of observational data (models), Simon Cox
Paper presented at MODSIM 2017, session C2:
https://www.mssanz.org.au/modsim2017/papersbysession.html
The W3C PROV ontology provides a flexible process-flow model that can capture many specific applications. A provenance trace is the retrospective view of a workflow, with specific instance data added. Thus it provides a basis for the description of any chain of activities which generate interesting outputs, such as observations, actuations, or acts of sampling. Furthermore, its relatively generic structure and naming allows it to be used as an alignment bridge with other ontologies that have previously challenged simple mappings. In this paper we will show a harmonization of a number of important ontology patterns that can be linked through the PROV-O OWL implementation of PROV.
The alignment stack is as follows:
- PROV-O aligned to W3C OWL-Time
- PROV-O aligned to BFO
- W3C SSN/SOSA aligned to PROV-O
- OBOE, OBI and BCO (from the OBO Foundry) aligned to SOSA/SSN and thus PROV-O
Some of the alignments have been proposed previously, but the set described here augments them and is larger in aggregate than previous work.
The availability of these alignments supports the fusion of data from a range of disciplines, particularly in the earth and environmental sciences, and especially of observational data where the acts of sampling and observation are understood in a provenance context.
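The subsumption at the heart of this stack can be sketched as plain rdfs:subClassOf assertions. A minimal Python sketch follows; the compact CURIEs and the specific class pairs are illustrative assumptions, not the published alignment axioms, which are richer than simple subclassing:

```python
# Sketch of the alignment stack as subClassOf pairs (illustrative only).
SUBCLASS = {
    # W3C SSN/SOSA aligned to PROV-O: the act of observing is an activity
    ("sosa:Observation", "prov:Activity"),
    ("sosa:Sampling", "prov:Activity"),
    # Domain ontologies aligned to SOSA/SSN, and thus transitively to PROV
    ("oboe:Observation", "sosa:Observation"),
    ("obi:assay", "sosa:Observation"),
}

def superclasses(cls, subclass_pairs):
    """All classes reachable from `cls` via transitive subClassOf."""
    found, frontier = set(), {cls}
    while frontier:
        nxt = {sup for (sub, sup) in subclass_pairs if sub in frontier}
        frontier = nxt - found
        found |= nxt
    return found

# An oboe:Observation is, transitively, a prov:Activity:
print("prov:Activity" in superclasses("oboe:Observation", SUBCLASS))  # → True
```

The point of the sketch is only that one hop of alignment (to SOSA/SSN) buys a second hop (to PROV) for free, which is what makes PROV usable as a bridge.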
Pitfalls in alignment of observation models resolved using PROV as an upper o..., Simon Cox
AGU Fall Meeting, 2015-12-16
A number of models for observation metadata have been developed in the earth and environmental science communities, including OGC’s Observations and Measurements (O&M), the ecosystems community’s Extensible Observation Ontology (OBOE), the W3C’s Semantic Sensor Network Ontology (SSNO), and the CUAHSI/NSF Observations Data Model v2 (ODM2). In order to combine data formalized in the various models, mappings between these must be developed. In some cases this is straightforward: since ODM2 took O&M as its starting point, their terminology is almost completely aligned. In the eco-informatics world observations are almost never made in isolation from other observations, so OBOE pays particular attention to groupings, with multiple atomic ‘Measurements’ in each oboe:Observation, which does not have a result of its own and thus plays a different role to an om:Observation. And while SSN also adopted terminology from O&M, mapping is confounded by the fact that SSNO uses DOLCE as its foundation and places ssn:Observations as ‘Social Objects’ which are explicitly disjoint from ‘Events’, while O&M is formalized as part of the ISO/TC 211 harmonised (UML) model and sees om:Observations as value assignment activities.
Foundational ontologies (such as BFO, GFO, UFO or DOLCE) can provide a framework for alignment, but different upper ontologies can be based in profoundly different world-views, and use of incommensurate frameworks can confound rather than help. A potential resolution is provided by comparing recent studies that align SSNO and O&M, respectively, with the PROV ontology. PROV provides just three base classes: Entity, Activity and Agent. om:Observation is sub-classed from prov:Activity, while ssn:Observation is sub-classed from prov:Entity. This confirms that, despite the same name, om:Observation and ssn:Observation denote different aspects of the observation process: the observation event, and the record of the observation event, respectively.
Alignment with the simple PROV classes has clarified this issue in a way that had previously proved difficult to resolve. The simple 3-class base model from PROV appears to provide just enough logic to serve as a lightweight upper ontology, particularly for workflow or process-based information.
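The Activity/Entity distinction can be made concrete with a toy provenance trace. In the sketch below the classes stand in for the PROV terms, and all labels are illustrative:

```python
# Toy PROV trace illustrating the distinction the alignment exposes:
# om:Observation is the event (prov:Activity); ssn:Observation (2011)
# is the record of that event (prov:Entity).
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Activity:                     # stands in for prov:Activity
    label: str
    generated: list = field(default_factory=list)

@dataclass
class Entity:                       # stands in for prov:Entity
    label: str
    was_generated_by: Optional[Activity] = None

obs_event = Activity("om:Observation: reading the thermometer at 10:00")
obs_record = Entity("ssn:Observation: the record of that reading", obs_event)
obs_event.generated.append(obs_record)

# The two 'Observation' classes sit at opposite ends of prov:wasGeneratedBy
print(obs_record.was_generated_by is obs_event)   # → True
```

The same-named classes are not in conflict once seen this way: one is the activity, the other is an entity generated by it.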
Short Update on ICOS ERIC by ICOS ERIC Director General Werner Kutsch at the 2nd ICOS Science Conference 2016 in Helsinki, Finland, 27-29 September 2016.
Ontology alignment – is PROV-O good enough?, Simon Cox
Presentation to OGC Geosemantics Summit, 2015-06-03.
I explain the incompatibility between the Observation classes in SSN and O&M, and how this can be understood most clearly through alignment with PROV. Compared with other 'upper ontologies', PROV provides a very easy to understand framework, with only 3 top-level classes, two of which are disjoint.
Presentation by ICOS DG Werner Kutsch at the UNFCCC Earth Information Day in UN COP22 on Tue 8 November 2016.
See the Earth Information Day programme: http://unfccc.int/science/workstreams/items/9949.php
SENESCHAL: Semantic ENrichment Enabling Sustainability of arCHAeological Link..., CIGScotland
Presented at Linked Open Data: current practice in libraries and archives (Cataloguing & Indexing Group in Scotland 3rd Linked Open Data Conference), Edinburgh, 18 Nov 2013
A Metadata Application Profile for KOS Vocabulary Registries (KOS-AP), Marcia Zeng
Report on the outcomes of the DCMI-NKOS Task Group, which builds on the work done by the NKOS community during the last decade. While we discuss the KOS-AP in the context of KOS registries, the context of microdata should be considered equally important in all aspects.
Polar research is inherently interdisciplinary and is becoming more so. Correspondingly, polar data managers have been working to meet very diverse communities and needs, especially after the progress of the International Polar Year 2007-8 (IPY). But is it enough? Despite their best efforts, the polar data and research communities can be rather insular. The unique challenges of polar research and data management may sometimes blind us to relevant developments in other parts of the world. At the same time, global initiatives and research in the lower latitudes often underplay, or even ignore, data needs and solutions in the polar regions. This conference emphasizes the need to extend polar issues more globally, yet the polar voice is still not loud enough in global conversations about data infrastructure.
Infrastructure, by its nature, must work across all scales. It requires a “glocal” perspective that simultaneously embraces both universalizing and particularizing tendencies. In this presentation I will discuss how there needs to be a constant interplay between local implementation and global design of data infrastructure. I will describe where the polar regions have had success in this area and where key challenges remain. I will describe a path forward for the polar data community to be better represented on the global stage through initiatives like the Research Data Alliance while also amplifying their effectiveness at the regional and local level. A goal is to improve the global understanding of polar issues while also improving the practice of polar data practitioners.
Research Data Infrastructure for Geochemistry (DFG Roundtable), Kerstin Lehnert
This presentation provides an overview of different aspects of data management for geochemistry and resources available at the EarthChem@IEDA data facility.
LoCloud Vocabulary Services: Thesaurus management introduction, Walter Koch a..., locloud
This presentation provides an introduction to thesaurus management in the LoCloud Vocabulary Services given during the LoCloud training workshops. It provides an introduction to controlled vocabularies, thesaurus for information retrieval and interoperability, to SKOS, multilingual vocabulary issues and to the federated model adopted for thesaurus management within the LoCloud service, which is based in TemaTres. The presentation includes a list of the vocabularies that have been integrated within the LoCloud service. There is also a walk-through of MediaThread and how this was used in the vocabulary management training offered in the workshop.
The implementation of the INSPIRE Directive in Europe and similar efforts around the globe to develop spatial data infrastructures and global systems of systems have been focusing largely on the adoption of agreed technologies, standards, and specifications to meet the (systems) interoperability challenge. Addressing the key scientific challenges of humanity in the 21st century requires however a much increased inter-disciplinary effort, which in turn makes more complex demands on the type of systems and arrangements needed to support it. This paper analyses the challenges for inter-disciplinary interoperability using the experience of the EuroGEOSS research project. It argues that inter-disciplinarity requires mutual understanding of requirements, methods, theoretical underpinning and tacit knowledge, and this in turn demands for a flexible approach to interoperability based on mediation, brokering and semantics-aware, cross-thematic functionalities. The paper demonstrates the implications of adopting this approach and charts the trajectory for the evolution of current spatial data infrastructures.
Towards OpenURL Quality Metrics: Initial Findings, alc28
Presentation on creating a method for benchmarking metadata consistency in OpenURL links. Delivered at the July 2009 American Library Association conference in Chicago.
Simon Cox (Researcher @ CSIRO) and I presented on outcomes of the CSIRO Summer of Vocabularies.
This project focused on:
- Examining the state of the management of various controlled vocabularies and developing reusable processes to clean & standardise those vocabularies.
- Developing standards & technologies for vocabulary management, curation & visualisation.
We presented on these Summer of Vocabularies activities and how they are being used now to further the work of the Vocram project and the improvement and development of ANDS vocabulary services.
Knowledge Organization System (KOS) for biodiversity information resources, G..., Dag Endresen
Slides from a presentation on the Knowledge Organization System (KOS) work program for GBIF. KOS developments for biodiversity information resources and input to the emerging Vocabulary Management Task Group (VoMaG).
Links
GBIF KOS prototype tools, http://kos.gbif.org/
Tool: Semantic Wiki prototype, http://terms.gbif.org/wiki/
Tool: ISOcat prototype demo, http://kos.gbif.org/isocat/
GBIF concept vocabulary term browser, http://kos.gbif.org/termbrowser/
GBIF Resources Repository, http://rs.gbif.org/terms/
GBIF Vocabulary Server, http://vocabularies.gbif.org/
GBIF Resources Browser, http://tools.gbif.org/resource-browser/
Similar to Vocabularies, ontologies, standards for observations: developments from RDA, W3C, OGC and CODATA:
explained using only the 1000 most commonly used words in English. Presented as part of the 'Up-goer challenge' at a symposium on Linking Environmental Data and Samples, Canberra, May/June 2017 - see https://confluence.csiro.au/display/LEDS/Linking+Environmental+Data+and+Samples
For the actual performance see https://youtu.be/dq9ZxjBVVbk
The science community has developed many models for representation of scientific data and knowledge. For example, the biomedical community's OBO Foundry federates applications covering various aspects of life sciences, which are united through reference to a common foundational ontology (BFO). The SWEET ontology, originally developed at NASA and now governed through ESIP, is a single large unified ontology for earth and environmental sciences. On a smaller scale, GeoSciML provides a UML and corresponding XML representation of geological mapping and observation data.
Key concepts related to scientific data and observations have now been incorporated into domain-neutral ontologies developed by the World Wide Web consortium. OWL-Time has been enhanced to support temporal reference systems needed for science, and deployed in a linked data representation of the geologic timescale. The Semantic Sensor Network ontology (SSN) has been extended to cover samples and sampling, including relationships between samples. Specific extensions for science are being added to the Data Catalog vocabulary (DCAT) used by data repositories such as RDA and CSIRO-DAP.
These standard vocabularies can be used directly for science data, or can provide a bridge to specialized domain ontologies. They support cross-disciplinary applications directly, are aligned with the core ontologies that are the building blocks of the semantic web, are hosted on well-known, reliable infrastructure, and are being selectively adopted by the general schema.org discovery framework.
Presented at C3DIS 2018-05-29
http://www.c3dis.com/2017
A common model for scientific observations and samples, Simon Cox
Summary of the O&M model and its various implementations (OMXML, OM-JSON, SOSA/SSN) with particular attention to the Sampling model. Making the case that a cross-disciplinary model for science observations is feasible (and has already been attempted).
Presented at meeting of ICSU/CODATA Commission on Standards, The Royal Society, London, 2017-11-13
Introducing a new encoding of the ISO 19156 Observations and Measurements model, to support transport of observation data using the JSON encoding beloved of web developers
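The flavour of such an encoding can be sketched as follows. The key names echo O&M terminology (phenomenonTime, observedProperty, result) but are assumptions here, not the normative OM-JSON schema, and the identifiers are invented:

```python
import json

# Illustrative observation document in the spirit of OM-JSON.
# Key names and identifiers are assumptions, not the normative schema.
observation = {
    "type": "Measurement",
    "phenomenonTime": "2017-11-17T10:00:00Z",
    "observedProperty": "http://example.org/def/property/sea-temperature",
    "procedure": "http://example.org/sensors/thermometer-42",
    "featureOfInterest": "http://example.org/features/liverpool-bay-mooring",
    "result": {"value": 12.3, "uom": "Cel"},
}

# Serialize for transport, as a web client or service would
print(json.dumps(observation, indent=2))
```

The appeal over the XML encoding is exactly this: a plain JSON object that web developers can construct and parse with no tooling beyond their language's standard library.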
Presentation describing recent work on observation-related vocabularies, undertaken by CSIRO as part of a contribution to Australia's National Environmental Information Infrastructure.
Presented at the 2nd workshop of the Ocean Data Interoperability Platform, La Jolla, Ca. 3rd-6th December, 2013
3. Vocabulary and Semantic Services Interest Group
Founded September 2015 (Paris)
Co-chairs:
• Adam Leadbetter
• Adam Shepherd
• Stephan Zednik
• Simon Cox
Activities:
• Survey of vocabulary services
• Develop guidelines on vocabulary governance
Teleconferences; meet at RDA plenary
2017-11-17 BODC | Cox | Vocabs, ontologies, standards
Re-boot September 2017 (Montreal)
Co-chairs:
• Simon Cox
• Adam Shepherd
• John Graybeal
• Yann LeFranc
Task groups:
• Vocabulary service API
• Ontology/vocabulary metadata
• Term selection
• Search aggregations
• Change request process
Slack channels; meet at RDA plenary
4. Vocabulary content model - SKOS
From https://www.w3.org/TR/skos-primer/ :
"SKOS has been designed to provide a low-cost migration path for porting existing organization systems to the Semantic Web. … SKOS can also be seen as a bridging technology, providing the missing link between the rigorous logical formalism of ontology languages such as OWL and the chaotic, informal and weakly-structured world of Web-based collaboration tools, as exemplified by social tagging applications."
I.e. SKOS is an initial RDF-based formalization for 'controlled vocabularies'.
Widely used:
• >200 SKOS vocabularies in Research Vocabularies Australia
• >200 in NERC Vocabulary Service, >100,000 Concepts
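The shape of a SKOS vocabulary can be sketched as a handful of triples. In this minimal Python sketch the ex: concepts are invented; the skos: terms are from the W3C SKOS vocabulary:

```python
# A minimal SKOS fragment as (subject, predicate, object) triples.
# The ex: concepts are hypothetical; skos: terms are from W3C SKOS.
triples = [
    ("ex:oceanography",   "rdf:type",       "skos:Concept"),
    ("ex:oceanography",   "skos:prefLabel", "Oceanography"),
    ("ex:oceanography",   "skos:altLabel",  "Marine science"),
    ("ex:seaTemperature", "rdf:type",       "skos:Concept"),
    ("ex:seaTemperature", "skos:prefLabel", "Sea temperature"),
    ("ex:seaTemperature", "skos:broader",   "ex:oceanography"),
]

def narrower(concept):
    """Concepts whose skos:broader points at `concept` (inverse lookup)."""
    return [s for (s, p, o) in triples if p == "skos:broader" and o == concept]

print(narrower("ex:oceanography"))   # → ['ex:seaTemperature']
```

This is the low-cost migration path the primer describes: labels plus broader/narrower links are enough to publish a thesaurus as linked data, with OWL-level formality added later only where needed.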
9. SKOS as lingua franca for concept description
We have a cross-domain representation.
Can we design a common API?
17. Spatial Data on the Web WG
2015-2017
Deliverables:
1. Use-cases and requirements
2. Best practices for spatial linked open data
3. OWL-Time
4. SSN Ontology
5. QB4ST
https://www.w3.org/2015/spatial/wiki/Main_Page
18. Best practices
W3C Note, mostly about making spatial data Linked Open Data.
Document: https://www.w3.org/TR/sdw-bp/
26. SSN 2011 – Process, Observation
• Observation, Process both ‘Social Objects’
• Stimulus is the only ‘Event’
M. Compton, P. Barnaghi, L. Bermudez, R. García-Castro, O. Corcho, S.J.D. Cox, et al., The SSN ontology of the W3C semantic sensor network incubator group, Web Semant. Sci. Serv. Agents World Wide Web 17 (2012) 25–32. doi:10.1016/j.websem.2012.05.003.
27. SSN 2017
1. Simplify core – ‘act of observation’ like O&M
2. Extend scope to also include Sampling and Actuation
28. SSN 2017 - Modular ontology design
SOSA can be used with schema.org
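Dual typing is one simple way SOSA and schema.org descriptions can be combined on a single resource. In this sketch the CURIEs and the exact term pairing are illustrative assumptions, not the published SOSA/schema.org guidance:

```python
# One observation described with both SOSA and schema.org terms via
# dual typing. Term choices here are assumptions for illustration.
triples = [
    ("ex:obs1", "rdf:type",              "sosa:Observation"),
    ("ex:obs1", "sosa:observedProperty", "ex:seaTemperature"),
    ("ex:obs1", "sosa:hasSimpleResult",  "12.3"),
    ("ex:obs1", "rdf:type",              "schema:Observation"),
]

def types_of(subject):
    """All rdf:type values asserted for `subject`."""
    return sorted(o for (s, p, o) in triples if s == subject and p == "rdf:type")

print(types_of("ex:obs1"))
```

Because RDF places no restriction on multiple typing, the same node can satisfy a SOSA-aware consumer and a schema.org-based discovery crawler at once, which is the point of the modular, lightweight SOSA core.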
30. Science and the Digital Revolution: Data, Standards and Integration - An Inter-Union Meeting
2017-06-19/21, ICSU, Paris
32. Science and the Digital Revolution: Data, Standards and Integration - An Inter-Union Meeting
2017-11-13/15, The Royal Society, London
33. Program
Day 1 - Why? The imperatives for interdisciplinary research and for data integration
Science drivers:
• Simon Coles – Crystallography
• Dimitris Koureas – Biodiversity Information Standards
• Peter Fox – Deep Time Data Infrastructure
Interdisciplinary research areas:
• Vasa Curcin – Learning Health Systems
• Stephen Passmore – Resilient Cities
• Laura Merson – Tracking and Treatment of Infectious Diseases
• David Abreu – Agricultural Research and Food Security
• Virginia Murray – Sendai Framework for Disaster Risk Reduction
• Tom Orrell – Sustainable Development Goals
34. Program
Day 2 - How? Supporting and developing data capacities
What has the community accomplished in standardising:
• Data repositories
• Data models, structures and formats
• Controlled vocabularies
• Identifier systems
• Services or APIs
• Governance of technical components
Particularly reflecting on:
• What worked and why?
• What has failed and why?
• Tools and platforms
• Engagements and incentives
• Endorsement by Union, or other peak body
Speakers:
• Francois Robida – IUGS
• Bob Hanisch – IAU
• Jeremy Frey – IUPAC
• Stephen Nortcliff – IUSS
• Bill Michener – DataONE (Ecology)
• Hugo Besemer – Agrisemantics
• Bryan Lawrence – WMO
• Steven Ramage – GEO
• Andre Heughebaert – GBIF
• James Malone – Open Biomedical Ontologies (OBO)
• Peter McQuilton – FAIRSharing
35. Program
Day 3 - What?
1. Survey unions' data standardization activities
2. Develop storyboards for 3 pilots:
• Resilient cities
• Infectious disease outbreaks
• Disaster risk reduction
3. Develop prospectus for foundation funding ($M)
Goals:
• Demonstrate significance of data interoperability
• Highlight reusable platforms, practices in standardization and vocabularies
• Enable lagging disciplines/unions to build on demonstrated platforms and practices
• Support deployment of interdisciplinary resources
36. Land and Water
Simon Cox, Research Scientist
t +61 3 9545 2365
e simon.cox@csiro.au
w www.csiro.au
Thank you