Invited Talk, International Semantic Intelligence Conference (ISIC 2021) New Delhi, India - February 25, 2021
The objective of this talk is to show, from a pragmatic point of view, examples of applications using semantic technologies in whose development the presenter has been involved. The talk starts with a toy application, used in the classroom for teaching purposes, that nevertheless illustrates all aspects of a typical development benefiting from semantic technologies, from integrating different data sources to creating an end-user interface. It then moves on to bigger, real-world projects, ranging from the application of semantic technologies for copyright management to media monitoring for plant health threats.
Software Metadata: Describing "dark software" in GeoSciences - dgarijo
Credit to Yolanda Gil.
In this talk I provide an overview of the current state of the art for software description in geosciences, along with our approach to facilitate this task in OntoSoft, a distributed semantic registry for scientific software. Three key aspects of OntoSoft are: a software metadata ontology designed for scientists, a distributed approach to software registries that targets communities of interest, and metadata crowdsourcing through access control. Software metadata is organized using the OntoSoft ontology, which is designed to support scientists in sharing, documenting, and reusing software, along six dimensions: identify software, understand and assess software, execute software, get support for the software, do research with the software, and update the software.
Reproducibility Using Semantics: An Overview - dgarijo
Overview of the different approaches for addressing reproducibility (using semantics) in laboratory protocols, workflow description and publication, and workflow infrastructure. Furthermore, Research Objects are introduced as a means to capture the context and annotations of scientific experiments, together with the privacy and IPR concerns that may arise. This talk was presented at Dagstuhl Seminar 16041: http://www.dagstuhl.de/16041
Being FAIR: Enabling Reproducible Data Science - Carole Goble
Talk presented at the Early Detection of Cancer Conference, OHSU, Portland, Oregon, USA, 2-4 Oct 2018 (http://earlydetectionresearch.com/), in the Data Science session.
Findable, Accessible, Interoperable, Reusable < data | models | SOPs | samples | articles | * >. FAIR is a mantra; a meme; a myth; a mystery; a moan. For the past 15 years I have been working on FAIR in a range of Life Science projects and initiatives. Some are top-down, like the Life Science European Research Infrastructures ELIXIR and ISBE, and some are bottom-up, supporting research projects in Systems and Synthetic Biology (FAIRDOM), Biodiversity (BioVeL), and Pharmacology (Open PHACTS), for example. Some have become movements, like Bioschemas, the Common Workflow Language and Research Objects. Others focus on cross-cutting approaches in reproducibility, computational workflows, metadata representation and scholarly sharing & publication. In this talk I will relate a series of FAIRy tales. Some of them are Grimm. Some have happy endings. Who are the villains and who are the heroes? What are the morals we can draw from these stories?
Research Objects: more than the sum of the parts - Carole Goble
Workshop on Managing Digital Research Objects in an Expanding Science Ecosystem, 15 Nov 2017, Bethesda, USA
https://www.rd-alliance.org/managing-digital-research-objects-expanding-science-ecosystem
Research output is more than just the rhetorical narrative. The experimental methods, computational codes, data, algorithms, workflows, Standard Operating Procedures, samples and so on are the objects of research that enable reuse and reproduction of scientific experiments, and they too need to be examined and exchanged as research knowledge.
A first step is to think of Digital Research Objects as a broadening out to embrace these artefacts or assets of research. The next is to recognise that investigations use multiple, interlinked, evolving artefacts. Multiple datasets and multiple models support a study; each model is associated with datasets for construction, validation and prediction; an analytic pipeline has multiple codes and may be made up of nested sub-pipelines, and so on. Research Objects (http://researchobject.org/) is a framework by which the many, nested and contributed components of research can be packaged together in a systematic way, and their context, provenance and relationships richly described.
Keynote: SemSci 2017: Enabling Open Semantic Science
1st International Workshop co-located with ISWC 2017, October 2017, Vienna, Austria
https://semsci.github.io/semSci2017/
Abstract
We have all grown up with the research article and article collections (let’s call them libraries) as the prime means of scientific discourse. But research output is more than just the rhetorical narrative. The experimental methods, computational codes, data, algorithms, workflows, Standard Operating Procedures, samples and so on are the objects of research that enable reuse and reproduction of scientific experiments, and they too need to be examined and exchanged as research knowledge.
We can think of "Research Objects" as coming in different types and as packaging all the components of an investigation. If we stop thinking in terms of publishing papers and start thinking in terms of releasing Research Objects (like software), then scholarly exchange is a new game: ROs and their content evolve; they are multi-authored and their authorship evolves; they are a mix of virtual and embedded components, and so on.
But first, some baby steps before we get carried away with a new vision of scholarly communication. Many journals (e.g. eLife, F1000, Elsevier) are just figuring out how to package together the supplementary materials of a paper. Data catalogues are figuring out how to virtually package multiple datasets scattered across many repositories to keep the integrated experimental context.
Research Objects [1] (http://researchobject.org/) is a framework by which the many, nested and contributed components of research can be packaged together in a systematic way, and their context, provenance and relationships richly described. The brave new world of containerisation provides the containers and Linked Data provides the metadata framework for the container manifest construction and profiles. It’s not just theory, but also in practice with examples in Systems Biology modelling, Bioinformatics computational workflows, and Health Informatics data exchange. I’ll talk about why and how we got here, the framework and examples, and what we need to do.
[1] Sean Bechhofer, Iain Buchan, David De Roure, Paolo Missier, John Ainsworth, Jiten Bhagat, Philip Couch, Don Cruickshank, Mark Delderfield, Ian Dunlop, Matthew Gamble, Danius Michaelides, Stuart Owen, David Newman, Shoaib Sufi, Carole Goble, Why linked data is not enough for scientists, In Future Generation Computer Systems, Volume 29, Issue 2, 2013, Pages 599-611, ISSN 0167-739X, https://doi.org/10.1016/j.future.2011.08.004
ISMB/ECCB 2013 Keynote Goble Results may vary: what is reproducible? why do o... - Carole Goble
Keynote given by Carole Goble on 23rd July 2013 at ISMB/ECCB 2013
http://www.iscb.org/ismbeccb2013
How could we evaluate research and researchers? Reproducibility underpins the scientific method: at least in principle, if not in practice. The willing exchange of results and the transparent conduct of research can only be expected up to a point in a competitive environment. Contributions to science are acknowledged, but not if the credit is for data curation or software. From a bioinformatics viewpoint, how far could our results be reproducible before the pain is just too high? Is open science a dangerous, utopian vision or a legitimate, feasible expectation? How do we move bioinformatics from a world where results are post-hoc "made reproducible" to one where they are pre-hoc "born reproducible"? And why, in our computational information age, do we communicate results through fragmented, fixed documents rather than cohesive, versioned releases? I will explore these questions drawing on 20 years of experience in both the development of technical infrastructure for Life Science and the social infrastructure in which Life Science operates.
PhD Thesis: Mining abstractions in scientific workflows - dgarijo
Slides of the presentation for my PhD dissertation. I strongly recommend downloading the slides, as they have animations that are easier to see in PowerPoint. The abstract of the thesis is as follows: "Scientific workflows have been adopted in the last decade to represent the computational methods used in in silico scientific experiments and their associated research products. Scientific workflows have demonstrated to be useful for sharing and reproducing scientific experiments, allowing scientists to visualize, debug and save time when re-executing previous work. However, scientific workflows may be difficult to understand and reuse. The large amount of available workflows in repositories, together with their heterogeneity and lack of documentation and usage examples may become an obstacle for a scientist aiming to reuse the work from other scientists. Furthermore, given that it is often possible to implement a method using different algorithms or techniques, seemingly disparate workflows may be related at a higher level of abstraction, based on their common functionality. In this thesis we address the issue of reusability and abstraction by exploring how workflows relate to one another in a workflow repository, mining abstractions that may be helpful for workflow reuse. In order to do so, we propose a simple model for representing and relating workflows and their executions, we analyze the typical common abstractions that can be found in workflow repositories, we explore the current practices of users regarding workflow reuse and we describe a method for discovering useful abstractions for workflows based on existing graph mining techniques. Our results expose the common abstractions and practices of users in terms of workflow reuse, and show how our proposed abstractions have potential to become useful for users designing new workflows".
Tutorial on the DisGeNET Discovery Platform, with special focus on its exploitation in the Semantic Web, showing how to retrieve and integrate DisGeNET data with other RDF linked datasets.
NSF Workshop Data and Software Citation, 6-7 June 2016, Boston USA, Software Panel
Findable, Accessible, Interoperable, Reusable Software and Data Citation: Europe, Research Objects, and BioSchemas.org
OntoSoft: A Distributed Semantic Registry for Scientific Software - dgarijo
Credit to Yolanda Gil.
OntoSoft is a distributed semantic registry for scientific software. This paper describes three major novel contributions of OntoSoft: 1) a software metadata registry designed for scientists, 2) a distributed approach to software registries that targets communities of interest, and 3) metadata crowdsourcing through access control. Software metadata is organized using the OntoSoft ontology along six dimensions that matter to scientists: identify software, understand and assess software, execute software, get support for the software, do research with the software, and update the software. OntoSoft is a distributed registry where each site is owned and maintained by a community of interest, with a distributed semantic query capability that allows users to search across all sites. The registry has metadata crowdsourcing capabilities, supported through access control so that software authors can allow others to expand on specific metadata properties.
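The six-dimension organisation can be pictured as a minimal metadata record. The following sketch (in Python) groups properties by dimension and reports which dimensions remain undocumented, the kind of gap that makes software "dark". The dimension names follow the ontology's six categories, but the individual field names and the example entry are illustrative assumptions, not the actual OntoSoft vocabulary:

```python
# Hedged sketch of an OntoSoft-style software metadata record.
# The six dimension names mirror the ontology's categories; the field
# names and the example values below are illustrative, not the real terms.

SIX_DIMENSIONS = [
    "identify", "understand_and_assess", "execute",
    "get_support", "do_research", "update",
]

def missing_dimensions(record: dict) -> list:
    """Return the dimensions for which no metadata has been provided yet."""
    return [d for d in SIX_DIMENSIONS if not record.get(d)]

record = {
    "identify": {"name": "ExampleModel", "unique_id": "https://example.org/sw/1"},
    "understand_and_assess": {"description": "Hydrology model", "license": "MIT"},
    "execute": {"invocation": "python model.py config.cfg"},
    # get_support, do_research and update left empty: still "dark" metadata
}

print(missing_dimensions(record))  # → ['get_support', 'do_research', 'update']
```

A registry could use a check like this to prompt authors (or crowdsourced contributors with the right access rights) to fill in the undocumented dimensions.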
Presentation of the "Coming to terms to FAIR semantics" paper at the 22nd International Conference on Knowledge Engineering and Knowledge Management (EKAW 2020).
Metadata and Semantics Research Conference, Manchester, UK 2015
Research Objects: why, what and how
In practice the exchange, reuse and reproduction of scientific experiments is hard, dependent on bundling and exchanging the experimental methods, computational codes, data, algorithms, workflows and so on along with the narrative. These "Research Objects" are not fixed, just as research is not “finished”: codes fork, data is updated, algorithms are revised, workflows break, service updates are released. Neither should they be viewed just as second-class artifacts tethered to publications, but as the focus of research outcomes in their own right: articles clustered around datasets, methods with citation profiles. Many funders and publishers have come to acknowledge this, moving to data sharing policies and provisioning e-infrastructure platforms. Many researchers recognise the importance of working with Research Objects. The term has become widespread. However: What is a Research Object? How do you mint one, exchange one, build a platform to support one, curate one? How do we introduce them in a lightweight way that platform developers can migrate to? What is the practical impact of a Research Object Commons on training, stewardship, scholarship, sharing? How do we address the scholarly and technological debt of making and maintaining Research Objects? Are there any examples?
I’ll present our practical experiences of the why, what and how of Research Objects.
Towards Knowledge Graphs of Reusable Research Software Metadata - dgarijo
Research software is a key asset for understanding, reusing and reproducing results in computational sciences. An increasing amount of software is stored in code repositories, which usually contain human-readable instructions indicating how to use it and set it up. However, developers and researchers often need to spend a significant amount of time to understand how to invoke a software component, prepare data in the required format, and use it in combination with other software. In addition, this time investment makes it challenging to discover and compare software with similar functionality. In this talk I will describe our efforts to address these issues by creating and using Open Knowledge Graphs that describe research software in a machine-readable manner. Our work includes: 1) an ontology that extends schema.org and CodeMeta, designed to describe software and the specific data formats it uses; 2) an approach to publish software metadata as an open knowledge graph, linked to other Web of Data objects; 3) a framework for automatically extracting metadata from software repositories; and 4) a framework to curate, query, explore and compare research software metadata in a collaborative manner. The talk will illustrate our approach with real-world examples, including a domain application for inspecting and discovering hydrology, agriculture, and economic software models; and the results of our framework when enriching the research software entries in Zenodo.org.
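As a rough illustration of the metadata-extraction step, the sketch below scans a repository README for a few common fields and emits a small machine-readable record. The heuristics, field names and sample README are illustrative assumptions; the extraction framework described in the talk is far richer than these simple patterns:

```python
import re

# Hedged sketch of README metadata extraction: scan a README for a
# title, an installation section and a citation section, and return
# them as a small dict. The patterns below are illustrative heuristics.

FIELD_PATTERNS = {
    "title": re.compile(r"^#\s+(.+)$", re.MULTILINE),
    "installation": re.compile(r"^##\s*Install\w*\s*$\n(.+?)(?=^##|\Z)",
                               re.MULTILINE | re.DOTALL),
    "citation": re.compile(r"^##\s*Citation\s*$\n(.+?)(?=^##|\Z)",
                           re.MULTILINE | re.DOTALL),
}

def extract_metadata(readme: str) -> dict:
    """Return a field -> text record for every pattern that matched."""
    record = {}
    for field, pattern in FIELD_PATTERNS.items():
        match = pattern.search(readme)
        if match:
            record[field] = match.group(1).strip()
    return record

readme = """# MySolver

## Installation
pip install mysolver

## Citation
Doe et al. (2020)
"""
print(extract_metadata(readme)["title"])  # → MySolver
```

A record like this can then be serialised against a schema.org/CodeMeta-style vocabulary and loaded into a knowledge graph, which is what enables the cross-repository discovery and comparison the talk describes.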
FAIRDOM - FAIR Asset management and sharing experiences in Systems and Synthe... - Carole Goble
Over the past 5 years we have seen a change in expectations for the management of all the outcomes of research – that is the “assets” of data, models, codes, SOPs and so forth. Don’t stop reading. Data management isn’t likely to win anyone a Nobel prize. But publications should be supported and accompanied by data, methods, procedures, etc. to assure reproducibility of results. Funding agencies expect data (and increasingly software) management retention and access plans as part of the proposal process for projects to be funded. Journals are raising their expectations of the availability of data and codes for pre- and post-publication. The multi-component, multi-disciplinary nature of Systems Biology demands the interlinking and exchange of assets and the systematic recording of metadata for their interpretation.
The FAIR Guiding Principles for scientific data management and stewardship (http://www.nature.com/articles/sdata201618) have been an effective rallying-cry for EU and USA Research Infrastructures. The FAIRDOM (Findable, Accessible, Interoperable, Reusable Data, Operations and Models) Initiative has 8 years of experience of asset sharing and data infrastructure ranging across European programmes (SysMO and EraSysAPP ERANets), national initiatives (de.NBI, the German Virtual Liver Network, UK SynBio centres) and PIs' labs. It aims to support Systems and Synthetic Biology researchers with data and model management, with an emphasis on standards smuggled in by stealth and sensitivity to asset sharing and credit anxiety.
This talk will use the FAIRDOM Initiative to discuss the FAIR management of data, SOPs, and models for Sys Bio, highlighting the challenges of and approaches to sharing, credit, citation and asset infrastructures in practice. I'll also highlight recent experiments in affecting sharing using behavioural interventions.
http://www.fair-dom.org
http://www.fairdomhub.org
http://www.seek4science.org
Presented at COMBINE 2016, Newcastle, 19 September.
http://co.mbine.org/events/COMBINE_2016
Being FAIR: FAIR data and model management, SSBSS 2017 Summer School - Carole Goble
Lecture 1:
Being FAIR: FAIR data and model management
In recent years we have seen a change in expectations for the management of all the outcomes of research – that is the “assets” of data, models, codes, SOPs, workflows. The “FAIR” (Findable, Accessible, Interoperable, Reusable) Guiding Principles for scientific data management and stewardship [1] have proved to be an effective rallying-cry. Funding agencies expect data (and increasingly software) management retention and access plans. Journals are raising their expectations of the availability of data and codes for pre- and post- publication. The multi-component, multi-disciplinary nature of Systems and Synthetic Biology demands the interlinking and exchange of assets and the systematic recording of metadata for their interpretation.
Our FAIRDOM project (http://www.fair-dom.org) supports Systems Biology research projects with their research data, methods and model management, with an emphasis on standards smuggled in by stealth and sensitivity to asset sharing and credit anxiety. The FAIRDOM Platform has been installed by over 30 labs or projects. Our public, centrally hosted Asset Commons, the FAIRDOMHub.org, supports the outcomes of 50+ projects.
Now established as a grassroots association, FAIRDOM has over 8 years of experience of practical asset sharing and data infrastructure at the researcher coal-face, ranging across European programmes (SysMO and ERASysAPP ERANets), national initiatives (Germany's de.NBI and Systems Medicine of the Liver; Norway's Digital Life) and European Research Infrastructures (ISBE), as well as in PIs' labs and Centres such as the SynBioChem Centre at Manchester.
In this talk I will explore how FAIRDOM has been designed to support Systems Biology projects and show examples of its configuration and use. I will also explore the technical and social challenges we face.
I will also refer to European efforts to support public archives for the life sciences. ELIXIR (http://www.elixir-europe.org/) is the European Research Infrastructure of 21 national nodes and a hub, funded by national agreements to coordinate and sustain key data repositories and archives for the Life Science community, improve access to them and related tools, support training and create a platform for dataset interoperability. As Head of the ELIXIR-UK Node and co-lead of the ELIXIR Interoperability Platform I will show how this work relates to your projects.
[1] Wilkinson et al., The FAIR Guiding Principles for scientific data management and stewardship, Scientific Data 3, doi:10.1038/sdata.2016.18
Being Reproducible: SSBSS Summer School 2017 - Carole Goble
Lecture 2:
Being Reproducible: Models, Research Objects and R* Brouhaha
Reproducibility is an R* minefield, depending on whether you are testing for robustness (rerun), defence (repeat), certification (replicate), comparison (reproduce) or transferring between researchers (reuse). Different forms of "R" make different demands on the completeness, depth and portability of research. Sharing is another minefield, raising concerns of credit and protection from sharp practices.
In practice the exchange, reuse and reproduction of scientific experiments is dependent on bundling and exchanging the experimental methods, computational codes, data, algorithms, workflows and so on along with the narrative. These "Research Objects" are not fixed, just as research is not “finished”: the codes fork, data is updated, algorithms are revised, workflows break, service updates are released. ResearchObject.org is an effort to systematically support more portable and reproducible research exchange.
In this talk I will explore these issues in more depth using the FAIRDOM Platform and its support for reproducible modelling. The talk will cover initiatives and technical issues, and raise social and cultural challenges.
Reproducibility, Research Objects and Reality, Leiden 2016 - Carole Goble
Presented at the Leiden Bioscience Lecture, 24 November 2016, Reproducibility, Research Objects and Reality
Over the past 5 years we have seen a change in expectations for the management of all the outcomes of research – that is the “assets” of data, models, codes, SOPs, workflows. The “FAIR” (Findable, Accessible, Interoperable, Reusable) Guiding Principles for scientific data management and stewardship have proved to be an effective rallying-cry. Funding agencies expect data (and increasingly software) management retention and access plans. Journals are raising their expectations of the availability of data and codes for pre- and post- publication. It all sounds very laudable and straightforward. BUT…..
Reproducibility is an R* minefield, depending on whether you are testing for robustness (rerun), defence (repeat), certification (replicate), comparison (reproduce) or transferring between researchers (reuse). Different forms of "R" make different demands on the completeness, depth and portability of research. Sharing is another minefield, raising concerns of credit and protection from sharp practices.
In practice the exchange, reuse and reproduction of scientific experiments is dependent on bundling and exchanging the experimental methods, computational codes, data, algorithms, workflows and so on along with the narrative. These "Research Objects" are not fixed, just as research is not “finished”: the codes fork, data is updated, algorithms are revised, workflows break, service updates are released. ResearchObject.org is an effort to systematically support more portable and reproducible research exchange.
In this talk I will explore these issues in data-driven computational life sciences through examples and stories from initiatives that I am involved in, and that Leiden is involved in too, including:
· FAIRDOM, which has built a Commons for Systems and Synthetic Biology projects, with an emphasis on standards smuggled in by stealth and efforts to affect sharing practices using behavioural interventions
· ELIXIR, the EU Research Data Infrastructure, and its efforts to exchange workflows
· Bioschemas.org, an ELIXIR-NIH-Google effort to support the finding of assets.
FOOPS!: An Ontology Pitfall Scanner for the FAIR principles - dgarijo
Slides presented at the DBpedia Day, at the SEMANTiCS conference in 2021. FOOPS! (available at https://w3id.org/foops) is a validator based on the FAIR principles that guides users in conforming their ontologies to them. For each principle, FOOPS! runs a series of tests and reports errors, suggestions and ways to conform to best practices.
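To illustrate the idea of per-principle tests, here is a hedged sketch of the kind of checks such a pitfall scanner might run against an ontology's metadata. The check names, messages and principle labels below are illustrative assumptions, not the actual FOOPS! test suite:

```python
# Hedged sketch of a FAIR pitfall scanner: each check inspects one
# aspect of an ontology's metadata and, on failure, yields a suggestion
# tagged with an (illustrative) FAIR principle label.

def check_license(meta):
    ok = bool(meta.get("license"))
    return ok, "" if ok else "Declare a license (R1.1)"

def check_persistent_id(meta):
    uri = meta.get("uri", "")
    ok = uri.startswith(("https://w3id.org/", "https://purl.org/", "http://purl.org/"))
    return ok, "" if ok else "Use a persistent identifier such as a w3id (F1)"

def check_version(meta):
    ok = bool(meta.get("version_iri"))
    return ok, "" if ok else "Publish a version IRI (F1)"

CHECKS = [check_license, check_persistent_id, check_version]

def scan(meta):
    """Run every check and collect the suggestions for the failed ones."""
    return [msg for check in CHECKS for ok, msg in [check(meta)] if not ok]

ontology = {"uri": "https://example.org/onto", "license": "CC-BY-4.0"}
print(scan(ontology))
# → ['Use a persistent identifier such as a w3id (F1)', 'Publish a version IRI (F1)']
```

The real tool runs many more tests of this shape, grouped by principle, and reports them through a web interface rather than a list of strings.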
The Seven Deadly Sins of Bioinformatics - Duncan Hull
Keynote talk at the Bioinformatics Open Source Conference (BOSC) Special Interest Group at the 15th Annual International Conference on Intelligent Systems for Molecular Biology (ISMB 2007) in Vienna, July 2007, by Carole Goble, University of Manchester.
Talk at the 3rd Keystone Training School - Keyword Search in Big Linked Data - Institute for Software Technology and Interactive Systems, TU Wien, Austria, 2017
PhD Thesis: Mining abstractions in scientific workflowsdgarijo
Slides of the presentation for my PhD dissertation. I strongly recommend downloading the slides, as they have animations that are easier to see in power point. The abstract of the thesis is as follows: "Scientific workflows have been adopted in the last decade to represent the computational methods used in in silico scientific experiments and their associated research products. Scientific workflows have demonstrated to be useful for sharing and reproducing scientific experiments, allowing scientists to visualize, debug and save time when re-executing previous work. However, scientific workflows may be difficult to understand and reuse. The large amount of available workflows in repositories, together with their heterogeneity and lack of documentation and usage examples may become an obstacle for a scientist aiming to reuse the work from other scientists. Furthermore, given that it is often possible to implement a method using different algorithms or techniques, seemingly disparate workflows may be related at a higher level of abstraction, based on their common functionality. In this thesis we address the issue of reusability and abstraction by exploring how workflows relate to one another in a workflow repository, mining abstractions that may be helpful for workflow reuse. In order to do so, we propose a simple model for representing and relating workflows and their executions, we analyze the typical common abstractions that can be found in workflow repositories, we explore the current practices of users regarding workflow reuse and we describe a method for discovering useful abstractions for workflows based on existing graph mining techniques. Our results expose the common abstractions and practices of users in terms of workflow reuse, and show how our proposed abstractions have potential to become useful for users designing new workflows".
Tutorial on the DisGeNET Discovery Platform, with a special focus on its exploitation in the Semantic Web, showing how to retrieve and integrate DisGeNET data with other RDF linked datasets.
NSF Workshop Data and Software Citation, 6-7 June 2016, Boston, USA, Software Panel
Findable, Accessible, Interoperable, Reusable Software and Data Citation: Europe, Research Objects, and BioSchemas.org
OntoSoft: A Distributed Semantic Registry for Scientific Software - dgarijo
Credit to Yolanda Gil.
OntoSoft is a distributed semantic registry for scientific software. This paper describes three major novel contributions of OntoSoft: 1) a software metadata registry designed for scientists, 2) a distributed approach to software registries that targets communities of interest, and 3) metadata crowdsourcing through access control. Software metadata is organized using the OntoSoft ontology along six dimensions that matter to scientists: identify software, understand and assess software, execute software, get support for the software, do research with the software, and update the software. OntoSoft is a distributed registry where each site is owned and maintained by a community of interest, with a distributed semantic query capability that allows users to search across all sites. The registry has metadata crowdsourcing capabilities, supported through access control so that software authors can allow others to expand on specific metadata properties.
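To make the six dimensions concrete, here is a minimal sketch of how a registry entry organized along them could be checked for completeness. This is an illustration only, not OntoSoft's actual implementation; the dimension keys and record fields are assumptions.

```python
# Sketch: a software metadata record organized along OntoSoft's six
# dimensions. Field names are illustrative assumptions, not the
# actual ontology properties.
DIMENSIONS = [
    "identify", "understand_assess", "execute",
    "get_support", "do_research", "update",
]

def completeness(record):
    """Fraction of the six dimensions with at least one filled-in value."""
    filled = sum(1 for d in DIMENSIONS if record.get(d))
    return filled / len(DIMENSIONS)

record = {
    "identify": {"name": "FloodSim", "doi": None},
    "understand_assess": {"description": "Hydrology flood simulator"},
    "execute": {"invocation": "floodsim --input basin.nc"},
    "get_support": {},          # no support channel registered yet
    "do_research": {"citation": "Doe et al. 2016"},
    "update": {},               # no update policy recorded
}

print(completeness(record))     # 4 of 6 dimensions filled
```

A completeness score like this is one way crowdsourced curation can be steered: contributors see at a glance which dimensions of an entry still need metadata.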
Presentation of the "Coming to terms with FAIR semantics" paper at the 22nd International Conference on Knowledge Engineering and Knowledge Management (EKAW 2020).
Metadata and Semantics Research Conference, Manchester, UK 2015
Research Objects: why, what and how
In practice the exchange, reuse and reproduction of scientific experiments is hard, dependent on bundling and exchanging the experimental methods, computational codes, data, algorithms, workflows and so on along with the narrative. These "Research Objects" are not fixed, just as research is not “finished”: codes fork, data is updated, algorithms are revised, workflows break, service updates are released. Neither should they be viewed just as second-class artifacts tethered to publications, but as the focus of research outcomes in their own right: articles clustered around datasets, methods with citation profiles. Many funders and publishers have come to acknowledge this, moving to data sharing policies and provisioning e-infrastructure platforms. Many researchers recognise the importance of working with Research Objects, and the term has become widespread. However: what is a Research Object? How do you mint one, exchange one, build a platform to support one, curate one? How do we introduce them in a lightweight way that platform developers can migrate to? What is the practical impact of a Research Object Commons on training, stewardship, scholarship, sharing? How do we address the scholarly and technological debt of making and maintaining Research Objects? Are there any examples?
I’ll present our practical experiences of the why, what and how of Research Objects.
Towards Knowledge Graphs of Reusable Research Software Metadata - dgarijo
Research software is a key asset for understanding, reusing and reproducing results in computational sciences. An increasing amount of software is stored in code repositories, which usually contain human-readable instructions indicating how to use it and set it up. However, developers and researchers often need to spend a significant amount of time to understand how to invoke a software component, prepare data in the required format, and use it in combination with other software. In addition, this time investment makes it challenging to discover and compare software with similar functionality. In this talk I will describe our efforts to address these issues by creating and using Open Knowledge Graphs that describe research software in a machine-readable manner. Our work includes: 1) an ontology that extends schema.org and codemeta, designed to describe software and the specific data formats it uses; 2) an approach to publish software metadata as an open knowledge graph, linked to other Web of Data objects; 3) a framework for automatically extracting metadata from software repositories; and 4) a framework to curate, query, explore and compare research software metadata in a collaborative manner. The talk will illustrate our approach with real-world examples, including a domain application for inspecting and discovering hydrology, agriculture, and economic software models; and the results of our framework when enriching the research software entries in Zenodo.org.
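As a rough illustration of the metadata-extraction step, the sketch below pulls a candidate name, description and installation command out of a README with simple heuristics. Real extractors are far more sophisticated; the patterns and the example README here are assumptions for illustration.

```python
import re

# Sketch of automatic metadata extraction from a code repository:
# heuristic parsing of a README. Illustrative only.
README = """\
# WidgetKG
A toolkit for building knowledge graphs of research software.

## Installation
pip install widgetkg
"""

def extract_metadata(readme):
    meta = {}
    # first level-1 heading is taken as the software name
    title = re.search(r"^#\s+(.+)$", readme, re.MULTILINE)
    if title:
        meta["name"] = title.group(1).strip()
    # first non-heading, non-blank line is taken as the description
    for line in readme.splitlines():
        if line.strip() and not line.startswith("#"):
            meta["description"] = line.strip()
            break
    # line right after an "Installation" heading is the install command
    install = re.search(r"##\s+Installation\s*\n(.+)", readme)
    if install:
        meta["install"] = install.group(1).strip()
    return meta

print(extract_metadata(README))
```

Records produced this way can then be serialized with schema.org/codemeta terms and linked into a knowledge graph, which is where comparison and discovery across repositories become possible.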
FAIRDOM - FAIR Asset management and sharing experiences in Systems and Synthe... - Carole Goble
Over the past 5 years we have seen a change in expectations for the management of all the outcomes of research – that is, the “assets” of data, models, codes, SOPs and so forth. Don’t stop reading. Data management isn’t likely to win anyone a Nobel prize, but publications should be supported and accompanied by data, methods, procedures, etc. to assure reproducibility of results. Funding agencies expect data (and increasingly software) management, retention and access plans as part of the proposal process for projects to be funded. Journals are raising their expectations of the availability of data and codes for pre- and post-publication. The multi-component, multi-disciplinary nature of Systems Biology demands the interlinking and exchange of assets and the systematic recording of metadata for their interpretation.
The FAIR Guiding Principles for scientific data management and stewardship (http://www.nature.com/articles/sdata201618) have been an effective rallying-cry for EU and USA Research Infrastructures. The FAIRDOM (Findable, Accessible, Interoperable, Reusable Data, Operations and Models) Initiative has 8 years of experience of asset sharing and data infrastructure, ranging across European programmes (SysMO and ERASysAPP ERANets), national initiatives (de.NBI, the German Virtual Liver Network, UK SynBio centres) and PIs' labs. It aims to support Systems and Synthetic Biology researchers with data and model management, with an emphasis on standards smuggled in by stealth and sensitivity to asset sharing and credit anxiety.
This talk will use the FAIRDOM Initiative to discuss the FAIR management of data, SOPs, and models for Sys Bio, highlighting the challenges of and approaches to sharing, credit, citation and asset infrastructures in practice. I'll also highlight recent experiments in affecting sharing using behavioural interventions.
http://www.fair-dom.org
http://www.fairdomhub.org
http://www.seek4science.org
Presented at COMBINE 2016, Newcastle, 19 September.
http://co.mbine.org/events/COMBINE_2016
Being FAIR: FAIR data and model management - SSBSS 2017 Summer School - Carole Goble
Lecture 1:
Being FAIR: FAIR data and model management
In recent years we have seen a change in expectations for the management of all the outcomes of research – that is the “assets” of data, models, codes, SOPs, workflows. The “FAIR” (Findable, Accessible, Interoperable, Reusable) Guiding Principles for scientific data management and stewardship [1] have proved to be an effective rallying-cry. Funding agencies expect data (and increasingly software) management, retention and access plans. Journals are raising their expectations of the availability of data and codes for pre- and post-publication. The multi-component, multi-disciplinary nature of Systems and Synthetic Biology demands the interlinking and exchange of assets and the systematic recording of metadata for their interpretation.
Our FAIRDOM project (http://www.fair-dom.org) supports Systems Biology research projects with their research data, methods and model management, with an emphasis on standards smuggled in by stealth and sensitivity to asset sharing and credit anxiety. The FAIRDOM Platform has been installed by over 30 labs or projects. Our public, centrally hosted Asset Commons, the FAIRDOMHub.org, supports the outcomes of 50+ projects.
Now established as a grassroots association, FAIRDOM has over 8 years of experience of practical asset sharing and data infrastructure at the researcher coal-face ranging across European programmes (SysMO and ERASysAPP ERANets), national initiatives (Germany's de.NBI and Systems Medicine of the Liver; Norway's Digital Life) and European Research Infrastructures (ISBE) as well as in PI's labs and Centres such as the SynBioChem Centre at Manchester.
In this talk I will explore how FAIRDOM has been designed to support Systems Biology projects and show examples of its configuration and use. I will also explore the technical and social challenges we face.
I will also refer to European efforts to support public archives for the life sciences. ELIXIR (http://www.elixir-europe.org/) is the European Research Infrastructure of 21 national nodes and a hub, funded by national agreements to coordinate and sustain key data repositories and archives for the Life Science community, improve access to them and related tools, support training and create a platform for dataset interoperability. As Head of the ELIXIR-UK Node and co-lead of the ELIXIR Interoperability Platform, I will show how this work relates to your projects.
[1] Wilkinson et al., The FAIR Guiding Principles for scientific data management and stewardship, Scientific Data 3 (2016), doi:10.1038/sdata.2016.18
Being Reproducible: SSBSS Summer School 2017 - Carole Goble
Lecture 2:
Being Reproducible: Models, Research Objects and R* Brouhaha
Reproducibility is an R* minefield, depending on whether you are testing for robustness (rerun), defence (repeat), certification (replicate), comparison (reproduce) or transferring between researchers (reuse). Different forms of "R" make different demands on the completeness, depth and portability of research. Sharing is another minefield raising concerns of credit and protection from sharp practices.
In practice the exchange, reuse and reproduction of scientific experiments is dependent on bundling and exchanging the experimental methods, computational codes, data, algorithms, workflows and so on along with the narrative. These "Research Objects" are not fixed, just as research is not “finished”: the codes fork, data is updated, algorithms are revised, workflows break, service updates are released. ResearchObject.org is an effort to systematically support more portable and reproducible research exchange.
In this talk I will explore these issues in more depth using the FAIRDOM Platform and its support for reproducible modelling. The talk will cover initiatives and technical issues, and raise social and cultural challenges.
Reproducibility, Research Objects and Reality, Leiden 2016 - Carole Goble
Presented at the Leiden Bioscience Lecture, 24 November 2016, Reproducibility, Research Objects and Reality
Over the past 5 years we have seen a change in expectations for the management of all the outcomes of research – that is the “assets” of data, models, codes, SOPs, workflows. The “FAIR” (Findable, Accessible, Interoperable, Reusable) Guiding Principles for scientific data management and stewardship have proved to be an effective rallying-cry. Funding agencies expect data (and increasingly software) management, retention and access plans. Journals are raising their expectations of the availability of data and codes for pre- and post-publication. It all sounds very laudable and straightforward. BUT…..
Reproducibility is an R* minefield, depending on whether you are testing for robustness (rerun), defence (repeat), certification (replicate), comparison (reproduce) or transferring between researchers (reuse). Different forms of "R" make different demands on the completeness, depth and portability of research. Sharing is another minefield raising concerns of credit and protection from sharp practices.
In practice the exchange, reuse and reproduction of scientific experiments is dependent on bundling and exchanging the experimental methods, computational codes, data, algorithms, workflows and so on along with the narrative. These "Research Objects" are not fixed, just as research is not “finished”: the codes fork, data is updated, algorithms are revised, workflows break, service updates are released. ResearchObject.org is an effort to systematically support more portable and reproducible research exchange.
In this talk I will explore these issues in data-driven computational life sciences through examples and stories from initiatives that I am involved in, and that Leiden is involved in too, including:
· FAIRDOM, which has built a Commons for Systems and Synthetic Biology projects, with an emphasis on standards smuggled in by stealth and efforts to affect sharing practices using behavioural interventions
· ELIXIR, the EU Research Data Infrastructure, and its efforts to exchange workflows
· Bioschemas.org, an ELIXIR-NIH-Google effort to support the finding of assets.
FOOPS!: An Ontology Pitfall Scanner for the FAIR principles - dgarijo
Slides presented at the DBpedia Day, at the SEMANTiCS conference in 2021. FOOPS! (available at https://w3id.org/foops) is a validator based on the FAIR principles that guides users in conforming their ontologies to them. For each principle, FOOPS! runs a series of tests and reports errors, suggestions and ways to conform to best practices.
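The test-and-advise pattern behind such a scanner can be sketched as follows. The individual checks and metadata fields below are simplified assumptions for illustration, not the actual FOOPS! test suite.

```python
# Sketch of FOOPS!-style checking: each test inspects one aspect of an
# ontology's metadata and reports pass/fail plus advice on failure.
def check_license(meta):
    return (bool(meta.get("license")),
            "Declare a license (e.g. with dcterms:license).")

def check_version(meta):
    return (bool(meta.get("version_iri")),
            "Publish a version IRI so each release is citable.")

def check_prefix(meta):
    return (bool(meta.get("preferred_prefix")),
            "Register a preferred namespace prefix.")

def run_checks(meta):
    report = []
    for check in (check_license, check_version, check_prefix):
        ok, advice = check(meta)
        report.append({"check": check.__name__, "ok": ok,
                       "advice": None if ok else advice})
    return report

onto = {"license": "https://creativecommons.org/licenses/by/4.0/",
        "version_iri": None,
        "preferred_prefix": "ex"}
for r in run_checks(onto):
    print(r["check"], "PASS" if r["ok"] else "FAIL")
```

Keeping each principle as an independent check function is what lets the report pair every failure with a concrete remediation suggestion.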
The Seven Deadly Sins of Bioinformatics - Duncan Hull
Keynote talk at Bioinformatics Open Source Conference (BOSC) Special Interest Group at the 15th Annual International Conference on Intelligent Systems for Molecular Biology (ISMB 2007) in Vienna, July 2007 by Carole Goble, University of Manchester.
Sources of Change in Modern Knowledge Organization Systems - Paul Groth
Talk covering how knowledge graphs are making us rethink how change occurs in Knowledge Organization Systems. Based on https://arxiv.org/abs/1611.00217
Clariah Tech Day: Controlled Vocabularies and Ontologies in Dataverse - vty
This presentation is about support for external controlled vocabularies (CVs) in Dataverse, an open-source data repository. Data Archiving and Networked Services (DANS-KNAW) decided to use Dataverse as the basic technology to build Data Stations and provide FAIR data services for various Dutch research communities.
Keynote presentation delivered at ELAG 2013 in Gent, Belgium, on May 29 2013. Discusses Research Objects and the relationship to work my team has been involved in during the past couple of years: OAI-ORE, Open Annotation, Memento.
dkNET Webinar: Discover the Latest from dkNET - Biomed Resource Watch 06/02/2023 - dkNET
Presenter: Jeffrey Grethe, PhD, dkNET Principal Investigator, University of California San Diego
Abstract
The dkNET (NIDDK Information Network) team is announcing an exciting new service - Biomed Resource Watch (BRW, https://scicrunch.org/ResourceWatch), a knowledge base for aggregating and disseminating known problems and performance information about research resources such as antibodies, cell lines, and tools. We aggregate trustworthy information from authorized sources such as Cellosaurus, Antibody Registry, Human Protein Atlas, ENCODE, and many more. In addition, BRW includes antibody specificity text mining information extracted from the literature via natural language processing. BRW provides researchers and curators an easy-to-use interface to report their claims about a specific resource. Researchers can check information about a resource before planning their experiments via BRW-enhanced Resource Reports. This new service aims to help improve efficiency in selecting appropriate resources, enhancing scientific rigor and reproducibility, and promoting a FAIR (Findable, Accessible, Interoperable, Reusable) research resource ecosystem in the biomedical research community.
Join us for a webinar to introduce the following resources & topics:
1. An overview of dkNET
2. How Resource Reports benefit you
3. Biomed Resource Watch
3.1 Navigating Biomed Resource Watch
3.2 How to Submit a Claim
Upcoming webinars schedule: https://dknet.org/about/webinar
Towards OpenURL Quality Metrics: Initial Findings - alc28
Presentation on creating a method for benchmarking metadata consistency in OpenURL links. Delivered at the July 2009 American Library Association conference in Chicago.
Ontologies For the Modern Age - McGuinness' Keynote at ISWC 2017 - Deborah McGuinness
Ontologies are seeing a resurgence of interest and usage as big data proliferates, machine learning advances, and integration of data becomes ever more important. The previous models of sometimes labor-intensive, centralized ontology construction and maintenance do not mesh well in today’s interdisciplinary world that is in the midst of a big data, information extraction, and machine learning explosion. In this talk, we will provide some historical perspective on ontologies and their usage, and discuss a model of building and maintaining large collaborative, interdisciplinary ontologies along with the data repositories and data services that they empower. We will give a few examples of heterogeneous semantic data resources made more interconnected and more powerful by ontology-supported infrastructures, discuss a vision for ontology-enabled future research and provide some examples from a large health empowerment joint effort between RPI and IBM Watson Health.
Make our Scientific Datasets Accessible and Interoperable on the Web - Franck Michel
The presentation investigates the challenges that we must face to share scientific datasets on the Web following the Linked Open Data principles. We present the standards of the Semantic Web and investigate how they can help address those challenges. We give tips on how to choose vocabularies to describe data and metadata, link datasets to other related datasets by making appropriate alignments, translate existing data sources to RDF and publish them on the Web as linked data.
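The "translate existing data sources to RDF" step can be sketched as a column-to-property mapping that turns one tabular record into triples. The vocabulary choice (schema.org terms), the base URI and the column names below are illustrative assumptions.

```python
# Sketch: map one tabular record to subject-predicate-object triples
# using a well-known vocabulary. All URIs here are illustrative.
BASE = "https://example.org/dataset/"
MAPPING = {                      # column name -> property URI
    "title": "https://schema.org/name",
    "date": "https://schema.org/datePublished",
    "creator": "https://schema.org/creator",
}

def row_to_triples(row_id, row):
    subject = BASE + row_id
    return [(subject, MAPPING[col], value)
            for col, value in row.items() if col in MAPPING]

triples = row_to_triples("obs42", {
    "title": "Coral reef survey",
    "date": "2015-06-01",
    "creator": "Marine Lab",
    "internal_code": "X9",       # unmapped column: deliberately dropped
})
for t in triples:
    print(t)
```

Keeping the mapping as data rather than code is what makes it easy to swap in a better-aligned vocabulary later without touching the conversion logic.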
The objective of this webinar is to provide a brief overview of the Knowledge Organization Systems (KOS) and the tools used for managing them. The presentation will focus on the management of the multilingual Organic.Edunet ontology as a case study. In this context it will present aspects such as the collaborative work, multilinguality needs and update of the concepts using an online KOS management tool (MoKi).
Presentation made in the context of the FAO AIMS Webinar titled “Knowledge Organization Systems (KOS): Management of Classification Systems in the case of Organic.Edunet” (http://aims.fao.org/community/blogs/new-webinaraims-knowledge-organization-systems-kos-management-classification-systems)
21/2/2014
This presentation was given by guest lecturer Martin Szomszor of Electric Data Solutions LTD, during the seventh session of the NISO Spring training series "Working with Scholarly APIs." Session Seven, Methods and Tools for Scholarly Data Analytics, was moderated by Phill Jones of MoreBrains Cooperative and held on June 9, 2022.
Bollini, Andrea, Ballarini, Emanuele, Buso, Irene, Boychuk, Mykhaylo, Cortese, Claudio, Digilio, Giuseppe, Fazio, Riccardo, Fiorenza, Damiano, Giamminonni, Luca, Lombardi, Corrado, Maffei, Stefano, Negretti, Davide, Orlandi, Sara, Pascarelli, Luigi Andrea, Perelli, Matteo, Scancarello, Immacolata, Scognamiglio, Francesco Pio, & Mornati, Susanna. (2022, June 8). DSpace-CRIS, anticipating innovation. Open Repositories 2022 (OR2022), Denver, Colorado. Zenodo. https://doi.org/10.5281/zenodo.6733234
DSpace-CRIS is the first open source CRIS/RIMS platform in the world. In 2022 the project reaches its 10th anniversary: the first open-source release, version 1.8.2 alpha, took place in November 2012.
Technically it is a fork of the DSpace platform, but the two communities have always walked together with the aim of bringing all the general-purpose features of DSpace-CRIS to the main community. With version 7 and, especially, with the introduction of configurable entities in DSpace, the gap between these two "cousin" projects has been drastically reduced. However, thanks to the DSpace-CRIS community's increased experience in dealing with very complex use cases that have only recently found their way into “simple” DSpace, there are still many areas where DSpace-CRIS provides more advanced and still unique functionalities.
The presentation will summarize unique features and characteristics of DSpace-CRIS over DSpace in 7 minutes.
CopyrightLY: Blockchain and Semantic Web for Decentralised Copyright Management - Roberto García
CopyrightLY focuses on building an authorship and rights management layer that provides a set of services to claim authorship of both content and data. Moreover, it also makes it possible to attach reuse terms to these claims, which state the conditions to reuse the associated data or content. This authorship and rights management layer will constitute the foundation for future services built on top of it, like social media copyright management or media monetisation through NFTs.
Facilitating an agricultural data ecosystem - The EU Code of conduct on agric... - Roberto García
To facilitate the creation of a data ecosystem in the agricultural sector that allows the realisation of its full potential, existing barriers that complicate data collection, integration and exploitation should be lowered. Codes of conduct, such as that of the European Union, are aimed at these difficulties. The experience gained during the development of the Global Forest Biodiversity Initiative data portal shows that following the recommendations of the code of conduct facilitates the emergence of a community of data providers and an ecosystem for its exploitation.
Facilitating an agricultural data ecosystem: the European Union code of conduct... - Roberto García
To facilitate the creation of a data ecosystem in the agricultural sector that allows the full potential of its digitalisation to be realised, the barriers that complicate the collection, integration and exploitation of data must be broken down. Codes of conduct, such as that of the European Union, are aimed precisely at this. The talk presents the experience with the Global Forest Biodiversity Initiative data portal, which shows how following the recommendations of the code of conduct enables the emergence of a community of data providers and an ecosystem for their exploitation.
ETHICOMP 2020: Exploring Value Sensitive Design for Blockchain Development - Roberto García
The potential impact that blockchain technologies might have on our society makes it paramount to consider human values during their design and development. Though the blockchain community has been driven from the beginning by a set of values favoured by the underlying technologies, it is necessary to explore how these values play out among the diverse set of stakeholders and the potential conflicts that might arise. The final aim is to motivate the establishment of a set of guidelines that make blockchains better support human values, despite the initial bias these technologies might impose. The results so far have been used to analyze, from a values perspective, a blockchain application driven by ethical values.
Social Media Copyright Management using Semantic Web and Blockchain - Roberto García
Solutions based on distributed ledgers pose data modelling and integration challenges that can be overcome using semantic and Linked Data technologies. One example is copyright management, where we adapt the Copyright Ontology so it can be used to build applications that benefit from both worlds: rich information modelling and reasoning, together with immutable and accountable information storage that provides trust and confidence in the modelled rights statements. This approach has been applied in the context of an application for the management of social media re-use for journalistic purposes.
Semantic Web and Blockchain for Decentralized and Web-wide Content Management in the Era of Social Media
Digitalisation and the web have made it difficult for content owners to manage their rights and keep track of how their content is used and paid for. Creators also struggle to be paid royalties in a timely way. While changes are needed, a balance is required so that creativity is not stifled with a resulting loss to society. These challenges require mechanisms that scale to the Web but take into account the subtleties of the underlying copyright regimes, including moral rights, fair use and other exceptions.
We propose using Web 3.0 technologies - ranging from semantic technologies to blockchain. Semantic technologies provide knowledge representation tools capable of modelling copyright and enable computer-supported rights management. Blockchain with smart contracts makes it possible to both register copyright and record transactions in a trustless environment while providing for automatic execution of the smart contract’s terms.
Together, semantic representations like the Copyright Ontology and smart contracts provide a promising foundation to build a decentralised platform capable of dealing with rights management at Web scale, enabling new business models that better accommodate copyright in the era of social media.
Exploring a Semantic Framework for Integrating DPM, XBRL and SDMX Data - Roberto García
Proposal of a common framework based on semantic technologies for integrating financial (and non-financial) data using a multidimensional approach based on the RDF Data Cube vocabulary.
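The multidimensional idea can be sketched as follows: facts from different reporting formats are normalised into observations that share the same dimensions (in RDF Data Cube terms, a common Data Structure Definition), so one query spans all sources. The dimension names and values below are illustrative assumptions, not actual DPM, XBRL or SDMX content.

```python
# Sketch: observations with shared dimensions, in the spirit of
# qb:Observation. Dimension names are illustrative assumptions.
def observation(measure, value, **dims):
    return {"measure": measure, "value": value, "dims": dims}

# One fact reduced from an XBRL-style filing and one from an
# SDMX-style series, both given the same dimensional shape:
obs = [
    observation("assets", 120.0, entity="BankA", period="2016-Q4"),
    observation("assets", 125.0, entity="BankA", period="2017-Q1"),
    observation("gdp_growth", 0.4, entity="EU28", period="2017-Q1"),
]

def slice_cube(cube, **fixed):
    """All observations whose dimensions match the fixed values."""
    return [o for o in cube
            if all(o["dims"].get(k) == v for k, v in fixed.items())]

q1 = slice_cube(obs, period="2017-Q1")
print([o["measure"] for o in q1])   # facts from both sources, one query
```

Once the shared dimensions are agreed, slicing and dicing across heterogeneous sources becomes a uniform operation, which is precisely what the common semantic framework aims for.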
Integration and Exploration of Financial Data using Semantics and Ontologies - Roberto García
Keynote at the Eurofiling XBRL Week, Academic Track, 6-9 June 2017, hosted by the European Central Bank, Frankfurt, Germany. The keynote reported on one of the first attempts to move a significant amount of XBRL to the Semantic Web, modelling XBRL XML with RDF and XBRL Taxonomies with OWL.
Multilingual Ontology for Plant Health Threats Media Monitoring - Roberto García
Development and testing of the media monitoring tool MedISys for the early identification and reporting of existing and emerging plant health threats, guided by a plant health threats ontology.
BESDUI: Benchmark for End-User Structured Data User Interfaces - Roberto García
BESDUI is a first proposal to establish an accepted benchmark to measure the performance, from an end-user perspective, of tools for structured data exploration and search. This includes relational and semantic data. More details: http://w3id.org/BESDUI
Semantic Management of your Media Fragments Rights - Roberto García
Presentation for the MediaMixer project webinar about the application of semantic technologies for digital asset and media fragments copyright management. The presentation includes motivation for going beyond Digital Rights Management (DRM), details about copyright modelling using Semantic Web technologies, the Copyright Ontology and its implementation.
Semantic Technologies for Copyright Management - Roberto García
Introduction to the semantic technologies for copyright management put into practice in the context of the MediaMixer project. Presentation at the 1st Winter School on Multimedia Processing and Applications (WMPA'14), organised by the MediaMixer project and co-located with the MultiMedia Modelling Conference (MMM'14) in Dublin, Ireland.
The MediaMixer project and community promote the use of semantic technologies for media mixing through real use cases and demos that showcase them, including Digital Asset Management systems. A typical MediaMixer demo will involve fragmenting media assets, annotating them using semantic descriptions and exposing these descriptions to customers, for fragment level search and selection. Fragments will be also linked to rights information based on a copyright ontology, which integrates contracts, policies and rights expressions based on existing standards like DDEX, Creative Commons or MPEG-21.
Linked Data: the Entry Point for Worldwide Media Fragments Re-use and Copyrig... - Roberto García
One of the biggest barriers for the uptake of a Web of Media is the availability of easy ways to reuse media fragments and manage their copyright. Existing proposals provide limited solutions or find it difficult to scale to the Web. MediaMixer contributes state of the art techniques for media fragment detection and semantic annotation.
This is complemented with copyright management integrated into the Web fabric, using Linked Data principles and reasoning based on a Copyright Ontology. Altogether, it makes it possible to navigate the Web retrieving the metadata describing a piece of content to be reused, linked to the agreement about its copyright, the parties that will share the revenue, etc.
A typical MediaMixer demo involves:
* Fragmenting media assets
* Annotating them using semantic descriptions
* Modeling licenses, policies,... using the Copyright Ontology
* Exposing them for fragment level retrieval and re-use, including copyright reasoning
Semantic Copyright Management of Media Fragments - Roberto García
The amount of media on the Web poses many scalability issues, among them copyright management. This problem becomes even bigger when not just the copyright of pieces of content has to be considered, but also that of media fragments. Fragments and the management of their rights, beyond simple access control, are the centrepiece for media reuse. This can become an enormous market where copyright has to be managed through the whole value chain. To attain the required level of scalability, it is necessary to provide highly expressive rights representations that can be connected to media fragments. Ontologies provide enough expressive power and facilitate the implementation of copyright management solutions that can scale in such a scenario. The proposed Copyright Ontology is based on Semantic Web technologies, which facilitate implementations at the Web scale, can reuse existing recommendations for media fragments identifiers and interoperate with existing standards. To illustrate these benefits, the paper presents a use case where the ontology is used to enable copyright reasoning on top of DDEX data, the industry standard for information exchange along media value chains.
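The kind of copyright reasoning described can be sketched as a permission check over rights statements attached to media fragments. The action names, the fragment identifier syntax and the flat agreement structure below are simplified assumptions for illustration, not the Copyright Ontology's actual terms or DDEX's data model.

```python
# Sketch: agreements grant actions on media fragments to parties;
# a query checks whether a requested use is covered. Illustrative only.
agreements = [
    {"grantee": "BroadcasterB", "action": "communicate",
     "fragment": "clip17#t=10,25", "territory": "EU"},
    {"grantee": "BroadcasterB", "action": "transform",
     "fragment": "clip17#t=10,25", "territory": "EU"},
]

def permitted(grantee, action, fragment, territory):
    """True if some agreement covers exactly this use."""
    return any(a["grantee"] == grantee and a["action"] == action
               and a["fragment"] == fragment and a["territory"] == territory
               for a in agreements)

print(permitted("BroadcasterB", "communicate", "clip17#t=10,25", "EU"))
print(permitted("BroadcasterB", "distribute", "clip17#t=10,25", "EU"))
```

A real ontology-backed reasoner generalises this exact-match check with class hierarchies of actions and territories, which is where the expressive power mentioned above pays off.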
MediaMixer: facilitating media fragments mixing and its rights management usi... - Roberto García
The MediaMixer project and community promote the use of semantic technologies for media mixing through real use cases and demos that showcase them. A typical MediaMixer demo will involve fragmenting media assets, annotating them using semantic descriptions and exposing these descriptions to customers, for fragment level search and selection. Fragments will be also linked to rights information based on a copyright ontology, which integrates licenses, policies and rights expressions based on existing standards like DDEX, ODRL or MPEG-21.
Talk about Exploring the Semantic Web, and particularly Linked Data, and the Rhizomer approach. Presented August 14th 2012 at the SRI AIC Seminar Series, Menlo Park, CA
Facets and Pivoting for Flexible and Usable Linked Data Exploration - Roberto García
The success of Open Data initiatives has increased the amount of data available on the Web. Unfortunately, most of this data is only available in raw tabular form, which makes analysis and reuse quite difficult for non-experts. Linked Data principles allow for a more sophisticated approach by making explicit both the structure and semantics of the data. However, from the end-user viewpoint, such datasets remain monolithic files that are completely opaque, or that can only be explored through tedious semantic queries. Our objective is to help users grasp what kind of entities are in the dataset, how they are interrelated, which are their main properties and values, etc. Rhizomer is a tool for data publishing whose interface provides a set of components borrowed from Information Architecture (IA) that facilitate awareness of the dataset at hand. It automatically generates navigation menus and facets based on the kinds of things in the dataset and how they are described through metadata properties and values. Moreover, motivated by recent tests with end-users, it also provides the possibility to pivot among the faceted views created for each class of resources in the dataset.
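The facet-generation idea can be sketched by deriving a class menu and per-class facets directly from the data itself. The tiny triple set and prefixed names below are illustrative assumptions, not Rhizomer's actual implementation (which works via SPARQL over the published dataset).

```python
from collections import Counter

# Sketch: count instances per class to build a navigation menu, and
# list the properties used on a class's instances to propose facets.
RDF_TYPE = "rdf:type"
triples = [
    ("ex:bowie", RDF_TYPE, "ex:Musician"),
    ("ex:bowie", "ex:genre", "Rock"),
    ("ex:eno",   RDF_TYPE, "ex:Musician"),
    ("ex:eno",   "ex:genre", "Ambient"),
    ("ex:lyon",  RDF_TYPE, "ex:City"),
]

def class_menu(ts):
    """Classes in the dataset, ordered by number of instances."""
    counts = Counter(o for _, p, o in ts if p == RDF_TYPE)
    return counts.most_common()

def facets_for(cls, ts):
    """Properties (other than rdf:type) used on instances of cls."""
    instances = {s for s, p, o in ts if p == RDF_TYPE and o == cls}
    return sorted({p for s, p, _ in ts
                   if s in instances and p != RDF_TYPE})

print(class_menu(triples))
print(facets_for("ex:Musician", triples))
```

Because both the menu and the facets are computed from the data, they stay in step with the dataset as it evolves, which is what makes the approach usable on arbitrary Linked Data.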
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
A pragmatic view on Semantic Technologies
1. Invited Talk:
A pragmatic view on Semantic Technologies
Roberto García, Universitat de Lleida, Spain
International Semantic Intelligence Conference (ISIC 2021)
New Delhi, India - February 25, 2021
3. Motivation & Outline
• Illustrate that even
“a little semantics goes a long way”
James Hendler, circa 1997
https://www.cs.rpi.edu/~hendler/LittleSemanticsWeb.html
• Do so through applications using semantic technologies
I have participated in:
• Game of Thrones
• Example project I use in the classroom
• MedISys Plant Health Threats
• Media Monitoring project for the European Food Safety Authority
• InVID Social Media Verification
• European research project about media verification and reuse for
journalistic purposes
International Semantic Intelligence Conference, ISIC 2021, New Delhi, India, February 25-27, 2021
4. Example 1: Classroom Project
• Example project to show students what is expected from
the project they should deliver at the end of the course
• Motivation:
an application that supports readers of the Game of
Thrones books (especially those who have seen
the TV series)
• Characters, the houses they are loyal to, the books they appear
in, a picture showing the series actor playing the character,...
• Added value by using semantic technologies:
• Reduced cost by integrating multiple existing data sources
• CSV, SPARQL, Web pages,…
• Facilitate the development of apps that allow exploring the
data
• Ease conceptualisation and maintenance by reusing
existing vocabularies (ontologies)
5. Reuse Existing Data
• Kaggle dataset (https://www.kaggle.com/mylesoneill/game-of-thrones)
• character-deaths.csv
• Structure: name, allegiance, death year,...
nobility (1: true | 0: false), appear in book (1: true | 0: false)
13. Tabular to Semantic Data
• Generate unique identifiers as URIs
• Independent from data source
• Table to RDF triples (subject-predicate-object/literal)
• Rows
correspond to the same subject, identified by a URI
• Columns
correspond to subject properties
• Cells
correspond to objects (relationships) or literals (attributes)
• objects, subjects, and properties: replace the tabular value with a URI
• Jon Snow http://mydomain.org/persons/Jon_Snow
• literals: keep the text and make the data type explicit if available
• 299 "299"^^<http://www.w3.org/2001/XMLSchema#gYear>
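The row-to-triples mapping above can be sketched in Python. This is an illustrative sketch: the column names, namespaces, and property URIs are assumptions following the slides, not the project's actual conversion code.

```python
import csv
import io

# Assumed namespaces, following the URIs shown in the slides
PERSONS = "http://mydomain.org/persons/"
FAMILIES = "http://mydomain.org/families/"

def mint_uri(base: str, name: str) -> str:
    """Mint a source-independent URI for a cell value."""
    return base + name.strip().replace(" ", "_")

def row_to_turtle(row: dict) -> str:
    """One row -> one subject; columns -> properties;
    cells -> object URIs (relationships) or typed literals (attributes)."""
    s = f"<{mint_uri(PERSONS, row['Name'])}>"
    triples = [
        f'{s} rdfs:label "{row["Name"]}"@en',
        # relationship: replace the tabular value with a URI
        f"{s} dbo:allegiance <{mint_uri(FAMILIES, row['Allegiances'])}>",
        # attribute: keep the text, make the datatype explicit
        f'{s} got:deathYear "{row["Death Year"]}"'
        '^^<http://www.w3.org/2001/XMLSchema#gYear>',
    ]
    return " .\n".join(triples) + " ."

sample = io.StringIO("Name,Allegiances,Death Year\nJon Snow,House Stark,299\n")
for row in csv.DictReader(sample):
    print(row_to_turtle(row))
```

In practice this step is done with tools rather than ad-hoc scripts, as the next slide notes.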
14. Tabular to Semantic Data
• Example for one row:
@prefix : <http://mydomain.org/persons/> .
@prefix families: <http://mydomain.org/families/> .
@prefix got: <http://mydomain.org/got-ontology/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix dbo: <http://dbpedia.org/ontology/> .
@prefix dbp: <http://dbpedia.org/property/> .
@prefix dbr: <http://dbpedia.org/resource/> .
...
:Robert_Baratheon rdf:type dbo:Noble, dbo:FictionalCharacter ;
rdfs:label "Robert Baratheon"@en ;
foaf:name "Robert Baratheon"@en ;
dbo:allegiance families:House_Baratheon ;
dbp:gender "Male" ;
got:deathChapter "47"^^<http://www.w3.org/2001/XMLSchema#integer> ;
owl:sameAs dbr:Robert_Baratheon ;
...
15. Automate Transformation
• Many tools convert tabular data or relational databases to RDF
• For tabular data: OpenRefine
16. Additional Sources
• Character’s pictures https://www.hbo.com/game-of-thrones/cast-and-crew
• Get the picture, plus the character’s name, to reconcile with
the previous RDF data using OpenRefine
• Automatic integration by using the same URI for characters
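Once both sources name a character with the same URI, integration reduces to grouping properties by subject. A minimal sketch; the data and property names here are made up for illustration.

```python
# Facts from the Kaggle CSV, keyed by the character's URI
csv_facts = {
    "http://mydomain.org/persons/Jon_Snow": {"dbo:allegiance": "families:House_Stark"},
}
# Facts scraped from the cast pages, reconciled to the same URIs
web_facts = {
    "http://mydomain.org/persons/Jon_Snow": {"foaf:depiction": "https://www.hbo.com/..."},
}

def merge(*sources: dict) -> dict:
    """Integrate sources automatically: same URI -> same merged description."""
    merged: dict = {}
    for source in sources:
        for uri, props in source.items():
            merged.setdefault(uri, {}).update(props)
    return merged

combined = merge(csv_facts, web_facts)
print(sorted(combined["http://mydomain.org/persons/Jon_Snow"]))
# → ['dbo:allegiance', 'foaf:depiction']
```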
17. Final Result
• Automatic User Interface
to explore the data
• Generation driven by
data structure
https://rhizomer.rhizomik.net/datasets/got
18. Example 2: EFSA Project
• Multilingual Ontology for Plant Health Threats Media Monitoring
• Development and testing of the media monitoring tool MedISys for
the early identification and reporting of existing and emerging
plant health threats
• Timing (duration): January 2014 – June 2016 (2.5 years)
• Funding: EFSA
• Objectives:
• Collate new and appropriate media information sources
• Multilingual ontology for the global identification of emerging new
plant health threats to be appended to MedISys
• English, Spanish, Italian, French, Dutch, German, Portuguese, Russian, Chinese and Arabic
• Develop and test strategies to monitor re-emerging plant health
threats on a global and regional scale
19. Proposed Approach
• Approach based on the use of semantics and ontologies
• Ontology: key component of the developed system that structures and
provides knowledge about plant health threats
• Knowledge captured from existing sources and experts
• Guides applications for
• Knowledge capture
• Indirect sources search
• Terms translation
• Media monitoring
categories generation
An ontology is a formal, explicit specification of a shared conceptualisation.
[Diagram unpacking the definition: "conceptualisation" is an abstract model of a portion of the world; "explicit" means expressed in terms of concepts, properties,...; "formal" implies machine-readable and understandable; "shared" means based on a consensus]
20. Ontology Generation
• Ontology Skeleton
• Collected 140 pests/diseases from EPPO Alerts, 2000/29-1-
A-1 and EU Emergency Control Measures
• 117 linked to UniProt Taxonomy:
• Taxonomical information, scientific/common/other names,…
• 47 linked also to Wikipedia
• Common names in
multiple languages
21. Ontology Generation
• Plant Health Threats Ontology
• Enrich ontology with affected crops, hosts, vectors, symptoms
expressions…
22. Ontology Enrichment
• Plant Health Threats Ontology
• All concepts linked to labels in different languages
• Extract as keywords for MedISys or Web search filters,…
• Example: “Maladie de Pierce” OR ( “grapevine” AND
“sharpshooter” )
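The filter-generation step can be sketched as follows: direct disease labels are OR'ed together with (crop AND vector) keyword pairs. This is a simplification of the actual MedISys category generation, with made-up function and parameter names.

```python
def build_filter(disease_labels, keyword_pairs):
    """Combine direct disease labels (OR) with indirect
    (crop AND vector) keyword pairs extracted from the ontology."""
    direct = [f'"{label}"' for label in disease_labels]
    indirect = [f'( "{a}" AND "{b}" )' for a, b in keyword_pairs]
    return " OR ".join(direct + indirect)

q = build_filter(["Maladie de Pierce"], [("grapevine", "sharpshooter")])
print(q)  # "Maladie de Pierce" OR ( "grapevine" AND "sharpshooter" )
```

The same routine, fed with the labels of another language, yields the corresponding multilingual filters.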
[Diagram: Xylella fastidiosa, subClassOf Gammaproteobacteria; hosts: Nerium oleander, Prunus salicina, Medicago sp., Sorghum halepense,...; vectors: Homalodisca coagulata, Graphocephala sp., Oncometopia sp., Draeculacephala sp.,...; crops: Grapevine, Citrus, Olive, Almond, Peach, Coffee,...; labels: "Pierce's disease", "Citrus variegated chlorosis" (en), "Maladie de Pierce" (fr), "葉緣焦枯病菌" (zh), "Glassy-winged sharpshooter", "Spittlebugs", "Froghoppers", "Planthoppers",... (en), "vite" (it),...]
23. Ontology Enrichment
• Ontology Editor
• Assist experts during the knowledge capture process
24. Assisted Knowledge Capture
• Ontology Editor – forms with assistance
25. Assisted Knowledge Capture
• Ontology Editor - autocomplete
26. Example 3: InVID Project
• H2020 project InVID, In Video Veritas
• Verification of Social Media Video Content for the News
Industry
• https://www.invid-project.eu
• Reuse of User Generated Video from Social Media for
journalistic purposes
• Discovering social media about current events
• Video verification to avoid fake news
• Request reuse, check licensing, negotiate terms, sign
agreements,… even economic compensation
27. Objectives
• Sophisticated models for copyright information:
• Rights status
• Reuse terms
• Negotiation
• Copyright agreements
• Trust and confidence in rights statements
• Potentially legally binding
(time stamp, signatures, tamper proof,…)
• Proposed approach:
• Semantic Web: rich information modelling and reasoning
• Blockchain: immutable and accountable information storage
28. Copyright Ontology
• Copyright knowledge representation
• Copyright Ontology based on the
fundamental ontological distinctions:
• Abstract: intangible
• Process: happens,
temporal stages
(action, event,…)
• Object: can be defined
independent of time
(includes digital objects)
[Example: Victor Hugo’s Les Misérables at the Abstract, Object, and Process levels]
29. Copyright Ontology
• Also capture the dynamic parts of
the copyright value chain
• Actions performed
by value chain participants
• Plus consumer actions:
• Buy, Attend, Access,
Play, Tune,…
• Plus licensing actions:
• Agree/Disagree
• Transfer, Attribute,…
[Diagram: copyright value chain for Victor Hugo’s Les Misérables: the Creator manifests the Literary Work as a Script, an Actor performs it (Performance), a Producer records it as a Motion Picture (an Adaptation via adapt), a Broadcaster broadcasts it (Communication), and the User tunes in]
30. Copyright Ontology
• Model the full details of an action,
its dimensions, like those of a verb in
a sentence (roles):
• who performs it,
what is manipulated,
when, where…
Role Kind | Main Role | Description
who | schema:agent | The direct performer or driver of the action (animate or inanimate)
who | schema:participant | Other co-agents that participated in the action indirectly, for instance a recipient
what | schema:object | The object upon which the action is carried out
what | schema:result | The result produced in the action
where | schema:location | Where an action takes place
where | schema:fromLocation | The original location of the object or the agent before the action
where | schema:toLocation | The final location of the object or the agent after the action
when | schema:startTime | When the action started or the time it is expected to start
when | schema:endTime | When the action finished or the time it is expected to end
when | pointInTime | The point in time when the action happens
when | duration | The amount of time the action requires to complete
with | schema:instrument | The object that helps the agent perform the action
why | aim | The reason or objective of the action
how | manner | The way the action is carried out
if | condition | Something that must hold or happen before the action starts
then | consequence | Something that must hold or happen after the action is completed
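In code, such an action can be modelled as a node whose roles are properties. A minimal sketch: the role names follow the table above, while the values and the `Action` class itself are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    """An action described through its roles, like a verb in a sentence."""
    kind: str                      # e.g. cro:Agree, cro:MakeAvailable
    roles: dict = field(default_factory=dict)

agree = Action("cro:Agree", roles={
    "schema:agent": "…/inVIDUsers/1",            # who performs it
    "schema:object": "…/reuseTerms/1",           # what is manipulated
    "schema:startTime": "2019-02-16T15:15:00Z",  # when
    "schema:location": "Spain",                  # where
})
print(agree.kind, "by", agree.roles["schema:agent"])
```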
31. Check UGV Rights Status
https://rights.invid.udl.cat
32. Request Reuse
Current video, plus all future YouTube videos by the content owner, or in any social network linked to the InVID profile
33. License Reasoning
• Streamline licensing
• License to organisation or everyone
• License future videos
• Semantic representation
of agreements
• Semantic queries to check
previous agreements
• Including territories,
timeframes
or revocations
[Architecture: InVID Rights Management stores agreements in a Rights Database and, serialised as JSON-LD, in a Semantic Repository used for Semantic Copyright Management]
34. Store Agreement
• JSON-LD serialisation of a
Reuse Agreement:
• Grants any member of the Daily Planet
permission to republish a YouTube
video whose owner is the Google user
{
"@context": {
"@vocab": "http://invid.udl.cat/ontology/",
"cro": "http://rhizomik.net/ontologies/copyrightonto.owl#",
"schema": "http://schema.org/"
},
"@id": "…/reuseAgreements/1", "@type": "cro:Agree",
"cro:when": "2019-02-16T15:15:00Z",
"cro:who": [
{
"@id": "…/inVIDUsers/1", "@type": "schema:Person",
"schema:name": "Clark Kent",
"schema:email": "journalist@invid-project.eu",
"schema:memberOf": {
"@id": "…/organizations/1",
"@type": "schema:Organization",
"schema:name": "Daily Planet"
}
},
{
"@id": "…/contentOwners/1", "@type": "schema:Person",
"username": "user", "schema:email": "user@gmail.com",
"schema:name": "Google User"
} ],
"cro:what": {
"@id": "…/reuseTerms/1", "@type": "cro:MakeAvailable",
"schema:startTime": "2019-03-01T10:44:00Z",
"schema:endTime": "2019-05-01T10:44:00Z",
"cro:who": { "@id": "…/organizations/1" },
"cro:what": {
"@id": "…/youTubeVideos/_5l7vn1QdKM", "@type": "YouTubeVideo",
"user": {
"@id": "…/youTubeChannels/MyChannel", "@type": "YouTubeChannel",
"contactURL": "http://www.youtube.com/channel/MyChannel/about",
"contentOwner": { "@id": "…/contentOwners/1" }
}
}
}
}
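Because JSON-LD is plain JSON, the stored agreement can also be consumed without RDF tooling. A trimmed sketch with simplified `@id` values (not the actual InVID service code):

```python
import json

# Trimmed version of the reuse agreement above (simplified @ids)
agreement_json = """{
  "@id": "reuseAgreements/1", "@type": "cro:Agree",
  "cro:when": "2019-02-16T15:15:00Z",
  "cro:who": [
    {"@id": "inVIDUsers/1", "schema:name": "Clark Kent",
     "schema:memberOf": {"@id": "organizations/1",
                         "schema:name": "Daily Planet"}},
    {"@id": "contentOwners/1", "schema:name": "Google User"}
  ],
  "cro:what": {"@id": "reuseTerms/1", "@type": "cro:MakeAvailable",
               "cro:who": {"@id": "organizations/1"}}
}"""

agreement = json.loads(agreement_json)
term = agreement["cro:what"]
print(agreement["@type"], "grants", term["@type"],
      "to", term["cro:who"]["@id"])
# → cro:Agree grants cro:MakeAvailable to organizations/1
```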
36. SPARQL for License Reasoning
• SPARQL standard for semantic
queries
• Check intended reuse against
existing agreements
• Encapsulate complexities minimising
implementation cost
• Flexibility and scalability
• Example:
• Active agreements, not disagreed,
with Make Available term
• what: restricted to the YouTube
video _5l7vn1QdKM
• startTime: 2019-11-15
• who: is InVIDUser 2, Organization 1
or any organization InVIDUser 2 is a
member of
• where: Spain or a region Spain is
contained in
PREFIX …
SELECT DISTINCT ?isAuthorized ?why
WHERE {
?agree rdf:type cro:Agree ;
cro:what ?term ; cro:when ?agreeDate
FILTER ( xsd:dateTime( ?agreeDate) <= now() )
OPTIONAL {
?disagree rdf:type cro:Disagree ;
cro:what ?term ; cro:when ?disagreeDate
FILTER ( xsd:dateTime( ?disagreeDate) <= now() )
}
BIND((bound(?agree) && (!bound(?disagree))) AS ?isAuthorized)
BIND(if(bound( ?disagree), ?disagree, ?agree) AS ?why)
?term rdf:type cro:MakeAvailable .
?term cro:what <…/youTubeVideos/_5l7vn1QdKM> .
?term schema:startTime ?start FILTER ("2019-11-15" >= ?start)
…
{
{ ?term cro:who <…/inVIDUsers/2>}
UNION
{ ?term cro:who <…/organizations/1>}
UNION
{ ?term cro:who ?organization .
<…/inVIDUsers/2> schema:memberOf ?organization }
}
{
{ ?term cro:where "Spain"}
UNION
{ ?term cro:where ?regionName .
?region rdfs:label ?regionName .
?country rdfs:label "Spain" .
?country (schema:containedInPlace)+ ?region
}
}
}
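The core of that query can also be expressed without SPARQL. A plain-Python sketch of the logic: an active Agree on a matching MakeAvailable term, not cancelled by a Disagree, with the beneficiary expanded through organisation membership. The data shapes and field names are illustrative assumptions.

```python
def is_authorized(agreements, disagreements, user, user_orgs, video, date):
    """Authorized iff some Agree covers a MakeAvailable term for the video
    and beneficiary, already active, and not cancelled by a Disagree."""
    disagreed = {d["what"] for d in disagreements}  # cancelled term ids
    for agree in agreements:
        term = agree["what"]
        if term["type"] != "MakeAvailable" or term["video"] != video:
            continue
        # who: the user directly, or any organisation the user is a member of
        if term["who"] != user and term["who"] not in user_orgs:
            continue
        # startTime: ISO dates compare correctly as strings
        if term["start"] > date:
            continue
        if term["id"] in disagreed:
            continue  # a Disagree cancels this Agree
        return True
    return False

term = {"id": "reuseTerms/1", "type": "MakeAvailable",
        "video": "youTubeVideos/_5l7vn1QdKM",
        "who": "organizations/1", "start": "2019-03-01"}
print(is_authorized([{"what": term}], [], "inVIDUsers/2",
                    {"organizations/1"}, "youTubeVideos/_5l7vn1QdKM",
                    "2019-11-15"))  # → True
```

The SPARQL version keeps this logic in the semantic repository, so territory containment, revocations, and new term types can be handled by adding graph patterns rather than changing application code.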
37. Trustful Agreements
• Use Ethereum Smart Contracts
• Blockchain as a global shared computer
• Immutable transactions (executed in all nodes)
• Encode rules guaranteed to execute
• Smart contract keeps track of semantic agreements
• Participants digitally sign negotiation steps, last by both (agreement)
• Identity management using uPort mobile app
• Self-Sovereign Identities (e.g. email attestations)
• Transaction signing: scan QR code
• Optional: remuneration using cryptocurrency wallet
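One way the off-chain agreement becomes tamper-evident is by anchoring only a digest of its serialisation on-chain. A minimal sketch; a real deployment would use a proper canonicalisation of the RDF rather than sorted JSON:

```python
import hashlib
import json

def agreement_digest(agreement: dict) -> str:
    """Digest of a canonical-ish serialisation: what a smart contract
    could store on-chain so the off-chain JSON-LD copy is tamper-evident."""
    canonical = json.dumps(agreement, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

agreement = {"@type": "cro:Agree", "cro:when": "2019-02-16T15:15:00Z"}
digest = agreement_digest(agreement)
tampered = dict(agreement, **{"cro:when": "2020-01-01T00:00:00Z"})
assert agreement_digest(tampered) != digest  # any change is detectable
print(digest[:16])
```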
[Diagram: InVID Rights Management stores agreements in the Rights Database and anchors them in a Distributed Ledger, providing agreement time stamping, accountability, auditability, tamper proofing, and identity attestations (https://www.uport.me)]
38. Thank you for your attention
Questions?
More details:
http://rhizomik.net/~roberto