Biomedicine has always been a fertile and challenging domain for computational discovery science. Indeed, millions of scientific articles, thousands of databases, and hundreds of ontologies offer exciting opportunities to reuse our collective knowledge, were we not stymied by incompatible formats, overlapping and incomplete vocabularies, unclear licensing, and heterogeneous access points. In this talk, I will discuss our work to create computational standards, platforms, and methods to wrangle knowledge into simple but effective representations based on semantic web technologies that are maximally FAIR - Findable, Accessible, Interoperable, and Reusable - and to further use these for biomedical knowledge discovery. But only with additional crucial developments will this emerging Internet of FAIR data and services enable automated scientific discovery on a global scale.
bio:
Dr. Michel Dumontier is the Distinguished Professor of Data Science at Maastricht University and co-founder of the FAIR (Findable, Accessible, Interoperable and Reusable) data principles. His research focuses on the development of computational methods for scalable and responsible discovery science. Dr. Dumontier obtained his BSc (Biochemistry) in 1998 from the University of Manitoba, and his PhD (Bioinformatics) in 2005 from the University of Toronto. Previously a faculty member at Carleton University in Ottawa and Stanford University in Palo Alto, Dr. Dumontier founded and directs the interfaculty Institute of Data Science at Maastricht University to develop sociotechnological systems for responsible data science by design. His work is supported through the Dutch National Research Agenda, the Netherlands Organisation for Scientific Research, Horizon 2020, the European Open Science Cloud, the US National Institutes of Health and a Marie-Curie Innovative Training Network. He is the editor-in-chief for the journal Data Science and is internationally recognized for his contributions in bioinformatics, biomedical informatics, and semantic technologies including ontologies and linked data.
This presentation was given on October 21, 2020 at CIKM2020.
The Role of the FAIR Guiding Principles for an effective Learning Health System - Michel Dumontier
The learning health system (LHS) is an integrated social and technological system that embeds continuous improvement and innovation for the effective delivery of healthcare. A crucial part of the LHS lies in how the underlying information system will secure and take advantage of relevant knowledge assets towards supporting complex and unusual clinical decision making, facilitating public health surveillance, and aiding comparative effectiveness research. However, key knowledge assets remain difficult to obtain and reuse, particularly in a decentralized context. In this talk, I will discuss the role of the Findable, Accessible, Interoperable, and Reusable (FAIR) Guiding Principles towards the realization of the LHS, along with emerging technologies to publish and refine clinical research and knowledge derived therein.
Keynote given for 2021 Knowledge Representation for Health Care http://banzai-deim.urv.net/events/KR4HC-2021/
Are we FAIR yet? And will it be worth it?
The FAIR Principles propose essential characteristics that all digital resources (e.g. datasets, repositories, web services) should possess to be Findable, Accessible, Interoperable, and Reusable by both humans and machines. The Principles act as a guide to what researchers and data stewards should expect from contemporary digital resources and, in turn, to what is required of them when publishing their own scholarly products. As interest in, and support for, the Principles has spread, the diversity of interpretations has also broadened, with some resources claiming to already “be FAIR”.
This talk will elaborate on what FAIR is, what it entails, and how we should evaluate FAIRness. I will describe new social and technological infrastructure to support the creation and evaluation of FAIR resources, and how FAIR fits into institutional, national, and international efforts. Finally, I will discuss the merits of the FAIR principles (and what we ask of people) in the context of strengthening data-driven scientific inquiry.
Keynote given at NETTAB2018 - http://www.igst.it/nettab/2018/
The FAIR Principles propose key characteristics that all digital resources (e.g. datasets, repositories, web services) should possess to be Findable, Accessible, Interoperable, and Reusable by people and machines. The Principles act as a guide to what researchers should expect from contemporary digital resources and, in turn, to what is required of them when publishing their own scholarly products. As interest in, and support for, the Principles has spread, the diversity of interpretations has also broadened, with some resources claiming to already “be FAIR”. This talk will elaborate on what FAIR is, why we need it, what it entails, and how we should evaluate FAIRness. I will describe new social and technological infrastructure to support the creation and evaluation of FAIR resources, and how FAIR fits into institutional, national, and international efforts. Finally, I will discuss the merits of the FAIR principles (and what we ask of people) in the context of strengthening data-driven scientific inquiry.
The FAIR (Findable, Accessible, Interoperable, Reusable) Guiding Principles light a path towards improving the discovery and reuse of digital objects (data, documents, software, web services, etc.) by machines. Machine reusability is a crucial strategic component in building robust digital infrastructure that strengthens scholarship and opens new pathways for innovation on a truly global scale. However, because the FAIR principles do not specify any particular implementation, it falls to communities to devise, standardize, and implement technical specifications that improve the ‘FAIRness’ of digital assets. In this seminar, I will focus on the history and state of the art in FAIRness assessment, including manual, semi-automated, and fully automated approaches, and on how these can be used by developers and consumers alike. The seminar will serve as a springboard for community discussion and for the adoption of these services to incrementally and realistically improve the FAIRness of community resources.
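To make the idea of automated FAIRness assessment concrete, here is a minimal sketch in Python. The metadata field names and the pass/fail rubric below are invented simplifications for illustration, not any official FAIR maturity-indicator set; a real evaluator tests machine-actionable indicators against a live resource.

```python
# A toy automated FAIRness check over a resource's metadata record.
# Field names and scoring rubric are hypothetical simplifications.

def assess_fairness(metadata: dict) -> dict:
    """Return a per-principle pass/fail sketch plus a 0-4 score."""
    checks = {
        # Findable: a globally unique identifier plus descriptive metadata
        "F": bool(metadata.get("identifier")) and bool(metadata.get("title")),
        # Accessible: retrievable via a standard protocol (here: http(s))
        "A": str(metadata.get("access_url", "")).startswith(("http://", "https://")),
        # Interoperable: uses formal, shared vocabularies
        "I": bool(metadata.get("vocabularies")),
        # Reusable: carries a clear, machine-readable usage license
        "R": bool(metadata.get("license")),
    }
    score = sum(checks.values())
    checks["score"] = score
    return checks

# Hypothetical example record (identifier and URL are placeholders)
record = {
    "identifier": "https://doi.org/10.xxxx/example",
    "title": "Example dataset",
    "access_url": "https://example.org/data.ttl",
    "vocabularies": ["schema.org", "DCAT"],
    "license": "CC-BY-4.0",
}
```

A fully automated assessor would run many such indicator tests per principle and report them individually, which is what lets developers improve FAIRness incrementally rather than all at once.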
A talk prepared for the workshop “Working on data stewardship? Meet your peers!”
Date: 3 October 2017
https://www.surf.nl/agenda/2017/10/workshop-working-on-data-stewardship-meet-your-peers/index.html
Blockchain in Health Research Overview - Manion - Sean Manion PhD
Blockchain in Health Research 2019 was the 2nd annual summit, hosted at Georgetown University on 27 Apr 2019 by Sean Manion (Science Distributed) and Gilles Hilary (Georgetown University).
Blockchain and Patient-Centered Outcomes Measures - Goldwater - Sean Manion PhD
Blockchain in Health Research 2019 was the 2nd annual summit, hosted at Georgetown University on 27 Apr 2019 by Sean Manion (Science Distributed) and Gilles Hilary (Georgetown University).
To obtain relevant information, Big Data must be processed with advanced collection and analysis tools based on predetermined algorithms. These algorithms must also account for aspects that are invisible to direct perception. The Big Data problem is multi-layered. A distributed parallel architecture spreads data across multiple servers (parallel execution environments), dramatically improving data processing speeds. Big Data provides an infrastructure that makes the uncertainties, performance, and availability of components visible.
DOI: 10.13140/RG.2.2.12784.00004
Clinical Data Models - The Hyve - Bio IT World April 2019 - Kees van Bochove
Population genetics and genomics are an emerging area for the application of machine learning methods in healthcare and the biomedical sciences. Currently, several large genomics initiatives, such as Genomics England, UK Biobank, the All of Us Project, and Europe's 1 Million Genomes Initiative, are in the process of making both clinical and genomics data from large numbers of patients available to benefit biomedical research. However, a key challenge in these initiatives is standardizing the clinical and outcomes data in such a way that machine learning methods can be effectively trained to discover useful medical and scientific insights. In this talk, we will look at what data is available at scale and review examples of the application of common data and evidence models such as OMOP, FHIR, and GA4GH, based on projects in which The Hyve has worked with some of these initiatives to harmonize their clinical, genomics, imaging, and wearables data and make it FAIR.
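The harmonization step described above can be sketched as a tiny ETL mapping in Python. The source field names, the concept lookup table, and the target shape below are invented for illustration only; a real OMOP-style ETL maps local codes to standard concept identifiers drawn from vocabularies such as SNOMED CT.

```python
# Toy sketch of clinical-data harmonization: mapping a site-specific
# patient record onto a simplified common-data-model row.
# All field names and codes here are hypothetical.

# Hypothetical local-code -> standard-concept lookup
CONCEPT_MAP = {"M": "gender:male", "F": "gender:female"}

def to_common_model(raw: dict) -> dict:
    """Harmonize one source record into a shared, analysis-ready shape."""
    return {
        "person_id": raw["patient_number"],                         # site-local id
        "gender_concept": CONCEPT_MAP.get(raw.get("sex"), "gender:unknown"),
        "year_of_birth": int(raw["dob"][:4]),                       # keep only year
    }

row = to_common_model({"patient_number": 1042, "sex": "F", "dob": "1985-06-01"})
```

The payoff of this kind of mapping is exactly the one the abstract names: once every site emits the same shape with shared concept codes, a single machine learning pipeline can train across all of them.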
Accelerating Biomedical Research with the Emerging Internet of FAIR Data and ... - Michel Dumontier
With its focus on improving the health and well-being of people, biomedicine has always been a fertile, if challenging, domain for computational discovery science. Indeed, millions of scientific articles, thousands of databases, and hundreds of ontologies offer exciting opportunities to reuse our collective knowledge, were we not stymied by incompatible formats, overlapping and incomplete vocabularies, unclear licensing, and heterogeneous access points. In this talk, I will discuss our work to create computational standards, platforms, and methods to wrangle knowledge into simple but effective representations based on semantic web technologies that are maximally FAIR - Findable, Accessible, Interoperable, and Reusable - and to further use these for biomedical knowledge discovery. But only with additional crucial developments will this emerging Internet of FAIR data and services enable automated scientific discovery on a global scale.
Impact of big data congestion in IT: An adaptive knowledge-based Bayesian network - IJECEIAES
Recent progress in real-time systems is accelerating across information technology, where such systems are becoming important in virtually every innovative field. Different IT applications simultaneously produce enormous amounts of information that must be handled. In this paper, a novel adaptive knowledge-based Bayesian network algorithm is proposed to deal with the impact of big data congestion in decision processing. A Bayesian network model is used to manage knowledge for the decision-making process. Learning a Bayesian network is typically framed as a search for an optimal structure that maximizes a statistically motivated score. Available tools generally conduct this search with standard strategies over an enormous search space, a time-consuming process that becomes critical once big data is involved. The proposed algorithm achieves faster computation of the optimal structure by restricting the search space through recursive calculation. The results demonstrate that the algorithm can handle big data with improved processing time and higher prediction rates.
From Data Platforms to Dataspaces: Enabling Data Ecosystems for Intelligent S... - Edward Curry
The Real-time Linked Dataspace (RLD) is an enabling platform for data management in intelligent systems within smart environments; it combines the pay-as-you-go paradigm of dataspaces, linked data, and knowledge graphs with entity-centric real-time query capabilities.
The RLD contains all the relevant information within a data ecosystem, including things, sensors, and data sources, and is responsible for managing the relationships among these participants.
It manages sources without presuming a pre-existing semantic integration among them, using specialised dataspace support services for loose administrative proximity and semantic integration in event and stream systems. Support services leverage approximate and best-effort techniques and operate under a five-star model for “pay-as-you-go” incremental data management.
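The incremental, pay-as-you-go idea can be illustrated with a short Python sketch. The tier names below are invented for illustration and are not the RLD's actual five-star levels; the point is only the mechanism, in which a source earns additional stars as additional management effort is invested, and never loses credit for the tiers already achieved.

```python
# Illustrative pay-as-you-go tiering in the spirit of a five-star model.
# Tier names are hypothetical, not the RLD's published levels.

TIERS = [
    "registered in the dataspace",       # 1 star
    "basic metadata described",          # 2 stars
    "queryable via a service",           # 3 stars
    "entities linked to other sources",  # 4 stars
    "fully semantically integrated",     # 5 stars
]

def star_rating(capabilities: set) -> int:
    """Stars accrue incrementally: a tier counts only if all lower tiers hold."""
    stars = 0
    for tier in TIERS:
        if tier not in capabilities:
            break
        stars += 1
    return stars
```

Under this scheme a newly registered source is immediately usable at one star, and integration cost is paid gradually, per source, as its data proves valuable.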
Open science and medical evidence generation - Kees van Bochove - The Hyve - Kees van Bochove
A presentation about open science, the FAIR principles, and medical evidence generation, with the OHDSI COVID-19 study-a-thon as an example. I've used variations on this deck in a couple of classroom and online courses for PhD and master's students in early 2020.
Key Technology Trends for Big Data in Europe - Edward Curry
In this presentation we will discuss some of the results of the BIG project including analysis of foundational Big Data research technologies, technology and strategy roadmaps to enable business to understand the potential of Big Data technologies across different sectors, and the necessary collaboration and dissemination infrastructure to link technology suppliers, integrators and leading user organizations.
Edward Curry is leading the Technical Working Group of the BIG Project with over 30 committed experts along the big data value chain (Acquisition, Analysis, Curation, Storage, Usage). With the help of the other technical leads, he will elaborate on the key technology trends identified in the BIG Project and how they bring data-driven value to industrial sectors.
Big Data: Beyond the hype, Delivering value - Edward Curry
Big Data: Beyond the hype, Delivering value explains Big Data technology, and how it is transforming industry and society, to members of the IDEAL-IST project.
IDEAL-IST is an international ICT (Information and Communication Technologies) network with more than 65 ICT national partners from EU and non-EU countries. It assists ICT companies and research organizations worldwide wishing to find project partners for participation in the Horizon 2020 program of the European Commission.
Apache Spark + AI Helps and FDA Protects the Nation with Jonathan Chu and Kun... - Databricks
The FDA Office of Regulatory Affairs (ORA) manages the process whereby all products imported into the United States are screened by electronic systems and human inspections; see https://www.fda.gov/ForIndustry/ImportProgram/.
About 40 million products are monitored annually, resulting in 6 billion data records that need to be processed every night. Booz Allen built an Apache Spark system to analyze the FDA ORA data and to predict violations. The solution uses an enterprise-friendly SQL framework to expand from data aggregation to machine learning without heavy coding.
The system enables any enterprise DBA or analyst to easily access, filter, and transform data and to apply the latest machine learning models. These analysts are able to process 6 billion records from various databases and other sources every night without any prior experience with Apache Spark. This helped scale the Apache Spark solution, enabling data warehouse/RDBMS experts to run powerful analytics workloads without needing to know Scala or Python.
Accelerating biomedical discovery with an internet of FAIR data and services - ... - Michel Dumontier
With its focus on improving the health and well-being of people, biomedicine has always been a fertile, if challenging, domain for computational discovery science. Indeed, millions of scientific articles, thousands of databases, and hundreds of ontologies offer exciting opportunities to reuse our collective knowledge, were we not stymied by incompatible formats, overlapping and incomplete vocabularies, unclear licensing, and heterogeneous access points. In this talk, I will discuss our work to create computational standards, platforms, and methods to wrangle knowledge into simple but effective representations based on semantic web technologies that are maximally FAIR - Findable, Accessible, Interoperable, and Reusable - and to further use these for biomedical knowledge discovery. But only with additional crucial developments will this emerging Internet of FAIR data and services, which is built on Semantic Web technologies, be well positioned to support automated scientific discovery on a global scale.
Developed with Forum for the Future, an international sustainability non-profit organization, and based on our own interviews and executive survey, Vision 2030: A connected future highlights the opportunities that experts and business leaders see for IoT, data and connectivity to create a sustainable future.
The report outlines a future vision for IoT-driven connectivity, highlights the barriers that must be overcome to realize this vision, and concludes with recommended next steps.
In this issue of TOP TEN we provide the reader with a wealth of information related to current and future uses of BIG DATA. The reader will gain insight into applications in the realms of education, health, construction, management, and marketing.
Data-Driven Discovery Science with FAIR Knowledge Graphs - Michel Dumontier
Data-Driven Discovery Science with FAIR Knowledge Graphs
Despite the existence of vast amounts of biomedical data, these remain difficult to find and to productively reuse in machine learning and other artificial intelligence technologies. In this talk, I will discuss the role of the FAIR Guiding Principles in making biomedical data AI-ready, and how representing these data as knowledge graphs not only enables powerful ontology-backed semantic queries, but can also be used to predict missing information and to check the quality of the knowledge collected.
The main idea of the talk is to introduce the FAIR principles (what they are and what they are not), and how their application with semantic web technologies (ontologies/linked data) creates improved possibilities for large scale data integration, answering sophisticated questions using automated reasoners, and predicting new relations/validating data using graph embeddings. The audience will gain insight into the state of the art in a carefully presented manner that introduces principles, approaches, and outcomes relevant to Health AI.
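As a minimal illustration of the ontology-backed semantic queries mentioned above, a knowledge graph can be held as a set of triples and queried with subclass inference. The class and patient names below are entirely hypothetical; this is a sketch of the idea, not any production system.

```python
# A toy knowledge graph as RDF-style (subject, predicate, object) triples.
TRIPLES = {
    ("Melanoma", "subClassOf", "Cancer"),
    ("Cancer", "subClassOf", "Disease"),
    ("Asthma", "subClassOf", "Disease"),
    ("patient42", "hasDiagnosis", "Melanoma"),
    ("patient17", "hasDiagnosis", "Asthma"),
}

def subclasses(cls):
    """All classes that are (transitively) subclasses of cls, including cls."""
    found = {cls}
    changed = True
    while changed:
        changed = False
        for s, p, o in TRIPLES:
            if p == "subClassOf" and o in found and s not in found:
                found.add(s)
                changed = True
    return found

def patients_with(disease_class):
    """Patients whose diagnosis falls anywhere under disease_class."""
    kinds = subclasses(disease_class)
    return {s for s, p, o in TRIPLES if p == "hasDiagnosis" and o in kinds}

print(patients_with("Disease"))  # both patients, found via subclass inference
print(patients_with("Cancer"))   # only the melanoma patient
```

Querying for "Disease" returns both patients even though neither diagnosis is asserted as a Disease directly; that is the benefit an ontology backbone brings to a plain triple store.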
Cloud-Based Services and their Security Evaluation in the Hospitals - ijtsrd
As technology continues to evolve, different institutions make use of the latest trends, and the health sector is no exception. As the cost of healthcare services increases, healthcare professionals are becoming scarce. Healthcare firms have also adopted the modern technology of cloud computing, whose advent has proved to be a viable idea in the information technology community. Rather than keeping a patient's records in a file at the hospital where he or she was treated, the records are stored in a cloud so that they can be shared among all health institutions and health professionals. Information is stored in a central location where it can be easily accessed, thus saving time and avoiding the repetition of writing the same records every time a patient is attended to in a different facility. However, there are problems with sharing such information on the cloud, since it is sensitive data. Ensuring the security, availability, and scalability of these sensitive data is a primary concern in the cloud computing environment. In this study, we propose a mathematical model for measuring the availability of data and machine nodes. We also present the current state of the art in this field, focusing on several shortcomings of present healthcare solutions and standards, and we further propose a system that encrypts data before it is sent to the cloud. The system is intended to be connected to the cloud in such a way that, before the client submits data to the cloud, the data pass through that system for encryption. The paper presents the steps to realize the proposed system, along with a sample encrypted and decrypted report.
Aishwarya Chauhan | Murugan R "Cloud-Based Services and their Security Evaluation in the Hospitals" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-6 | Issue-3 , April 2022, URL: https://www.ijtsrd.com/papers/ijtsrd49838.pdf Paper URL: https://www.ijtsrd.com/computer-science/distributed-computing/49838/cloudbased-services-and-their-security-evaluation-in-the-hospitals/aishwarya-chauhan
Healthcare in the Digital Age
by Asst. Prof. Polawat Witoolkollachit, MD
Presented at the 3rd Samitivej Sriracha Medical Symposium 2018, "CQI & Innovation in Healthcare 4.0"
The increased availability of biomedical data, particularly in the public domain, offers the opportunity to better understand human health and to develop effective therapeutics for a wide range of unmet medical needs. However, data scientists remain stymied by the fact that data remain hard to find and to productively reuse because data and their metadata i) are wholly inaccessible, ii) are in non-standard or incompatible representations, iii) do not conform to community standards, and iv) have unclear or highly restricted terms and conditions that preclude legitimate reuse. These limitations require a rethink of how data can be made machine- and AI-ready - the key motivation behind the FAIR Guiding Principles. Concurrently, while recent efforts have explored the use of deep learning to fuse disparate data into predictive models for a wide range of biomedical applications, these models often fail even when the correct answer is already known, and fail to explain individual predictions in terms that data scientists can appreciate. These limitations suggest that new methods to produce practical artificial intelligence are still needed.
In this talk, I will discuss our work in (1) building an integrative knowledge infrastructure to prepare FAIR and "AI-ready" data and services, along with (2) neurosymbolic AI methods to improve the quality of predictions and to generate plausible explanations. Attention is given to standards, platforms, and methods to wrangle knowledge into simple but effective semantic and latent representations, and to make these available through standards-compliant and discoverable interfaces that can be used in model building, validation, and explanation. Our work, and that of others in the field, creates a baseline for building trustworthy and easy-to-deploy AI models in biomedicine.
Bio
Dr. Michel Dumontier is the Distinguished Professor of Data Science at Maastricht University, founder and executive director of the Institute of Data Science, and co-founder of the FAIR (Findable, Accessible, Interoperable and Reusable) data principles. His research explores socio-technological approaches for responsible discovery science, which includes collaborative multi-modal knowledge graphs, privacy-preserving distributed data mining, and AI methods for drug discovery and personalized medicine. His work is supported through the Dutch National Research Agenda, the Netherlands Organisation for Scientific Research, Horizon Europe, the European Open Science Cloud, the US National Institutes of Health, and a Marie-Curie Innovative Training Network. He is the editor-in-chief for the journal Data Science and is internationally recognized for his contributions in bioinformatics, biomedical informatics, and semantic technologies including ontologies and linked data.
Knowledge graphs are an emerging paradigm to represent information, yet their discovery and reuse are hampered by insufficient or inadequate metadata. Here, the COST Action on Distributed Knowledge Graphs held a first workshop to develop a KG metadata schema. In this presentation, the progress and plans are discussed with the W3C Community Group on Knowledge Graph Construction.
The role of the FAIR Guiding Principles in a Learning Health System - Michel Dumontier
The learning health system (LHS) is a concept for a socio-technological system that continuously improves the delivery of health care by coupling biomedical research with practice- and evidence-based medicine. Key aspects of the LHS are collecting, integrating, and analyzing data from different sources. While the increased digitalisation of healthcare is creating new data sources, these remain hard to find and use, let alone to employ as part of intelligent systems for the benefit of patients, healthcare providers, and researchers. This talk will examine recent developments towards making key parts of the LHS, such as clinical practice guidelines, Findable, Accessible, Interoperable, and Reusable (FAIR).
The future of science and business - a UM Star Lecture - Michel Dumontier
I discuss how data science is affecting our way of life and how we at Maastricht University are preparing the next generation of leaders to address opportunities and challenges in a responsible manner.
Towards metrics to assess and encourage FAIRness - Michel Dumontier
With increased interest in the FAIR principles, there is a need to develop tools and approaches that can assess the FAIRness of a digital resource. This talk begins to explore some ideas in this space, and invites people to participate in a working group focused on the development, application, and evaluation of FAIR metrics.
A presentation to the New Year's Event for Maastricht University's Knowledge Engineering @ Work Program. https://www.maastrichtuniversity.nl/news/kework-first-10-students-academic-workstudy-track-graduate
Bio2RDF is an open-source project that offers a large and connected knowledge graph of Life Science Linked Data. Each dataset is expressed using its own vocabulary, thereby hindering the ability to integrate, search, query, and browse data across similar or identical types of data. With growth and content changes in source data, a manual approach to maintaining mappings has proven untenable. The aim of this work is to develop a (semi-)automated procedure to generate high-quality mappings between Bio2RDF and SIO using BioPortal ontologies. Our preliminary results demonstrate that our approach is promising in that it can find new mappings using a transitive closure between ontology mappings. Further development of the methodology, coupled with improvements in the ontology, will offer a better-integrated view of the Life Science Linked Data.
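The transitive-closure idea behind finding new mappings can be sketched in a few lines. The term identifiers below are made up for illustration and do not come from the actual Bio2RDF-SIO mapping tables.

```python
# If a Bio2RDF term maps to an intermediate ontology term, and that term maps
# onward to a SIO term, a new Bio2RDF -> SIO mapping can be proposed.

def transitive_closure(mappings):
    """Input: set of (source, target) equivalence mappings.
    Output: the closure, including mappings inferred by chaining."""
    closure = set(mappings)
    changed = True
    while changed:
        changed = False
        new = {(a, c) for (a, b) in closure for (b2, c) in closure
               if b == b2 and a != c and (a, c) not in closure}
        if new:
            closure |= new
            changed = True
    return closure

direct = {
    ("bio2rdf:gene", "obo:SO_0000704"),    # curated mapping (hypothetical)
    ("obo:SO_0000704", "sio:SIO_010035"),  # mapping from BioPortal (hypothetical)
}
inferred = transitive_closure(direct) - direct
print(inferred)  # {('bio2rdf:gene', 'sio:SIO_010035')}
```

The same loop scales to any number of intermediate ontologies; in practice one would also carry provenance for each inferred link so curators can review it.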
Ontology has its roots as a field of philosophical study that is focused on the nature of existence. However, today's ontology (aka knowledge graph) can incorporate computable descriptions that can bring insight in a wide set of compelling applications including more precise knowledge capture, semantic data integration, sophisticated query answering, and powerful association mining - thereby delivering key value for health care and the life sciences. In this webinar, I will introduce the idea of computable ontologies and describe how they can be used with automated reasoners to perform classification, to reveal inconsistencies, and to precisely answer questions. Participants will learn about the tools of the trade to design, find, and reuse ontologies. Finally, I will discuss applications of ontologies in the fields of diagnosis and drug discovery.
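To make the two reasoning tasks above concrete, here is a toy illustration of classification (computing inferred superclasses) and inconsistency detection (an individual asserted to belong to two disjoint classes). All class and individual names are hypothetical; a real ontology would use an OWL reasoner rather than this hand-rolled check.

```python
# Toy ontology: subclass axioms, a disjointness axiom, and type assertions.
SUBCLASS = {("ViralInfection", "Infection"), ("Infection", "Disease")}
DISJOINT = {frozenset({"Disease", "HealthyState"})}
TYPES = {("case1", "ViralInfection"), ("case1", "HealthyState")}

def superclasses(cls):
    """Classification: cls plus every transitively inferred superclass."""
    out = {cls}
    changed = True
    while changed:
        changed = False
        for s, o in SUBCLASS:
            if s in out and o not in out:
                out.add(o)
                changed = True
    return out

def inconsistent_individuals():
    """Individuals whose inferred types include a disjoint pair."""
    bad = set()
    for ind in {i for i, _ in TYPES}:
        inferred = set().union(*(superclasses(c) for i, c in TYPES if i == ind))
        for pair in DISJOINT:
            if pair <= inferred:
                bad.add(ind)
    return bad

print(inconsistent_individuals())  # {'case1'}
```

Here `case1` is flagged because ViralInfection classifies under Disease, which is declared disjoint with HealthyState, so asserting both types is contradictory.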
Bio:
Dr. Michel Dumontier is an Associate Professor of Medicine (Biomedical Informatics) at Stanford University. His research focuses on the development of methods to integrate, mine, and make sense of large, complex, and heterogeneous biological and biomedical data. His current research interests include (1) using genetic, proteomic, and phenotypic data to find new uses for existing drugs, (2) elucidating the mechanism of single and multi-drug side effects, and (3) finding and optimizing combination drug therapies. Dr. Dumontier is the Stanford University Advisory Committee Representative for the World Wide Web Consortium, the co-Chair for the W3C Semantic Web for Health Care and the Life Sciences Interest Group, a scientific advisor for the EBI-EMBL Chemistry Services Division, and the Scientific Director for Bio2RDF, an open source project to create Linked Data for the Life Sciences. He is also the founder and Editor-in-Chief of Data Science, a new IOS Press journal featuring open access, open review, and semantic publishing.
Building a Network of Interoperable and Independently Produced Linked and Ope... - Michel Dumontier
Over 15 years ago, Sir Tim Berners-Lee proclaimed the founding of an exciting new future involving intelligent agents operating over smarter data in order to perform complex tasks at the behest of their human controllers. At the heart of this vision lies an uneasy alliance between tedious formal knowledge representations and powerful analytics over big, but often messy, data. Bio2RDF, our decade-old open source project to create Linked Data for the life sciences, has woven emergent Semantic Web technologies such as ontologies and Linked Data to generate FAIR - Findable, Accessible, Interoperable, and Reusable - data in the form of billions of machine accessible statements for use in downstream biomedical discovery.
This revolution in data publication has been strengthened by action from global bioinformatics institutions such as the NCBI, NCBO, EBI, and DBCLS. Notably, NCBI's PubChem has successfully coupled large scale data integration with community-based standards to offer a remarkable biochemical knowledge resource amenable to data-hungry discovery tools. Yet, in the face of increasing pressure from researchers, funders, and publishers, will these approaches be sufficient for growing and maintaining a comprehensive knowledge graph that is inclusive of all biomedical research?
Model organisms such as budding yeast provide a common platform to interrogate and understand cellular and physiological processes. Knowledge about model organisms, whether generated during the course of scientific investigation or extracted from published articles, is made available by model organism databases (MODs) such as the Saccharomyces Genome Database (SGD) for powerful, data-driven bioinformatic analyses. Integrative platforms such as InterMine offer a standard platform for MOD data exploration and data mining. Yet, today’s bioinformatic analyses also require access to a significantly broader set of structured biomedical data, such as what can be found in the emerging network of Linked Open Data (LOD). If MOD data could be provisioned as FAIR (Findable, Accessible, Interoperable, and Reusable), then scientists could leverage a greater amount of interoperable data in knowledge discovery.
The goal of this proposal is to increase the utility of MOD data by implementing standards-compliant data access interfaces that interoperate with Linked Data. We will focus our efforts on developing interfaces for data access, data retrieval, and query answering for SGD. Our software will publish InterMine data as LOD that are semantically annotated with ontologies and can be retrieved using standardized formats (e.g. JSON-LD, Turtle). We will facilitate the exploration of MOD data for hypothesis testing by implementing efficient query answering using Linked Data Fragments, and by developing a set of graphical user interfaces to search for data of interest, explore connections, and answer questions that leverage the wider LOD network. Finally, we will develop a locally and cloud-deployable image to enable the rapid deployment of the proposed infrastructure. Our efforts to increase interoperability and ease of deployment for biomedical data repositories will increase research productivity and reduce costs associated with data integration and warehouse maintenance.
Making it Easier, Possibly Even Pleasant, to Author Rich Experimental Metadata - Michel Dumontier
Biomedical researchers will remain stymied in their ability to take full advantage of the Big Data revolution if they can never find the datasets that they need to analyze, if there is lack of clarity about what particular datasets contain, and if data are insufficiently described.
CEDAR, an NIH BD2K Center of Excellence, aims to develop methods and tools to vastly ease the burden of authoring good experimental metadata, and to maximally use this information to zero in on datasets of interest.
Semantic web technologies offer a potential mechanism for the representation and integration of thousands of biomedical databases. Many of these databases offer cross-references to other data sources, but these are generally incomplete and prone to error. In this paper, we conduct an empirical analysis of the link structure of life science Linked Data, obtained from the Bio2RDF project. Three different link graphs for datasets, entities and terms are characterized by degree, connectivity, and clustering metrics, and their correlation is measured as well. Furthermore, we utilize the symmetry and transitivity of entity links to build a benchmark and evaluate several popular entity matching approaches. Our findings indicate that the life science data network can help find hidden links, can be used to validate links, and may offer a mechanism to integrate a wider set of resources to support biomedical knowledge discovery.
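The paper's use of link symmetry can be illustrated on a toy cross-reference graph: a degree statistic, and a check that flags cross-references whose mirror link is absent as candidate hidden links. The identifiers below are invented for the sketch and are not drawn from the Bio2RDF link graphs themselves.

```python
# A toy directed cross-reference graph between dataset entities.
XREFS = {
    ("drugbank:DB00619", "kegg:D01441"),
    ("kegg:D01441", "drugbank:DB00619"),   # symmetric pair: consistent
    ("drugbank:DB00619", "chebi:45783"),   # reverse link missing
}

def out_degree(node):
    """Number of outgoing cross-references from node."""
    return sum(1 for s, _ in XREFS if s == node)

def missing_reverse_links():
    """Cross-references whose mirror is absent: candidate hidden links,
    exploiting the expected symmetry of entity cross-references."""
    return {(o, s) for s, o in XREFS if (o, s) not in XREFS}

print(out_degree("drugbank:DB00619"))  # 2
print(missing_reverse_links())         # {('chebi:45783', 'drugbank:DB00619')}
```

The same pattern, plus transitivity (A maps to B, B maps to C, so A should map to C), is what lets a link graph both validate existing cross-references and propose new ones.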
Making the most of phenotypes in ontology-based biomedical knowledge discovery - Michel Dumontier
A phenotype is an observable characteristic of an individual and typically pertains to its morphology, function, and behavior. Phenotypes, whether observed at the bench or the bedside, are increasingly being used to gain insight into the diagnosis, mechanism, and treatment of disease. A key aspect of these approaches involves comparing phenotypes that are defined in multiple terminologies, which often cater to altogether different organisms, such as mice and humans. In this seminar, I will discuss computational approaches for harmonizing and utilizing phenotypes for translational research. We will examine case studies involving the computation of semantic similarity, including the use of phenotypes to inform the clinical diagnosis of rare diseases, to identify human drug targets using mouse knock-out models, and to explore phenotype-based approaches for drug repositioning.
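A minimal sketch of the semantic-similarity computation mentioned above: two phenotype terms are compared by the overlap of their ancestor sets in a tiny, made-up ontology. Real pipelines use measures such as Resnik similarity over the full phenotype ontologies; Jaccard over ancestors is just the simplest stand-in.

```python
# Hypothetical mini-ontology: each term lists its direct parents.
PARENTS = {
    "abnormal gait": ["locomotion phenotype"],
    "ataxia": ["locomotion phenotype"],
    "locomotion phenotype": ["behavior phenotype"],
    "behavior phenotype": [],
}

def ancestors(term):
    """The term plus all of its transitive ancestors."""
    out = {term}
    stack = [term]
    while stack:
        for p in PARENTS.get(stack.pop(), []):
            if p not in out:
                out.add(p)
                stack.append(p)
    return out

def jaccard(t1, t2):
    """Similarity as overlap of ancestor sets (1.0 = identical lineage)."""
    a, b = ancestors(t1), ancestors(t2)
    return len(a & b) / len(a | b)

print(jaccard("abnormal gait", "ataxia"))  # 0.5: shared lineage, distinct leaves
```

Because the comparison runs over shared ancestors rather than term labels, a mouse phenotype and a human phenotype mapped into a common ontology can be compared directly, which is the crux of the cross-species use cases above.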
Access to consistent, high-quality metadata is critical to finding, understanding, and reusing scientific data. This document describes a consensus among participating stakeholders in the Health Care and the Life Sciences domain on the description of datasets using the Resource Description Framework (RDF). This specification meets key functional requirements, reuses existing vocabularies to the extent that it is possible, and addresses elements of data description, versioning, provenance, discovery, exchange, query, and retrieval.
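A dataset description of the kind standardized here might look roughly like the following Turtle fragment, which reuses DCAT and Dublin Core terms. The URIs are placeholders for illustration, not entries from the actual specification.

```turtle
# Illustrative fragment (hypothetical URIs) in the spirit of an RDF
# dataset description, reusing DCAT and Dublin Core vocabulary.
@prefix dct:  <http://purl.org/dc/terms/> .
@prefix dcat: <http://www.w3.org/ns/dcat#> .

<http://example.org/dataset/chem> a dcat:Dataset ;
    dct:title "Example chemistry dataset" ;
    dct:publisher <http://example.org/org/lab> ;
    dct:license <http://creativecommons.org/licenses/by/4.0/> ;
    dcat:distribution <http://example.org/dataset/chem.ttl> .
```

Elements such as versioning and provenance would be added with further properties on the same dataset resource, which is what makes the description queryable alongside the data it describes.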
With its focus on investigating the basis for the sustained existence of living systems, modern biology has always been a fertile, if not challenging, domain for formal knowledge representation and automated reasoning. With thousands of databases and hundreds of ontologies now available, there is a salient opportunity to integrate these for discovery. In this talk, I will discuss our efforts to build a rich foundational network of ontology-annotated linked data, develop methods to intelligently retrieve content of interest, uncover significant biological associations, and pursue new avenues for drug discovery. As the portfolio of Semantic Web technologies continues to mature in terms of functionality, scalability, and an understanding of how to maximize their value, researchers will be strategically poised to pursue increasingly sophisticated KR projects aimed at improving our overall understanding of human health and disease.
bio: Dr. Michel Dumontier is an Associate Professor of Medicine (Biomedical Informatics) at Stanford University. His research aims to find new treatments for rare and complex diseases. His research interests lie in the publication, integration, and discovery of scientific knowledge. Dr. Dumontier serves as a co-chair for the World Wide Web Consortium Semantic Web in Health Care and Life Sciences Interest Group (W3C HCLSIG) and is the Scientific Director for Bio2RDF, a widely used open-source project to create and provide linked data for the life sciences.
Despite the massive amount of biomedical literature, only a small amount is available in a form that is readily computable. The National Center for Biomedical Ontology (NCBO) is hosting the first hackathon to develop a comprehensive Network of BioThings (proteins, genes, pathways, mutations, drugs, diseases) extracted from scientific research articles and integrated with public biomedical data (see blog post http://goo.gl/i91ngK). During this hackathon, we will (1) identify motivating use cases, (2) define a shared, sustainable, multi-component infrastructure to build the NoB, and (3) implement common data representations, ontology-based programmatic interfaces, and develop cool applications. We will do this in an open, scalable, responsive manner so that it becomes a major asset for hackers and biomedical researchers worldwide.
CIKM2020 Keynote: Accelerating discovery science with an Internet of FAIR data and services
1. Accelerating Discovery Science
with an Internet of FAIR Data and Services
@micheldumontier::CIKM:2020-10-21
Michel Dumontier, Ph.D.
Distinguished Professor of Data Science
Director, Institute of Data Science
4.
A common rejection module (CRM) for acute rejection across multiple organs identifies novel
therapeutics for organ transplantation
Khatri et al. JEM. 210 (11): 2205
DOI: 10.1084/jem.20122709
Main Findings:
1. CRM of 11 overexpressed genes predicted future injury to a graft
2. Mice treated with existing drugs against specific CRM genes extended graft survival
3. Retrospective EHR data analysis supports treatment prediction
Key Observations:
1. Meta-analysis offers a more reliable estimate of the magnitude of the effect
2. Data can be used to generate and support/dispute new hypotheses
5. However, significant effort is still needed to find the right dataset(s), make sense of them, and use them for a new purpose
7.
Our ability to reproduce landmark studies is surprisingly low:
39% (39/100) in psychology [1]
21% (14/67) in pharmacology [2]
11% (6/53) in cancer [3]
unsatisfactory in machine learning [4]
[1] doi:10.1038/nature.2015.17433 [2] doi:10.1038/nrd3439-c1 [3] doi:10.1038/483531a [4] https://openreview.net/pdf?id=By4l2PbQ-
Most published research findings are false.
- John Ioannidis, Stanford University
PLoS Med 2005;2(8): e124.
11.
Poor quality (meta)data → Reproducibility Crisis → Translational Failure
Broken windows theory: visible signs of crime, anti-social behavior, and civil disorder create an environment that encourages more serious crimes.
Inadequate reusability theory: poor quality metadata and the inaccessibility of original research results make it less likely to reproduce original work, resulting in an ineffective translation of research into useful applications.
14. Rethinking Publishing Scientific Research
Data Science. 2017 1(1-2):139-154. DOI: 10.3233/DS-170010
http://www.tkuhn.org/pub/sempub/
15. De-centralized knowledge graphs
Kuhn T., Chichester C., Krauthammer M., Dumontier M. (2015) Publishing Without Publishers: A Decentralized Approach to Dissemination, Retrieval, and Archiving of Data. In: Arenas M. et al. (eds) The Semantic Web - ISWC 2015. ISWC 2015. Lecture Notes in Computer Science, vol 9366. Springer, Cham
16. We need a new social contract, supported by legal and technological infrastructure, to make digital resources available in a responsible manner
19. An international, bottom-up paradigm for the discovery and reuse of digital content for the machines that people use
21. FAIR in a nutshell
FAIR aims to enhance social and economic outcomes by facilitating the discovery and reuse of digital resources through key requirements:
– unique identifiers to distinguish and retrieve all forms of digital content and knowledge
– high quality meta(data) to enhance discovery of relevant digital resources
– use of common vocabularies to facilitate query and statistical analysis
– establishment of community standards to reduce the effort in data reuse
– detailed provenance to provide adequate context and to enable reproducibility
– registration in appropriate repositories to fulfill a promise to future content seekers
– simpler terms of use to clarify expectations and intensify innovation
– social and technological commitments to make data ready for intelligent applications
24. Why Should *you* Go FAIR?
• Makes it easier for you to use your own data for a new purpose
• Makes it easier for other people to find, use, and cite your data, and for them to understand what you expect in return
• Makes it easier/possible for people to verify your work
• Ensures that the data are available in the future, especially as you may not want the responsibility
• Satisfies the expectations around data management from your institution, funding agency, journal, and peers
25. Let’s build and use the Internet of FAIR data and services
26. FAIRification process
GO FAIR Fairification: https://www.go-fair.org/fair-principles/fairification-process/
FAIRplus FAIR cookbook: https://fairplus.github.io/cookbook-dev/intro.html
Utrecht FAIR: https://www.uu.nl/en/research/research-data-management/guides/how-to-make-your-data-fair
EC H2020 Guidelines: https://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-data-mgt_en.pdf
30. The Semantic Web is a portal to the web of knowledge
standards for publishing, sharing and querying facts, expert knowledge and services
a scalable approach for the discovery of independently constructed, collaboratively described, distributed knowledge (in principle)
35. • 30+ biomedical data sources
• 10B+ interlinked statements
• EBI, SIB, NCBI, DBCLS, NCBO, and many others
produce this content
chemicals/drugs/formulations
genomes/genes/proteins/domains
interactions, complexes & pathways
animal models and phenotypes
diseases, genetic markers, treatments
terminologies & publications
Alison Callahan, Jose Cruz-Toledo, Peter Ansell, Michel Dumontier:
Bio2RDF Release 2: Improved Coverage, Interoperability and
Provenance of Life Science Linked Data. ESWC 2013: 200-212
Linked Data for the Life Sciences
Bio2RDF is an open source project that uses semantic web
technologies to make it easier to reuse biomedical data
36. Query multiple databases on the biological web of data
Phenotypes of knock-out mouse models for the targets of a selected drug (Imatinib)
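This example query amounts to a join across sources: drug to targets, targets to knock-out models, models to phenotypes. A toy sketch in Python, with dicts standing in for the separate linked databases (all identifiers and values below are invented placeholders, not real Bio2RDF content):

```python
# Sketch of the federated query on the slide: phenotypes of knock-out
# mouse models for the targets of a selected drug. The three dicts stand
# in for separate linked-data sources; the values are invented.

drug_targets = {"imatinib": ["ABL1", "KIT", "PDGFRA"]}           # drug source
knockout_models = {"ABL1": "Abl1-/-", "KIT": "Kit-/-"}           # model source
model_phenotypes = {"Abl1-/-": ["osteoporosis"],                 # phenotype source
                    "Kit-/-": ["anemia", "pigmentation defect"]}

def phenotypes_for_drug(drug):
    """Join across the three sources: target -> knock-out model -> phenotypes."""
    results = {}
    for target in drug_targets.get(drug, []):
        model = knockout_models.get(target)
        if model:  # targets without a curated model are simply skipped
            results[target] = model_phenotypes.get(model, [])
    return results

print(phenotypes_for_drug("imatinib"))
# {'ABL1': ['osteoporosis'], 'KIT': ['anemia', 'pigmentation defect']}
```

On the actual web of data, each dict lookup becomes a SPARQL pattern against a different endpoint, which is exactly what shared identifiers and links make possible.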
37. Explore what we know, and formulate hypotheses about what we don't,
by exploring a probabilistic semantic knowledge graph,
and validate them against pipelines for drug discovery.
Finding melanoma drugs through a probabilistic knowledge graph.
PeerJ Computer Science. 2017. 3:e106 https://doi.org/10.7717/peerj-cs.106
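One simple way to score such hypotheses is to attach a confidence to each edge and score a multi-hop drug-to-disease path by the product of its edge confidences. This is a sketch of the general idea, not the cited paper's actual model, and every name and number below is invented:

```python
# Toy hypothesis ranking over a probabilistic knowledge graph: each edge
# carries a confidence, and a candidate path is scored by the product of
# its edge confidences. All entities and probabilities are made up.

edges = {
    ("drugA", "GENE1"): 0.9, ("GENE1", "melanoma"): 0.8,
    ("drugB", "GENE2"): 0.6, ("GENE2", "melanoma"): 0.95,
}

def path_score(path):
    """Multiply confidences along consecutive edges; missing edges score 0."""
    score = 1.0
    for a, b in zip(path, path[1:]):
        score *= edges.get((a, b), 0.0)
    return score

candidates = {
    "drugA": ["drugA", "GENE1", "melanoma"],   # 0.9 * 0.80 = 0.72
    "drugB": ["drugB", "GENE2", "melanoma"],   # 0.6 * 0.95 = 0.57
}
ranked = sorted(candidates, key=lambda d: path_score(candidates[d]), reverse=True)
print(ranked)  # ['drugA', 'drugB']
```

Ranked candidates like these are the hypotheses that then get validated against established drug-discovery pipelines.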
38. Reproduce original research
Original study: AUC 0.91 across all therapeutic indications. Scripts not available; feature tables available.
Reproduction result: AUC 0.83 … doesn't match! (but now you can see exactly what we did)
Towards FAIR protocols and workflows: the OpenPREDICT use case. 2020. PeerJ Computer Science 6:e281
https://doi.org/10.7717/peerj-cs.281
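The AUC values being compared are areas under the ROC curve. As a toy illustration of the metric itself (a rank-based formulation equivalent to the Mann–Whitney U statistic; the labels and scores below are invented, and tie handling is omitted for brevity):

```python
# Minimal ROC AUC: the probability that a randomly chosen positive
# example is scored above a randomly chosen negative example.

def roc_auc(labels, scores):
    """Rank-based AUC; assumes no tied scores (average-rank tie handling omitted)."""
    pairs = sorted(zip(scores, labels))                 # ascending by score
    pos_ranks = [i for i, (_, y) in enumerate(pairs, start=1) if y == 1]
    n_pos = len(pos_ranks)
    n_neg = len(pairs) - n_pos
    # Mann-Whitney U from the rank sum of the positives
    return (sum(pos_ranks) - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

labels = [1, 0, 1, 1, 0, 0]
scores = [0.9, 0.3, 0.8, 0.4, 0.55, 0.2]
print(round(roc_auc(labels, scores), 2))  # 0.89
```

A gap like 0.91 vs. 0.83 can come from differences in data splits, features, or evaluation code, which is precisely why FAIR protocols and workflows matter for pinpointing the cause.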
40. Mine distributed, access-restricted FAIR datasets
in a privacy-preserving manner
Maastricht Study + MUMC CBS
The goal is to learn high-confidence determinants of health in a privacy-preserving manner
over vertically partitioned data from the Maastricht Study and Statistics Netherlands.
The data are made available through FAIR data stations that provide access to
allowable subsets of data to authorized users of approved algorithms.
Establish a new social, legal, ethical and technological infrastructure for discovery
science in and across health and non-health settings, including scalable governance
and flexible consent to underpin the responsible use of Big Data.
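One building block often used for this kind of privacy-preserving computation is additive secret sharing: each party splits its private value into random shares so that an aggregator learns only the total, never any individual value. The sketch below is a toy illustration of that primitive, not the project's actual infrastructure (station names and values are invented):

```python
# Toy additive secret sharing: a value is split into random shares that
# sum back to it; only aggregated partial sums are ever published.

import random

def make_shares(value, n_parties):
    """Split an integer into n_parties random additive shares."""
    shares = [random.randint(-10**6, 10**6) for _ in range(n_parties - 1)]
    shares.append(value - sum(shares))  # final share makes the sum exact
    return shares

private_values = {"station_A": 42, "station_B": 17, "station_C": 8}

# Each station distributes one share to every station; each station sums
# the shares it receives, and only those partial sums are revealed.
all_shares = [make_shares(v, 3) for v in private_values.values()]
partial_sums = [sum(col) for col in zip(*all_shares)]
print(sum(partial_sums))  # 67 == 42 + 17 + 8, with no single value revealed
```

Real FAIR data stations combine such cryptographic techniques with governance, consent, and algorithm approval, which is the sociotechnical part of the infrastructure.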
41. FAIR data and services
to accelerate discovery science
42. Summary
FAIR represents a global initiative to enhance the discovery and reuse of all kinds of
digital resources. It is a work in progress and it needs you!
FAIR requires new social, legal, ethical, scientific and technological infrastructure:
– How does your research group or community make their data/findings FAIR?
– What support does your organization provide you?
– Are you making use of all the data and findings that you could?
– What is responsible data science and artificial intelligence?
Semantics, coupled with AI technologies, may enable humans, aided by intelligent
machine agents, to exploit the Internet of FAIR data and services, and hence to
accelerate discovery in biomedicine and in other disciplines.
43. Acknowledgements
Dumontier Lab (Maastricht University, Stanford University, Carleton University)
MU: Seun Adekunle, Thales Bertaglia, Remzi Celebi, Yenisel Calana, Ricardo De Miranda Azevedo, Vincent Emonet, Lars Jacobs, Andreea Grigoriu,
Carlos Guerrero, Tim Hendriks, Massimiliano Grassi, Andine Havelange, Pedro Hernandez Serrano, Vikas Jaiman, Parveen Kumar, Lianne Ippel,
Alexander Malic, Helder Monteiro, Stefan Meier, Kody Moodley, Stuti Nayak, Hercules Panoutsopoulos, Linda Rieswijk, Carola Roubin, Nadine
Rouleaux, Claudia van open, Chang Sun, Johan van Soest, Binosha Weerarathna, Turgay Saba, Weiwei Wang, Jinzhou Yang, Amrapali Zaveri, Leto Peel,
Rohan Nanda, Visara Urovi, Andre Dekker, David Townend, Gijs van Dijck, Christopher Brewster
SU: Sandeep Ayyar, Remzi Celebi, Shima Dastgheib, Maulik Kamdar, David Odgers, Maryam Panahiazar, Amrapali Zaveri
CU: Alison Callahan, Jose Cruz-Toledo, Natalia Villanueva-Rosales
44. michel.dumontier@maastrichtuniversity.nl
Website: http://maastrichtuniversity.nl/ids
The mission of the Institute of Data Science at Maastricht University is to foster a
collaborative environment for multi-disciplinary data science research,
interdisciplinary training, and data-driven innovation.
We tackle key scientific, technical, social, legal, and ethical issues, advancing our
understanding across a variety of disciplines and strengthening our communities in the
face of these developments.
Editor's Notes
Abstract
Using meta-analysis of eight independent transplant datasets (236 graft biopsy samples) from four organs, we identified a common rejection module (CRM) consisting of 11 genes that were significantly overexpressed in acute rejection (AR) across all transplanted organs. The CRM genes could diagnose AR with high specificity and sensitivity in three additional independent cohorts (794 samples). In another two independent cohorts (151 renal transplant biopsies), the CRM genes correlated with the extent of graft injury and predicted future injury to a graft using protocol biopsies. Inferred drug mechanisms from the literature suggested that two FDA-approved drugs (atorvastatin and dasatinib), approved for nontransplant indications, could regulate specific CRM genes and reduce the number of graft-infiltrating cells during AR. We treated mice with HLA-mismatched mouse cardiac transplant with atorvastatin and dasatinib and showed reduction of the CRM genes, significant reduction of graft-infiltrating cells, and extended graft survival. We further validated the beneficial effect of atorvastatin on graft survival by retrospective analysis of electronic medical records of a single-center cohort of 2,515 renal transplant patients followed for up to 22 yr. In conclusion, we identified a CRM in transplantation that provides new opportunities for diagnosis, drug repositioning, and rational drug design.
Cost-benefit analysis for FAIR research data
https://op.europa.eu/en/publication-detail/-/publication/d375368c-1a0a-11e9-8d04-01aa75ed71a1/language-en/format-PDF/source-161880070
The Bio2RDF project transforms silos of life science data into a globally distributed network of linked data for biological knowledge discovery.