Introduction to FAIR principles in the context of computational biology models. Presented at a Workshop at the Basel Conference of Computational Biology. Grants: European Commission: EOSCsecretariat.eu (831644)
This talk was part of the 2020 Disease Map Modeling Community meeting, covering the steps towards publishing reproducible simulation studies (based on a reused model). Links to different COMBINE guidelines, tutorials and efforts. Grants: European Commission: EOSCsecretariat.eu (831644)
Slides from the presentation at IDAMO 2016, Rostock. May 2016.
Most scientific discoveries build on previous findings. A lack of transparency and openness has led to what many consider the "reproducibility crisis" in systems biology and systems medicine. The crisis arose from missing standards and from inadequate support for standards in software tools. As a consequence, numerous results in both low- and high-profile publications cannot be reproduced.
In my presentation, I summarise key challenges of reproducibility in systems biology and systems medicine, and I demonstrate available solutions to the related problems.
Making Data FAIR (Findable, Accessible, Interoperable, Reusable) – Tom Plasterer
What to do About FAIR…
In the experience of most pharma professionals, FAIR remains fairly abstract, bordering on inconclusive. This session will outline specific case studies – real problems with real data – and address real opportunities and concerns.
Why making data Findable, Accessible, Interoperable and Reusable is important.
Talk presented at the Data Driven Drug Development (D4) conference on March 20th, 2019.
Introduction to the hands-on session on "Standards and tools for model management" at ICSB 2015.
The focus is on COMBINE standards and on tools for search, version control and archiving. The management platform used is SEEK.
COMBINE 2019, EU-STANDS4PM, Heidelberg, Germany 18 July 2019
FAIR: Findable, Accessible, Interoperable, Reusable. The “FAIR Principles” for research data, software, computational workflows, scripts, or any other kind of Research Object one can think of, are now a mantra; a method; a meme; a myth; a mystery. FAIR is about supporting and tracking the flow and availability of data across research organisations, and the portability and sustainability of processing methods, to enable transparent and reproducible results. All this is within the context of a bottom-up society of collaborating (or burdened?) scientists, a top-down collective of compliance-focused funders and policy makers, and an in-the-middle posse of e-infrastructure providers.
Making the FAIR principles a reality is tricky. They are aspirations not standards. They are multi-dimensional and dependent on context such as the sensitivity and availability of the data and methods. We already see a jungle of projects, initiatives and programmes wrestling with the challenges. FAIR efforts have particularly focused on the “last mile” – “FAIRifying” destination community archive repositories and measuring their “compliance” to FAIR metrics (or less controversially “indicators”). But what about FAIR at the first mile, at source and how do we help Alice and Bob with their (secure) data management? If we tackle the FAIR first and last mile, what about the FAIR middle? What about FAIR beyond just data – like exchanging and reusing pipelines for precision medicine?
Since 2008 the FAIRDOM collaboration [1] has worked on FAIR asset management and the development of a FAIR asset Commons for multi-partner research projects [2], initially in the Systems Biology field. Since 2016 we have been working with the BioCompute Object Partnership [3] on standardising computational records of HTS precision medicine pipelines.
So, using our FAIRDOM and BioCompute Object binoculars let’s go on a FAIR safari! Let’s peruse the ecosystem, observe the different herds, and reflect on where we are for FAIR personalised medicine.
References
[1] http://www.fair-dom.org
[2] http://www.fairdomhub.org
[3] http://www.biocomputeobject.org
Reproducible and citable data and models: an introduction – FAIRDOM
Prepared and presented by Carole Goble (University of Manchester), Wolfgang Mueller (HITS), Dagmar Waltemath (University of Rostock), at the Reproducible and Citable Data and Models Workshop, Warnemünde, Germany. September 14th - 16th 2015.
FAIR data and model management for systems biology – FAIRDOM
Written and presented by Carole Goble (University of Manchester) as part of Intelligent Systems for Molecular Biology (ISMB), Dublin. July 10th - 14th 2015.
FAIR Data and Model Management for Systems Biology (and SOPs too!) – Carole Goble
MultiScale Biology Network Springboard meeting, Nottingham, UK, 1 June 2015
FAIR Data and model management for Systems Biology
Over the past 5 years we have seen a change in expectations for the management of all the outcomes of research – that is, the “assets” of data, models, codes, SOPs and so forth. Don’t stop reading. Yes, data management isn’t likely to win anyone a Nobel prize. But publications should be supported and accompanied by data, methods, procedures, etc. to assure reproducibility of results. Funding agencies expect data (and increasingly software) management, retention and access plans as part of the proposal process for projects to be funded. Journals are raising their expectations of the availability of data and codes for pre- and post-publication. And the multi-component, multi-disciplinary nature of Systems Biology demands the interlinking and exchange of assets and the systematic recording of metadata for their interpretation.
Data and model management for the Systems Biology community is multi-faceted, including: the development and adoption of appropriate community standards (and the navigation of the standards maze); the sustaining of international public archives capable of servicing quantitative biology; and the development of the necessary tools and know-how for researchers within their own institutes, so that they can steward their assets in a sustainable, coherent and credited manner while minimising burden and maximising personal benefit.
The FAIRDOM (Findable, Accessible, Interoperable, Reusable Data, Operations and Models) Initiative has grown out of several efforts in European programmes (the SysMO and EraSysAPP ERA-Nets and the ISBE ESFRI) and national initiatives (de.NBI, the German Virtual Liver Network, SystemsX, UK SynBio centres). It aims to support Systems Biology researchers with data and model management, with an emphasis on standards smuggled in by stealth.
This talk will use the FAIRDOM Initiative to discuss the FAIR management of data, SOPs, and models for Sys Bio, highlighting the challenges multi-scale biology presents.
http://www.fair-dom.org
http://www.fairdomhub.org
http://www.seek4science.org
Improving the Management of Computational Models – Invited talk at the EBI – Martin Scharm
Improving the Management of Computational Models:
storage – retrieval & ranking – version control
More information and slides to download at http://sems.uni-rostock.de/2013/12/martin-visits-the-ebi/
Started in 2004 (under ASTM Committee E13.15), the Analytical Information Markup Language (AnIML) is an XML-based standard for capturing, sharing, viewing, and archiving analytical instrument data from any analytical technique.
This paper discusses the AnIML standard in terms of philosophy, structure, usage, and the resources available to work with the standard. Examples will be given for different techniques as well as strategies for migration of legacy data. Finally, the current status of the standard and time frame for promulgation through ASTM will be reported.
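As a rough illustration of the standard's XML shape, the sketch below builds a heavily simplified AnIML-style document in Python. The element names loosely approximate the AnIML core schema (AnIML, SampleSet, ExperimentStep, Series), but the attributes and values are invented for illustration, and the result is not a schema-valid AnIML instance.

```python
# Illustrative only: a simplified AnIML-style document built with the
# standard library. Element names approximate the AnIML core schema;
# sample names, techniques, and values are made up.
import xml.etree.ElementTree as ET

def build_animl_sketch():
    root = ET.Element("AnIML", version="0.90")
    # Samples under measurement
    sample_set = ET.SubElement(root, "SampleSet")
    ET.SubElement(sample_set, "Sample", name="caffeine-std", sampleID="S1")
    # One experiment step holding a result series
    step_set = ET.SubElement(root, "ExperimentStepSet")
    step = ET.SubElement(step_set, "ExperimentStep", name="UV-Vis scan")
    result = ET.SubElement(step, "Result", name="spectrum")
    series_set = ET.SubElement(result, "SeriesSet", name="absorbance", length="3")
    series = ET.SubElement(series_set, "Series", name="A", dependency="dependent")
    values = ET.SubElement(series, "IndividualValueSet")
    for v in (0.12, 0.34, 0.29):
        ET.SubElement(values, "F").text = str(v)
    return ET.tostring(root, encoding="unicode")

xml_text = build_animl_sketch()
```

The point of the sketch is the layered structure (samples, experiment steps, results, series) that lets the same container describe data from any analytical technique.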
Citing data in research articles: principles, implementation, challenges - an... – FAIRDOM
Prepared and presented by Jo McEntyre (EMBL-EBI) as part of the Reproducible and Citable Data and Models Workshop in Warnemünde, Germany. September 14th - 16th 2015.
Presented by Richard Kidd at "The Future Information Needs of Pharmaceutical & Medicinal Chemistry", Monday 28 November 2011, at The Linnean Society, Burlington House, London, run by the RSC CICAG group.
Written and presented by Carole Goble (University of Manchester) as part of the Reproducible and Citable Data and Models Workshop in Warnemünde, Germany. September 14th - 16th 2015.
How are we Faring with FAIR? (and what FAIR is not) – Carole Goble
Keynote presented at the workshop FAIRe Data Infrastructures, 15 October 2020
https://www.gmds.de/aktivitaeten/medizinische-informatik/projektgruppenseiten/faire-dateninfrastrukturen-fuer-die-biomedizinische-informatik/workshop-2020/
Remarkably, it was only in 2016 that the ‘FAIR Guiding Principles for scientific data management and stewardship’ appeared in Scientific Data. The paper was intended to launch a dialogue within the research and policy communities: to start a journey towards wider accessibility and reusability of data, and to prepare for automation-readiness by supporting findability, accessibility, interoperability and reusability for machines. Many of the authors (including myself) came from biomedical and associated communities. The paper succeeded in its aim, at least at the policy, enterprise and professional data infrastructure level. Whether FAIR has impacted the researcher at the bench or bedside is open to doubt. It certainly inspired a great deal of activity, many projects, a lot of positioning of interests, and raised awareness. COVID has injected impetus and urgency into the FAIR cause (good) and also highlighted its politicisation (not so good).
In this talk I’ll make some personal reflections on how we are faring with FAIR: as one of the original principles authors; as a participant in many current FAIR initiatives (particularly in the biomedical sector and for research objects other than data) and as a veteran of FAIR before we had the principles.
FAIR Computational Workflows
Computational workflows capture precise descriptions of the steps and data dependencies needed to carry out computational data pipelines, analysis and simulations in many areas of Science, including the Life Sciences. The use of computational workflows to manage these multi-step computational processes has accelerated in the past few years driven by the need for scalable data processing, the exchange of processing know-how, and the desire for more reproducible (or at least transparent) and quality assured processing methods. The SARS-CoV-2 pandemic has significantly highlighted the value of workflows.
This increased interest in workflows has been matched by the number of workflow management systems available to scientists (Galaxy, Snakemake, Nextflow and 270+ more) and the number of workflow services such as registries and monitors. There is also recognition that workflows are first-class, publishable Research Objects just as data are. They deserve their own FAIR (Findable, Accessible, Interoperable, Reusable) principles and services that cater for their dual roles as explicit method description and software method execution [1]. To promote long-term usability and uptake by the scientific community, workflows (as well as the tools that integrate them) should become FAIR+R(eproducible) and citable, so that authors’ credit is attributed fairly and accurately.
The work on improving the FAIRness of workflows has already started and a whole ecosystem of tools, guidelines and best practices has been under development to reduce the time needed to adapt, reuse and extend existing scientific workflows. An example is the EOSC-Life Cluster of 13 European Biomedical Research Infrastructures which is developing a FAIR Workflow Collaboratory based on the ELIXIR Research Infrastructure for Life Science Data Tools ecosystem. While there are many tools for addressing different aspects of FAIR workflows, many challenges remain for describing, annotating, and exposing scientific workflows so that they can be found, understood and reused by other scientists.
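As a concrete, hypothetical illustration of what registering a FAIR workflow can involve, the sketch below builds a minimal metadata record in Python. The field names loosely follow the Bioschemas ComputationalWorkflow profile used by registries in this ecosystem; all values (workflow name, URLs, DOI, creator) are invented placeholders, not a real registration.

```python
import json

# Hypothetical minimal registry record for a workflow; fields loosely
# follow the Bioschemas ComputationalWorkflow profile. All values are
# invented placeholders for illustration.
record = {
    "@type": "ComputationalWorkflow",
    "name": "variant-calling-pipeline",                       # Findable: searchable title
    "identifier": "https://doi.org/10.5072/zenodo.example",   # Findable: persistent ID (example DOI)
    "url": "https://example.org/workflows/variant-calling",   # Accessible: retrieval location
    "programmingLanguage": "Nextflow",                        # Interoperable: declared workflow language
    "license": "https://spdx.org/licenses/Apache-2.0",        # Reusable: explicit licence
    "creator": [{"name": "A. Researcher"}],                   # Reusable: credit and citation
}
serialized = json.dumps(record, indent=2)
```

Each field maps onto one of the FAIR letters, which is the sense in which a registry record makes a workflow findable, accessible, interoperable and reusable for both humans and machines.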
This keynote will explore the FAIR principles for computational workflows in the Life Sciences, using the EOSC-Life Workflow Collaboratory as an example.
[1] Carole Goble, Sarah Cohen-Boulakia, Stian Soiland-Reyes, Daniel Garijo, Yolanda Gil, Michael R. Crusoe, Kristian Peters, and Daniel Schober. FAIR Computational Workflows. Data Intelligence 2020; 2(1-2): 108-121. https://doi.org/10.1162/dint_a_00033
FAIR Data, Operations and Model management for Systems Biology and Systems Me... – Carole Goble
FAIR Data, Operations and Model management for Systems Biology and Systems Medicine Projects, given at the 1st Conference of the European Association of Systems Medicine, 26-28 October 2016, Berlin. The FAIRDOM project is described.
Presentation on the Chemical Analysis Metadata Platform (ChAMP) as a new project to characterize and organize metadata about chemical analysis methods. The project will develop an ontology, controlled vocabularies, and design rules.
Talk by Martin Scharm at the COMBINE meeting September 2013 in Paris.
Find more information and download the slides at http://sems.uni-rostock.de/2013/09/sems-at-the-combine-2013/
Reproducibility of model-based results: standards, infrastructure, and recogn... – FAIRDOM
Written and presented by Dagmar Waltemath (University of Rostock) as part of the Reproducible and Citable Data and Models Workshop in Warnemünde, Germany. September 14th - 16th 2015.
Presentation given at the NBT / ECCB 2020, presenting COMBINE standards. Also providing links to related projects, introducing open model repositories and giving some hints for creating reusable models.
German Conference on Bioinformatics 2021
https://gcb2021.de/
Model Management in Systems Biology: Challenges – Approaches – Solutions – Martin Scharm
I gave this talk as a webinar in the FAIRDOM webinar series 2016. The recordings of the webinar are available from http://fair-dom.org/knowledgehub/webinars-2/martin-scharm/
Summary
The Cytoscape Cyberinfrastructure (CI) extends the successful Cytoscape development and community model by enabling network biologists to contribute and leverage microservices deployable at scale. The CI solves many of Cytoscape’s limitations while also delivering novel and dynamic functionality to both Cytoscape and standalone workflows, thus further empowering the already vital network biology community.
Abstract
Cytoscape is an indispensable tool for network data analysis and visualization. One of Cytoscape’s greatest strengths is that it is powered by a vibrant array of developer-contributed apps. However, as network biologists’ requirements evolve, Cytoscape is challenged not only to keep pace, but to lead new and existing developers to create even greater value. Currently, multiscale and multifaceted networks push the memory limits of a Cytoscape workstation, while complex calculations such as Network Based Stratification and Network Based GWAS strain workstation processors. Increasingly, users demand support for collaborative projects, reproducible workflows, and interoperability with external tool chains. Finally, economic pressures favor solutions that promote code and algorithm reusability and evolvability.
In response, we have created the Cytoscape Cyberinfrastructure (CI), which is both an Internet-scale distributed system (based on Microservices [1]) and the network biology community it serves. Its mission is to enable and encourage network biologists to create and deploy high quality, innovative and scalable services focusing on network-based computation, collaboration and visualization.
Microservices can be written in any language, and are highly testable and evolvable. They can run on servers ranging from a single thread to a large cloud-based cluster. They can easily be reused in reproducible workflows or can serve as components in larger services. The CI links microservices via a lightweight, REST-based, aspect-oriented interchange protocol (called CX), which enables tailored data streams while supporting service innovation via evolvable standards. CI infrastructure services support user authentication, long-duration job execution, and a service repository that enables researchers to publish their services or discover services published by others. This model builds on the successful Cytoscape app community, which is based on similar mechanisms, though at the scale of individual workstations.
Prominent examples of microservices include NDEx [2] (a repository for biological networks), NodeWalker (which uses heat dispersion to identify the most relevant subnetworks containing a given set of genes), cyNetShare [3] (which visualizes a network in a browser) and Cytoscape itself (which can also call CI services). Interfaces are available for Python, IPython, R and Matlab. Future work includes adding clustering, analysis, layout, publishing and display microservices and interfaces to Galaxy and Taverna workflows.
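To illustrate the aspect-oriented idea behind CX, the Python sketch below serializes a toy network as a list of independent aspects that a service can consume selectively. The key names ("@id", "n", "s", "t", "i") are simplified approximations of the real CX format, and the network content is invented.

```python
import json

# Sketch of an aspect-oriented network interchange in the spirit of CX:
# the network is a JSON list of self-contained aspects (here, nodes and
# edges), so a consumer can stream only the aspects it needs.
# Key names are simplified from the real CX format.
def to_cx(nodes, edges):
    node_aspect = [{"@id": nid, "n": name} for nid, name in nodes]
    edge_aspect = [{"@id": eid, "s": src, "t": tgt, "i": kind}
                   for eid, (src, tgt, kind) in enumerate(edges, start=len(nodes))]
    return json.dumps([{"nodes": node_aspect}, {"edges": edge_aspect}])

stream = to_cx(nodes=[(0, "TP53"), (1, "MDM2")],
               edges=[(0, 1, "binds")])
```

The design point is that aspects are independent: a layout service can read and rewrite only a coordinates aspect while passing nodes and edges through untouched.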
Autonomic Wireless Sensor Networks: Approaches, Applications and ChallengesPET Computação
The main goal of this course is to introduce the audience to concepts of wireless sensor networks (WSN), communication protocols for WSNs, and autonomic computing. In addition, applications focused on environmental monitoring, precision agriculture, security and defense will also be presented.
Saint: A Lightweight Model Annotation and Data Integration ToolAllyson Lister
A talk given by Allyson Lister at BioSysBio (http://conferences.theiet.org/biosysbio/) in March 2009. Describes Saint, a lightweight model annotation and data integration tool. You can find out more at http://saint-annotate.sourceforge.net. CellML support is coming soon.
Presentation on how to enable model reuse in systems biology. Presented as part of the series "Führende Köpfe in der IT - Wissenschaftlerinnen im Dialog" (ZB Med, Bonn, Germany)
These are the slides from COMBINE 2015. In this talk, I presented the different approaches we take to determine the similarity between simulation models encoded in SBML or CellML, namely: Information Retrieval-based ranked model retrieval; annotation-based feature extraction for sets of models; and structure-based similarity search and clustering of model sets.
Some slides put together on analogies between biosamples and model samples. Prepared for the Biosamples workshop at The University of Manchester, 17th June 2015.
Talk in the research seminar of the Systems Biology group at the University of Rostock. The goal was to introduce the two new projects running in SEMS from summer 2015: The de.NBI-SYSBIO German Network for Bioinformatics infrastructure (focus: systems biology data management) and SBGN-ED (support and further development of SBGN-ED and libSBGN).
Ron Henkel's presentation of our Ranked Retrieval approach; 2012 PALs meeting of the Sysmo-SEEK project in Heidelberg, Germany. 28th-30th of November 2012.
A presentation on annotations for computational biological models. Second part is on SED-ML, a format for the storage of simulation experiment descriptions.
ISI 2024: Application Form (Extended), Exam Date (Out), EligibilitySciAstra
The Indian Statistical Institute (ISI) has extended its application deadline for 2024 admissions to April 2. Known for its excellence in statistics and related fields, ISI offers a range of programs from Bachelor's to Junior Research Fellowships. The admission test is scheduled for May 12, 2024. Eligibility varies by program, generally requiring a background in Mathematics and English for undergraduate courses and specific degrees for postgraduate and research positions. Application fees are ₹1500 for male general category applicants and ₹1000 for females. Applications are open to Indian and OCI candidates.
Thematic appreciation test is a psychological assessment tool used to measure an individual's appreciation and understanding of specific themes or topics. This test helps to evaluate an individual's ability to connect different ideas and concepts within a given theme, as well as their overall comprehension and interpretation skills. The results of the test can provide valuable insights into an individual's cognitive abilities, creativity, and critical thinking skills.
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a...Ana Luísa Pinho
Functional Magnetic Resonance Imaging (fMRI) provides means to characterize brain activations in response to behavior. However, cognitive neuroscience has been limited to group-level effects referring to the performance of specific tasks. To obtain the functional profile of elementary cognitive mechanisms, the combination of brain responses to many tasks is required. Yet, to date, both structural atlases and parcellation-based activations do not fully account for cognitive function and still present several limitations. Further, they do not adapt overall to individual characteristics. In this talk, I will give an account of deep-behavioral phenotyping strategies, namely data-driven methods in large task-fMRI datasets, to optimize functional brain-data collection and improve inference of effects of interest related to mental processes. Key to this approach is the employment of fast multi-functional paradigms rich in features that can be well parametrized and, consequently, facilitate the creation of psycho-physiological constructs to be modelled with imaging data. Particular emphasis will be given to music stimuli when studying high-order cognitive mechanisms, due to their ecological nature and their capacity to elicit complex behavior composed of discrete entities. I will also discuss how deep-behavioral phenotyping and individualized models applied to neuroimaging data can better account for the subject-specific organization of domain-general cognitive systems in the human brain. Finally, the accumulation of functional brain signatures brings the possibility to clarify relationships among tasks and create a univocal link between brain systems and mental functions through: (1) the development of ontologies proposing an organization of cognitive processes; and (2) brain-network taxonomies describing functional specialization.
To this end, tools to improve commensurability in cognitive science are necessary, such as public repositories, ontology-based platforms and automated meta-analysis tools. I will thus discuss some brain-atlasing resources currently under development, and their applicability in cognitive as well as clinical neuroscience.
Nutraceutical market, scope and growth: Herbal drug technologyLokesh Patil
As consumer awareness of health and wellness rises, the nutraceutical market, which includes goods like functional foods, drinks, and dietary supplements that provide health advantages beyond basic nutrition, is growing significantly. As healthcare expenses rise, the population ages, and people increasingly want natural and preventive health solutions, this industry is expanding quickly. Product formulation innovations and the use of cutting-edge technology for customized nutrition further drive market expansion. With its worldwide reach, the nutraceutical industry is expected to keep growing and to provide significant opportunities for research and investment in a number of categories, including vitamins, minerals, probiotics, and herbal supplements.
ESR spectroscopy in liquid food and beverages.pptxPRIYANKA PATEL
With an increasing population, people need to rely on packaged foodstuffs. Packaging of food materials requires the preservation of food. There are various methods for treating food to preserve it, and irradiation treatment is one of them. It is among the most common and most harmless methods of food preservation, as it does not alter the essential micronutrients of food materials. Although irradiated food does not harm human health, quality assessment of food is still required to provide consumers with the necessary information about the food. ESR spectroscopy is a sophisticated way to investigate the quality of food and the free radicals induced during its processing. The ESR spin-trapping technique is useful for detecting highly unstable radicals in food. The assessment of the antioxidant capability of liquid food and beverages is mainly performed by the spin-trapping technique.
The use of Nauplii and metanauplii artemia in aquaculture (brine shrimp).pptxMAGOTI ERNEST
Although Artemia has been known to man for centuries, its use as a food for the culture of larval organisms apparently began only in the 1930s, when several investigators found that it made an excellent food for newly hatched fish larvae (Litvinenko et al., 2023). As aquaculture developed in the 1960s and ‘70s, the use of Artemia also became more widespread, due both to its convenience and to its nutritional value for larval organisms (Arenas-Pardo et al., 2024). The fact that Artemia dormant cysts can be stored for long periods in cans, and then used as an off-the-shelf food requiring only 24 h of incubation, makes them the most convenient, least labor-intensive live food available for aquaculture (Sorgeloos & Roubach, 2021). The nutritional value of Artemia, especially for marine organisms, is not constant, but varies both geographically and temporally. During the last decade, however, both the causes of Artemia nutritional variability and methods to improve poor-quality Artemia have been identified (Loufi et al., 2024).
Brine shrimp (Artemia spp.) are used in marine aquaculture worldwide. Annually, more than 2,000 metric tons of dry cysts are used for cultivation of fish, crustacean, and shellfish larva. Brine shrimp are important to aquaculture because newly hatched brine shrimp nauplii (larvae) provide a food source for many fish fry (Mozanzadeh et al., 2021). Culture and harvesting of brine shrimp eggs represents another aspect of the aquaculture industry. Nauplii and metanauplii of Artemia, commonly known as brine shrimp, play a crucial role in aquaculture due to their nutritional value and suitability as live feed for many aquatic species, particularly in larval stages (Sorgeloos & Roubach, 2021).
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt...Sérgio Sacani
Since volcanic activity was first discovered on Io from Voyager images in 1979, changes on Io’s surface have been monitored from both spacecraft and ground-based telescopes. Here, we present the highest spatial resolution images of Io ever obtained from a ground-based telescope. These images, acquired by the SHARK-VIS instrument on the Large Binocular Telescope, show evidence of a major resurfacing event on Io’s trailing hemisphere. When compared to the most recent spacecraft images, the SHARK-VIS images show that a plume deposit from a powerful eruption at Pillan Patera has covered part of the long-lived Pele plume deposit. Although this type of resurfacing event may be common on Io, few have been detected due to the rarity of spacecraft visits and the previously low spatial resolution available from Earth-based telescopes. The SHARK-VIS instrument ushers in a new era of high-resolution imaging of Io’s surface using adaptive optics at visible wavelengths.
Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige...University of Maribor
Slides from talk:
Aleš Zamuda: Remote Sensing and Computational, Evolutionary, Supercomputing, and Intelligent Systems.
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Inter-Society Networking Panel GRSS/MTT-S/CIS Panel Session: Promoting Connection and Cooperation
https://www.etran.rs/2024/en/home-english/
Toxic effects of heavy metals : Lead and Arsenicsanjana502982
Heavy metals are naturally occurring metallic chemical elements that have relatively high density and are toxic even at low concentrations. All toxic metals are termed heavy metals irrespective of their atomic mass and density, e.g. arsenic, lead, mercury, cadmium, thallium, chromium, etc.
DERIVATION OF MODIFIED BERNOULLI EQUATION WITH VISCOUS EFFECTS AND TERMINAL V...Wasswaderrick3
In this book, we use conservation of energy techniques on a fluid element to derive the Modified Bernoulli equation of flow with viscous or friction effects. We derive the general equation of flow/velocity, and from this we derive the Poiseuille flow equation, the transition flow equation and the turbulent flow equation. In situations where there are no viscous effects, the equation reduces to the Bernoulli equation. From experimental results, we are able to include other terms in the Bernoulli equation. We also look at cases where pressure gradients exist. We use the Modified Bernoulli equation to derive equations of flow rate for pipes of different cross-sectional areas connected together. We also extend our techniques of energy conservation to a sphere falling in a viscous medium under the effect of gravity. We demonstrate Stokes' equation of terminal velocity and the turbulent flow equation. We look at a way of calculating the time taken for a body to fall in a viscous medium. We also look at the general equation of terminal velocity.
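For orientation: the classical energy (extended Bernoulli) equation with a head-loss term, which the book's Modified Bernoulli equation presumably generalizes across laminar, transition and turbulent regimes, reads as follows. This is a standard textbook form, not taken from the book itself.

```latex
% Extended Bernoulli (energy) equation between stations 1 and 2,
% with h_f the head loss due to viscous friction:
\frac{p_1}{\rho g} + \frac{v_1^2}{2g} + z_1
  = \frac{p_2}{\rho g} + \frac{v_2^2}{2g} + z_2 + h_f
% With h_f = 0 this reduces to the classical Bernoulli equation.
```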
The ability to recreate computational results with minimal effort and actionable metrics provides a solid foundation for scientific research and software development. When people can replicate an analysis at the touch of a button using open-source software, open data, and methods to assess and compare proposals, it significantly eases verification of results, engagement with a diverse range of contributors, and progress. However, we have yet to fully achieve this; there are still many sociotechnical frictions.
Inspired by David Donoho's vision, this talk aims to revisit the three crucial pillars of frictionless reproducibility (data sharing, code sharing, and competitive challenges) with the perspective of deep software variability.
Our observation is that multiple layers — hardware, operating systems, third-party libraries, software versions, input data, compile-time options, and parameters — are subject to variability that exacerbates frictions but is also essential for achieving robust, generalizable results and fostering innovation. I will first review the literature, providing evidence of how the complex variability interactions across these layers affect qualitative and quantitative software properties, thereby complicating the reproduction and replication of scientific studies in various fields.
I will then present some software engineering and AI techniques that can support the strategic exploration of variability spaces. These include the use of abstractions and models (e.g., feature models), sampling strategies (e.g., uniform, random), cost-effective measurements (e.g., incremental build of software configurations), and dimensionality reduction methods (e.g., transfer learning, feature selection, software debloating).
I will finally argue that deep variability is both the problem and the solution for frictionless reproducibility, calling on the software science community to develop new methods and tools to manage variability and foster reproducibility in software systems.
Invited talk at the Journées Nationales du GDR GPL 2024
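The sampling strategies mentioned above can be illustrated with a toy configuration space. The option names and the measured property are invented for this sketch; a real study would measure, for example, binary size or runtime per configuration.

```python
import random

# Boolean compile-time options spanning a toy configuration space
# (hypothetical names, for illustration only).
OPTIONS = ["debug", "optimize", "static_link", "lto"]

def sample_configurations(n, seed=0):
    """Uniform random sampling of n configurations."""
    rng = random.Random(seed)
    return [{opt: rng.choice([True, False]) for opt in OPTIONS}
            for _ in range(n)]

def measure(config):
    """Stand-in for a real measurement (binary size, runtime, ...)."""
    return 100 - 20 * config["optimize"] + 5 * config["debug"]

configs = sample_configurations(8)
results = [(cfg, measure(cfg)) for cfg in configs]
best_cfg, best_value = min(results, key=lambda cm: cm[1])
```

The same loop structure accommodates other sampling strategies (e.g. covering arrays, feature-model-aware sampling) by swapping out `sample_configurations`.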
COMBINE standards & tools: Getting model management right
1. Prof. Dr. Dagmar Waltemath
Medical Informatics Laboratory
University Medicine Greifswald
dagmarwaltemath
COMBINE standards & tools
Getting [sysmed] model
management right
Computational Systems Biology for Complex Human Disease
December 10, 2020 | Wellcome Advanced Courses | slideshare
2. COMBINE coordinates standards
developments in systems biology.
Editorial boards
Specifications
Software tool support
Mailing lists
Annual meetings
2
https://co.mbine.org/
3. 3
Who is behind COMBINE?
BioPAX
(GD Bader)
SBOL Visual
(T Gorochowski)
SBML
(S Keating)
SBOL & chair
(C Myers)
CellML
(D Nickerson)
SED-ML
(M König)
NeuroML
(P Gleeson)
M Golebiewski
Semantics, med inf & vice-chair
(D Waltemath)
10th COMBINE Forum (2019)
SBGN
(F Schreiber)
5. What are the benefits?
https://pixabay.com/images/id-92566/
6. 6
You can retrieve and verify
reproducible virtual studies.
Looking for more COVID-19 models
https://www.ebi.ac.uk/biomodels/covid-19 (EOSC COVID-19 fast track funding)
7. Example model: https://www.ebi.ac.uk/biomodels/BIOMD0000000144
Simulation by MathSBML Original Simulation, COPASI, SED-ML Web Tools
Scharm & Waltemath (2016) A fully featured COMBINE archive of a simulation study on syncytial mitotic cycles in
Drosophila embryos. F1000Research 5:2421. https://doi.org/10.12688/f1000research.9379.1
You can retrieve and verify
reproducible virtual studies.
7
Select Download Reuse
8. 8
You can build & simulate your
models using different tools.
Screenshot: Frank Bergmann, Bruce E. Shapiro and Michael Hucka. SBML Software Matrix.
http://sbml.org/SBML_Software_Guide/SBML_Software_Matrix (accessed 2020-11-12)
http://ginsim.org/
9. 9
You can share your studies
with partners and beyond.
https://cat.bio.informatik.uni-rostock.de/#archive/ff486cd9-2a01-419f-a7dd-d240cd430c49
COVID-19 disease maps on FAIRDOMHub
https://fairdomhub.org/projects/190
Wolstencroft et al (2017) FAIRDOMHub: a repository and collaboration environment for sharing systems biology research.
NAR 45(D1) 10.1093/nar/gkw1032
10. 10
You can generate (and publish)
reproducible virtual studies.
https://cat.bio.informatik.uni-rostock.de/
https://sysbioapps.spdns.org/SED-ML_Web_Tools/
11. 11
You can generate (and publish)
reproducible virtual studies.
https://www.ebi.ac.uk/biomodels/
http://bigg.ucsd.edu/ http://www.opensourcebrain.org/
https://jjj.bio.vu.nl/
12. 12
You get support for managing &
modifying your models. (biased)
Reproduce a simulation Detect differences Understand model evolution
http://sed-ml.org/ https://github.com/SemsProject/BiVeS https://most.bio.informatik.uni-rostock.de/
https://yomost.bio.informatik.uni-rostock.de/
F Bergmann, D Nickerson, M Scharm, T Gebhardt, V Touré
13. 13
You get support for managing &
modifying your models. (biased)
Bundle all files in one archive Retrieve models efficiently Link models and other data
https://github.com/MaSyMoS https://combinearchive.org/ https://covidgraph.org/
Wolfgang Müller, Ron Henkel, Mariam Nassar, Martin Peters
Henkel et al (2015) Combining computational models, semantic annotations and simulation experiments in a graph database. Database (Oxford)
2 experiments,
3 model versions,
changes, meta-data
Martin Preusse, Lea Gütebier
14. 14
You can build tool chains to
check & automate analyses.
Cardiac Electrophysiology Web Lab, Oxford
M2CAT, SEMS
WebCAT, SEMS
JWS Online, Stellenbosch, SA SED-ML Web Tools, BIOQUANT
15. 15
Fig.: M2CAT, Scharm & Waltemath (2015) BTW http://www.btw-2015.de/res/proceedings/Workshops/DMS/Scharm-Extracting_reproducible_sim.pdf
You can build tool chains to
check & automate analyses.
16. What's in a shareable
archive?
https://pixabay.com/images/id-2110767/
17. 17
Scharm & Waltemath (2016) A fully featured COMBINE archive of a simulation study on syncytial mitotic cycles in
Drosophila embryos. F1000Research 5
Biosimulation studies comprise
heterogeneous data items.
Original
publication
Visualisation Model encoding Simulation encoding
COMBINE
Archive
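Bundling these heterogeneous items can be sketched with Python's standard library alone: a COMBINE archive is a ZIP file whose manifest.xml declares a format identifier for every entry. The file names and contents below are placeholders, and the manifest covers only the minimum needed for the sketch.

```python
import zipfile

def make_combine_archive(path, files):
    """Write a minimal COMBINE archive: a ZIP whose manifest.xml
    lists every entry together with a format identifier."""
    entries = "\n".join(
        f'  <content location="./{name}" format="{fmt}"/>'
        for name, fmt, _ in files)
    manifest = (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<omexManifest xmlns="http://identifiers.org/'
        'combine.specifications/omex-manifest">\n'
        '  <content location="." '
        'format="http://identifiers.org/combine.specifications/omex"/>\n'
        f'{entries}\n'
        '</omexManifest>\n')
    with zipfile.ZipFile(path, "w") as archive:
        archive.writestr("manifest.xml", manifest)
        for name, _, data in files:
            archive.writestr(name, data)

# Placeholder model and simulation files for the sketch.
make_combine_archive("study.omex", [
    ("model.xml",
     "http://identifiers.org/combine.specifications/sbml", "<sbml/>"),
    ("simulation.sedml",
     "http://identifiers.org/combine.specifications/sed-ml", "<sedML/>"),
])
```

Real archives add metadata.rdf and typically use a library such as libCOMBINE rather than hand-rolled manifests.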
18. How long did it take to
build this infrastructure?
https://pixabay.com/images/id-1623517/
19. Dräger & Waltemath (2020) Overview: Standards for Modeling in Systems Medicine.
Systems Medicine https://doi.org/10.1016/B978-0-12-816077-0.00001-7
It's been a long journey
Invention of FAIR
19
20. What does COMBINE
offer today?
COMBINE: the computational modeling in biology network
http://co.mbine.org/
21. Standard formats for
model representation
• Standardised
representation of
models
• Export from major
modeling tools
• Available from major
model repositories
• Formats: XML/RDF
+semantic annotations
(RDF)
21Myers et al (2017) A brief history of COMBINE. WSC’17. https://dl.acm.org/doi/abs/10.5555/3242181.3242249
22. 22Fig.: Keating et al (2020) SBML Level 3: an extensible format for the exchange and reuse of biological models.
Molecular Systems Biology, 16(8), p.e9110
What does SBML
cover?
http://co.mbine.org/standards
Standard formats for
model representation
23. • Standardised
representation of a
model's layout
• Glyphs supported in
major modeling tools
• Conversion into model
code possible
• Formats: XML, Format
for SBOLVis?
23Myers et al (2017) A brief history of COMBINE. WSC’17. https://dl.acm.org/doi/abs/10.5555/3242181.3242249
Vis
Standard formats for
model representation
24. • Standardised
representation of a
model's layout
• Conversion into model
code possible
• Formats: XML, Format
for SBOLVis?
24
Fig: Touré et al (2018) Quick tips for creating effective and
impactful biological pathways using the Systems Biology
Graphical Notation. PLoS Comput Biol
https://doi.org/10.1371/journal.pcbi.1005740
Myers et al (2017) A brief history of COMBINE. WSC’17. https://dl.acm.org/doi/abs/10.5555/3242181.3242249
Standard formats for
graphical representation
25. • Standardised
representation of a
model's layout
• Conversion into model
code possible
• Formats: XML, Format
for SBOLVis?
25
Fig: Quinn et al (2015) SBOL Visual: A Graphical Language
for Genetic Designs. PLOS Biol
https://doi.org/10.1371/journal.pbio.1002310
Vis
Standard formats for
graphical representation
Myers et al (2017) A brief history of COMBINE. WSC’17. https://dl.acm.org/doi/abs/10.5555/3242181.3242249
26. • Standardised
representation of
simulation setups
applied to a (set of)
models
• Coverage of main
simulation types
• Export from major
simulation tools
• Formats: XML+RDF
26
Standard formats for
simulation experiments
Myers et al (2017) A brief history of COMBINE. WSC’17. https://dl.acm.org/doi/abs/10.5555/3242181.3242249
27. • Enrichment of the
"technical" encoding
with semantic
information
• Enables reuse,
comparability, and search
for virtual models and
components
27Myers et al (2017) A brief history of COMBINE. WSC’17. https://dl.acm.org/doi/abs/10.5555/3242181.3242249
Standard ontologies for
semantic representation
28. • Minimum Information
guidelines describe
what information
needs to be
transported during
publication
• If encoding your virtual
experiments in
COMBINE standards,
you'll be MI-compliant
28Myers et al (2017) A brief history of COMBINE. WSC’17. https://dl.acm.org/doi/abs/10.5555/3242181.3242249
Guidelines for good
modeling and simulation
29. How can I make use of the
COMBINE infrastructure to build
and provide better models?
30
30. BioModels
• Provides models and
associated files
• Keeps file history
• Description of model
components
• Curation results,
metadata, tagging,
• Detailed curation
https://www.ebi.ac.uk/biomodels/
31
Reuse published studies
whenever possible.
33. 34
Fig.: Schreiber et al (2020) Specifications of standards in systems and synthetic biology: status and
developments in 2020. Journal of integrative bioinformatics, 17(2-3). https://doi.org/10.1515/jib-2018-0013
Use standards whenever
possible.
35. Draw meaningful networks.
40
Mol Syst Biol, Volume: 3, Issue: 1, First published:
31 July 2007, DOI: (10.1038/msb4100171)
Touré et al (2018) Quick tips for creating effective and impactful biological pathways using the Systems
Biology Graphical Notation. PLoS Comput Biol 14(2): e1005740. https://doi.org/10.1371/journal.pcbi.1005740
37. Semantically enrich your model.
1. Use technical standards to encode semantic annotations, e.g. RDF, identifiers.org URIs and
BioModels.net qualifiers
2. Store annotations in a separate file: normalize the format in which annotations are stored
3. Develop a software library for support of semantic annotation standards
4. Develop standards-compliant software, promote consistency in annotation practices
5. Document which knowledge resources should be used for annotation and why: publicly
available documentation, e.g. Curation guidelines for a collaborative development of the
COVID-19 Disease Map
6. Establish a repository of reusable annotations: reduce the time required for annotation and
promote inter-annotator consistency
7. Ensure high-quality semantic annotations through training and quality control processes:
specific, complete and consistent annotations, e.g. https://www.ebi.ac.uk/biomodels-
static/jummp-biomodels-help/annotating_models.html
8. Establish and maintain collaborations with knowledge resource developers
42
Neal et al (2019) Harmonizing semantic annotations for computational models in biology, Briefings in
Bioinformatics 20:2, https://doi.org/10.1093/bib/bby087
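Tip 1 above can be made concrete: a BioModels.net biology qualifier links a model element to an identifiers.org URI inside an RDF block. The element id "glucose" and the helper function are invented for this illustration; the namespace URIs follow the published conventions.

```python
# Sketch: emit an RDF/XML annotation stating that a model element
# "is" the ChEBI entry for glucose, via a BioModels.net qualifier.
BQBIOL = "http://biomodels.net/biology-qualifiers/"
RDF_NS = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

def annotate(element_id, qualifier, resource_uri):
    """Return an RDF/XML fragment linking element_id to resource_uri
    via the given BioModels.net biology qualifier (e.g. 'is')."""
    return (
        f'<rdf:RDF xmlns:rdf="{RDF_NS}"\n'
        f'         xmlns:bqbiol="{BQBIOL}">\n'
        f'  <rdf:Description rdf:about="#{element_id}">\n'
        f'    <bqbiol:{qualifier}>\n'
        f'      <rdf:Bag><rdf:li rdf:resource="{resource_uri}"/></rdf:Bag>\n'
        f'    </bqbiol:{qualifier}>\n'
        '  </rdf:Description>\n'
        '</rdf:RDF>')

annotation = annotate("glucose", "is", "https://identifiers.org/CHEBI:17234")
```

In practice such a block is embedded in the model file (e.g. inside an SBML species annotation) or stored alongside it, as per tip 2.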
39. Check out guidelines and tutorials
for further help.
10 tips for building useful SBGN maps Building fully featured COMBINE archives
44
40. 45
Extensive curation leads to higher
quality, which leads to increased trust.
Easy access leads to
increased visibility.
Easy executability leads to
increased collaboration.
Better reproducibility
leads to increased reuse.
Remember: Reproducible
studies = better science.
41. Thank you for your attention!
46
Dagmar Waltemath
Medical Informatics Lab &
Core Unit Research Data Management
University Medicine Greifswald
0000-0002-5886-5563
Mila (now)
SEMS (<2017)
Drawings: Anna Zhukova