Toxicology is in the midst of a transformation, moving towards a data-rich, quantitative future. As we move towards a biological Big Data reality, investigators and risk assessors need computational tools that can help them make decisions. In this slide deck I lay out some of my vision for using ontologies to drive predictive toxicology with machine intelligence.
Timothy Dawes of Genentech and Elliot Hui of the University of California, Irvine share their well-received presentation from SLAS2017 in Washington, DC.
On 12 May 2017 we held a conference at the Fundación Ramón Areces, together with IS Global and Unitaid, on vector-borne diseases such as malaria, among others.
Deep learning in medicine: An introduction and applications to next-generatio... by Allen Day, PhD
Deep learning has enabled dramatic advances in image recognition performance. In this talk I will discuss using a deep convolutional neural network to detect genetic variation in aligned next-generation sequencing human read data. Our method, called DeepVariant, both outperforms existing genotyping tools and generalizes across genome builds and even to other species. DeepVariant represents a significant step from expert-driven statistical modeling towards more automatic deep learning approaches for developing software to interpret biological instrumentation data.
Presented August 2011 to the federal Ideation Community of Practice re: NASA’s innovation strategy, including their use of innovation platforms (internal and external tools for crowdsourcing collaboration) and innovation spaces (hacking spaces, TechShop/FabLab/etc.)
Best Practices for Building an End-to-End Workflow for Microbial Genomics by Jonathan Jacobs, PhD
Invited talk presented at the 2019 CAFPA-ASM D.C. Branch FALL MEETING on "Current Testing Approaches & Implications for Public Health" at the FDA in College Park, MD.
Fundamentals of Genetic Toxicology in the Pharmaceutical Industry, Sept 2010, by TigerTox
Historical and current perspectives on genetic toxicology, with commentary and slides on assay predictivity and shortcomings, regulatory guidance, and high-throughput screens to enhance preclinical drug safety.
Introduction to Gene Mining Part A: BLASTn-off! by adcobb
In this lesson, students will learn to use bioinformatics portals and tools to mine plant versions of human genes. Student handout and teacher resource materials are available at www.Araport.org, Teaching Resources (Community tab). Suitable for grades 9-12 or first year undergraduate students.
These slides cover why bioinformatics appeared, who bioinformaticians are, what they do, and the kinds of cool applications and challenges that exist in bioinformatics. They were prepared for the Bioinformatics seminar 2016, Institute of Computer Science, University of Tartu.
A high-level overview of using artificial intelligence and chemical structure information to predict toxicity in various species. Discusses molecular docking, deep learning, quantitative structure-activity relationships, Bayesian networks, and cats (lots of cat pictures). Part of my artificial intelligence for national security, artificial intelligence for warfighter readiness, and alternative methods for toxicity prediction research portfolios.
Apache Spark NLP for Healthcare: Lessons Learned Building Real-World Healthca... by Databricks
The speaker will review case studies from real-world projects that built AI systems using Natural Language Processing (NLP) in healthcare. These case studies cover projects that deployed automated patient risk prediction, automated diagnosis, clinical guidelines, and revenue cycle optimization.
Seminario en CIFASIS, Rosario, Argentina - Seminar in CIFASIS, Rosario, Argen... by Alejandra Gonzalez-Beltran
Experimental biology has become a data-intensive science, thanks to advances in digital signal acquisition technologies and biosensors. Data availability is fundamental to the transparency of the scientific process: for reproducing results and also for reusing data in future studies. This talk will explore various software tools that facilitate the metadata-generation process to improve the quality, reporting, publication, and review of data, with an emphasis on biomedical applications.
Presentation at ESCAIDE 2016 by Thibaut Jombart. The R Epidemics Consortium: Building the next generation of statistical tools for outbreak response using R
Big Data and AI in Fighting Against COVID-19 by Bill Liu
Website: https://learn.xnextcon.com/event/eventdetails/W20070810
As the COVID-19 pandemic sweeps the globe, big data and AI have emerged as crucial tools for everything from diagnosis and epidemiology to therapeutic and vaccine development.
In this talk, we collect and review how big data is fighting back against COVID-19. We also provide a deep dive into two interesting use cases: 1) using NLP and BERT to answer scientific questions; 2) the COVID-19 data lake from Databricks, Google, and Amazon.
Agenda:
Introduction
Supercomputers for Scientific Research
Covid-19 Tracking and Prediction
Covid-19 Research and Diagnosis
Use Case 1 NLP and BERT to answer scientific questions
Use Case 2 Covid-19 Data Lake and Platform
NGS: how what we are measuring impacts data models, and the implications for data commons. New sequencing technologies, such as long-read transcriptomic sequencing, give us new gene models. These gene models alter the way we see past sequencing data and impact how we assess the biological relevance of results. The disruption this causes to our view of the biological systems under study needs to be absorbed and validated, and the new view built upon. Understanding the lifecycle of data and of the measurement technologies is imperative. Ultimately, statements and insights may be the longest-lived items: claims validated by experiments and re-validated in every new context. Old measurement technologies may go the way of the kilogram, replaced by reproducible experiments. What do we need to do to ensure that the persistent data stores upon which we rely enable this, promote this, and enable us to become better data stewards?
Open Source Pharma: Crowd computing: A new approach to predictive modeling by Open Source Pharma
Presentation about "Predictive in silico models," given by Joerg Bentzien at the Open Source Pharma Conference. The event took place at Rockefeller Foundation Bellagio Center in July 2014.
Joerg Bentzien Bio:
http://www.opensourcepharma.net/participants/jorg-bentzien
Conference Agenda (see Day 1, Session 2):
http://www.opensourcepharma.net/agenda.html
A talk given at the International Congress "Contrasts in Pharmacology 2.0", held in Turin, May 14-16, 2015. It describes our work with bigger datasets, on tuberculosis as well as other areas.
Phenomics-assisted breeding in crop improvement by IshaGoswami9
The population is increasing and will reach about 9 billion by 2050; with climate change as well, it will be difficult to meet the food requirements of such a large population. Facing the challenges presented by resource shortages, climate change, and an increasing global population, crop yield and quality need to be improved in a sustainable way over the coming decades. Genetic improvement by breeding is the best way to increase crop productivity. With the rapid progression of functional genomics, an increasing number of crop genomes have been sequenced and dozens of genes influencing key agronomic traits have been identified. However, current genome sequence information has not been adequately exploited for understanding the complex characteristics controlled by multiple genes, owing to a lack of crop phenotypic data. Efficient, automatic, and accurate technologies and platforms that can capture phenotypic data linkable to genomics information for crop improvement at all growth stages have become as important as genotyping. Thus, high-throughput phenotyping has become the major bottleneck restricting crop breeding. Plant phenomics has been defined as the high-throughput, accurate acquisition and analysis of multi-dimensional phenotypes during crop growing stages, spanning the cell, tissue, organ, individual plant, plot, and field levels. With the rapid development of novel sensors, imaging technology, and analysis methods, numerous infrastructure platforms have been developed for phenotyping.
What are greenhouse gases, and how many gases affect the Earth? by moosaasad1975
What are greenhouse gases, how do they affect the Earth and its environment, and what do they mean for the future of the Earth's environment, weather, and climate?
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep... by University of Maribor
Slides from:
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Track: Artificial Intelligence
https://www.etran.rs/2024/en/home-english/
The ability to recreate computational results with minimal effort and actionable metrics provides a solid foundation for scientific research and software development. When people can replicate an analysis at the touch of a button using open-source software, open data, and methods to assess and compare proposals, it significantly eases verification of results, engagement with a diverse range of contributors, and progress. However, we have yet to fully achieve this; there are still many sociotechnical frictions.
Inspired by David Donoho's vision, this talk aims to revisit the three crucial pillars of frictionless reproducibility (data sharing, code sharing, and competitive challenges) with the perspective of deep software variability.
Our observation is that multiple layers — hardware, operating systems, third-party libraries, software versions, input data, compile-time options, and parameters — are subject to variability that exacerbates frictions but is also essential for achieving robust, generalizable results and fostering innovation. I will first review the literature, providing evidence of how the complex variability interactions across these layers affect qualitative and quantitative software properties, thereby complicating the reproduction and replication of scientific studies in various fields.
I will then present some software engineering and AI techniques that can support the strategic exploration of variability spaces. These include the use of abstractions and models (e.g., feature models), sampling strategies (e.g., uniform, random), cost-effective measurements (e.g., incremental build of software configurations), and dimensionality reduction methods (e.g., transfer learning, feature selection, software debloating).
I will finally argue that deep variability is both the problem and solution of frictionless reproducibility, calling the software science community to develop new methods and tools to manage variability and foster reproducibility in software systems.
Exposé invité Journées Nationales du GDR GPL 2024
Nutraceutical market, scope and growth: Herbal drug technology by Lokesh Patil
As consumer awareness of health and wellness rises, the nutraceutical market—which includes goods like functional foods, drinks, and dietary supplements that provide health advantages beyond basic nutrition—is growing significantly. As healthcare expenses rise, the population ages, and people increasingly want natural and preventative health solutions, this industry is expanding quickly. Product formulation innovations and the use of cutting-edge technology for customized nutrition further drive market expansion. With its worldwide reach, the nutraceutical industry is expected to keep growing and to provide significant opportunities for research and investment in a number of categories, including vitamins, minerals, probiotics, and herbal supplements.
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a... by Ana Luísa Pinho
Functional Magnetic Resonance Imaging (fMRI) provides means to characterize brain activations in response to behavior. However, cognitive neuroscience has been limited to group-level effects referring to the performance of specific tasks. To obtain the functional profile of elementary cognitive mechanisms, the combination of brain responses to many tasks is required. Yet, to date, both structural atlases and parcellation-based activations do not fully account for cognitive function and still present several limitations. Further, they do not adapt overall to individual characteristics. In this talk, I will give an account of deep-behavioral phenotyping strategies, namely data-driven methods in large task-fMRI datasets, to optimize functional brain-data collection and improve inference of effects-of-interest related to mental processes. Key to this approach is the employment of fast multi-functional paradigms rich in features that can be well parametrized and, consequently, facilitate the creation of psycho-physiological constructs to be modelled with imaging data. Particular emphasis will be given to music stimuli when studying high-order cognitive mechanisms, due to their ecological nature and quality to enable complex behavior compounded by discrete entities. I will also discuss how deep-behavioral phenotyping and individualized models applied to neuroimaging data can better account for the subject-specific organization of domain-general cognitive systems in the human brain. Finally, the accumulation of functional brain signatures brings the possibility to clarify relationships among tasks and create a univocal link between brain systems and mental functions through: (1) the development of ontologies proposing an organization of cognitive processes; and (2) brain-network taxonomies describing functional specialization.
To this end, tools to improve commensurability in cognitive science are necessary, such as public repositories, ontology-based platforms and automated meta-analysis tools. I will thus discuss some brain-atlasing resources currently under development, and their applicability in cognitive as well as clinical neuroscience.
Professional air quality monitoring systems provide immediate, on-site data for analysis, compliance, and decision-making. They monitor common gases, weather parameters, and particulates.
Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige... by University of Maribor
Slides from talk:
Aleš Zamuda: Remote Sensing and Computational, Evolutionary, Supercomputing, and Intelligent Systems.
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Inter-Society Networking Panel GRSS/MTT-S/CIS Panel Session: Promoting Connection and Cooperation
https://www.etran.rs/2024/en/home-english/
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt... by Sérgio Sacani
Since volcanic activity was first discovered on Io from Voyager images in 1979, changes on Io’s surface have been monitored from both spacecraft and ground-based telescopes. Here, we present the highest spatial resolution images of Io ever obtained from a ground-based telescope. These images, acquired by the SHARK-VIS instrument on the Large Binocular Telescope, show evidence of a major resurfacing event on Io’s trailing hemisphere. When compared to the most recent spacecraft images, the SHARK-VIS images show that a plume deposit from a powerful eruption at Pillan Patera has covered part of the long-lived Pele plume deposit. Although this type of resurfacing event may be common on Io, few have been detected due to the rarity of spacecraft visits and the previously low spatial resolution available from Earth-based telescopes. The SHARK-VIS instrument ushers in a new era of high-resolution imaging of Io’s surface using adaptive optics at visible wavelengths.
Using Machine Intelligence to Perform Predictive Toxicology
1. Using Machine Intelligence To Perform Predictive Toxicology
Lyle D. Burgoon, Ph.D.
Team Leader: Bioinformatics and Computational Toxicology (Data Ninja Corps)
Environmental Laboratory
The views and opinions expressed are those of the author and not those of the US Army or any other federal agency.
2. Innovative solutions for a safer, better world (BUILDING STRONG®)
Challenge: the Data-to-Decisions Chasm
TOX21/ToxCast/Toxicogenomics Test Data → Modeling → AOPs → Decisions
3. Multi-Year Army Investment In Engineering Predictive Toxicology
2012-2016: Rapid Hazard Assessment Focus Area
► AOP Ontology: ontology to predict AOP outcomes using assay data
► AOPXplorer: R and Cytoscape software to facilitate data visualization within the AOP context
2017-2021: Next Generation Risk Assessment Focus Area
► Development of High Throughput Zebrafish Embryo Toxicity Assays
► Predicting Molecular Initiating Events through Deep Learning of Molecular Interactions
► Predicting Assay Responses through Deep Learning
► Toolkit that integrates predictive toxicology tools
► Further development of content for AOPO and AOPXplorer
8. Computer + Ontology = Classify via Deduction
Triples take the form Subject : Predicate : Object
Bachelor : is an unmarried : Male
Bachelor : is a : Human
Homo sapiens : is a : mammal
Human : owl:sameAs : Homo sapiens
Question: Is a bachelor a mammal?
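The deduction in this example can be sketched in a few lines of plain Python (no OWL reasoner); treating `is a` as transitive and `owl:sameAs` as symmetric is the assumption that makes the classification work:

```python
# A minimal sketch of ontology-driven classification by deduction.
# Only the hierarchy predicates ("is a", "owl:sameAs") are indexed;
# "is an unmarried" is a plain property, not a subclass link.

def build_graph(triples):
    """Index (subject, predicate, object) triples into a superclass map."""
    parents = {}
    for s, p, o in triples:
        if p in ("is a", "owl:sameAs"):
            parents.setdefault(s, set()).add(o)
            if p == "owl:sameAs":  # sameAs is symmetric
                parents.setdefault(o, set()).add(s)
    return parents

def is_a(parents, term, target):
    """Deduce whether `term` is a `target` by walking the hierarchy."""
    seen, stack = set(), [term]
    while stack:
        t = stack.pop()
        if t == target:
            return True
        if t in seen:
            continue
        seen.add(t)
        stack.extend(parents.get(t, ()))
    return False

triples = [
    ("Bachelor", "is an unmarried", "Male"),
    ("Bachelor", "is a", "Human"),
    ("Homo sapiens", "is a", "mammal"),
    ("Human", "owl:sameAs", "Homo sapiens"),
]
parents = build_graph(triples)
# Bachelor -> Human -> Homo sapiens -> mammal, so the answer is yes
print(is_a(parents, "Bachelor", "mammal"))  # True
```

A real system would hand this job to an OWL reasoner, but the chain of inference it performs is exactly this walk.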
9. Logical Terminology
Necessary; Sufficient; Necessary and Sufficient
10. Sufficient: To get an A in this course, it is sufficient to get an A on all work turned in.
Necessary: To get an A in this course, you must turn in a report.
Necessary: states the criteria required to achieve something.
Sufficient: if you meet these criteria, you are guaranteed to achieve something.
Necessary and Sufficient: to be guaranteed to achieve something, you must meet these criteria.
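The two implications can be checked mechanically: a sufficient condition implies the outcome, and the outcome implies any necessary condition. A toy model of the course-grade example, where the exact grading rule is an illustrative assumption:

```python
from itertools import product

# Toy grading rule (an assumption, not from the deck): you get an A iff you
# turned in a report (necessary) and got an A on all work (a sufficient path,
# since "all work" includes the report).
def got_a(a_on_all_work, turned_in_report):
    return a_on_all_work and turned_in_report

# Verify both implications over every combination of cases.
for a_on_all_work, turned_in_report in product([False, True], repeat=2):
    result = got_a(a_on_all_work, turned_in_report)
    # Necessary: outcome -> condition (no report, no A)
    if result:
        assert turned_in_report
    # Sufficient: condition -> outcome (A on all turned-in work guarantees an A)
    if a_on_all_work and turned_in_report:
        assert result
print("necessary and sufficient checks pass")
```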
11. Bachelor : is an unmarried : Male
Bachelor : is a : Human
Homo sapiens : is a : mammal
Human : owl:sameAs : Homo sapiens
Given: Bob : is a : Bachelor
Sufficient: Bob must be a human, unmarried male
Necessary: To be a bachelor, one must be an unmarried male
12. AOP Ontology
Like modern software, it’s a constantly evolving work in progress
Model
► Assay classes
► Assay result classes
► Biological pathway classes
► AOP classes
Predict toxicity
13. Benzo[k]Fluoranthene effects on the Steatosis AOP network
DHB4 HTS Data: B[k]F inhibits activity (red)
Predict: Steatosis (blue)
Predict: ALT and AST levels increased (green)
Burgoon, et al. (2016). Risk Analysis. doi:10.1111/risa.12613
14. Application to Developing Screening-Level Risk Assessments
► Identify all available data for a chemical or mixture
► Use AOPs to identify potential adverse outcomes (hazard ID)
► Use concentration-response or dose-response data to calculate a POD for an AOP
• Use a sufficient key event (a key event sufficient to infer adversity based on network theory)
► Reverse dosimetry on the POD (if in vitro data) to estimate an adult POD
► Determine a safe margin from the POD (divide by 100 if a 100x safe margin is desired)
Burgoon, et al. (2016). Risk Analysis. doi:10.1111/risa.12613
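The last two steps of this workflow reduce to simple arithmetic: take the most sensitive POD across the AOPs and divide by the desired margin. A minimal sketch, with hypothetical AOP names and POD values:

```python
# Screening-level calculation sketch: PODs per AOP are hypothetical inputs
# (in practice they come from concentration-response modeling, possibly
# after reverse dosimetry of in vitro data).

pods_mg_per_kg_day = {
    "steatosis": 2.0,        # hypothetical POD
    "liver fibrosis": 0.5,   # hypothetical POD
}

SAFETY_FACTOR = 100  # divide by 100 for a 100x safe margin, as on the slide

# The critical AOP is the one with the lowest (most sensitive) POD.
critical_aop = min(pods_mg_per_kg_day, key=pods_mg_per_kg_day.get)
critical_pod = pods_mg_per_kg_day[critical_aop]
screening_value = critical_pod / SAFETY_FACTOR

print(critical_aop, screening_value)  # liver fibrosis 0.005
```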
15. Fish Fecundity AOPs
AOPwiki
AOPXplorer: visualization of AOPs
20. Tutorials and Help Documents
A series of examples is included with AOPXplorer.
A vignette is being developed to provide written documentation of how to use the AOPXplorer.
► Will include omics examples, HTS, etc.
► For instance, examples will allow users to go from raw microarray data, analyze it and find genes in the AOPN, add that data to the AOPN graph object, and send it to Cytoscape.
Video tutorials
► To walk users through, step by step, how to analyze and visualize their data.
22. AOPXplorer Changed Our TxGen Workflow
Before:
► Fishing expedition for what changed
► Non-model organism gene annotation is poor :(
► Massive penalties for multiple testing for probes with little annotation
Today:
► Hypothesis-based analysis focused on AOPs
► We use a fully Bayesian analysis approach (takes longer, but better)
• Focused on probes connected to AOPs
► Data visualization is an intimate part of our analysis workflow
23. AOPXplorer + AOPO
What We Can Do Now:
► AOPO as an artificial intelligence engine
• Ask: Given the data, is there sufficient evidence to predict that Chemical X causes this AO?
• Ask: Given this AO, what is the minimum set of KEs that need to be measured to make a prediction? (Assay battery design)
• Ask: What is the likelihood, given the data, that Chemical X causes this AO? How would additional data change this likelihood?
► Near Future:
• Exploit these capabilities within the AOPXplorer itself (and thus, within R)
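One ingredient of the assay-battery question is finding every event upstream of an AO, since those are the candidate KEs to measure. A hedged sketch of that reverse reachability on a toy network (the events and edges below are invented stand-ins, not real AOPO content):

```python
from collections import defaultdict

# Toy AOP network as directed edges (upstream event -> downstream event).
# All names are hypothetical illustrations.
edges = [
    ("MIE: receptor activation", "KE: lipid accumulation"),
    ("KE: oxidative stress", "KE: lipid accumulation"),
    ("KE: lipid accumulation", "AO: steatosis"),
]

# Invert the edges so we can walk from an AO back toward its MIEs.
upstream = defaultdict(set)
for src, dst in edges:
    upstream[dst].add(src)

def measurable_kes(ao):
    """All events upstream of the AO (reverse reachability)."""
    found, stack = set(), [ao]
    while stack:
        node = stack.pop()
        for parent in upstream[node]:
            if parent not in found:
                found.add(parent)
                stack.append(parent)
    return found

print(sorted(measurable_kes("AO: steatosis")))
```

Picking the *minimum* subset of these to assay is a harder question (closer to a cut problem over the network), which is presumably where the ontology's sufficiency axioms come in.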
24. Long-Term Strategy
Open Source Toxicological Data Store
Susceptible populations data and AOPs
Data Integration Using AOPO
Allow our benefactors to ask questions in plain English (or near-plain English):
► What are the hazards posed by Chemical X?
► At what external doses will Chemical X cause cancer of any type?
► Is an exposure of X mg/kg-day okay for this population?
25. Acknowledgements
US Army ERDC Bioinformatics and Computational Toxicology Group
► Gabriel Weinreb (Bennett Aerospace)
► Larry Wu (Bennett Aerospace)
US Army ERDC
► Ed Perkins
► Natalia Vinas
Integrated Laboratory Systems, Inc (supporting NIEHS/NICEATM)
► Shannon Bell
Oak Ridge Institute for Science and Education
► Ingrid Druwe
► Kyle Painter
► Erin Yost
US Environmental Protection Agency
► Steve Edwards
► Ila Cote
Editor's Notes
The challenge is how do we bridge the divide from all of these data sources to risk management decisions? How do we integrate all of these data together to inform decision making?
Warfighters are exposed to an unknown chemical agent by contact with native building material
Biomarkers obtained non-invasively
Predict potential hazard outcomes based on biomarkers
Predict potential chemicals and chemical countermeasures based on predictions
Here ya go. Note that I don't have inhibition arrows turned on yet -- so everything is just regular arrows.
The fish fecundity AOPN also illustrates a problem going forward that I'm finding A LOT in the wiki that EAGMST needs to address -- different levels of "acceptable" organization when dealing with the same AO. This is even evident in AOPs from the SAME author. I suspect Dan would say this was on purpose to illustrate the problem, but regardless, it's a problem that the AOP Ontology seeks to solve with or without EAGMST assistance :)
This goes through a demonstration. We perform our analyses in R, and we visualize in Cytoscape. 1) We query the AOPO for the steatosis network. 2) We read in some expression data. 3) We attach the expression data to the steatosis AOPN. 4) We send this to Cytoscape to visualize the overlay.
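The data flow of that demonstration can be sketched with plain Python dictionaries standing in for the AOPO query result and the Cytoscape hand-off (none of this is the actual AOPXplorer R API):

```python
# Sketch of the demo's four steps: (1) get an AOP network, (2) read
# expression data, (3) attach the data to the network, (4) hand the
# annotated graph to a visualizer. All names/values are toy stand-ins.

aopn = {  # step 1: toy "steatosis" AOP network, node -> downstream nodes
    "SREBP activation": ["lipid accumulation"],
    "lipid accumulation": ["steatosis"],
    "steatosis": [],
}

expression = {  # step 2: toy expression values keyed by node
    "SREBP activation": 2.3,
    "lipid accumulation": 1.7,
}

# Step 3: attach expression values as node attributes; unmeasured nodes get None.
annotated = {
    node: {"downstream": targets, "expr": expression.get(node)}
    for node, targets in aopn.items()
}

# Step 4: AOPXplorer would send `annotated` to Cytoscape; here we just
# print the overlay.
for node, attrs in annotated.items():
    print(node, attrs["expr"])
```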
The spaghetti plot shows a random assortment of 40 models from the bootstrap metaregression (1,000 models were fit to the bootstrap data). The other plot is the 95% confidence envelope + median – so take all 1,000 models, and calculate the 95% confidence envelope and the median and then plot those values.
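The envelope computation the note describes can be sketched as pointwise quantiles over bootstrap fits; synthetic data and a toy slope-through-the-origin model stand in for the actual metaregression:

```python
import random
import statistics

random.seed(1)

# Synthetic dose-response data: y ≈ 2*dose + noise (illustrative only).
doses = [0.0, 0.5, 1.0, 2.0, 4.0]
data = [(d, 2.0 * d + random.gauss(0, 0.5)) for d in doses for _ in range(5)]

def fit_slope(sample):
    """Least-squares slope through the origin (toy stand-in for a model fit)."""
    num = sum(d * y for d, y in sample)
    den = sum(d * d for d, _ in sample) or 1.0  # guard degenerate resamples
    return num / den

# Fit 1,000 models to bootstrap resamples, as in the note.
slopes = []
for _ in range(1000):
    resample = [random.choice(data) for _ in data]
    slopes.append(fit_slope(resample))

def envelope(dose):
    """Pointwise 95% confidence envelope + median across the 1,000 models."""
    preds = sorted(s * dose for s in slopes)
    return preds[25], statistics.median(preds), preds[974]

print(envelope(2.0))
```

Plotting `envelope(d)` over a dose grid gives the band-plus-median figure; a random draw of 40 of the `slopes` gives the spaghetti plot.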
Detoxification via Txn and Gsr occur at low doses, along with p53-mediated cell death and Stat3-mediated inflammation. At low doses, Stat3 is likely able to overcome bbc3-mediated apoptosis; however, it is unclear if that continues at higher doses (in other words, we may see some cell death at higher doses). As the inflammatory process continues and escalates at higher doses, the extracellular matrix will begin to break down, releasing additional inflammatory cell recruiting molecules and chemoattractants. Glycosaminoglycans (GAGs) are also likely to begin entering the cell and are shuttled to the lysosome. These GAGs are toxic if they accumulate in the lysosome, so the cell will begin to increase the amount of Galns protein available to break the GAGs down. Thus, Galns expression is likely a compensatory mechanism to handle the ECM breakdown and inflammatory response.