Keynote Talk presented at the 1st Annual BiVi Community Annual Meeting (17 December 2014)
http://bivi.co/page/bivi-annual-meeting-16-17th-december-2014
Visualization Approaches for Biomedical Omics Data: Putting It All Together
The rapid proliferation of high-quality, low-cost genome-wide measurement technologies such as whole-genome and transcriptome sequencing, as well as advances in epigenomics and proteomics, is enabling researchers to perform studies that generate heterogeneous datasets for cohorts of thousands of individuals. A common feature of these studies is that a collection of genome-wide molecular data types and phenotypic or clinical characterizations are available for each individual. These data can be used to identify the molecular basis of diseases and to characterize and describe the variations that are relevant for improved diagnosis, prognosis, and targeted treatment of patients. An example of a study in which this approach has been successfully applied is The Cancer Genome Atlas project (http://cancergenome.nih.gov).
In my talk I will discuss how visualization approaches can be applied to enable exploration and support analysis of data generated by such studies. Specifically, I will review techniques and tools for visual exploration of individual omics data types, their ability to scale to large numbers of individuals or samples, and emerging techniques that integrate multiple omics data types for interactive visual analysis. I will also examine technical and legal challenges that developers of such visualization tools are facing. To conclude my talk, I will outline research opportunities for the biological data visualization community that address major challenges in this domain.
Presentation for teaching faculty about resources, data, issues, and strategies for including personal genomics in the classroom, within the context of precision medicine as an overarching theme.
Building Genomic Data Processing and Machine Learning Workflows Using Apache ... (Databricks)
Epinomics is advancing epigenetic research to drive personalized medicine, using epigenomic data analysis. Their goal is to provide an analysis resource to the community that will promote high-quality data and replicable and interpretable results. They work with academic and commercial users to ingest and analyze their genomic sequencing data and metadata. They extract epigenetic features from the sequenced genome, called “chromatin accessibility”, which are indicative of instrumental epigenetic changes responsible for differential gene expression and disease development.
Epinomics has built an Apache Spark-based pipeline that retrieves chromatin accessibility data from the epigenome, uses GraphX to find overlaps in the accessibility atlas, and then clusters the data and runs machine learning algorithms. This session will provide a primer on epigenomics, details about Epinomics’ Spark-based data pipeline focusing on parallel bioinformatic analysis, and how they use machine learning models to build the epigenomic landscape and accelerate the field of personalized immunotherapy.
Making the most of phenotypes in ontology-based biomedical knowledge discovery (Michel Dumontier)
A phenotype is an observable characteristic of an individual and typically pertains to its morphology, function, and behavior. Phenotypes, whether observed at the bench or the bedside, are increasingly being used to gain insight into the diagnosis, mechanism, and treatment of disease. A key aspect of these approaches involves comparing phenotypes that are defined in multiple terminologies that often cater to altogether different organisms, such as mice and humans. In this seminar, I will discuss computational approaches for harmonizing and utilizing phenotypes for translational research. We will examine case studies which involve the computation of semantic similarity, including the use of phenotypes to inform clinical diagnosis of rare diseases, to identify human drug targets using mouse knock-out models, and to explore phenotype-based approaches for drug repositioning.
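The ontology-aware semantic similarity measures discussed in the seminar are more sophisticated, but the basic idea of comparing phenotype term sets can be sketched with a plain Jaccard index (the HPO-style term IDs below are arbitrary examples, not real annotations):

```python
# Toy phenotype-set comparison: a plain Jaccard index over term IDs.
# Real semantic similarity measures also exploit the ontology hierarchy.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

patient = ["HP:0001250", "HP:0001263", "HP:0000252"]      # illustrative terms
mouse_model = ["HP:0001250", "HP:0000252", "HP:0002315"]  # illustrative terms
print(round(jaccard(patient, mouse_model), 2))  # 0.5
```

A real pipeline would map mouse phenotype terms into a common ontology before comparing, which is exactly the harmonization problem the talk addresses.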
Illuminating the Druggable Genome with Knowledge Engineering and Machine Lear... (Jeremy Yang)
Talk given at 14th Annual New Mexico BioInformatics, Science and Technology (NMBIST) Symposium, entitled Integrative Omics, on March 14-15, 2019. Most slides c/o IDG KMC PI Tudor Oprea, MD, PhD.
Keynote presented at the Phenotype Foundation first annual meeting.
Describes data sharing, data annotation, and the need for further development of tools, ontologies, and ontology mappings.
Amsterdam, January 18, 2016
Why the world needs phenopacketeers, and how to be one (mhaendel)
Keynote presented at the Ninth International Biocuration Conference, Geneva, Switzerland, April 10-14, 2016
The health of an individual organism results from complex interplay between its genes and environment. Although great strides have been made in standardizing the representation of genetic information for exchange, there are no comparable standards to represent phenotypes (e.g. patient disease features, variation across biodiversity) or environmental factors that may influence such phenotypic outcomes. Phenotypic features of individual organisms are currently described in diverse places and in diverse formats: publications, databases, health records, registries, clinical trials, museum collections, and even social media. In these contexts, biocuration has been pivotal to obtaining a computable representation, but is still deeply challenged by the lack of standardization, accessibility, persistence, and computability among these contexts. How can we help all phenotype data creators contribute to this biocuration effort when the data is so distributed across so many communities, sources, and scales? How can we track contributions and provide proper attribution? How can we leverage phenotypic data from the model organism or biodiversity communities to help diagnose disease or determine evolutionary relatedness? Biocurators unite in a new community effort to address these challenges.
WikiPathways: how open source and open data can make omics technology more us... (Chris Evelo)
Presentation about collaborative development of open-source pathway analysis code and pathways, and about its usage in analytical software distributed with analytical instruments such as mass spectrometers.
On the frontier of genotype-2-phenotype data integration (mhaendel)
Presented at AMIA TBI 2016 BD2K Panel. A description of the Monarch Initiative's efforts to perform deep phenotyping data integration across species, facilitate exchange, and build computable G2P evidence models to aid variant interpretation.
Global Phenotypic Data Sharing Standards to Maximize Diagnostics and Mechanis... (mhaendel)
Presented at the IRDiRC 2017 conference in Paris, Feb 9th, 2017 (http://irdirc-conference.org/). This talk reviews use of the Human Phenotype Ontology for phenotype comparisons against other patients, known diseases, and animal models for diagnostic discovery. It also discusses the new Phenopackets Exchange mechanism for open phenotypic data sharing.
www.monarchinitiative.org
www.phenopackets.org
www.human-phenotype-ontology.org
Hail: Scaling Genetic Data Analysis with Apache Spark: Keynote by Cotton Seed (Spark Summit)
In 2001, it cost ~$100M to sequence a single human genome. In 2014, due to dramatic improvements in sequencing technology far outpacing Moore’s law, we entered the era of the $1,000 genome. At the same time, the power of genetics to impact medicine has become evident: for example, drugs with supporting genetic evidence have twice the clinical trial success rate. These factors have led to an explosion in the volume of genetic data, in the face of which existing analysis tools are breaking down.
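The claim that sequencing improvements far outpaced Moore's law can be checked with back-of-the-envelope arithmetic from the numbers above (assuming a Moore's-law doubling every ~18 months, which is an assumption for illustration):

```python
# Compare the stated sequencing cost drop (2001-2014) against a
# Moore's-law baseline of one doubling every ~18 months.
cost_2001 = 100_000_000  # ~$100M per genome in 2001
cost_2014 = 1_000        # ~$1,000 per genome in 2014

fold_drop = cost_2001 / cost_2014        # how much cheaper sequencing got
years = 2014 - 2001
moores_law_gain = 2 ** (years / 1.5)     # hardware improvement over same span

print(fold_drop)                # 100000.0
print(round(moores_law_gain))   # 406
```

A ~100,000-fold cost reduction against a ~400-fold Moore's-law baseline: sequencing improved roughly 250 times faster than hardware over the same period.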
Therefore, we began the open-source Hail project (https://hail.is) to be a scalable platform built on Apache Spark to enable the worldwide genetics community to build, share, and apply new tools. Hail is focused on variant-level (post-read) data; querying genetic data, annotations and sample data; and performing rare and common variant association analyses. Hail has already been used to analyze datasets with hundreds of thousands of exomes and tens of thousands of whole genomes.
We will give an overview of the goals of the Hail project and its architecture. The challenge of efficiently manipulating genetic data in Spark has led to several innovations that may have wider applicability, including an RDD-like abstraction for representing multidimensional data and an OrderedRDD abstraction for ordered data (for example, data indexed by position in the genome). Finally, we will discuss Hail performance and future directions.
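Hail's OrderedRDD is a Spark abstraction, but the underlying idea, keeping records sorted by genomic position so a range query can locate its slice without a full scan, can be sketched in a toy single-machine form (all names and positions below are illustrative, not Hail's API):

```python
from bisect import bisect_left

# Toy sketch of position-ordered genetic data: records sorted by position
# on one contig, so range queries cost O(log n) lookup plus the slice.
variants = [
    (100, "rs1"), (250, "rs2"), (300, "rs3"), (900, "rs4"), (1200, "rs5"),
]
positions = [p for p, _ in variants]

def query_range(start, end):
    """Return variant ids with start <= position < end."""
    lo = bisect_left(positions, start)
    hi = bisect_left(positions, end)
    return [vid for _, vid in variants[lo:hi]]

print(query_range(200, 1000))  # ['rs2', 'rs3', 'rs4']
```

In a distributed setting the same ordering lets a range query touch only the partitions whose key ranges overlap the query, which is the point of the abstraction.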
Talk on pathway extensions using knowledge integration and network approaches, presented at the Systems Biology Institute in Luxembourg on November 28, 2012.
Collaboratively Creating the Knowledge Graph of Life (Chris Mungall)
Overview of collaborative projects in the life sciences building out the necessary ontologies, schemas, and knowledge graphs for describing biological knowledge
A Genome Sequence Analysis System Built with Hypertable (DATAVERSITY)
Deep genome sequencing has revolutionized the fields of biology and medicine. Since January 2008, the capacity to generate sequence data has increased exponentially, far outpacing Moore's Law. The emergence of scalable NoSQL database technologies has made the analysis of this vast amount of sequence data not only feasible, but cost effective.
The University of California, San Francisco (UCSF)-Abbott Viral Detection and Discovery Center, led by director Charles Chiu, MD, PhD, and Taylor Sittler, MD, together with the Hypertable development team, has embarked upon a project to build a scalable software platform to facilitate deep sequencing analysis in diagnostic microbiology, transcriptomic analysis, and clinical/environmental metagenomics, areas for which existing commercial and academic solutions are sorely lacking. Doug Judd, the original creator of Hypertable, will present an overview of this genome sequencing analysis system. The presentation will cover the following topics:
Rationale for choosing NoSQL
Schema design
Sources and description of input data
Algorithms for generating and querying lookup tables
Table sizes and compression ratios
Lessons learned during system deployment
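As a purely illustrative sketch (an assumption, not the actual Hypertable schema or algorithms), a minimal in-memory version of the kind of lookup table used in sequence analysis might map every k-mer of a reference sequence to the positions where it occurs:

```python
from collections import defaultdict

# Toy k-mer lookup table: index every length-k substring of a reference
# sequence by its start positions, so query sequences can be located fast.
def build_kmer_table(seq, k):
    table = defaultdict(list)
    for i in range(len(seq) - k + 1):
        table[seq[i:i + k]].append(i)
    return table

ref = "ACGTACGTGA"
table = build_kmer_table(ref, 4)
print(table["ACGT"])  # [0, 4]
```

A production system would store such tables in a distributed store like Hypertable and compress them heavily, but the generate-then-query pattern is the same.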
Introduction to Discovery Studio as part of the hands-on demonstration organized at the Structural Bioinformatics Laboratory, Åbo Akademi University. The tutorials are available upon request.
Drug Design Based on Bioinformatics Tools (NIPER Mohali)
Drug design is a complex and time-consuming process, but by using these specific tools we can simplify the process, save time, and produce effective new drugs that benefit health care.
Computation and visualization of protein topology graphs including ligands (Tim Schäfer)
Talk by Tim Schäfer at the German Conference on Bioinformatics 2012. Jena, Germany.
Ligand information is of great interest to understand protein function. Protein structure topology can be modeled as a graph, with secondary structure elements as vertices and spatial contacts between them as edges. Meaningful representations of such graphs in 2D are required for the visual inspection, comparison and analysis of protein folds, but their automatic visualization is still challenging.
We present an approach which solves this task, supports several graph types, and includes ligands. Our method generates a mathematically unique representation and high-quality 2D plots of the secondary structure of proteins based on a protein-ligand graph. This graph is computed from 3D atom coordinates in the Protein Data Bank (PDB) and the corresponding secondary structure element (SSE) assignments of the DSSP algorithm.
The Visualization of Protein-Ligand Graphs (VPLG) software enables rapid visualization of protein structures and exports graphs in various standard formats for further analysis.
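The graph model described above, SSEs as vertices and spatial contacts as edges, can be sketched minimally as an adjacency structure (the SSE names and contacts below are invented for illustration, not taken from any real protein):

```python
# Minimal protein-ligand graph sketch: vertices are secondary structure
# elements (helices H*, strands E*) plus a ligand vertex; edges are
# spatial contacts, stored symmetrically as an adjacency dict.
protein_graph = {
    "H1": {"H2", "E1"},
    "H2": {"H1"},
    "E1": {"H1", "E2", "LIG"},
    "E2": {"E1"},
    "LIG": {"E1"},   # ligand vertex, as in the protein-ligand graph
}

def degree(g, v):
    """Number of spatial contacts of a vertex."""
    return len(g[v])

print(degree(protein_graph, "E1"))  # 3
```

Layout algorithms like the one in VPLG then assign 2D coordinates to such a graph so that the fold can be inspected visually.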
Visual Exploration of Clinical and Genomic Data for Patient Stratification (Nils Gehlenborg)
Talk presented at the Simons Foundation Biotech Symposium "Complex Data Visualization: Approach and Application" (12 September 2014)
http://www.simonsfoundation.org/event/complex-data-visualization-approach-and-application/
In this talk I describe how we integrated a sophisticated computational framework directly into the StratomeX visualization technique to enable rapid exploration of tens of thousands of stratifications in cancer genomics data, creating a unique and powerful tool for the identification and characterization of tumor subtypes. The tool can handle a wide range of genomic and clinical data types for cohorts with hundreds of patients. StratomeX also provides direct access to comprehensive data sets generated by The Cancer Genome Atlas Firehose analysis pipeline.
http://stratomex.caleydo.org
Examining gene expression and methylation with next gen sequencing (Stephen Turner)
Slides on RNA-seq and methylation studies using next-gen sequencing given at the University of Miami Hussman Institute for Human Genomics "Genetic Analysis of Complex Human Diseases" course in 2012 (http://hihg.med.miami.edu/educational-programs/analysis-of-complex-human-diseases/genetic-analysis-of-complex-human-diseases/)
Power to the People: Data Visualization in Biology and Medicine (Nils Gehlenborg)
In this talk, I discuss how data visualization contributes to the democratization of data in biology and medicine. I emphasize the need to increase visualization literacy in order to achieve true democratization of biomedical data.
A Unified Approach to Exploration, Authoring, and Communication with Reproduc... (Nils Gehlenborg)
Visualization plays two essential roles in data-driven scientific discovery. First, visualization is a key tool for data exploration and hypothesis generation. Second, visualization facilitates communication of insights and findings. In a typical analysis scenario, however, visualization for exploration and visualization for communication are two separate processes. They often involve different software tools and data representations. Even though sophisticated interactive visualization tools are available to explore data sets, findings are usually shared in the form of static images or functionally limited interactive visualizations. While these capture a particular state, they do not include any information about the exploration process that led to the finding.
In this talk I will describe how, by capturing the visual exploration process, visualizations can be made reproducible and sharable. My collaborators and I leverage such data about the analysis process to allow analysts to create "vistories", which are interactive, annotated figures that communicate insights and findings.
Receiving the John Kendrew Award is a great honour for me, and I am humbled to be joining the ranks of the previous recipients. None of this would have been possible without the many people who influenced my career at EMBL and Harvard Medical School, in particular, my past and present mentors. To me, the John Kendrew Award is not only a recognition of my achievements. I also consider it an acknowledgment of the importance of my field—visualisation of biomedical data—which was in its infancy when I started my PhD at the EMBL-EBI in 2006.
https://www.embl.de/aboutus/alumni/news/news_2018/20180302_gehlenborg/index.html
Mining Gems from the Data Visualization Literature (Nils Gehlenborg)
What is the data visualization community and what can we learn from it?
What are some great examples?
What are the reasons why we don’t see more of this work in bioinformatics? The valley of death ...
Interpreting data from cohort studies where clinical and molecular data across hundreds to thousands of patient samples need to be integrated, potentially spanning multiple time points, is challenging. In this presentation, I will discuss how data visualization can be used to drive or support this process, using tools that are applying the concept of “divide and conquer” to visual exploration. I will be presenting our early work on StratomeX and illustrate how this approach led to techniques such as Domino and LineUp, and will also introduce OncoThreads and Lineage, tools that we designed for visualization of cohorts with temporal and genealogical information, respectively.
Data Visualization in Biomedical Sciences: More than Meets the Eye (Nils Gehlenborg)
In science, data visualization serves two primary purposes. The first is to explore data sets interactively and the second is to communicate discoveries. However, the requirements for visualizations employed in these activities are very different. Therefore, the software tools used for these purposes are typically disconnected, creating significant challenges for reproducibility and effective communication of discoveries in data-driven biomedical science. In this presentation, I will address how a new approach to creating data visualization tools can connect data analysts and other stakeholders inside and outside the scientific community. I will introduce and demonstrate the "Vistories" approach that was motivated by these questions.
Presented at the 5th Cancer Research UK Big Data Analytics Conference on Data Visualization.
Talk presented at a Bayer Data Science Meetup. How can data visualization bridge between analysts and decision makers? How can we enable data-driven discovery with visualization and data-driven communication? I introduce and demonstrate the Vistories approach motivated by the reproducibility crisis in science.
HiGlass + HiPiler: Making Sense of Chromosome Interaction Data with Multi-Sca... (Nils Gehlenborg)
How can we visualize a 3,000,000 x 3,000,000 cell matrix and allow analysts to explore features across a wide range of different scales? We built HiGlass, a web-based visualization tool for analysis of Hi-C and other genome-wide chromosome interaction data that enables comparison of multiple contact matrices and integration of other data types. To complement this functionality, we also created HiPiler, which enables investigators to view and explore thousands of features such as loops or TADs and correlate their appearance with their genomic locations and experimental conditions. In my talk, I will discuss the design of HiGlass and HiPiler and present a range of use cases for these applications.
(Thanks to Fritz Lekschas for providing many of the slides.)
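The multi-scale idea behind such tools, precomputing coarser aggregations of a huge contact matrix so every zoom level stays small enough to render, can be illustrated with a toy downsampling step (this sketch is an assumption for illustration, not HiGlass's actual tiling code):

```python
# Aggregate a contact matrix into 2x2 blocks: one level of a multi-scale
# pyramid. Repeating this yields ever-coarser views of the same data.
def downsample(matrix):
    n = len(matrix)
    return [
        [sum(matrix[r][c] for r in (2 * i, 2 * i + 1) for c in (2 * j, 2 * j + 1))
         for j in range(n // 2)]
        for i in range(n // 2)
    ]

contacts = [
    [1, 2, 0, 0],
    [3, 4, 0, 1],
    [0, 0, 5, 5],
    [0, 1, 5, 5],
]
print(downsample(contacts))  # [[10, 1], [1, 20]]
```

For a 3,000,000 x 3,000,000 matrix, about 21 such halvings reduce it to a single tile, which is why a fixed pyramid of precomputed levels makes interactive pan-and-zoom feasible.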
Multi-Scale Visualization Tools for Exploration of Chromosome Interaction ... (Nils Gehlenborg)
How do you visualize a 3 million x 3 million matrix and allow users to explore features across a wide range of different scales? We built HiGlass and HiPiler, web-based visualization tools for analysis of Hi-C and other genome-wide chromosome interaction data that enable comparison of multiple contact matrices and integration of other data types. In my talk, I will discuss several use cases and describe how we architected HiGlass and HiPiler.
Approaches for the Integration of Visual and Computational Analysis of Biomed... (Nils Gehlenborg)
The integration of computational and statistical approaches with visualization tools is becoming crucial as biomedical data sets are rapidly growing in size. Finding efficient solutions that address the interplay between data management, algorithmic and visual analysis tools is challenging. I will discuss some of these challenges and demonstrate how we are addressing them in our Refinery Platform project (http://www.refinery-platform.org).
The International Symposium on Biological Data Visualization (BioVis) is an interdisciplinary event covering all aspects of visualization in biology. The Symposium brings together researchers from the visualization, bioinformatics, and biology communities with the purpose of educating, inspiring, and engaging visualization researchers in problems in biological data visualization as well as bioinformatics and biology researchers in state-of-the-art visualization research. In order to further engage with a biological audience, the fourth and fifth editions were organized in collaboration with the International Society for Computational Biology and held jointly with their ISMB annual conference.
We are keen to maintain a presence with the VIS community, and this meetup will serve as a focus for researchers in BioVis to meet up at VIS to discuss ideas for further development of the Biological Visualisation Community. In particular, this meetup will bring together BioVis researchers and interested groups within the City of Chicago, such as the Chicago Data Visualization Group, which runs a regular Data Visualization Meetup. Website: http://www.meetup.com/The-Chicago-Data-Visualization-Group/
Visualization Tools for the Refinery Platform - Supporting reproducible resea... (Nils Gehlenborg)
The Refinery Platform (http://www.refinery-platform.org) is a web-based data visualization and analysis system for epigenomic and genomic data designed to support reproducible biomedical research. The analysis backend employs the Galaxy Workbench and connects to a data repository based on the ISA-Tab data description format. In my talk I will discuss the exploratory visualization tools that we have integrated into Refinery.
Cancer cell metabolism: special reference to the lactate pathway (AADYARAJPANDEY1)
Normal Cell Metabolism:
Cellular respiration describes the series of steps that cells use to break down sugar and other chemicals to get the energy they need to function.
Energy is stored in the bonds of glucose and when glucose is broken down, much of that energy is released.
Cells utilize energy in the form of ATP.
The first step of respiration is called glycolysis. In a series of steps, glycolysis breaks glucose into two smaller molecules of a chemical called pyruvate. A small amount of ATP is formed during this process.
Most healthy cells continue the breakdown in a second process, called the Krebs cycle. The Krebs cycle allows cells to “burn” the pyruvate made in glycolysis to get more ATP.
The last step in the breakdown of glucose is called oxidative phosphorylation (Ox-Phos).
It takes place in specialized cell structures called mitochondria. This process produces a large amount of ATP. Importantly, cells need oxygen to complete oxidative phosphorylation.
If a cell completes only glycolysis, only 2 molecules of ATP are made per glucose. However, if the cell completes the entire respiration process (glycolysis, Krebs cycle, oxidative phosphorylation), about 36 molecules of ATP are created, giving it much more energy to use.
IN CANCER CELL:
Unlike healthy cells that "burn" the entire molecule of sugar to capture a large amount of energy as ATP, cancer cells are wasteful.
Cancer cells only partially break down sugar molecules. They overuse the first step of respiration, glycolysis. They frequently do not complete the second step, oxidative phosphorylation.
This results in only 2 molecules of ATP per each glucose molecule instead of the 36 or so ATPs healthy cells gain. As a result, cancer cells need to use a lot more sugar molecules to get enough energy to survive.
Unlike healthy cells that "burn" the entire molecule of sugar to capture a large amount of energy as ATP, cancer cells are wasteful.
Cancer cells only partially break down sugar molecules. They overuse the first step of respiration, glycolysis. They frequently do not complete the second step, oxidative phosphorylation.
This results in only 2 molecules of ATP per each glucose molecule instead of the 36 or so ATPs healthy cells gain. As a result, cancer cells need to use a lot more sugar molecules to get enough energy to survive.
introduction to WARBERG PHENOMENA:
WARBURG EFFECT Usually, cancer cells are highly glycolytic (glucose addiction) and take up more glucose than do normal cells from outside.
Otto Heinrich Warburg (; 8 October 1883 – 1 August 1970) In 1931 was awarded the Nobel Prize in Physiology for his "discovery of the nature and mode of action of the respiratory enzyme.
WARNBURG EFFECT : cancer cells under aerobic (well-oxygenated) conditions to metabolize glucose to lactate (aerobic glycolysis) is known as the Warburg effect. Warburg made the observation that tumor slices consume glucose and secrete lactate at a higher rate than normal tissues.
THE IMPORTANCE OF MARTIAN ATMOSPHERE SAMPLE RETURN.Sérgio Sacani
The return of a sample of near-surface atmosphere from Mars would facilitate answers to several first-order science questions surrounding the formation and evolution of the planet. One of the important aspects of terrestrial planet formation in general is the role that primary atmospheres played in influencing the chemistry and structure of the planets and their antecedents. Studies of the martian atmosphere can be used to investigate the role of a primary atmosphere in its history. Atmosphere samples would also inform our understanding of the near-surface chemistry of the planet, and ultimately the prospects for life. High-precision isotopic analyses of constituent gases are needed to address these questions, requiring that the analyses are made on returned samples rather than in situ.
Professional air quality monitoring systems provide immediate, on-site data for analysis, compliance, and decision-making.
Monitor common gases, weather parameters, particulates.
Seminar of U.V. Spectroscopy by SAMIR PANDASAMIR PANDA
Spectroscopy is a branch of science dealing the study of interaction of electromagnetic radiation with matter.
Ultraviolet-visible spectroscopy refers to absorption spectroscopy or reflect spectroscopy in the UV-VIS spectral region.
Ultraviolet-visible spectroscopy is an analytical method that can measure the amount of light received by the analyte.
Nutraceutical market, scope and growth: Herbal drug technologyLokesh Patil
As consumer awareness of health and wellness rises, the nutraceutical market—which includes goods like functional meals, drinks, and dietary supplements that provide health advantages beyond basic nutrition—is growing significantly. As healthcare expenses rise, the population ages, and people want natural and preventative health solutions more and more, this industry is increasing quickly. Further driving market expansion are product formulation innovations and the use of cutting-edge technology for customized nutrition. With its worldwide reach, the nutraceutical industry is expected to keep growing and provide significant chances for research and investment in a number of categories, including vitamins, minerals, probiotics, and herbal supplements.
(May 29th, 2024) Advancements in Intravital Microscopy- Insights for Preclini...Scintica Instrumentation
Intravital microscopy (IVM) is a powerful tool utilized to study cellular behavior over time and space in vivo. Much of our understanding of cell biology has been accomplished using various in vitro and ex vivo methods; however, these studies do not necessarily reflect the natural dynamics of biological processes. Unlike traditional cell culture or fixed tissue imaging, IVM allows for the ultra-fast high-resolution imaging of cellular processes over time and space and were studied in its natural environment. Real-time visualization of biological processes in the context of an intact organism helps maintain physiological relevance and provide insights into the progression of disease, response to treatments or developmental processes.
In this webinar we give an overview of advanced applications of the IVM system in preclinical research. IVIM technology is a provider of all-in-one intravital microscopy systems and solutions optimized for in vivo imaging of live animal models at sub-micron resolution. The system’s unique features and user-friendly software enables researchers to probe fast dynamic biological processes such as immune cell tracking, cell-cell interaction as well as vascularization and tumor metastasis with exceptional detail. This webinar will also give an overview of IVM being utilized in drug development, offering a view into the intricate interaction between drugs/nanoparticles and tissues in vivo and allows for the evaluation of therapeutic intervention in a variety of tissues and organs. This interdisciplinary collaboration continues to drive the advancements of novel therapeutic strategies.
Richard's entangled aventures in wonderlandRichard Gill
Since the loophole-free Bell experiments of 2020 and the Nobel prizes in physics of 2022, critics of Bell's work have retreated to the fortress of super-determinism. Now, super-determinism is a derogatory word - it just means "determinism". Palmer, Hance and Hossenfelder argue that quantum mechanics and determinism are not incompatible, using a sophisticated mathematical construction based on a subtle thinning of allowed states and measurements in quantum mechanics, such that what is left appears to make Bell's argument fail, without altering the empirical predictions of quantum mechanics. I think however that it is a smoke screen, and the slogan "lost in math" comes to my mind. I will discuss some other recent disproofs of Bell's theorem using the language of causality based on causal graphs. Causal thinking is also central to law and justice. I will mention surprising connections to my work on serial killer nurse cases, in particular the Dutch case of Lucia de Berk and the current UK case of Lucy Letby.
A brief information about the SCOP protein database used in bioinformatics.
The Structural Classification of Proteins (SCOP) database is a comprehensive and authoritative resource for the structural and evolutionary relationships of proteins. It provides a detailed and curated classification of protein structures, grouping them into families, superfamilies, and folds based on their structural and sequence similarities.
Visualization Approaches for Biomedical Omics Data: Putting It All Together
1. Visualization Approaches for Biomedical Omics Data: Putting It All Together
Nils Gehlenborg
Harvard Medical School
Center for Biomedical Informatics
@nils_gehlenborg
5. “In every chain of reasoning, the evidence of the last conclusion can be no greater than that of the weakest link of the chain, whatever may be the strength of the rest.”
- Thomas Reid, Essays on the Intellectual Powers of Man (1785)
6. [Diagram: the discovery loop between Human and Machine. The Machine side performs computation and data generation; the Human side performs interpretation and cognition, turning data into hypotheses, discoveries, and knowledge.]
28. Proteome
HOW?
mass spectrometry of peptides
array-based techniques
WHAT?
presence of peptides & proteins
abundance of peptides & proteins
30. Genome: What is the DNA sequence?
Transcriptome: Which genes are active?
Proteome: Which proteins are present?
Metabolome: Which metabolites can be identified?
31. Metabolome
HOW?
mass spectrometry
NMR spectroscopy
WHAT?
presence of metabolites
abundance of metabolites
32. Genome: What is the DNA sequence?
Transcriptome: Which genes are active?
Proteome: Which proteins are present?
Metabolome: Which metabolites can be identified?
Interactome: Which molecules are interacting?
33. Interactome
HOW?
mass spectrometry, yeast two-hybrid
text mining
WHAT?
links between molecules
34. Epigenome: How are DNA and associated proteins modified?
Genome: What is the DNA sequence?
Transcriptome: Which genes are active?
Proteome: Which proteins are present?
Metabolome: Which metabolites can be identified?
Interactome: Which molecules are interacting?
35. Epigenome
HOW?
ChIP-seq, ChIP-chip (histone modifications)
bisulfite sequencing (DNA methylation)
WHAT?
histone modifications along genome
DNA methylation patterns along genome
39. Nucleome: How is the DNA organized in space/time?
Epigenome: How are DNA and associated proteins modified?
Genome: What is the DNA sequence?
Transcriptome: Which genes are active?
Proteome: Which proteins are present?
Metabolome: Which metabolites can be identified?
Interactome: Which molecules are interacting?
40. Nucleome
HOW?
3C/4C/5C chromosome conformation capture
Hi-C sequencing
WHAT?
contact probabilities for different parts of the genome
41. Lieberman-Aiden et al., Comprehensive Mapping of Long-Range Interactions Reveals Folding Principles of the Human Genome, 2009
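To illustrate what a Hi-C contact map looks like as data, here is a toy model of my own (not from the talk or the paper): a symmetric matrix in which the contact probability between genomic bins decays as a power law of their genomic distance, echoing the roughly 1/s scaling Lieberman-Aiden et al. observed:

```python
# Toy Hi-C-style contact matrix (illustrative model, not real data):
# contact probability between bins i and j decays as |i - j| ** -alpha.

def toy_contact_matrix(n_bins: int, alpha: float = 1.0) -> list:
    """Symmetric n_bins x n_bins matrix of pairwise contact probabilities."""
    matrix = []
    for i in range(n_bins):
        row = []
        for j in range(n_bins):
            d = abs(i - j)
            # Self-contacts get the maximum probability of 1.0.
            row.append(1.0 if d == 0 else float(d) ** -alpha)
        matrix.append(row)
    return matrix

m = toy_contact_matrix(100)
```

Rendered as a heatmap, such a matrix shows the bright diagonal and distance-dependent decay characteristic of real Hi-C maps; deviations from that decay are what reveal folding structure.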
58. StratomeX
M Streit, A Lex, S Gratzl, C Partl, D Schmalstieg, H Pfister, PJ Park, N Gehlenborg, “Guided Visual Exploration of Genomic Stratifications in Cancer“, Nature Methods 11:884-885 (2014)
A Lex, M Streit, H-J Schulz, C Partl, D Schmalstieg, PJ Park, N Gehlenborg, “StratomeX: Visual Analysis of Large-Scale Heterogeneous Genomics Data for Cancer Subtype Characterization“, Computer Graphics Forum 31:1175-1184 (2012)
60. Is there a mutation that overlaps with this mRNA cluster?
Is there a mutually exclusive mutation?
Is there a CNV that affects survival?
Is there a pathway that is enriched in this cluster?
Guided Exploration: Query → Rank → Visualize, applied to stratifications, clinical parameters, and pathways.
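One way to think about the query-and-rank step is as scoring every candidate stratification by how well one of its groups overlaps the cluster the user selected. The sketch below uses a simple Jaccard score over hypothetical patient sets; this is my own simplification of the idea, not StratomeX's actual scoring method:

```python
# Illustrative "query -> rank" sketch: given a selected patient cluster,
# score each candidate stratification by its best-overlapping group.
# Jaccard overlap is a stand-in for the real ranking scores.

def jaccard(a: set, b: set) -> float:
    """Jaccard index: |intersection| / |union|, 0.0 for two empty sets."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def rank_stratifications(query: set, stratifications: dict) -> list:
    """Return (name, score) pairs, best-matching stratification first."""
    scores = {
        name: max(jaccard(query, group) for group in groups)
        for name, groups in stratifications.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical example: an mRNA cluster queried against other data types.
mrna_cluster = {"p1", "p2", "p3", "p4"}
candidates = {
    "mutation status":  [{"p1", "p2", "p3"}, {"p4", "p5", "p6"}],
    "copy number":      [{"p1", "p5"}, {"p2", "p3", "p4", "p6"}],
    "pathway activity": [{"p5", "p6"}, {"p1", "p2", "p3", "p4"}],
}
ranking = rank_stratifications(mrna_cluster, candidates)
```

The visualize step would then present the top-ranked stratifications next to the queried cluster so the analyst can judge the overlaps directly.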
77. Acknowledgements
Harvard SEAS: Alexander Lex, Hanspeter Pfister
Harvard Medical School: Psalm Haseley, Richard Park, Peter J Park
Broad Institute of MIT & Harvard: Michael S Noble, Douglas Voet, Lihua Zou, Spring Liu, Hailei Zhang, Sachet Shukla, Aaron McKenna, Andrew Cherniak, Pei Lin, Gad Getz
MD Anderson Cancer Center: Jianhua Zhang, Terrence Wu, Ian Watson, Steven Quayle, Lynda Chin
Graz University of Technology: Christian Partl, Dieter Schmalstieg
Johannes Kepler University Linz: Samuel Gratzl, Stefan Luger, Marc Streit
University of Rostock: Hans-Jörg Schulz
Harvard School of Public Health: Ilya Sytchev, Shannan Ho Sui, Winston Hide
Funding: NIH/NHGRI K99 HG007583