This document provides a summary and review of notable publications in translational bioinformatics from approximately 2014 to early 2015. It begins with an introduction and overview of the goals and process for selecting publications. Several key topics and publications are then highlighted, including precision medicine and clinical prediction models, variation analysis, cancer genomics, clinical applications of genomics, pharmacogenomics, systems biology approaches, and natural language processing. The document concludes with acknowledgments and notes limitations in scope.
This document provides a summary and review of trends in translational bioinformatics in 2013 by Russ Altman. It begins with an overview and goals section, followed by sections highlighting important papers from 2013 in areas like omics medicine, cool new methods, cancer research, and drugs/delivery. The document reviews over 350 papers, focusing on 27 that are briefly summarized. It aims to provide colleagues in the field a "snapshot" of important progress and opportunities in using informatics approaches to link basic biological research to clinical applications in 2013.
The document provides an overview and review of scientific trends and publications in translational bioinformatics from approximately the last 14 months. It discusses several key papers related to topics like clinical genomics, drugs, genetic basis of disease, and emerging data sources. The author evaluated over 100 papers and selected around 50 final papers to highlight, discussing 32 of them briefly across 8 topic areas. The goal was to provide a "snapshot" of important progress and opportunities in the field.
This document provides a summary of the 2012 Translational Bioinformatics conference. It highlights several important papers presented at the conference in areas like systems medicine, finding and defining phenotypes, biomarkers, and genomic infrastructure. The document outlines the goals of the conference, the process used to select papers, caveats about the selection, and thanks various contributors. It then briefly summarizes several key papers from the conference in these areas.
This document summarizes a presentation on new sources of big data for precision medicine. It discusses how new data sources like genomics, the human microbiome, epigenomics, and the exposome are generating large amounts of data. It then covers the evolution of precision medicine from concepts like personalized medicine and how strategic initiatives in the UK and US are supporting precision medicine research through funding programs and projects like the Cancer Genome Atlas, eMERGE, and exposome studies. The presentation raises the question of whether we are ready for precision medicine given these new data sources and research efforts.
From Bits to Bedside: Translating Big Data into Precision Medicine and Digita... - Dexter Hadley
Lecture Objectives:
1) To use examples from my research to define and introduce the ideals of precision medicine and digital health.
2) To introduce how large-scale, population-wide analysis of data can be used to facilitate these two ideals.
3) To introduce how freely available open data can be used to facilitate these two ideals.
4) To show how mobile technology can be used to facilitate these two ideals.
The document summarizes Dr. Matthieu-P. Schapranow's presentation at the Festival of Genomics in Boston on turning big medical data into precision medicine. It describes an in-memory database approach that enables real-time analysis of heterogeneous medical data sources. This allows clinicians and researchers to interactively explore patient data, clinical trials, pathways, and literature to obtain personalized treatment recommendations. The system was designed using a human-centered methodology to ensure usability, effectiveness, and feasibility for precision medicine applications.
Precision Medicine in Oncology Informatics - Warren Kibbe
Precision medicine in oncology aims to provide targeted cancer treatments based on a patient's individual tumor characteristics. The presentation discusses precision oncology initiatives including NCI-MATCH clinical trials which assign cancer therapies based on a tumor's molecular abnormalities rather than location. It outlines plans to expand genomically-based cancer trials, understand and overcome treatment resistance through molecular analysis, and establish a national cancer database integrating genomic and clinical data to accelerate cancer research. Cloud computing platforms are being developed to provide researchers access to large cancer genomic and clinical datasets. The goal is to advance precision cancer treatment by incorporating individual patient genetics and biomarkers into therapeutic decision making.
Digital Pathology, FDA Approval and Precision Medicine - Joel Saltz
Digital pathology platforms combined with machine learning can improve the consistency and quality of clinical decision making by precisely scoring known criteria from pathology images and predicting treatment outcomes and cancer types. Researchers are developing tools to extract features from pathology images, link these features to molecular data and clinical outcomes, and use these integrated datasets to gain new insights into cancer and select the best interventions. The SEER Virtual Tissue Repository aims to enable population-level cancer research by creating a linked collection of de-identified clinical data and whole slide images from pathology samples that can be analyzed using computational methods.
The reality of moving towards precision medicine - Elia Stupka
How do we move towards precision medicine? How can we deliver on the big data in health promise? Who will be the enablers and players? Pharma, Big Tech, or newcomers?
Chapter 15: Precision Medicine in Oncology - Nilesh Kucha
This document discusses precision medicine in oncology and molecular monitoring of cancer patients. It describes how molecular characterization of tumors can guide treatment decisions and help develop targeted therapies. Next-generation DNA sequencing is allowing large amounts of tumor DNA to be analyzed to identify molecular targets and guide clinical trials matching treatments to tumor mutations. Challenges include limiting sequencing to known targets, accounting for germline variants, incidental findings, and integrating sequencing results into clinical decision making. Repeated biopsies during treatment can provide insights into drug sensitivity and resistance mechanisms in individual patients.
The global precision medicine market was valued at USD 50.99 billion in 2018 and is projected to reach USD 88.25 billion by 2023, growing at a CAGR of 11.60%. The market is segmented by ecosystem players, therapeutics, and technology, with cancer being the largest therapeutic segment. Key drivers of growth include increasing investments in precision medicine R&D, rising prevalence of chronic diseases, and advancements in technologies such as genomics, big data analytics, and companion diagnostics.
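The growth figures above can be sanity-checked by compounding the 2018 value forward at the stated CAGR. A minimal sketch (small rounding differences against the quoted USD 88.25 billion are expected, since the published CAGR is itself rounded):

```python
# Sanity check of the quoted market projection: USD 50.99B in 2018,
# compounded at a CAGR of 11.60% over the 5 years to 2023.
def project_value(start, cagr, years):
    """Compound a starting value forward at a constant annual growth rate."""
    return start * (1 + cagr) ** years

projected_2023 = project_value(50.99, 0.1160, 5)
print(round(projected_2023, 2))  # close to the quoted USD 88.25 billion
```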
There are only around 500 geneticists and 2,400 genetic counselors in the U.S. to help integrate genomic medicine into patient care. DNA Direct aims to address this shortage and other barriers through technology solutions that provide education, decision support, and expert guidance to patients, providers, payors, and medical centers. Their programs have shown success in improving patient compliance with genetic screening and understanding of test results.
This curriculum vitae summarizes the qualifications and experience of Weiliang Qiu. Qiu has over 12 years of experience in data analysis, especially of clinical trial and observational data. He has published over 70 peer-reviewed papers and edited two academic journals. Qiu has a Ph.D. in Statistics and is currently an Associate Biostatistician and Assistant Professor at Brigham and Women's Hospital, where he provides statistical support for clinical trials and develops novel statistical methods.
This document discusses precision medicine and its future applications. It notes that currently many patients do not respond to initial treatments for common conditions like depression, asthma, diabetes and Alzheimer's. Precision medicine aims to change this by using massive datasets including genomics, clinical information, and population data to better understand disease at the individual level and tailor diagnosis and treatment specifically for each patient. This more personalized approach could help get the right treatment to patients more quickly and effectively.
Summary, outcomes and action plan presented by Dr. Angela Christiano at the end of the two-day Alopecia Areata Research Summit held November 14-15, 2016 in New York, NY.
Seventh Annual Next Generation Dx Summit - Jaime Hodges
The Next Generation Dx Summit (www.nextgenerationdx.com), entering its seventh year, brings together more than 800 diagnostics professionals from across the world, providing comprehensive programming and valuable networking opportunities. Spanning from clinical diagnostics to business strategy, this year’s expanded program encompasses predictive cancer biomarkers, companion diagnostics, infectious disease, point-of-care, pharmacy-based diagnostics, cell-free DNA, commercialization, cancer immunotherapy, and reimbursement. With widespread coverage of all the most relevant diagnostics topics, the Next Generation Dx Summit promises to be a must-attend event to hear the latest announcements and developments in this rapidly evolving field.
Presenter: Marina Sirota, UCSF
Recent advances in genome typing and sequencing technologies have enabled quick generation of a vast amount of molecular data at very low cost. The mining and computational analysis of this type of data can help shape new diagnostic and therapeutic strategies in biomedicine. In this talk, I will discuss how such technological advances in combination with data science and integrative analysis can be applied to drug discovery in the context of drug target identification, computational drug repurposing, and population stratification approaches.
To date, the Registry has epidemiology and quality-of-life data from 11,180 self-registered patients, with 4,196 well-characterized samples of DNA, lymphoblast lines, and sera for future research studies.
Accelerating the benefits of genomics worldwide - Joaquin Dopazo
Grand Challenges in Genomics
A Joint NHGRI and Wellcome Trust Strategic Meeting
25 and 26 February 2019
https://www.wellcomeevents.org/WELLCOME/media/uploaded/EVWELLCOME/event_661/Draft_agenda_for_WT_December_2018.pdf
Joint lecture: Nicky Mulder, Han Brunner and Joaquin Dopazo
The document summarizes a presentation on the Norwegian clinical genetics analysis platform "genAP". Key points include:
- "genAP" aims to develop an ICT infrastructure for centralized storage of human genome data to allow distributed use nationally and potentially internationally.
- It seeks to efficiently analyze sensitive genome data through existing tools and integrate genome data into clinical diagnostics and treatment.
- Challenges include standardizing analyses, conveying complex information to clinicians, ensuring data security and privacy, and navigating research versus clinical applications.
- Examples provided include automated variant analysis, a decision support system for drug dosages, and plans for future clinical pilots.
This document summarizes a presentation on genes and environment in personalized medicine. It discusses:
1) How existing databases of gene-disease associations are limited for applying genome sequencing results to an individual patient due to incomplete data.
2) A database called VARIMED that aims to address these limitations by curating over 12,000 papers and 192,000 SNPs and their associations with 4,400 diseases and phenotypes.
3) Challenges in moving from odds ratios to likelihood ratios for assessing disease risk based on genomic data and an individual's clinical information.
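The odds-ratio-to-likelihood-ratio challenge above follows standard Bayesian diagnostic reasoning: post-test odds equal pre-test odds multiplied by a likelihood ratio. A minimal illustrative sketch (not VARIMED's actual method; the baseline risk and LR values are hypothetical):

```python
# Illustrative Bayes update of disease risk using a likelihood ratio (LR):
# post-test odds = pre-test odds * LR, then convert odds back to probability.
def update_risk(pretest_prob, likelihood_ratio):
    """Return post-test probability given a pre-test probability and an LR."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

# e.g. a 10% baseline risk and a genotype carrying a hypothetical LR of 2.0
print(round(update_risk(0.10, 2.0), 3))  # -> 0.182
```

Note that an odds ratio from a case-control study is not itself an LR; deriving per-genotype LRs from published odds ratios and allele frequencies is precisely the difficulty the presentation highlights.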
UCSF Informatics Day 2014 - Keith R. Yamamoto, "Precision Medicine" - CTSI at UCSF
Keith R. Yamamoto, PhD — Opening Remarks – Precision Medicine
Vice Chancellor for Research
Executive Vice Dean of the School of Medicine
Professor of Cellular and Molecular Pharmacology
UCSF
This document provides a summary of major scientific events, trends, and publications in translational bioinformatics from 2009 to early 2010. It highlights 24 seminal papers covering topics such as whole genome sequencing, genetic associations/mechanisms, network biomedicine, drug discovery, and infrastructure. The document aims to create a "snapshot" of important developments in the field for future generations to examine the progress made and opportunities that lay ahead in translational bioinformatics.
Please share this webinar with anyone who may be interested!
Watch all our webinars: https://www.youtube.com/playlist?list=PL4dDQscmFYu_ezxuxnAE61hx4JlqAKXpR
Cancer care is increasingly tailored to individual patients, who can undergo genetic or biomarker testing soon after diagnosis, to determine which treatments have the best chance of shrinking or eliminating tumours.
In this webinar, a pathologist and clinical oncologist discuss:
● how they are using these new tests,
● how they communicate results and treatment options to patients and caregivers, and
● how patients can be better informed on the kinds of tests that are in development or in use across Canada
View the video: https://youtu.be/_Wai_uMQKEQ
Follow our social media accounts:
Twitter - https://twitter.com/survivornetca
Facebook - https://www.facebook.com/CanadianSurvivorNet
Pinterest - https://www.pinterest.com/survivornetwork
YouTube - https://www.youtube.com/user/Survivornetca
Towards Digitally Enabled Genomic Medicine: the Patient of The Future - Larry Smarr
12.02.22
Invited Speaker
Hacking Life
TTI/Vanguard Conference
Title: Towards Digitally Enabled Genomic Medicine: the Patient of The Future
San Jose, CA
Precision Medicine in Oncology Informatics - Warren Kibbe
This document summarizes a presentation on precision medicine in oncology from an informatics perspective. It discusses the goals of precision oncology to target cancer treatments based on a patient's individual tumor characteristics. Major initiatives are described, including the NCI-MATCH trial which assigns cancer therapies based on a tumor's molecular abnormalities. The presentation outlines efforts through the Precision Medicine Initiative to expand genomic cancer trials, understand and overcome resistance to therapies through additional tumor profiling and preclinical models, and establish an integrated national cancer database.
Estimating the Statistical Significance of Classifiers used in the Predictio... - IOSR Journals
This document summarizes a research paper that analyzes the statistical significance of different classifiers for predicting tuberculosis. The paper first compares the accuracy of classifiers such as decision trees, support vector machines, k-nearest neighbor, and naive Bayes on tuberculosis data. It then evaluates these classifiers using a paired t-test to select the optimal model. The results showed that the difference in performance between support vector machines and decision trees was not statistically significant, while the comparisons of support vector machines against naive Bayes and k-nearest neighbor were statistically significant.
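The paired t-test used in that comparison can be sketched briefly: differences in per-fold accuracy between two classifiers are tested against zero. This is an illustrative implementation with hypothetical fold accuracies, not the paper's actual data:

```python
import math
import statistics

def paired_t_statistic(scores_a, scores_b):
    """Paired t statistic: t = mean(d) / (stdev(d) / sqrt(n)),
    where d are the per-fold differences between two classifiers."""
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(diffs)
    return statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))

# Hypothetical 5-fold cross-validation accuracies for two classifiers
svm = [0.82, 0.85, 0.80, 0.84, 0.83]
dt  = [0.78, 0.80, 0.77, 0.79, 0.81]
print(round(paired_t_statistic(svm, dt), 2))
```

The resulting t statistic is compared against the t distribution with n-1 degrees of freedom to decide whether the accuracy gap is statistically significant.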
Researching codes and conventions of music magazines double page spreadEvijaKapeljuha
The document discusses the codes and conventions used in music magazines including Q Magazine, Kerrang!, and Billboard. Some common conventions discussed include:
- Using large, prominent images of artists that take up an entire page to highlight their importance.
- Employing direct eye contact in photos to create a personal connection with readers.
- Organizing articles in columns for easy reading.
- Maintaining consistent color schemes and fonts to build brand recognition over time.
- Including additional photos on double page spreads to provide more insight into the featured artist.
Leon Cordeau is a professional engineer with over 20 years of experience in civil engineering and project management. He has worked in various roles including as a quality manager, faculty member, project engineer, and consultant. Cordeau has extensive experience managing infrastructure projects and teams across industries such as utilities, transportation, and education. He possesses strong technical skills and expertise in areas like project management, quality systems, construction, and engineering design.
Dr. Dennis Wang discusses possible ways to enable ML methods to be more powerful for discovery and to reduce ambiguity within translational medicine, allowing data-informed decision-making to deliver the next generation of diagnostics and therapeutics to patients quicker, at lowered costs, and at scale.
The talk by Dr. Dennis Wang was followed by a panel discussion with Mr. Albert Wang, M. Eng., Head, IT Business Partner, Translational Research & Technologies, Bristol-Myers Squibb.
Math, Stats and CS in Public Health and Medical ResearchJessica Minnier
Jessica Minnier gave a talk on her career path from studying mathematics at Lewis & Clark College to her current position as an Assistant Professor in biostatistics. She discussed how biostatistics, bioinformatics, and computational biology are applied in medical research, using examples like analyzing RNA sequencing data and building predictive models from electronic health records. Minnier also shared resources for learning more about careers in public health research and statistics.
Translational Genomics towards Personalized medicine - Medhavi Vashisth.pptMedhavi27
This document discusses various approaches to personalized and precision medicine, including stratified medicine, personalized medicine, and precision medicine. It also discusses the role of biomarkers, pharmacogenomics, genetic testing, biobanking, and examples of individualized cancer treatments. Key points include the use of targeted medicines based on disease stage or individual information, and ensuring best outcomes while reducing side effects. The goal of precision medicine is to integrate genomic data to guide health and disease prevention.
From Data to Action: Bridging Chemistry and Biology with Informatics at NCATSRajarshi Guha
This document discusses the work of the National Center for Advancing Translational Sciences (NCATS) in bridging chemistry, biology and informatics to improve the process of translational research. It describes NCATS' mission to develop new methods and technologies to enhance drug development and implementation of interventions to improve human health. Specifically, it outlines initiatives at NCATS such as the Chemical Genomics Center, which performs high-throughput screens and develops chemical probes and leads. It also discusses how translational bioinformatics uses data integration to move between molecular to clinical scales to enable decision-making in areas like drug design and target validation.
The document discusses the intersection of precision medicine, biomarkers, and healthcare policy. It describes how biomarkers and -omics data can be used for precision medicine to improve diagnostic accuracy, deliver targeted therapies, and stratify patient populations. However, clinical validation of biomarkers now requires large datasets and years of studies due to regulatory and payer requirements. This has reduced incentives for diagnostic innovation. The document also discusses challenges around clinical interpretation of complex multi-omic tests, evolving medical training and workflows, and disconnects between patent and reimbursement policies.
SLC CME- Evidence based medicine 07/27/2007cddirks
Saint Luke's Care, a quality improvement organization within Saint Luke's Health System, presents a CME presentation by Dr. Brent Beasley on Evidence Based Medical Care.
MseqDR consortium: a grass-roots effort to establish a global resource aimed ...Human Variome Project
The success of whole exome sequencing (WES) for highly heterogeneous disorders, such as mitochondrial disease, is limited by substantial technical and bioinformatics challenges to correctly identify and prioritize the extensive number of sequence variants present in each patient. The likelihood of success can be greatly improved if a large cohort of patient data is assembled in which sequence variants can be systematically analysed, annotated, and interpreted relative to known phenotype. This effort has engaged and united more than 100 international mitochondrial clinicians, researchers, and bioinformaticians in the Mitochondrial Disease Sequence Data Resource (MSeqDR) consortium that formed in June 2012 to identify and prioritize the specific WES data analysis needs of the global mitochondrial disease community. Through regular web-based meetings, we have familiarized ourselves with existing strengths and gaps facing integration of MSeqDR with public resources, as well as the major practical, technical, and ethical challenges that must be overcome to create a sustainable data resource. We have now moved forward toward our common goal by establishing a central data resource (http://mseqdr.org/) that has both public access and secure web-based features that allow the coherent compilation, organization, annotation, and analysis of WES and mtDNA genome data sets generated in both clinical- and research-based settings of suspected mitochondrial disease patients. The most important aims of the MSeqDR consortium are summarized in the MSeqDR portal within the Consortium overview sections. Consortium participants are organized in 3 working groups that include (1) Technology and Bioinformatics; (2) Phenotyping, databasing, IRB concerns and access; and (3) Mitochondrial DNA specific concerns. The online MSeqDR resource is organized into discrete sections to facilitate data deposition and common reannotation, data visualization, data set mining, and access management. 
With the support of the United Mitochondrial Disease Foundation (UMDF) and the NINDS/NICHD U54 supported North American Mitochondrial Disease Consortium (NAMDC), the MSeqDR prototype has been built. Current major components include common data upload and reannotation using a novel HBCR based annotation tool that has also been made publicly available through the website, MSeqDR GBrowse that allows ready visualization of all public and MSeqDR specific data including labspecific aggregate data visualization tracks, MSeqDR-LSDB instance of nearly 1250 mitochondrial disease and mitochodnrial localized genes that is based on the Locus Specific Database model, exome data set mining in individuals or families using the GEM.app tool, and Account & Access Management. Within MSeqDR GBrowse it is now possible to explore data derived from MitoMap, HmtDB, ClinVar, UCSC-NumtS, ENCODE, 1000 genomes, and many other resources that bioinformaticians recruited to the project are organizing.
Emerging collaboration models for academic medical centers _ our place in the...Rick Silva
- The document discusses emerging collaboration models between academic medical centers and other organizations in the genomics and precision medicine field, as genomic sequencing capabilities advance and more clinical cases are needed to power artificial intelligence platforms. It explores new partnership approaches around data sharing, patient engagement, infrastructure needs, and how academic medical centers can position themselves in this evolving ecosystem.
TCGC The Clinical Genome Conference 2015Nicole Proulx
Bio-IT World and Cambridge Healthtech Institute are again proud to host the Fourth Annual TCGC: The Clinical Genome Conference, inviting stakeholders impacting clinical genomics to share new findings and solutions for advancing the applications of clinical genome medicine.
Presentation "The Impact of All Data on Healthcare"
Keith Perry
Associate VP & Deputy CIO
UT MD Anderson Cancer Center
With continuing advancement in both technology and medicine, the drive is on to make all data meaningful to drive medical discovery and create actionable outcomes. With tools and capabilities to capture more data than ever before, the challenge becomes linking existing structured and unstructured clinical data with genomic data to increase the industry’s analytical footprint.
Learning Objectives:
∙ Discuss the need to make all data meaningful in order to speed discovery of new knowledge
∙ Provide examples of an analytical direction that supports evolution in medicine
∙ Expose the challenges facing the industry with respect to ~omits
This document provides an overview of the November 2000 issue of JALA (Journal of Analytical Laboratories Automation). It describes the development of a novel robotic system for the New York Cancer Project biorepository in collaboration with the Medical Automation Research Center. The biorepository receives 50-100 blood samples per day which are processed robotically to extract, quantify, aliquot and store DNA, plasma and RNA to be accessible to investigators. The robotic system aims to provide rapid random access to the hundreds of thousands of DNA samples stored for high-throughput analysis in studies of gene-environment interactions and cancer risk.
Data sharing drivers in precision oncology, biomedical research, and healthcare. Accelerating discovery, innovation, providing credit for all stakeholders - patients, researchers, care providers, payers.
Here are tutorial (Methods and Applications of NLP in Medicine) slides at AIME 2020 (International Conference on Artificial Intelligence in Medicine) provided by Dr. Hua Xu, Dr. Yifan Peng, Dr. Yanshan Wang, Dr. Rui Zhang. Through this half-day tutorial, we introduced our methodological efforts in applying NLP to the clinical domain, and showcase our real-world NLP applications in clinical practice and research across four institutions. We reviewed NLP techniques in solving clinical problems and facilitating clinical research, the state-of-the art clinical NLP tools, and share collaboration experience with clinicians, as well as publicly available EHR data and medical resources, and also concluded the tutorial with vast opportunities and challenges of clinical NLP. The tutorial will provide an overview of clinical backgrounds, and does not presume knowledge in medicine or health care.
- Discover new methods for managing clinical next-gen data with insights from Pfizer, Boston Children’s Hospital and AstraZeneca
- Uncover and critique the latest technologies out there for you to use in clinical trials. Mayo Clinic, Merck and Harvard Medical School let you into their trade secrets
- Hear the genomics strategies that Roche, Millennium and Regeneron are using for discovery and validation of clinically actionable biomarkers
-Bristol-Myers Squibb, Takeda and Partners Healthcare the role that NGS can play when implementing an effective strategy in the lab to speed up CDx development
- Learn how to integrate molecular details into medical decision making, with fresh data from Washington University School of Medicine and Genzyme
Systems Medicine: an introduction to the application of systems biology to health care applications. A prime for engineers, physicist, and mathematicians interested in a career in biomedicine
Using real-world evidence to investigate clinical research questionsKarin Verspoor
Adoption of electronic health records to document extensive clinical information brings with it the opportunity to utilise that information to support clinical research, and ultimately to support clinical decision making. In this talk, I discuss both these opportunities and the challenges that we face when working with real-world clinical data, and introduce some of the strategies that we are adopting to make this data more usable, and to extract more value from it. I specifically discuss the use of natural language processing to transform clinical documentation into structured data for this purpose.
This document provides a review and summary of major scientific events, trends, and publications in translational bioinformatics in 2008 by Russ B. Altman from Stanford University. Some of the key topics covered include the sequencing and analysis of an individual's diploid genome, next-generation sequencing technologies, genome-wide association studies, pharmacogenomics, analysis of high-throughput molecular data, neuroscience datasets, and using molecular information to improve disease detection and treatment. The review highlights over 25 seminal papers from 2008 and provides insights on emerging trends in the field.
This document provides an overview of precision medicine. It defines precision medicine as an emerging approach to disease treatment and prevention that considers individual variability in genes, environment, and lifestyle. It discusses key concepts such as genetics, genomics, genetic variation, and applications in oncology and pharmacogenomics. The document also outlines several national initiatives focused on precision medicine like the Precision Medicine Initiative and research networks such as eMERGE. Examples of precision medicine implementation in clinical practice and research are also summarized.
1) Manuel L. Gonzalez-Garay presented research projects at UTHealth from 2009-2015 investigating rare genetic disorders using next-generation sequencing and metabolomics.
2) An experimental design involved whole exome sequencing of 81 healthy volunteers from the Young Presidents' Organization to explore the practical value and challenges of genomic information for healthy individuals.
3) Analysis of the sequencing data and metabolomics profiles identified several disease-causing variants and metabolic deficiencies, demonstrating the potential for precision medicine approaches in volunteers of normal health.
What is greenhouse gasses and how many gasses are there to affect the Earth.moosaasad1975
What are greenhouse gasses how they affect the earth and its environment what is the future of the environment and earth how the weather and the climate effects.
hematic appreciation test is a psychological assessment tool used to measure an individual's appreciation and understanding of specific themes or topics. This test helps to evaluate an individual's ability to connect different ideas and concepts within a given theme, as well as their overall comprehension and interpretation skills. The results of the test can provide valuable insights into an individual's cognitive abilities, creativity, and critical thinking skills
Current Ms word generated power point presentation covers major details about the micronuclei test. It's significance and assays to conduct it. It is used to detect the micronuclei formation inside the cells of nearly every multicellular organism. It's formation takes place during chromosomal sepration at metaphase.
The binding of cosmological structures by massless topological defectsSérgio Sacani
Assuming spherical symmetry and weak field, it is shown that if one solves the Poisson equation or the Einstein field
equations sourced by a topological defect, i.e. a singularity of a very specific form, the result is a localized gravitational
field capable of driving flat rotation (i.e. Keplerian circular orbits at a constant speed for all radii) of test masses on a thin
spherical shell without any underlying mass. Moreover, a large-scale structure which exploits this solution by assembling
concentrically a number of such topological defects can establish a flat stellar or galactic rotation curve, and can also deflect
light in the same manner as an equipotential (isothermal) sphere. Thus, the need for dark matter or modified gravity theory is
mitigated, at least in part.
Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige...University of Maribor
Slides from talk:
Aleš Zamuda: Remote Sensing and Computational, Evolutionary, Supercomputing, and Intelligent Systems.
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Inter-Society Networking Panel GRSS/MTT-S/CIS Panel Session: Promoting Connection and Cooperation
https://www.etran.rs/2024/en/home-english/
Phenomics assisted breeding in crop improvementIshaGoswami9
As the population is increasing and will reach about 9 billion upto 2050. Also due to climate change, it is difficult to meet the food requirement of such a large population. Facing the challenges presented by resource shortages, climate
change, and increasing global population, crop yield and quality need to be improved in a sustainable way over the coming decades. Genetic improvement by breeding is the best way to increase crop productivity. With the rapid progression of functional
genomics, an increasing number of crop genomes have been sequenced and dozens of genes influencing key agronomic traits have been identified. However, current genome sequence information has not been adequately exploited for understanding
the complex characteristics of multiple gene, owing to a lack of crop phenotypic data. Efficient, automatic, and accurate technologies and platforms that can capture phenotypic data that can
be linked to genomics information for crop improvement at all growth stages have become as important as genotyping. Thus,
high-throughput phenotyping has become the major bottleneck restricting crop breeding. Plant phenomics has been defined as the high-throughput, accurate acquisition and analysis of multi-dimensional phenotypes
during crop growing stages at the organism level, including the cell, tissue, organ, individual plant, plot, and field levels. With the rapid development of novel sensors, imaging technology,
and analysis methods, numerous infrastructure platforms have been developed for phenotyping.
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...Travis Hills MN
Travis Hills of Minnesota developed a method to convert waste into high-value dry fertilizer, significantly enriching soil quality. By providing farmers with a valuable resource derived from waste, Travis Hills helps enhance farm profitability while promoting environmental stewardship. Travis Hills' sustainable practices lead to cost savings and increased revenue for farmers by improving resource efficiency and reducing waste.
Or: Beyond linear.
Abstract: Equivariant neural networks are neural networks that incorporate symmetries. The nonlinear activation functions in these networks result in interesting nonlinear equivariant maps between simple representations, and motivate the key player of this talk: piecewise linear representation theory.
Disclaimer: No one is perfect, so please mind that there might be mistakes and typos.
dtubbenhauer@gmail.com
Corrected slides: dtubbenhauer.com/talks.html
2. Disclosures
•Founder & Consultant, Personalis Inc (genome
sequencing for clinical applications). Consultant,
Pfizer (pharmaceuticals).
•Funding support: NIH, NSF, Pfizer, Oracle,
Microsoft, Lightspeed Ventures, PARSA Foundation.
•I am a fan of informatics, genomics, medicine &
clinical pharmacology.
3. Goals
•Provide an overview of the scientific trends and
publications in translational bioinformatics
•Create a “snapshot” of what seems to be
important in Spring, 2015 for the amusement of
future generations.
•Marvel at the progress made and the
opportunities ahead.
4. Process
1. Follow literature through the year
2. Solicit nominations from colleagues
3. Search key journals and key topics on PubMed
4. Evaluate & ponder
5. Select papers to highlight in ~1-3 slides
5. Caveats
•Translational bioinformatics = informatics methods
that link biological entities (genes, proteins, small
molecules) to clinical entities (diseases, symptoms,
drugs)--or vice versa.
•Considered last ~14 months
•Focused on human biology and clinical implications:
molecules, clinical data, informatics.
•NOTE: Amazing biological papers with
straightforward informatics generally not included.
•NOTE: Amazing informatics papers which don’t link
clinical to molecular generally not included.
6. Final list
•215 Semi-Finalists, 101 Finalists
•22 Presented here + 29 “shout outs” = 51
•Apologies to those I misjudged. Mistakes are mine.
•8 TOPICS: TBI & Society, Variation Triage, Cancer,
Clinical Genomics, Drugs, Systems & Networks, NLP
Applications, Odds & Ends
•Slides and bibliography will be posted at
rbaltman.wordpress.com
7. Thanks!
Conversations and recommendations
Phil Bourne
Atul Butte
Andrea Califano
Josh Denny
Michel Dumontier
Peter Elkin
Emily Flynn
Lewis Frey
Mark Gerstein
George Hripcsak
John Hogenesch
Enoch Huang
Larry Hunter
Rachel Karchin
Natalia Khuri
Alan Laederach
Yong Li
Tianyun Liu
Yves Lussier
Hua Fan-Minogue
Lucila Ohno-Machado
Chirag Patel
Beth Percha
Raul Rabadan
Dan Roden
Neil Sarkar
Nigam Shah
Josh Stuart
Peter Tarczy-Hornoch
Nick Tatonetti
Jessie Tenenbaum
Olga Troyanskaya
Piet van der Graaf
Scott Waldman
Dennis Wall
Rong Xu
9. “A new initiative on precision medicine.” (Collins &
Varmus, NEJM)
• Goal: Define & advance precision medicine—
treating disease considering individual variability.
• Method: Follow up on President Obama’s
announcement in State of Union address.
• Result: Major funding effort by NIH focused on
cancer initially, all diseases eventually. Create a cohort
of 1 x 10^6 individuals to support this.
• Conclusion: Translational Informatics is central to
discovery and implementation of precision medicine.
This conference will grow.
25635347
11. “Transparent Reporting of a multivariable prediction
model for Individual Prognosis or Diagnosis (TRIPOD):
the TRIPOD statement.” (Collins et al,Ann Intern Med)
• Goal: Improve the ways that prediction models (of all
types) are reported in the literature.
• Method: Develop a set of recommendations for
reporting studies that develop, validate, or update a
prediction model—both diagnostic & prognostic
• Result: Checklist of 22 items, copublished in multiple
journals.
• Conclusion: Prediction models will be key to precision
medicine, and should be communicated clearly.
25560714
14. “Why we should care about what you get for ‘only $99’
from a personal genomic service.” (Murray, Ann Intern
Med)
• Conclusion: DTC testing challenges traditional role of
physicians
“Misinterpretation of TPMT by a DTC Genetic Testing
Company” (Brownstein et al, Clin Pharm & Ther)
• Conclusion: Rare variants were misinterpreted and
could have caused harm.
“Regulatory changes raise troubling questions for
genomic testing.” (Evans et al, Genet Med)
• Conclusion: There are logical inconsistencies between
CLIA and HIPAA.
25255365
24514942
24714787
16. CLIA v. HIPAA…kind of like King Kong v. Godzilla…
or Ninja v. Samurai…
17. “Meaningful use of pharmacogenetics.” (Ratain &
Johnson, Clin Pharm & Ther)
• Conclusion: PGx is ready for implementation
“Useless until proven effective: the clinical utility of
preemptive pharmacogenetic testing.” (Janssens &
Deverka, Clin Pharm & Ther)
• Conclusion: PGx is not ready for implementation
“Pharmacogenomic knowledge gaps and educational
resource needs among physicians in selected
specialties.” (Johansen Taber & Dickinson et al, PGx
Pers Med)
• Conclusion: MDs are unsure how to use PGx.
25399712
25399713
25045280
18. King Kong favors PGx
Ninjas favor PGx…but
EVERYONE needs more
education
about the issue
20. “Guidelines for investigating causality of sequence
variants in human disease.” (MacArthur et al, Nature)
• Goal: Clear guidelines for reporting disease-causing
variants in genome
• Method: Discuss key challenges in establishing and
documenting evidence of causality
• Result: Propose guidelines for summarizing
confidence in variant pathogenicity.
• Conclusion: Harmonization of reporting
expectations will assist in dissemination of good
genetic annotation information.
24759409
23. “GRASP: analysis of genotype-phenotype results from
1390 genome-wide association studies and
corresponding open access database” (Leslie et al,
Bioinformatics)
• Goal: Create a publicly available database of GWAS
results.
• Method: Annotation of 1390 GWAS studies with
search + manual annotation.
• Result: > 6.2 Million SNPs associated with
phenotypes.
• Conclusion: A useful resource for integration with
other data sets in support of precision medicine.
24931982
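At its core, a resource like GRASP is a large SNP-phenotype association table. As a minimal sketch (toy records and invented p-values, not the actual GRASP schema), a lookup for genome-wide-significant hits might look like:

```python
# Toy GRASP-style association table: (SNP id, phenotype, p-value).
# All records and p-values below are invented for illustration.
associations = [
    ("rs7903146", "Type 2 diabetes",   2.0e-40),
    ("rs7903146", "Fasting glucose",   1.1e-9),
    ("rs429358",  "Alzheimer disease", 5.0e-60),
    ("rs1042522", "Cancer risk",       3.0e-2),
]

def lookup(snp, p_max=5e-8):
    """Return phenotypes associated with `snp` at p <= p_max
    (5e-8 is the conventional genome-wide significance threshold)."""
    return [(pheno, p) for s, pheno, p in associations
            if s == snp and p <= p_max]

hits = lookup("rs7903146")
```

The real database adds study metadata, ancestry, and replication fields to each association.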
25. “Effective diagnosis of genetic disease by
computational phenotype analysis of the disease-
associated genome.” (Zemojtel et al, Sci Trans Med)
• Goal: Create an automated platform for diagnosis of
rare Mendelian disease, including an enriched “disease-
associated” NGS panel.
• Method: Assess semantic similarity of phenotype to
known diseases, and assess variant pathogenicity.
• Result: Mean rank of 2.1 on 50 retrospective cases,
and 2.4 on 11/40 prospective cases.
• Conclusion: Methods for automated diagnosis of
novel genetic diseases are within reach.
25186178
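The core ranking step, matching a patient's phenotype profile against disease annotations, can be sketched with a simple set-overlap score. The paper uses a true semantic similarity over the Human Phenotype Ontology; the Jaccard stand-in and the term sets below are invented for illustration:

```python
# Hypothetical disease-to-HPO-term annotations (invented for illustration).
disease_terms = {
    "Disease A": {"HP:0001250", "HP:0001263", "HP:0000252"},
    "Disease B": {"HP:0001250", "HP:0002120"},
    "Disease C": {"HP:0000486"},
}

def rank_diseases(patient_terms):
    """Rank candidate diseases by Jaccard overlap with the patient's terms."""
    def jaccard(a, b):
        return len(a & b) / len(a | b)
    scores = {d: jaccard(patient_terms, t) for d, t in disease_terms.items()}
    return sorted(scores, key=scores.get, reverse=True)

ranking = rank_diseases({"HP:0001250", "HP:0001263"})
```

A true semantic similarity additionally credits near-matches between ontologically related terms, which matters when patient phenotyping is incomplete.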
28. “Disease Risk Factors Identified Through Shared Genetic
Architecture and Electronic Medical Records” (Li et al,
Sci Trans Med)
• Goal: Evaluate relationships between risk factors and
diseases based on shared genetic architecture.
• Method: Using statistical similarity measure between
diseases and traits, found 120 similar pairs and
evaluated EMR for 5 of them.
• Result: Several traits appear before their associated
disease, offering a potential early warning system.
• Conclusion: Shared genetic architecture can provide
early clues to disease risk and prognosis.
24786325
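One way to operationalize "shared genetic architecture" is to represent each trait as a vector of association strengths over a common SNP panel and compare traits by cosine similarity. The paper's statistic is more careful than this, and the trait names and numbers below are invented:

```python
import math

# Hypothetical association strengths over a shared 4-SNP panel (invented).
trait_assoc = {
    "hyperlipidemia":   [0.9, 0.8, 0.0, 0.1],
    "coronary disease": [0.8, 0.7, 0.1, 0.0],
    "asthma":           [0.0, 0.1, 0.9, 0.8],
}

def cosine(u, v):
    """Cosine similarity between two association vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

sim_related = cosine(trait_assoc["hyperlipidemia"],
                     trait_assoc["coronary disease"])
sim_unrelated = cosine(trait_assoc["hyperlipidemia"],
                       trait_assoc["asthma"])
```

Trait pairs with high similarity become candidates for the EMR follow-up step, asking which member of the pair tends to appear first in the record.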
31. “A network based method for analysis of lncRNA-
disease associations and prediction of lncRNAs
implicated in diseases.” (Yang et al, PLoS ONE)
• Goal: Understand role of long noncoding RNAs in
disease
• Method: Build lncRNA-gene network and use
propagation algorithm to infer lncRNA-disease arcs.
• Result: 768 potential lncRNA-disease associations,
with validation on known cases.
• Conclusion: lncRNA are important modulators of
epigenetic and genetic signals relevant to disease.
24498199
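The propagation idea can be sketched as a random walk with restart from disease-associated seed genes over a joint lncRNA-gene graph; lncRNA nodes that accumulate score become candidate disease associations. The graph and seeds below are toy inventions:

```python
# Toy lncRNA-gene interaction graph (invented for illustration).
edges = [("geneA", "geneB"), ("geneB", "lnc1"),
         ("geneA", "lnc1"), ("geneC", "lnc2")]
nodes = sorted({n for e in edges for n in e})
neighbors = {n: [] for n in nodes}
for a, b in edges:
    neighbors[a].append(b)
    neighbors[b].append(a)

def propagate(seeds, restart=0.5, iters=50):
    """Random walk with restart from the seed genes; returns node scores."""
    p0 = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in nodes}
    p = dict(p0)
    for _ in range(iters):
        p = {n: restart * p0[n]
                + (1 - restart) * sum(p[m] / len(neighbors[m])
                                      for m in neighbors[n])
             for n in nodes}
    return p

scores = propagate({"geneA", "geneB"})  # lnc1 is near the seeds; lnc2 is not
```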
33. “Pathogenic variants for Mendelian and complex traits
in exomes of 6,517 European and African Americans:
implications for the return of incidental
results.” (Tabor et al, Am J Hum Genet)
Result: Risk alleles of potential utility for both
Mendelian and complex traits are present in every
individual. Implications for return of results.
“Joint Analysis of Functional Genomic Data and
Genome-wide Association Studies of 18 Human
Traits.” (Pickrell, Am J Hum Genet)
Result: Assessed which (of 450) genetic/epigenetic
features are most associated with GWAS hits.
Shout Outs for Variant Triage
25087612
24702953
34. “Adjusting for heritable covariates can bias effect
estimates in genome-wide association
studies.” (Aschard et al, Am J Hum Genet)
“Meta-analysis of Correlated Traits via Summary
Statistics from GWASs with an Application in
Hypertension.” (Zhu et al, Am J Hum Genet)
“Clinical phenotype-based gene prioritization: an initial
study using semantic similarity and the human
phenotype ontology” (Masino et al, BMC Bioinf)
Shout Outs for Variant Triage
25640676
25047600
25500260
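For the summary-statistics meta-analysis above (Zhu et al), the basic building block is fixed-effect inverse-variance weighting of per-study effect sizes; a minimal sketch with invented (beta, standard error) pairs:

```python
import math

# Invented per-study summary statistics: (effect size beta, standard error).
studies = [(0.10, 0.04), (0.12, 0.05), (0.08, 0.06)]

def inverse_variance_meta(studies):
    """Fixed-effect meta-analysis: weight each study by 1/SE^2."""
    weights = [1.0 / se ** 2 for _, se in studies]
    beta = sum(w * b for w, (b, _) in zip(weights, studies)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return beta, se

beta, se = inverse_variance_meta(studies)  # pooled estimate is more precise
```

Handling correlated traits, as in the paper, additionally requires modeling the covariance between the traits' test statistics.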
35. “SNPsea: an algorithm to identify cell types, tissues
and pathways affected by risk loci.” (Slowikowski et al,
Bioinformatics)
Result: Tool for assessing SNP-related gene
enrichment in cell types, tissues, pathways.
Shout Outs for Variant Triage
24813542
37. “Cancer etiology. Variation in cancer risk among
tissues can be explained by the number of stem cell
divisions.” (Tomasetti &Vogelstein, Science)
• Goal: Assess the contribution of cell division
frequency to cancer risk
• Method: Assess for each tissue of origin the
expected number of cell divisions
• Result: Risk of cancer is strongly associated with the
normal number of divisions for self-renewal. Less
than 1/3 due to inherited mutations/environment.
• Conclusion: Cancer is mostly due to bad luck.
25554788
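The paper's headline number is a correlation between lifetime stem cell divisions and lifetime cancer risk on log-log axes (it reports r of roughly 0.8 across 31 tissue types). A sketch of that computation with four invented (divisions, risk) pairs:

```python
import math

# Invented (lifetime stem cell divisions, lifetime cancer risk) pairs.
tissues = [(1e12, 5e-2), (1e10, 1e-3), (1e8, 3e-5), (1e6, 1e-6)]
xs = [math.log10(d) for d, _ in tissues]
ys = [math.log10(r) for _, r in tissues]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(xs, ys)  # strong log-log correlation in this toy data
```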
39. “Pan-cancer network analysis identifies combinations
of rare somatic mutations across pathways and
protein complexes.” (Leiserson et al, Nat Gen)
• Goal: Integrate many cancer data sets to find
mutated subnetworks that recur
• Method: HotNet2 algorithm uses diffusion
algorithm over protein-protein interaction network
• Result: 16 significantly mutated subnetworks mixing
known and unknown pathways, including rare
mutations.
• Conclusion: New diagnostic and therapeutic
opportunities associated with accumulating mass of
cancer genomic information.
25501392
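The two-stage idea, diffuse mutation "heat" over an interaction network and then read off connected hot subnetworks, can be sketched as below. Genes, heats, and threshold are toy inventions; HotNet2 proper uses an insulated diffusion matrix and a permutation test for significance:

```python
# Toy protein-interaction network and per-gene mutation "heat" (invented).
edges = {("TP53", "MDM2"), ("MDM2", "MDM4"), ("EGFR", "ERBB2")}
heat = {"TP53": 0.5, "MDM2": 0.05, "MDM4": 0.02,
        "EGFR": 0.4, "ERBB2": 0.03, "LONE": 0.3}
nbrs = {g: set() for g in heat}
for a, b in edges:
    nbrs[a].add(b)
    nbrs[b].add(a)

# Diffuse heat: repeated walk-with-restart steps (restart weight beta).
beta = 0.4
h = dict(heat)
for _ in range(30):
    h = {g: beta * heat[g]
            + (1 - beta) * sum(h[m] / len(nbrs[m]) for m in nbrs[g])
         for g in heat}

# Keep genes above a heat threshold; connected components of the induced
# "hot" subgraph are the candidate subnetworks.
hot = {g for g, v in h.items() if v > 0.1}
comps, seen = [], set()
for g in sorted(hot):
    if g in seen:
        continue
    comp, stack = set(), [g]
    while stack:
        n = stack.pop()
        if n in comp:
            continue
        comp.add(n)
        seen.add(n)
        stack.extend(m for m in nbrs[n] if m in hot)
    comps.append(comp)
```

Diffusion lets a rarely mutated gene surface when its neighbors are hot, which is how rare mutations join known pathways.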
42. “Genetic basis for clinical response to CTLA-4
blockade in melanoma.” (Snyder et al, NEJM)
• Goal: Understand basis for differential response to
therapeutic immunomodulatory antibodies in
melanoma
• Method: Sequence genome of responders/
nonresponders. Analyze.
• Result: Creation of certain mutated versions of host
proteins on cell surface (neo-antigens) correlates
with efficacy.
• Conclusion: Immune attack may be mediated by
neo-antigens (potentially similar to previously
presented antigens from infections).
25409260
45. “Predicting cancer-specific vulnerability via data-
driven detection of synthetic lethality.” (Jerby-
Arnon et al, Cell)
Result: Developed a pipeline to characterize
synthetic lethal genes in cancer. Captures known
partners and suggests new ones, particularly those
that are gain-of-function and thus amenable to
potential druggability.
Shout Outs for Cancer Genomics
25171417
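A core signal in data-driven synthetic-lethality detection of this kind is depletion of co-inactivation: if two genes are synthetic lethal, tumors with both inactivated should be rarer than independence predicts. A toy sketch with invented inactivation calls:

```python
# Toy co-inactivation depletion test. If genes A and B are synthetic
# lethal, tumors with both inactivated should be rarer than chance
# predicts. The inactivation calls below are invented for illustration.
tumors = [
    {"A": 1, "B": 0}, {"A": 1, "B": 0}, {"A": 0, "B": 1},
    {"A": 0, "B": 1}, {"A": 0, "B": 0}, {"A": 1, "B": 0},
    {"A": 0, "B": 1}, {"A": 0, "B": 0},
]

n = len(tumors)
p_a = sum(t["A"] for t in tumors) / n           # frequency A inactive
p_b = sum(t["B"] for t in tumors) / n           # frequency B inactive
both = sum(t["A"] and t["B"] for t in tumors)   # observed co-inactivation
expected = p_a * p_b * n                        # expected if independent

print(f"observed={both}, expected={expected:.2f}")
if both < expected:
    print("co-inactivation depleted: candidate synthetic-lethal pair")
```

A real pipeline would replace the eyeball comparison with a hypergeometric test and integrate essentiality-screen and co-expression evidence.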
47. “Genomic surveillance elucidates Ebola virus origin
and transmission during the 2014 outbreak.” (Gire et
al, Science)
• Goal: Understand genomes of Ebola outbreak in
Africa
• Method: Sequence 99 Ebola genomes from 78
patients to 2000x coverage.
• Result: West African variant diverged from the Central
African lineage around 2004, crossed borders in 2014;
no new zoonotic sources. Many mutations present
therapeutic opportunities.
• Conclusion: Genomics can be applied very rapidly
and effectively in public health emergencies.
25214632
50. “Clinical Interpretation and Implications of Whole-
Genome Sequencing” (Dewey et al, JAMA)
• Goal: Examine coverage of current NGS data on
clinically relevant genome.
• Method: 12 individual genomes deeply analyzed
manually.
• Result: 10-20% of key variants not interrogated
adequately, 100 SNPs/genome took humans 54
hours to annotate, 2-6 disease causing variants per
subject.
• Conclusion: Accuracy is still an issue, and manual
annotation is still necessary for best genome
interpretations.
24618965
53. “A probabilistic model to predict clinical phenotypic
traits from genome sequencing.” (Chen et al, PLoS
Comp Bio)
• Goal: Assess our ability to predict binary
phenotypes from genome data.
• Method: Bayesian model based on Personal
Genome Project data, applied to 146 phenotypes.
• Result: 16% of phenotypes robustly predictable, best
performer in CAGI assessment.
• Conclusion: Although not diagnostic, we are starting
to use genetics to adjust disease probabilities.
25188385
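A Bayesian predictor of this flavor can be sketched as naive-Bayes accumulation of per-variant log-likelihood ratios onto the prior odds of a trait. The effect sizes and genotypes below are invented for illustration; the paper's actual model is richer:

```python
import math

# Sketch of a naive-Bayes-style binary phenotype predictor: each carried
# variant contributes a log-likelihood ratio to the prior odds of the
# trait. Effect sizes and genotypes are invented for illustration.
prior = 0.10                      # assumed population prevalence
variant_llr = {                   # log likelihood ratio per risk allele
    "rs0001": math.log(2.0),
    "rs0002": math.log(1.3),
    "rs0003": math.log(0.8),      # protective allele
}
genotype = {"rs0001": True, "rs0002": True, "rs0003": False}

log_odds = math.log(prior / (1 - prior))
for rsid, carried in genotype.items():
    if carried:
        log_odds += variant_llr[rsid]

posterior = 1 / (1 + math.exp(-log_odds))   # back to a probability
print(f"posterior P(trait) = {posterior:.3f}")
```

This is the "adjust disease probabilities" framing of the conclusion: the output is a shifted probability, not a diagnosis.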
56. “Personalized pharmacogenomics profiling using
whole-genome sequencing.” (Mizzi et al,
Pharmacogenomics)
Result: Analyzed 482 genomes, found 1,012 novel
pharmacogene variants. Conclusion: sequencing is
necessary; genotyping alone is not sufficient.
“Missense variants in CFTR nucleotide-binding
domains predict quantitative phenotypes associated
with cystic fibrosis disease severity.” (Masica et al,
Hum Mol Gen)
Result: Classifier can predict disease severity from
genotypes in three prognostic classes.
Shout Outs for Genomic Applications
25141897
25489051
58. “A community computational challenge to predict the
activity of pairs of compounds.” (Bansal et al, Nat
Biotech)
• Goal: Community assessment of ability to predict
synergism/antagonism of cancer drugs.
• Method: Blinded prediction, based on individual
drug response ‘omic profiles
• Result: 4/32 methods better than random. Best
algorithm assumed serial drug use, modeled
“residual” contribution. Ensemble of methods =
best.
• Conclusion: Extrapolation is harder than
interpolation. Encouraging results on hard problem.
25419740
61. “Systems pharmacology augments drug safety
surveillance.” (Lorberbaum et al, Clin Pharm & Ther)
• Goal: Improve pharmacovigilance with integration of
systems biology, chemical genomics data
• Method: Modular assembly of drug safety
subnetworks (MADSS) algorithm.
• Result: Improved ability to predict drug associations
to side effects for MI, GI, Liver, Kidney systems.
• Conclusion: Systems biology network inference can
assist in prediction and understanding of side effects.
25670520
64. “Validating drug repurposing signals using electronic
health records: a case study of metformin associated
with reduced cancer mortality.” (Xu et al, JAMIA)
Result: EHR analysis suggests metformin is protective
against cancer mortality.
“3D Pharmacophoric Similarity improves Multi
Adverse Drug Event Identification in
Pharmacovigilance.” (Vilar et al, Sci Rep)
Result: Structural similarity of drugs allows them to
‘borrow’ information for improved
pharmacovigilance.
Shout Outs for Drugs
25053577
25744369
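The similarity-borrowing idea above can be illustrated with the simpler Tanimoto (Jaccard) coefficient on binary fingerprints, a standard drug-drug similarity measure that lets sparse safety signals borrow evidence from structurally similar drugs. The fingerprints here are toy bit sets, not real 3D pharmacophores:

```python
# Minimal Tanimoto (Jaccard) similarity on binary structural
# fingerprints. Real pharmacovigilance pipelines would use 3D
# pharmacophoric descriptors; these bit sets are invented toys.
def tanimoto(fp_a: set, fp_b: set) -> float:
    """Shared bits over total bits; 1.0 means identical fingerprints."""
    if not fp_a and not fp_b:
        return 0.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

drug_a = {1, 4, 7, 9, 12}     # toy fingerprint: indices of set bits
drug_b = {1, 4, 7, 15}
print(f"Tanimoto = {tanimoto(drug_a, drug_b):.2f}")
```

A side-effect signal observed for `drug_a` could then be weighted by this similarity when scoring the same candidate effect for `drug_b`.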
66. “Human symptoms-disease network.” (Zhou et al, Nat
Comms)
• Goal: Mine PubMed to build a symptom-based
network of human diseases, relate to underlying
molecular interactions.
• Method: Combine disease-symptom network with
disease-gene network to evaluate overlap of both.
• Result: Symptom-based similarity correlates with
shared genetic structure. Diversity of symptoms
correlates to disease genetic complexity.
• Conclusion: Similarity of diseases, symptoms, genetic
architecture are all highly linked and can lead to
useful hypotheses about diagnosis & treatment.
24967666
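The paper's comparison can be sketched as two similarity measures on the same disease pair: cosine similarity over symptom-weight vectors versus Jaccard overlap of gene sets. All weights and gene sets below are invented toys:

```python
import math

# Sketch of the core comparison: symptom-profile similarity of two
# diseases (cosine over symptom weights) versus shared-gene overlap
# (Jaccard). Symptom weights and gene sets are invented toys.
symptoms = {
    "disease_x": {"fever": 0.9, "cough": 0.7, "fatigue": 0.3},
    "disease_y": {"fever": 0.8, "cough": 0.5, "rash": 0.4},
}
genes = {"disease_x": {"G1", "G2", "G3"}, "disease_y": {"G2", "G3", "G4"}}

def cosine(a: dict, b: dict) -> float:
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

sym_sim = cosine(symptoms["disease_x"], symptoms["disease_y"])
gene_sim = len(genes["disease_x"] & genes["disease_y"]) / \
           len(genes["disease_x"] | genes["disease_y"])
print(f"symptom similarity {sym_sim:.2f}, gene overlap {gene_sim:.2f}")
```

The paper's result is, in effect, that these two numbers correlate across many thousands of disease pairs.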
69. “Obesity accelerates epigenetic aging of human
liver.” (Horvath et al, PNAS)
• Goal: Understand the relationship between
epigenetic aging and obesity.
• Method: Use novel epigenetic biomarker of aging
(measure of DNA methylation) to associate BMI and
‘effective’ age.
• Result: Epigenetic age increases 3.3 yrs for each 10
BMI units. Not clearly reversible with weight loss.
279 genes under-expressed in old livers.
• Conclusion: Epigenetic changes associated with
disease may be useful for understanding disease
onset, natural history and comorbidities.
25313081
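The reported effect corresponds to a linear slope of roughly 3.3 years of epigenetic age acceleration per 10 BMI units. A sketch of recovering such a slope by ordinary least squares, on simulated noise-free points built to match that effect size (not the study's data):

```python
# Ordinary least squares on synthetic points generated to follow the
# reported effect size (~3.3 years of epigenetic age acceleration per
# 10 BMI units). The data are simulated, not from the study.
def ols_slope(xs, ys):
    """Least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

bmi = [20, 25, 30, 35, 40]
# age acceleration = 0.33 yr per BMI unit (noise-free for the sketch)
accel = [0.33 * b for b in bmi]
slope = ols_slope(bmi, accel)
print(f"{10 * slope:.1f} yr of epigenetic age per 10 BMI units")
```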
71. “A circadian gene expression atlas in mammals:
implications for biology and medicine.” (Zhang et al,
PNAS)
• Goal: Characterize the role of the circadian clock in
mammalian gene expression.
• Method: Measure tissue-specific gene expression
over 24 hours.
• Result: 43% of protein-coding genes show circadian
expression, often tissue-specific. Noncoding RNAs may
be involved in control. Most drugs target genes that
are rhythmic.
• Conclusion: The clock may have important
implications for biological variability and drug dosing.
25349387
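Rhythmicity detection at a fixed period can be sketched by projecting a time series onto 24-hour cosine and sine components to estimate circadian amplitude (a simplified cosinor-style fit; the paper uses more sophisticated detection). The toy series below is a clean simulated rhythm:

```python
import math

# Minimal rhythmicity check: project a time series onto a 24-hour
# cosine and sine to estimate amplitude at the circadian frequency.
# The toy series is a clean simulated 24 h rhythm plus baseline.
hours = list(range(0, 48, 2))                        # samples every 2 h
expr = [5.0 + 2.0 * math.cos(2 * math.pi * t / 24) for t in hours]

w = 2 * math.pi / 24                                 # circadian frequency
n = len(hours)
a = (2 / n) * sum(x * math.cos(w * t) for t, x in zip(hours, expr))
b = (2 / n) * sum(x * math.sin(w * t) for t, x in zip(hours, expr))
amplitude = math.sqrt(a * a + b * b)
print(f"estimated 24h amplitude: {amplitude:.2f}")
```

Genes whose estimated amplitude clears a significance threshold (assessed against permuted time labels in practice) are called rhythmic.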
74. “Robust clinical outcome prediction based on Bayesian analysis of
transcriptional profiles and prior causal networks” (Zarringhalam et al,
Bioinformatics)
Result: Generate differential expression profiles for individual patients,
and infer patient-specific regulation models.
“A multiscale statistical mechanical framework integrates biophysical
and genomic data to assemble cancer networks.” (AlQuraishi et al, Nat
Gen)
Result: Combine genomic, structural, biochemical data to infer detailed
impact of mutations in proteins involved in cancer signaling networks.
“Cross-species regulatory network analysis identifies a synergistic
interaction between FOXM1 and CENPF that drives prostate cancer
malignancy.” (Aytes et al, Cancer Cell)
Result: Compare regulatory networks for mouse/human to find
conserved master regulators promoting tumor growth.
22995991
25362484
Shout Outs for Systems & Networks
24823640
76. “dRiskKB: a large-scale disease-disease risk
relationship knowledge base constructed from
biomedical text” (Xu et al, BMC Bioinformatics)
• Goal: Systematically (re)characterize phenotype
relationships among diseases.
• Method: Use text mining to extract disease risk
pairs, analyzed correlations with underlying genetics.
• Result: 34,448 unique pairs among 12,981 diseases.
24725842
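Pattern-based extraction of disease-risk pairs can be sketched with a single hand-written regular expression; the paper's patterns are learned and far broader. The sentences here are invented:

```python
import re

# Toy version of the pattern-based extraction behind disease-risk text
# mining: match "X increases the risk of Y" style sentences. The
# pattern and sentences are illustrative, far simpler than the paper's.
pattern = re.compile(
    r"(?P<d1>[\w\s]+?) increases? the risk of (?P<d2>[\w\s]+)",
    re.IGNORECASE,
)
sentences = [
    "Diabetes increases the risk of chronic kidney disease.",
    "Obesity increases the risk of type 2 diabetes.",
    "This sentence mentions no risk relationship.",
]

pairs = []
for s in sentences:
    m = pattern.search(s)
    if m:
        pairs.append((m.group("d1").strip(), m.group("d2").strip(" .")))

print(pairs)
```

Scaling this to MEDLINE and normalizing the matched strings to a disease vocabulary is what yields the tens of thousands of unique pairs.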
80. “Integrated text mining and chemoinformatics analysis
associates diet to health benefit at molecular
level” (Jensen et al, PLoS Comp Bio)
• Goal: Assemble knowledge of food-phytochemical
and food-disease associations.
• Method: Data mining of text, classification to
assemble associations, including both positive/
negative associations.
• Result: 20,654 phytochemicals associated with 1,592
human disease phenotypes
• Conclusion: Systematic approach to nutrition can be
incorporated into precision medicine
24453957
84. “NCBI disease corpus: a resource for disease name
recognition and concept normalization” (Dogan et al, J
Biomed Inf)
Result: Fully and carefully annotated corpus of 793
papers.
“Literome: PubMed-scale genomic knowledge base in
the cloud” (Poon et al, Bioinformatics)
“A literature search tool for intelligent extraction of
disease-associated genes” (Jung et al, JAMIA)
Results: Publicly available gene-gene & gene-
phenotype interactions mined from PubMed.
Shout Outs for NLP Applications
24393765
24939151
23999671
86. “Modeling 3D facial shape from DNA.” (Claes et al,
PLoS Genet)
• Goal: Assess the impact of genetic variations on
facial shape.
• Method: Parameterize “face space” and associate
features with Ancestry Informative Markers
• Result: 20 genes show significant effects on facial
features.
• Conclusion: These allow approximation of
appearance based on SNPs in genome.
24651127
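The per-marker association step can be sketched as ordinary least-squares regression of a continuous facial feature on genotype dosage (0/1/2 allele copies). Genotypes and measurements below are simulated toys, not the study's data:

```python
# Sketch of a per-marker association: regress a continuous facial
# feature on genotype dosage (0/1/2 copies of an allele). Genotypes
# and feature values below are simulated toys.
def ols_slope(xs, ys):
    """Least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

dosage = [0, 0, 1, 1, 1, 2, 2, 2]   # allele counts per subject
nose_width = [30.1, 29.8, 31.0, 31.3, 30.9, 32.2, 32.0, 32.4]  # mm, toy
effect = ols_slope(dosage, nose_width)
print(f"estimated effect: {effect:.2f} mm per allele copy")
```

The paper does this over a parameterized "face space" of many shape features per marker, with ancestry as a covariate, rather than a single raw measurement.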
91. “Proteomics. Tissue-based map of the human
proteome.” (Uhlen et al, Science)
• Goal: Survey human proteome variation in human
tissues.
• Method: Quantitative transcriptomics +
immunohistochemistry for localization in 32 tissues.
• Result: Detected > 90% of putative protein coding
genes. Characterized secretome, membraneome,
druggome.
• Conclusion: Major resource for integrative analysis
of human biology.
25613900
93. “A field guide to genomics research.” (Bild et al, PLoS
Biol)
• Goal: Characterize common pitfalls in genomics
research
• Method: Reflection on personality types
• Result: 6 genome researcher phenotypes…
• Conclusion: You can figure out which phenotype you
match, and you don’t need your SNPs…
24409093
94.
1. Farmer—storehouse of data, tools—no design
2. Gold Miner—keeps digging until finds something
significant
3. Cowboy—wrangles data without analyzing it properly
4. Hermit—always isolates themselves, no collaboration
5. Master (with Servant)—unreasonable expectations
about time and complexity of appropriate analysis
6. Jailer—keeps own data locked up, never shares
95. “Temporal disease trajectories condensed from
population-wide registry data covering 6.2 million
patients.” (Jensen et al, Nat Comms)
“The top 100 papers.” (Van Noorden, Nature)
“Bibliometrics: Is your most cited work your
best?” (Ioannidis et al, Nature)
“Humans can discriminate more than 1 trillion
olfactory stimuli.” (Bushdid et al, Science)
“Fossilized nuclei and chromosomes reveal 180 million
years of genomic stasis in royal ferns.” (Bomfleur,
Science)
Shout Outs for Odds & Ends
24959948
25355343
24653037
25355346
24653035
97. 2014 Crystal ball...
Emphasis on non European-descent populations for
discovery of disease associations
Crowd-based discovery in translational bioinformatics
Methods to recommend treatment for cancer based on
genome/transcriptome
Increase in “trained systems” (a la Watson) applications
in translational bioinformatics
Repurposing with combinations of drugs (vs. one)
More cost-effectiveness evidence for genomics
Linking essential genes, drug targets, and drug response
98. 2015 Crystal ball...
Increase in “trained systems” (a la IBM’s Watson)
applications in translational bioinformatics
Increased attention to genetic x environment analyses
Mega-cohort studies start to report out findings
Immuno-informatics and systems immunology explode
Increased integration of EMR, genomics, imaging
The term “precision medicine” will be mentioned more
frequently in PubMed abstracts.
If invited, I will give this talk in person.