NIH Data Initiatives: Harnessing Big (and small) Data to Improve Health
Presentation at the Internet2 Global Forum, April 28, 2015
Session: NIH Perspectives
Some Frameworks for Improving Analytic Operations at Your Company - Robert Grossman
I review three frameworks for analytic operations that are designed to improve the value obtained when deploying analytic models into products, services and internal operations.
This is a talk that I gave at BioIT World West on March 12, 2019. The talk was called: A Gen3 Perspective of Disparate Data: From Pipelines in Data Commons to AI in Data Ecosystems.
What is a Data Commons and How Can Your Organization Build One? - Robert Grossman
This is a talk that I gave at the Molecular Medicine Tri Conference on data commons and data sharing to accelerate research discoveries and improve patient outcomes. It also covers how your organization can build a data commons using the Open Commons Consortium's Data Commons Framework and the University of Chicago's Gen3 data commons platform.
Smith RDAP11 NSF Data Management Plan Case Studies - ASIS&T
MacKenzie Smith, MIT; NSF Data Management Plan Case Studies; RDAP11 Summit
The 2nd Research Data Access and Preservation (RDAP) Summit
An ASIS&T Summit
March 31-April 1, 2011 Denver, CO
In cooperation with the Coalition for Networked Information
http://asist.org/Conferences/RDAP11/index.html
Micah Altman, Harvard; Policy-based Data Management
This is an overview of the Data Biosphere Project, its goals, its architecture, and the three core projects that form its foundation. We also discuss data commons.
DataCite and its Members: Connecting Research and Identifying Knowledge - ETH-Bibliothek
PIDs and their metadata support scholarly research and its increasing amount and variety of scholarly output. DataCite provides services that enable the research community to identify, connect, cite, and track these outputs, making content FAIR. New services include data-level metrics and the use of identifiers for organizations and for new types of content, e.g. software, repositories, and instruments. As an open, collaborative, and community-driven membership organization, we rely on our members' input and experience to build services that benefit the research community as a whole. This talk describes DataCite services as well as current and future initiatives, and shows how members can contribute and benefit. Over the years our membership has grown and diversified, and we are therefore refreshing and clarifying our member model; the new model will be presented and described.
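To make the identify-connect-cite workflow concrete, the sketch below parses the kind of JSON:API record that DataCite's public REST API (GET https://api.datacite.org/dois/{doi}) returns and builds a simple citation string. The response shown is abbreviated and hand-written for the example, not a live API call.

```python
import json

# Abbreviated, hand-written sample of a DataCite JSON:API response for a DOI.
# Real responses carry many more attributes (creators, relatedIdentifiers, ...).
sample_response = json.dumps({
    "data": {
        "id": "10.1038/sdata.2016.18",
        "attributes": {
            "doi": "10.1038/sdata.2016.18",
            "titles": [{"title": "The FAIR Guiding Principles for scientific data management and stewardship"}],
            "publicationYear": 2016,
            "publisher": "Nature Publishing Group",
        },
    }
})

def summarize_doi(response_text: str) -> str:
    """Build a short human-readable citation from a DataCite record."""
    attrs = json.loads(response_text)["data"]["attributes"]
    title = attrs["titles"][0]["title"]
    return (f'{title} ({attrs["publicationYear"]}). {attrs["publisher"]}. '
            f'https://doi.org/{attrs["doi"]}')

print(summarize_doi(sample_response))
```

A real client would issue an HTTP GET for the DOI and feed the response body to the same parsing function.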
NADA is an open-source web application for archiving, searching, and browsing microdata using the Data Documentation Initiative (DDI) standard.
Key features are:
- Supports DDI and RDF
- Searches studies and variables
- Compares variables
- Provides data access for datasets using Public, Licensed, Direct, and Data Enclave access types
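To make the DDI support concrete, here is a minimal Python sketch that lists the variables described in a DDI-style codebook. The XML fragment is hand-written for illustration; real DDI 2.x codebooks are namespaced and far richer, and NADA's own catalog interface is not shown here.

```python
import xml.etree.ElementTree as ET

# A tiny, illustrative DDI Codebook fragment (real documents use the
# DDI namespace and include study-level metadata, value labels, etc.).
ddi_fragment = """
<codeBook>
  <dataDscr>
    <var name="AGE"><labl>Age of respondent in years</labl></var>
    <var name="SEX"><labl>Sex of respondent</labl></var>
  </dataDscr>
</codeBook>
"""

def list_variables(ddi_xml: str):
    """Return (name, label) pairs for every variable described in the codebook."""
    root = ET.fromstring(ddi_xml)
    return [(var.get("name"), var.findtext("labl")) for var in root.iter("var")]

for name, label in list_variables(ddi_fragment):
    print(f"{name}: {label}")
```

Variable-level search and comparison, as offered by NADA, builds on exactly this kind of structured variable description.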
dkNET Webinar: FAIR Data & Software in the Research Life Cycle 01/22/2021 - dkNET
Abstract
Good data stewardship is the cornerstone of knowledge, discovery, and innovation in research. The FAIR Data Principles address data creators, stewards, software engineers, publishers, and others to promote maximum use of research data. The principles can be used as a framework for fostering and extending research data services.
This talk will provide an overview of the FAIR principles and the drivers behind their development by a broad community of international stakeholders. We will explore a range of topics related to putting FAIR data into practice, including how and where data can be described, stored, and made discoverable (e.g., data repositories, metadata); methods for identifying and citing data; interoperability of (meta)data; best-practice examples; and tips for enabling data reuse (e.g., data licensing). Practical examples of how FAIR is applied will be provided along the way.
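One practical way to describe data and make it discoverable, in the spirit of the talk, is a schema.org Dataset record in JSON-LD. The sketch below shows such a record; every identifier, URL, and license in it is a placeholder chosen for illustration, not a real published dataset.

```python
import json

# A minimal schema.org Dataset record in JSON-LD. Each field maps to a
# FAIR concern, noted in the comments. All values are placeholders.
dataset_record = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Example glucose-tolerance study data",
    "description": "De-identified measurements from a hypothetical study.",
    "identifier": "https://doi.org/10.1234/example-doi",       # persistent ID: findable, citable
    "license": "https://creativecommons.org/licenses/by/4.0/",  # explicit reuse terms: reusable
    "encodingFormat": "text/csv",                               # standard format: interoperable
    "url": "https://example.org/datasets/glucose-study",        # landing page: accessible
}

print(json.dumps(dataset_record, indent=2))
```

Embedded in a repository landing page, a record like this is what search engines and data aggregators harvest to make a dataset findable.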
Presenter: Christopher Erdmann, Engagement, support, and training expert on the NHLBI BioData Catalyst project at University of North Carolina Renaissance Computing Institute
dkNET Webinars Information: https://dknet.org/about/webinar
Rots RDAP11 Data Archives in Federal Agencies - ASIS&T
Arnold Rots, VAO; Data Archives in Federal Agencies; RDAP11 Summit
Data Publishing at Harvard's Research Data Access Symposium - Merce Crosas
Data Publishing: The research community needs reliable, standard ways to make the data produced by scientific research available to the community, while giving credit to data authors. As a result, a new form of scholarly publication is emerging: data publishing. Data publishing - or making data reusable, citable, and accessible for long periods - is more than simply providing a link to a data file or posting the data to the researcher’s web site. We will discuss best practices, including the use of persistent identifiers and full data citations, the importance of metadata, the choice between public data and restricted data with terms of use, the workflows for collaboration and review before data release, and the role of trusted archival repositories. The Harvard Dataverse repository (and the Dataverse open-source software) provides a solution for data publishing, making it easy for researchers to follow these best practices, while satisfying data management requirements and incentivizing the sharing of research data.
Data grids are an emerging technology that enables the formation of sharable collections from data distributed across multiple storage resources. The integrated Rule Oriented Data System (iRODS) is a data grid developed by the DICE Center at UNC-CH. The iRODS data grid enforces management policies that control properties of the collection. Examples of policies include retention, disposition, distribution, replication, metadata extraction, time-dependent access controls, data processing, data redaction, and integrity checking. Policies can be defined that automate administrative functions (file migration and replication) and that validate assessment criteria (authenticity, integrity, chain of custody). iRODS is used to build data sharing environments, digital libraries, and preservation environments. The iRODS data grid is used at UNC-CH to support the Carolina Digital Repository, the LifeTime Library for the School of Information and Library Science, data grids for the Renaissance Computing Institute (RENCI), collaborations within North Carolina, and both national and international data sharing. At RENCI, the TUCASI data grid supports shared collections between UNC-CH, Duke, and NCSU. The RENCI data grid is federated with ten other data grids including the National Climatic Data Center, the Texas Advanced Computing Center data grid, and the Ocean Observatories Initiative data grid. International applications include the CyberSKA Square Kilometer Array for radio astronomy and the French National Institute for Nuclear Physics and Particle Physics. The collections that are assembled may contain hundreds of millions of files, and petabytes of data. A specific goal is the integration of institutional repositories with the national data infrastructure that is being assembled under the NSF DataNet program. The software is available as an open source distribution from http://irods.diceresearch.org.
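The integrity-checking policies mentioned above reduce to comparing each replica's checksum against a registered value. The Python sketch below illustrates that idea with standard-library hashing; it is a conceptual illustration only, not iRODS's actual rule language or API.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Checksum used as the fixity value for a stored object."""
    return hashlib.sha256(data).hexdigest()

def verify_replicas(replicas: dict, expected: str) -> list:
    """Return the names of replicas whose checksum no longer matches
    the value registered when the object was ingested."""
    return [name for name, blob in replicas.items() if sha256_of(blob) != expected]

# Simulated replicas of one file on three storage resources (names invented).
original = b"sensor readings 2011-03-31"
replicas = {
    "renci-resc": original,
    "tacc-resc": original,
    "ncdc-resc": b"sensor readings 2011-O3-31",  # silently corrupted copy
}
bad = verify_replicas(replicas, sha256_of(original))
print("corrupted replicas:", bad)
```

In a data grid, a policy engine would run this kind of check periodically and trigger re-replication from a good copy when a mismatch is found.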
FAIR Data Knowledge Graphs – from Theory to Practice - Tom Plasterer
FAIR data has flown up the hype curve without a clear sense of return from the required data stewardship investment. The killer use case for FAIR data is a science knowledge graph. It enables you to richly address novel questions of your and the world’s data. We started with data catalogues (findability) which exploited linked/referenced data using a few focused vocabularies (interoperability), for credentialed users (accessibility), with provenance and attribution (reusability) to make this happen. Our processes enable simple creation of dataset records and linking to source data, providing a seamless federated knowledge graph for novice and advanced users alike.
Presented May 7th, 2019 at the Knowledge Graph Conference, Columbia University.
OAIS: What is it and Where is it Going? - Don Sawyer (2002) - FAFLRT
Open Archival Information System (OAIS) workshop. Presented by Don Sawyer of NASA Goddard and Lou Reich, CSC contractor to NASA. Sponsored by the ALA Federal and Armed Forces Libraries Roundtable (FAFLRT). Presented on June 15, 2002 at the ALA Annual Conference.
Preparing your data for sharing and publishing - Varsha Khodiyar
Talk given as part of the MRC Cognition and Brain Sciences Unit Open Science Day on 20 November 2018, University of Cambridge (https://www.eventbrite.co.uk/e/open-science-day-at-the-mrc-cbu-tickets-50363553745)
FAIR Data Management and FAIR Data Sharing - Merce Crosas
Presentation at the Critical Perspectives on the Practice of Digital Archaeology symposium: http://archaeology.harvard.edu/critical-perspectives-practice-digital-archaeology
Making Data FAIR (Findable, Accessible, Interoperable, Reusable) - Tom Plasterer
What to do About FAIR…
In the experience of most pharma professionals, FAIR remains fairly abstract, bordering on inconclusive. This session will outline specific case studies – real problems with real data, and address opportunities and real concerns.
Why making data Findable, Accessible, Interoperable, and Reusable is important.
Talk presented at the Data Driven Drug Development (D4) conference on March 20th, 2019.
This presentation addresses common questions asked to ensure your successful Postgres rollout. In addition, it shares best practices and lessons learned from Postgres implementations.
This presentation reviews:
- How to leverage EDB’s expert guidance to maximize results and achieve ROI with Postgres more quickly
- When to access EDB’s personalized training for assistance with implementations or ongoing management
- Who can help you with a detailed assessment or health check to optimize your environment
- What a successful Postgres journey looks like for you
Target Audience: IT leaders, managers, and directors, as well as DBAs, data architects, developers, and DevOps/IT operations staff responsible for supporting a Postgres environment. The presentation is equally suitable for organizations using community PostgreSQL or EDB's Postgres Plus product family, whether they are just looking into Postgres or have already established a Postgres deployment.
Conception of a water level detector (tide gauge) based on an electromagnetic ... - eSAT Journals
Abstract: The mechanism we propose is a water-level sensor (tide gauge) based on Archimedes' principle, according to which a body immersed in a fluid at equilibrium experiences an upward thrust whose intensity equals the weight of the fluid displaced. It consists of an electromagnetic force sensor from which a cylindrical glass plunger is suspended. The plunger is partially immersed in a measuring cylinder filled with water. The force sensor measures the difference between the weight of the plunger and the buoyant force exerted by the water on its submerged part. When the water level in the cylinder changes, the submerged portion of the plunger changes, causing a variation in the buoyant force and hence in the force exerted on the sensor. The sensor converts this force into a voltage by means of an electronic circuit. Key Words: Electromagnetic Force Sensor, Water Level Detector, Tide Gauge.
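The measurement principle reduces to simple arithmetic: the buoyant force on the submerged part of a cylindrical plunger is rho * g * A * h, so the force seen by the sensor falls linearly as the water level rises. A small sketch with illustrative numbers (not taken from the paper):

```python
import math

# Force on the sensor = plunger weight minus buoyant force on the
# submerged part (Archimedes' principle). All values are illustrative.
RHO_WATER = 1000.0   # kg/m^3
G = 9.81             # m/s^2

def sensor_force(weight_n: float, radius_m: float, submerged_m: float) -> float:
    area = math.pi * radius_m ** 2                       # plunger cross-section
    buoyancy = RHO_WATER * G * area * submerged_m        # weight of displaced water
    return weight_n - buoyancy

# A plunger of 1 cm radius weighing 0.5 N: compare two water levels 1 mm apart.
f1 = sensor_force(0.5, 0.01, 0.050)
f2 = sensor_force(0.5, 0.01, 0.051)
print(f"force change per mm of water level: {(f1 - f2) * 1000:.2f} mN")
```

The linearity of this relation is what lets the electronic circuit map the sensor voltage directly onto a water level.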
Monthly Tax Newsletter covering the main changes to tax legislation during the month of July, highlighting some relevant administrative and judicial decisions, as well as reporting on other important matters.
The NIH Data Commons - BD2K All Hands Meeting 2015 - Vivien Bonazzi
Presentation given at the BD2K All Hands meeting in Bethesda, MD, USA in November 2015
https://datascience.nih.gov/bd2k/events/NOV2015-AllHands
Video cast of this presentation:
http://videocast.nih.gov/summary.asp?Live=17480&bhcp=1
The talk starts at 2 hr 40 min (it's about 55 minutes long) and includes video.
Document describing the Commons : https://datascience.nih.gov/commons
Presentation at the Department of Health and Human Services on October 17, 2014 to introduce agencies outside of NIH to the development of the Commons concept.
Data Harmonization for a Molecularly Driven Health System - Warren Kibbe
Seminar for Dr. Min Zhang's Purdue Bioinformatics Seminar Series. Touched on learning health systems, the Gen3 Data Commons, the NCI Genomic Data Commons, Data Harmonization, FAIR, and open science.
The global need to securely derive (instant) insights has motivated data architectures ranging from distributed storage to data lakes, data warehouses, and lakehouses. In this talk we describe Tag.bio, a next-generation data mesh platform that embeds vital elements such as domain centricity/ownership, data as products, and self-serve architecture, with a federated computational layer. Tag.bio data products combine data sets, smart APIs, and statistical and machine learning algorithms into decentralized data products that let users discover insights following FAIR principles. Researchers can use its point-and-click (no-code) system to instantly perform analyses and share versioned, reproducible results. The platform combines a dynamic cohort builder with analysis protocols and applications (low-code) to drive complex analysis workflows. Applications within data products are fully customizable via R and Python plugins (pro-code), and the platform supports notebook-based developer environments with individual workspaces.
Join us for a talk/demo session on the Tag.bio data mesh platform and learn how major pharmaceutical companies and university health systems are using this technology to promote value-based healthcare and precision healthcare, find cures for disease, and promote collaboration (without explicitly moving data around). The talk also outlines Tag.bio's secure data exchange features for real-world evidence datasets, privacy-centric data products (confidential computing), and integration with cloud services.
Presentation to IASSIST 2013, in the session Expanding Scholarship: Research Journals and Data Linkages. Describes PREPARDE workshop on repository accreditation for data publication and invites comments on guidelines.
FAIRy stories: the FAIR Data principles in theory and in practice - Carole Goble
https://ucsb.zoom.us/meeting/register/tZYod-ippz4pHtaJ0d3ERPIFy2QIvKqjwpXR
FAIRy stories: the FAIR Data principles in theory and in practice
The ‘FAIR Guiding Principles for scientific data management and stewardship’ [1] launched a global dialogue within research and policy communities and started a journey to wider accessibility and reusability of data and preparedness for automation-readiness (I am one of the army of authors). Over the past 5 years FAIR has become a movement, a mantra and a methodology for scientific research and increasingly in the commercial and public sector. FAIR is now part of NIH, European Commission and OECD policy. But just figuring out what the FAIR principles really mean and how we implement them has proved more challenging than one might have guessed. To quote the novelist Rick Riordan “Fairness does not mean everyone gets the same. Fairness means everyone gets what they need”.
As a data infrastructure wrangler I lead and participate in projects implementing forms of FAIR in pan-national European biomedical Research Infrastructures. We apply web-based, industry-led approaches like Schema.org; work with big pharma on specialised FAIRification pipelines for legacy data; promote FAIR-by-Design methodologies and platforms in the research lab; and expand the principles of FAIR beyond data to computational workflows and digital objects. Many use Linked Data approaches.
In this talk I’ll use some of these projects to shine some light on the FAIR movement. Spoiler alert: although there are technical issues, the greatest challenges are social. FAIR is a team sport. Knowledge Graphs play a role – not just as consumers of FAIR data but as active contributors. To paraphrase another novelist, “It is a truth universally acknowledged that a Knowledge Graph must be in want of FAIR data.”
[1] Wilkinson, M., Dumontier, M., Aalbersberg, I. et al. The FAIR Guiding Principles for scientific data management and stewardship. Sci Data 3, 160018 (2016). https://doi.org/10.1038/sdata.2016.18
Data Harmonization for a Molecularly Driven Health System - Warren Kibbe
Maximizing the value of data, computing, and data science in an academic medical center, or 'towards a molecularly informed Learning Health System'. Given in October at the University of Florida in Gainesville.
FAIRness Assessment of the Library of Integrated Network-based Cellular Signa... - Kathleen Jagodnik
The FAIR Guiding Principles facilitate the Findability, Accessibility, Interoperability, and Reusability of digital resources. The Library of Integrated Network-based Cellular Signatures (LINCS) Project has sought to implement the FAIR principles in the provision of its resources in order to optimize usability. We have surveyed the FAIR principles and are implementing specific facets within the LINCS resources. Subsequently, with reference to the literature and other efforts to measure FAIRness, we are developing quantitative metrics to assess the FAIRness of each dataset and resource in order to provide users with objective measures of the characteristics of the LINCS project. Assessing and improving the FAIRness of LINCS is an ongoing effort by our team that will benefit from community input to ensure that all LINCS users are optimally engaged with this resource.
(May 29th, 2024) Advancements in Intravital Microscopy - Insights for Preclini... - Scintica Instrumentation
Intravital microscopy (IVM) is a powerful tool used to study cellular behavior over time and space in vivo. Much of our understanding of cell biology has been gained using various in vitro and ex vivo methods; however, these studies do not necessarily reflect the natural dynamics of biological processes. Unlike traditional cell culture or fixed-tissue imaging, IVM allows ultra-fast, high-resolution imaging of cellular processes over time and space in their natural environment. Real-time visualization of biological processes in the context of an intact organism helps maintain physiological relevance and provides insights into disease progression, response to treatments, and developmental processes.
In this webinar we give an overview of advanced applications of the IVM system in preclinical research. IVIM Technology is a provider of all-in-one intravital microscopy systems and solutions optimized for in vivo imaging of live animal models at sub-micron resolution. The system's unique features and user-friendly software enable researchers to probe fast, dynamic biological processes such as immune cell tracking, cell-cell interaction, vascularization, and tumor metastasis in exceptional detail. This webinar will also give an overview of IVM being utilized in drug development, offering a view into the intricate interactions between drugs/nanoparticles and tissues in vivo and allowing the evaluation of therapeutic interventions in a variety of tissues and organs. This interdisciplinary collaboration continues to drive the advancement of novel therapeutic strategies.
This PDF is about schizophrenia. For more details, visit the @SELF-EXPLANATORY channel on YouTube: https://www.youtube.com/channel/UCAiarMZDNhe1A3Rnpr_WkzA/videos
This presentation gives a brief overview of the structural and functional attributes of nucleotides and the structure and function of genetic materials, along with the impact of UV rays and pH upon them.
Cancer Cell Metabolism: Special Reference to the Lactate Pathway - AADYARAJPANDEY1
Normal Cell Metabolism:
Cellular respiration describes the series of steps that cells use to break down sugar and other chemicals to get the energy we need to function.
Energy is stored in the bonds of glucose and when glucose is broken down, much of that energy is released.
Cell utilize energy in the form of ATP.
The first step of respiration is called glycolysis. In a series of steps, glycolysis breaks glucose into two smaller molecules of a chemical called pyruvate. A small amount of ATP is formed during this process.
Most healthy cells continue the breakdown in a second process, called the Krebs cycle. The Krebs cycle allows cells to "burn" the pyruvate made in glycolysis to get more ATP.
The last step in the breakdown of glucose is called oxidative phosphorylation (Ox-Phos).
It takes place in specialized cell structures called mitochondria. This process produces a large amount of ATP. Importantly, cells need oxygen to complete oxidative phosphorylation.
If a cell completes only glycolysis, only 2 molecules of ATP are made per glucose. However, if the cell completes the entire respiration process (glycolysis - Kreb's - oxidative phosphorylation), about 36 molecules of ATP are created, giving it much more energy to use.
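The ATP bookkeeping above can be sketched as a small calculation. This is an illustrative back-of-the-envelope model using the approximate figures from the text (2 ATP from glycolysis alone, about 36 from full respiration); the function and constant names are my own, not part of any standard library.

```python
# Approximate ATP yield per glucose molecule, per the figures in the text.
ATP_GLYCOLYSIS_ONLY = 2    # glycolysis stops at pyruvate
ATP_FULL_RESPIRATION = 36  # glycolysis + Krebs cycle + oxidative phosphorylation

def atp_yield(n_glucose: int, full_respiration: bool = True) -> int:
    """Approximate ATP produced from n_glucose molecules of glucose."""
    per_glucose = ATP_FULL_RESPIRATION if full_respiration else ATP_GLYCOLYSIS_ONLY
    return n_glucose * per_glucose

print(atp_yield(10))                          # 360 ATP with full respiration
print(atp_yield(10, full_respiration=False))  # 20 ATP from glycolysis alone
```

The 18-fold gap between the two figures is why the oxygen-dependent steps matter so much to a cell's energy budget.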
In Cancer Cells:
Unlike healthy cells that "burn" the entire sugar molecule to capture a large amount of energy as ATP, cancer cells are wasteful.
Cancer cells only partially break down sugar molecules. They overuse the first step of respiration, glycolysis, and frequently do not complete the second step, oxidative phosphorylation.
This results in only 2 molecules of ATP per glucose molecule instead of the 36 or so that healthy cells gain. As a result, cancer cells need to use many more sugar molecules to get enough energy to survive.
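"A lot more sugar" can be quantified with simple arithmetic. This is an illustrative calculation (variable names are my own) of how many glucose molecules a glycolysis-only cell needs to match the ATP output of one fully respiring cell, using the ~2 vs. ~36 ATP figures from the text.

```python
# Glucose demand ratio implied by the ATP yields in the text.
ATP_PER_GLUCOSE_NORMAL = 36  # full respiration
ATP_PER_GLUCOSE_CANCER = 2   # glycolysis only

glucose_ratio = ATP_PER_GLUCOSE_NORMAL / ATP_PER_GLUCOSE_CANCER
print(glucose_ratio)  # 18.0 -> ~18x more glucose for the same ATP
```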
Introduction to the Warburg Phenomenon:
Warburg effect: cancer cells are usually highly glycolytic (glucose addiction) and take up more glucose from outside than normal cells do.
Otto Heinrich Warburg (8 October 1883 - 1 August 1970) was awarded the 1931 Nobel Prize in Physiology or Medicine for his "discovery of the nature and mode of action of the respiratory enzyme."
The tendency of cancer cells under aerobic (well-oxygenated) conditions to metabolize glucose to lactate (aerobic glycolysis) is known as the Warburg effect. Warburg made the observation that tumor slices consume glucose and secrete lactate at a higher rate than normal tissues.
Richard's Entangled Adventures in Wonderland (Richard Gill)
Since the loophole-free Bell experiments of 2020 and the Nobel Prizes in Physics of 2022, critics of Bell's work have retreated to the fortress of super-determinism. Now, super-determinism is a derogatory word - it just means "determinism". Palmer, Hance and Hossenfelder argue that quantum mechanics and determinism are not incompatible, using a sophisticated mathematical construction based on a subtle thinning of the allowed states and measurements in quantum mechanics, such that what is left appears to make Bell's argument fail, without altering the empirical predictions of quantum mechanics. I think, however, that it is a smoke screen, and the slogan "lost in math" comes to mind. I will discuss some other recent disproofs of Bell's theorem that use the language of causality based on causal graphs. Causal thinking is also central to law and justice. I will mention surprising connections to my work on serial-killer nurse cases, in particular the Dutch case of Lucia de Berk and the current UK case of Lucy Letby.
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep... (University of Maribor)
Slides from:
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Track: Artificial Intelligence
https://www.etran.rs/2024/en/home-english/
1. George A. Komatsoulis, Ph.D.
National Center for Biotechnology Information
National Library of Medicine
National Institutes of Health
U.S. Department of Health and Human Services
7. The Commons: Business Model
[Diagram: researchers use NIH-provided credits with conformant Cloud Providers (A, B, C), which provide digital objects to the Commons; a Discovery Index indexes the Commons so that researchers can find, retrieve, and use digital objects; as an option, NIH can fund providers directly to support NIH-directed resources.]
The Commons is implemented as a federation of 'conformant' cloud providers and HPC environments, funded primarily by providing credits to investigators.
8. Cost effective - only pay for IT support used
Drives competition - better services at lower cost
Supports data sharing by driving science into the Commons
Facilitates public-private partnership
Scalable to most categories of data expected in the next 5 years
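The "only pay for IT support used" point can be made concrete with a toy cost comparison. All prices, names, and usage numbers below are hypothetical illustrations, not actual NIH or cloud-provider figures.

```python
# Hypothetical comparison: metered pay-as-you-go storage billing vs. a
# fixed-capacity cost sized for peak demand. Numbers are illustrative only.
FIXED_ANNUAL_COST = 12000.0  # e.g., owning and running local infrastructure
PRICE_PER_TB_MONTH = 20.0    # assumed metered cloud storage price

def pay_as_you_go_cost(tb_per_month: list[float]) -> float:
    """Total cost when billed only for the TB actually stored each month."""
    return sum(tb * PRICE_PER_TB_MONTH for tb in tb_per_month)

# A project that only needs heavy storage for two months of the year:
usage = [5, 5, 5, 40, 40, 5, 5, 5, 5, 5, 5, 5]  # TB stored per month
print(pay_as_you_go_cost(usage))  # 2600.0, well under the fixed cost
```

The gap is the slide's point: bursty research workloads pay for peak capacity under fixed provisioning, but only for actual use under the credit model.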
9. Novelty: never been tried, so we don't have data about the likelihood of success.
Cost models: predicated on stable or declining prices among providers. True for the last several years, but we can't guarantee that it will continue, particularly if there is significant consolidation in the industry.
Service providers: predicated on service providers being willing to make the investment to become conformant. Market research suggests 3-5 providers within 2-3 months of program launch.
Persistence: the model is 'Pay As You Go', which means if you stop paying it stops going, giving investigators an unprecedented level of control over what lives (or dies) in the Commons.
10. Minimum set of requirements for:
Business relationships (reseller, investigators)
Interfaces (upload, download, manage, compute)
Capacity (storage, compute)
Networking and connectivity
Information assurance
Authentication and authorization
Likely to be reviewed self-certification in the pilot phase.
A conformant cloud ≠ an IaaS provider
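The requirement categories above amount to a checklist a provider must satisfy in full. Here is a minimal sketch of my own construction showing that structure in code; the category names are paraphrased from the slide, and a real conformance review would of course be far more detailed than boolean attestations.

```python
# Sketch: a provider is conformant only if it attests to every required
# category from the slide. Illustrative only, not an actual NIH process.
REQUIRED_CATEGORIES = {
    "business_relationships",          # reseller agreements, investigator billing
    "interfaces",                      # upload, download, manage, compute
    "capacity",                        # storage and compute
    "networking_and_connectivity",
    "information_assurance",
    "authentication_and_authorization",
}

def is_conformant(attestations: dict[str, bool]) -> bool:
    """True only if the provider attests to every required category."""
    return all(attestations.get(cat, False) for cat in REQUIRED_CATEGORIES)

provider = {cat: True for cat in REQUIRED_CATEGORIES}
print(is_conformant(provider))  # True
provider["information_assurance"] = False
print(is_conformant(provider))  # False: one failed category breaks conformance
```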
11. Likely to evolve into multiple 'Levels of Compliance' corresponding to increasing degrees of making data/software meet 'FAIR' criteria.
Some of our current thinking for basic compliance:
Objects are physically or logically available in the Commons
Objects are indexed with a usable identifier
Objects have basic search metadata attached to index entries
Objects have clear access rules
Objects have basic semantic metadata available
Higher levels could include:
Objects indexed with standards-based identifiers (ORCID, DOI, etc.)
Objects are open to the public (or as open as reasonable given the data type)
Objects conform to agreed-upon standards (CDISC, DICOM, etc.)
Data objects are accessible via standard APIs
Software is encapsulated (containers, other technology) for easier usage
We want and need your feedback on these matters!
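The tiered-compliance idea can be sketched as a simple classifier over an object's properties. The property names below are my own invented shorthand for the slide's criteria; this is not an actual Commons or FAIR assessment tool.

```python
# Sketch: assign a digital object a compliance level based on which of the
# slide's basic vs. higher-level criteria it satisfies. Illustrative only.
BASIC = {"available_in_commons", "usable_identifier",
         "search_metadata", "access_rules", "semantic_metadata"}
HIGHER = {"standards_based_identifier", "public_access",
          "standards_conformant", "standard_api", "encapsulated_software"}

def compliance_level(properties: set[str]) -> str:
    """Basic compliance is the floor; all higher criteria earn the top tier."""
    if not BASIC <= properties:
        return "non-compliant"
    if HIGHER <= properties:
        return "higher"
    return "basic"

obj = BASIC | {"standards_based_identifier"}
print(compliance_level(obj))  # "basic": some, but not all, higher criteria met
```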
12. Phase 0: Build the plumbing.
Phase 1: Pilot the model on a small number of investigators experienced with cloud computing, probably within the context of BD2K awards.
Phase 2: Open the Commons credit process to grantees from a subset of NIH Institutes and Centers.
Phase 3: Open the process to all NIH grantees.
15. Approved March 23, 2015:
"In light of the advances made in security protocols for cloud computing in the past several years and given the expansion in the volume and complexity of genomic data generated by the research community, the National Institutes of Health (NIH) is now allowing investigators to request permission to transfer controlled-access genomic and associated phenotypic data obtained from NIH-designated data repositories under the auspices of the NIH Genomic Data Sharing (GDS) Policy to public or private cloud systems for data storage and analysis."
Responsibility for ensuring security and integrity remains with the institution.
18. Sensor stream = 500 EB/day; stores 69 TB/day.
Collection = 14 EB/day; stores 1 PB/day.
Total data = 14 PB; stores an average of 3.3 TB/day for 10 years!
20. NIH Office of ADDS
Vivien Bonazzi, Ph.D.
Philip Bourne, Ph.D.
Michelle Dunn, Ph.D.
Mark Guyer, Ph.D.
Jennie Larkin, Ph.D.
Leigh Finnegan
Beth Russell
NCBI
Dennis Benson, Ph.D.
Alan Graeff
David Lipman, M.D.
Jim Ostell, Ph.D.
Don Preuss
Steve Sherry
Editor's Notes
Minimum Requirements:
The business relationship is to allow distribution and billing of credits and to ensure that liability issues are resolved. The investigator who puts a digital object in the Commons is the one who retains the liability associated with its use.
Interfaces – would need to be open, but not necessarily open-source. Requires support for basic operations. In addition, the environment has to be open to all, so a private environment behind a university firewall won't work.
Identifiers and metadata: tied together, and together they enable researchers to search for and find resources.
Networking and connectivity: make sure that resources are accessible; require connection to the commodity internet and Internet2, but the key element from the investigator's point of view is a free egress tier for academics.
The environment is secure.
A&A: must support InCommon because most NIH investigators have it. Minimizes the hassle of granting access to collaborators across multiple platforms.
Approval of clouds: self-certify vs. NIH-certify vs. third-party certify. In early test cases, may simply say 'FedRAMPed'.
Cloud vs. IaaS: some IaaS providers (AWS comes to mind) may be uninterested in providing the 'conformant' layer but may support other companies that provide these services using an AWS backend. There are already exemplars of this: Seven Bridges Genomics and the Cancer Genomics Cloud Pilots are all software layers over an IaaS provider.
1965 – Generation capacity < 100 amino acids/year/person => Dayhoff creates the one-letter code to simplify computing in the punch card era
1977 – Sanger and Maxam-Gilbert sequencing invented. By the mid-1980s, a two-order-of-magnitude increase in production (maybe 10-20K bases total, 2-3K finished/year)
1986 – Development of dye-based sequencing; ABI 370A, 2,000 bases/day/instrument by the mid-1990s
1996 – Development of DNA microarrays; 2-dye 100K chips => 200K/chip/day
2000s – Next-gen sequencing; 100Ms/day