I gave this talk in the "Presidential Symposium" at the annual meeting of the American Association of Physicists in Medicine, in Anaheim, California. The President of AAPM, Dr. Maryellen Giger, wanted speakers to give visionary talks. She invited (I kid you not) Foster, Gates, and Obama. Fortunately, Bill and Barack had other commitments, so I did not need to share the time with them.
1. The present and future role of computers in medicine. Ian Foster, Computation Institute, Argonne National Lab & University of Chicago.
2. Credits. Thanks for support from the Chan Soon-Shiong Foundation, Department of Energy, National Institutes of Health, and National Science Foundation; and for many helpful conversations, Carl Kesselman, Jonathan Silverstein, Steve Tuecke, Stephan Erberich, Steve Graham, Ravi Madduri, and Patrick Soon-Shiong.
3. Biology is shifting from being an observational science to a quantitative molecular science. Old biology: measure one or two things in two or three conditions; high cost per measurement; analysis straightforward, since there is little data; working out pathways enormously difficult due to inadequate data. New biology: measure 10,000 things under many conditions; low cost per measurement; analysis no longer straightforward; the payoff can be bigger: the potential to understand a complex system. (Ajay Jain, UCSF)
4. Change health care from an empirical, qualitative system of silos of information to a model of predictive, quantitative, shared, evidence-based outcomes.
5. The health care information technology chasm: "Health care IT [is] rarely used to provide clinicians with evidence-based decision support and feedback; to support data-driven process improvement; or to link clinical care and research." (Computational Technology for Effective Health Care, NRC, 2009)
6.–8. (image-only slides)
9. Digital power = computing × communication × storage × content. Moore's law: computing doubles every 18 months; disk law: storage doubles every 12 months; fiber law: communication doubles every 9 months; community law: value grows as 2^n, where n is the number of people. (John Seely Brown)
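A quick worked example makes these doubling laws concrete. The sketch below (plain Python; the doubling times are the ones on the slide) computes how much each resource compounds over a decade:

```python
# Compound growth after t years for a resource that doubles every d years:
#   factor = 2 ** (t / d)

def growth_factor(years: float, doubling_time: float) -> float:
    """Growth multiple after `years`, given a doubling time in years."""
    return 2 ** (years / doubling_time)

# Doubling times from the slide, converted to years.
laws = {
    "computing (Moore's law, 18 months)": 1.5,
    "storage (disk law, 12 months)": 1.0,
    "communication (fiber law, 9 months)": 0.75,
}

for name, d in laws.items():
    print(f"{name}: x{growth_factor(10, d):,.0f} over 10 years")
# -> roughly x102 (computing), x1,024 (storage), x10,321 (communication)
```

The point of the multiplicative formula on the slide is visible here: the fastest-doubling factor dominates, so total "digital power" compounds far faster than any single law alone.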
12. Marching towards manycore. Intel's 80-core prototype: 2-D mesh interconnect, 62 W power. Tilera 64-core system: 8×8 grid of cores, 5 MB coherent cache, 4 DDR2 controllers, 2 10 GbE interfaces. IBM Cell: PowerPC plus 8 cores. (Dan Reed, Microsoft)
13. The evolution of the fastest supercomputer. (Figure: peak speed in flops versus year introduced, 1940–2010, with a doubling time of 1.5 years; the curve runs from ENIAC and other vacuum-tube machines through transistors, ICs, vectors, parallel vectors, and MPPs to Blue Gene/L and multi-petaflop systems; Argonne's machines and my laptop are marked.)
21. More data does not always mean more knowledge. (Folker Meyer, Genome Sequencing vs. Moore's Law: Cyber Challenges for the Next Decade, CTWatch, August 2006.)
22. The Red Queen's race: "Well, in our country," said Alice … "you'd generally get to somewhere else — if you run very fast for a long time, as we've been doing." "A slow sort of country!" said the Queen. "Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!"
23. Computing on demand. Public PUMA knowledge base: information about proteins analyzed against ~2 million gene sequences. Back-office analysis on the Grid: millions of BLAST, BLOCKS, etc. runs on OSG and TeraGrid. (Natalia Maltsev et al.)
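The pattern behind this slide is embarrassingly parallel fan-out: millions of independent sequence analyses dispatched to whatever compute is available. Here is a minimal local sketch of that pattern; `run_blast` is a hypothetical stand-in, and a real OSG/TeraGrid run would go through a grid scheduler rather than a local process pool:

```python
# Minimal sketch of grid-scale fan-out for sequence analysis.
# `run_blast` is a hypothetical stub, not a real BLAST invocation.
from concurrent.futures import ProcessPoolExecutor

def run_blast(sequence_id: str) -> tuple[str, int]:
    """Stand-in for one BLAST run; returns (sequence id, fake hit count)."""
    return sequence_id, len(sequence_id) % 100

sequences = [f"seq-{i}" for i in range(10_000)]  # ~2M in the real PUMA runs

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        for seq_id, hits in pool.map(run_blast, sequences, chunksize=256):
            pass  # aggregate results into the knowledge base here
```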
28. New ways of knowing: empiricism (300 BCE), theory (1700), simulation (1950), data (1990). Enhanced by the power of collaboration.
30. Quantitative medicine is the key to reducing healthcare costs and improving healthcare outcomes Patients with same diagnosis
31. Quantitative medicine is the key to reducing healthcare costs and improving healthcare outcomes. Patients with the same diagnosis: non-responders, toxic responders, non-toxic responders, misdiagnosed.
34. Currently, 17% of Burkitt's Lymphoma cases are incorrectly diagnosed as Diffuse Large B Cell Lymphoma. [Panels: classic Burkitt's Lymphoma, atypical Burkitt's Lymphoma, Diffuse Large B Cell Lymphoma. Louis Staudt, National Cancer Institute]
36. [Chart: survival estimates for patients with Burkitt's Lymphoma under the best treatment for Burkitt's Lymphoma vs. the best treatment for Diffuse Large B Cell Lymphoma. Dave et al., NEJM, June 8, 2006]
37. [Gene-expression panels: Burkitt's Lymphoma (classic, atypical) vs. Diffuse Large B-cell Lymphoma. Louis Staudt, National Cancer Institute]
39. Enabling quantitative medicine: collect a lot of patient data; analyze data to infer effective treatments; identify personalized treatment plans. Spanning clinical practice, basic research, and clinical trials.
40. Challenges. Increasing volumes of data, types of data: genomics, blood proteins, imaging, … New science and treatments are hidden in the data, not the biology (biomarkers). Too much for the individual physician or researcher to absorb: "… have to pay attention to cognitive support … computer-based tools and systems that offer clinicians and patients assistance for thinking about and solving problems related to specific instances of health care." NRC Report on Computational Technology for Effective Health Care: Immediate Steps and Strategic Directions, 2009
41. Bridging silos to enable quantitative medicine. [Diagram linking basic research, clinical practice, and clinical trials; labels: ongoing investigative studies; outcomes, tissue bank; screening tests; pathways library; trial subjects, outcomes.]
43. Important characteristics. We must integrate systems that may not have worked together before. These are human systems, with differing goals, incentives, capabilities. All components are dynamic: change is the norm, not the exception. Processes are evolving rapidly too. We are not building something simple like a bridge or an airline reservation system.
50. We need to function in the zone of complexity. [Chart: agreement about outcomes (low to high) vs. certainty about outcomes (low to high). "Plan and control" sits where agreement and certainty are high; chaos where both are low; the zone of complexity lies between. Ralph Stacey, Complexity and Creativity in Organizations, 1996]
51. We call these groupings virtual organizations (VOs): a set of individuals and/or institutions engaged in the controlled sharing of resources in pursuit of a common goal. But the U.S. health system is marked by fragmented and inefficient VOs with insufficient mechanisms for controlled sharing. Healthcare = dynamic, overlapping VOs, linking patient – primary care, sub-specialist – hospital, pharmacy – laboratory, insurer – … "I advocate … a model of virtual integration rather than true vertical integration …" G. Halvorson, CEO, Kaiser
52. The Grid paradigm: principles and mechanisms for dynamic VOs. Leverage service-oriented architecture (SOA); loose coupling of data and services; open software, open architecture. [Timeline, 1995–2010, spanning computer science, physics, astronomy, engineering, biology, biomedicine, and healthcare.]
53. The Grid paradigm and healthcare information integration [Grid architecture joint work with Carl Kesselman, Steve Tuecke, Stephan Erberich, and others]. Platform services: manage who can do what; make data usable and useful; name data and move it around; make data accessible over the network. Data sources: radiology, medical records, pathology, genomics, labs, RHIO.
54. The Grid paradigm and healthcare information integration. [Diagram labels: enhance user cognitive processes; security and policy; incorporate into business processes; transform data into knowledge. Platform services: integration, management, publication. Data sources: radiology, medical records, pathology, genomics, labs, RHIO.]
55. The Grid paradigm and healthcare information integration. [Stack, top to bottom: cognitive support; value services (applications, analysis); platform services (integration, management, publication); data sources (radiology, medical records, pathology, genomics, labs, RHIO); with security and policy alongside.]
56. We partition the multi-faceted interoperability problem. Process interoperability: integrate work across the healthcare enterprise. Data interoperability: syntactic (move structured data among system elements) and semantic (use information across system elements). Systems interoperability: communicate securely, reliably among system elements. [Layers: applications, analysis, integration, management, publication.]
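As a concrete illustration of the syntactic/semantic split within data interoperability, here is an entirely hypothetical sketch: the record formats, field names, and code mappings below are invented for this example, not drawn from DICOM, HL7, or any real standard.

```python
# Hypothetical sketch: syntactic interoperability gets both records parsed;
# semantic interoperability maps them onto one shared meaning.
import json

# Two sources report the same lab result in different (invented) formats.
source_a = json.loads('{"test": "serum_creatinine", "value": 1.1, "unit": "mg/dL"}')
source_b = json.loads('{"code": "CREAT", "result": 97.0, "unit": "umol/L"}')

def to_canonical(rec: dict) -> dict:
    """Map a source record onto a common analyte name and unit."""
    name = {"serum_creatinine": "creatinine", "CREAT": "creatinine"}[
        rec.get("test") or rec.get("code")]
    value = rec.get("value") or rec.get("result")
    if rec["unit"] == "umol/L":        # convert to a common unit
        value = value / 88.4           # ~88.4 umol/L per mg/dL for creatinine
    return {"analyte": name, "value": round(value, 2), "unit": "mg/dL"}

print(to_canonical(source_a))
print(to_canonical(source_b))          # both yield creatinine 1.1 mg/dL
```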
57. Publication: make information accessible. Make data available in a remotely accessible, reusable manner; leave mediation for the integration layer; gateway from local policy/protocol into wide-area mechanisms (transport, security, …).
63. Integration: making data usable and useful. [Chart: degree of communication (0%–100%) vs. degree of prior syntactic and semantic agreement (0%–100%). Labeled approaches: rigid standards-based, loosely coupled, and an adaptive approach marked "?".]
70. Integration in the absence of agreement. [Diagram: a query over the global data model, posed against the union of exported source schemas, flows through query reformulation, query optimization, and distributed query execution; the query execution engine invokes per-source wrappers, each translating the query into its source schema. Alon Halevy, 2000]
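A minimal sketch of the mediator pattern in this diagram, under invented names: the global schema, the two sources, and their field names are all hypothetical, and a real engine would add the query optimization and distributed execution steps.

```python
# Hypothetical sketch: one query in the global schema is reformulated into
# each source's own schema by a wrapper.
from dataclasses import dataclass

@dataclass
class PatientQuery:                    # query in the global (mediated) schema
    diagnosis: str

def radiology_wrapper(q: PatientQuery) -> str:
    # Source 1 stores diagnoses in a column called dx_code (invented).
    return f"SELECT patient_id FROM rad_reports WHERE dx_code = '{q.diagnosis}'"

def pathology_wrapper(q: PatientQuery) -> str:
    # Source 2 uses a different table and field for the same concept (invented).
    return f"SELECT pid FROM path_cases WHERE diagnosis_text = '{q.diagnosis}'"

def reformulate(q: PatientQuery) -> list[str]:
    """Query reformulation: one global query becomes one query per source."""
    return [radiology_wrapper(q), pathology_wrapper(q)]

for sql in reformulate(PatientQuery(diagnosis="Burkitt lymphoma")):
    print(sql)
```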
71. Analytics: transform data into knowledge. “The overwhelming success of genetic and genomic research efforts has created an enormous backlog of data with the potential to improve the quality of patient care and cost effectiveness of treatment.” — US Presidential Council of Advisors on Science and Technology, Personalized Medicine Themes, 2008
73. Microarray clustering using Taverna. Query and retrieve microarray data from a caArray data service: cagridnode.c2b2.columbia.edu:8080/wsrf/services/cagrid/CaArrayScrub. Normalize microarray data using a GenePattern analytical service: node255.broad.mit.edu:6060/wsrf/services/cagrid/PreprocessDatasetMAGEService. Hierarchical clustering using a geWorkbench analytical service: cagridnode.c2b2.columbia.edu:8080/wsrf/services/cagrid/HierarchicalClusteringMage. [Legend: workflow in/output; caGrid services; “shim” services; others.] Wei Tan et al.
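Structurally, the workflow is just function composition. The sketch below uses local stand-ins for the three remote services (the real endpoints are WSRF services; every function name and data shape here is an assumption for illustration only):

```python
# Hypothetical stand-ins for the three caGrid services named on the slide.

def query_caarray(experiment_id: str) -> list[list[float]]:
    """Stand-in for the caArray data service: return a raw expression matrix."""
    return [[1.0, 5.2, 3.1], [2.4, 0.9, 4.4]]    # rows = genes, cols = samples

def preprocess_dataset(matrix: list[list[float]]) -> list[list[float]]:
    """Stand-in for the GenePattern preprocessing service: mean-center rows."""
    return [[x - sum(row) / len(row) for x in row] for row in matrix]

def hierarchical_clustering(matrix: list[list[float]]) -> list[int]:
    """Stand-in for the geWorkbench clustering service: return cluster labels."""
    return [i % 2 for i in range(len(matrix))]   # placeholder labels

# The Taverna workflow chains the services, output to input:
raw = query_caarray("EXP-123")                   # invented experiment id
labels = hierarchical_clustering(preprocess_dataset(raw))
print(labels)
```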
75. [Workflow: virtual screening for one protein target. Inputs: PDB protein descriptions (1 protein, ~1 MB) and ZINC 3-D structures (~2M structures, 6 GB); manually prepared FRED and DOCK6 receptor files (1 per protein, defining the pocket to bind to). FRED: ~4M tasks × 60 s × 1 cpu ≈ 60K cpu-hrs; select best ~5K. DOCK6: select best ~5K. Amber scoring (AmberizeLigand, AmberizeReceptor, AmberizeComplex; generate NAB script from template and parameters defining flexible residues and # MD steps; run NAB script): ~10K tasks × 20 min × 1 cpu ≈ 3K cpu-hrs; select best ~500. GCMC: ~500 tasks × 10 hr × 100 cpu ≈ 500K cpu-hrs. For 1 target: 4 million tasks, 500,000 cpu-hrs (50 cpu-years); end by reporting ligands and complexes.]
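The stage budgets on this slide can be sanity-checked with back-of-the-envelope arithmetic, using only the task counts, per-task durations, and CPU widths quoted above; the totals land slightly above the quoted ~500,000 cpu-hrs and ~50 cpu-years, consistent with the approximations on the slide.

```python
# Back-of-the-envelope check of the slide's cpu-hour budget.
stages = [
    # (name, tasks, seconds per task, cpus per task) -- from the slide
    ("FRED",  4_000_000, 60,        1),
    ("Amber",    10_000, 20 * 60,   1),
    ("GCMC",        500, 10 * 3600, 100),
]

total = 0.0
for name, tasks, seconds, cpus in stages:
    hours = tasks * seconds * cpus / 3600
    total += hours
    print(f"{name:5s} ~{hours:>9,.0f} cpu-hrs")

print(f"total ~{total:,.0f} cpu-hrs (~{total / (24 * 365):.0f} cpu-years)")
```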
76. DOCK on BG/P: ~1M tasks on 118,000 CPUs. CPU cores: 118,784. Tasks: 934,803. Elapsed time: 7,257 sec. Compute time: 21.43 CPU years. Average task time: 667 sec. Relative efficiency: 99.7% (from 16 to 32 racks). Utilization: 99.6% sustained, 78.3% overall. Ioan Raicu et al.
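The overall utilization figure follows from the other numbers quoted on the slide; the check below is simple arithmetic, not the original analysis.

```python
# Check: overall utilization = core-seconds used / core-seconds available.
cores       = 118_784
elapsed_s   = 7_257
compute_yrs = 21.43

capacity_s = cores * elapsed_s               # core-seconds available
compute_s  = compute_yrs * 365 * 24 * 3600   # core-seconds actually used

print(f"overall utilization ~ {compute_s / capacity_s:.1%}")  # ~78%, as quoted
```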
77. The health care information technology chasm Health care IT [is] rarely used to provide clinicians with evidence-based decision support and feedback; to support data-driven process improvement; or to link clinical care and research. Computational Technology for Effective Health Care, NRC, 2009
78. Six research challenges for information technology and healthcare Patient-centered cognitive support Modeling—an individualized virtual patient Automation—integrated use, adaptivity Data sharing and collaboration Data management at scale Automated full capture of physician-patient interactions Computational Technology for Effective Health Care, NRC, 2009
79. Six research challenges for information technology and healthcare Patient-centered cognitive support Modeling—an individualized virtual patient Automation—integrated use, adaptivity Data sharing and collaboration Data management at scale Automated full capture of physician-patient interactions Computational Technology for Effective Health Care, NRC, 2009
80. Functioning in the zone of complexity. [Chart: agreement about outcomes (low to high) vs. certainty about outcomes (low to high). "Plan and control" sits where agreement and certainty are high; chaos where both are low; the zone of complexity lies between. Ralph Stacey, Complexity and Creativity in Organizations, 1996]
81. The Grid paradigm and healthcare information integration. [Stack, top to bottom: cognitive support; value services (applications, analysis); platform services (integration, management, publication); data sources (radiology, medical records, pathology, genomics, labs, RHIO); with security and policy alongside.]
82. “People tend to overestimate the short-term impact of change, and underestimate the long-term impact.” — Roy Amara “The computer revolution hasn’t happened yet.” — Alan Kay, 1997
Medicine is approaching a profound transition as the methods of molecular medicine start to transform the nature of health care. What is the significance of such methods? For the researcher, it is a paradigm shift, as the number of things that can be measured increases dramatically.
Researchers express a vision for a scientific revolution in health care, from the qualitative to the quantitative: a revolution based on information, and thus on computing.
However, even as we talk about transformation and revolution, we must recognize that computing is poorly used in health care today. These are the words of a recent National Research Council report. Thus, I will seek in my remarks today to shed light on three questions: how information technology is evolving, how this evolution may impact medicine, and how changes in medicine and health care will stress information technology.
The story of computers is one of exponentials.
But things are not quite as bad as that
What does this mean for medicine? We will certainly continue to see increasingly sophisticated computer applications aiding physicians in their tasks of observing, diagnosing, and treating, which used to be solely the domain of the human senses, the brain, and the hands. More accurate, higher resolution, and more automated data acquisition systems. Computer-aided diagnosis and treatment planning systems that use large-scale data analysis and computer simulations. Automated radiation treatment and surgery systems. However, I want to focus here on some larger systems issues relating to quantitative medicine.
Using gene expression microarrays, we find that these two diseases have quite different phenotypes: quite different genes are expressed in the two conditions. Here, columns are patients; rows are genes. Not sure what the significance of the Stage 1/Stage 2 distinction is. “The beauty of gene expression profiling data is that it is quantitative and highly reproducible. Because of this, these data can be used to generate multivariate statistical models of the clinical behavior of cancer that have great predictive power.” -- http://lymphochip.nih.gov/Staudt_Adv_Immunol_2005.pdf
And of course, we must not forget image-based biomarkers, as used in computer-aided diagnosis of breast cancer, or as shown here, in an attempt to identify biomarkers for traumatic brain injury. ROIs used in a study at UIC: (A) forceps minor (green), cortico-spinal tract (purple), inferior frontal-occipital fasciculus (red), external capsule (yellow), sagittal stratum (blue); (B) anterior corona radiata (green), superior longitudinal fasciculus (red), posterior corona radiata (blue); (C) cingulum (red), corpus callosum body (blue), splenium (yellow), genu (green), and forceps major (purple).
Then, by tracking the personalized treatment plan, we collect more patient data. Success demands that we integrate, to a far greater degree than previously possible, clinical practice, basic research, and clinical trials. A profound challenge for the health care system and for information technology.
Collecting and managing the enormous quantities of data that are now feasible, and required for EBM, is a huge challenge. However, merely putting in place the systems required to collect large quantities of data is not enough. We then need to make sense of that data: a challenge both for the physician and the researcher.
These problems arise at multiple scales. E.g. …
What these (and other examples that we will not have time to review) have in common …
We cite [Rouse, Health Care as a CAS: Implications for Design…, NAE 2008] for the right-hand side part. Must support: dynamic composition for a specific purpose; evolving community, function, environment; messy data, failure, incomplete knowledge. Nice, but insufficient: data standards; platform standards; federal policies.
Another perspective on the problem. A few words of explanation. If we are deploying a hospital IT system, we are (hopefully) in the bottom left-hand corner. “You can’t achieve success via central planning.” Quoted in Crossing the Quality Chasm, p. 312. In our scenarios, we don’t have that ability to control.
What is the alternative? We can put in place mechanisms that make it easy for groups with some common goal to form and function. Over time, things change and these groups evolve. If we are successful, they can expand, perhaps merge. Challenges: make this easy; leverage scale effects.
These are issues that the grid community has been working on for many years. We call these groupings Virtual Organizations. In healthcare today, there are of course many such “VOs.” But they are hard to form, fragmented, …
Principles and mechanisms that have been under development for some years. First CS, then the physical sciences, then biology, most recently biomedicine –
What are these grid mechanisms and concepts, then? Hard to say something sensible in a few minutes. But basically it is about separating out concerns in a way that reduces barriers to entry and permits flexible use.
API vs. protocol? “Illities”?
[Create an image here.] For example, DICOM and HL7 combine messaging and data model in the same interoperability standard. People are contextualizing this problem at the data interoperability level; systems interoperability is often neglected. An area of differentiation: bringing best practice from industry and science into the health care space. Open source platform. Experience with systems interoperability standards: IETF, OASIS, W3C, …
Scaling via automating data adapters. Representations of those things, and semantics of those representations. Talk about how services are published, data modeling, etc. Publish databases; publish services; name published objects.
Loose coupling and encapsulation. Interoperability through integration based on data mediation: evolutionary in nature; a set of scalable systems and methods; explicit in the architecture as a data integration layer. Demonstrated in GSI, GridFTP, MDS, ECOG.
Most images are never seen—and are not available—outside their originating institution