Big Data, CEP and IoT: Redefining Holistic Healthcare Information Systems an... (Tauseef Naquishbandi)
The healthcare industry has been a significant area for the innovative application of various technologies over the decades. As an area of social relevance, governmental spending on healthcare has risen steadily over the years. Complex Event Processing (CEP) has been in use for many years for situational awareness and response generation, and computing technologies have played an important role in improving several aspects of healthcare. The recently emergent technology paradigms of Big Data, the Internet of Things (IoT) and Complex Event Processing (CEP) have the potential not only to address pain points in the healthcare domain but also to redefine healthcare offerings. This paper aims to lay the groundwork for a healthcare system built on the integration of Big Data, CEP and IoT.
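CEP engines of the kind the abstract mentions match patterns over streams of events. As a minimal illustrative sketch (not the paper's actual system; the rule, threshold, and data are hypothetical), the following Python snippet flags a "sustained tachycardia" complex event whenever three consecutive heart-rate readings all exceed a threshold:

```python
from collections import deque

def detect_sustained_high(readings, threshold=100, window=3):
    """Toy CEP rule: report the index at which `window` consecutive
    readings all exceed `threshold` (a sustained-tachycardia pattern)."""
    recent = deque(maxlen=window)
    alerts = []
    for i, value in enumerate(readings):
        recent.append(value)
        if len(recent) == window and all(v > threshold for v in recent):
            alerts.append(i)  # index of the reading completing the pattern
    return alerts

# A stream of heart-rate events; the pattern completes at indices 4 and 5.
stream = [88, 95, 104, 110, 112, 117, 96]
print(detect_sustained_high(stream))  # -> [4, 5]
```

Real CEP systems generalize this idea with declarative pattern languages, time windows, and joins across multiple event streams, but the core operation is the same sliding-window match.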
Welcome to the age of cognitive computing: where intelligent machines have
moved from the realms of science fiction to the present day. This groundbreaking
technology is driving advanced discoveries and allowing improved decision-making –
resulting in better patient care.
E-Health refers to the use of information and communication technologies (ICT) in the medical field to support the treatment of patients, research, health education, and the monitoring of public health. The purpose of this paper is therefore to investigate a standardized framework for the challenges confronting e-health. A summary of e-health challenges and benefits is given, and a proposed framework is provided for e-health that could give guidance in its implementation. To realize the purpose of the paper, an inductive content analysis procedure was followed. The main finding was that although the challenges outnumber the benefits in the records reviewed, there is still hope that through appropriate ICT solutions the benefits of e-health can grow more quickly. This can lead to improved e-health service delivery, from which citizens in all countries can benefit.
This document includes three blog posts recently featured in PharmaVOICE.
The blogs focus on how enhanced access to in-depth health data is impacting an understanding of personhood, the environment around us, and the pharma operating model.
BLOG 1 (Pages 2-7)
Waves of Real Life Data Are Inundating Pharma...Can They Keep Up?
BLOG 2 (Pages 8-13)
Better understanding where and how we live will vastly improve remote patient
monitoring approaches
BLOG 3 (Pages 14-18)
5 Ways Pharma Can Be More Patient-Centered & Usher in Digital Transformation
Send me a note with your comments and feedback. Thanks for reading!
Standardization and wider use of Electronic Health Records (EHRs) create opportunities for better understanding patterns of illness and care within and across medical systems. In healthcare systems, hidden event signatures support decisions about a patient's diagnosis, prognosis, and management. The temporal history of event codes embedded in patients' records can be mined for frequently occurring sequences of event codes across patients, and frameworks exist that enable the representation, retrieval, and mining of high-order latent event structure and relationships within single and multiple event sequences. A wealth of hidden information is present in these large databases, and different data mining techniques can be used to retrieve it. A classifier approach for the detection of diabetes is presented in this paper, showing how Naive Bayes can be used for classification. In this system, medical data is categorized into five categories, namely low, average, high, very high, and critical, and treatment is given according to the predicted category. The system predicts the class label of an unknown sample, so two basic functions are performed: classification (training) and prediction (testing). The algorithm and database used affect the accuracy of the system. It can answer complex queries for diagnosing diabetes and thus assist healthcare practitioners in making intelligent clinical decisions that traditional decision support systems cannot. Over the last decade, many information visualization techniques have been developed to support the exploration of large data sets, and various interactive visual data mining tools are available for visual data analysis. It is possible to perform clinical assessment through visual interactive knowledge discovery in large electronic health record databases. In this paper, we propose that it is possible to develop a data visualization tool for interactive knowledge discovery.
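The train/predict split described above can be sketched with a minimal Gaussian Naive Bayes classifier. This is an illustrative toy, not the paper's system: the features (glucose, BMI), the sample values, and the use of only two of the five risk categories are all hypothetical.

```python
import math
from collections import defaultdict

def train(samples):
    """Fit per-class feature means/variances plus class priors.
    `samples` is a list of (feature_tuple, label) pairs."""
    by_class = defaultdict(list)
    for features, label in samples:
        by_class[label].append(features)
    model, total = {}, len(samples)
    for label, rows in by_class.items():
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        variances = [sum((x - m) ** 2 for x in col) / n + 1e-6
                     for col, m in zip(zip(*rows), means)]
        model[label] = (n / total, means, variances)
    return model

def predict(model, features):
    """Return the class label with the highest log-posterior."""
    def log_posterior(label):
        prior, means, variances = model[label]
        score = math.log(prior)
        for x, m, v in zip(features, means, variances):
            score += -0.5 * math.log(2 * math.pi * v) - (x - m) ** 2 / (2 * v)
        return score
    return max(model, key=log_posterior)

# Hypothetical (glucose, BMI) readings mapped to two of the five categories.
data = [((85, 22), "low"), ((90, 24), "low"),
        ((180, 33), "critical"), ((190, 35), "critical")]
model = train(data)
print(predict(model, (88, 23)))   # -> low
print(predict(model, (185, 34)))  # -> critical
```

Training here is just counting and averaging per class; prediction applies Bayes' rule in log space, which is why Naive Bayes remains attractive for medical screening systems with modest data.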
Spurred by advances in processing power, memory, storage, and an unprecedented wealth of data, computers are being asked to tackle increasingly complex learning tasks, often with astonishing success. Computers have now mastered a popular variant of poker, learned the laws of physics from experimental data, and become experts in video games – tasks which would have been deemed impossible not too long ago. In parallel, the number of companies centered on applying complex data analysis to varying industries has exploded, and it is thus unsurprising that some analytic companies are turning their attention to problems in healthcare. The purpose of this review is to explore what problems in medicine might benefit from such learning approaches and to use examples from the literature to introduce basic concepts in machine learning. It is important to note that seemingly large enough medical data sets and adequate learning algorithms have been available for many decades – and yet, although there are thousands of papers applying machine learning algorithms to medical data, very few have contributed meaningfully to clinical care. This lack of impact stands in stark contrast to the enormous relevance of machine learning to many other industries. Thus part of my effort will be to identify what obstacles there may be to changing the practice of medicine through statistical learning approaches, and to discuss how these might be overcome. Tarun Jaiswal and Sushma Jaiswal, "Deep Learning in Medicine", published in International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN: 2456-6470, Volume-3, Issue-4, June 2019. URL: https://www.ijtsrd.com/papers/ijtsrd23641.pdf
Paper URL: https://www.ijtsrd.com/computer-science/artificial-intelligence/23641/deep-learning-in-medicine/tarun-jaiswal
Big Data for Healthcare Analytics, final v0.3 (Yusuf Brima)
Sources of Big Data in Health (a comparative description of national and international data sources and identification of new/emerging sources of data)
Benefits of Big Data in Health Care: A Revolution (IJTSRD)
The lifespan of a normal human is increasing along with the world population, and this produces new challenges in health care. Big data changes the way data is managed, leveraged, and analyzed. With the help of big data we can reduce the cost of treatment, reduce medication, and provide better treatment through predictive analytics. Health-related data collected from various sources, such as electronic health records (EHRs), medical imaging systems, genomic sequencing, payer records, pharmaceutical research, and medical devices, is referred to as big data in healthcare. Dr. Ritushree Narayan, "Benefits of Big Data in Health Care: A Revolution", published in International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN: 2456-6470, Volume-3, Issue-3, April 2019. URL: https://www.ijtsrd.com/papers/ijtsrd22974.pdf
Paper URL: https://www.ijtsrd.com/computer-science/data-miining/22974/benefits-of-big-data-in-health-care-a-revolution/dr-ritushree-narayan
Repurposing existing drugs and updating global health policy and clinical guidelines will be essential for limiting the social and economic devastation caused by SARS-CoV-2. We are therefore leading a three-phase multinational Network Medicine clinical study (MNM COVID-19 study). The study will apply Network Medicine methodologies to repurpose existing drugs for SARS-CoV-2-infected patients and to update global health policy and clinical guidelines.
The A3BC, Biobanks and the Future of Precision Medicine (A3BC)
Professor Lyn March AM and Craig Willers discuss the Australian Arthritis and Autoimmune Biobank Collaborative (A3BC), their vision for autoimmune disease, and the future of treatments, medical advances and, ultimately, a cure for this debilitating group of musculoskeletal diseases.
Make DNA Data Actionable - Festival of Genomics London 2018 (Omar Fogliadini)
Consumers are looking for more control over their own health and healthcare, and with the advent of affordable genetic testing there are new avenues for personalised treatment and precision medicine.
As consumer tests come to the fore, patients arrive at their doctor appointments brandishing their own genetic data, full of questions and opinions. Concerns remain that patients, and not a few physicians, don't always understand what the genetic results mean or what to do about them. And as consumers get more comfortable with those companies' offerings, their visits with their doctors are often getting more complex.
With the increased use of the Internet for medical information, consumers have become medical consumers, not just patients. This has changed the doctor/patient relationship as individuals become more knowledgeable about their own health and want more control over their personal information and treatment decisions. Physicians, meanwhile, are concerned about giving patients too much access to information they may not properly understand, and even many doctors aren't well trained in the clinical implications of genetics and genomics.
The CMO’s Generation Genome report is focused on the need for the UK to maintain its presence as a world leader in genomic medicine. It presented a vision of genomics being integrated into the NHS within the next 5 years.
The “vast majority” of NHS doctors are “not up to speed” with modern genetic techniques that can transform patients’ chances.
GENOMIC INTELLIGENCE
Suisse Life Science specialises in making DNA data useful for population health management.
Using our flagship Scienceof[DNA]f(x) technology, Suisse Life Science Group can provide you with an evidence-based, dynamic lifestyle and behavioural medicine system built on genetic variant associations supported by literature citations for health, wellness or chronic disease, and customised around your needs.
What used to take weeks of painstaking manual curation can now be completed in days.
PERSONALISED HEALTH TECHNOLOGY FOR GENOMIC COMPANIES OR CUSTOM APPLICATIONS
We help companies by providing a comprehensive nutritional, metabolic and lifestyle analysis and management program. It includes nutrition intake and lifestyle monitoring, followed by personalised nutrition, chronic disease prevention and sport recommendations tailored for the user, informed by their biomarkers and updated in real time with life data from consumer devices (smartphones, wearables...).
Ready-to-market technology.
Ask for a demo at info@suisselifescience.com
Advances in information and communication technologies have led to the emergence of the Internet of Things (IoT). In the modern health care environment, IoT technologies bring convenience to physicians and patients, as they are applied to various medical areas such as real-time monitoring, patient information, and healthcare management. Body sensor network (BSN) technology is one of the core technologies of IoT development in healthcare systems, allowing a patient to be monitored using a collection of tiny, low-powered, lightweight wireless sensor nodes.
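As a rough illustration of the BSN pattern described above (not any specific system mentioned here; node IDs, vitals, and ranges are hypothetical), a gateway collecting readings from several sensor nodes might forward only out-of-range vitals to clinicians:

```python
# Toy body-sensor-network gateway: nodes report vitals; the gateway
# forwards only readings outside a normal range (all values hypothetical).
NORMAL_RANGES = {
    "heart_rate": (60, 100),      # beats per minute
    "spo2": (95, 100),            # blood-oxygen saturation, percent
    "temperature": (36.1, 37.8),  # degrees Celsius
}

def triage(readings):
    """Return the subset of (node_id, vital, value) tuples outside range."""
    alerts = []
    for node_id, vital, value in readings:
        low, high = NORMAL_RANGES[vital]
        if not (low <= value <= high):
            alerts.append((node_id, vital, value))
    return alerts

batch = [
    ("node-1", "heart_rate", 72),
    ("node-2", "spo2", 91),           # below range -> alert
    ("node-3", "temperature", 38.4),  # above range -> alert
]
print(triage(batch))
```

Filtering at the gateway like this is a common design choice in BSNs: it keeps the battery-constrained sensor nodes simple while reducing the volume of data sent upstream.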
This study aimed at identifying the issues, challenges and opportunities perceived by health consumers in Tanzania regarding the interoperability of electronic health records (EHRs). Reaching a level of seamless data sharing among hospitals needs the cooperation of all stakeholders, especially the health consumers whose data are to be shared; without their acceptance, there is nothing to share. Recognizing this, we conducted a study in Tanzania to identify the challenges, issues and opportunities for health information exchange through interoperable EHRs. The study was conducted in three major cities of Tanzania to identify the security, privacy and confidentiality issues of information sharing, together with related challenges to data sharing, in order to build a clear picture of how to implement EHRs that will be trusted by health consumers. The participants (n=240) were surveyed on computer usage, EHR knowledge, demographics, and security and privacy issues. A total of 200 surveys were completed and returned (83.3% response rate). Among the respondents, 67.5% were women, 62.6% had not heard of EHRs, and 73% were highly concerned about the privacy and security of their information; 75% believed that the introduction of various security mechanisms would make EHRs more secure and thus better. We conducted a number of chi-square tests (p<0.05) and found strong relationships among the variables of age, computer use, EHR knowledge, and concern for privacy and security. The study also showed only a small difference of 8.5% between those who think EHRs are safer than paper records and those who think otherwise. The general observation of the study was that security and health consumer involvement are the two keys to successful EHRs in our hospital practices, and addressing them will make consumers more willing to allow their records to be shared among different health organizations. Beyond the issues identified, this study helped us to identify the key requirements to be implemented in our proposed framework.
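The chi-square tests of independence reported above can be reproduced in outline. The sketch below uses made-up counts, not the study's data: it computes the statistic for a 2x2 contingency table and compares it against 3.841, the well-known chi-square critical value for df=1 at p=0.05.

```python
def chi_square_statistic(table):
    """Chi-square statistic for an r x c contingency table of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts: rows = uses a computer / does not,
# columns = concerned about EHR privacy / not concerned.
table = [[80, 20],
         [40, 60]]
stat = chi_square_statistic(table)
CRITICAL_05_DF1 = 3.841  # chi-square critical value, df=1, alpha=0.05
print(stat > CRITICAL_05_DF1)  # -> True: association is significant here
```

In practice one would use a statistics library (e.g. a chi-square contingency-table routine) that also returns an exact p-value, but the expected-count calculation is exactly what such routines perform internally.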
Promise and Peril: How Artificial Intelligence Is Transforming Health Care (Δρ. Γιώργος K. Κασάπης)
AI has enormous potential to improve the quality of health care, enable early diagnosis of diseases, and reduce costs. But if implemented incautiously, AI can exacerbate health disparities, endanger patient privacy, and perpetuate bias. STAT, with support from the Commonwealth Fund, explored these possibilities and pitfalls during the past year and a half, illuminating best practices while identifying concerns and regulatory gaps. This report includes many of the articles we published and summarizes our findings, as well as recommendations we heard from caregivers, health care executives, academic experts, patient advocates, and others.
IBM Watson Health: How Cognitive Technologies Have Begun Transforming Clinica... (Maged N. Kamel Boulos)
Cite as: Kamel Boulos MN. IBM Watson Health: how cognitive technologies have begun transforming clinical medicine and healthcare (Oral session IV – Patient safety tools, Thursday 19 May 2016, 15:45-16:45, Hotel Puijonsarvi, Kuopio). In: Proceedings of the 4th Nordic Conference on Research in Patient Safety and Quality in Healthcare (NSQH2016), Kuopio, Finland, 18-20 May 2016 (organised by University of Eastern Finland), p.29. URL: http://www.uef.fi/NSQH2016 (In: Nykanen I (ed.). The 4th Nordic Conference on Research in Patient Safety and Quality in Healthcare. Kuopio, Finland, May 18-20, 2016. Program and Abstracts. Publications of the University of Eastern Finland. Report and Studies in Health Sciences 21. 2016, p.29 (of 119 p.). ISBN: 978-952-61-2130-7 (nid.), ISSNL: 1798-5722, ISSN: 1798-5730.)
IBM Watson health: how cognitive technologies have begun transforming clinical medicine and healthcare
Maged N Kamel Boulos
ABSTRACT
Background: IBM Watson Health (http://www.ibm.com/smarterplanet/us/en/ibmwatson/health/) belongs to a new generation of smart cognitive computing technologies (a type of artificial intelligence) that are poised to transform the way healthcare is delivered, and to vastly improve clinical outcomes, quality of care and patient safety.
Objectives: Our goal was to collect and document the huge potential of a range of emerging and exemplary uses of IBM Watson in healthcare in both developed and developing country settings.
Methods: A survey of current peer reviewed and grey literature has been conducted, looking for reports and case studies involving the use of IBM Watson in different health and healthcare applications.
Results, conclusions and clinical implications: With its ability to make sense of unstructured medical information by analysing the meaning and context of natural language, and uncovering important knowledge buried within large volumes of data and information, including medical images, IBM Watson is exceptionally well suited for clinical and healthcare decision support, where there are often elements of ambiguity and uncertainty. It has been (or is currently being) successfully deployed in many developed countries in the West, as well as in developing countries, such as India and South Africa. IBM Watson unlocks a complex case by acquiring information from multiple sources, e.g., accessing the electronic patient record, then parsing all related medical evidence at up to 60 million pages per second. After processing all of this information, Watson offers relevant and prioritised suggestions to the decision-maker, e.g., helping clinicians identify the best diagnosis and treatment options in complex oncology cases, and providing hospital managers with new operational insights. The ultimate goals are to reduce cost, medical errors, mortality rates, and help improve patients' quality of life.
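The decision-support flow the abstract describes (acquire evidence from multiple sources, score it, return prioritised suggestions) can be caricatured in a few lines. The toy ranker below is purely illustrative: the candidate conditions, keyword sets, and scoring rule are invented for this sketch and bear no relation to Watson's actual implementation.

```python
# Toy evidence ranker: score candidate diagnoses by how many evidence
# keywords from the patient record each candidate matches (illustrative only).
CANDIDATES = {
    "type 2 diabetes": {"polyuria", "fatigue", "high glucose"},
    "anemia": {"fatigue", "pallor", "low hemoglobin"},
    "hypothyroidism": {"fatigue", "weight gain", "cold intolerance"},
}

def rank_suggestions(record_findings):
    """Return candidate conditions with at least one match,
    sorted by overlap with the record's findings (best first)."""
    findings = set(record_findings)
    scored = [(len(findings & keywords), name)
              for name, keywords in CANDIDATES.items()]
    scored.sort(reverse=True)
    return [name for score, name in scored if score > 0]

findings = ["fatigue", "polyuria", "high glucose"]
print(rank_suggestions(findings))  # diabetes ranks first with 3 matches
```

A production system replaces the keyword overlap with natural-language understanding over millions of pages of evidence, but the output contract is the same: a ranked, prioritised list of suggestions handed to the human decision-maker.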
Introduction: Basics of Biomedical Informatics (Serkan Turkeli)
At the end of this course, students will be able to:
• Define medical informatics
• Define information management, information technology and informatics
• Define the concepts of medical informatics
• Select the best techniques to manage a medical informatics project.
The Evolution of Data Analysis with Hadoop - StampedeCon 2014StampedeCon
At StampedeCon 2014, Tom Wheeler (Cloudera) presented, "The Evolution of Data Analysis with Hadoop."
This session will lead the audience through the evolution of data analysis in Hadoop to illustrate its progression from the original low-level, batch-oriented MapReduce approach to today’s higher-level interactive tools that require very little technical knowledge. We’ll discuss Apache Crunch, Hive, Impala and Solr.
While the nature of this talk is somewhat technical, no prior knowledge of Hadoop or any specific programming language is required. Frequent live demonstrations of the tools discussed will emphasize that analyzing data in Hadoop can be as easy as using a relational database or Internet search engine.
Standardization and wider use of Electronic Health records (EHR) creates opportunities for
better understanding patterns of illness and care within and across medical systems. In the healthcare
systems, hidden event signatures allow taking decision for patient’s diagnosis, prognosis, and
management. Temporal history of event codes embedded in patients' records, investigates frequently
occurring sequences of event codes across patients. There is a framework that enables the
representation, retrieval, and mining of high order latent event structure and relationships within
single and multiple event sequences. There is a wealth of hidden information present in the large
databases. Different data mining techniques can be used for retrieving data. A classifier approach for
detection of diabetes is presented in this paper and shows how Naive Bayes can be used for
classification purpose. In this system, medical data is categories into five categories namely low,
average, high and very high and critical, treatment is given as per the predicted category. The system
will predict the class label of unknown sample. Hence two basic functions namely classification
(training) and prediction (testing) will be performed. An algorithm and database used affects the
accuracy of the system. It can answer complex queries for diagnosing diabetes disease and thus assist
healthcare practitioners to make intelligent clinical decisions which traditional decision support
systems cannot.Over the last decade, so many information visualization techniques have been
developed to support the exploration of large data sets. There are various interactive visual data
mining tools available for visual data analysis. It is possible to perform clinical assessment for visual
interactive knowledge discovery in large electronic health record databases. In this paper, we
proposed that it is possible to develop a tool for data visualization for interactive knowledge
discovery.
Spurred by advances in processing power, memory, storage, and an unprecedented wealth of data, computers are being asked to tackle increasingly complex learning tasks, often with astonishing success. Computers have now mastered a popular variant of poker, learned the laws of physics from experimental data, and become experts in video games – tasks which would have been deemed impossible not too long ago. In parallel, the number of companies centered on applying complex data analysis to varying industries has exploded, and it is thus unsurprising that some analytic companies are turning attention to problems in healthcare. The purpose of this review is to explore what problems in medicine might benefit from such learning approaches and use examples from the literature to introduce basic concepts in machine learning. It is important to note that seemingly large enough medical data sets and adequate learning algorithms have been available for many decades – and yet, although there are thousands of papers applying machine learning algorithms to medical data, very few have contributed meaningfully to clinical care. This lack of impact stands in stark contrast to the enormous relevance of machine learning to many other industries. Thus part of my effort will be to identify what obstacles there may be to changing the practice of medicine through statistical learning approaches, and discuss how these might be overcome. Tarun Jaiswal | Sushma Jaiswal ""Deep Learning in Medicine"" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-4 , June 2019, URL: https://www.ijtsrd.com/papers/ijtsrd23641.pdf
Paper URL: https://www.ijtsrd.com/computer-science/artificial-intelligence/23641/deep-learning-in-medicine/tarun-jaiswal
Big data for healthcare analytics final -v0.3 mizYusuf Brima
Sources of Big Data in Health (a comparative description of national and international data sources and identification of new/emerging sources of data)
Benefits of Big Data in Health Care A Revolutionijtsrd
Lifespan of a normal human is increasing with the world population and it produces new challenge in health care. big data change the method of data management ,leverage data and analyzing data.with the help of big data we can reduces the costs of treatment, reducing medication and provide better treatment with predictive analytics. Health related data collected from various sources like electronic health record EHR ,medical imaging system, genomic sequencing, pay of records, pharmaceutical research , and medical devices, etc. are refers to as big data in healthcare. Dr. Ritushree Narayan ""Benefits of Big Data in Health Care: A Revolution"" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-3 , April 2019, URL: https://www.ijtsrd.com/papers/ijtsrd22974.pdf
Paper URL: https://www.ijtsrd.com/computer-science/data-miining/22974/benefits-of-big-data-in-health-care-a-revolution/dr-ritushree-narayan
Repurposed existing drugs and updated global health policy and clinical guidelines will be essential for limiting the social and economic devastation caused by this virus. So, we are leading a three-phase multinational Network Medicine clinical study (MNM COVID-19 study). The study will apply Network Medicine methodologies to repurpose existing drugs for SARS-CoV-2 infected patients and update global health policy and clinical guidelines.
The A3BC, Biobanks and the future of precision medicine - A3BC -
Professor Lyn March AM and Craig WIllers discuss The Australian Arthritis and Autoimmune Biobank Collaborative (A3BC), their vision for Autoimmune disease and the future of treatments, medical advances and ultimately a cure for the debilitating group of musculoskeletal diseases.
Make DNA data actionable - Festival of Genomics London 2018Omar Fogliadini
Consumers are looking for more control over their own health and healthcare, and with the advent of affordable genetic testing there are new avenues for personalised treatment and precision medicine.
As consumer tests come to the fore and patients arrive at their doctor appointments brandishing their own genetic data and full of questions and opinions. Concerns still remain that patients – and not a few physicians – don't always understand what the genetic results mean and just what to do about them. But as consumers get more comfortable with those companies' offerings, the visits with their doctors are often getting more complex.
With the increased use of the Internet for medical information, consumers have become medical consumers not just patients This has created a change in the doctor/patient relationship as individuals become more knowledgeable about their own health and want more control over their personal information and treatment decisions. Physicians, meanwhile, are concerned about giving patients too much access to information they may not properly understand. Even many doctors aren't well-trained in the clinical implications of genetics and genomics.
The CMO’s Generation Genome report is focused on the need for the UK to maintain its presence as a world leader in genomic medicine. It presented a vision of genomics being integrated into the NHS within the next 5 years.
The “vast majority” of NHS doctors are “not up to speed” with modern genetic techniques that can transform patients’ chances.
GENOMIC INTELLIGENCE
Suisse Life Science specialiSes in making DNA data useful for population health management.
Using our flagship Scienceof[DNA]f(x) technology, Suisse Life Science Group can provide you with an evidence-based lifestyle behavioural medicine dynamic system based on the associated genetic variants supported by literature citations for health, wellness or chronic diseases, and customised around your needs.
What used to take weeks of painstaking manual curation can now be completed in days.
PERSONALISED HEALTH TECHNOLOGY FOR GENOMIC COMPANIES OR CUSTOM APPLICATIONS
We help companies by providing a comprehensive nutritional, metabolic and lifestyle analysis and management program that includes nutrition intake and lifestyle monitoring, followed by a personalised nutritional, chronic disease prevention and sport recommendations, tailored for the user, informed on their biomarkers but updated - in real-time - with life data from consumer devices (smartphones, wearables…).
Ready-to-market technology.
Ask for a demo at info@suisselifescience.com
Advances in information and communication technologies have led to the emergence of Internet of Things
(IoT). In the modern health care environment, the usage of IoT technologies brings convenience to physicians and
patients since they are applied to various medical areas (such as real-time monitoring, patient information and healthcare
management). The body sensor network (BSN) technology is one of the core technologies of IoT developments in
healthcare system, where a patient can be monitored using a collection of tiny-powered and lightweight wireless sensor
nodes
This study aimed to identify the issues, challenges and opportunities perceived by health consumers in Tanzania regarding interoperability of electronic health records (EHRs). Reaching that level of seamless data sharing among hospitals needs the cooperation of all stakeholders, especially the health consumers whose data are to be shared; without their acceptance there is nothing to share. Recognizing this, we conducted a study in Tanzania to identify the challenges, issues and opportunities for health information exchange through interoperable EHRs. The study was conducted in three major cities of Tanzania to identify the security, privacy and confidentiality issues of information sharing, together with related challenges to data sharing, in order to form a clear picture of how to implement EHRs that will be trusted by health consumers. The participants (n=240) were surveyed on computer usage, EHR knowledge, demographics, and security and privacy issues. A total of 200 surveys were completed and returned (83.3% response rate). Among the respondents, 67.5% were women, 62.6% had not heard of EHRs, and 73% were highly concerned about the privacy and security of their information; 75% believed that introducing various security mechanisms would make EHRs more secure and thus better. We conducted a number of chi-square tests (p<0.05) and found a strong relationship among the variables of age, computer use, EHR knowledge and concern for privacy and security. The study also showed only a small difference of 8.5% between those who think EHRs are safer than paper records and those who think otherwise. The general observation of the study was that security and health-consumer involvement are the two keys to successful EHR adoption in our hospital practices, and addressing them will make consumers more willing to allow their records to be shared among different health organizations. So besides the issues identified, this study helped us identify the key requirements to be implemented in our proposed framework.
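The survey analysis above rests on chi-square tests of independence between categorical variables (e.g., EHR knowledge versus privacy concern) at p<0.05. As a hedged illustration only — the contingency counts below are invented for the sketch, not the study's actual tabulations — such a test for a 2x2 table can be computed with the standard library (for one degree of freedom, the p-value is erfc(sqrt(x/2))):

```python
import math

def chi_square_2x2(table):
    """Chi-square test of independence for a 2x2 contingency table.

    Returns (statistic, p_value). With one degree of freedom the
    chi-square survival function reduces to erfc(sqrt(x / 2)).
    """
    (a, b), (c, d) = table
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        expected = row * col / n
        stat += (obs - expected) ** 2 / expected
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Hypothetical counts: rows = has / has not heard of EHRs,
# columns = highly concerned / not highly concerned about privacy.
table = [[40, 35], [106, 19]]
stat, p = chi_square_2x2(table)
print(f"chi2 = {stat:.2f}, p = {p:.6f}")   # p < 0.05 -> variables related

# The study's response rate is simple arithmetic: 200 of 240 surveys.
print(f"response rate = {200 / 240:.1%}")  # 83.3%
```

The scipy equivalent (`scipy.stats.chi2_contingency`) also applies Yates' continuity correction by default for 2x2 tables, so its statistic would differ slightly from this uncorrected version.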
Promise and peril: How artificial intelligence is transforming health careΔρ. Γιώργος K. Κασάπης
AI has enormous potential to improve the quality of health care, enable early diagnosis of diseases, and reduce costs. But if implemented incautiously, AI can exacerbate health disparities, endanger patient privacy, and perpetuate bias. STAT, with support from the Commonwealth Fund, explored these possibilities and pitfalls during the past year and a half, illuminating best practices while identifying concerns and regulatory gaps. This report includes many of the articles we published and summarizes our findings, as well as recommendations we heard from caregivers, health care executives, academic experts, patient advocates, and others.
IBM Watson Health: How cognitive technologies have begun transforming clinica...Maged N. Kamel Boulos
Cite as: Kamel Boulos MN. IBM Watson Health: how cognitive technologies have begun transforming clinical medicine and healthcare (Oral session IV – Patient safety tools, Thursday 19 May 2016, 15:45-16:45, Hotel Puijonsarvi, Kuopio). In: Proceedings of the 4th Nordic Conference on Research in Patient Safety and Quality in Healthcare (NSQH2016), Kuopio, Finland, 18-20 May 2016 (organised by University of Eastern Finland), p.29. URL: http://www.uef.fi/NSQH2016 (In: Nykanen I (ed.). The 4th Nordic Conference on Research in Patient Safety and Quality in Healthcare. Kuopio, Finland, May 18-20, 2016. Program and Abstracts. Publications of the University of Eastern Finland. Report and Studies in Health Sciences 21. 2016, p.29 (of 119 p.). ISBN: 978-952-61-2130-7 (nid.), ISSNL: 1798-5722, ISSN: 1798-5730.)
IBM Watson health: how cognitive technologies have begun transforming clinical medicine and healthcare
Maged N Kamel Boulos
ABSTRACT
Background: IBM Watson Health (http://www.ibm.com/smarterplanet/us/en/ibmwatson/health/) belongs to a new generation of smart cognitive computing technologies (a type of artificial intelligence) that are poised to transform the way healthcare is delivered, and to vastly improve clinical outcomes, quality of care and patient safety.
Objectives: Our goal was to collect and document the huge potential of a range of emerging and exemplary uses of IBM Watson in healthcare in both developed and developing country settings.
Methods: A survey of current peer reviewed and grey literature has been conducted, looking for reports and case studies involving the use of IBM Watson in different health and healthcare applications.
Results, conclusions and clinical implications: With its ability to make sense of unstructured medical information by analysing the meaning and context of natural language, and uncovering important knowledge buried within large volumes of data and information, including medical images, IBM Watson is exceptionally well suited for clinical and healthcare decision support, where there are often elements of ambiguity and uncertainty. It has been (or is currently being) successfully deployed in many developed countries in the West, as well as in developing countries, such as India and South Africa. IBM Watson unlocks a complex case by acquiring information from multiple sources, e.g., accessing the electronic patient record, then parsing all related medical evidence at up to 60 million pages per second. After processing all of this information, Watson offers relevant and prioritised suggestions to the decision-maker, e.g., helping clinicians identify the best diagnosis and treatment options in complex oncology cases, and providing hospital managers with new operational insights. The ultimate goals are to reduce cost, medical errors, mortality rates, and help improve patients' quality of life.
Giris basics of biomedical informatics generalSerkan Turkeli
At the end of this course, students will be able to
• Define medical informatics
• Define information management, information technology and informatics
• Define concepts of medical informatics
• Select the best techniques to manage a medical informatics project.
The Evolution of Data Analysis with Hadoop - StampedeCon 2014StampedeCon
At StampedeCon 2014, Tom Wheeler (Cloudera) presented, "The Evolution of Data Analysis with Hadoop."
This session will lead the audience through the evolution of data analysis in Hadoop to illustrate its progression from the original low-level, batch-oriented MapReduce approach to today’s higher-level interactive tools that require very little technical knowledge. We’ll discuss Apache Crunch, Hive, Impala and Solr.
While the nature of this talk is somewhat technical, no prior knowledge of Hadoop or any specific programming language is required. Frequent live demonstrations of the tools discussed will emphasize that analyzing data in Hadoop can be as easy as using a relational database or Internet search engine.
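The "low-level, batch-oriented MapReduce approach" the talk starts from can be imitated in a few lines of pure Python — a single-process toy of the map, shuffle and reduce phases, not Hadoop's actual Java API — which makes the contrast with the higher-level tools concrete: in Hive or Impala the same word count collapses to one declarative GROUP BY query.

```python
from collections import defaultdict

# Toy, single-process imitation of Hadoop's map -> shuffle -> reduce
# phases for the classic word-count job.

def mapper(line):
    """Map phase: emit a (word, 1) pair for every word in a line."""
    for word in line.lower().split():
        yield word, 1

def shuffle(pairs):
    """Shuffle phase: group all emitted values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reducer(word, counts):
    """Reduce phase: sum the counts collected for one word."""
    return word, sum(counts)

def word_count(lines):
    pairs = (pair for line in lines for pair in mapper(line))
    return dict(reducer(w, c) for w, c in shuffle(pairs).items())

print(word_count(["big data in healthcare", "big data analytics"]))

# The Hive/Impala version of the same analysis is one SQL statement:
#   SELECT word, COUNT(*) FROM words GROUP BY word;
```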
This presentation, by big data guru Bernard Marr, outlines in simple terms what Big Data is and how it is used today. It covers the 5 V's of Big Data as well as a number of high value use cases.
A REVIEW OF DATA INTELLIGENCE APPLICATIONS WITHIN HEALTHCARE SECTOR IN THE UN...ijsc
Data intelligence technologies have transformed the United States healthcare sector, bringing about transformational advances in patient care, research, and healthcare management. The United States is the focus because many academic and research institutions in the country are at the forefront of healthcare data research, making it an attractive location for in-depth studies. This paper explores the diverse realm of Data Intelligence in Healthcare, examining its applications, challenges, ethical considerations, and emerging trends. Data Intelligence Applications encompass a spectrum of technologies designed to collect, process, analyze, and interpret data effectively. These applications enable healthcare practitioners to make more educated decisions, forecast health outcomes, manage population health, customize treatment, optimize workflows, assist research, improve data security, and drive healthcare analytics. However, the use of data intelligence applications raises concerns about data privacy, fairness, transparency, data quality, accountability, fair data access, regulatory compliance, and the balance between automation and human judgment. Emerging themes include the dominance of AI and machine learning, stronger ethical and regulatory frameworks, edge and quantum computing, data democratization, sustainability applications, and developing human-machine collaboration. Data intelligence has an impact that goes beyond healthcare delivery, influencing decision-making, scientific discovery, education, and economic growth. Understanding its potential and ethical responsibilities is paramount as data-driven insights redefine healthcare excellence and extend their influence across sectors.
This white paper offers a detailed perspective on how big data is impacting the healthcare industry and its underlying implication on the industry as a whole. It outlines the role of big data in healthcare, its benefits, core components and challenges faced by the healthcare sector towards full-fledged adoption & implementation.
Modern Genomics in conjunction with Patients IP on the value chainJohn Peter Mary Wubbe
Perhaps it is now time to move past the classical dichotomy of privacy versus data utility and to seize the possibilities of emerging health technologies, processes, and projects. Far from being harmful, patient-controlled health records, combined with emerging technologies that work within a defined, overseen and legislated regulatory framework, will facilitate innumerable benefits.
There are a number of forthcoming statistical solutions that would permit assembly of data sets at the patient level with limited risk to privacy, and would also untangle the cumbersome contentions of data ownership and access.
Overview of Health Informatics: a survey of the fundamentals of health information technology, the forces behind health informatics, and educational and career opportunities in health informatics.
Please respond to each of the 3 posts with 3 APA sources no older th...maple8qvlisbey
Please respond to each of the 3 posts with 3 APA sources no older than 5 years old. APA format must be exceptional.
Reply 1
Professor,
How can big data impact prescription errors? Be specific and provide examples. Who should be on the team to implement this project and why? Support your work with the literature.
Reply 2
Ruth Niyasimi,
Big Data Risks and Rewards
Big data is defined as the process of collecting, analyzing, and leveraging consumer, patient, physical, and clinical data that is too vast or complex to be understood by traditional means of data processing. In healthcare, data is generated from medical records, patient portals, government agencies, research studies, electronic health records, and medical devices. The data generated in healthcare is used to make decisions that will have an impact on patient health outcomes (Raghupathi & Raghupathi, 2014). Healthcare is a critical sector in our society since it is tasked with the duty to prevent, diagnose and treat illnesses and diseases affecting the community. In the past, health information was stored on paper, but through advancements in technology things have significantly changed, as patient information is now stored in electronic health records (EHRs).
The adoption of big data has had significant impacts on patient services and related areas. According to Raghupathi and Raghupathi (2014), healthcare has for many years generated huge volumes of data that was stored in hard copy; digitizing it was a critical step toward improving the quality of healthcare delivery while reducing costs. This huge volume of information is crucial to healthcare because, through digitalization, it has become possible to detect diseases at an early stage and take necessary intervention measures. Secondly, big data enhances continuity of care, from the moment a patient visits a hospital until discharge. For example, the lab tests taken from patients and records of other specialized treatments are stored in a way that other departments can access in the future, preventing duplicate labs and imaging studies (Adibuzzaman et al., 2017). This cuts costs while improving service delivery.
Although big data has had a tremendous impact on healthcare systems, it has also created some problems. Firstly, the use of technology such as EHRs has resulted in security issues and privacy threats. According to McGonigle and Mastrian (2017), technology has enabled the interoperability of healthcare data. Interoperability means sharing important health data across different organizations while ensuring it is presented understandably to the user. Unauthorized third parties can intercept this information, and the Health Insurance Portability and Accountability Act (HIPAA) has shown little concern for patient data breach cases. Another problem is that big data is not static; it requires continuous system updates to ensure that it ...
Modern Era of Medical Field : E-HealthFull Text ijbbjournal
E-Health refers to the use of information and communication technologies (ICT) in the medical field to support the treatment of patients, research, health education, and the monitoring of public health. The purpose of this paper is therefore to investigate a standardized framework for the challenges confronting e-health. A rundown of e-health challenges is given, and a proposed framework for e-health is also provided that could give direction in its implementation. To realize the purpose of the paper, an inductive content analysis procedure was followed. The main outcome was that although the challenges outnumber the benefits in the records provided, there is still hope that through appropriate ICT arrangements the benefits of e-health can grow more quickly. This can lead to improved e-health service delivery, from which citizens in all countries can benefit.
The care business traditionally has generated massive amounts of inf.pdfanudamobileshopee
The healthcare industry historically has generated large amounts of data, driven by record keeping, compliance and regulatory requirements, and patient care [1]. While most data is stored in text form, the current trend is toward rapid digitization of these massive amounts of information. Driven by mandatory requirements and the potential to improve the quality of healthcare delivery while reducing costs, these huge quantities of data (known as 'big data') hold the promise of supporting a wide range of medical and healthcare functions, including, among others, clinical decision support, disease surveillance, and population health management [2, 3, 4, 5]. Reports say data from the U.S. healthcare system alone reached 150 exabytes in 2011. At this rate of growth, big data for U.S. healthcare will soon reach the zettabyte (10^21 bytes) scale and, shortly after, the yottabyte (10^24 bytes) [6]. Kaiser Permanente, the California-based health network, which has more than nine million members, is believed to have between 26.5 and 44 petabytes of potentially rich data from EHRs, including images and annotations [6].
By definition, big data in healthcare refers to electronic health data sets so large and complex that they are difficult (or impossible) to manage with traditional software and/or hardware; nor can they be easily managed with traditional or common data management tools and methods [7]. Big data in healthcare is overwhelming not only because of its volume but also because of the variety of data types and the speed at which it must be managed [7]. The totality of data related to patient care and well-being composes 'big data' in the healthcare industry. It includes clinical data from CPOE and clinical decision support systems (physicians' written notes and prescriptions, medical imaging, laboratory, pharmacy, insurance, and other administrative data); patient data in electronic patient records (EPRs); machine-generated/sensor data, such as from monitoring vital signs; social media posts, including Twitter feeds (so-called tweets) [8], blogs [9], status updates on Facebook and other platforms, and web pages; and less patient-specific information, including emergency care data, news feeds, and articles in medical journals.
For the big data scientist there is, amid this huge quantity and variety of data, opportunity. By discovering associations and understanding patterns and trends within the data, big data analytics has the potential to improve care, save lives and lower costs. Thus, big data analytics applications in healthcare take advantage of the explosion in data to extract insights for making better-informed decisions [10, 11, 12], and as a research category are referred to as, no surprise here, big data analytics in healthcare [13, 14, 15]. When big data is synthesized and an...
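The storage figures quoted above are straightforward unit arithmetic (an exabyte is 10^18 bytes, a zettabyte 10^21, a yottabyte 10^24). A small sketch using only the numbers from the passage; the 40% annual growth rate is an assumption of this sketch, not a figure from the source:

```python
import math

PETA, EXA, ZETTA, YOTTA = 10**15, 10**18, 10**21, 10**24

# ~150 exabytes reported for the U.S. healthcare system in 2011.
us_healthcare_2011 = 150 * EXA
print(us_healthcare_2011 / ZETTA)   # fraction of a zettabyte: 0.15

# Kaiser Permanente's estimated EHR holdings, 26.5 to 44 petabytes.
kaiser_low, kaiser_high = 26.5 * PETA, 44 * PETA
print(round(kaiser_high / kaiser_low, 2))   # upper bound ~1.66x the lower

# Hypothetical projection: with an assumed 40% annual growth rate,
# years for 150 EB to reach one zettabyte -- solve 0.15 * 1.4**t >= 1.
t = math.log(ZETTA / us_healthcare_2011) / math.log(1.4)
print(round(t, 1))   # under that assumption, roughly 5-6 years
```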
In the realm of healthcare, data is a critical asset that holds the potential to revolutionise patient care, enhance treatment outcomes, and streamline healthcare operations. One of the most valuable resources in this data-driven landscape is healthcare datasets. These datasets encompass a wide range of information, from patient medical records and clinical trial data to health insurance claims and public health statistics.
Healthcare datasets serve as the foundation for evidence-based medicine, enabling researchers and healthcare professionals to analyse trends, identify patterns, and make informed decisions. By delving into these datasets, medical researchers can uncover new insights into disease progression, treatment efficacy, and patient outcomes. This knowledge is crucial for developing more effective therapies, improving diagnostic accuracy, and tailoring treatment plans to individual patients' needs.
Moreover, healthcare datasets play a pivotal role in public health initiatives. By examining data on disease incidence, vaccination rates, and health behaviours, public health officials can design targeted interventions, allocate resources more efficiently, and monitor the impact of public health policies. This data-driven approach helps in controlling the spread of infectious diseases, promoting healthy lifestyles, and ultimately reducing the burden of illness on society.
The integration of healthcare datasets with advanced analytics and machine learning technologies opens up even more possibilities. Predictive models built on these datasets can forecast disease outbreaks, identify high-risk patient populations, and optimise resource allocation in healthcare facilities. These predictive insights are invaluable for proactive healthcare management and ensuring that patients receive timely and appropriate care.
However, the effective use of healthcare datasets is not without challenges. Issues related to data privacy, security, and interoperability need to be addressed to ensure that sensitive patient information is protected and that data from different sources can be integrated seamlessly. Additionally, the quality and completeness of data are crucial for drawing accurate conclusions, necessitating rigorous data management and validation practices.
In conclusion, healthcare datasets are a vital resource that holds immense potential for advancing medical research, improving patient care, and enhancing public health outcomes. As technology continues to evolve, the ability to harness the power of these datasets will become increasingly important in shaping the future of healthcare.
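The predictive-modelling use case above (flagging high-risk patient populations from a dataset) can be sketched with a minimal logistic-regression classifier written from scratch. This is a toy on invented feature values and labels, standing in for what would in practice be a validated model trained on real clinical data:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Fit logistic regression by per-sample gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for features, label in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, features)) + b)
            err = p - label
            w = [wj - lr * err * xj for wj, xj in zip(w, features)]
            b -= lr * err
    return w, b

# Invented features per patient: (age / 100, prior admissions / 10).
X = [(0.30, 0.0), (0.45, 0.1), (0.50, 0.0), (0.62, 0.4),
     (0.70, 0.5), (0.80, 0.6), (0.35, 0.1), (0.75, 0.3)]
y = [0, 0, 0, 1, 1, 1, 0, 1]   # 1 = had a costly readmission

w, b = train_logistic(X, y)

def risk(patient):
    """Predicted probability of readmission for one patient."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, patient)) + b)

# Rank patients so care managers can contact the highest-risk first.
for patient in sorted(X, key=risk, reverse=True)[:3]:
    print(patient, round(risk(patient), 2))
```

In a real deployment the same ranking idea would sit behind a calibrated, audited model (and a library such as scikit-learn rather than hand-rolled gradient descent), with the privacy and data-quality caveats the passage raises.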
Big Data in Healthcare
Hospitals and healthcare providers can use big data to expand the scope of their projects and draw comparisons over larger populations of data. Because big data involves the use of automation and artificial intelligence, data can be processed in larger volumes and at higher velocity to uncover valuable insights for management.
Big data enables management to proactively identify issues, with real-time access to the data, so that decisions can be based more on hard evidence and facts rather than on guesswork and assumptions about customers, employees, and vendors. Applying analytics to big data creates many opportunities for healthcare businesses to gain greater insight, predict future outcomes and automate non-routine tasks.
Healthcare industries have gone through massive technology-driven transformations over the past decade. This is a result of significant advances in digitized, disruptive, open-sourced and pervasive healthcare information technologies and peripherals, which continuously produce huge volumes of diversified data. In a recent literature review, Agrawal and Prabakaran suggested that big data are an integral part of "the next generation of technological developments" that reveal new insights from the vast quantities of data being produced by various sectors, including health care. (Shah J Miah, Edwin Camilleri, and H. Quan Vu)
Healthcare requires a great deal of analysis and leaves little room for error, so big data and analytics procedures can be a game changer. Healthcare businesses need to analyze, store, and continuously update patients' data, and these tasks cannot be achieved efficiently without the help of big data.
According to Pastorino, the use of big data in health care can support the design of solutions that improve patient care and can generate value and new strategies to overcome dynamic challenges in healthcare organizations. This is because big data in health care provides an opportunity to detect meaningful patterns, which in turn produce actionable knowledge for precision medicine and various healthcare decision-makers. (Shah J Miah, Edwin Camilleri, and H. Quan Vu)
Harmony Alliance stated that opportunities offered by big data “will only materialize when healthcare systems move beyond the mere collection of large amounts of data. Linkage of previously separated data sets and their analysis using appropriate big data analytics offer new ways to accelerate research and to identify the right treatment for individual patients. Access to large data sets that paint a more comprehensive picture of patients allows patient-relevant outcomes to be measured more accurately.”
Big data is becoming crucial in this time of Covid-19, when data need to be collected from different corners of the globe. Data are collected in huge amounts and need to be processed in real time so that decision-makers have enough information to work with. Today's world is interconnected, and pa ...
Application of Big Data in Medical Science brings revolution in managing heal...IJEEE
Big Data can be combined with new technology to bring about positive transformation in the health care segment. A technology aimed at making Big Data analytics a reality will act as a key element in transforming the way the health care industry operates today. The study and analysis of Big Data can be used for tracking and managing population health care effectively and efficiently. In ten years, eighty percent of the work people do in medicine will be replaced by technology, and medicine will not look anything like it does today. Healthcare will change enormously as it becomes a data-driven industry; but the magnitude of the data, the speed at which it is growing and the threat it could pose to individual privacy mean mastering "big data" is one of biomedicine's most pressing challenges. Hiding within those mounds of data is knowledge that could change the life of a patient, or change the world. Big data also plays a vital role in delivering preventive care. In this research paper we discuss the problems faced by big data, the obstacles to using big data in the health industry, and how big data analytics can take health care to a new level by enhancing the overall quality of patient care.
Big data approaches to healthcare systemsShubham Jain
The idea behind this presentation is to explore how big data will revolutionize the existing healthcare system by reducing healthcare concerns such as the selection of appropriate treatment paths and the quality of healthcare systems. A large amount of unstructured data is available across organizations (payers, providers, pharmaceuticals). We discuss the intricacies involved in the massive datasets of healthcare systems and how the combination of VPH technologies and big data has led to some remarkable results. Major opportunities in healthcare include the integration of various data pools such as clinical data, pharmaceutical R&D data, and patient behaviour and sentiment data. Extracting potential insights from big data with the help of medical image processing, predictive modelling and similar techniques will eventually help us rein in the ever-increasing costs of care, help providers practice more effective medicine, empower patients and caregivers, support fitness and preventive self-care, and move toward more personalized medicine.
An efficient tree based self-organizing protocol for internet of thingsredpel dot com
An efficient tree based self-organizing protocol for internet of things.
for more ieee paper / full abstract / implementation , just visit www.redpel.com
Web Service QoS Prediction Based on Adaptive Dynamic Programming Using Fuzzy ...redpel dot com
Web Service QoS Prediction Based on Adaptive Dynamic Programming Using Fuzzy Neural Networks for Cloud Services
Privacy preserving and delegated access control for cloud applicationsredpel dot com
Privacy preserving and delegated access control for cloud applications
Performance evaluation and estimation model using regression method for hadoo...redpel dot com
Performance evaluation and estimation model using regression method for hadoop word count.
Frequency and similarity aware partitioning for cloud storage based on space ...redpel dot com
Frequency and similarity aware partitioning for cloud storage based on space time utility maximization model.
Multiagent multiobjective interaction game system for service provisoning veh...redpel dot com
Multiagent multiobjective interaction game system for service provisoning vehicular cloud
Efficient multicast delivery for data redundancy minimization over wireless d...redpel dot com
Efficient multicast delivery for data redundancy minimization over wireless data centers
Cloud assisted io t-based scada systems security- a review of the state of th...redpel dot com
Cloud assisted io t-based scada systems security- a review of the state of the art and future challenges.
I-Sieve: An inline High Performance Deduplication System Used in cloud storageredpel dot com
I-Sieve: An inline High Performance Deduplication System Used in cloud storage
Architecture harmonization between cloud radio access network and fog networkredpel dot com
Architecture harmonization between cloud radio access network and fog network
A tutorial on secure outsourcing of large scalecomputation for big dataredpel dot com
A tutorial on secure outsourcing of large scalecomputation for big data
A parallel patient treatment time prediction algorithm and its applications i...redpel dot com
A parallel patient treatment time prediction algorithm and its applications in hospital.
Introduction to AI for Nonprofits with Tapp NetworkTechSoup
Dive into the world of AI! Experts Jon Hill and Tareq Monaur will guide you through AI's role in enhancing nonprofit websites and basic marketing strategies, making it easy to understand and apply.
Instructions for Submissions thorugh G- Classroom.pptxJheel Barad
This presentation provides a briefing on how to upload submissions and documents in Google Classroom. It was prepared as part of an orientation for new Sainik School in-service teacher trainees. As a training officer, my goal is to ensure that you are comfortable and proficient with this essential tool for managing assignments and fostering student engagement.
How to Make a Field invisible in Odoo 17Celine George
It is possible to hide certain fields in Odoo, most commonly by using the "invisible" attribute in the field definition. This slide will show how to make a field invisible in Odoo 17.
2024.06.01 Introducing a competency framework for languag learning materials ...Sandy Millin
http://sandymillin.wordpress.com/iateflwebinar2024
Published classroom materials form the basis of syllabuses, drive teacher professional development, and have a potentially huge influence on learners, teachers and education systems. All teachers also create their own materials, whether a few sentences on a blackboard, a highly-structured fully-realised online course, or anything in between. Despite this, the knowledge and skills needed to create effective language learning materials are rarely part of teacher training, and are mostly learnt by trial and error.
Knowledge and skills frameworks, generally called competency frameworks, for ELT teachers, trainers and managers have existed for a few years now. However, until I created one for my MA dissertation, there wasn’t one drawing together what we need to know and do to be able to effectively produce language learning materials.
This webinar will introduce you to my framework, highlighting the key competencies I identified from my research. It will also show how anybody involved in language teaching (any language, not just English!), teacher training, managing schools or developing language learning materials can benefit from using the framework.
The Roman Empire A Historical Colossus.pdfkaushalkr1407
The Roman Empire, a vast and enduring power, stands as one of history's most remarkable civilizations, leaving an indelible imprint on the world. It emerged from the Roman Republic, transitioning into an imperial powerhouse under the leadership of Augustus Caesar in 27 BCE. This transformation marked the beginning of an era defined by unprecedented territorial expansion, architectural marvels, and profound cultural influence.
The empire's roots lie in the city of Rome, founded, according to legend, by Romulus in 753 BCE. Over centuries, Rome evolved from a small settlement to a formidable republic, characterized by a complex political system with elected officials and checks on power. However, internal strife, class conflicts, and military ambitions paved the way for the end of the Republic. Julius Caesar’s dictatorship and subsequent assassination in 44 BCE created a power vacuum, leading to a civil war. Octavian, later Augustus, emerged victorious, heralding the Roman Empire’s birth.
Under Augustus, the empire experienced the Pax Romana, a 200-year period of relative peace and stability. Augustus reformed the military, established efficient administrative systems, and initiated grand construction projects. The empire's borders expanded, encompassing territories from Britain to Egypt and from Spain to the Euphrates. Roman legions, renowned for their discipline and engineering prowess, secured and maintained these vast territories, building roads, fortifications, and cities that facilitated control and integration.
The Roman Empire’s society was hierarchical, with a rigid class system. At the top were the patricians, wealthy elites who held significant political power. Below them were the plebeians, free citizens with limited political influence, and the vast numbers of slaves who formed the backbone of the economy. The family unit was central, governed by the paterfamilias, the male head who held absolute authority.
Culturally, the Romans were eclectic, absorbing and adapting elements from the civilizations they encountered, particularly the Greeks. Roman art, literature, and philosophy reflected this synthesis, creating a rich cultural tapestry. Latin, the Roman language, became the lingua franca of the Western world, influencing numerous modern languages.
Roman architecture and engineering achievements were monumental. They perfected the arch, vault, and dome, constructing enduring structures like the Colosseum, Pantheon, and aqueducts. These engineering marvels not only showcased Roman ingenuity but also served practical purposes, from public entertainment to water supply.
IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, VOL. 19, NO. 4, JULY 2015

Fig. 2. Six V's of big data (value, volume, velocity, variety, veracity, and variability), which also apply to health data.
continuous physiological features of an individual. In parallel, environmental factors present yet another set of variables that can be captured by continuous sensing and that are important to population health.
However, size itself does not qualify big data. Other challenges include the speed, heterogeneity, and variety of health data. With the versatility, diversity, and connectivity of data-capturing devices, additional data are generated at increasingly high speed, and decision support must be made available in near real time in order to keep up with the constant evolution of technologies. In managing an influenza pandemic, for example, heterogeneous information from managed and unmanaged (e.g., social media, air travel) sources can be processed, mined, and turned into decisive actions to control the outbreak.
In healthcare, data heterogeneity and variety arise as a result of linking the diverse range of available biomedical data sources. Sources can be either quantitative (e.g., sensor data, images, gene arrays, laboratory tests) or qualitative (e.g., free text, demographics). The objective underlying this data challenge is to provide an observational evidence base for answering clinical questions that could not otherwise be addressed through randomized trials alone. In addition, the issue of generalizing results from a narrow spectrum of participants may be solved by exploiting the potential of big data for deploying longitudinal studies.
Volume, Velocity, and Variety are the three Vs in the original definition of the key characteristics of big data in the research report published by META Group, Inc. (now Gartner, Inc.) [2]. Since then, other factors have also been considered, including Variability (consistency of data over time), Veracity (trustworthiness of the data obtained), and Value. These characteristics are summarized in Fig. 2 along with the key features that each captures.
Veracity is important for big data because, for example, personal health records may contain typographical errors, abbreviations, and cryptic notes. Ambulatory measurements are sometimes taken in less reliable, uncontrolled environments, compared with clinical data collected by trained practitioners. The use of spontaneous, unmanaged data, such as those from social media, can lead to wrong predictions, as the data context is not always known. Furthermore, such sources are often biased toward those who are young, internet savvy, and expressive online.
Last but not least, real value to both patients and healthcare systems can only be realized if the challenges of analyzing big data are addressed in a coherent fashion. It should be noted that many of the underlying principles of big data have been explored by the research community for years in other domains. Nevertheless, new theories and approaches are needed for analyzing big health data. Total projected healthcare spending in the UK will reach 6.4% of gross domestic product (GDP) by 2021 [3], whilst the total projected healthcare share of GDP in the United States is expected to reach 19.9% by 2022 [4]. In this respect, if used properly, big data can be a valuable resource providing significant insights toward improving contemporary health services and reducing healthcare costs. However, it also raises major social and legal challenges in terms of privacy, reidentification, data ownership, data stewardship, and governance.
In this paper, we will discuss some of the existing activities and future opportunities related to big data for health. More specifically, we will discuss its value for Medical and Health Informatics, Translational Bioinformatics, Sensor Informatics, and Imaging Informatics.
II. MEDICAL AND HEALTH INFORMATICS
With the ability to deal with large volumes of both structured and unstructured data from different sources, big data analytical tools hold the promise of studying the outcomes of large-scale, population-based longitudinal studies, as well as of capturing trends and proposing predictive models for data generated from electronic medical and health records. A unique opportunity lies in the integration of traditional medical informatics with mobile health and social health, addressing both acute and chronic diseases in a way that we have never seen before.
A. Electronic Health Records (EHRs)
EHRs describing patient treatments and outcomes are a rich but underused source of information. Traditional health data centres capture and store an enormous amount of structured data covering a wide range of information, including diagnostics, laboratory tests, medication, and ancillary clinical data. For individual patient reports, natural language processing plays an essential role in the systematic analysis and indexing of the underlying semantic content. Mining EHRs is a valuable tool for improving clinical knowledge and supporting clinical research, for example, in discovering phenotype information [5]. Mining local information included in EHR data has already proven effective for a wide range of healthcare challenges, such as disease management support [6], [7], pharmacovigilance [8], building models for health risk assessment [9], [10], enhancing knowledge about survival rates [11], [12], therapeutic recommendation [11], [13], discovering comorbidities, and building support systems for the recruitment of patients for new clinical trials [14]. Most of this work has focused on the analysis of very large multidimensional longitudinal patient data collected over many years. However, most clinical databases provide low temporal resolution information, owing to the difficulty of collecting rich long-term time-series data. To bridge this gap, current clinical databases can be enhanced by connecting with mobile health platforms, community centres, or elderly homes, such that other information can be incorporated into the system to facilitate clinical decision-making and address unanswered clinical questions. One interesting direction will be to build patient-specific models using data already available in existing clinical databases and then update the models with data collected outside the hospitals. In particular, some chronic diseases manifest with acute events that are unlikely to be predictable solely from sporadic measurements made within hospitals.

ANDREU-PEREZ et al.: BIG DATA FOR HEALTH

Fig. 3. Integration of imaging, modeling, and real-time sensing for the management of disease progression and planning of intervention procedures. This example of thoracic aortic dissection illustrates how risk stratification and subject-specific haemodynamic modeling, substantiated with long-term continuous monitoring, are used to guide the clinical decision process.
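As a toy illustration of the free-text EHR mining described above, a minimal keyword-based phenotype extractor might look as follows. The phenotype lexicon and trigger terms are invented for illustration; production systems use full clinical NLP pipelines with negation handling and standard vocabularies such as UMLS.

```python
import re

# Hypothetical mini-lexicon mapping phenotype labels to trigger terms.
PHENOTYPE_TERMS = {
    "type_2_diabetes": ["type 2 diabetes", "t2dm"],
    "hypertension": ["hypertension", "elevated blood pressure", "htn"],
}

def extract_phenotypes(note):
    """Return the set of phenotype labels whose trigger terms appear in a note."""
    text = note.lower()
    found = set()
    for label, terms in PHENOTYPE_TERMS.items():
        if any(re.search(r"\b" + re.escape(t) + r"\b", text) for t in terms):
            found.add(label)
    return found

note = "Patient with long-standing HTN, now diagnosed with type 2 diabetes."
print(sorted(extract_phenotypes(note)))  # ['hypertension', 'type_2_diabetes']
```

Even a rule-based extractor like this can turn free text into structured labels suitable for the cohort-building and risk-modeling applications cited above.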
Taking thoracic aortic dissection, a relatively rare disease (3-4 per 100 000 people per year), as an example, the disease typically manifests as a tear in the intimal layer of the aorta, which can later develop into either type A (involving both the ascending and descending aorta) or type B dissection (involving the descending aorta only). Type A patients require immediate surgical intervention, whereas type B dissection is generally considered a chronic condition requiring careful long-term control of blood pressure (BP).
Individuals with connective tissue disorders such as Marfan syndrome (MFS) are often more susceptible to aortic aneurysms or tears. Large-scale population screening for this rare disease will, therefore, be useful in identifying people who are at higher risk of developing aortic dissection. One hypothesis for why some tears develop into type A dissection while others develop into type B is that different flow patterns are generated close to the tear location and across the aorta. Although an initial model built from imaging can give good insights into the problem, it does not take into account progressive hemodynamic variation over time or the impact of lifestyle and daily activities. By incorporating ambulatory BP profiles, it is possible to create simulation results as a longitudinal model spanning a longer period of time for a better understanding of disease progression, as summarized in Fig. 3.
B. Social Health
One of the primary tasks of telemedicine involves connecting patients and doctors beyond the clinic. With the involvement of social networks, however, this communication has been expanded to new levels of social interaction. This has opened up new possibilities of patient-to-patient communication regarding health, beyond the traditional doctor-to-patient paradigm. One-fourth of patients with chronic diseases, such as diabetes, cancer, and heart conditions, now use social networks to share experiences with other patients with similar conditions, thereby providing another potential source of big data [15]. In addition to biological information, geolocation and social apps provide a further means of understanding the behaviors and social demographics of patients, while avoiding resource-intensive and expensive studies based on large statistical sampling. This advantage has already been exploited by several epidemiological studies in areas such as influenza outbreaks [16], [17], the collective dynamics of smoking [18], and the misuse of antibiotics [19]. Text messages and posts on online social networks are also a valuable source of health information, e.g., for the better management of mental health. Compared with traditional methods, such as surveys, the fluctuation and regulation of emotions, thoughts, and behaviors analyzed over social network platforms, such as Twitter, offer new opportunities for the real-time analysis of expressed mood and its context [20]. For example, when validated against known patterns of variation in mood, the 2.73 × 10⁹ emotional tweets collected over a 12-week period in a study reported by Larsen et al. [20] were claimed to show some correlation between emotional tweets and global health estimates from the World Health Organization on anxiety and suicide rates.
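A minimal sketch of lexicon-based mood scoring over short posts, in the spirit of (but far simpler than) the analyses cited above; the affect word lists below are invented for illustration, whereas real studies rely on validated affect lexicons and vastly larger corpora.

```python
# Toy affect lexicons (invented for illustration only).
POSITIVE = {"happy", "calm", "great", "relieved"}
NEGATIVE = {"anxious", "sad", "worried", "hopeless"}

def mood_score(text):
    """Return (#positive - #negative) affect words found in a short text."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

tweets = ["feeling anxious and worried today", "so happy and relieved"]
print([mood_score(t) for t in tweets])  # [-2, 2]
```

Aggregating such scores over time and geography is what enables the population-level comparisons against external health estimates described above.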
Social media and internet searches can also be combined with environmental data, such as air quality data, to predict sudden increases in asthma-related emergency visits [21]. Similar models are anticipated to help other areas of public health surveillance.
C. Lifestyle, Environmental Factors, and Public Health
Climatological data, such as heat-stress and cold-related mortality, present another dimension for predicting personal health [22], [23]. Recent remote sensing technologies and geographic information systems allow climate data for global land areas to be interpolated at a spatial resolution of 500 m to 1 km [24], [25]. High-resolution measurements are necessary in order to monitor the real impact of pollution on human wellbeing in urban environments. With this aim, dense grids of wireless sensor networks facilitate the capture of spatiotemporal variability in toxic air pollutants [26]. Such technologies will become increasingly important for connecting epidemic intelligence with infectious disease surveillance and launching effective heat response plans [27]-[29]. Similarly, patterns of social factors influencing unhealthy habits such as smoking can be studied using the collective dynamics of social networks [18]. As an example, Christakis and Fowler found that smokers mostly belonged to the periphery of social networks and that, by the time of quitting, they behaved collectively [18]. In addition, smokers with higher education tended to have a greater influence on their peers' smoking behavior, compared with less educated smokers. As regards psychological states, emotional levels denoting hostility and stress, expressed in social media such as Twitter, can serve as predictors of heart disease mortality per geographical area [30].
A mobile phone is an excellent platform for delivering personal messages to individuals to engage them in behavioral changes to improve health. Although at present there is limited evidence that mobile messaging-based interventions support preventive health care in improving health status and health behavior outcomes [31], a better understanding of how this platform can be used is an interesting area to explore. For example, type 2 diabetes is generally thought to be preventable by lifestyle modification; however, successful lifestyle intervention programs are often labor intensive. It has been shown that mobile phone messaging can be used as an alternative means of delivering motivational and educational advice for changing population lifestyles [32].
III. TRANSLATIONAL BIOINFORMATICS
Translational bioinformatics, a field that emerged after the first human genome mapping, focuses on bridging molecular biology, biostatistics, and statistical genetics with clinical informatics. The field is evolving at a tremendously fast pace, and many related areas have been proposed. Amongst them, pharmacogenomics is a branch of genomics concerned with individuals' variations in drug response due to genetic differences. The area is important for designing the precision medicine of the future.

New discoveries resulting from the Human Genome Project are now frequently applied to develop improved diagnostics, prognostics, and therapies for complex diseases, an activity known as "translational genomics". In particular, the sequencing cost per genome has markedly reduced over the last decade, according to the data presented by the National Institutes of Health (NIH) Human Genome Research Institute, as shown in Fig. 4. This further gives rise to new opportunities for personalized treatment and risk stratification.

Fig. 4. a) Number of research studies sequencing DNA or genomes (source: PubMed, Web of Science, Scopus, IEEE, ACM). b) Sequencing cost per human-sized genome (source: National Human Genome Research Institute, NHGRI). Total volume of genomic data per year reported by completed studies for c) eukaryotes and d) prokaryotes, in 1e2 GB (source: National Center for Biotechnology Information).
On the other hand, research in bioinformatics has broadened from solely sequencing the genome of an individual to also measuring epigenomic data (i.e., above the genome), which include processes that alter gene expression other than changes in the primary DNA sequence, such as DNA methylation and histone modifications. Information technologies for acquiring and analyzing biological molecules other than the genome are also needed for future advances in the field; examples include the transcriptome (the total mRNA in a cell or organism), the proteome (the set of all expressed proteins in a cell, tissue, or organism), and the metabolome (the total quantitative collection of low-molecular-weight compounds, metabolites, present in a cell or organism that participate in metabolic reactions). To summarize, OMICS aims at collectively characterizing and quantifying groups of biological molecules that translate into the structure, function, and dynamics of an organism. The OMICS profile of each individual should eventually be linked up with phenotypes obtained from clinical observations, medical images, and physiological signals (see Fig. 5).

Fig. 5. Outline of the "OMICS" approach for studying disease mechanisms. OMICS aims at collectively characterizing and quantifying groups of biological molecules that translate into the structure, function, and dynamics of an organism. The OMICS profile of each individual, including the genome, transcriptome, proteome, and metabolome, should eventually be linked up with phenotypes obtained from clinical observations, medical images, and physiological signals. Different acquisition technologies are required to collect data at each biological level. Interaction within each level and across different levels, as well as with the environment, including nutrition, food, drugs, traditional Chinese medicine, and the gut microbiome, presents grand challenges for future bioinformatics research.
A. Pharmacogenomics
A single whole human genome obtained by next-generation sequencing (NGS) is typically 3 GB. Depending on the average depth of coverage, this can grow to as much as 200 GB, making it a clear source of big data for health. Nevertheless, only about 0.1% of the genome differs between individuals, which accounts for roughly 3 million variants. From a signal processing point of view, the data can therefore be considered highly compressible; in practice, however, compressed genotyping is not widely adopted at present.
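The compressibility argument above can be made concrete with back-of-the-envelope arithmetic; the 30-byte per-variant record size assumed below is an illustrative figure, not a standard.

```python
# Storage arithmetic from the figures above: a whole genome is ~3 GB raw,
# yet only ~0.1% of sites (~3 million variants) differ between individuals,
# so storing variants against a reference is dramatically smaller.
GENOME_BYTES = 3 * 10**9      # ~3 GB per whole genome
VARIANTS = 3 * 10**6          # ~0.1% of the genome differs between individuals
BYTES_PER_VARIANT = 30        # assumed record size (position + alleles + metadata)

variant_store = VARIANTS * BYTES_PER_VARIANT   # 90 MB
ratio = GENOME_BYTES / variant_store           # ~33x smaller
print(f"variant store ~ {variant_store / 1e6:.0f} MB, ~{ratio:.0f}x smaller")
```

This is the intuition behind reference-based (delta) encoding of genomes, even though, as noted above, compressed genotyping is not yet widely adopted.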
Whole genome sequencing by NGS is important to the study of complex diseases such as cancer. It has been a long-standing problem in cancer treatment that drugs often elicit heterogeneous treatment responses even for the same type of cancer, and some drugs show profound sensitivity in only a small number of patients [33], [34]. Currently, large-scale personal genomics and pharmacogenomics datasets are being generated to uncover the unique signalling patterns of individual patients and to discover drugs that target these unique patterns. These include cancer cell line databases of nonspecific cancer cell types [35], [36] or of a specific cancer cell type such as breast cancer [37]. The Cancer Genome Atlas Project of the NIH has tested the personal genomic profiles of over 10 000 individuals across over 20 types of cancer [38] and uncovered new cancer subtypes based on those profiles [39]. Distinct genomic aberrations among patients are believed to be responsible for the variability of drug response [40]. Large-scale datasets such as these can be used to enable drug repositioning [41], [42], predict drug combinations [43], [44], and delineate mechanisms of action [45]. They are becoming an important component in drug development [46], [47]. It is, therefore, possible to design precision medicine for individual patients based on their genomic profiles.
Pharmacogenomics has gone beyond studying individuals' drug response based on genome characteristics (e.g., copy number variations and somatic mutations) and now incorporates additional transcriptomic and metabolic features, such as gene expression, considering both factors that influence the concentration of a drug reaching its targets and factors associated with the drug targets themselves. Since the gene expression profiles of cell lines are known to vary considerably during prolonged culture under different culture conditions and techniques, the use of gene expression from cell lines to predict drug response in the patient is currently controversial. A recent algorithm for predicting in vivo drug response from the patient's baseline gene expression profile achieved 60%-80% predictive accuracy for different cases [48]. Other research [49], [50] studied drug response using immunodeficient mice xenografted with human tumors, which have the advantage of allowing the study of both genetic and nongenetic factors that affect cancer growth and therapy tolerance [51].
Similar pharmacogenomics studies are also important for vascular diseases. Although antiplatelet agents such as clopidogrel are widely prescribed for conditions such as acute coronary syndrome (ACS), responses vary greatly from person to person, and approximately 30% of patients may exhibit resistance to clopidogrel [52], [53]. Since clopidogrel is activated into its active metabolite by the cytochrome P450 (CYP) enzyme system, CYP2C19 loss-of-function (LOF) allele(s) affect the responsiveness of clopidogrel, but not that of the newer antiplatelet agents (prasugrel and ticagrelor). It is therefore cost effective to use a genotype-guided method to screen for carriers of CYP2C19 LOF allele(s) when using antiplatelets in high-risk ACS patients [54].
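The genotype-guided screening idea above can be sketched as a simple lookup. The decision rule below is deliberately simplified for illustration (real guidance distinguishes intermediate from poor metabolizers and weighs other clinical factors), although *2 and *3 are indeed the common CYP2C19 LOF alleles.

```python
# Common CYP2C19 loss-of-function (LOF) star alleles.
LOF_ALLELES = {"*2", "*3"}

def suggest_antiplatelet(cyp2c19_alleles):
    """Hypothetical agent suggestion from a patient's two CYP2C19 alleles:
    LOF carriers respond poorly to clopidogrel, so an alternative agent
    (prasugrel or ticagrelor) may be preferred."""
    if any(a in LOF_ALLELES for a in cyp2c19_alleles):
        return "consider prasugrel or ticagrelor"
    return "clopidogrel (standard dosing)"

print(suggest_antiplatelet(("*1", "*2")))  # consider prasugrel or ticagrelor
print(suggest_antiplatelet(("*1", "*1")))  # clopidogrel (standard dosing)
```

Encoding such rules is exactly the kind of decision support that makes genotype-guided screening cost effective at the point of care.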
B. Translational Genomics
Although comprehensive genotyping is still relatively recent, it has high potential for genetic stratification in patient screening, for instance in the case of factors revealed by genotyping, such as high-risk DNA mutations [55], milk and gluten intolerance, and mucoviscidosis. Genetics combined with the phenotypic information provided by EHRs may offer greater insight into low-penetrance alleles [56]. For example, it is well known that mutations of fibrillin 1 (FBN1) cause MFS. Nevertheless, the aetiology of the disease leads to marked clinical variability among MFS patients, both within the same family and across different families [57]. Combining genetic tests of FBN1 and a series of related genes (TGFBR1, TGFBR2, TGFB2, MYH11, MYLK1, SMAD3, and ACTA2) will help to screen out patients who are more likely to develop aortic aneurysms that lead to dissections [58]. Further studies of these high-risk patients based on morphological images of the aorta may provide insight into the rate of disease development.
Another potential area for translational genomics is the study of the gene networks of different syndromes in the same person, in order to better understand how these syndromes are interrelated. For example, this approach has been used to study different genes on chromosome 21 (HSA21) and their role in Down's Syndrome (DS), as well as to understand why nearly half of DS patients exhibit an overprotection against cardiac abnormalities related to the connective tissue [59]. One hypothesis is based on recent evidence that there is an overall upregulation of FBN1 in DS (which is normally downregulated in MFS) [59]. The construction of genetic networks will, therefore, provide a clearer picture of how these syndromes are related. By understanding the gene networks of the related syndromes, it may be possible to provide specific gene therapy for the related diseases.
C. OMICS and Large-Scale Databases
In addition to the Human Genome Project, several recently launched large-scale biological databases will further facilitate the study of disease mechanisms and progression, particularly at the system level, as outlined in Fig. 5. The Research Collaboratory for Structural Bioinformatics Protein Data Bank [60], [61] is a worldwide archive of structural data on biological macromolecules, providing access to the 3-D structures of biological macromolecules, as well as integration with external biological resources, such as gene and drug databases [62]. ProteomicsDB [63] is another example, encompassing mass spectrometry of the human proteome acquired from human tissues, cell lines, and body fluids to facilitate the identification of organ-specific proteins and translated long intergenic noncoding RNAs, with due consideration of the time-dependent expression patterns of proteins [63].
In parallel with these developments, the Human Metabolome Database [64] contains more than 40 000 annotated metabolite entries in its latest version, released in 2013. It provides both experimental metabolite concentration data and analyses through mass spectrometry and Nuclear Magnetic Resonance (NMR) spectrometry [64]. Databases such as these are believed to greatly facilitate the translation of information into knowledge for transforming clinical practice, particularly for metabolic-related diseases, such as diabetes and coronary artery disease [65]. In fact, metabolomics has emerged as an important research area that includes not only the endogenous metabolites of the human body but also the chemical and biochemical molecules that can interact with it [66]. Specifically, ongoing efforts have been devoted to fingerprinting metabolites from food and nutrition products [67], drugs [68], and traditional Chinese medicine [69], as well as molecules produced by the gut bacterial microbiota [67], [70]. These will eventually help us to better understand the interaction between host, pathogen, and environment.

Fig. 6. a) Evolution of the number of patents published in the area of mobile health (source: European Patent Office); b) evolution of the number of smartphones sold per year, in million units (source: Gartner); c) evolution of the cost of Internet-enabled sensors, in dollars (source: Business Intelligence International); d) number of mobile health apps published in Google Play and iTunes as of May 2015.

The availability of genomic, proteomic, and metabolic databases allows a better understanding of the development of complex diseases such as cancer. It also allows the search for new biomarkers using different pattern mining and clustering techniques [68]-[71]. The clusters can be either partitional (hard) or hierarchical (tree-like nested structures). These methods can be further accelerated by using multicore CPUs, GPUs, and field-programmable gate arrays with parallel processing techniques.
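A minimal pure-Python sketch of partitional (hard) clustering via k-means, the first of the two cluster types mentioned above; the two-marker data points are invented for illustration.

```python
def kmeans(points, k, iters=20):
    """Minimal partitional (k-means) clustering of 2-D points in pure Python.
    A production biomarker pipeline would use optimized libraries and, as
    noted above, multicore/GPU acceleration; this only sketches the idea."""
    centroids = points[:k]  # naive deterministic initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Hard assignment: each point goes to its nearest centroid.
            nearest = min(range(k),
                          key=lambda j: (p[0] - centroids[j][0]) ** 2
                                        + (p[1] - centroids[j][1]) ** 2)
            clusters[nearest].append(p)
        # Recompute each centroid as the mean of its assigned points.
        centroids = [
            (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

# Two well-separated groups of hypothetical (marker A, marker B) measurements.
pts = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
_, clusters = kmeans(pts, 2)
print(sorted(len(c) for c in clusters))  # [3, 3]
```

Hierarchical clustering, by contrast, would produce a nested tree of merges rather than a flat partition like the one returned here.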
IV. SENSOR INFORMATICS
Advances in sensing hardware have been accelerating in recent years, and this trend shows no signs of slowing down [72]. According to the analysis in the BI Intelligence report published at the end of 2014, the price of one MEMS sensor has halved, from US$ 1.30 to US$ 0.60, during the last decade, as shown in Fig. 6. This has partly driven a paradigm shift of future internet applications toward what is termed "the Internet of Things" (IoT). Moreover, enabling technologies ranging from nano- and microelectronics, advanced materials, wearable/mobile computing, and telecommunication systems to remote sensing and geographic information systems have made it possible for health information to be sensed and collected pervasively and unobtrusively [73], as illustrated in Fig. 7.

Fig. 7. Big sensing data in health are all around us, enabled by technologies ranging from nano- and microelectronics, advanced materials, wearable/mobile computing, and telecommunication systems to remote sensing and geographic information systems. The inner loop presents technologies for sensor components, the middle loop presents devices and systems potentially owned by each individual or household, and the outer loop presents sensing technologies required at the community and public health level.
A. Wearable, Implantable, and Ambient Sensors
As outlined in a recent review article [15], three factors in particular have contributed to the rapid uptake of wearable devices: increased data processing power, faster wireless communications with higher bandwidth, and improved design of microelectronics and sensor devices [15]. Example platforms range from earlier systems, with limited connectivity and single sensing elements developed solely for use in research laboratories, to more recent ambient sensors and easy-to-wear wearable/implantable devices equipped with continuous multimodal sensing capabilities and support for data fusion, deployed in a wide range of clinical applications [74]-[76]. Furthermore, parallel developments in miniaturized sensor embodiment, microelectronics and fabrication processes, and the availability of wireless power delivery have made miniaturized implantable sensors increasingly versatile [73].
Implantable sensors address the challenges of both acute and
chronic disease monitoring by providing a means of captur-
ing critical events and continuous streamlining of health infor-
mation. Recent advances in microelectronics and nanotechnol-
ogy have greatly improved the sensitivity of different sensors.
For example, based on metal nanoparticle arrays and single
nanoparticles, the sensitivity of localized surface plasmon reso-
nance optical sensors can be pushed toward the detection limit
of a single molecule [77]. This has enabled the development
of the next generation of high-throughput sequencing technolo-
gies, as well as the detection of biomolecules, such as glucose,
lactate, nitric oxide, and sodium ions [78]. For diabetic patients,
a myriad of new sensors for both wearable and implantable ap-
plications have been developed, which provide continuous mon-
itoring and corresponding response to the time-varying glucose
level, which is well known to be diet dependent [79]–[81].
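The continuous, diet-dependent response to a time-varying glucose level described above can be sketched as a simple stream classifier with alert suppression. This is an illustrative sketch only: the thresholds, units, and event logic are assumptions, not clinical guidance or any device's actual algorithm.

```python
from dataclasses import dataclass

# Hypothetical alert limits (mg/dL); illustrative only.
HYPO_LIMIT = 70.0
HYPER_LIMIT = 180.0

@dataclass
class GlucoseSample:
    t: float        # seconds since start of monitoring
    mg_dl: float    # sensor reading in mg/dL

def classify(sample: GlucoseSample) -> str:
    """Map a single continuous-glucose reading to an alert class."""
    if sample.mg_dl < HYPO_LIMIT:
        return "hypoglycemia"
    if sample.mg_dl > HYPER_LIMIT:
        return "hyperglycemia"
    return "in-range"

def alert_stream(samples):
    """Yield (time, class) only when the alert class changes,
    suppressing repeats so the wearer is not alerted continuously."""
    last = None
    for s in samples:
        state = classify(s)
        if state != last and state != "in-range":
            yield (s.t, state)
        last = state

readings = [GlucoseSample(0, 110), GlucoseSample(300, 65),
            GlucoseSample(600, 68), GlucoseSample(900, 120)]
print(list(alert_stream(readings)))  # one hypoglycemia event at t=300
```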
There is a clear trend of moving from the scenario where
a centralized large computing infrastructure is shared between
multiple users toward one where each individual possesses mul-
tiple smart devices, most of which are sufficiently small to be
wearable or implantable such that the use of these sensing de-
vices will not affect normal daily activities. These sensor sys-
tems have the potential to generate datasets which are currently
beyond our capabilities to easily organize and interpret [82].
Meanwhile, healthcare services delivered via ambient intelli-
gence consisting of ambient sensors and objects interconnected
into an integrated IoT represent a promising and supportive so-
lution for the ageing society. It is important that such systems
should take into account the sensor, service, and system integration architecture [83]. Such distributed systems require decentralized inference algorithms, which are frequently explored,
either in the framework of parametric models, in which the
statistics of phenomena under observation are assumed to be
known by the system designer, or nonparametric models, when
the underlying data is sparse and prior knowledge is limited
[84], [85].
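As a toy illustration of such decentralized inference, the sketch below runs average consensus over a small sensor network: each node repeatedly averages its estimate with its neighbours' and converges to the network-wide mean without a central coordinator. The topology, mixing weight, and readings are invented for illustration and are not drawn from [84], [85].

```python
# Decentralized average consensus: no node ever sees all the data,
# yet every node converges to the global mean of the local readings.

def consensus_step(estimates, neighbours, weight=0.5):
    """One synchronous gossip round: each node moves toward the
    mean of its neighbours' current estimates."""
    new = []
    for i, x in enumerate(estimates):
        nbr_mean = sum(estimates[j] for j in neighbours[i]) / len(neighbours[i])
        new.append((1 - weight) * x + weight * nbr_mean)
    return new

# A ring of four body-worn nodes, each with a noisy local reading.
neighbours = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
estimates = [70.0, 74.0, 69.0, 75.0]

for _ in range(50):
    estimates = consensus_step(estimates, neighbours)

print(estimates)  # all four nodes close to the global mean 72.0
```

Because the averaging weights are symmetric and doubly stochastic, the global mean is preserved at every round while the disagreement between nodes shrinks geometrically.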
B. From Sensor Data to Stratified Patient Management
Physiological sensing by these smart devices can be long
term and continuous, imposing new challenges for interpreting
their clinical relevance. For example, the current clinical prac-
tice defines hypertension based on measurements taken during
infrequent hospital visits. Although automated oscillometric BP
measurement devices are now available, studies in these areas
are often limited to taking BP once every hour over a 24-hr
period. With the newly emerging ambulatory devices [75], a
comprehensive BP-related profile of an individual can be made
available. Nevertheless, the interpretation of these data is non-
trivial, since in many situations, they may not be equivalent
to the clinical BP readings that are currently being used by
practitioners [86], [87]. The signals, however, carry underlying
physiological meanings that, if properly processed and man-
aged, can be used as additional information for understanding
uncontrolled hypertension or to enhance the current hyperten-
sion management schemes. In addition to vital sign monitoring,
smart implantable sensors provide a promising technology to
monitor postoperative complications, such as slow tissue heal-
ing and infections. Moreover, smart implants can also have a
reactive role by delivering drugs for chronic pain [88] and act-
ing as brain stimulators for neurological diseases including re-
fractory epilepsy [89] and Parkinson’s disease [90]. This makes
IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, VOL. 19, NO. 4, JULY 2015
Fig. 8. Different imaging modalities across the electromagnetic spectrum. They are playing an increasingly important role in early diagnosis, treatment planning,
and deploying direct therapeutic measures.
smart implants not just another resource for data collection but
also an integral part of early intervention.
With increased volume and acquisition speed of data from
both wearable and implantable sources, new automated algo-
rithms are needed to reduce false alarms such that they are suf-
ficiently robust to support large-scale deployment, particularly
for free-living environments [91]. Automatic classifications are
necessary since the dataset sizes are beyond the capability of
manual interpretation within a reasonable time period. New
compression-based measures are, therefore, proposed as high-
quality cloud computing services to reduce the computation
time for the automated classification of different types of car-
diac arrhythmia [92]. In many situations, measurements must
be interpreted together with the context under which the data is
collected. For example, many physiological parameters, such as
BP or episodes of gastroesophageal reflux disease are posture
dependent [93], [94], which can be captured by inertial sensors.
Therefore, multimodal integration and context awareness are
essential to the analysis of pervasive sensing data.
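A minimal sketch of such context awareness is shown below: BP readings are tagged with a posture label inferred from a wearable accelerometer, so posture-dependent measurements can be interpreted correctly. The tilt threshold, axis convention, and data are hypothetical assumptions, not a validated method.

```python
import math

def posture_from_accel(ax, ay, az):
    """Infer coarse posture from a static accelerometer sample (g units).
    Assumes the z axis points along the torso when upright."""
    tilt = math.degrees(math.acos(az / math.sqrt(ax**2 + ay**2 + az**2)))
    return "upright" if tilt < 40 else "lying"

def contextualize(bp_readings, accel_samples):
    """Pair each (systolic, diastolic) reading with the posture
    inferred at acquisition time."""
    return [(sys, dia, posture_from_accel(*acc))
            for (sys, dia), acc in zip(bp_readings, accel_samples)]

bp = [(118, 76), (102, 64)]
accel = [(0.05, 0.02, 0.99),   # gravity along torso axis -> upright
         (0.98, 0.05, 0.10)]   # gravity along x axis -> lying
print(contextualize(bp, accel))
```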
C. Mobile Health
Nowadays, smart phones have become an inseparable
companion for more than 1.75 billion users. The data gener-
ated by the use of smart phones provides highly descriptive and
continuous information anytime and anywhere. The penetration
of smartphones, which has reached over 200% of the total population in some cities such as Hong Kong, makes it logical to use them as personal logging devices of health information. The
new generation of smartphones has a wide range of health apps
with standardized protocol to connect to sensors provided by
different companies. They can potentially serve as a platform to
centralize health data, from which additional new information
that was previously untraceable by individual sensors can now
be mined. In fact, earlier generations of mobile phones contained only simple motion sensors, while newer models are packed
with sophisticated sensors that facilitate the extraction of dif-
ferent types of vital signs, even without the need for external
devices. These sensors, when properly used, can provide valu-
able health information for the management of many long-term
illnesses. For example, the video cameras of mobile phones can be used to measure heart rate and heart rate variability [95], and embedded accelerometers and gyroscopes can track energy expenditure [96]. Furthermore, the pulse transit time, as measured
by time delays between electrocardiographic and photoplethys-
mographic sensors can be used as a surrogate measure for BP
[75], [97]. This information can be calculated from two devices
that connect with a mobile phone independently, one with an
electrocardiographic sensor and the other one with a photo-
plethysmographic sensor. When connected to health providers,
a closer level of interaction in healthcare can be maintained
toward greater personalization and responsiveness [98].
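The ECG-to-PPG delay described above can be sketched as pairing each ECG R-peak with the next PPG pulse arrival. The timestamps below are invented, and mapping the resulting transit times to BP would require device- and subject-specific calibration not shown here.

```python
# Derive pulse transit times (PTT) from paired ECG R-peak times and
# PPG pulse-arrival times, streamed from two independent sensors.

def pulse_transit_times(r_peaks, ppg_feet):
    """For each R peak (seconds), find the first PPG pulse foot after
    it and return the delay in milliseconds. Both lists are assumed
    to be time-sorted."""
    ptts, j = [], 0
    for r in r_peaks:
        while j < len(ppg_feet) and ppg_feet[j] <= r:
            j += 1
        if j == len(ppg_feet):
            break
        ptts.append((ppg_feet[j] - r) * 1000.0)
    return ptts

# Illustrative timestamps: each PPG foot trails its R peak by ~200 ms.
r_peaks = [0.40, 1.22, 2.05]
ppg_feet = [0.61, 1.43, 2.24]
print(pulse_transit_times(r_peaks, ppg_feet))  # approx [210, 210, 190] ms
```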
V. IMAGING INFORMATICS
The ever-increasing amount of annotated and real-time
medical imaging data has raised the question of organizing, mining, and knowledge harvesting from large-scale medical imaging datasets. While established imaging modalities are becoming pervasive, new imaging modalities are also emerging. These
modalities are rapidly filling up the entire EM spectrum as
shown in Fig. 8. Many of these imaging techniques are now
geared toward real-time in situ or in vivo applications, mak-
ing multimodality imaging an exciting yet challenging big data
management problem.
Recent developments in imaging are progressing on multiple frontiers. First, there is a relentless effort to make existing imaging modalities faster, higher resolution, and more versatile. Take cardiovascular magnetic resonance imaging (MRI) as an example: imaging sequences are no longer limited to morphological and simple tissue characterization (e.g., via T1, T2/T2∗
relaxation times). Details concerning vessel walls, myocardial
perfusion and diffusion, and complex flow patterns in vivo can
all be captured. When facilitated with new minimally invasive
interventional techniques, novel drugs and other forms of treat-
ment, MRI now serves as a therapeutic and interventional aid,
rather than solely a diagnostic modality. Similar advances can
also be appreciated for ultrasound, computed tomography (CT),
and other imaging modalities. Moreover, extensive efforts in
combining different imaging modalities, not by postprocessing,
ANDREU-PEREZ et al.: BIG DATA FOR HEALTH
but at the hardware level, e.g., MRI/PET and PET/CT, open up a
range of new opportunities, particularly for oncological imaging
and targeted therapy.
A. Imaging Across Scales
There have been extensive research efforts for developing new
technologies that probe deeper into the biological system, from
tissue (micrometer scale) to the protein level (micro- to nanometer scale).
In particular, recent advances in stimulated emission depletion
fluorescence microscopy allow the generation of 3-D superreso-
lution images of living biological specimens [99]. It overcomes
the classical optical resolution limit of light microscopy and
pushes the spatial resolution of optical microscopes toward the
nanoscale [100]–[102]. This opens up the possibility of imaging
not only the fine morphological structure of many organ systems
(e.g., microfibrils that form blood vessels), but also subcellular
behavior and molecular signaling. The use of quantum dots or
qdots also pushes the boundary of imaging resolution, allowing
the study of intracellular processes at molecular levels (20–
40 nm) [103]. Another class of fluorescent labels is made by
conjugating qdots with biorecognition molecules, whose emission wavelength can be tuned by changing the particle size such
that a single light source can be used for simultaneous excita-
tion of all different-sized dots [104]. These technologies have
already been used for immunofluorescence labeling of tissues,
fixed cells, and membrane proteins, such as cancer markers
[105], the hybridization of chromosomes [106], the labeling of
DNA [107], and contrast-enhanced image-guided resection of
tumors [108].
B. From Morphology to Function
The understanding of many biological processes requires the
identification and representation of structure–function relation-
ships. This expands across different spatial scales, namely pro-
teins, cells, tissues, and organs. For instance, haemodynamic
analysis combined with contractile analysis, substantiated with
myocardial perfusion data, can be used to elucidate the un-
derlying factors associated with cardiac abnormalities. Starting
with modeling the tissue and scaling up toward a more specific
description of organ behaviors has made it possible to create in-
tegrative models of heart function [109], [110]. These architec-
tural models fuse information such as fibrous-sheet geometrical
models of tissue and membrane currents from ion channels at
the subcellular level [111].
Amongst all organs that have been studied to define their function from their morphology, the brain is the one that has received the most attention recently. This is motivated by the fact that brain structure and function are key to understanding cognitive processes, hence the need for unveiling neuronal
behavior from the molecular level up to the functioning of
neural circuits. Super-resolution fluorescence microscopy has
been applied to study neural morphology and their subcellular
structures. These techniques may enable us to achieve a
resolution as high as 20 nm [112]. Needless to say, the myriad
of markers necessary for each single type of cell and synapse
would result in an enormous database.
Methods such as functional MRI (fMRI) and diffusion tensor imaging (DTI) provide rich information in the form
of macrostructural, microstructural, and dense connectivity ma-
trices. Improved fMRI sampling methods produce time-series
data of multiple blood oxygenation-level-dependent volumes of
the brain [113]. In addition, there is an increasing trend in making neuroimaging multimodal. In some studies, several modalities are used to complement one another's benefits and trade-offs. Furthermore, information from lower cost and rapid
noninvasive methods, such as wearable electroencephalogra-
phy and functional near-infrared spectroscopy allows gathering
brain functional data for examining cortical responses to more complex tasks.
An indirect way of inferring function consists of a combination of imaging modalities as well as medical records, demographics, and lab test results. In order to maximize the information contained in these heterogeneous sources, linking different
metadata with features extracted from image modalities is key
to characterize the structure, function, and progression of dis-
eases. Solving this challenge presents a unique opportunity for
bridging the semantic gap between images and more effective
prediction, diagnosis, and treatment of diseases. However, this
issue entails many independent yet interrelated tasks, such as
generating, segmenting, and extracting enormous amounts of
quantifiable spatial objects and features (nuclei, tissue regions,
blood vessels, etc.). This requires the implementation of effec-
tive and optimized querying systems [114] in order to reduce
the computational complexity of handling these data. Fig. 9 rep-
resents a schema of what big data means for imaging, as defined
by both structural and functional data.
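A toy sketch of this metadata-linking step, assuming hypothetical field names (`patient_id`, `volume_mm3`, etc.): image-derived features are joined to per-patient records so that structural measures can be queried alongside demographics and diagnoses.

```python
# Join features extracted from imaging with patient metadata by ID,
# then run a simple query over the linked records. All records and
# field names are invented for illustration.

image_features = [
    {"patient_id": "P1", "region": "hippocampus", "volume_mm3": 3100},
    {"patient_id": "P2", "region": "hippocampus", "volume_mm3": 2650},
]
metadata = {
    "P1": {"age": 64, "diagnosis": "control"},
    "P2": {"age": 71, "diagnosis": "dementia"},
}

def link(features, meta):
    """Merge each image-derived feature record with the metadata
    of the patient it belongs to."""
    return [{**f, **meta.get(f["patient_id"], {})} for f in features]

linked = link(image_features, metadata)
# A simple 'query' over the linked records:
small_hippocampi = [r["patient_id"] for r in linked if r["volume_mm3"] < 3000]
print(small_hippocampi)  # ['P2']
```

At scale, the same join is what the optimized querying systems of [114] perform over millions of segmented spatial objects rather than a two-row list.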
Existing efforts in improving the spatiotemporal constraints
of brain imaging are significantly increasing the computational
resources needed for neuroimaging studies. RAM is an important resource for neuroimaging analysis. For instance,
to perform subject-, voxel-, and trial-level analyses, a significant number of fMRI images needs to be loaded into memory.
Fig. 10 illustrates the evolution of the required amount of RAM
reported by neuroimaging-related studies in pubmed.org. From
2013 onwards, there has been a fast increase in the amount of
RAM reported (from 8 to 60 GB). If this trend is confirmed, the
amount of memory used in a study could reach values of around
260 GB by 2020.
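The extrapolation behind such a forecast can be sketched as an ordinary least-squares line over (year, RAM) points projected forward. The data points below are illustrative stand-ins, not the values actually surveyed for Fig. 10.

```python
# Fit a least-squares line to reported RAM use per year and project
# it forward. Points are invented to roughly echo the 8 -> 60 GB
# growth cited in the text; the 2020 projection depends entirely on
# these assumed points.

years = [2013, 2014, 2015]
ram_gb = [8.0, 25.0, 60.0]

xs = [y - 2013 for y in years]          # years since 2013
n = len(xs)
sx, sy = sum(xs), sum(ram_gb)
sxy = sum(x * y for x, y in zip(xs, ram_gb))
sxx = sum(x * x for x in xs)
b = (n * sxy - sx * sy) / (n * sxx - sx ** 2)   # slope (GB/year)
a = (sy - b * sx) / n                           # intercept

def project(year):
    """Extrapolate the fitted line to a future year."""
    return a + b * (year - 2013)

print(project(2020))  # 187.0 GB under these illustrative points
```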
C. Research Initiatives to Understand the Human Brain
Another active topic in imaging is to study the functional
connectivity of the human brain, which is fundamental to both
basic and applied neurobiological research [115]. Both U.S. and
European Union (EU) have launched large-scale Human Brain
projects in recent years with an aim to unravel the organ’s com-
plexity. The NIH-funded Human Connectome Project (HCP),
with a funding scale of 30 million US$, aims at leveraging
the latest advances in DTI to study brain areas in relation to
their functional, structural, and electrophysiological connectiv-
ity [116]. The idea behind the HCP is that neural connectivity is
as unique as the fingerprint to each individual. Genetics, environ-
mental influences, and life experience are factors contributing to
the formation of each individual’s neural circuitry [117]. This is
Fig. 9. Processing schema of imaging toward big data. Nonfunctional medical imaging is acquired and processed to serve as a model to register organ activity in
the resulting functional imaging. Results from image processing and functional imaging are stored in databases with specific metadata protocols. Large-scale big data analysis is then performed on these databases, linking the features extracted through medical image processing and functional imaging.
Fig. 10. Amount of RAM needed and forecasted to be used in neuroimaging
studies.
supported by genome-wide association studies that link genetic
variants with neurological and psychiatric disorders that have
abnormal brain connectivity, e.g., variants at human clusterin
(CLU) on chromosome 8 and complement receptor 1 on chro-
mosome 1 are associated with Alzheimer’s disease [118] as well
as those specific markers associated with schizophrenia [119]
and dementia [120].
Recently, the EU Commission has committed €1.1 billion to the Human Brain Project, which aims to develop a biological
model of the brain that simulates different aspects of the ner-
vous system, including point neuron models, neural circuitry,
and cellular models at different scales. The main idea is to
provide a simulation platform for theoretical neuroscientists to
study how the brain processes information. For this purpose, it
would require simulating all functions, architecture, and chemi-
cal properties for the 86 billion neurons and trillions of synapses
of the human brain as estimated by Azevedo et al. [121]. There-
fore, the aim of the project is both ambitious and controversial.
A panel review disclosed earlier this year, after the project had
been launched for 18 months, urges the project team to adjust
its governance and scientific direction [122]. Specifically, the
report emphasizes that it is overambitious for whole-brain sim-
ulation, and that the project should consider the perspective of
other sciences involved in the study of how the brain works, such
as neuropsychology or neuroimaging. It is hoped that the adjusted aim could complement the U.S. BRAIN initiative [122].
VI. DISCUSSION
According to International Data Corporation, worldwide
spending on information and communication technology will
reach 5 trillion US$ by 2020, and at least 80% of the growth
will be driven by platform technologies, which encompass mo-
bile technology, cloud services, social technologies, and big
data analytics. Table I shows a selection of studies illustrating
the potential of applying big data to health and the considerable
increase in data complexity and heterogeneity in the field.
Applying big data to health is not only important to biological and physical sciences, but equally important to what has traditionally been considered as "soft" sciences, such as
behavior and social sciences [123]. It is well known that hu-
man behavior is a significant driver for environmental problems,
such as climate change, air pollution, and medical issues. Nev-
ertheless, there are few studies that actually study these issues
systematically and quantitatively. With the advanced technolo-
gies reviewed in this paper, it is now possible to study human
behavior, including their physical actions, observable emotions,
personality, temperament, and social interaction patterns, all of
TABLE I
EXAMPLES OF STUDIES ILLUSTRATING THE POTENTIAL OF BIG DATA IN HEALTH
Area            | Sample                        | Methods                 | Data type                                     | Ref
B               | 2708 subjects                 | Biostatistics           | Gene expression data                          | [125]
HI (EHR)        | 2974 patients                 | Machine Learning (NLP)  | Patient records and laboratory results        | [126]
HI (EHR)        | 42 160 controls, 8549 patients| Statistics              | Categorical database of patient records       | [127]
B               | 876* subjects                 | Genomics                | Gene expression data                          | [128]
S               | 200* patients                 | Machine Learning        | Wearable sensor and annotation data           | [129]
HI              | 3000 animal samples           | Statistics              | Veterinary records of health assessment       | [130]
HI              | 745 053 patients              | Machine Learning        | Preoperative risk data and patient records    | [131]
IMG             | 1414 subjects                 | Network Analysis        | Resting-state neural fMRI data                | [132]
IMG, EHR        | 228* patients                 | Machine Learning        | PET scans and patient records                 | [133]
HI (SN and ENV) | 465 million records           | Machine Learning        | Social network and air quality data           | [20]
HI (SN)         | 686 003 social network users  | Machine Learning (NLP)  | Emotions in users' news feeds during 20 years | [134]
Acronyms: B (Bioinformatics), HI (Health Informatics), S (Sensing), IMG (Imaging), EHR (Electronic Health Records), ENV (Environmental data), SN (Social Network), NLP (Natural Language Processing).
* Although these samples do not comprise more than 1000 instances, they can be considered large for the particular area of study.
which are conventionally difficult to measure and quantify. This
will further help us to understand the mechanisms of disease de-
velopment, and how these diseases spread and affect one another
at the community level.
Health informatics applications are known to generate
datasets that are complicated to store, untangle, organize, pro-
cess, and, above all, interpret. From a scientific perspective,
studies with a limited cohort of patients and controls can only
serve as a proof-of-concept for future treatments and diagnoses.
Close to 3000 scientific studies indexed in pubmed.org since
2005 state that their conclusions should be “interpreted with
caution” due to issues relative to statistical sampling. Large lon-
gitudinal and multimodal studies are necessary to discover the
causes, risk, and improvement factors of several diseases, such as cancer, Parkinson's, Alzheimer's, and arthritis.
It must be emphasized that the interpretation of big data
should be handled with care in all situations. In particular,
proven cases show large discrepancies between the predicted
and actual values. After all, predicting the future is always difficult. Despite its early success, Google Flu Trends (GFT) in 2013
was predicting more than twice the proportion of doctor vis-
its for influenza-like illness than that of the Centers for Disease
Control and Prevention [124]. A number of factors contributed to this problem and should be avoided in future studies in this area. First, the quality of the data collected should not be compromised for the quantity of the desired data. In many problems
that researchers are dealing with, the number of parameters considered in a model may lead to severe overfitting. Thus, the
trained model was unable to predict future trends in this example
because it put too much focus on the idiosyncrasies of the data
at hand. Moreover, specific data points (outliers) may dominate the trained model even though they have no predictive value.
For the case of GFT, the nonseasonal 2009 influenza A–H1N1
pandemic was also incorporated in the model, which makes it
partly a flu detector and partly a winter detector. Second, algo-
rithm dynamics can induce errors in the prediction, particularly
for analyzing big data. Often, both the data collected and the
algorithms are changing at different paces. Capturing a specific
instance can, therefore, be difficult due to the enormous amount
of variations.
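The overfitting failure mode described above can be made concrete with a toy example: a model with as many parameters as observations reproduces noisy training data exactly, yet extrapolates wildly, while a far simpler model generalizes. The "flu-visit" numbers are synthetic and illustrative.

```python
# A degree-5 polynomial through 6 noisy points has zero training
# error but a wild forecast; the sample mean stays sensible.

def lagrange(xs, ys, x):
    """Evaluate the degree n-1 interpolating polynomial at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Weekly flu-visit proportions with noise (synthetic data).
weeks = [1, 2, 3, 4, 5, 6]
visits = [2.0, 2.2, 1.9, 2.1, 2.0, 2.2]   # roughly flat around 2.1

overfit_forecast = lagrange(weeks, visits, 8)   # as many params as points
simple_forecast = sum(visits) / len(visits)     # just the mean

print(round(overfit_forecast, 1), round(simple_forecast, 2))
# the interpolant overshoots to about 32, the mean stays near 2
```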
A. Processing Big Data
A bottleneck in analyzing big data is to obtain fast inference in
real time from large and high-dimensional observations. For in-
stance, high-dimensional spaces may arise from an extensive set
of biomarkers [135], health attributes, and sensor fusion [136].
From a software point of view, processing big data is usually
linked to parallel programming paradigms such as MapReduce
[137]. Several open-source frameworks such as Hadoop have
been considered to store distributed databases in a scalable ar-
chitecture, as a basis for tools (e.g., Cascading, Pig, Hive) that
allow developing applications to process vast amounts of data
on commodity clusters. However, when combined with the continuous streams of pervasive health monitoring data, this also requires capacities for iterative and low-latency computations, which depend on sophisticated models of data caching and
in-memory computation.
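A minimal, single-machine sketch of the MapReduce paradigm mentioned above: map each record to key-value pairs, shuffle (group) by key, and reduce by summation. Frameworks such as Hadoop distribute the same three phases across a cluster; the record format here is invented for illustration.

```python
from collections import defaultdict
from itertools import chain

def map_phase(record):
    """Emit one (event, 1) pair per abnormal event in a record."""
    return [(event, 1) for event in record["events"]]

def shuffle(pairs):
    """Group mapped values by key, as the framework's shuffle would."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Sum the grouped values per key."""
    return {key: sum(values) for key, values in groups.items()}

records = [
    {"patient": "P1", "events": ["afib", "pvc"]},
    {"patient": "P2", "events": ["pvc"]},
    {"patient": "P3", "events": ["afib", "afib"]},
]
mapped = chain.from_iterable(map_phase(r) for r in records)
print(reduce_phase(shuffle(mapped)))  # {'afib': 3, 'pvc': 2}
```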
In addition to the processing architecture, machine-learning-
based data analysis also requires specific tuning to learn a
classifier or regressor over large-scale datasets. Dimensionality reduction and feature selection can help us to cope with
the curse of dimensionality. Nevertheless, whether supervised
or unsupervised, these algorithms also require the regular im-
plementation of a learning process to obtain a mapping or a
set of maximally informative dimensions. Some machine learn-
ing methods, such as deep learning, involve learning several
layered transformations of the data in order to find the best
high-level abstraction for the problem at hand, mimicking the
way neuroscience explains learning [138], [139]. Most machine
learning techniques involve learning a set of model parameters
that need to be found by means of optimization. The complexity of this learning process typically increases when dealing with big data. When the number of observations grows, sample-by-sample iterative parameter learning methods can be
a solution [140]. Another interesting option for scalable learn-
ing is to incrementally generate the set of required parameters
or update the model structure whilst new data is being added
[141], [142]. Online methods of variable selection and regu-
larization are recommended to deactivate spurious variables in
order to ease this scalability to large dimensions during learning
[143], [144].
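The sample-by-sample learning with online regularization described above can be sketched as stochastic gradient descent with a soft-threshold L1 penalty that drives spurious weights to exactly zero. Learning rate, penalty strength, and data are illustrative assumptions, not the methods of [140]–[144].

```python
import random

def sgd_l1_step(w, x, y, lr=0.05, lam=0.01):
    """One stochastic gradient step for linear regression, followed
    by L1 soft-thresholding that deactivates small weights."""
    pred = sum(wi * xi for wi, xi in zip(w, x))
    w = [wi - lr * (pred - y) * xi for wi, xi in zip(w, x)]
    shrink = lr * lam
    return [0.0 if abs(wi) <= shrink else wi - shrink * (1 if wi > 0 else -1)
            for wi in w]

random.seed(0)
w = [0.0, 0.0, 0.0]
for _ in range(2000):                       # one observation per step
    x = [random.gauss(0, 1) for _ in range(3)]
    y = 2.0 * x[0]                          # only feature 0 carries signal
    w = sgd_l1_step(w, x, y)

print([round(wi, 2) for wi in w])  # first weight near 2, others near 0
```

Because each update touches one observation, memory use is constant in the number of samples, which is the property that makes such methods attractive at big-data scale.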
B. Data Privacy and Security
The emergence of big data for health raises additional chal-
lenges in relation to privacy, security, data ownership, stew-
ardship, and governance. Personal data, which is regarded as
the “New Oil” of the 21st century as coined during the 2011
World Economic Forum [145], are being generated at a tremen-
dously fast speed due to the launch of many new intelligent
devices, sensors, networks, and software applications. While
these datasets used to be generated and stored at a centralized location, today they are often distributed over various
servers and networks. In the healthcare domain, data privacy
is of utmost importance as regulated by laws in countries with
large populations. Closely related to the privacy issue is that data
must be linked to the right person to ensure correct diagnosis
and treatment. Therefore, the collected data about an individual
must be uniquely tagged with an identifier. Furthermore, data
security should be ensured at all levels of the healthcare system,
including at the sensor level at which the data is collected [146].
VII. CONCLUSION
Big data can serve to boost the applicability of clinical re-
search studies into real-world scenarios, where population het-
erogeneity is an obstacle. It equally provides the opportunity
to enable effective and precision medicine by performing pa-
tient stratification. This is indeed a key task toward personal-
ized healthcare. A better use of medical resources by means of
personalization can lead to well-managed health services that
can overcome the challenges of a rapidly increasing and aging
population. Thus, advances in big data processing for health
informatics, bioinformatics, sensing, and imaging will have a
great impact on future clinical research. Another important fac-
tor to consider is rapid and seamless health data acquisition,
which will contribute to the success of big data in medicine.
Specifically, sensing provides a solid set of solutions to fill this
gap. Health data acquisition still involves a slow and complex process requiring the involvement of specialized health personnel and laboratories. In this context, faster and unobtrusive health data acquisition can be provided by means of pervasive sensing. The
use of sensors provides the capacity for long periods of continuous monitoring, in place of sporadic screening that may only offer a narrow picture of the development of a disease. However, deploying
continuous sensing over a large population will result in a large
amount of information that requires both on-node data abstrac-
tion and distributed inference. At the population level, one individual's unfortunate past can provide significant insight into forecasting and preventing the same incident from occurring in others. Last
but not least, governmental policy and regulation are
required to ensure privacy during data transmission and storage,
as well as during subsequent data analysis tasks.
REFERENCES
[1] M. Cottle, W. Hoover, S. Kanwal, M. Kohn, T. Strome, and N. W. Treis-
ter, Transforming Health Care Through Big Data, Institute for Health
Technology Transformation, Washington DC, USA, 2013.
[2] D. Laney, “3D data management: Controlling data volume, velocity and
variety,” Meta Group Inc., Stamford, CT, USA, Tech. Rep. 949, 2011.
[3] Public Expenditure Statistical Analyses, HM Treasury, London, U.K.,
2012.
[4] G. A. Cuckler, A. M. Sisko, S. P. Keehan, S. D. Smith, A. J. Madison, J.
A. Poisal, C. J. Wolfe, J. M. Lizonitz, and D. A. Stone, “National health
expenditure projections, 2012–22: slow growth until coverage expands
and economy improves,” Health Affairs, vol. 32, pp. 1820–1831, 2013.
[5] C. Friedman, L. Shagina, Y. Lussier, and G. Hripcsak, “Automated en-
coding of clinical documents based on natural language processing,” J.
Amer. Med. Informat. Assoc., vol. 11, pp. 392–402, 2004.
[6] J. Sun, C. D. McNaughton, P. Zhang, A. Perer, A. Gkoulalas-Divanis,
J. C. Denny, J. Kirby, T. Lasko, A. Saip, and B. A. Malin, “Predicting
changes in hypertension control using electronic health records from a
chronic disease management program,” J. Amer. Med. Informat. Assoc.,
vol. 21, pp. 337–344, 2014.
[7] G. N. Forrest, T. C. Van Schooneveld, R. Kullar, L. T. Schulz, P. Duong,
and M. Postelnick, “Use of electronic health records and clinical decision
support systems for antimicrobial stewardship,” Clin. Infectious Dis., vol.
59, pp. 122–133, 2014.
[8] R. Eriksson, T. Werge, L. J. Jensen, and S. Brunak, "Dose-specific adverse
drug reaction identification in electronic patient records: Temporal data
mining in an inpatient psychiatric population,” Drug Saf., vol. 37, pp.
237–247, 2014.
[9] D. W. Bates, S. Saria, L. Ohno-Machado, A. Shah, and G. Escobar,
“Big data in health care: Using analytics to identify and manage high-
risk and high-cost patients,” Health Affairs, vol. 33, pp. 1123–1131,
2014.
[10] M. R. Boland, G. Hripcsak, D. J. Albers, Y. Wei, A. B. Wilcox, J. Wei,
J. Li, S. Lin, M. Breene, and R. Myers, “Discovering medical conditions
associated with periodontitis using linked electronic health records,” J.
Clin. Periodontol., vol. 40, pp. 474–482, 2013.
[11] H. Xu, M. C. Aldrich, Q. Chen, H. Liu, N. B. Peterson, Q. Dai, M. Levy,
A. Shah, X. Han, and X. Ruan, “Validating drug repurposing signals
using electronic health records: A case study of metformin associated
with reduced cancer mortality,” J. Amer. Med. Informat. Assoc., pp. 1–
10, 2014.
[12] Y. Hagar, D. Albers, R. Pivovarov, H. Chase, V. Dukic, and N. Elhadad,
“Survival analysis with electronic health record data: Experiments with
chronic kidney disease,” Statist. Anal. Data Mining, ASA Data Sci. J.,
vol. 7, pp. 385–403, 2014.
[13] T. Cars, B. Wettermark, R. E. Malmström, G. Ekeving, B. Vikström,
U. Bergman, M. Neovius, B. Ringertz, and L. L. Gustafsson, “Extrac-
tion of electronic health record data in a hospital setting: Comparison of automatic and semi-automatic methods using anti-TNF therapy as model," Basic Clin. Pharmacol. Toxicol., vol. 112, pp. 392–400,
2013.
[14] M. Marcos, J. A. Maldonado, B. Martínez-Salvador, D. Boscá, and M.
Robles, “Interoperability of clinical decision-support systems and elec-
tronic health records using archetypes: A case study in clinical trial
eligibility,” J. Biomed. Informat., vol. 46, pp. 676–689, 2013.
[15] J. Andreu-Perez, D. Leff, H. M. D. Ip, and G.-Z. Yang, "From wear-
able sensors to smart implants—Towards pervasive and personalised
healthcare,” IEEE Trans. Biomed. Eng., pp. 1–13, 2015, submitted for
publication.
[16] A. Signorini, A. M. Segre, and P. M. Polgreen, “The use of Twitter to
track levels of disease activity and public concern in the US during the
influenza A H1N1 pandemic,” PLoS One, vol. 6, pp. 1–10, 2011.
[17] A. Culotta, “Towards detecting influenza epidemics by analyzing Twitter
messages,” in Proc. 1st Workshop Soc. Media Analytics, 2010, pp. 115–
122.
[18] N. A. Christakis and J. H. Fowler, “The collective dynamics of smoking
in a large social network,” N. Engl. J. Med., vol. 358, pp. 2249–2258,
2008.
[19] D. Scanfeld, V. Scanfeld, and E. L. Larson, “Dissemination of health
information through social networks: Twitter and antibiotics,” Amer. J.
Infection Control, vol. 38, pp. 182–188, 2010.
[20] M. Larsen, T. Boonstra, P. Batterham, B. O’Dea, C. Paris, and H.
Christensen, “We feel: Mapping emotion on Twitter,” IEEE J. Biomed.
Health Informat., pp. 1–7, 2015, submitted for publication.
[21] S. Ram, W. Zhang, M. Williams, and Y. Pengetnze, “Predicting asthma-
related emergency department visits using big data,” IEEE J. Biomed.
Health Informat., pp. 1–8, 2015, submitted for publication.
[22] R. S. Kovats and S. Hajat, “Heat stress and public health: A critical
review,” in Annual Review of Public Health, vol. 29. Palo Alto, CA,
USA: Annual Reviews, 2008, pp. 41–57.
ANDREU-PEREZ et al.: BIG DATA FOR HEALTH 1205
[23] W. R. Keatinge, G. C. Donaldson, K. Bucher, G. Jendritsky, E. Cor-
dioli, M. Martinelli, L. Dardanoni, K. Katsouyanni, A. E. Kunst, J. P.
Mackenbach, C. McDonald, S. Nayha, and I. Vuori, “Cold exposure and
winter mortality from Ischaemic heart disease, cerebrovascular disease,
respiratory disease, and all causes in warm and cold regions of Europe,”
Lancet, vol. 349, pp. 1341–1346, May 1997.
[24] R. J. Hijmans, S. E. Cameron, J. L. Parra, P. G. Jones, and A. Jarvis, “Very
high resolution interpolated climate surfaces for global land areas,” Int.
J. Climatol., vol. 25, pp. 1965–1978, Dec. 2005.
[25] M. A. Friedl, D. Sulla-Menashe, B. Tan, A. Schneider, N. Ramankutty,
A. Sibley, and X. M. Huang, “MODIS Collection 5 global land cover:
Algorithm refinements and characterization of new datasets,” Remote
Sens. Environ., vol. 114, pp. 168–182, Jan. 2010.
[26] S. Moltchanov et al., “On the feasibility of measuring urban air pollution
by wireless distributed sensor networks,” Sci. Total Environ., vol. 502,
pp. 537–547, 2015.
[27] B. Lobitz, L. Beck, A. Huq, B. Wood, G. Fuchs, A. S. G. Faruque, and
R. Colwell, “Climate and infectious disease: Use of remote sensing for
detection of Vibrio cholerae by indirect measurement,” Proc. Nat. Acad.
Sci. United States Amer., vol. 97, pp. 1438–1443, Feb. 2000.
[28] G. Luber and M. McGeehin, “Climate change and extreme heat events,”
Amer. J. Prev. Med., vol. 35, pp. 429–435, Nov. 2008.
[29] J. C. Semenza and B. Menne, “Climate change and infectious diseases
in Europe,” Lancet Infectious Dis., vol. 9, pp. 365–375, Jun. 2009.
[30] J. C. Eichstaedt, H. A. Schwartz, M. L. Kern, G. Park, D. R. Labarthe, R.
M. Merchant, S. Jha, M. Agrawal, L. A. Dziurzynski, M. Sap, C. Weeg,
E. E. Larson, L. H. Ungar, and M. E. Seligman, “Psychological language
on Twitter predicts county-level heart disease mortality,” Psychol. Sci.,
vol. 26, pp. 159–169, Feb. 2015.
[31] V. Vodopivec-Jamsek, T. de Jongh, I. Gurol-Urganci, R. Atun, and J.
Car, “Mobile phone messaging for preventive health care,” Cochrane
Database Syst. Rev., vol. 12, pp. 1–44, 2012.
[32] A. Ramachandran, C. Snehalatha, J. Ram, S. Selvam, M. Simon, A.
Nanditha, A. S. Shetty, I. F. Godsland, N. Chaturvedi, A. Majeed, N.
Oliver, C. Toumazou, K. G. Alberti, and D. G. Johnston, “Effectiveness
of mobile phone messaging in prevention of type 2 diabetes by lifestyle
modification in men in India: A prospective, parallel-group, randomised
controlled trial,” Lancet Diab. Endocrinol., vol. 1, pp. 191–198, 2013.
[33] A. R. Zlotta, “Words of wisdom: Re: Genome sequencing identifies a
basis for everolimus sensitivity,” Eur. Urol., vol. 64, p. 516, Sep. 2013.
[34] G. Iyer, A. J. Hanrahan, M. I. Milowsky, H. Al-Ahmadie, S. N. Scott, M.
Janakiraman, M. Pirun, C. Sander, N. D. Socci, I. Ostrovnaya, A. Viale,
A. Heguy, L. Peng, T. A. Chan, B. Bochner, D. F. Bajorin, M. F. Berger,
B. S. Taylor, and D. B. Solit, “Genome sequencing identifies a basis for
everolimus sensitivity,” Science, vol. 338, pp. 221–223, Oct. 2012.
[35] M. J. Garnett, E. J. Edelman, S. J. Heidorn, C. D. Greenman, A. Dastur,
K. W. Lau, P. Greninger, I. R. Thompson, X. Luo, J. Soares, Q. S.
Liu, F. Iorio, D. Surdez, L. Chen, R. J. Milano, G. R. Bignell, A. T.
Tam, H. Davies, J. A. Stevenson, S. Barthorpe, S. R. Lutz, F. Kogera,
K. Lawrence, A. McLaren-Douglas, X. Mitropoulos, T. Mironenko, H.
Thi, L. Richardson, W. J. Zhou, F. Jewitt, T. H. Zhang, P. O’Brien, J.
L. Boisvert, S. Price, W. Hur, W. J. Yang, X. M. Deng, A. Butler, H.
G. Choi, J. Chang, J. Baselga, I. Stamenkovic, J. A. Engelman, S. V.
Sharma, O. Delattre, J. Saez-Rodriguez, N. S. Gray, J. Settleman, P. A.
Futreal, D. A. Haber, M. R. Stratton, S. Ramaswamy, U. McDermott,
and C. H. Benes, “Systematic identification of genomic markers of drug
sensitivity in cancer cells,” Nature, vol. 483, pp. 570–575, Mar. 2012.
[36] J. Barretina, G. Caponigro, N. Stransky, K. Venkatesan, A. A. Margolin,
S. Kim, C. J. Wilson, J. Lehar, G. V. Kryukov, D. Sonkin, A. Reddy, M.
W. Liu, L. Murray, M. F. Berger, J. E. Monahan, P. Morais, J. Meltzer,
A. Korejwa, J. Jane-Valbuena, F. A. Mapa, J. Thibault, E. Bric-Furlong,
P. Raman, A. Shipway, I. H. Engels, J. Cheng, G. Y. K. Yu, J. J. Yu,
P. Aspesi, M. de Silva, K. Jagtap, M. D. Jones, L. Wang, C. Hatton, E.
Palescandolo, S. Gupta, S. Mahan, C. Sougnez, R. C. Onofrio, T. Liefeld,
L. MacConaill, W. Winckler, M. Reich, N. X. Li, J. P. Mesirov, S. B.
Gabriel, G. Getz, K. Ardlie, V. Chan, V. E. Myer, B. L. Weber, J. Porter,
M. Warmuth, P. Finan, J. L. Harris, M. Meyerson, T. R. Golub, M. P.
Morrissey, W. R. Sellers, R. Schlegel, and L. A. Garraway, “The cancer
cell line encyclopedia enables predictive modelling of anticancer drug
sensitivity,” Nature, vol. 483, pp. 603–607, Mar. 2012.
[37] J. C. Costello, L. M. Heiser, E. Georgii, M. Gönen, M. P. Menden,
N. J. Wang, M. Bansal, M. Ammad-ud-din, P. Hintsanen, S. A. Khan,
J.-P. Mpindi, O. Kallioniemi, A. Honkela, T. Aittokallio, K. Wennerberg,
NCI DREAM Community, J. J. Collins, D. Gallahan, D. Singer, J.
Saez-Rodriguez, S. Kaski, J. W. Gray, and G. Stolovitzky, “A community
effort to assess and improve drug sensitivity prediction algorithms,”
Nature Biotechnol., vol. 32, pp. 1202–1215, Dec. 2014.
[38] TCGA-web. (2015). [Online]. Available: https://tcga-data.nci.nih.
gov/tcga/tcgaHome2.jsp
[39] The Cancer Genome Atlas Network, “Comprehensive molecular por-
traits of human breast tumours,” Nature, vol. 490, pp. 61–70,
Oct. 2012.
[40] J. N. Weinstein, E. A. Collisson, G. B. Mills, K. R. Shaw, B. A. Ozen-
berger, K. Ellrott, I. Shmulevich, C. Sander, and J. M. Stuart, “The Cancer
genome atlas pan-cancer analysis project,” Nature Genetic, vol. 45, pp.
1113–1120, Oct. 2013.
[41] J. T. Dudley, M. Sirota, M. Shenoy, R. K. Pai, S. Roedder, A. P. Chiang, A.
A. Morgan, M. M. Sarwal, P. J. Pasricha, and A. J. Butte, “Computational
repositioning of the anticonvulsant topiramate for inflammatory bowel
disease,” Sci. Transl. Med, vol. 3, pp. 1–8, Aug. 2011.
[42] M. Sirota, J. T. Dudley, J. Kim, A. P. Chiang, A. A. Morgan, A. Sweet-
Cordero, J. Sage, and A. J. Butte, “Discovery and preclinical validation
of drug indications using compendia of public gene expression data,”
Sci. Transl. Med., vol. 3, pp. 1–10, Aug. 2011.
[43] L. Huang, F. Li, J. Sheng, X. Xia, J. Ma, M. Zhan, and S. T. C. Wong,
“DrugComboRanker: drug combination discovery based on target net-
work analysis,” Bioinformatics, vol. 30, pp. 228–236, Jun. 2014.
[44] J. Lee, D. G. Kim, T. J. Bae, K. Rho, J.-T. Kim, J.-J. Lee, Y. Jang, B.
C. Kim, K. M. Park, and S. Kim, “CDA: Combinatorial drug discovery
using transcriptional response modules,” PLoS One, vol. 7, no. 8, pp.
1–11, 2012.
[45] F. Iorio, R. Bosotti, E. Scacheri, V. Belcastro, P. Mithbaokar, R. Fer-
riero, L. Murino, R. Tagliaferri, N. Brunetti-Pierri, A. Isacchi, and D.
di Bernardo, “Discovery of drug mode of action and drug repositioning
from transcriptional responses,” Proc. Natl. Acad. Sci. USA, vol. 107, pp.
14621–14626, Aug. 2010.
[46] V. van Noort, S. Scholch, M. Iskar, G. Zeller, K. Ostertag, C.
Schweitzer, K. Werner, J. Weitz, M. Koch, and P. Bork, “Novel drug can-
didates for the treatment of metastatic colorectal cancer through global
inverse gene expression profiling,” Cancer Res., vol. 74, no. 20, pp.
5690–5699, 2014.
[47] N. S. Jahchan, J. T. Dudley, P. K. Mazur, N. Flores, D. Yang, A.
Palmerton, A.-F. Zmoos, D. Vaka, K. Q. T. Tran, M. Zhou, K. Krasinska,
J. W. Riess, J. W. Neal, P. Khatri, K. S. Park, A. J. Butte, and J. Sage,
“A drug repositioning approach identifies tricyclic antidepressants as
inhibitors of small cell lung cancer and other neuroendocrine tumors,”
Cancer Discovery, vol. 3, no. 12, pp. 1364–1377, Sep. 2013.
[48] P. Geeleher, N. J. Cox, and R. S. Huang, “Clinical drug response
can be predicted using baseline gene expression levels and in vitro
drug sensitivity in cell lines,” Genome Biol., vol. 15, no. 3, pp. 1–12,
2014.
[49] A. Prahallad, C. Sun, S. D. Huang, F. Di Nicolantonio, R. Salazar,
D. Zecchin, R. L. Beijersbergen, A. Bardelli, and R. Bernards, “Un-
responsiveness of colon cancer to BRAF(V600E) inhibition through
feedback activation of EGFR,” Nature, vol. 483, pp. 100–103,
Mar. 2012.
[50] M. Nunes, P. Vrignaud, S. Vacher, S. Richon, A. Lievre, W. Cacheux, L.
B. Weiswald, G. Massonnet, S. Chateau-Joubert, A. Nicolas, C. Dib, W.
D. Zhang, J. Watters, D. Bergstrom, S. Roman-Roman, I. Bieche, and V.
Dangles-Marie, “Evaluating patient-derived colorectal cancer xenografts
as preclinical models by comparison with patient clinical data,” Cancer
Res., vol. 75, pp. 1560–1566, Apr. 2015.
[51] A. Kreso, C. A. O’Brien, P. van Galen, O. I. Gan, F. Notta, A. M. K.
Brown, K. Ng, J. Ma, E. Wienholds, C. Dunant, A. Pollett, S. Gallinger, J.
McPherson, C. G. Mullighan, D. Shibata, and J. E. Dick, “Variable clonal
repopulation dynamics influence chemotherapy response in colorectal
cancer,” Science, vol. 339, pp. 543–548, Feb. 2013.
[52] K. A. Kim, P. W. Park, S. J. Hong, and J. Y. Park, “The effect of CYP2C19
polymorphism on the pharmacokinetics and pharmacodynamics of clopi-
dogrel: A possible mechanism for clopidogrel resistance,” Clin. Phar-
macol. Therapeutics, vol. 84, pp. 236–242, Aug. 2008.
[53] H. G. Xie, J. J. Zou, Z. Y. Hu, J. J. Zhang, F. Ye, and S. L. Chen,
“Individual variability in the disposition of and response to clopidogrel:
Pharmacogenomics and beyond,” Pharmacol. Therapeutics, vol. 129, pp.
267–289, Mar. 2011.
[54] M. H. Jiang and J. H. S. You, “Review of pharmacoeconomic evaluation
of genotype-guided antiplatelet therapy,” Expert Opinion Pharmacother-
apy, vol. 16, pp. 771–779, Apr. 2015.
[55] Y. A. Lussier and Y. Liu, “Computational approaches to phenotyping:
High-throughput phenomics,” Proc. Amer. Thoracic Soc., vol. 4, pp. 18–
25, 2007.
[56] P. B. Jensen, L. J. Jensen, and S. Brunak, “Mining electronic health
records: Towards better research applications and clinical care,” Nature
Rev. Genetic, vol. 13, pp. 395–405, 2012.
1206 IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, VOL. 19, NO. 4, JULY 2015
[57] S. Hutchinson, A. Furger, D. Halliday, D. P. Judge, A. Jefferson, H.
C. Dietz, H. Firth, and P. A. Handford, “Allelic variation in normal
human FBN1 expression in a family with Marfan syndrome: A potential
modifier of phenotype?” Hum. Mol. Genetics, vol. 12, pp. 2269–2276,
Sep. 2003.
[58] A. W. den Hartog, R. Franken, A. H. Zwinderman, J. Timmermans, A. J.
Scholte, M. P. van den Berg, V. de Waard, G. Pals, B. J. Mulder, and M.
Groenink, “The risk for type B aortic dissection in Marfan syndrome,”
J. Amer. College Cardiol., vol. 65, pp. 246–254, 2015.
[59] M. Vilardell, S. Civit, and R. Herwig, “An integrative computational
analysis provides evidence for FBN1-associated network deregulation in
trisomy 21,” Biol. Open, vol. 2, pp. 771–778, Aug. 2013.
[60] H. M. Berman, J. Westbrook, Z. Feng, G. Gilliland, T. N. Bhat, H.
Weissig, I. N. Shindyalov, and P. E. Bourne, “The protein data bank,”
Nucleic Acids Res., vol. 28, pp. 235–242, Jan. 2000.
[61] P. W. Rose, A. Prlic, C. X. Bi, W. F. Bluhm, C. H. Christie, S. Dutta,
R. K. Green, D. S. Goodsell, J. D. Westbrook, J. Woo, J. Young, C.
Zardecki, H. M. Berman, P. E. Bourne, and S. K. Burley, “The RCSB
protein data bank: Views of structural biology for basic and applied
research and education,” Nucleic Acids Res., vol. 43, pp. 345–356,
Jan. 2015.
[62] P. W. Rose, B. Beran, C. Bi, W. F. Bluhm, D. Dimitropoulos, D. S.
Goodsell, A. Prlić, M. Quesada, G. B. Quinn, and J. D. Westbrook, “The
RCSB protein data bank: Redesigned web site and web services,” Nucleic
Acids Res., vol. 39, pp. 392–401, 2011.
[63] M. Wilhelm, J. Schlegl, H. Hahne, A. M. Gholami, M. Lieberenz, M. M.
Savitski, E. Ziegler, L. Butzmann, S. Gessulat, H. Marx, T. Mathieson, S.
Lemeer, K. Schnatbaum, U. Reimer, H. Wenschuh, M. Mollenhauer, J.
Slotta-Huspenina, J. H. Boese, M. Bantscheff, A. Gerstmair, F. Faerber,
and B. Kuster, “Mass-spectrometry-based draft of the human proteome,”
Nature, vol. 509, pp. 582–587, May 2014.
[64] D. S. Wishart, T. Jewison, A. C. Guo, M. Wilson, C. Knox, Y. F. Liu,
Y. Djoumbou, R. Mandal, F. Aziat, E. Dong, S. Bouatra, I. Sinelnikov,
D. Arndt, J. G. Xia, P. Liu, F. Yallou, T. Bjorndahl, R. Perez-Pineiro, R.
Eisner, F. Allen, V. Neveu, R. Greiner, and A. Scalbert, “HMDB 3.0-The
human metabolome database in 2013,” Nucleic Acids Res., vol. 41, pp.
801–807, Jan. 2013.
[65] J. R. Bain, R. D. Stevens, B. R. Wenner, O. Ilkayeva, D. M. Muoio,
and C. B. Newgard, “Metabolomics applied to diabetes research moving
from information to knowledge,” Diabetes, vol. 58, pp. 2429–2443, Nov.
2009.
[66] R. Goodacre, S. Vaidyanathan, W. B. Dunn, G. G. Harrigan, and D. B.
Kell, “Metabolomics by numbers: Acquiring and understanding global
metabolite data,” Trends Biotechnol., vol. 22, pp. 245–252, May 2004.
[67] H. Tilg and A. R. Moschen, “Food, immunity, and the microbiome,”
Gastroenterology, vol. 148, pp. 1107–1119, 2015.
[68] A. K. Jain, M. N. Murty, and P. J. Flynn, “Data clustering: A review,”
ACM Comput. Surveys, vol. 31, pp. 264–323, Sep. 1999.
[69] R. Xu and D. Wunsch, “Survey of clustering algorithms,” IEEE Trans.
Neural Netw., vol. 16, no. 3, pp. 645–678, May 2005.
[70] S. K. Mazmanian, J. L. Round, and D. L. Kasper, “A microbial symbiosis
factor prevents intestinal inflammatory disease,” Nature, vol. 453, pp.
620–625, 2008.
[71] D. Z. Wang, R. C. C. Cheung, and H. Yan, “Design exploration of
geometric biclustering for microarray data analysis in data mining,” IEEE
Trans. Parallel Distributed Syst., vol. 25, no. 10, pp. 2540–2550, Oct.
2014.
[72] C. C. Poon and Y.-T. Zhang, “Perspectives on high technologies for low-
cost healthcare,” IEEE Eng. Med. Biology Mag., vol. 27, no. 5, pp. 42–47,
Sep/Oct. 2008.
[73] P. Rashidi and A. Mihailidis, “A survey on ambient-assisted living tools
for older adults,” IEEE J. Biomed. Health Informat., vol. 17, no. 3, pp.
579–590, May 2013.
[74] Y. Zheng, X. Ding, C. C. Y. Poon, B. Lo, H. Zhang, X. Zhou, G. Yang,
N. Zhao, and Y. Zhang, “Unobtrusive sensing and wearable devices for
health informatics,” IEEE Trans. Biomed. Informat., vol. 61, no. 5, pp.
1538–1554, May 2014.
[75] Y.-L. Zheng, B. P. Yan, Y.-T. Zhang, and C. C. Y. Poon, “An Armband
wearable device for overnight and cuff-less blood pressure measure-
ment,” IEEE Trans. Biomed. Eng., vol. 61, no. 7, pp. 2179–2186, Jul.
2014.
[76] G. Z. Yang, Body Sensor Networks, 2nd ed. New York, NY, USA:
Springer, 2014.
[77] J. N. Anker, W. P. Hall, O. Lyandres, N. C. Shah, J. Zhao, and R. P. Van
Duyne, “Biosensing with plasmonic nanosensors,” Nature Mater., vol. 7,
pp. 442–453, Jun. 2008.
[78] G. S. Wilson and R. Gifford, “Biosensors for real-time in vivo
measurements,” Biosens. Bioelectron., vol. 20, pp. 2388–2403, Jun.
2005.
[79] C. R. Yonzon, C. L. Haynes, X. Y. Zhang, J. T. Walsh, and R. P. Van
Duyne, “A glucose biosensor based on surface-enhanced Raman scat-
tering: Improved partition layer, temporal stability, reversibility, and re-
sistance to serum protein interference,” Analytical Chem., vol. 76, pp.
78–85, Jan. 2004.
[80] R. Hovorka, “Continuous glucose monitoring and closed-loop systems,”
Diab. Med., vol. 23, pp. 1–12, Jan. 2006.
[81] M. Breton, A. Farret, D. Bruttomesso, S. Anderson, L. Magni, S. Patek,
C. D. Man, J. Place, S. Demartini, S. Del Favero, C. Toffanin, C. Hughes-
Karvetski, E. Dassau, H. Zisser, F. J. Doyle, G. De Nicolao, A. Avogaro,
C. Cobelli, E. Renard, and B. Kovatchev, “Fully integrated artificial pan-
creas in type 1 diabetes: Modular closed-loop glucose control maintains
near normoglycemia,” Diabetes, vol. 61, pp. 2230–2237, Sep. 2012.
[82] S. Redmond, N. Lovell, G. Yang, A. Horsch, P. Lukowicz, L. Murrugarra,
and M. Marschollek, “What does big data mean for wearable sensor
systems?: Contribution of the IMIA wearable sensors in healthcare WG,”
Yearbook Med. Informat., vol. 9, pp. 135–142, 2014.
[83] Z. Pang, L. Zheng, J. Tian, S. Kao-Walter, E. Dubrova, and Q. Chen,
“Design of a terminal solution for integration of in-home health care
devices and services towards the Internet-of-things,” Enterprise Inf. Syst.,
vol. 9, pp. 86–116, 2015.
[84] J. B. Predd, S. R. Kulkarni, and H. V. Poor, “Distributed learning in wire-
less sensor networks,” in Proc. Annu. Allerton Conf. Commun., Control
Comput., 2005, pp. 1–10.
[85] J.-J. Xiao, A. Ribeiro, Z.-Q. Luo, and G. B. Giannakis, “Distributed
compression-estimation using wireless sensor networks,” IEEE Signal
Process. Mag., vol. 23, no. 4, pp. 27–41, Jul. 2006.
[86] Q. Liu, B. P. Yan, C.-M. Yu, Y.-T. Zhang, and C. C. Y. Poon, “Attenuation
of systolic blood pressure and pulse transit time hysteresis during exercise
and recovery in cardiovascular patients,” IEEE Trans. Biomed. Eng., vol.
61, no. 2, pp. 346–352, Feb. 2014.
[87] Y.-L. Zheng, B. P. Yan, Y.-T. Zhang, and C. C. Y. Poon, “Noninvasive
characterization of vascular tone by model-based system identification
in healthy and heart failure patients,” Ann. Biomed. Eng., 2015, to be
published.
[88] D. C. Turk, “Clinical effectiveness and cost-effectiveness of treatments
for patients with chronic pain,” Clin. J. Pain, vol. 18, pp. 355–365, 2002.
[89] T. Loddenkemper, A. Pan, S. Neme, K. B. Baker, A. R. Rezai, D. S.
Dinner, E. B. Montgomery, and H. O. Lüders, “Deep brain stimulation
in epilepsy,” J. Clin. Neurophysiol., vol. 18, pp. 514–532, 2001.
[90] Deep-Brain Stimulation for Parkinson’s Disease Study Group, “Deep-
brain stimulation of the subthalamic nucleus or the pars interna of the
globus pallidus in Parkinson’s disease,” N. Engl. J. Med., vol. 345, pp.
956–963, 2001.
[91] L. Clifton, D. A. Clifton, M. A. F. Pimentel, P. J. Watkinson, and L.
Tarassenko, “Gaussian processes for personalized e-health monitoring
with wearable sensors,” IEEE Trans. Biomed. Eng., vol. 60, no. 1, pp.
193–197, Jan. 2013.
[92] J. M. Lillo-Castellano, I. Mora-Jimenez, R. Santiago-Mozos, F.
Chavarria-Asso et al., “Symmetrical compression distance for arrhythmia
discrimination in cloud-based big-data services,” IEEE J. Biomed. Health
Inform., pp. 1–11, 2015, to be published.
[93] J. H. Wang, J. Y. Luo, L. Dong, J. Gong, and M. Tong, “Epidemiology
of gastroesophageal reflux disease: A general population-based study in
Xi’an of Northwest China,” World J. Gastroenterol., vol. 10, pp. 1647–
1651, Jun. 2004.
[94] F. I. Caird, G. R. Andrews, and R. D. Kennedy, “Effect of posture on
blood-pressure in elderly,” Brit. Heart J., vol. 35, pp. 527–530, 1973.
[95] M. Z. Poh, D. J. McDuff, and R. W. Picard, “Non-contact, automated
cardiac pulse measurements using video imaging and blind source sepa-
ration,” Opt. Express, vol. 18, pp. 10762–10774, May 2010.
[96] L. Atallah, B. Lo, R. King, and G.-Z. Yang, “Sensor positioning for ac-
tivity recognition using wearable accelerometers,” IEEE Trans. Biomed.
Circuits Syst., vol. 5, no. 4, pp. 320–329, Aug. 2011.
[97] C. C. Y. Poon and Y. T. Zhang, “Cuff-less and noninvasive measurements
of arterial blood pressure by pulse transit time,” in Proc. IEEE Eng. Med.
Biol. Soc. Int. Conf., 2005, pp. 5877–5880.
[98] R. Atun, S. Jaffar, S. Nishtar, F. M. Knaul, M. L. Barreto, M. Nyirenda,
N. Banatvala, and P. Piot, “Improving responsiveness of health systems
to non-communicable diseases,” Lancet, vol. 381, pp. 690–697, 2013.
[99] S. W. Hell and J. Wichmann, “Breaking the diffraction resolution limit
by stimulated emission depletion fluorescence microscopy,” Opt. Lett.,
vol. 19, pp. 780–782, Jun. 1994.
[100] E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J.
S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess,
“Imaging intracellular fluorescent proteins at nanometer resolution,” Sci-
ence, vol. 313, pp. 1642–1645, Sep. 2006.
[101] A. Sundaramurthy, P. J. Schuck, N. R. Conley, D. P. Fromm, G. S. Kino,
and W. E. Moerner, “Toward nanometer-scale optical photolithography:
Utilizing the near-field of bowtie optical nanoantennas,” Nano Lett., vol.
6, pp. 355–360, Mar. 2006.
[102] S. W. Hell, “Far-field optical nanoscopy,” Science, vol. 316, pp. 1153–
1158, May 2007.
[103] C. Dais, G. Mussler, T. Fromherz, E. Muller, H. H. Solak, and D.
Grutzmacher, “SiGe quantum dot crystals with periods down to 35 nm,”
Nanotechnology, vol. 26, no. 25, pp. 1–6, Jun. 2015.
[104] W. C. Chan, D. J. Maxwell, X. Gao, R. E. Bailey, M. Han, and S. Nie,
“Luminescent quantum dots for multiplexed biological detection and
imaging,” Curr. Opinion Biotechnol., vol. 13, pp. 40–46, 2002.
[105] X. Wu, H. Liu, J. Liu, K. N. Haley, J. A. Treadway, J. P. Larson, N. Ge, F.
Peale, and M. P. Bruchez, “Immunofluorescent labeling of cancer marker
Her2 and other cellular targets with semiconductor quantum dots,” Nature
Biotechnol., vol. 21, pp. 41–46, 2003.
[106] S. Pathak, S.-K. Choi, N. Arnheim, and M. E. Thompson, “Hydroxylated
quantum dots as luminescent probes for in situ hybridization,” J. Amer.
Chem. Soc., vol. 123, pp. 4103–4104, 2001.
[107] J. Farlow, D. Seo, K. E. Broaders, M. J. Taylor, Z. J. Gartner, and
Y.-w. Jun, “Formation of targeted monovalent quantum dots by steric
exclusion,” Nature Methods, vol. 10, pp. 1203–1205, 2013.
[108] A. Mohs, M. Mancini, J. Provenzale, C. Saba, K. Cornell, E. Howerth,
and S. Nie, “An integrated widefield imaging and spectroscopy system
for contrast-enhanced, image-guided resection of tumors,” IEEE Trans.
Biomed. Eng., vol. 62, no. 5, pp. 1416–1424, May 2015.
[109] I. LeGrice, P. Hunter, and B. Smaill, “Laminar structure of the heart: A
mathematical model,” Amer. J. Physiol.-Heart Circul. Physiol., vol. 272,
no. 5, pp. 2466–2476, 1997.
[110] I. LeGrice, P. Hunter, A. Young, and B. Smaill, “The architecture of the
heart: A data–based model,” Philosoph. Trans. Roy. Soc. London. Ser. A,
Math., Phys. Eng. Sci., vol. 359, pp. 1217–1232, 2001.
[111] C.-H. Luo and Y. Rudy, “A dynamic model of the cardiac
ventricular action potential. I. Simulations of ionic currents and
concentration changes,” Circulation Res., vol. 74, pp. 1071–1096, 1994.
[112] A. Dani and B. Huang, “New resolving power for light microscopy:
applications to neurobiology,” Curr. Opinion Neurobiol., vol. 20, pp.
648–652, 2010.
[113] D. A. Feinberg, S. Moeller, S. M. Smith, E. Auerbach, S. Ramanna, M.
F. Glasser, K. L. Miller, K. Ugurbil, and E. Yacoub, “Multiplexed echo
planar imaging for sub-second whole brain FMRI and fast diffusion
imaging,” PLoS One, vol. 5, no. 12, p. e15710, 2010.
[114] A. Aji, F. Wang, and J. H. Saltz, “Towards building a high performance
spatial query system for large scale medical imaging data,” in Proc. 20th
Int. Conf. Adv. Geograph. Inf. Syst., 2012, pp. 309–318.
[115] O. Sporns, G. Tononi, and R. Kotter, “The human connectome: A struc-
tural description of the human brain,” PLoS Comput. Biol., vol. 1, pp.
245–251, Sep. 2005.
[116] P. Kochunov, N. Jahanshad, D. Marcus, A. Winkler, E. Sprooten, T. E.
Nichols, S. N. Wright, L. E. Hong, B. Patel, T. Behrens, S. Jbabdi, J.
Andersson, C. Lenglet, E. Yacoub, S. Moeller, E. Auerbach, K. Ugurbil,
S. N. Sotiropoulos, R. M. Brouwer, B. Landman, H. Lemaitre, A. den
Braber, M. P. Zwiers, S. Ritchie, K. van Hulzen, L. Almasy, J. Curran,
G. I. Dezubicaray, R. Duggirala, P. Fox, N. G. Martin, K. L. McMahon,
B. Mitchell, R. L. Olvera, C. Peterson, J. Starr, J. Sussmann, J. Wardlaw,
M. Wright, D. I. Boomsma, R. Kahn, E. J. C. de Geus, D. E. Williamson,
A. Hariri, D. van ’t Ent, M. E. Bastin, A. McIntosh, I. J. Deary, H.
E. Hulshoffpol, J. Blangero, P. M. Thompson, D. C. Glahn, and D. C.
van Essen, “Heritability of fractional anisotropy in human white matter:
A comparison of human connectome project and ENIGMA-DTI data,”
NeuroImage, vol. 111, pp. 300–311, May 2015.
[117] J. M. Perkel, “Life Science Technologies: This is your brain: Mapping
the connectome,” Science, vol. 339, pp. 350–352, 2013.
[118] J. C. Lambert, S. Heath, G. Even, D. Campion, K. Sleegers, M. Hiltunen,
O. Combarros, D. Zelenika, M. J. Bullido, B. Tavernier, L. Letenneur,
K. Bettens, C. Berr, F. Pasquier, N. Fievet, P. Barberger-Gateau, S. En-
gelborghs, P. De Deyn, I. Mateo, A. Franck, S. Helisalmi, E. Porcellini,
O. Hanon, M. M. de Pancorbo, C. Lendon, C. Dufouil, C. Jaillard, T.
Leveillard, V. Alvarez, P. Bosco, M. Mancuso, F. Panza, B. Nacmias, P.
Bossu, P. Piccardi, G. Annoni, D. Seripa, D. Galimberti, D. Hannequin,
F. Licastro, H. Soininen, K. Ritchie, H. Blanche, J. F. Dartigues, C.
Tzourio, I. Gut, C. Van Broeckhoven, A. Alperovitch, M. Lathrop, P.
Amouyel, “Genome-wide association study identifies variants at CLU
and CR1 associated with Alzheimer’s disease,” Nature Genetics, vol. 41,
pp. 1094–1099, Oct. 2009.
[119] H. Stefansson, R. A. Ophoff, S. Steinberg, O. A. Andreassen, S. Cichon,
D. Rujescu, T. Werge, O. P. H. Pietilainen, O. Mors, P. B. Mortensen,
E. Sigurdsson, O. Gustafsson, M. Nyegaard, A. Tuulio-Henriksson, A.
Ingason, T. Hansen, J. Suvisaari, J. Lonnqvist, T. Paunio, A. D. Bor-
glum, A. Hartmann, A. Fink-Jensen, M. Nordentoft, D. Hougaard, B.
Norgaard-Pedersen, Y. Bottcher, J. Olesen, R. Breuer, H. J. Moller, I.
Giegling, H. B. Rasmussen, S. Timm, M. Mattheisen, I. Bitter, J. M.
Rethelyi, B. B. Magnusdottir, T. Sigmundsson, P. Olason, G. Mason, J.
R. Gulcher, M. Haraldsson, R. Fossdal, T. E. Thorgeirsson, U. Thorsteins-
dottir, M. Ruggeri, S. Tosato, B. Franke, E. Strengman, L. A. Kiemeney,
I. Melle, S. Djurovic, L. Abramova, V. Kaleda, J. Sanjuan, R. de Frutos,
E. Bramon, E. Vassos, G. Fraser, U. Ettinger, M. Picchioni, N. Walker,
T. Toulopoulou, A. C. Need, D. Ge, J. L. Yoon, K. V. Shianna, N. B.
Freimer, R. M. Cantor, R. Murray, A. Kong, V. Golimbet, A. Carracedo,
C. Arango, J. Costas, E. G. Jonsson, L. Terenius, I. Agartz, H. Petursson,
M. M. Nothen, M. Rietschel, P. M. Matthews, P. Muglia, L. Peltonen, D.
St Clair, D. B. Goldstein, K. Stefansson, and D. A. Collier, “Common
variants conferring risk of schizophrenia,” Nature, vol. 460, no. 7256,
pp. 744–747, Aug. 2009.
[120] N. Jahanshad, P. Rajagopalan, X. Hua, D. P. Hibar, T. M. Nir, A. W. Toga,
C. R. Jack, A. J. Saykin, R. C. Green, M. W. Weiner, S. E. Medland,
G. W. Montgomery, N. K. Hansell, K. L. McMahon, G. I. de Zubicaray,
N. G. Martin, M. J. Wright, P. M. Thompson, “Genome-wide scan of
healthy human connectome discovers SPON1 gene variant influencing
dementia severity,” Proc. Nat. Acad. Sci. United States Amer., vol. 110,
pp. 4768–4773, Mar. 2013.
[121] Editorial, “Rethinking the brain,” Nature, vol. 519, p. 389, Mar.
2015.
[122] F. A. C. Azevedo, L. R. B. Carvalho, L. T. Grinberg, J. M. Farfel, R. E.
L. Ferretti, R. E. P. Leite, W. J. Filho, R. Lent, and S. Herculano-Houzel,
“Equal numbers of neuronal and nonneuronal cells make the human brain
an isometrically scaled–up primate brain,” J. Compara. Neurolog., vol.
513, no. 5, pp. 532–541, 2009.
[123] Editorial, “In praise of soft science,” Nature, vol. 435, p. 1003, Jun. 2005.
[124] D. Lazer, R. Kennedy, G. King, and A. Vespignani, “The Parable of
Google Flu: Traps in big data analysis,” Science, vol. 343, pp. 1203–
1205, Mar 2014.
[125] K. Schramm, C. Marzi, C. Schurmann, M. Carstensen, E. Reinmaa, R.
Biffar, G. Eckstein, C. Gieger, H.-J. Grabe, and G. Homuth, “Mapping
the genetic architecture of gene regulation in whole blood,” PLoS One,
vol. 9, pp. 1–13, 2014.
[126] H. J. Murff, F. FitzHenry, M. E. Matheny, N. Gentry, K. L. Kotter, K.
Crimin, R. S. Dittus, A. K. Rosen, P. L. Elkin, and S. H. Brown, “Auto-
mated identification of postoperative complications within an electronic
medical record using natural language processing,” JAMA, vol. 306, pp.
848–855, 2011.
[127] Á. Skow, I. Douglas, and L. Smeeth, “The association between
Parkinson’s disease and anti-epilepsy drug carbamazepine: A case–control
study using the UK general practice research database,” Brit. J. Clin.
Pharmacol., vol. 76, pp. 816–822, 2013.
[128] The 1000 Genomes Project Consortium, “A map of human genome
variation from population-scale sequencing,” Nature, vol. 467, pp.
1061–1073, 2010.
[129] L. Clifton, D. A. Clifton, M. A. Pimentel, P. J. Watkinson, and L.
Tarassenko, “Predictive monitoring of mobile patients by combining
clinical observations with data from wearable sensors,” IEEE J. Biomed.
Health Inform., vol. 18, no. 3, pp. 722–730, May 2014.
[130] J. L. Nielson, C. F. Guandique, A. W. Liu, D. A. Burke, A. T. Lash,
R. Moseanko, S. Hawbecker, S. C. Strand, S. Zdunowski, K. A. Irvine,
J. H. Brock, Y. S. Nout-Lomas, J. C. Gensel, K. D. Anderson, M. R.
Segal, E. S. Rosenzweig, D. S. Magnuson, S. R. Whittemore, D. M.
McTigue, P. G. Popovich, A. G. Rabchevsky, S. W. Scheff, O. Steward,
G. Courtine, V. R. Edgerton, M. H. Tuszynski, M. S. Beattie, J. C. Bres-
nahan, and A. R. Ferguson, “Development of a database for translational
spinal cord injury research,” J. Neurotrauma, vol. 31, pp. 1789–1799,
Nov. 2014.
[131] J. E. Anderson and D. C. Chang, “Using electronic health records for
surgical quality improvement in the era of big data,” JAMA Surg., vol.
150, pp. 24–29, Jan. 2015.
[132] B. B. Biswal, M. Mennes, X.-N. Zuo, S. Gohel, C. Kelly, S. M. Smith, C.
F. Beckmann, J. S. Adelstein, R. L. Buckner, and S. Colcombe, “Toward
discovery science of human brain function,” Proc. Nat. Acad. Sci., vol.
107, pp. 4734–4739, 2010.
[133] A. Mikhno, F. Zanderigo, R. T. Ogden, J. J. Mann, E. D. Angelini,
A. F. Laine, and R. V. Parsey, “Toward non-invasive quantification of
brain radioligand binding by combining electronic health records and
dynamic PET imaging data,” IEEE J. Biomed. Health Informat., 2015,
to be published.
[134] A. D. Kramer, J. E. Guillory, and J. T. Hancock, “Experimental evidence
of massive-scale emotional contagion through social networks,” Proc.
Nat. Acad. Sci., vol. 111, pp. 8788–8790, 2014.
[135] M. Hilario and A. Kalousis, “Approaches to dimensionality reduction in
proteomic biomarker studies,” Briefings Bioinformat., vol. 9, pp. 102–
118, 2008.
[136] G.-Z. Yang, J. Andreu-Perez, X. Hu, and S. Thiemjarus, “Multi-sensor
fusion,” in Body Sensor Networks. New York, NY, USA: Springer, 2014,
pp. 301–354.
[137] E. A. Mohammed, B. H. Far, and C. Naugler, “Applications of the
MapReduce programming framework to clinical big data analysis: Current
landscape and future trends,” BioData Mining, vol. 7, no. 22, pp.
1–23, 2014.
[138] G. E. Hinton, “Learning multiple layers of representation,” Trends Cogn.
Sci., vol. 11, pp. 428–434, 2007.
[139] I. Arel, D. C. Rose, and T. P. Karnowski, “Deep machine learning—A
new frontier in artificial intelligence research [research frontier],” IEEE
Comput. Intell. Mag., vol. 5, no. 4, pp. 13–18, Nov. 2010.
[140] O. Bousquet and L. Bottou, “The tradeoffs of large scale learning,” in
Advances in Neural Information Processing System. Cambridge, MA,
USA: MIT Press, 2008, pp. 161–168.
[141] C. C. Aggarwal, Data Streams: Models and Algorithms, vol. 31. New
York, NY, USA: Springer, 2007.
[142] J. Andreu and P. Angelov, “Real-time human activity recognition from
wireless sensors using evolving fuzzy systems,” in Proc. IEEE Int. Conf.
Fuzzy Syst., 2010, pp. 1–8.
[143] R. Tibshirani, “Regression shrinkage and selection via the lasso,” J. Roy.
Statist. Soc. Ser. B, Methodological, vol. 58, pp. 267–288, 1996.
[144] V. R. Carvalho and W. W. Cohen, “Single-pass online learning: Per-
formance, voting schemes and online feature selection,” in Proc. ACM
SIGKDD Int. Conf. Knowl. Discovery Data Mining, 2006, pp. 548–553.
[145] R. Wainwright, F. Donck, M. Fertik, M. Rake, S. C. Savage, and J. H.
Clippinger, “Personal data: The ‘new oil’ of the 21st century,” presented
at the World Economic Forum Europe Central Asia, Vienna, Austria,
2011.
[146] C. C. Y. Poon, Y. T. Zhang, and S. D. Bao, “A novel biometrics method
to secure wireless body area sensor networks for telemedicine and m-
health,” IEEE Commun. Mag., vol. 44, no. 4, pp. 73–81, Apr. 2006.
Authors’ photographs and biographies not available at the time of publication.