Presentation delivered to the Chicago Technology For Value-Based Healthcare Meetup (https://www.meetup.com/Chicago-Technology-For-Value-Based-Healthcare-Meetup/)
"Big Data" is big business, but what does it really mean? How will big data impact industries and consumers? This slide deck goes through some of the high level details of the market and how it is revolutionizing the world.
Workshop with Joe Caserta, President of Caserta Concepts, at Data Summit 2015 in NYC.
Data science, the ability to sift through massive amounts of data to discover hidden patterns and predict future trends and actions, may be considered the "sexiest" job of the 21st century, but it requires an understanding of many elements of data analytics. This workshop introduced basic concepts, such as SQL and NoSQL, MapReduce, Hadoop, data mining, machine learning, and data visualization.
For notes and exercises from this workshop, click here: https://github.com/Caserta-Concepts/ds-workshop.
For more information, visit our website at www.casertaconcepts.com
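The MapReduce model listed among the workshop topics can be sketched in a few lines of plain Python. This is an illustrative word count only, not taken from the workshop materials; the function names are made up for the example:

```python
from collections import defaultdict

def map_phase(documents):
    # Map step: emit a (word, 1) pair for every word in every document
    for doc in documents:
        for word in doc.lower().split():
            yield word, 1

def shuffle(pairs):
    # Shuffle step: group intermediate values by key
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reduce step: sum the counts for each word
    return {word: sum(counts) for word, counts in grouped.items()}

docs = ["big data is big", "data science"]
counts = reduce_phase(shuffle(map_phase(docs)))
# counts["big"] == 2, counts["data"] == 2
```

In a real Hadoop job the shuffle is handled by the framework and the map and reduce steps run distributed across a cluster, but the contract is the same as in this toy version.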
Fortune Teller API - Doing Data Science with Apache Spark, by Bas Geerdink
This presentation from the Endpoint 2015 conference gives an overview of a short data science project: predicting the future happiness of a person, as if he or she walks into a circus tent! First, the domain problem is analyzed. Then, the data is gathered and analyzed. Finally, a linear regression model is created and the app is published in the form of a REST API. The demo uses Apache Spark and Zeppelin; the code can be found on GitHub: https://github.com/geerdink/FortuneTellerApi
These slides use concepts from my (Jeff Funk) course entitled "Analyzing Hi-Tech Opportunities" to analyze how Big Data is becoming economically feasible for health care. These slides describe how the costs of sensors, data processing, data storage and data analysis are falling, how new and better forms of storage and algorithms are being implemented, and what this means for sustainable health care. These changes are enabling a move towards personalized health care.
Slides of my presentation at 9th Amirkabir Linux & Open-source Softwares Festival, about Big Data Computing Platforms and the rise of the so-called "Fast Data" phenomenon, and the architectures and state-of-the-art platforms for dealing with them.
It is almost impossible to escape the topic of Data Science. While the core of Data Science has remained the same over the last decade, its emergence to the forefront is spurred by both the availability of new data types and a true realization of the value that it delivers. In this session, we will provide an overview of data science and the different classes of machine learning algorithms, and deliver an end-to-end demonstration of performing Machine Learning Using Hadoop. Audience: Developers, Data Scientists, Architects and System Engineers.
Recording: https://hortonworks.webex.com/hortonworks/lsr.php?RCID=4175a7421d00257f33df146f50c41af8
Gain New Insights by Analyzing Machine Logs using Machine Data Analytics and BigInsights.
Half of Fortune 500 companies experience more than 80 hours of system down time annually. Spread evenly over a year, that amounts to approximately 13 minutes every day. As a consumer, the thought of online bank operations being inaccessible so frequently is disturbing. As a business owner, when systems go down, all processes come to a stop. Work in progress is destroyed, and failure to meet SLAs and contractual obligations can result in expensive fees, adverse publicity, and loss of current and potential future customers. Ultimately, the inability to provide a reliable and stable system results in financial loss. While the failure of these systems is inevitable, the ability to predict failures in a timely manner and intercept them before they occur is now a requirement.
A possible solution to the problem can be found in the huge volumes of diagnostic big data generated at the hardware, firmware, middleware, application, storage and management layers indicating failures or errors. Machine analysis and understanding of this data is becoming an important part of debugging, performance analysis, root cause analysis and business analysis. In addition to preventing outages, machine data analysis can also provide insights for fraud detection, customer retention and other important use cases.
In the past decade a number of technologies have revolutionized the way we do analytics in banking. In this talk we would like to summarize this journey from classical statistical offline modeling to the latest real-time streaming predictive analytical techniques.
In particular, we will look at Hadoop and how this distributed computing paradigm has evolved with the advent of in-memory computing. We will introduce Spark, an engine for large-scale data processing optimized for in-memory computing.
Finally, we will describe how to make data science actionable and how to overcome some of the limitations of current batch processing with streaming analytics.
Big Data As a Service - Sethuonline.com | Sathyabama University Chennai, by Sethuraman R
An Efficient Framework for Data As A Service in Hadoop EcoSystem.
R.Sethuraman M.E,(PhD).,
Assistant Professor,
Faculty of Computing,
Dept of Computer Science Engineering,
Sathyabama University
http://Sethuonline.com
Big Data Analysis Patterns - TriHUG 6/27/2013, by boorad
Big Data Analysis Patterns: Tying real world use cases to strategies for analysis using big data technologies and tools.
Big data is ushering in a new era for analytics with large scale data and relatively simple algorithms driving results rather than relying on complex models that use sample data. When you are ready to extract benefits from your data, how do you decide what approach, what algorithm, what tool to use? The answer is simpler than you think.
This session tackles big data analysis with a practical description of strategies for several classes of application types, identified concretely with use cases. Topics include new approaches to search and recommendation using scalable technologies such as Hadoop, Mahout, Storm, Solr, & Titan.
Big data analytics is the process of examining large data sets containing a variety of data types (i.e., big data) to uncover hidden patterns, unknown correlations, market trends, customer preferences and other useful business information. The analytical findings can lead to more effective marketing, new revenue opportunities, better customer service, improved operational efficiency, competitive advantages over rival organizations and other business benefits. Enterprises are increasingly looking for actionable insights in their data. Many big data projects originate from the need to answer specific business questions. With the right big data analytics platforms in place, an enterprise can boost sales, increase efficiency, and improve operations, customer service and risk management. Notably, the business area getting the most attention relates to increasing efficiencies and optimizing operations. By using big data analytics you can extract only the relevant information from terabytes, petabytes and exabytes, and analyse it to transform your business decisions for the future. Becoming proactive with big data analytics isn't a one-time endeavour; it is more of a culture change – a new way of gaining ground.
Keywords: business, analytics, exabytes, efficiency, data sets
Big Data HPC Convergence and a bunch of other things, by Geoffrey Fox
This talk supports the Ph.D. in Computational & Data Enabled Science & Engineering at Jackson State University. It describes related educational activities at Indiana University, the Big Data phenomenon, jobs, and HPC and Big Data computations. It then describes how HPC and Big Data can be converged into a single theme.
Hadoop is a cluster computing framework.
Hadoop tools empower more developers and more organizations to leverage Hadoop for big data management. There's been a growing demand for Hadoop tools that can make Hadoop's vast processing power more accessible. I'm going to present a brief explanation of the various applications and tools that are associated with Hadoop. I will also present a project on how some of these tools were used to analyze the percentage of brain-injured persons in New England from a December 2010 survey, to determine whether brain transplant was an option to solve brain problems in the nation.
Applying Noisy Knowledge Graphs to Real Problems, DataWorks Summit
Knowledge graphs (KGs) have recently emerged as a powerful way to represent knowledge in multiple communities, including data mining, natural language processing and machine learning. Large-scale KGs like Wikidata and DBpedia are openly available, while in industry, the Google Knowledge Graph is a good example of proprietary knowledge that continues to fuel impressive advances in Google's semantic search capabilities. Yet, both crowdsourced and automatically constructed KGs suffer from noise, both during KG construction and during search and inference. In this talk, I will discuss how to build and use such knowledge graphs effectively, despite the noise and sparsity of labeled data, to solve real-world social problems such as providing insights in disaster situations, and helping law enforcement fight human trafficking. I will conclude by providing insight on the lessons learned, and the applicability of research techniques to industrial problems. The talk will be designed to appeal both to business and technical leaders.
This presentation contains a broad introduction to big data and its technologies.
Big data is a term that describes the large volume of data – both structured and unstructured – that inundates a business on a day-to-day basis.
Big Data is a phrase used to mean a massive volume of both structured and unstructured data that is so large it is difficult to process using traditional database and software techniques. In most enterprise scenarios the volume of data is too big or it moves too fast or it exceeds current processing capacity.
An overview of big data in clinical research. Discussion of big data related to real world evidence (RWE), wearable sensor data (IoT), and clinical genomics. Introduces the use of map-reduce infrastructure for big data in biomedicine.
What exactly is big data? The definition of big data is data that contains greater variety, arriving in increasing volumes and with more velocity. This is also known as the three Vs. Put simply, big data is larger, more complex data sets, especially from new data sources.
This presentation gives an insight into what big data and data analytics are, the difference between big data and data science, and also salary trends in big data analytics.
Agile Big Data Analytics Development: An Architecture-Centric Approach, SoftServe
Presented at The Hawaii International Conference on System Sciences by Hong-Mei Chen and Rick Kazman (University of Hawaii), Serge Haziyev (SoftServe).
Predictive Analytics: Context and Use Cases
Historical context for successful implementation of predictive analytic techniques and examples of implementation of successful use cases.
Measuring, Mismeasuring, and Remeasuring - Creating Meaningful Key Performanc..., by Dan Wellisch
Here is our September 2019 meeting presentation to the Chicago Technology For Value-Based Healthcare Group (https://www.meetup.com/Chicago-Technology-For-Value-Based-Healthcare-Meetup/) on meaningful KPIs in the hospital setting.
The Role Of Community-Based Organizations in Achieving Population Health Goals, by Dan Wellisch
Marc Rosen discusses how the YMCA participates in keeping the population healthy. He presented to our group, found here: https://www.meetup.com/Chicago-Technology-For-Value-Based-Healthcare-Meetup/
At the Chicago Technology For Value-Based Healthcare October 2018 meetup (https://www.meetup.com/Chicago-Technology-For-Value-Based-Healthcare-Meetup), Omar Husain cuts through healthcare data to show how providers can save their bottom lines.
US Healthcare Reform Landscape - Addendum to June 2018 Presentation to the Ch..., by Dan Wellisch
This is an addendum to the June 2018 presentation (to the Chicago Technology For Value-Based Healthcare Meetup, https://www.meetup.com/Chicago-Technology-For-Value-Based-Healthcare-Meetup/) containing interesting info about what may replace the Affordable Care Act.
Payer Analytics In A Shifting Healthcare Landscape - June Presentation To Chi..., by Dan Wellisch
This is the June 2018 presentation to the Chicago Technology For Value-Based Healthcare Meetup: https://www.meetup.com/Chicago-Technology-For-Value-Based-Healthcare-Meetup/
White Paper distributed at our May 2018 meeting of the Chicago Technology For Value-Based Healthcare Meetup Group - https://www.meetup.com/Chicago-Technology-For-Value-Based-Healthcare-Meetup/
Chronic Care Management - Implemented By TimeDoc - May 2018, by Dan Wellisch
This is May's presentation of the Chicago Technology For Value-Based Healthcare Meetup - https://www.meetup.com/Chicago-Technology-For-Value-Based-Healthcare-Meetup/
Managing HIPAA Business Associate Relationships - April 24, 2018, by Dan Wellisch
This is the April presentation of the Chicago Technology for Value-Based Healthcare Meetup Group - https://www.meetup.com/Chicago-Technology-For-Value-Based-Healthcare-Meetup/
Using Models For Analytically-Driven Cultural Transformation, by Dan Wellisch
Jason Cooper delivered a powerful presentation to our meetup, Chicago Technology For Value-Based Healthcare (https://www.meetup.com/Chicago-Technology-For-Value-Based-Healthcare-Meetup/), on March 20, 2018.
Analyzing Breast Cancer Dataset with Azure Machine Learning Studio, by Dan Wellisch
This presentation was given on January 23, 2018 by Frank Mendoza of Catalytics, a member of https://www.meetup.com/Chicago-Technology-For-Value-Based-Healthcare-Meetup/
Simple Linear Regression: Step-By-Step, by Dan Wellisch
This presentation was made on 9/26/2017 to our meetup group, found here: https://www.meetup.com/Chicago-Technology-For-Value-Based-Healthcare-Meetup/. Our group is focused on technology applied to healthcare in order to create better healthcare.
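The step-by-step approach behind simple linear regression can be sketched with the closed-form ordinary least squares formulas. This is a minimal pure-Python illustration; the data points below are made up for the example and are not from the presentation:

```python
def simple_linear_regression(xs, ys):
    # Closed-form OLS for one predictor:
    #   slope = cov(x, y) / var(x)
    #   intercept = mean(y) - slope * mean(x)
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    slope = cov_xy / var_x
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical data lying exactly on y = 2x + 1
xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]
slope, intercept = simple_linear_regression(xs, ys)
# slope == 2.0, intercept == 1.0
```

With real, noisy data the fitted line minimizes the sum of squared residuals rather than passing through every point, but the computation is the same.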
Mike Ghen gave this presentation to the Chicago Technology For Value-Based Healthcare Meetup (https://www.meetup.com/Chicago-Technology-For-Value-Based-Healthcare-Meetup/). Mike since has moved to Philadelphia where he started the Philadelphia Technology For Value-Based Healthcare (https://www.meetup.com/Philadelphia-Technology-For-Value-Based-Healthcare-Meetup/). The Chicago and Philadelphia chapters share a website at techforvaluebasedhealthcare.org
What Are All Payer Claims Databases (APCDs) And What Could They Be Used For?, by Dan Wellisch
Dan Wellisch gave this presentation to the Chicago Technology For Value-Based Healthcare Meetup (https://www.meetup.com/Chicago-Technology-For-Value-Based-Healthcare-Meetup/)
Presentation was given by Jim Anfield to Chicago Technology For Value-Based HealthCare (https://www.meetup.com/Chicago-Technology-For-Value-Based-Healthcare-Meetup/).
Using Predictive Analytics For Care Management And Coordination, by Dan Wellisch
This presentation was given by Dennis O'Donnell for the Chicago Technology For Value-Based Healthcare Meetup (https://www.meetup.com/Chicago-Technology-For-Value-Based-Healthcare-Meetup/)
Dan Wellisch gave this presentation to the Chicago Technology For Value-Based Healthcare Meetup at https://www.meetup.com/Chicago-Technology-For-Value-Based-Healthcare-Meetup/
How Many Patients Should a Case Series Have in Comparison to Case Reports? (PDF), by pubrica101
Pubrica’s team of researchers and writers creates scientific and medical research articles, which can be important resources for authors and practitioners. Pubrica medical writers assist you in creating and revising the introduction by alerting the reader to gaps in the chosen study subject. Our professionals understand the order in which the hypothesis topic is followed by the broad subject, the issue, and the backdrop.
https://pubrica.com/academy/case-study-or-series/how-many-patients-does-case-series-should-have-in-comparison-to-case-reports/
One of the most developed cities of India, Chennai is the capital of Tamil Nadu, and many people from different parts of India come here to earn their bread and butter. Being a metropolitan city, it is filled with towering buildings and beaches, but the sad part, as with almost every Indian city
Empowering ACOs: Leveraging Quality Management Tools for MIPS and Beyond, Health Catalyst
Join us as we delve into the crucial realm of quality reporting for MSSP (Medicare Shared Savings Program) Accountable Care Organizations (ACOs).
In this session, we will explore how a robust quality management solution can empower your organization to meet regulatory requirements and improve processes for MIPS reporting and internal quality programs. Learn how our MeasureAble application enables compliance and fosters continuous improvement.
Navigating Challenges: Mental Health, Legislation, and the Prison System in B..., by Guillermo Rivera
This conference will delve into the intricate intersections between mental health, legal frameworks, and the prison system in Bolivia. It aims to provide a comprehensive overview of the current challenges faced by mental health professionals working within the legislative and correctional landscapes. Topics of discussion will include the prevalence and impact of mental health issues among the incarcerated population, the effectiveness of existing mental health policies and legislation, and potential reforms to enhance the mental health support system within prisons.
Deep Leg Vein Thrombosis (DVT): Meaning, Causes, Symptoms, Treatment, and Mor..., The Lifesciences Magazine
Deep Leg Vein Thrombosis occurs when a blood clot forms in one or more of the deep veins in the legs. These clots can impede blood flow, leading to severe complications.
India Clinical Trials Market: Industry Size and Growth Trends [2030] Analyzed..., by Kumar Satyam
According to TechSci Research report, "India Clinical Trials Market- By Region, Competition, Forecast & Opportunities, 2030F," the India Clinical Trials Market was valued at USD 2.05 billion in 2024 and is projected to grow at a compound annual growth rate (CAGR) of 8.64% through 2030. The market is driven by a variety of factors, making India an attractive destination for pharmaceutical companies and researchers. India's vast and diverse patient population, cost-effective operational environment, and a large pool of skilled medical professionals contribute significantly to the market's growth. Additionally, increasing government support in streamlining regulations and the growing prevalence of lifestyle diseases further propel the clinical trials market.
Growing Prevalence of Lifestyle Diseases
The rising incidence of lifestyle diseases such as diabetes, cardiovascular diseases, and cancer is a major trend driving the clinical trials market in India. These conditions necessitate the development and testing of new treatment methods, creating a robust demand for clinical trials. The increasing burden of these diseases highlights the need for innovative therapies and underscores the importance of India as a key player in global clinical research.
ICH Guidelines for Pharmacovigilance (PDF), by NEHA GUPTA
The "ICH Guidelines for Pharmacovigilance" PDF provides a comprehensive overview of the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) guidelines related to pharmacovigilance. These guidelines aim to ensure that drugs are safe and effective for patients by monitoring and assessing adverse effects, ensuring proper reporting systems, and improving risk management practices. The document is essential for professionals in the pharmaceutical industry, regulatory authorities, and healthcare providers, offering detailed procedures and standards for pharmacovigilance activities to enhance drug safety and protect public health.
Antibiotic Stewardship (PPTX), by Anushri Srivastava
Stewardship is the act of taking good care of something.
Antimicrobial stewardship is a coordinated program that promotes the appropriate use of antimicrobials (including antibiotics), improves patient outcomes, reduces microbial resistance, and decreases the spread of infections caused by multidrug-resistant organisms.
WHO launched the Global Antimicrobial Resistance and Use Surveillance System (GLASS) in 2015 to fill knowledge gaps and inform strategies at all levels.
According to apic.org,
Antimicrobial stewardship is a coordinated program that promotes the appropriate use of antimicrobials (including antibiotics), improves patient outcomes, reduces microbial resistance, and decreases the spread of infections caused by multidrug-resistant organisms.
According to pewtrusts.org,
Antibiotic stewardship refers to efforts in doctors’ offices, hospitals, long-term care facilities, and other health care settings to ensure that antibiotics are used only when necessary and appropriate.
According to WHO,
Antimicrobial stewardship is a systematic approach to educate and support health care professionals to follow evidence-based guidelines for prescribing and administering antimicrobials
In 1996, John McGowan and Dale Gerding first applied the term antimicrobial stewardship, where they suggested a causal association between antimicrobial agent use and resistance. They also focused on the urgency of large-scale controlled trials of antimicrobial-use regulation employing sophisticated epidemiologic methods, molecular typing, and precise resistance mechanism analysis.
Antimicrobial Stewardship(AMS) refers to the optimal selection, dosing, and duration of antimicrobial treatment resulting in the best clinical outcome with minimal side effects to the patients and minimal impact on subsequent resistance.
According to the 2019 report, in the US more than 2.8 million antibiotic-resistant infections occur each year, and more than 35,000 people die. In addition, it also mentioned that 223,900 cases of Clostridioides difficile occurred in 2017, of which 12,800 people died. The report did not include viruses or parasites.
VISION
Being proactive
Supporting optimal animal and human health
Exploring ways to reduce overall use of antimicrobials
Using the drugs that prevent and treat disease by killing microscopic organisms in a responsible way
GOAL
to prevent the generation and spread of antimicrobial resistance (AMR). Doing so will preserve the effectiveness of these drugs in animals and humans for years to come.
to preserve human and animal health and the effectiveness of antimicrobial medications.
to implement a multidisciplinary approach, assembling a stewardship team that includes an infectious disease physician, a clinical pharmacist with infectious diseases training, and an infection preventionist, in close collaboration with the staff of the clinical microbiology laboratory
to prevent antimicrobial overuse, misuse and abuse.
to minimize the development of antimicrobial resistance.
Defecation
Normal defecation begins with movement in the left colon, moving stool toward the anus. When stool reaches the rectum, the distention causes relaxation of the internal sphincter and an awareness of the need to defecate. At the time of defecation, the external sphincter relaxes, and abdominal muscles contract, increasing intrarectal pressure and forcing the stool out
The Valsalva maneuver exerts pressure to expel feces through a voluntary contraction of the abdominal muscles while maintaining forced expiration against a closed airway. Patients with cardiovascular disease, glaucoma, increased intracranial pressure, or a new surgical wound are at greater risk for cardiac dysrhythmias and elevated blood pressure with the Valsalva maneuver and need to avoid straining to pass stool.
Normal defecation is painless, resulting in passage of soft, formed stool
CONSTIPATION
Constipation is a symptom, not a disease. Improper diet, reduced fluid intake, lack of exercise, and certain medications can cause constipation. For example, patients receiving opiates for pain after surgery often require a stool softener or laxative to prevent constipation. The signs of constipation include infrequent bowel movements (fewer than every 3 days), difficulty passing stools, excessive straining, inability to defecate at will, and hard feces.
IMPACTION
Fecal impaction results from unrelieved constipation. It is a collection of hardened feces wedged in the rectum that a person cannot expel. In cases of severe impaction the mass extends up into the sigmoid colon.
DIARRHEA
Diarrhea is an increase in the number of stools and the passage of liquid, unformed feces. It is associated with disorders affecting digestion, absorption, and secretion in the GI tract. Intestinal contents pass through the small and large intestine too quickly to allow for the usual absorption of fluid and nutrients. Irritation within the colon results in increased mucus secretion. As a result, feces become watery, and the patient is unable to control the urge to defecate. An anal bag is normally safe and effective for long-term management of patients with fecal incontinence at home, in hospice, or in the hospital. Fecal incontinence is an expensive and potentially dangerous condition in terms of contamination and risk of skin ulceration.
HEMORRHOIDS
Hemorrhoids are dilated, engorged veins in the lining of the rectum. They are either external or internal.
FLATULENCE
As gas accumulates in the lumen of the intestines, the bowel wall stretches and distends (flatulence). It is a common cause of abdominal fullness, pain, and cramping. Normally intestinal gas escapes through the mouth (belching) or the anus (passing of flatus)
FECAL INCONTINENCE
Fecal incontinence is the inability to control passage of feces and gas from the anus. Incontinence harms a patient’s body image
PREPARATION AND GIVING OF LAXATIVES
ACCORDING TO POTTER AND PERRY,
An enema is the instillation of a solution into the rectum and sigmoid colon.
Using The Hadoop Ecosystem to Drive Healthcare Innovation
1. Using the Hadoop Ecosystem to
Drive Healthcare Innovation
Aly Sivji
April 25, 2017
2. About Me
• Aly Sivji
– Twitter: @CaiusSivjus
– Blog: http://alysivji.github.io
• Senior Analyst @ IBM Watson Health
– Value-Based Care: Planning Solutions
• Grad Student @ Northwestern University
– Medical Informatics
• Interests:
– Technology 🐍
– Data 📈
– Star Trek 🖖🖖
6. Overview
• Data Analytics / Data Science
– Retrospective versus Predictive
• Machine Learning
– Types of Algorithms
• Healthcare Analytics
7. Overview
• Apache Hadoop Ecosystem
– Big Data framework
– Distributed computation on commodity hardware
– Demo!
8. Road to Electronic Health Records
1920s – Modern record keeping begins
1960s – Dr. Larry Weed introduces problem-oriented medical records
1972 – Regenstrief Institute develops first EMR system
1980s-90s – Siloed adoption by departments & admin
1996 – HIPAA establishes national standards for electronic health records
2004 – President Bush calls for Computerized Health Records
9. 2009: EHRs Go Mainstream
• HITECH Act passed by President Obama
– $25.9 billion to expand Health IT (HIT) adoption
• Meaningful Use (MU) program
– Incentive payments for using HIT to
• Improve quality, safety, efficiency of care
• Engage patients
• Increase care coordination
– Goal: MU compliance => better outcomes
10. EHR Adoption: Doubled Since 2008
Office-based Physician Electronic Health Record Adoption (2005-2015)
Source: Office of the National Coordinator for Health Information Technology. 'Office-based Physician Electronic Health Record
Adoption,' Health IT Quick-Stat #50. dashboard.healthit.gov/quickstats/pages/physician-ehr-adoption-trends.php. Dec 2016.
11. Health Data Today
• Electronic Health Records
• Genomic Data ($1000 genome)
• Medical Internet of Things (mIoT)
• Wearable devices
• Bottom Line: Data is growing
Big Data = 'Bigger Data' in Healthcare (article)
12. Data Analytics
• Businesses collect lots of data
– IBM: 90% of world’s data created in last 2 years
• How can we find hidden patterns in the data
and make information actionable?
Data Science!
13. Types of Analytics
• Retrospective Analytics
– Summarizing historical activity / performance
– Limited scope for making future plans
• Better than nothing
14. Types of Analytics
• Predictive Analytics
– Finding patterns (correlations) between historical
environment and results
– Apply to current environment to make predictions
15. Predictive Analytics
"Once you have enough data, you start to see
patterns. You can then build a model of how
these data work. Once you build a model, you
can predict.”
Michael Wu
Chief Scientist, Lithium Technologies
17. Machine Learning (ML)
“Field of study that gives computers the ability
to learn without being explicitly programmed”
Arthur Samuel
Artificial Intelligence Pioneer
18. Machine Learning Algorithms
• A probabilistic framework to create models
used for predictions
• Predictive models are developed iteratively
• Models are refined until they converge
– i.e. output gets close to a specific value
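The "refine until convergence" loop above can be sketched with a toy gradient-descent fit. This is an illustration, not from the deck: the data, learning rate, and tolerance are all invented example values.

```python
# Iteratively refine a model (here, the slope w of y = w * x) until the
# update becomes smaller than a tolerance -- i.e. the output converges.

def fit_slope(xs, ys, lr=0.01, tol=1e-6, max_iter=10_000):
    """Refine w with gradient descent on mean squared error."""
    w = 0.0
    for _ in range(max_iter):
        # Gradient of mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        new_w = w - lr * grad
        if abs(new_w - w) < tol:   # converged: updates are negligible
            return new_w
        w = new_w
    return w

xs = [1, 2, 3, 4]
ys = [2, 4, 6, 8]           # true relationship: y = 2x
w = fit_slope(xs, ys)
print(round(w, 3))           # prints 2.0 -- the model converged on the slope
```

Each pass nudges the model a little closer to the data; training stops once successive outputs are effectively identical.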
19. Types of ML Algorithms
• Unsupervised Learning
– Group objects by similar characteristics
– Given inputs (X), find label for each observation
• Supervised Learning
– Given inputs (X) and output (Y)
– Find function f that maps X to Y
– Given new inputs (Xnew), predict value/label (Ynew)
20. Types of Supervised Learning
• Regression
– Try to predict a value (continuous variable)
• Classification
– Try to predict a label (discrete variable)
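Two hypothetical mini-examples (not from the deck; coefficients and thresholds are invented) contrast the two kinds of supervised output:

```python
# Regression predicts a continuous value; classification predicts a label.

def predict_los(age):
    """Regression sketch: predict length-of-stay in days (a number)."""
    return 1.5 + 0.05 * age          # made-up coefficients for illustration

def predict_readmission(prior_visits):
    """Classification sketch: predict a discrete label via a threshold."""
    return "high-risk" if prior_visits >= 3 else "low-risk"

print(predict_los(60))               # prints 4.5 -- a continuous value
print(predict_readmission(4))        # prints high-risk -- a discrete label
```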
21. Analytics in Healthcare
“Advanced analytics can be used to improve
medical outcomes, increase financial
performance, deepen relationships with
customers and patients, and drive new medical
innovations”
Jason Burke
Author of Health Analytics
23. Healthcare Challenges
• US system wastes $750 billion annually
Source: Washington Post (Sept 2012). Retrieved from https://www.washingtonpost.com/news/wonk/wp/2012/09/07/we-spend-
750-billion-on-unnecessary-health-care-two-charts-explain-why/
24. Healthcare Challenges
• Low quality
– To Err is Human Report:
• 44,000 - 98,000 deaths to preventable medical errors
– Rates poorly when compared to other countries
• Last in 2014 Commonwealth Fund survey on:
– Quality of care
– Access to doctors
– Equity
25. Solution: Big Data!
• Use data analytics and machine learning to
improve outcomes & lower costs
27. Good News
• Most of the analytical and software
capabilities needed to drive systemic changes
in healthcare are already available as:
– Commercial software
– Open Source solutions 🎉
• Hadoop ecosystem
28. Big Data
• Characteristics (4 V’s of Big Data)
– Volume
• Scale of data
– Variety
• Diversity of data (many sources)
– Velocity
• Speed of data
– Veracity
• Certainty of data
• 5th V: Value?
29. Types of Data
• Structured
– Highly organized information that fits neatly into a
relational database (columns and rows)
• Unstructured
– Has internal structure, but does not fit into a
traditional database (or spreadsheet)
– Most data is unstructured (>80%)
– Can use Extract-Transform-Load (ETL) Processing to
turn unstructured data into structured data
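A minimal ETL sketch of that idea (the pipe-delimited note format and field names below are invented for illustration):

```python
# Extract semi-structured text lines, Transform them by parsing fields,
# and Load them as structured rows fit for a relational table.

raw_notes = [
    "2017-04-25|pt:1001|dx:diabetes",
    "2017-04-25|pt:1002|dx:hypertension",
]

def etl(lines):
    rows = []
    for line in lines:                      # Extract
        date, pt, dx = line.split("|")      # Transform: parse raw fields
        rows.append({                       # Load: structured columns
            "date": date,
            "patient_id": int(pt.split(":")[1]),
            "diagnosis": dx.split(":")[1],
        })
    return rows

for row in etl(raw_notes):
    print(row)
```

Real ETL pipelines add validation, error handling, and a database load step, but the extract-parse-structure shape is the same.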
30. Apache Hadoop
• Set of open source software technology components that
form a scalable system we can use to analyze Big Data
• Main features:
– Distributed storage and processing
• Data is too big for a single computer
– Runs on commodity hardware
– Fault tolerant
• Hardware failures are common and handled automatically
– Runs in Java Virtual Machine (JVM) environment
31. Sample Hadoop Stack
Source: Soong, K. (Feb 2016). Big Data Specialization. Retrieved from http://ksoong.org/big-data
32. Core Hadoop Components
• Yet Another Resource Negotiator (YARN)
– “Operating System” for Hadoop
– Controls how resources are allocated to different
applications and execution engines across cluster
33. Core Hadoop Components
• Hadoop Distributed File System (HDFS)
– Highly scalable storage system
[Diagram: a single large data file]
34. Core Hadoop Components
• Hadoop Distributed File System (HDFS)
– Too big to fit on single machine => Partition
[Diagram: the file partitioned into blocks A, B, C, D]
35. Core Hadoop Components
• Hadoop Distributed File System (HDFS)
– Split across multiple machines
– Data is protected against hardware failure
[Diagram: blocks replicated across the cluster — Server 1 (A, B, C), Server 2 (A, D), Server 3 (A, C, D), Server 4 (B, C, D)]
36. Core Hadoop Components
• Hadoop Distributed File System (HDFS)
– Server goes down, we can still reconstruct data
[Diagram: Server 1 fails 🔥, but every block it held — A, B, C — still has a replica on a surviving server, so the data is reconstructed]
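The replication idea can be modeled in a few lines of plain Python. This is an analogy only, not real HDFS (which also handles block placement policies and re-replication); the placement table mirrors the slide's diagram.

```python
# Toy model of HDFS-style replication: each block lives on several
# servers, so losing one server loses no data.

placement = {          # block -> set of servers holding a replica
    "A": {1, 2, 3},
    "B": {1, 4},
    "C": {1, 3, 4},
    "D": {2, 3, 4},
}

def readable_blocks(alive_servers):
    """Blocks still reconstructible from the surviving servers."""
    return {b for b, servers in placement.items() if servers & alive_servers}

# Server 1 catches fire; every block still has a live replica elsewhere
print(sorted(readable_blocks({2, 3, 4})))   # prints ['A', 'B', 'C', 'D']
```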
37. Core Hadoop Components
• Execution Engine
– Used when running analytic applications
– Distributed data allows us to perform parallel
computations
– MapReduce execution engine comes bundled with the
Hadoop core distribution
– Can plug-in different components
• Tez, Storm, Spark, etc
39. MapReduce Example
Source: Zhang, X. (Jul 2013). A Simple Example to Demonstrate how does the
MapReduce work. Retrieved from http://xiaochongzhang.me/blog/?p=338
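The classic MapReduce illustration is word count. A single-machine sketch in plain Python of the three phases (real Hadoop distributes map and reduce tasks across the cluster and shuffles over the network):

```python
# Word count via map -> shuffle (group by key) -> reduce.
from collections import defaultdict

def map_phase(line):
    return [(word, 1) for word in line.split()]      # emit (key, 1) pairs

def shuffle(pairs):
    grouped = defaultdict(list)                      # group values by key
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    return {key: sum(values) for key, values in grouped.items()}

lines = ["big data big hadoop", "hadoop big"]
pairs = [pair for line in lines for pair in map_phase(line)]
print(reduce_phase(shuffle(pairs)))   # prints {'big': 3, 'data': 1, 'hadoop': 2}
```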
40. MapReduce Limitations
• Lot of read/writes
– I/O becomes bottleneck when performing analysis
• Machine Learning algorithms are iterative
– Many read and write cycles before convergence
– Slow runtime
• There must be a better way!
41. Apache Tez
• Optimizes workflow to limit number of writes
• Less I/O => faster execution
42. Apache Storm
• Execution engine for real-time streaming
applications
• Data is analyzed as it is generated BEFORE it is
stored
43. Apache Spark
• In-memory computational engine
• Read in data once, subsequent calculations
are done in-memory
[Chart: Logistic Regression Runtime]
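A rough single-machine analogy for "read once, compute many times" (this is not Spark itself; `functools.lru_cache` stands in for Spark keeping a dataset cached in memory):

```python
# Cache a dataset in memory so iterative passes avoid re-reading "disk".
from functools import lru_cache

DISK_READS = 0

@lru_cache(maxsize=None)
def load_dataset():
    """Pretend this is an expensive read of a large file from disk."""
    global DISK_READS
    DISK_READS += 1
    return (1.0, 2.0, 3.0, 4.0)

# Ten iterations of an "algorithm" all reuse the cached, in-memory data
for _ in range(10):
    total = sum(load_dataset())

print(DISK_READS)   # prints 1 -- the data was read once, as with Spark caching
```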
44. Other Apache Projects
• Apache Hive
– SQL interface to data stored in HDFS
– Analysts with SQL experience can use Hadoop
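To show the kind of query this enables, here is an analogy using sqlite3 from the Python standard library. Hive actually speaks its own SQL dialect (HiveQL) over files in HDFS, not SQLite; the table and columns below are invented for illustration.

```python
# An analyst with SQL experience writes an aggregate query over the data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visits (patient_id INTEGER, charges REAL)")
conn.executemany(
    "INSERT INTO visits VALUES (?, ?)",
    [(1001, 250.0), (1001, 100.0), (1002, 75.0)],
)

# Same shape of query an analyst would run through Hive against HDFS data
rows = conn.execute(
    "SELECT patient_id, SUM(charges) FROM visits "
    "GROUP BY patient_id ORDER BY patient_id"
).fetchall()
print(rows)   # prints [(1001, 350.0), (1002, 75.0)]
```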
47. Optimal Hadoop Workflow
• Depends on what you are trying to do
• Data Lake (HDFS)
– Storage repository that holds data in raw format
– Read into Spark to perform analysis
• Use Data Science and Machine Learning algorithms
• Demo will walkthrough this workflow
49. Dataset
• Texas Department of State Health Services
– Released State Inpatient / Outpatient data (link)
• Inpatient (IP) - 1999 to 2010
• Outpatient (OP) – Q4 2009 to 2010
– Data is de-identified and made available for free
– Tab-delimited text files (for each quarter)
• IP data – 450MB base table, 500MB charges
• OP data – 750MB base table, 700MB charges
50. Spark Background
• Java, Scala, Python, and R APIs (docs)
• Built around the concept of Resilient
Distributed Datasets (RDDs)
– Can perform MapReduce on RDD
OR
– Use the Spark DataFrame abstraction
*Recommended*
51. Spark DataFrame
• Distributed collection of rows and named
columns
– Think relational database or spreadsheet
– Akin to pandas DataFrame or R data.frame
# Displays the content of the DataFrame
df.show()
#
# +----+-------+
# | age| name|
# +----+-------+
# |null|Michael|
# | 30| Andy|
# | 19| Justin|
# +----+-------+
Before we get to what we’re talking about. I’ll talk about me.
Data has been making a huge difference in other industries
Chase uses machine learning algorithms to flag purchases that could be fraudulent. Last time this happened, I booked my flight using my American Airlines card and booked my hotel and conference on my United card. Chase didn’t know about the flight so it asked for my confirmation. Saves them money for having to pay for fraudulent purchases.
Amazon uses data mining to find products purchased together and makes suggestions to increase revenue. For example, Spark was created in Scala, and most people who learn Scala do so in order to use Spark in its native language; Amazon doesn’t know this directly, but it can use purchase data to figure it out.
Netflix’s recommendation system finds users who are similar to you and uses their ratings to make predictions for media for you to watch
Medical fraud detection could be made more robust, and similar algorithms could find unnecessary procedures (purchases that do not match my profile)
Data mining could suggest a medication that is always prescribed alongside another when an order is missing it
Recommendation system to find similar patients. Group them by the treatment prescribed, rate their outcomes and use that information to suggest optimal course of action
Why is this not widespread in healthcare?
People who work in healthcare know, healthcare is different.
We won’t really go into too many details why, but you can find out more at the links provided.
I will spend some time discussing how healthcare has changed and made it easier to facilitate a data revolution
What do we mean by data revolution?
Data is ubiquitous... We’ll explore data science in some depth to understand the basic principles of the field and get a grasp on how we can make our information actionable
Bee is Buzzword Bee! I’ll try to include him every time I use a buzzword
Next we’ll talk about how we can use the Hadoop ecosystem to analyze healthcare data
Is paved with good intentions ;)
1920s [1]
Healthcare professionals realized that documenting patient care benefited both providers and patients. Patient records established the details, complications and outcomes of patient care.
Once healthcare providers realized that they were better able to treat patients with a complete and accurate medical history, documentation became widely popular.
Health records were soon recognized as being critical to the safety and quality of the patient experience.
1960s [2]
Charting as we currently know it. First, a patient database is collected; then that information is used to start the diagnosis process. The database is very thorough and contains:
Family history
Prior encounter information
Lab results
Current health status
1972 [1, 2]
There were quite a few pilots of electronic record systems (through universities and large healthcare facilities), but this was the first major system to be developed. It did not attract many physicians.
1980s-90s [1, 2]
Computers made their way into hospitals, like they did in every other professional environment, but systems did not speak to each other
1996
HIPAA was passed and national standards for electronic health records were established
2004 [1, 3]
In his 2004 State of the Union address, President George W. Bush called for computerized health records and established the Office of the National Coordinator for Health Information Technology, which coordinates nationwide efforts to implement health IT and the electronic exchange of health information.
References
[1] http://www.rasmussen.edu/degrees/health-sciences/blog/health-information-management-history/
[2] http://www.nethealth.com/a-history-of-electronic-medical-records-infographic/
[3] https://en.wikipedia.org/wiki/Office_of_the_National_Coordinator_for_Health_Information_Technology
Meaningful Use provided incentive payments to healthcare providers who could demonstrate they used health information technology in a ‘meaningful way’ to improve quality, engage patients, increase care coordination.
Goal is that MU compliance will result in:
Better clinical outcomes
Improved population health outcomes
Increased transparency and efficiency
Empowered individuals
https://en.wikipedia.org/wiki/Health_Information_Technology_for_Economic_and_Clinical_Health_Act
https://www.healthit.gov/providers-professionals/meaningful-use-definition-objectives
Did it work? Well… it did increase EHR adoption
* EHR systems have a wealth of data and are collecting more each day
* Genomic sequencing costs less than $1,000; I’ve heard about a race to $100 as well
* Medical sensors are collecting information at a dizzying pace. One big application is patient sensors in post-acute care environments where patients are hooked up to machines collecting real-time data
* People are more concerned about their health than ever before and the consumer wearable industry is growing.
But we’re getting ahead of ourselves. I need to introduce the topic of data analytics
References
[1] https://datascience.berkeley.edu/about/what-is-data-science/
This leads nicely into the topic of Machine Learning
References
http://www.ibmbigdatahub.com/blog/how-does-machine-learning-work?cm_mmc=OSocial_Twitter-_-IBM+Analytics_Inbound+Marketing-_-WW_WW-_-B+Yelland+3-20-2017&cm_mmca1=000000VQ&cm_mmca2=10000779&
Analytics is suited to the specific challenges in healthcare
References
[1] http://www.pbs.org/newshour/rundown/new-peak-us-health-care-spending-10345-per-person/
[2] http://www.pgpf.org/chart-archive/0006_health-care-oecd
Healthcare analytics is broad, as we can see from this diagram. There are lots of areas where a little deliberate data science and machine learning can make a difference
Worth noting that most of the analytical capabilities needed to drive systemic changes in healthcare are already available in commercial software
So let’s start talking about Big Data. What is big data?
In healthcare, there is a lot of data… each genome is around 200GB of raw data.
Lots of different information… clinical, notes, lab information, demographic result data, patient generated data
Velocity data... Real time sensors monitoring patients
Veracity... How sure are we that the data we get is correct?
References
[1] http://www.ibmbigdatahub.com/infographic/extracting-business-value-4-vs-big-data
Execution engine is used to perform calculations on the underlying data
The MapReduce engine runs the map step on all nodes in the cluster to produce a set of intermediate output files. It then sorts these intermediate files and runs a reduce step to take the sorted intermediate files and aggregate the data to get a final result.
This process is scalable but relatively slow because of the need to write lots of intermediate files to disk and then read them again.
The key takeaway from this presentation: Use Spark to do all calculations