The document discusses the use of Fitbit devices in clinical trials. It notes that while Fitbit is not a medical device, it is widely used in medical research, and the number of clinical trials using Fitbit has been increasing each year. Fitbit appears in trials both as an intervention to increase patients' activity levels and as a way to monitor the activity of research participants. Examples are provided of studies exploring whether Fitbit can increase activity in obese children, post-surgery patients, and cancer patients.
How to implement digital medicine in the future
by Yoon Sup Choi, PhD
yoonsup.choi@gmail.com
Professor, SAHIST, Sungkyunkwan University
Director, Digital Healthcare Institute
Managing Partner, Digital Healthcare Partners
Proactive, preemptive insurance based on digital healthcare
A shift from passive, after-the-fact response to proactive, preemptive management
- Measuring subscriber data through digital healthcare
- Managing subscribers through data analysis: disease risk stratification, actuarial modeling
- Proactive intervention in disease management and treatment: care plans and incentives
This is the November 2017 version of the talk 'How Artificial Intelligence Innovates Medicine'.
'How Artificial Intelligence would Innovate the medicine of the future'
Yoon Sup Choi, PhD (Director/Founder, Digital Healthcare Institute)
yoonsup.choi@gmail.com
2016 SXSW Interactive Panel Submission:
Big Data and mobile health are transforming the healthcare landscape. The digital revolution is here and health tech heavyweights are ready to ride the wave. Take a look into the future of drug development and how mHealth technology, including wearables, sensors and apps, are uncovering new “digital biomarkers” and driving a patient-centric research model. The connected patient paired with tech straight out of the Matrix is altering the way we’re collecting and understanding our own data, helping people to better understand themselves today to proactively lead healthier lives tomorrow.
On June 9th, 2016, Fitabase had the pleasure of taking part in the annual Fitbit Captivate Summit, where founder and CEO Aaron Coleman was invited to share insights on using wearable sensors for health research. Aaron presented on the Fitabase platform and how it’s being used to understand participant activity, sleep, and overall health in over 150 research studies.
Augmented Personalized Health: using AI techniques on semantically integrated... (Amit Sheth)
Keynote @ 2018 AAAI Joint Workshop on Health Intelligence (W3PHIAI 2018), 2 February 2018, New Orleans, LA [Video: https://youtu.be/GujvoWRa0O8]
Related article: https://ieeexplore.ieee.org/document/8355891/
Abstract
Healthcare as we know it is in the process of going through a massive change - from episodic to continuous, from disease-focused to wellness and quality of life focused, from clinic centric to anywhere a patient is, from clinician controlled to patient empowered, and from being driven by limited data to 360-degree, multimodal personal-public-population physical-cyber-social big data-driven. While the ability to create and capture data is already here, the upcoming innovations will be in converting this big data into smart data through contextual and personalized processing such that patients and clinicians can make better decisions and take timely actions for augmented personalized health. In this talk, we will discuss how use of AI techniques on semantically integrated patient-generated health data (PGHD), environmental data, clinical data, and public social data is exploited to achieve a range of augmented health management strategies that include self-monitoring, self-appraisal, self-management, intervention, and Disease Progression Tracking and Prediction. We will review examples and outcomes from a number of applications, some involving patient evaluations, including asthma in children, bariatric surgery/obesity, mental health/depression, that are part of the Kno.e.sis kHealth personalized digital health initiative.
Background: http://bit.ly/k-APH, http://bit.ly/kAsthma, http://j.mp/PARCtalk
The Future of mHealth - Jay Srini - March 2011 (LifeWIRE Corp)
Jay Srini's presentation of her take on the future of mHealth, presented at the 3rd mHealth Networking Conference, March 30, 2011. Aside from being one of the preeminent thought leaders in the area of innovation and mHealth, she holds a number of positions, including Assistant Professor at the University of Pittsburgh and CIO of LifeWIRE Corp.
With @Atreja at the NODE Health Conference - Digital Medicine http://digitalmedicineconference.com/ on the events and studies which moved the field forward
Presented at the Expert Panel Discussion: The Future of Telehealth Technology at National Telehealth Conference, 10 Oct 2017, Cincinnati: http://www.nationaltelehealthconference.com
This is an abridged version of an invited talk: https://youtu.be/wDi1mLLyxuc
Asia HealthTech Investments by Julien de Salaberry (30 June 2015) (KickstartPH)
Kickstart Ventures' 2nd HealthTech Forum featured Julien de Salaberry, a globally-recognised expert on healthcare and technology.
Julien, the Chief Innovation Officer and Founder of The Propell Group (based in Singapore), talked about healthcare trends in Southeast Asia and how “frugal innovation” can be done in healthcare delivery.
And yeah, if you've got an interesting healthtech startup, message us at info@kickstart.ph. #startupPH
A review of the health sensor market estimated at 400M devices and worth $4B by 2014, including 36 companies offering devices across the wellness, chronic, diagnostic and monitoring markets. Purchase the report here: https://gumroad.com/l/Khrd
WHITE PAPER: How safe is your quantified self? from the Symantec Security Res... (Symantec)
Fueled by technological advances and social factors, the quantified self movement has experienced rapid growth. Quantified self, also known as self-tracking, aims to improve lifestyle and achievements by measuring and analyzing key performance data across a range of activities.
Symantec has found security risks in a large number of self tracking devices and applications. One of the most significant findings was that all of the wearable activity-tracking devices examined, including those from leading brands, are vulnerable to location tracking.
Our researchers built a number of scanning devices using Raspberry Pi mini computers and, by taking them out to athletic events and busy public spaces, found that it was possible to track individuals.
Symantec also found vulnerabilities in how personal data is stored and managed, such as passwords being transmitted in clear text and poor session management. As we collect, store, and share more data about ourselves, do we ever pause to consider the risks and implications of sharing this additional data?
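Symantec's scanning experiment works because many trackers advertise a fixed Bluetooth hardware (MAC) address. A minimal sketch, with invented data and function names, of how passive sightings from multiple scanners could be correlated into per-device movement trails:

```python
from collections import defaultdict

def build_trails(sightings):
    """Group passive scanner sightings into a per-device movement trail.

    Each sighting is (timestamp, scanner_location, device_mac).
    Because many trackers broadcast a fixed MAC address, sorting the
    sightings for one MAC by time reconstructs where that device went.
    """
    trails = defaultdict(list)
    for ts, location, mac in sightings:
        trails[mac].append((ts, location))
    return {mac: [loc for _, loc in sorted(obs)] for mac, obs in trails.items()}

# Hypothetical sightings from scanners placed at an athletic event and nearby spots.
sightings = [
    (100, "stadium-gate", "AA:BB:CC:01"),
    (250, "main-street", "AA:BB:CC:01"),
    (400, "coffee-shop", "AA:BB:CC:01"),
    (120, "stadium-gate", "AA:BB:CC:02"),
]
print(build_trails(sightings)["AA:BB:CC:01"])
# ['stadium-gate', 'main-street', 'coffee-shop']
```

This is exactly the correlation that MAC-address randomization (now common on phones, less so on cheap trackers at the time of the Symantec study) is designed to defeat.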
- Video recording of this lecture in English language: https://youtu.be/kqbnxVAZs-0
- Video recording of this lecture in Arabic language: https://youtu.be/SINlygW1Mpc
- Link to download the book free: https://nephrotube.blogspot.com/p/nephrotube-nephrology-books.html
- Link to NephroTube website: www.NephroTube.com
- Link to NephroTube social media accounts: https://nephrotube.blogspot.com/p/join-nephrotube-on-social-media.html
WHO recommendations on maternal and newborn care for a positive postnatal experience.
In line with the SDGs (Sustainable Development Goals) and the Global Strategy for Women's, Children's and Adolescents' Health, and applying a human-rights-based approach, postnatal care efforts must expand beyond coverage and mere survival to include quality of care.
These guidelines aim to improve the quality of essential, routine postnatal care provided to women and newborns, with the ultimate goal of improving maternal and neonatal health and well-being.
A "positive postnatal experience" is an important outcome for all women who give birth and for their newborns, laying the foundation for improved short- and long-term health and well-being. A positive postnatal experience is defined as one in which women, birthing people, newborns, couples, parents, caregivers, and families receive consistent information, reassurance, and support from motivated health professionals, and in which a flexible, well-resourced health system recognizes the needs of women and babies and respects their cultural context.
These consolidated guidelines present new and well-established recommendations on routine postnatal care for women and newborns receiving postpartum care in health facilities or in the community, regardless of available resources.
A comprehensive set of recommendations is provided for care during the puerperal period, with emphasis on the essential care that every woman and newborn should receive, and with due attention to quality of care, that is, the delivery and experience of the care received. These guidelines update and expand WHO's 2014 recommendations on postnatal care of the mother and newborn, and complement WHO's current guidelines on the management of postnatal complications.
The establishment of breastfeeding and the management of the main complications are also covered.
Highly recommended.
We will discuss these recommendations in our postgraduate course on Breastfeeding at Instituto Ciclos.
For now, this publication is available only in English.
Prof. Marcus Renato de Carvalho
www.agostodourado.com
Title: Sense of Smell
Presenter: Dr. Faiza, Assistant Professor of Physiology
Qualifications:
MBBS (Best Graduate, AIMC Lahore)
FCPS Physiology
ICMT, CHPE, DHPE (STMU)
MPH (GC University, Faisalabad)
MBA (Virtual University of Pakistan)
Learning Objectives:
Describe the primary categories of smells and the concept of odor blindness.
Explain the structure and location of the olfactory membrane and mucosa, including the types and roles of cells involved in olfaction.
Describe the pathway and mechanisms of olfactory signal transmission from the olfactory receptors to the brain.
Illustrate the biochemical cascade triggered by odorant binding to olfactory receptors, including the role of G-proteins and second messengers in generating an action potential.
Identify different types of olfactory disorders such as anosmia, hyposmia, hyperosmia, and dysosmia, including their potential causes.
Key Topics:
Olfactory Genes:
3% of the human genome accounts for olfactory genes.
400 genes for odorant receptors.
Olfactory Membrane:
Located in the superior part of the nasal cavity.
Medially: Folds downward along the superior septum.
Laterally: Folds over the superior turbinate and upper surface of the middle turbinate.
Total surface area: 5-10 square centimeters.
Olfactory Mucosa:
Olfactory Cells: Bipolar nerve cells derived from the CNS (100 million), with 4-25 olfactory cilia per cell.
Sustentacular Cells: Produce mucus and maintain ionic and molecular environment.
Basal Cells: Replace worn-out olfactory cells with an average lifespan of 1-2 months.
Bowman’s Gland: Secretes mucus.
Stimulation of Olfactory Cells:
Odorant dissolves in mucus and attaches to receptors on olfactory cilia.
Involves a cascade effect through G-proteins and second messengers, leading to depolarization and action potential generation in the olfactory nerve.
Quality of a Good Odorant:
Small (3-20 Carbon atoms), volatile, water-soluble, and lipid-soluble.
Facilitated by odorant-binding proteins in mucus.
Membrane Potential and Action Potential:
Resting membrane potential: -55mV.
Action potential frequency in the olfactory nerve increases with odorant strength.
Adaptation Towards the Sense of Smell:
Rapid adaptation within the first second, with further slow adaptation.
Psychological adaptation greater than receptor adaptation, involving feedback inhibition from the central nervous system.
Primary Sensations of Smell:
Camphoraceous, Musky, Floral, Pepperminty, Ethereal, Pungent, Putrid.
Odor Detection Threshold:
Examples: Hydrogen sulfide (0.0005 ppm), Methyl-mercaptan (0.002 ppm).
Some toxic substances are odorless at lethal concentrations.
Characteristics of Smell:
Odor blindness for single substances due to lack of appropriate receptor protein.
Behavioral and emotional influences of smell.
Transmission of Olfactory Signals:
From olfactory cells to glomeruli in the olfactory bulb, involving lateral inhibition.
The primitive (very old), less old, and newer olfactory systems, each with a different pathway.
Best Ayurvedic medicine for Gas and Indigestion (SwastikAyurveda)
Here is an updated list of the best Ayurvedic medicines for gas and indigestion: Gas-O-Go Syrup for dyspepsia, Lavizyme Syrup for acidity, Yumzyme hepatoprotective capsules, etc.
NVBDCP.pptx: National Vector Borne Disease Control Programme (Sapna Thakur)
NVBDCP was launched in 2003-2004. A vector-borne disease results from an infection transmitted to humans and other animals by blood-feeding arthropods such as mosquitoes, ticks, and fleas. Examples of vector-borne diseases include dengue fever, West Nile virus, Lyme disease, and malaria.
These simplified slides by Dr. Sidra Arshad present an overview of the non-respiratory functions of the respiratory tract.
Learning objectives:
1. Enlist the non-respiratory functions of the respiratory tract
2. Briefly explain how these functions are carried out
3. Discuss the significance of dead space
4. Differentiate between minute ventilation and alveolar ventilation
5. Describe the cough and sneeze reflexes
Study Resources:
1. Chapter 39, Guyton and Hall Textbook of Medical Physiology, 14th edition
2. Chapter 34, Ganong’s Review of Medical Physiology, 26th edition
3. Chapter 17, Human Physiology by Lauralee Sherwood, 9th edition
4. Non-respiratory functions of the lungs https://academic.oup.com/bjaed/article/13/3/98/278874
CDSCO and Pharmacovigilance (regulatory body in India) (NEHA GUPTA)
The Central Drugs Standard Control Organization (CDSCO) is India's national regulatory body for pharmaceuticals and medical devices. Operating under the Directorate General of Health Services, Ministry of Health & Family Welfare, Government of India, the CDSCO is responsible for approving new drugs, conducting clinical trials, setting standards for drugs, controlling the quality of imported drugs, and coordinating the activities of State Drug Control Organizations by providing expert advice.
Pharmacovigilance, on the other hand, is the science and activities related to the detection, assessment, understanding, and prevention of adverse effects or any other drug-related problems. The primary aim of pharmacovigilance is to ensure the safety and efficacy of medicines, thereby protecting public health.
In India, pharmacovigilance activities are monitored by the Pharmacovigilance Programme of India (PvPI), which works closely with CDSCO to collect, analyze, and act upon data regarding adverse drug reactions (ADRs). Together, they play a critical role in ensuring that the benefits of drugs outweigh their risks, maintaining high standards of patient safety, and promoting the rational use of medicines.
12. • 2017 was the biggest year on record for digital healthcare startup funding.
• Both the number of deals and the size of individual investments hit all-time highs.
• There were eight mega-deals of over $100M,
• and as a result a sizable number of unicorns valued at over $1B have emerged.
https://rockhealth.com/reports/2017-year-end-funding-report-the-end-of-the-beginning-of-digital-health/
15. • Over the past three years, digital healthcare investments by pharmaceutical companies such as Merck, J&J, and GSK have surged.
• 22 deals in 2015-2016 (matching the total for the five years 2010-2014)
• Merck has been the most active: 24 investments through its Global Health Innovation Fund since 2009 ($5-7M each)
• GSK: six deals since 2014 (via its VC arm, SR One), including Propeller Health
16.
17. Healthcare: health management in the broad sense that involves neither digital technology nor the professional medical domain.
e.g., exercise, nutrition, sleep
Digital healthcare: health management that uses digital technology.
e.g., Internet of Things, artificial intelligence, 3D printing, VR/AR
Mobile healthcare: the subset of digital healthcare that uses mobile technology.
e.g., smartphones, IoT, social media
Personal genetic testing
e.g., cancer genomics, disease risk, carrier status, drug sensitivity
e.g., wellness, ancestry analysis
Map of healthcare-related fields (ver 0.3)
Medicine: the professional medical domain of disease prevention, treatment, prescription, and management
Telehealth
Telemedicine
18. What is the most important factor in digital medicine?
19. “Data! Data! Data!” he cried. “I can’t make bricks without clay!”
- Sherlock Holmes, “The Adventure of the Copper Beeches”
20.
21. New data is being measured, stored, integrated, and analyzed in new ways, by new actors.
- New data: new types of data, at a new qualitative and quantitative scale
- New ways: wearable devices, smartphones, genetic testing, artificial intelligence, social media
- New actors: users/patients, the general public
22. Three Steps to Implement Digital Medicine
• Step 1. Measure the Data
• Step 2. Collect the Data
• Step 3. Insight from the Data
23. Digital Healthcare Industry Landscape
Data Measurement Data Integration Data Interpretation Treatment
Smartphone Gadget/Apps
DNA
Artificial Intelligence
2nd Opinion
Wearables / IoT
(ver. 3)
EMR/EHR 3D Printer
Counseling
Data Platform
Accelerator/early-VC
Telemedicine
Device
On Demand (O2O)
VR
Digital Healthcare Institute
Director, Yoon Sup Choi, Ph.D.
yoonsup.choi@gmail.com
39. Skin Cancer Image Classification (TensorFlow Dev Summit 2017)
Skin cancer classification performance of
the CNN and dermatologists.
https://www.youtube.com/watch?v=toK1OSLep3s&t=419s
50. Digital Phenotype:
Your smartphone knows if you are depressed
J Med Internet Res. 2015 Jul 15;17(7):e175.
The correlation analysis between the features and the PHQ-9 scores revealed that 6 of the 10 features were significantly correlated with the scores:
• strong correlation: circadian movement, normalized entropy, location variance
• correlation: phone usage features (usage duration and usage frequency)
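A sketch of the kind of correlation analysis described here, computing Pearson's r between one mobility feature and PHQ-9 scores. All values are invented for illustration; the study's actual data and statistics are in the cited paper:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-participant values: lower location variance alongside
# higher PHQ-9 (depression) scores, the direction the study reports.
location_variance = [8.2, 6.5, 5.1, 3.9, 2.7, 1.8]
phq9_scores       = [3,   5,   8,   11,  14,  19]
r = pearson_r(location_variance, phq9_scores)
print(round(r, 2))  # strongly negative
```

A real analysis would also report a p-value per feature (e.g., via `scipy.stats.pearsonr`) to decide which of the correlations are significant, as the study did for 6 of its 10 features.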
51.
52. • Users can share their own medical/health data, measured with the iPhone's sensors, to the platform
• Uses the accelerometer, microphone, gyroscope, GPS, and other sensors
• Steps, activity, memory, voice tremor, and more
• Addresses a long-standing problem in medical research: securing enough medical data
• Removes the physical and temporal barriers to study enrollment (once per 3 months ➞ once per second)
• Encourages public participation in medical research: more study participants
• Tens of thousands of participants signed up within 24 hours of launch
• Conducted only with the user's own consent
ResearchKit
58. Autism and Beyond: measuring facial expressions of young patients with autism
Mole Mapper: measuring morphological changes of moles
EpiWatch: measuring behavioral data of epilepsy patients
59. • myHeart, Stanford's cardiovascular disease research app
• 11,000 participants enrolled within a day of launch
• Alan Yeung, the study's lead at Stanford: "To recruit 11,000 participants the traditional way, we would need 50 hospitals across the US working for a year."
60. • mPower, a Parkinson's disease research app
• 5,589 participants enrolled within a day of launch
• Previously, $60 million and five years of recruiting had yielded just 800 patients
67. Fig 1. What can consumer wearables do? Heart rate can be measured with an oximeter built into a ring [3], muscle activity with an electromyographic sensor embedded into clothing [4], stress with an electrodermal sensor incorporated into a wristband [5], and physical activity or sleep patterns via an accelerometer in a watch [6,7]. In addition, a female's most fertile period can be identified with detailed body temperature tracking [8], while levels of mental attention can be monitored with a small number of non-gelled electroencephalogram (EEG) electrodes [9]. Levels of social interaction can also be monitored…
PLOS Medicine 2016
68. PwC Health Research Institute, Health wearables: Early days
HRI's survey (Consumer Intelligence Series) sought to better understand American consumers' attitudes toward wearables; insurers offering incentives for use may gain traction.
Figure 2: Wearables are not mainstream - yet. Just one in five US consumers say they own a wearable device.
• 21% of US consumers currently own a wearable technology product
• 10% wear it every day; 7% a few times a week; 2% a few times a month; 2% no longer use it
Source: HRI/CIS Wearables consumer survey 2014
PwC, Health wearables: early days, 2014
69. PwC, The Wearable Life 2.0
• 49% of US consumers own at least one wearable device (up from 21% in 2014), and 36% own more than one. We didn't even ask this question in our previous survey since it wasn't relevant at the time; that's how far we've come.
• Adoption of wearables declines with age: millennials are far more likely to own wearables than older adults. Of note, however: consumers aged 35 to 49 are more likely to own smart watches.
• Across the board for gender, age, and ethnicity, fitness wearable technology is most popular. Fitness runs away with it.
% of respondents who own each type of wearable device:
• Fitness band: 45%
• Smart clothing: 14%
• Smart video/photo device (e.g. GoPro): 27%
• Smart watch: 15%
• Smart glasses (includes VR/AR glasses): 12%
Base: respondents who currently own at least one device (pre-quota sample, n=700); Q10A/B/C/D/E. Please tell us your relationship with the following wearable technology products.
PwC, The Wearable Life 2.0, 2016
74. • Fitbit appears in clinical research in two main roles:
• as the intervention itself, tested for whether it can increase activity or improve treatment outcomes
• as a tool for monitoring study participants' activity levels
• 1. Studies using Fitbit to increase patients' activity:
• whether Fitbit increases activity in children with obesity
• whether Fitbit increases activity in patients after sleeve gastrectomy
• whether Fitbit increases activity in young patients with cystic fibrosis
• whether Fitbit motivates cancer patients to be more physically active
• 2. Studies using Fitbit to monitor the activity of trial participants:
• using Fitbit to assess the health and prognosis of patients who received chemotherapy
• using Fitbit to see whether cash incentives increase children's/parents' activity
• using Fitbit, alongside other survey results, to measure quality of life in brain tumor patients
• using Fitbit to assess activity levels in patients with Peripheral Artery Disease
75. • A study of the effect of weight loss on breast cancer recurrence
• 20% of breast cancer patients relapse, most with metastatic disease
• Excess weight has long been linked to a higher risk of breast cancer,
• and obesity is known to worsen the prognosis of early-stage breast cancer patients,
• but no study has yet examined the relationship between weight loss and recurrence risk
• 3,200 overweight or obese early-stage breast cancer patients will participate for two years
• Depending on the results, weight loss could enter the standard of care for breast cancer patients worldwide
• Fitbit is supporting the weight-loss program:
• Fitbit Charge HR: activity, calories burned, heart rate
• Fitbit Aria Wi-Fi Smart Scale: smart scale
• FitStar: personalized video exercise coaching
2016. 4. 27.
93. Sensor and Transmitter
Sensor: a tiny wire inserted under the skin converts glucose into an electrical current. Glucose range: 40-400 mg/dL; one reading every 5 minutes, for up to 7 days.
Transmitter: converts sensor data into glucose readings (Software 505) and broadcasts the glucose data via Bluetooth to a display device.
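The signal chain above can be caricatured in a few lines: a calibration curve maps sensor current to a glucose value, clamped to the device's 40-400 mg/dL reporting range. The slope and intercept here are invented; the real conversion (e.g., Dexcom's Software 505 algorithm) is proprietary and periodically recalibrated against fingerstick references:

```python
def current_to_glucose(current_na, slope=55.0, intercept=10.0):
    """Convert sensor current (nA) to a glucose reading in mg/dL.

    The linear calibration (slope/intercept) is purely illustrative of
    the idea; production CGM algorithms are far more sophisticated.
    """
    glucose = slope * current_na + intercept
    # Clamp to the stated 40-400 mg/dL reporting range.
    return max(40.0, min(400.0, glucose))

readings_na = [0.5, 2.0, 9.0]
print([current_to_glucose(i) for i in readings_na])
# [40.0, 120.0, 400.0] - low and high raw values are clamped to the range ends
```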
94.
95. CO-1
Dexcom G5 Mobile Continuous
Glucose Monitoring (CGM) System
for Non-Adjunctive Management
of Diabetes
July 21, 2016
Dexcom, Inc.
Clinical Chemistry and Clinical Toxicology
Devices Panel
96. Dexcom G5 Mobile Continuous Glucose Monitoring (CGM) System for Non-Adjunctive Management of Diabetes
• FDA's Clinical Chemistry and Clinical Toxicology Devices Panel
• recommended that the Dexcom G5 can replace conventional SMBG (self-monitoring of blood glucose)
• Votes: safe (8:2), effective (9:1), benefits outweigh risks (8:2)
• Dexcom G5 readings can differ from SMBG by about 9%,
• but SMBG meters from different manufacturers already differ from one another by 4-9%,
• and a majority of patients (69%) already use CGM in place of SMBG off-label anyway
• Better to approve it and formally educate and manage patients
97. • Health Canada has decided that the Dexcom G5 CGM can replace SMBG
• Physicians can now prescribe Dexcom in place of conventional SMBG
• Conventional SMBG is needed only twice a day, for calibration
102. Transmitter iPhone Apple Watch
“not require the user to have a separate receiver box,
though it will still require that the iPhone be in range” (2016.3)
http://www.mobihealthnews.com/content/dexcoms-next-generation-apple-watch-cgm-app-needs-one-less-device-work
103. Transmitter Apple Watch
“with Bluetooth built into the Watch, users won’t need to have anything on them
but the CGM itself and their Apple Watch.” (2017.7)
http://www.mobihealthnews.com/content/dexcom-propeller-and-resound-poised-make-use-apple-watch-native-bluetooth-launch
104. FreeStyle Libre Flash Glucose Monitoring System
Why prick when you can scan?
http://www.freestylelibre.co.uk
105. Temporary Tattoo Offers Needle-Free Way to Monitor Glucose Levels
• A very mild electrical current applied to the skin for 10 minutes forces sodium ions in the fluid between skin cells to migrate toward the tattoo's electrodes.
• These ions carry glucose molecules that are also found in the fluid.
• A sensor built into the tattoo then measures the strength of the electrical charge produced by the glucose to determine a person's overall glucose levels.
106. GlucoWatch
• GlucoWatch 2 - Cygnus
• FDA approved and marketed in 2002
• Provides a glucose reading every 10 minutes
• … but the device was discontinued because it caused skin irritation
108. C8 Medisensor
"From a technological standpoint, what we had done with this
was a stellar achievement in that people thought even (what
we achieved) was beyond possibility.”
- Former C8 MediSensors CEO Rudy Hofmeister
111. Science Advances, 24 Jan 2018
Fig. 1. Stretchable, transparent smart contact lens system. (A) Schematic illustration of the soft, smart contact lens, composed of a hybrid substrate, functional devices (rectifier, LED, and glucose sensor), and a transparent, stretchable conductor (antenna and interconnects). (B) Circuit diagram of the smart contact lens system. (C) Operation of the lens: electric power is wirelessly transmitted to the lens through the antenna, activating the LED pixel and the glucose sensor; after detecting a tear-fluid glucose level above the threshold, the pixel turns off.
Park et al., Sci. Adv. 2018;4:eaap9841
112. Science Advances, 24 Jan 2018
Fig. 2. Properties of a stretchable and transparent hybrid substrate. (A) Schematic image of the hybrid substrate, in which reinforced islands are embedded in the elastic substrate. (B) SEM images before (top) and during (bottom) 30% stretching; the arrow indicates the stretching direction. Scale bars, 500 μm. (C) Effective strains on each part along the stretching direction indicated in (B). (D) AFM image of the hybrid substrate; black and blue arrows indicate the elastic region and the reinforced island, respectively. Scale bar, 5 μm. (E) Photograph of the hybrid substrates molded into contact-lens shape. Scale bar, 1 cm. (F) Optical transmittance (black) and haze (red) spectra of the hybrid substrate. (G) Schematic diagram of the photographing method used to assess the optical clarity of hybrid substrates. (H) Photographs taken with a camera on whose lens the OP-LENS-based hybrid substrate (left) and the SU8-LENS-based hybrid substrate (right) are placed.
Park et al., Sci. Adv. 2018;4:eaap9841
113. Science Advances, 24 Jan 2018
When the glucose concentration is above 0.9 mM, the pixel turns off because the bias applied to the LED drops below its turn-off voltage; the LED turns off because the glucose concentration is over the threshold, not because of damage to the circuit.
Fig. 5. Soft, smart contact lens for detecting glucose. (A) Schematic image of the soft, smart contact lens: the rectifier, the LED, and the glucose sensor sit on the reinforced regions, while the transparent, stretchable AgNF-based antenna and interconnects sit on an elastic region. (B) Photograph of the fabricated soft, smart contact lens. Scale bar, 1 cm. (C) Photograph of the smart contact lens on the eye of a mannequin. Scale bar, 1 cm. (D) Photographs of the in vivo test on a live rabbit wearing the soft, smart contact lens. Left: the LED in the lens mounted on the rabbit's eye is on. Middle: injection of tear fluid with a glucose concentration of 0.9 mM. Right: the LED turns off after detecting the increased glucose concentration. Scale bars, 1 cm. (E) Heat tests while a live rabbit wears the operating soft, smart contact lens. Scale bars, 1 cm.
Park et al., Sci. Adv. 2018;4:eaap9841
120. Night Scout Project
• Hacked a continuous glucose monitor so that it uploads glucose readings to the cloud
• Parents can check their child's glucose anytime, anywhere, on a smartphone or smartwatch
• Built voluntarily by parents of children with diabetes, distributed free as open source, and installed at the user's own initiative
• Not a commercial medical device, so it falls outside FDA regulation
124. Hood Thabit et al., Home Use of an Artificial Beta Cell in Type 1 Diabetes, NEJM (2015)
Home Use of an Artificial Beta Cell in Type 1 Diabetes
The proportion of time that the glucose level was in the target range (primary end point) was significantly greater during the intervention period than during the control period, by a mean of 11.0 percentage points (95% confidence interval [CI], 8.1 to 13.8; P<0.001).
125. Hood Thabit et al., Home Use of an Artificial Beta Cell in Type 1 Diabetes, NEJM (2015)
The overnight mean glucose level was significantly lower with the closed-loop system than with the control system (P<0.001), and the proportion of time that the glucose level was within the overnight target range was greater with the closed-loop system (P<0.001).
127. • Self-reported data from a small group – 18 of the first 40 users
• The positive glucose and quality of life impact this system has had
• 0.9% improvement in A1c (from 7.1% to 6.2%)
• a strong time-in-range improvement from 58% to 81%
• near-unanimous improvements in sleep quality
OpenAPS DIY Automated Insulin Delivery Users Report 81%
Time in Range, Better Sleep, and a 0.9% A1c Improvement
https://openaps.org/2016/06/11/real-world-use-of-open-source-artificial-pancreas-systems-poster-presented-at-american-diabetes-association-scientific-sessions/
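Time-in-range, the headline metric in the OpenAPS report above, is simple to compute from raw CGM samples. A sketch using the conventional 70-180 mg/dL target range and invented readings:

```python
def time_in_range(readings, low=70, high=180):
    """Fraction of CGM readings inside the target range [low, high] mg/dL."""
    in_range = sum(1 for g in readings if low <= g <= high)
    return in_range / len(readings)

# Hypothetical CGM samples (mg/dL); 65 is below range, 190 above.
cgm = [65, 90, 110, 150, 175, 190, 120, 100]
print(f"{time_in_range(cgm):.0%}")  # 75%
```

Real reports weight by sampling interval and usually break out time-below-range and time-above-range separately, since the clinical meaning of the two excursions differs.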
129. First FDA-approved Artificial Pancreas
http://www.fda.gov/NewsEvents/Newsroom/PressAnnouncements/ucm522974.htm
• Medtronic's MiniMed 670G is the first device FDA-approved for type 1 diabetes patients
• Pivotal trial in 123 type 1 diabetes patients aged 14 and older
• Over three months of follow-up, HbA1c (A1c) improved significantly, from 7.4% to 6.9%
• No serious adverse events such as diabetic ketoacidosis or severe hypoglycemia occurred during the period
• Medtronic plans to further verify effectiveness and safety in patients aged 7-13
(2016. 9. 28)
130. https://myglu.org/articles/a-pathway-to-an-artificial-pancreas-an-interview-with-jdrf-s-aaron-kowalski
• Step 1: When glucose falls to a preset threshold, stop insulin delivery
• Step 2: Predict that glucose will fall to the threshold, and suspend or reduce insulin delivery in advance
• Step 3: Prevent glucose not only from falling too far below the threshold, but also from rising too far above it
• Step 4: Aim to maintain a specific glucose level rather than a range (hybrid closed-loop product)
• Step 5: Going beyond Step 4, automate even the separate pre-meal insulin boluses
• Step 6: Modulate additional hormones such as glucagon, not just insulin
Six Steps of Artificial Pancreas (JDRF)
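Step 2 of the JDRF pathway (predictive low-glucose suspend) can be illustrated with a toy controller that extrapolates the recent CGM trend and suspends basal insulin when the forecast crosses a threshold. All constants are hypothetical; real systems use far more careful prediction models and layered safety checks:

```python
def basal_command(glucose_history, threshold=80.0, horizon_min=30, sample_min=5):
    """Decide whether to continue or suspend basal insulin delivery.

    Linearly extrapolates the last two CGM samples `horizon_min` minutes
    ahead; suspends if the forecast falls below `threshold` mg/dL.
    """
    if len(glucose_history) < 2:
        return "continue"  # not enough data to forecast a trend
    rate = (glucose_history[-1] - glucose_history[-2]) / sample_min  # mg/dL per minute
    forecast = glucose_history[-1] + rate * horizon_min
    return "suspend" if forecast < threshold else "continue"

print(basal_command([130, 120]))  # falling 2 mg/dL/min -> forecast 60 -> suspend
print(basal_command([110, 112]))  # rising -> continue
```

Steps 3-6 progressively replace this one-sided suspend logic with a full control loop (typically PID or model-predictive control) that also doses insulin, and eventually glucagon, toward a target.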
132. MiniMed 670G vs. OpenAPS
http://www.fda.gov/NewsEvents/Newsroom/PressAnnouncements/ucm522974.htm
• Difficult to set any target other than 120 mg/dl
• Cannot be used by patients aged 13 and under
• Not yet approved outside the United States
• High cost of ownership (over 8 million KRW up front, plus about 400,000 KRW per month)
143. Epic MyChart Epic EHR
Dexcom CGM
Patients/User
Devices
EHR Hospital
Withings
+
Apple Watch
Apps
HealthKit
144.
145. • Apple HealthKit has partnered with 14 of the 23 leading US hospitals
• A markedly faster pace than competing platforms Google Fit and S-Health
• CIO of Beth Israel Deaconess:
• "Many of our 250,000 patients are generating all kinds of data with wearables. Our hospital cannot provide an interface for every one of these devices. But Apple can."
2015.2.5
151. [Architecture diagram: patient devices (Dexcom CGM, Withings, Apple Watch) feed apps and Apple HealthKit, which bridges to the Epic MyChart app and the hospital's Epic EHR]
152. transfer from Share2 to HealthKit as mandated by Dexcom receiver Food and Drug Administration device classification. Once the glucose values reach HealthKit, they are passively shared with the Epic MyChart app (https://www.epic.com/software-phr.php). The MyChart patient portal is a component of the Epic EHR and uses the same database, and the CGM values populate a standard glucose flowsheet in the patient's chart. This connection is initially established when a provider places an order in a patient's electronic chart, resulting in a request to the patient within the MyChart app. Once the patient or patient proxy (parent) accepts this connection request on the mobile device, a communication bridge is established between HealthKit and MyChart enabling population of CGM data as frequently as every 5 minutes.
Participation required confirmation of Bluetooth pairing of the CGM receiver to a mobile device, updating the mobile device with the most recent version of the operating system, Dexcom Share2 app, Epic MyChart app, and confirming or establishing a username and password for all accounts, including a parent's/adolescent's Epic MyChart account. Setup time averaged 45-60 minutes in addition to the scheduled clinic visit. During this time, there was specific verbal and written notification to the patients/parents that the diabetes healthcare team would not be actively monitoring or have real-time access to CGM data, which was out of scope for this pilot. The patients/parents were advised that they should continue to contact the diabetes care team by established means for any urgent questions/concerns. Additionally, patients/parents were advised to maintain updates
Figure 1: Overview of the CGM data communication bridge architecture.
Kumar R B, et al. J Am Med Inform Assoc 2016;0:1–6. doi:10.1093/jamia/ocv206, Brief Communication
•Continuously monitored glucose data from Dexcom CGM devices was integrated into the EHR via Apple HealthKit
•The study reports improved glucose management for diabetes patients
•Conducted at Stanford Children's Health and Stanford School of Medicine with 10 pediatric type 1 diabetes patients (288 readings/day)
•EHR-based data analysis and visualization improved data review and patient communication
•Patients could respond to glucose changes in real time, versus the traditional model of responding only at clinic visits
JAMIA 2016
Remote Patient Monitoring
via Dexcom-HealthKit-Epic-Stanford
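The consent-gated bridge described above can be sketched as a toy model. The classes and method names below are invented for illustration; the real HealthKit and MyChart APIs are not shown. The point is simply that CGM values flow into the chart only after the patient accepts the provider-initiated connection request.

```python
# Minimal sketch of a consent-gated data bridge (hypothetical classes; not
# the actual HealthKit/MyChart API): readings reach the flowsheet only after
# the patient accepts the provider-initiated connection request.

class Bridge:
    def __init__(self):
        self.accepted = False
        self.flowsheet = []  # stands in for the Epic glucose flowsheet

    def patient_accepts(self):
        """Patient (or proxy) accepts the connection request on the device."""
        self.accepted = True

    def push_reading(self, mg_dl, timestamp):
        """Called as often as every 5 minutes with a new CGM value."""
        if not self.accepted:
            return False  # no communication bridge yet
        self.flowsheet.append((timestamp, mg_dl))
        return True

bridge = Bridge()
print(bridge.push_reading(112, "09:00"))  # False - consent not yet given
bridge.patient_accepts()
print(bridge.push_reading(108, "09:05"))  # True - value lands in flowsheet
```

In the real pilot this gate corresponds to the provider's order plus the patient's acceptance in the MyChart app; everything after that is passive sharing.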
160. •Artificial Narrow Intelligence
• AI that performs well in one specific domain
• Chess, quiz shows, mail filtering, product recommendation, autonomous driving
•Artificial General Intelligence
• Human-level AI across all domains
• Reasoning, planning, problem solving, abstraction, learning complex concepts
•Artificial Super Intelligence
• AI that surpasses humans in every domain, including science, technology and social skills
• "Any sufficiently advanced technology is indistinguishable from magic" - Arthur C. Clarke
162. When will machines acquire human-level intelligence?
[Chart: cumulative share of survey respondents (10% / 50% / 90%) predicting human-level AI, by year, 2010-2100, for each survey and their combination]
• Philosophy and Theory of AI (2011) (PT-AI)
• Artificial General Intelligence (2012) (AGI)
• Greek Association for Artificial Intelligence (EETN)
• Survey of most frequently cited 100 authors (2013) (TOP100)
• Combined
Superintelligence, Nick Bostrom (2014)
163. Superintelligence: Science or fiction?
Panelists: Elon Musk (Tesla, SpaceX), Bart Selman (Cornell), Ray Kurzweil (Google), David Chalmers (NYU), Nick Bostrom (FHI), Demis Hassabis (DeepMind), Stuart Russell (Berkeley), Sam Harris, and Jaan Tallinn (CSER/FLI)
January 6-8, 2017, Asilomar, CA
https://brunch.co.kr/@kakao-it/49
https://www.youtube.com/watch?v=h0962biiZa4
164. Superintelligence: Science or fiction?
Panelists: Elon Musk (Tesla, SpaceX), Bart Selman (Cornell), Ray Kurzweil (Google), David Chalmers (NYU), Nick Bostrom (FHI), Demis Hassabis (DeepMind), Stuart Russell (Berkeley), Sam Harris, and Jaan Tallinn (CSER/FLI)
January 6-8, 2017, Asilomar, CA
Q: Is superintelligence an achievable domain?
Q: Do you think an entity with superintelligence could emerge?
All nine panelists: YES
Q: Do you hope that superintelligence will actually be realized?
Ray Kurzweil, Nick Bostrom, Demis Hassabis: YES
Elon Musk, Stuart Russell, Bart Selman, David Chalmers, Sam Harris, Jaan Tallinn: Complicated
https://brunch.co.kr/@kakao-it/49
https://www.youtube.com/watch?v=h0962biiZa4
167. Superintelligence, Nick Bostrom (2014)
Once strong AI at the human baseline is achieved,
the subsequent take-off to superintelligence
may take an extremely short time.
How far to superintelligence
176. •Analyzing complex medical data and deriving insights
•Analyzing and reading medical imaging and pathology data
•Monitoring continuous data streams for prevention and prediction
AI applications in medicine
180. 600,000 pieces of medical evidence
2 million pages of text from 42 medical journals and clinical trials
69 guidelines, 61,540 clinical trials
IBM Watson on Medicine
Watson learned...
+
1,500 lung cancer cases
physician notes, lab results and clinical research
+
14,700 hours of hands-on training
186. Annals of Oncology (2016) 27 (suppl_9): ix179-ix180. 10.1093/annonc/mdw601
Validation study to assess performance of IBM cognitive
computing system Watson for oncology with Manipal
multidisciplinary tumour board for 1000 consecutive cases:
An Indian experience
• Treatment recommendations from the MMDT (Manipal multidisciplinary tumour board) and data for 1,000 cases of 4 different cancers (breast 638, colon 126, rectum 124, lung 112) treated in the last 3 years were collected.
• Of the treatment recommendations given by the MMDT, WFO classified
50% as REC (recommended), 28% as FC (for consideration), 17% as NREC (not recommended)
• Nearly 80% of the recommendations fell in the WFO REC and FC groups
• 5% of the treatments given by the MMDT were not available in WFO
• The degree of concordance varied depending on the type of cancer
• WFO-REC concordance was highest in rectum (85%) and lowest in lung (17.8%)
• high with TNBC (67.9%); HER2 negative (35%)
• WFO took a median of 40 sec to capture, analyze and give the treatment
(vs a median of 15 min for the MMDT)
187. WFO in ASCO 2017
• Early experience with the IBM WFO cognitive computing system for lung
and colorectal cancer treatment (Manipal Hospital)
• Over the past 3 years: lung cancer (112), colon cancer (126), rectum cancer (124)
• Concordance for lung cancer: localized 88.9%, metastatic 97.9%
• colon cancer: localized 85.5%, metastatic 76.6%
• rectum cancer: localized 96.8%, metastatic 80.6%
Performance of WFO in India
2017 ASCO Annual Meeting, J Clin Oncol 35, 2017 (suppl; abstr 8527)
188. San Antonio Breast Cancer Symposium, December 6-10, 2016
Concordance of WFO (@T2) and MMDT (@T1* v. T2**) (N = 638 Breast Cancer Cases)

Time point | REC (n, %)  | REC + FC (n, %)
T1*        | 296 (46%)   | 463 (73%)
T2**       | 381 (60%)   | 574 (90%)

* T1: time of the original treatment decision by the MMDT in the past (last 1-3 years)
** T2: time (2016) of WFO's treatment advice and of the MMDT's treatment decision upon blinded re-review of non-concordant cases
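The percentages in the concordance table above can be re-derived from the raw counts, which is a quick sanity check on the slide's numbers:

```python
# Re-deriving the concordance percentages from the raw counts
# (N = 638 breast cancer cases, counts taken from the table above).

N = 638
counts = {"T1": {"REC": 296, "REC+FC": 463},
          "T2": {"REC": 381, "REC+FC": 574}}

for time_point, row in counts.items():
    pcts = {k: round(100 * v / N) for k, v in row.items()}
    print(time_point, pcts)  # T1 -> 46% / 73%, T2 -> 60% / 90%
```

The rounded values match the table (46/73 at T1, 60/90 at T2).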
189. Tentative conclusions
•Concordance between Watson for Oncology and physicians:
•differs by cancer type.
•differs by stage even within the same cancer type.
•differs by hospital and country even for the same cancer type.
•may change over time.
190. Principles are needed
•For which patients should Watson's opinion be sought?
•How much should Watson be trusted (by cancer type)?
•Should Watson's opinion be disclosed to the patient?
•What should be done when Watson and the care team disagree?
•Can the use of Watson be reimbursed by insurers?
Quality of care and treatment outcomes may vary with these criteria,
yet today each hospital applies its own individual standards.
191. "Empowering the Oncology Community for Cancer Care"
Genomics / Oncology / Clinical Trial Matching
Watson Health's oncology clients span more than 35 hospital systems
Andrew Norden, KOTRA Conference, March 2017, “The Future of Health is Cognitive”
193. •Over 16 weeks, 2,620 lung and breast cancer patients of HOG (Highlands Oncology Group) were included
•90 patients were screened against three Novartis breast cancer trial protocols
•Clinical trial coordinator: 1 hour 50 minutes
•Watson CTM: 24 minutes (a 78% time reduction)
•Watson CTM automatically screened out the 94% of patients who did not meet the trial eligibility criteria
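The 78% figure quoted above is straightforward arithmetic on the two screening times, which can be checked directly:

```python
# Checking the quoted 78% reduction: 1 h 50 min manually vs 24 min for
# Watson CTM (times taken from the slide above).

coordinator_min = 1 * 60 + 50   # 110 minutes
watson_min = 24
reduction = (coordinator_min - watson_min) / coordinator_min
print(f"{reduction:.0%}")  # 78%
```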
194. Watson Genomics Overview
Watson Genomics Content
• 20+ content sources, including:
• Medical articles (23 million)
• Drug information
• Clinical trial information
• Genomic information
Pipeline: case sequenced (VCF / MAF, Log2, DGE); encryption; molecular profile analysis; pathway analysis; drug analysis; service analysis, reports, & visualizations
195. •Analyzing complex medical data and deriving insights
•Analyzing and reading medical imaging and pathology data
•Monitoring continuous data streams for prevention and prediction
AI applications in medicine
198. [Fig. 4 from Russakovsky et al.: a random selection of images in the ILSVRC detection validation set; the top 4 rows were taken from the ILSVRC2012 single-object localization validation set, and the bottom 4 rows were collected from Flickr using scene-level queries] http://arxiv.org/pdf/1409.0575.pdf
199. • Main competition
• Classification: classify the objects in an image
• Localization: classify and localize 'one' object in the image
• Object detection: classify and localize 'all' objects in the image
[Fig. 7 from Russakovsky et al.: tasks in ILSVRC; the first column shows the ground-truth labeling on an example image, and the next three show three sample outputs with the corresponding evaluation scores]
http://arxiv.org/pdf/1409.0575.pdf
200. Performance of winning entries in the ILSVRC2010-2015 competitions in each of the three tasks
[Charts: image classification error fell steadily from 2010 to 2015; single-object localization error fell from 2011 to 2015; object detection average precision rose from 2013 to 2015]
http://image-net.org/challenges/LSVRC/2015/results#loc
202. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun, “Deep Residual Learning for Image Recognition”, 2015
How deep is deep?
206. DeepFace: Closing the Gap to Human-Level Performance in Face Verification
Taigman, Y. et al. (2014). DeepFace: Closing the Gap to Human-Level Performance in Face Verification, CVPR'14.
Figure 2. Outline of the DeepFace architecture. A front-end of a single convolution-pooling-convolution filtering on the rectified input, followed by three
locally-connected layers and two fully-connected layers. Colors illustrate feature maps produced at each layer. The net includes more than 120 million
parameters, where more than 95% come from the local and fully connected layers.
These layers have very few parameters and merely expand the input into a set of simple local features. The subsequent layers (L4, L5 and L6) are instead locally connected [13, 16]: like a convolutional layer they apply a filter bank, but every location in the feature map learns a different set of filters, since different regions of an aligned image have different local statistics. The goal of training is to maximize the probability of the correct class (face id). We achieve this by minimizing the cross-entropy loss for each training sample. If k is the index of the true label for a given input, the loss is L = -log p_k. The loss is minimized over the parameters by computing the gradient of L with respect to the parameters.
Human: 95% vs. DeepFace in Facebook: 97.35%
Recognition Accuracy for Labeled Faces in the Wild (LFW) dataset (13,233 images, 5,749 people)
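The loss quoted in the excerpt, L = -log p_k, is ordinary cross-entropy over softmax probabilities. A generic sketch (not DeepFace's actual code) makes the computation concrete:

```python
import numpy as np

# Generic cross-entropy sketch for L = -log p_k: raw class scores (logits)
# are turned into probabilities with a softmax, and the loss is the negative
# log-probability of the true class. Not DeepFace's actual implementation.

def cross_entropy(logits, true_index):
    z = logits - np.max(logits)          # subtract max for numerical stability
    probs = np.exp(z) / np.sum(np.exp(z))
    return -np.log(probs[true_index])

logits = np.array([2.0, 1.0, 0.1])
print(cross_entropy(logits, 0))  # small loss: class 0 is already most probable
print(cross_entropy(logits, 2))  # larger loss for an unlikely class
```

Minimizing this loss by gradient descent, as the excerpt describes, pushes the probability mass of each training sample toward its true face identity.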
207. FaceNet: A Unified Embedding for Face Recognition and Clustering
Schroff, F. et al. (2015). FaceNet: A Unified Embedding for Face Recognition and Clustering
Human: 95% vs. FaceNet of Google: 99.63%
Recognition Accuracy for Labeled Faces in the Wild (LFW) dataset (13,233 images, 5,749 people)
Figure 6. LFW errors. This shows all pairs of images (false accepts and false rejects) that were incorrectly classified on LFW. Only eight of the 13 errors shown here are actual errors; the other four are mislabeled in LFW.
5.7. Performance on YouTube Faces DB
We use the average similarity of all pairs of the first one hundred frames that our face detector detects in each video. This gives us a classification accuracy of 95.12% ± 0.39. Using the first one thousand frames results in 95.18%. Compared to [17] (91.4%), who also evaluate one hundred frames per video, we reduce the error rate by almost half. DeepId2+ [15] achieved 93.2%, and our method reduces this error by 30%, comparable to our improvement on LFW.
5.8. Face Clustering
Our compact embedding lends itself to be used in order to cluster a user's personal photos into groups of people with the same identity. The constraints in assignment imposed by clustering faces, compared to the pure verification task, lead to truly amazing results. Figure 7 shows one cluster in a user's personal photo collection, generated using agglomerative clustering. It is a clear showcase of the incredible invariance to occlusion, lighting, pose and even age.
Figure 7. Face Clustering. Shown is an exemplar cluster for one user. All these images in the user's personal photo collection were clustered together.
6. Summary
We provide a method to directly learn an embedding into a Euclidean space for face verification. This sets it apart from other methods [15, 17] that use the CNN bottleneck layer, or require additional post-processing such as concatenation of multiple models and PCA, as well as SVM classification. Our end-to-end training both simplifies the setup and shows that directly optimizing a loss relevant to the task at hand improves performance.
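The verification scheme described above, deciding "same person" by Euclidean distance in the learned embedding space, can be sketched on toy vectors. The embeddings and threshold below are made up for illustration; a real system would obtain embeddings from a trained network.

```python
import numpy as np

# FaceNet-style verification sketch on toy embeddings: two faces match when
# the Euclidean distance between their L2-normalized embeddings falls below
# a threshold. Vectors and threshold here are fabricated for illustration.

def l2_normalize(v):
    return v / np.linalg.norm(v)

def same_identity(emb_a, emb_b, threshold=1.0):
    d = np.linalg.norm(l2_normalize(emb_a) - l2_normalize(emb_b))
    return d < threshold

a = np.array([0.9, 0.1, 0.2])
b = np.array([0.8, 0.2, 0.25])   # nearby point: treated as the same person
c = np.array([-0.7, 0.6, 0.1])   # distant point: a different person
print(same_identity(a, b))  # True
print(same_identity(a, c))  # False
```

Clustering photos by identity, as in Section 5.8, applies the same distance in an agglomerative clustering instead of a single pairwise test.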
208. Show and Tell: A Neural Image Caption Generator
Vinyals, O. et al. (2015). Show and Tell: A Neural Image Caption Generator, arXiv:1411.4555
Samy Bengio (Google, bengio@google.com), Dumitru Erhan (Google, dumitru@google.com)
Example captions: "A group of people shopping at an outdoor market." / "There are many vegetables at the fruit stand."
Figure 1. NIC, our model, is based end-to-end on a neural network consisting of a vision CNN followed by a language generating RNN.
209. Show and Tell: A Neural Image Caption Generator
Vinyals, O. et al. (2015). Show and Tell: A Neural Image Caption Generator, arXiv:1411.4555
Figure 5. A selection of evaluation results, grouped by human rating.
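The encode-then-decode pipeline of Figure 1 can be sketched with stubs: an "encoder" summarizes the image into a feature vector and a "decoder" greedily emits one word at a time until an end token. Both models below are stand-ins (a mean and a lookup table); a real NIC model replaces them with a trained CNN and RNN.

```python
# Toy sketch of the CNN-to-RNN captioning loop. The encoder and decoder are
# stubs; the greedy word-by-word decoding loop is the part being illustrated.

def encode_image(image):
    # stand-in for the vision CNN: any fixed-size feature vector
    return [sum(image) / len(image)]

def decode_step(feature, prev_word):
    # stand-in for the RNN step: a lookup table instead of learned weights
    table = {
        "<start>": "a",
        "a": "group",
        "group": "of",
        "of": "people",
        "people": "<end>",
    }
    return table.get(prev_word, "<end>")

def caption(image, max_len=10):
    feature, words, prev = encode_image(image), [], "<start>"
    for _ in range(max_len):
        word = decode_step(feature, prev)
        if word == "<end>":
            break
        words.append(word)
        prev = word
    return " ".join(words)

print(caption([0.1, 0.5, 0.9]))  # "a group of people"
```

In the actual model the decoder conditions on both the image feature and its hidden state, and beam search usually replaces this greedy loop.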
214. Business Area
Medical Image Analysis
VUNOnet and our machine learning technology will help doctors and hospitals manage
medical scans and images intelligently, making diagnosis faster and more accurate.
[Figure: original image and automatic segmentation; classes include Normal, Emphysema, Reticular Opacity]
Our system finds DILDs at the highest accuracy (*DILDs: Diffuse Interstitial Lung Disease)
Digital Radiologist
Collaboration with Prof. Joon Beom Seo (Asan Medical Center)
Analysed 1200 patients for 3 months
216. Digital Radiologist
Med Phys. 2013 May;40(5):051912. doi: 10.1118/1.4802214.
Collaboration with Prof. Joon Beom Seo (Asan Medical Center)
Analysed 1200 patients for 3 months
218. Digital Radiologist
Med Phys. 2013 May;40(5):051912. doi: 10.1118/1.4802214.
Collaboration with Prof. Joon Beom Seo (Asan Medical Center)
Analysed 1200 patients for 3 months
Feature Engineering vs Feature Learning
• Visualization of hand-crafted features vs learned features in 2D
219. Bench to Bedside: Practical Applications
• Contents-based Case Retrieval
–Finding similar cases with a clinically matching context: a search engine for medical images.
–Clinicians can refer to the diagnoses and prognoses of past similar patients to make better clinical decisions.
–Accepted for presentation at RSNA 2017
Digital Radiologist
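Contents-based case retrieval as described above boils down to nearest-neighbor search over feature vectors. The sketch below uses cosine similarity over a fabricated case library; in practice the vectors would come from a trained network and the index would be far larger.

```python
import numpy as np

# Sketch of contents-based case retrieval: each past case is a feature
# vector (e.g. produced by a trained network), and a query returns the most
# similar cases by cosine similarity. The case library here is fabricated.

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def retrieve(query, library, top_k=2):
    scored = sorted(library.items(),
                    key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [case_id for case_id, _ in scored[:top_k]]

library = {
    "case_A": np.array([0.9, 0.1, 0.0]),
    "case_B": np.array([0.8, 0.2, 0.1]),
    "case_C": np.array([0.0, 0.1, 0.9]),
}
print(retrieve(np.array([1.0, 0.0, 0.0]), library))  # ['case_A', 'case_B']
```

A clinician could then review the diagnoses and outcomes of the returned cases, which is the workflow the slide describes.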
220. •Zebra Medical Vision launched a service that reads radiology images for $1 each (October 2017)
•The menu is not final, but is expected to include Pulmonary Hypertension, Lung Nodule, Fatty Liver, Emphysema,
Coronary Calcium Scoring, Bone Mineral Density, and Aortic Aneurysm
https://www.zebra-med.com/aione/
221. Zebra Medical Vision’s AI1: AI at Your Fingertips
https://www.youtube.com/watch?v=0PGgCpXa-Fs
223. Diabetic retinopathy
• A major complication of diabetes: develops in 90% of patients who have had diabetes for 30+ years
• Ophthalmologists photograph and read the fundus (the interior of the eye)
• Diagnosed by assessing retinal microvascular proliferation, hemorrhage, and exudates
224. Copyright 2016 American Medical Association. All rights reserved.
Development and Validation of a Deep Learning Algorithm
for Detection of Diabetic Retinopathy
in Retinal Fundus Photographs
Varun Gulshan, PhD; Lily Peng, MD, PhD; Marc Coram, PhD; Martin C. Stumpe, PhD; Derek Wu, BS; Arunachalam Narayanaswamy, PhD;
Subhashini Venugopalan, MS; Kasumi Widner, MS; Tom Madams, MEng; Jorge Cuadros, OD, PhD; Ramasamy Kim, OD, DNB;
Rajiv Raman, MS, DNB; Philip C. Nelson, BS; Jessica L. Mega, MD, MPH; Dale R. Webster, PhD
IMPORTANCE Deep learning is a family of computational methods that allow an algorithm to
program itself by learning from a large set of examples that demonstrate the desired
behavior, removing the need to specify rules explicitly. Application of these methods to
medical imaging requires further assessment and validation.
OBJECTIVE To apply deep learning to create an algorithm for automated detection of diabetic
retinopathy and diabetic macular edema in retinal fundus photographs.
DESIGN AND SETTING A specific type of neural network optimized for image classification
called a deep convolutional neural network was trained using a retrospective development
data set of 128 175 retinal images, which were graded 3 to 7 times for diabetic retinopathy,
diabetic macular edema, and image gradability by a panel of 54 US licensed ophthalmologists
and ophthalmology senior residents between May and December 2015. The resultant
algorithm was validated in January and February 2016 using 2 separate data sets, both
graded by at least 7 US board-certified ophthalmologists with high intragrader consistency.
EXPOSURE Deep learning–trained algorithm.
MAIN OUTCOMES AND MEASURES The sensitivity and specificity of the algorithm for detecting
referable diabetic retinopathy (RDR), defined as moderate and worse diabetic retinopathy,
referable diabetic macular edema, or both, were generated based on the reference standard
of the majority decision of the ophthalmologist panel. The algorithm was evaluated at 2
operating points selected from the development set, one selected for high specificity and
another for high sensitivity.
RESULTS The EyePACS-1 data set consisted of 9963 images from 4997 patients (mean age, 54.4 years; 62.2% women; prevalence of RDR, 683/8878 fully gradable images [7.8%]); the Messidor-2 data set had 1748 images from 874 patients (mean age, 57.6 years; 42.6% women; prevalence of RDR, 254/1745 fully gradable images [14.6%]). For detecting RDR, the algorithm had an area under the receiver operating curve of 0.991 (95% CI, 0.988-0.993) for EyePACS-1 and 0.990 (95% CI, 0.986-0.995) for Messidor-2. Using the first operating cut point with high specificity, for EyePACS-1, the sensitivity was 90.3% (95% CI, 87.5%-92.7%) and the specificity was 98.1% (95% CI, 97.8%-98.5%). For Messidor-2, the sensitivity was 87.0% (95% CI, 81.1%-91.0%) and the specificity was 98.5% (95% CI, 97.7%-99.1%). Using a second operating point with high sensitivity in the development set, for EyePACS-1 the sensitivity was 97.5% and specificity was 93.4% and for Messidor-2 the sensitivity was 96.1% and specificity was 93.9%.
CONCLUSIONS AND RELEVANCE In this evaluation of retinal fundus photographs from adults
with diabetes, an algorithm based on deep machine learning had high sensitivity and
specificity for detecting referable diabetic retinopathy. Further research is necessary to
determine the feasibility of applying this algorithm in the clinical setting and to determine
whether use of the algorithm could lead to improved care and outcomes compared with
current ophthalmologic assessment.
JAMA. doi:10.1001/jama.2016.17216
Published online November 29, 2016.
Author Affiliations: Google Inc,
Mountain View, California (Gulshan,
Peng, Coram, Stumpe, Wu,
Narayanaswamy, Venugopalan,
Widner, Madams, Nelson, Webster);
Department of Computer Science,
University of Texas, Austin
(Venugopalan); EyePACS LLC,
San Jose, California (Cuadros); School
of Optometry, Vision Science
Graduate Group, University of
California, Berkeley (Cuadros);
Aravind Medical Research
Foundation, Aravind Eye Care
System, Madurai, India (Kim); Shri
Bhagwan Mahavir Vitreoretinal
Services, Sankara Nethralaya,
Chennai, Tamil Nadu, India (Raman);
Verily Life Sciences, Mountain View,
California (Mega); Cardiovascular
Division, Department of Medicine,
Brigham and Women’s Hospital and
Harvard Medical School, Boston,
Massachusetts (Mega).
Corresponding Author: Lily Peng,
MD, PhD, Google Research, 1600
Amphitheatre Way, Mountain View,
CA 94043 (lhpeng@google.com).
JAMA | Original Investigation | INNOVATIONS IN HEALTH CARE DELIVERY
227. Training Set / Test Set
• A CNN was trained retrospectively on 128,175 fundus images
• Each image was graded 3-7 times by a panel of 54 US ophthalmologists
• The algorithm's reads were compared with those of 7-8 top ophthalmologists
• EyePACS-1 (9,963 images), Messidor-2 (1,748 images)
[eFigure 2. Screenshot of the second screen of the grading tool, which asks graders to assess the image for DR, DME and other notable conditions or findings: a) fullscreen mode; b) a reset button that reloads the image and clears all grading; c) a comment box for other pathologies seen]
228. • AUC on EyePACS-1 and Messidor-2 = 0.991 and 0.990
• Sensitivity and specificity on par with the 7-8 ophthalmologists
• F-score: 0.95 (vs. 0.91 for the human physicians)
Additional sensitivity analyses were conducted for several subcategories: (1) detecting moderate or worse diabetic retinopathy ... The effects of data set size on algorithm performance were examined and shown to plateau at around 60,000 images.
Figure 2. Validation Set Performance for Referable Diabetic Retinopathy
[ROC curves for the algorithm with ophthalmologist operating points marked; panel A, EyePACS-1: AUC, 99.1%; 95% CI, 98.8%-99.3%; panel B, Messidor-2: AUC, 99.0%; 95% CI, 98.6%-99.5%]
Performance of the algorithm (black curve) and ophthalmologists (colored circles) for the presence of referable diabetic retinopathy (moderate or worse diabetic retinopathy or referable diabetic macular edema) on A, EyePACS-1 (8788 fully gradable images) and B, Messidor-2 (1745 fully gradable images). The black diamonds on the graph correspond to the sensitivity and specificity of the algorithm at the high-sensitivity and high-specificity operating points. In A, for the high-sensitivity operating point, specificity was 93.4% (95% CI, 92.8%-94.0%) and sensitivity was 97.5% (95% CI, 95.8%-98.7%); for the high-specificity operating point, specificity was 98.1% (95% CI, 97.8%-98.5%) and sensitivity was 90.3% (95% CI, 87.5%-92.7%). In B, for the high-sensitivity operating point, specificity was 93.9% (95% CI, 92.4%-95.3%) and sensitivity was 96.1% (95% CI, 92.4%-98.3%); for the high-specificity operating point, specificity was 98.5% (95% CI, 97.7%-99.1%) and sensitivity was 87.0% (95% CI, 81.1%-91.0%). There were 8 ophthalmologists who graded EyePACS-1 and 7 ophthalmologists who graded Messidor-2. AUC indicates area under the receiver operating characteristic curve.
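The sensitivity and specificity figures reported at each operating point come from thresholding the algorithm's score and counting outcomes against the reference standard. A minimal sketch on an invented miniature data set:

```python
# Computing sensitivity/specificity at an operating point from binary labels
# and thresholded scores. The tiny data set below is invented for illustration.

def sens_spec(labels, scores, threshold):
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

labels = [1, 1, 1, 0, 0, 0, 0, 1]          # 1 = referable retinopathy
scores = [0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.1, 0.7]
sens, spec = sens_spec(labels, scores, threshold=0.5)
print(sens, spec)  # 0.75 0.75
```

Sweeping the threshold traces out the ROC curve; lowering it yields a high-sensitivity operating point at the cost of specificity, and raising it does the reverse, which is exactly the two operating points the study reports.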
232. LETTER (excerpt) In this task, the CNN achieves 72.1 ± 0.9% (mean ± s.d.) overall accuracy (the average of individual inference class accuracies) and two dermatologists attain 65.56% and 66.0% accuracy on a subset of the validation set. Second, we validate the algorithm using a nine-class disease partition (the second-level nodes) so that the diseases of each class have similar medical treatment plans. The CNN achieves ... two trials, one using standard images and the other using dermoscopy images, which reflect the two steps that a dermatologist might use to obtain a clinical impression. The same CNN is used for all tasks. Figure 2b shows a few example images, demonstrating the difficulty in distinguishing between malignant and benign lesions, which share many visual features. Our comparison metrics are sensitivity and specificity.
Acral-lentiginous melanoma
Amelanotic melanoma
Lentigo melanoma
…
Blue nevus
Halo nevus
Mongolian spot
…
[Diagram: a skin lesion image passes through a deep convolutional neural network (Inception v3, with convolution, AvgPool, MaxPool, concat, dropout, fully connected and softmax layers) over 757 training classes, then into inference classes that vary by task, e.g. 92% malignant melanocytic lesion, 8% benign melanocytic lesion]
Deep CNN layout. Data flow is from left to right: an image of a skin lesion (for example, melanoma) is sequentially warped into a probability distribution over clinical classes of skin disease using the Google Inception v3 CNN architecture pretrained on the ImageNet dataset (1.28 million images over 1,000 generic object classes) and fine-tuned on our own dataset of 129,450 skin lesions comprising 2,032 different diseases. Training classes are defined using a novel taxonomy of skin disease and a partitioning algorithm that maps diseases into training classes (for example, acrolentiginous melanoma, amelanotic melanoma, lentigo melanoma). Inference classes are more general and are composed of one or more training classes (for example, malignant melanocytic lesions, the class of melanomas). The probability of an inference class is calculated by summing the probabilities of the training classes according to the taxonomy structure (see Methods). Inception v3 CNN architecture reproduced from https://research.googleblog.com/2016/03/train-your-own-image-classifier-with.html
GoogleNet Inception v3
• Built an in-house dataset of 129,450 dermatology lesion images
• 18 US dermatologists curated the data
• Trained a CNN (Inception v3) on the images
• Compared the algorithm's reads against those of 21 dermatologists
• Distinguishing keratinocyte carcinoma from benign seborrheic keratosis
• Distinguishing malignant melanoma from benign lesions (standard images)
• Distinguishing malignant melanoma from benign lesions (dermoscopy images)
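The inference-class computation described in the caption, summing training-class probabilities up the taxonomy, is simple to sketch. The class names mirror the slide's examples, but the probabilities below are hypothetical:

```python
# Sketch of the taxonomy-based inference described above: the probability of
# a general inference class is the sum of the probabilities of its training
# classes. The probabilities here are hypothetical.

taxonomy = {
    "malignant melanocytic lesion": [
        "acral-lentiginous melanoma", "amelanotic melanoma", "lentigo melanoma"],
    "benign melanocytic lesion": ["blue nevus", "halo nevus", "mongolian spot"],
}

training_probs = {
    "acral-lentiginous melanoma": 0.40, "amelanotic melanoma": 0.30,
    "lentigo melanoma": 0.22, "blue nevus": 0.05,
    "halo nevus": 0.02, "mongolian spot": 0.01,
}

def inference_prob(inference_class):
    return sum(training_probs[t] for t in taxonomy[inference_class])

print(round(inference_prob("malignant melanocytic lesion"), 2))  # 0.92
print(round(inference_prob("benign melanocytic lesion"), 2))     # 0.08
```

With these made-up numbers the output matches the diagram's "92% malignant / 8% benign" split, illustrating how fine-grained training classes roll up into the coarser classes used at inference time.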
233. Skin cancer classification performance of the CNN and dermatologists.
[Plots of sensitivity vs. specificity for six test sets: carcinoma (135 and 707 images), melanoma (130 and 225 images), and melanoma on dermoscopy images (111 and 1,010 images); algorithm AUCs range from 0.91 to 0.96, with individual dermatologists (21-25 per panel) and the average dermatologist marked on each plot]
A substantial number of the 21 dermatologists were less accurate than the algorithm.
The dermatologists' average performance was also below the algorithm's.
235. Skin Cancer Image Classification (TensorFlow Dev Summit 2017)
Skin cancer classification performance of
the CNN and dermatologists.
https://www.youtube.com/watch?v=toK1OSLep3s&t=419s
239. Fig 1. What can consumer wearables do? Heart rate can be measured with an oximeter built into a ring [3], muscle activity with an electromyographic sensor embedded into clothing [4], stress with an electrodermal sensor incorporated into a wristband [5], and physical activity or sleep patterns via an accelerometer in a watch [6,7]. In addition, a female's most fertile period can be identified with detailed body temperature tracking [8], while levels of mental attention can be monitored with a small number of non-gelled electroencephalogram (EEG) electrodes [9]. Levels of social interaction (also known to ...)
PLOS Medicine 2016
240. •Analyzing complex medical data and deriving insights
•Analyzing and reading medical imaging and pathology data
•Monitoring continuous data streams for prevention and prediction
AI applications in medicine
243. SEPSIS
A targeted real-time early warning score (TREWScore) for septic shock
Katharine E. Henry, David N. Hager, Peter J. Pronovost, Suchi Saria
Sepsis is a leading cause of death in the United States, with mortality highest among patients who develop septic shock. Early aggressive treatment decreases morbidity and mortality. Although automated screening tools can detect patients currently experiencing severe sepsis and septic shock, none predict those at greatest risk of developing shock. We analyzed routinely available physiological and laboratory data from intensive care unit patients and developed "TREWScore," a targeted real-time early warning score that predicts which patients will develop septic shock. TREWScore identified patients before the onset of septic shock with an area under the ROC (receiver operating characteristic) curve (AUC) of 0.83 [95% confidence interval (CI), 0.81 to 0.85]. At a specificity of 0.67, TREWScore achieved a sensitivity of 0.85 and identified patients a median of 28.2 [interquartile range (IQR), 10.6 to 94.2] hours before onset. Of those identified, two-thirds were identified before any sepsis-related organ dysfunction. In comparison, the Modified Early Warning Score, which has been used clinically for septic shock prediction, achieved a lower AUC of 0.73 (95% CI, 0.71 to 0.76). A routine screening protocol based on the presence of two of the systemic inflammatory response syndrome criteria, suspicion of infection, and either hypotension or hyperlactatemia achieved a lower sensitivity of 0.74 at a comparable specificity of 0.64. Continuous sampling of data from the electronic health records and calculation of TREWScore may allow clinicians to identify patients at risk for septic shock and provide earlier interventions that would prevent or mitigate the associated morbidity and mortality.
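TREWScore's headline numbers (AUC 0.83; sensitivity 0.85 at specificity 0.67) are ranking and operating-point metrics that can be computed directly from per-patient risk scores. Below is a minimal, library-free sketch of both computations; the scores and outcomes are hypothetical toy data, not the paper's model or cohort:

```python
def auc(scores, labels):
    """AUC as the Mann-Whitney statistic: the probability that a random
    positive case is ranked above a random negative case (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sensitivity_specificity(scores, labels, threshold):
    """Operating point when every score >= threshold triggers an alert."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical risk scores and septic-shock outcomes for eight ICU patients
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1, 1, 1, 0, 1, 0, 0, 0]
```

Sweeping the threshold traces out the ROC curve; the paper's comparison of TREWScore against MEWS and the SIRS-based screening protocol is a comparison of exactly these quantities on the same patients.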
INTRODUCTION
Seven hundred fifty thousand patients develop severe sepsis and septic shock in the United States each year. More than half of them are admitted to an intensive care unit (ICU), accounting for 10% of all ICU admissions, 20 to 30% of hospital deaths, and $15.4 billion in annual health care costs (1–3). Several studies have demonstrated that morbidity, mortality, and length of stay are decreased when severe sepsis and septic shock are identified and treated early (4–8). In particular, one study showed that mortality from septic shock increased by 7.6% with every hour that treatment was delayed after the onset of hypotension (9).
More recent studies comparing protocolized care, usual care, and early goal-directed therapy (EGDT) for patients with septic shock suggest that usual care is as effective as EGDT (10–12). Some have interpreted this to mean that usual care has improved over time and reflects important aspects of EGDT, such as early antibiotics and early aggressive fluid resuscitation (13). It is likely that continued early identification and treatment will further improve outcomes. However, the Simplified Acute Physiology Score (SAPS II), Sequential Organ Failure Assessment (SOFA) scores, Modified Early Warning Score (MEWS), and Simple Clinical Score (SCS) have been validated to assess illness severity and risk of death among septic patients (14–17). Although these scores are useful for predicting general deterioration or mortality, they typically cannot distinguish with high sensitivity and specificity which patients are at highest risk of developing a specific acute condition.
The increased use of electronic health records (EHRs), which can be queried in real time, has generated interest in automating tools that identify patients at risk for septic shock (18–20). A number of "early warning systems," "track and trigger" initiatives, "listening applications," and "sniffers" have been implemented to improve detection and timeliness of therapy for patients with severe sepsis and septic shock (18, 20–23). Although these tools have been successful at detecting patients currently experiencing severe sepsis or septic shock, none predict which patients are at highest risk of developing septic shock. The adoption of the Affordable Care Act has added to the growing excitement around predictive models derived from electronic health
244. • 80 beds across three units at Ajou University Hospital: the trauma center, the emergency room, and the medical ICU
• Eight patient vital-sign streams (oxygen saturation, blood pressure, pulse, EEG, body temperature, etc.) integrated into a single store
• AI monitors and analyzes the vital signs in real time to predict events 1–3 hours in advance
• Target conditions include arrhythmia, sepsis, acute respiratory distress syndrome (ARDS), and unplanned intubation
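The pattern described above — streaming several vitals into one store and scoring them continuously — can be sketched with a rolling window per signal. Everything below (the window length, the SpO2-trend rule, the thresholds) is a hypothetical illustration, not the hospital system's actual logic, which the slides do not detail:

```python
from collections import deque

class VitalWindow:
    """Rolling window over one vital sign; emits simple trend features."""
    def __init__(self, maxlen=60):  # e.g. the last 60 one-minute samples
        self.samples = deque(maxlen=maxlen)

    def add(self, value):
        self.samples.append(value)

    def features(self):
        vals = list(self.samples)
        mean = sum(vals) / len(vals)
        slope = (vals[-1] - vals[0]) / max(len(vals) - 1, 1)
        return {"mean": mean, "slope": slope, "last": vals[-1]}

def alert(features, low=90.0):
    # Hypothetical rule: latest SpO2 below a floor while still trending down
    return features["last"] < low and features["slope"] < 0
```

A real system would combine such features across all eight signals into a learned risk model rather than a single hand-written rule.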
247. Blood glucose management
• Blood glucose is a key measure not only for diabetes but for many metabolic diseases
• To predict the postprandial glycemic response (PPGR), common practice relies on:
• the glycemic index of each food
• the carbohydrate content of each food
• But do different people show the same glycemic response to the same food?
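PPGR is typically quantified as the 2-hour incremental area under the glucose curve (iAUC) after a meal. A minimal trapezoidal-rule version, clipping excursions below the pre-meal baseline to zero (one common iAUC convention; conventions vary):

```python
def ppgr_iauc(times_min, glucose_mg_dl):
    """PPGR as incremental area under the glucose curve (iAUC):
    trapezoidal rule over glucose above the pre-meal baseline, with
    excursions below baseline clipped to zero."""
    baseline = glucose_mg_dl[0]
    area = 0.0
    for i in range(1, len(times_min)):
        d0 = max(glucose_mg_dl[i - 1] - baseline, 0.0)
        d1 = max(glucose_mg_dl[i] - baseline, 0.0)
        area += (d0 + d1) / 2.0 * (times_min[i] - times_min[i - 1])
    return area / 60.0  # convert mg/dl*min to mg/dl*h
```

For example, a curve rising from 90 to 150 mg/dl and back over two hours, sampled every 30 minutes, yields an iAUC of 60 mg/dl·h.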
249. Article
Personalized Nutrition by Prediction of Glycemic Responses
Highlights
• High interpersonal variability in post-meal glucose observed in an 800-person cohort
• Using personal and microbiome features enables accurate glucose response prediction
• Prediction is accurate and superior to common practice in an independent cohort
• Short-term personalized dietary interventions successfully lower post-meal glucose
Authors
David Zeevi, Tal Korem, Niv Zmora, ..., Zamir Halpern, Eran Elinav, Eran Segal
Correspondence
eran.elinav@weizmann.ac.il (E.E.), eran.segal@weizmann.ac.il (E.S.)
In Brief
People eating identical meals present high variability in post-meal blood glucose response. Personalized diets created with the help of an accurate predictor of blood glucose response that integrates parameters such as dietary habits, physical activity, and gut microbiota may successfully lower post-meal blood glucose and its long-term metabolic consequences.
Zeevi et al., 2015, Cell 163, 1079–1094, November 19, 2015 ©2015 Elsevier Inc.
http://www.cell.com/cell/abstract/S0092-8674(15)01481-6
250. [Figure 1 of Zeevi et al.: study design. Per-person profiling of an 800-participant main cohort plus a 100-participant validation cohort and a 26-participant dietary intervention arm: anthropometrics, blood tests, gut microbiome (16S rRNA and metagenomics), questionnaires (food frequency, lifestyle, medical), a food/sleep/physical-activity diary kept on a smartphone-adjusted website, and 7 days of continuous glucose monitoring with a subcutaneous sensor (iPro2), including standardized meals of 50 g available carbohydrates (glucose, fructose, bread, bread and butter). PPGR is defined as the 2-hour incremental area under the glucose curve (iAUC). Totals: 5,435 days, 46,898 meals, 9.8M Calories documented, 2,532 exercises, 130K hours, 1.56M glucose measurements; bread, dairy, sweets, and vegetables were the most-documented food categories.]
http://www.cell.com/cell/abstract/S0092-8674(15)01481-6
252. http://www.cell.com/cell/abstract/S0092-8674(15)01481-6
[Figure 3 of Zeevi et al.: accurate prediction of personalized postprandial glycemic responses. A machine-learning pipeline (boosted decision trees over personal and meal features such as time, nutrients, and previous exercise) was trained on the 800-participant main cohort with leave-one-person-out cross-validation and then applied to the independent 100-participant validation cohort. Pearson correlations between predicted and CGM-measured PPGRs (iAUC, mg/dl·h): carbohydrate-only prediction R=0.38; calories-only prediction R=0.33; the full predictor R=0.68 in cross-validation on the main cohort and R=0.70 on the validation cohort.]
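The figure's panels compare predictors by the Pearson correlation R between predicted and CGM-measured PPGRs (R=0.38 for carbohydrates alone versus R=0.68–0.70 for the full boosted-decision-tree model). A stdlib computation of that metric, shown on hypothetical predicted/measured pairs:

```python
import math

def pearson_r(predicted, measured):
    """Pearson correlation between predicted and measured values."""
    n = len(predicted)
    mp = sum(predicted) / n
    mm = sum(measured) / n
    cov = sum((p - mp) * (m - mm) for p, m in zip(predicted, measured))
    sp = math.sqrt(sum((p - mp) ** 2 for p in predicted))
    sm = math.sqrt(sum((m - mm) ** 2 for m in measured))
    return cov / (sp * sm)
```

Perfectly proportional predictions give R=1.0 regardless of scale, which is why the paper also reports predictions against the identity line rather than correlation alone.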
256. In an early research project involving 600 patient cases, the team was able to
predict near-term hypoglycemic events up to 3 hours in advance of the symptoms.
IBM Watson-Medtronic
Jan 7, 2016
257. Sugar.IQ
• Based on the user's past records of food intake, the resulting blood glucose changes, insulin injections, and so on,
• Watson predicts how the user's blood glucose will change after a meal
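A toy version of the forecast-then-alert idea behind such hypoglycemia prediction: extrapolate the recent CGM trend and flag when the projection crosses a hypoglycemic threshold. The linear extrapolation below is purely illustrative — the slides do not describe Sugar.IQ's actual models:

```python
def forecast_glucose(history, horizon_min):
    """Naive linear extrapolation from the last two CGM samples.
    history: list of (time_min, glucose_mg_dl) tuples, oldest first."""
    (t0, g0), (t1, g1) = history[-2], history[-1]
    slope = (g1 - g0) / (t1 - t0)
    return g1 + slope * horizon_min

def hypo_alert(history, horizon_min=180, threshold=70.0):
    """Alert if the projected glucose at the horizon is hypoglycemic."""
    return forecast_glucose(history, horizon_min) < threshold
```

A production system would fold in meal and insulin records, as the slide describes, instead of extrapolating the glucose trace alone.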
258. ADA 2017, San Diego, Courtesy of Taeho Kim (Seoul Medical Center)
264. • Puretech Health
• A company that positions itself as a 'new kind of pharmaceutical company'
• Develops not only conventional drugs but also digital therapeutics built from games, apps, and the like
• One of its digital therapeutics recently received de novo clearance from the US FDA
267. • Puretech Health
• Its pipeline includes conventional small molecules, but also:
• Akili: games (Project EVO) aimed at improving cognitive function in ADHD, depression, Alzheimer's, and more
• Sonde: diagnosis and monitoring of mental health conditions such as depression using voice biomarkers