Professor, SAHIST, Sungkyunkwan University
Director, Digital Healthcare Institute
Yoon Sup Choi, Ph.D.
The Future of Medicine: Digital Healthcare

: Focusing on Diabetes and Endocrinology
“It's in Apple's DNA that technology alone is not enough. 

It's technology married with liberal arts.”
The Convergence of IT, BT and Medicine
Inevitable Tsunami of Change
Korean Society of Radiology, Spring Scientific Meeting, June 2017
Vinod Khosla
Co-founder and first CEO of Sun Microsystems
Partner of KPCB, CEO of Khosla Ventures
Legendary venture capitalist in Silicon Valley
“Technology will replace 80% of doctors”
https://www.youtube.com/watch?time_continue=70&v=2HMPRXstSvQ
“We should stop training radiologists right away.
It is self-evident that within five years deep learning will outperform radiologists.”
Hinton on Radiology
http://rockhealth.com/2015/01/digital-health-funding-tops-4-1b-2014-year-review/
•2017 was the biggest year ever for digital health startup funding.
•Both the number of deals and the size of individual investments hit record highs.
•There were eight mega deals of over $100M each,
•producing a significant number of unicorn companies valued at over $1B.
https://rockhealth.com/reports/2017-year-end-funding-report-the-end-of-the-beginning-of-digital-health/
FUNDING SNAPSHOT: YEAR OVER YEAR
(Chart: digital health funding and deal count by year, 2010-2017, broken out by quarter. Deal count grew from 155 in 2010 to a record 794 in 2017, and annual funding reached a record $11.5B in 2017.)
2017 was the most active year for digital health funding to date with more than $11.5B invested across a record-setting 794
deals. Q4 2017 also had record-breaking numbers, surpassing $2B across 227 deals (the most ever in one quarter). Given the
global market opportunity, increasing demand for innovation, wave of high-quality entrepreneurs flocking to the sector, and
early stage of this innovation cycle, we expect plentiful capital in 2018.
Source: StartUp Health Insights | startuphealth.com/insights Note: Report based on public data through 12/31/17 on seed (incl. accelerator), venture, corporate venture and private equity funding only. © 2018 StartUp Health LLC
https://www.slideshare.net/StartUpHealth/2017-startup-health-insights-year-end-report
https://rockhealth.com/reports/digital-health-funding-2015-year-in-review/
•Over the past three years, pharma companies such as Merck, J&J, and GSK have sharply increased their digital healthcare investments
•22 deals in 2015-2016 (equal to the total for the five years 2010-2014)
•Merck has been the most active: 24 investments ($5-7M each) through its Global Health Innovation Fund since 2009
•GSK: 6 deals since 2014 (via its VC arm, SR One), including Propeller Health
Map of healthcare-related fields (ver 0.3)

Healthcare: health management in the broad sense that uses no digital technology and is not professional medicine.
e.g., exercise, nutrition, sleep

Digital healthcare: health management that uses digital technology.
e.g., IoT, artificial intelligence, 3D printing, VR/AR

Mobile healthcare: the part of digital healthcare that uses mobile technology.
e.g., smartphones, IoT, SNS

Personal genome analysis
e.g., cancer genomics, disease risk, carrier status, drug sensitivity
e.g., wellness, ancestry analysis

Medicine: professional medical domains such as disease prevention, treatment, prescription, and management

Telehealth
Telemedicine
What is the most important factor in digital medicine?
“Data! Data! Data!” he cried.“I can’t
make bricks without clay!”
- Sherlock Holmes,“The Adventure of the Copper Beeches”
New data
are measured, stored, integrated, and analyzed
in new ways,
by new players.
Types of data
Quality and quantity of data
Wearable devices
Smartphones
Genome analysis
Artificial intelligence
SNS
Users/patients
The general public
Three Steps to Implement Digital Medicine
• Step 1. Measure the Data
• Step 2. Collect the Data
• Step 3. Insight from the Data
Digital Healthcare Industry Landscape
Data Measurement Data Integration Data Interpretation Treatment
Smartphone Gadget/Apps
DNA
Artificial Intelligence
2nd Opinion
Wearables / IoT
(ver. 3)
EMR/EHR 3D Printer
Counseling
Data Platform
Accelerator/early-VC
Telemedicine
Device
On Demand (O2O)
VR
Digital Healthcare Institute
Director, Yoon Sup Choi, Ph.D.
yoonsup.choi@gmail.com
Step 1. Measure the Data
Smartphone: the origin of healthcare innovation
2013?
The election of Pope Benedict
The Election of Pope Francis
Summer Tan / These Days
Sci Transl Med 2015
Jan 2015 WSJ
CellScope’s iPhone-enabled otoscope
http://www.firsthud.com/
Smartphone-connected dermatoscope
Skin Cancer Image Classification (TensorFlow Dev Summit 2017)
Skin cancer classification performance of the CNN and dermatologists.
https://www.youtube.com/watch?v=toK1OSLep3s&t=419s
Smartphone video microscope
automates detection of parasites in blood
SpiroSmart: spirometer using iPhone
AliveCor Heart Monitor (Kardia)
GluCase: World's First Smartphone Case Glucometer
Sleep Cycle
Digital Phenotype:
Your smartphone knows if you are depressed
Ginger.io
J Med Internet Res. 2015 Jul 15;17(7):e175.
The correlation analysis between the features and the PHQ-9 scores revealed that 6 of the 10 features were significantly correlated with the scores:
• strong correlation: circadian movement, normalized entropy, location variance
• correlation: phone usage features (usage duration and usage frequency)
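The reported relationship can be illustrated with a toy computation. The sketch below computes a Pearson correlation in pure Python over made-up feature values and PHQ-9 scores; both data sets are invented for illustration (the study's real data and feature definitions are in the JMIR paper):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical illustration: low location variance (staying in few places)
# tends to accompany higher PHQ-9 depression scores, i.e. a negative r.
location_variance = [2.8, 1.1, 0.4, 3.5, 0.9, 0.2, 2.1, 0.6]
phq9_scores       = [3,   9,  14,   2,  11,  16,   5,  12]

r = pearson_r(location_variance, phq9_scores)
print(f"r = {r:.2f}")  # strongly negative for this toy data
```

In the actual study the significance of each correlation would also be tested, which this sketch omits.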
• Users can share their own medical/health data, measured with the iPhone's sensors, to the platform
• Uses the accelerometer, microphone, gyroscope, GPS, and other sensors
• Steps, physical activity, memory, voice tremor, and more
• Addresses a long-standing problem of medical research: securing enough medical data
• Removes the physical and temporal barriers to enrolling participants (once per 3 months → once per second)
• Encourages the public to participate in medical research, increasing the number of participants
• Tens of thousands of participants signed up within 24 hours of launch
• Conducted with the users' own consent
ResearchKit
•The initial release introduced five apps for five diseases
http://www.roche.com/media/store/roche_stories/roche-stories-2015-08-10.htm
pRED app to track Parkinson’s symptoms in drug trial
Autism and Beyond: measuring facial expressions of young patients with autism
Mole Mapper: measuring morphological changes of moles
EpiWatch: measuring behavioral data of epilepsy patients
•Stanford's cardiovascular research app, myHeart
• 11,000 participants enrolled within a day of launch
• Alan Yeung, the study lead at Stanford:
“To get 11,000 participants the traditional way, we would have to recruit at 50 hospitals across the US for a year.”
•The Parkinson's disease research app, mPower
• 5,589 participants enrolled within a day of launch
• A previous effort spent $60M over five years to recruit just 800 patients
Wearable Devices
http://www.rolls-royce.com/about/our-technology/enabling-technologies/engine-health-management.aspx#sense
250 sensors to monitor the “health” of the GE turbines
Fig 1. What can consumer wearables do? Heart rate can be measured with an oximeter built into a ring [3], muscle activity with an electromyographic sensor embedded into clothing [4], stress with an electrodermal sensor incorporated into a wristband [5], and physical activity or sleep patterns via an accelerometer in a watch [6,7]. In addition, a female's most fertile period can be identified with detailed body temperature tracking [8], while levels of mental attention can be monitored with a small number of non-gelled electroencephalogram (EEG) electrodes [9]. Levels of social interaction (also known to a…) [caption truncated]
PLOS Medicine 2016
PwC Health Research Institute, Health wearables: early days (2014). HRI's survey, part of its Intelligence Series, sought to better understand American consumers' attitudes toward wearables; insurers offering incentives for use may gain traction.
Figure 2: Wearables are not mainstream – yet. Just one in five US consumers (21%) say they own a wearable technology product: 10% wear it every day, 7% a few times a week, 2% a few times a month, and 2% no longer use it.
Source: HRI/CIS Wearables consumer survey 2014
PwC, Health wearables: early days, 2014
49% of US consumers now own a wearable device (up from 21% in 2014), and 36% own more than one. We didn't even ask this question in our previous survey since it wasn't relevant at the time; that's how far we've come. Millennials are far more likely to own wearables than older adults, and adoption declines with age, although consumers aged 35 to 49 are more likely to own smart watches. Across gender, age, and ethnicity, fitness wearable technology is most popular: ownership by device type ranges from 45% (fitness band, "fitness runs away with it") down to 12% (smart glasses, including VR/AR), with smart watches, smart video/photo devices (e.g., GoPro), and smart clothing in between. (Base: respondents who currently own at least one device, n=700.)
PwC, The Wearable Life 2.0, 2016
• 49% own at least one wearable device (up from 21% in 2014)
• 36% own more than one device.
Fitbit: 21.4m devices sold, $1.8B revenue
https://clinicaltrials.gov/ct2/results?term=fitbit&Search=Search
•Although it is not a medical device, Fitbit is already widely used in clinical research
•Clinical researchers adopted it on their own, without any encouragement from Fitbit
•The number of clinical studies using Fitbit keeps growing: 80 (Mar 2016), 113 (Aug 2016), 173 (Jul 2017)
•Fitbit appears in clinical research in two main roles:
•as an intervention itself, to see whether it can increase activity or boost treatment effects
•as a tool for monitoring participants' activity levels
•1. Studies using Fitbit to increase patients' activity
•whether Fitbit increases activity in children with obesity
•whether Fitbit increases activity in patients after sleeve gastrectomy
•whether Fitbit increases activity in young patients with cystic fibrosis
•whether Fitbit motivates cancer patients to increase physical activity
•2. Studies using Fitbit to monitor the activity of enrolled patients
•assessing the health and prognosis of patients who received chemotherapy
•testing whether cash incentives increase children's/parents' activity
•measuring quality of life of brain tumor patients, alongside survey results
•assessing the activity of patients with peripheral artery disease
•A study of the effect of weight loss on breast cancer recurrence
•20% of breast cancer patients relapse, mostly with metastatic disease
•Being overweight is known to raise the risk of breast cancer,
•and obesity is known to worsen the prognosis of early breast cancer
•But there has been no study yet linking weight loss to recurrence risk
•3,200 overweight or obese early breast cancer patients will participate for two years
•Depending on the results, weight loss could enter the standard of care for breast cancer worldwide
•Fitbit is supporting the weight-loss program with:
•Fitbit Charge HR: tracks activity, calories burned, and heart rate
•Fitbit Aria Wi-Fi Smart Scale: a smart scale
•FitStar: personalized video workout coaching
2016. 4. 27.
http://nurseslabs.tumblr.com/post/82438508492/medical-surgical-nursing-mnemonics-and-tips-2
•Biogen Idec uses Fitbit to monitor multiple sclerosis patients
•to prove the effectiveness of an expensive drug and defend its reimbursement price
•Could fine-grained measurement enable early detection of MS prodromal symptoms?
Dec 23, 2014
Fitbit
Apple Watch
Nat Biotech 2015
WELT
OURA ring
• $20
• the first and only 24-hour thermometer
• constantly monitor baby’s temperature
• FDA cleared
iRhythm ZIO patch
Multisense
Google’s Smart Contact Lens
Sensor and Transmitter
• Sensor: a tiny wire inserted under the skin converts glucose into an electrical current; glucose range 40-400 mg/dL; one reading every 5 minutes, for up to 7 days
• Transmitter: converts sensor data into glucose readings (Software 505) and broadcasts the glucose data via Bluetooth to a display device
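As a rough illustration of the sensor-to-reading pipeline above, the sketch below converts a raw sensor current into a clamped glucose value and derives the reading count implied by the 5-minute, 7-day cadence. The linear calibration slope and intercept are invented for illustration; Dexcom's actual Software 505 algorithm is proprietary:

```python
def to_glucose_mg_dl(current_na, slope_na_per_mg_dl=0.05, intercept_na=1.0):
    """Convert a raw sensor current (nA) to a glucose value, then clamp it
    to the reportable range of 40-400 mg/dL.
    The slope and intercept are made-up calibration constants."""
    glucose = (current_na - intercept_na) / slope_na_per_mg_dl
    return max(40, min(400, round(glucose)))

# One reading every 5 minutes for up to 7 days:
readings_per_week = 7 * 24 * 60 // 5   # 2016 readings

print(to_glucose_mg_dl(6.5))   # (6.5 - 1.0) / 0.05 = 110 mg/dL
print(readings_per_week)       # 2016
```

Clamping to 40-400 mg/dL mirrors how values outside the sensor's reportable range are reported only as "LOW" or "HIGH".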
Dexcom G5 Mobile Continuous
Glucose Monitoring (CGM) System
for Non-Adjunctive Management
of Diabetes
July 21, 2016
Dexcom, Inc.
Clinical Chemistry and Clinical Toxicology
Devices Panel
Dexcom G5 Mobile Continuous Glucose
Monitoring (CGM) System for Non-Adjunctive
Management of Diabetes
•FDA's Clinical Chemistry and Clinical Toxicology Devices Panel
•recommended that the Dexcom G5 can replace conventional SMBG (fingerstick self-monitoring)
•Votes: safety 8:2, effectiveness 9:1, benefit vs. risk 8:2
•The G5's glucose values can differ from SMBG by about 9%,
•but SMBG meters from different makers already differ from one another by 4-9%
•In any case, most patients (69%) already use CGM in place of SMBG off-label
•Better to approve it and formally educate and manage patients
• Health Canada decided that the Dexcom G5 CGM can replace SMBG
• Physicians can now prescribe Dexcom instead of conventional SMBG
• Conventional SMBG is needed only twice a day, for calibration
• In December 2016, the FDA likewise cleared the Dexcom G5 to replace conventional glucose meters
Transmitter Receiver iPhone Apple Watch
the user needed to have all four in reasonably close proximity
Transmitter iPhone Apple Watch
“not require the user to have a separate receiver box,
though it will still require that the iPhone be in range” (2016.3) 
http://www.mobihealthnews.com/content/dexcoms-next-generation-apple-watch-cgm-app-needs-one-less-device-work
Transmitter Apple Watch
“with Bluetooth built into the Watch, users won’t need to have anything on them
but the CGM itself and their Apple Watch.” (2017.7)
http://www.mobihealthnews.com/content/dexcom-propeller-and-resound-poised-make-use-apple-watch-native-bluetooth-launch
FreeStyle Libre Flash Glucose Monitoring System
Why prick when you can scan?
http://www.freestylelibre.co.uk
Temporary Tattoo Offers Needle-Free Way to Monitor Glucose Levels
• A very mild electrical current applied to the skin for 10 minutes forces sodium ions in the fluid between skin cells to migrate toward the tattoo's electrodes.
• These ions carry glucose molecules that are also found in the fluid.
• A sensor built into the tattoo then measures the strength of the electrical charge produced by the glucose to determine a person's overall glucose levels.
GlucoWatch
• GlucoWatch 2 - Cygnus
• FDA approved and marketed in 2002
• Provides a glucose reading every 10 minutes
• … but the device was discontinued because it caused skin irritation
Will the Apple Watch gain a glucose monitoring feature?
C8 MediSensors
"From a technological standpoint, what we had done with this
was a stellar achievement in that people thought even (what
we achieved) was beyond possibility.”
- Former C8 MediSensors CEO Rudy Hofmeister
Google’s Smart Contact Lens
Soft, smart contact lenses with integrations of wireless
circuits, glucose sensors, and displays
Jihun Park,* Joohee Kim,* So-Yun Kim,* Woon Hyung Cheong, Jiuk Jang, Young-Geun Park, Kyungmin Na, Yun-Tae Kim, Jun Hyuk Heo, Chang Young Lee, Jung Heon Lee,† Franklin Bien,† Jang-Ung Park†
Recent advances in wearable electronics combined with wireless communications are essential to the realization of medical applications through health monitoring technologies. For example, a smart contact lens, which is capable of monitoring the physiological information of the eye and tear fluid, could provide real-time, noninvasive medical diagnostics. However, previous reports concerning the smart contact lens have indicated that opaque and brittle components have been used to enable the operation of the electronic device, and this could block the user's vision and potentially damage the eye. In addition, the use of expensive and bulky equipment to measure signals from the contact lens sensors could interfere with the user's external activities. Thus, we report an unconventional approach for the fabrication of a soft, smart contact lens in which glucose sensors, wireless power transfer circuits, and display pixels to visualize sensing signals in real time are fully integrated using transparent and stretchable nanostructures. The integration of this display into the smart lens eliminates the need for additional, bulky measurement equipment. This soft, smart contact lens can be transparent, providing a clear view by matching the refractive indices of its locally patterned areas. The resulting soft, smart contact lens provides real-time, wireless operation, and there are in vivo tests to monitor the glucose concentration in tears (suitable for determining the fasting glucose level in the tears of diabetic patients) and, simultaneously, to provide sensing results through the contact lens display.
INTRODUCTION
Wearable electronic devices capable of real-time monitoring of the human body can provide new ways to manage the health status and performance of individuals (1-7). Stretchable and skin-like electronics, combined with wireless communications, enable noninvasive and comfortable physiological measurements by replacing the conventional methods that use penetrating needles, rigid circuit boards, terminal connections, and power supplies (8-12). Given this background, a smart contact lens is a promising example of a wearable, health monitoring device (13, 14). The reliability and stability of soft contact lenses have been studied extensively, and significant advances have been made to minimize irritation of the eye to maximize the user's comfort. In addition, the user's tears can be collected in the contact lens by completely natural means, such as normal secretion and blinking, and used to assess various biomarkers found in the blood, such as glucose, cholesterol, sodium ions, and potassium ions (13). Thus, lenses equipped with sensors can provide noninvasive methods to continuously detect metabolites in tears. Among various biomarkers, noninvasive detection of glucose levels for the diagnosis of diabetes has been studied in numerous ways to replace conventional invasive diagnostic tests (for example, finger pricking for drawing blood), as presented in table S1. By considering the correlation between the tear glucose level and blood glucose level (15), a glucose sensor fitted on a contact lens can provide the noninvasive monitoring of the user's glucose levels from tear fluids, with a consideration of the lag time between tear glucose level and blood glucose level in the range of 10 to 20 min (13-16).

Although such a system provides many capabilities, there are some crucial issues that must be addressed before practical uses of smart contact lenses can be realized. These issues include (i) the use of opaque electronic materials for sensors, integrated circuit (IC) chips, metal antennas, and interconnects that can block users' vision (15, 17, 18); (ii) the integration of the components of the electronic device on flat and plastic substrates, resulting in buckled deformations when transformed into the curved shape for lenses, thereby creating foreign objects that can irritate users' eyes and eyelids (19); (iii) the brittle and rigid materials of the integrated electronic system, such as surface-mounted IC chips and rigid interconnects, which could damage the cornea or the eyelid (20-22); and (iv) the requirement for bulky and expensive equipment for signal measurements, which limits the use of smart contact lenses outside of research laboratories or clinical settings by restricting users' external activities (14, 15, 20, 23).

For all of the reasons stated above, we have introduced an unconventional approach for the fabrication of a soft, smart contact lens where all of the electronic components are designed with normal usability in mind. For example, the wearer's view will not be obstructed because the contact lenses are made of transparent nanomaterials. In addition, these lenses provide superb reliability because they can undergo the mechanical deformations required to fit them into the soft lens without damage. The planar, mesh-like structures of the components of the device and their interconnects enable high stretchability for the curved soft lens with no buckling. In addition, display pixels integrated in the smart contact lens allow access to real-time sensing data to eliminate the need for additional measurement equipment.

To achieve these goals, we used three strategies, as described as follows: (i) For the design of the soft contact lens, we formed soft contact lenses with highly transparent and stress-tunable hybrid structures, which are composed of mechanically reinforced islands to locate discrete electronic devices (such as rectifying circuits and display pixels) and elastic joints to locate a stretchable, transparent antenna and interconnect electrodes. The reinforced frames with small segments were…
*These authors contributed equally to this work. †Corresponding authors.
Park et al., Sci. Adv. 2018;4:eaap9841, 24 January 2018
•Limitations of previous work:
•opaque, brittle lens materials posed a risk of eye damage
•bulky sensors made the lenses uncomfortable to wear
•This work: a soft contact lens with a real-time glucose sensor
•the lens displays the sensing signal in the wearer's field of view in real time
•sensor, circuits, and display are fully integrated into the lens using transparent, stretchable nanostructures
•tested in vivo, but the accuracy of the glucose measurement itself was not tested
Science Advances 24 Jan 2018
Fig. 1. Stretchable, transparent smart contact lens system. (A) Schematic illustration of the soft, smart contact lens, which is composed of a hybrid substrate, functional devices (rectifier, LED, and glucose sensor), and a transparent, stretchable conductor (for antenna and interconnects). (B) Circuit diagram of the smart contact lens system. (C) Operation of this soft, smart contact lens: electric power is wirelessly transmitted to the lens through the antenna; this power activates the LED pixel and the glucose sensor; after detecting a glucose level in tear fluid above the threshold, this pixel turns off.
Fig. 2. Properties of a stretchable and transparent hybrid substrate. (A) Schematic image of the hybrid substrate, where the reinforced islands are embedded in the elastic substrate. (B) SEM images before (top) and during (bottom) 30% stretching; the arrow indicates the direction of stretching. Scale bars, 500 μm. (C) Effective strains on each part along the stretching direction indicated in (B). (D) AFM image of the hybrid substrate; black and blue arrows indicate the elastic region and the reinforced island, respectively. Scale bar, 5 μm. (E) Photograph of the hybrid substrates molded into contact lens shape. Scale bar, 1 cm. (F) Optical transmittance (black) and haze (red) spectra of the hybrid substrate. (G) Schematic diagram of the photographing method used to verify the optical clarity of the hybrid substrates. (H) Photographs taken with the OP-LENS-based hybrid substrate (left) and the SU8-LENS-based hybrid substrate (right) located on the camera lens.
Science Advances 24 Jan 2018
When the glucose concentration is above 0.9 mM, this pixel turns off
because the bias applied to the LED becomes below than its turn-off
turned off because the glucose concentration was over the threshold,
not because of damage to the circuit. The design is such that the LED
Fig. 5. Soft, smart contact lens for detecting glucose. (A) Schematic image of the soft, smart contact lens. The rectifier, the LED, and the glucose sensor are located on the
reinforced regions. The transparent, stretchable AgNF-based antenna and interconnects are located on an elastic region. (B) Photograph of the fabricated soft, smart contact lens.
Scale bar, 1 cm. (C) Photograph of the smart contact lens on an eye of a mannequin. Scale bar, 1 cm. (D) Photographs of the in vivo test on a live rabbit using the soft, smartcontact
lens. Left: Turn-on state of the LED in the soft, smart contact lens mounted on the rabbit’s eye. Middle: Injection of tear fluids withthe glucose concentration of 0.9 mM. Right: Turn-
off state of the LED after detecting the increased glucose concentration. Scale bars, 1 cm. (E) Heat tests while a live rabbit is wearing the operating soft, smart contact lens. Scale
bars, 1 cm.
Park et al., Sci. Adv. 2018;4:eaap9841 24 January 2018 8 of 11
onMarch7,2018http://advances.sciencemag.org/Downloadedfrom
#WeAreNotWaiting
Patient-driven medical innovation
Stanford Medicine X 2016
#WeAreNotWaiting
Dexcom G4
20 feet (6m)
Night Scout Project
•Parents hacked a continuous glucose monitor so that it uploads glucose readings to the cloud
•They can check their child's glucose anytime, anywhere, on a smartphone or smartwatch
•Built voluntarily by parents of children with type 1 diabetes, released free as open source, and installed at users' own initiative
•Because it is not a commercial medical device, it falls outside FDA regulation
Night Scout Project
OpenAPS: DIY Artificial Pancreas
Hood Thabit et al., Home Use of an Artificial Beta Cell in Type 1 Diabetes, NEJM (2015)
Home Use of an Artificial Beta Cell in Type 1 Diabetes
The proportion of time that the glycated hemoglobin level was in the target range
(primary end point) was significantly greater during the intervention period than during
the control period — by a mean of 11.0 percentage points (95% confidence interval [CI],
8.1 to 13.8; P<0.001).
Hood Thabit et al., Home Use of an Artificial Beta Cell in Type 1 Diabetes, NEJM (2015)
The overnight mean glucose level was significantly lower with the closed-loop system
than with the control system (P<0.001), and the proportion of time that the glucose level
was within the overnight target range was greater with the closed-loop system (P<0.001)
Home Use of an Artificial Beta Cell in Type 1 Diabetes
OpenAPS: DIY Artificial Pancreas
• Self-reported data from a small group – 18 of the first 40 users
• The positive glucose and quality of life impact this system has had
• 0.9% improvement in A1c (from 7.1% to 6.2%)
• a strong time-in-range improvement from 58% to 81%
• near-unanimous improvements in sleep quality
OpenAPS DIY Automated Insulin Delivery Users Report 81%
Time in Range, Better Sleep, and a 0.9% A1c Improvement
https://openaps.org/2016/06/11/real-world-use-of-open-source-artificial-pancreas-systems-poster-presented-at-american-diabetes-association-scientific-sessions/
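The time-in-range figure reported above (58% → 81%) is simply the fraction of CGM readings that fall inside a target band. A minimal sketch, assuming the conventional 70-180 mg/dL target range (the exact band used in the self-reported OpenAPS data may differ):

```python
def time_in_range(readings_mg_dl, low=70, high=180):
    """Fraction of CGM readings inside the target range [low, high]."""
    in_range = sum(1 for g in readings_mg_dl if low <= g <= high)
    return in_range / len(readings_mg_dl)

# Toy sequence of readings (a real day has 288 readings at 5-minute intervals)
day = [65, 80, 95, 120, 150, 185, 210, 170, 140, 110]
print(f"{time_in_range(day):.0%}")  # 70%
```
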
#OpenAPS rigs are shrinking in size
https://diyps.org
First FDA-approved Artificial Pancreas
http://www.fda.gov/NewsEvents/Newsroom/PressAnnouncements/ucm522974.htm
• Medtronic's MiniMed 670G is the first system FDA-approved for type 1 diabetes patients
• The pivotal trial enrolled 123 type 1 diabetes patients aged 14 and older
• Over 3 months of follow-up, HbA1c improved significantly, from 7.4% to 6.9%
• No serious adverse events such as diabetic ketoacidosis or severe hypoglycemia occurred during the period
• Medtronic plans to further validate efficacy and safety in patients aged 7-13
(2016. 9. 28)
https://myglu.org/articles/a-pathway-to-an-artificial-pancreas-an-interview-with-jdrf-s-aaron-kowalski
•Step 1: Suspend insulin delivery when glucose falls to a preset threshold.
•Step 2: Predict that glucose will fall to the threshold, and suspend or reduce insulin delivery in advance.
•Step 3: Prevent glucose not only from falling below the low threshold, but also from rising above the high threshold.
•Step 4: Target a specific glucose value rather than a range (hybrid closed-loop product).
•Step 5: Going beyond Step 4, automate mealtime bolus insulin as well.
•Step 6: Modulate additional hormones such as glucagon, not just insulin.
Six Steps of Artificial Pancreas (JDRF)
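Steps 1 and 2 of the JDRF roadmap can be sketched as a tiny control rule: suspend basal insulin when glucose is already at the threshold, or when a linear extrapolation of the current trend predicts it will get there soon. All numbers below are illustrative, not clinical parameters:

```python
def insulin_command(glucose_now, glucose_prev, basal_rate_u_per_h,
                    threshold=80, minutes_ahead=30, interval_min=5):
    """Steps 1-2 in miniature: return the basal insulin rate to deliver.
    Step 1: suspend when glucose is at/below the threshold.
    Step 2: suspend when a linear extrapolation of the last two readings
    predicts glucose will reach the threshold within `minutes_ahead`."""
    if glucose_now <= threshold:                # Step 1: low-glucose suspend
        return 0.0
    trend = (glucose_now - glucose_prev) / interval_min  # mg/dL per minute
    predicted = glucose_now + trend * minutes_ahead
    if predicted <= threshold:                  # Step 2: predictive suspend
        return 0.0
    return basal_rate_u_per_h                   # otherwise keep normal basal

print(insulin_command(75, 90, 1.0))    # 0.0 (already low)
print(insulin_command(110, 125, 1.0))  # 0.0 (falling 3 mg/dL/min)
print(insulin_command(140, 138, 1.0))  # 1.0 (stable/rising)
```

Later steps add a high-glucose bound (Step 3), a setpoint controller (Step 4), automated boluses (Step 5), and glucagon (Step 6), none of which this toy rule attempts.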
MiniMed 670G vs. OpenAPS
http://www.fda.gov/NewsEvents/Newsroom/PressAnnouncements/ucm522974.htm
•No target other than 120 mg/dl can be set
•Cannot be used for patients aged 13 or under
•Not yet approved outside the United States
•Expensive to run (about KRW 8 million upfront, plus about KRW 400,000 per month)
Courtesy of Miyeong Kim (aka 소명맘)
Step 2. Collect the Data
Sci Transl Med 2015
Google Fit
Samsung SAMI
Epic MyChart Epic EHR
Dexcom CGM
Patients/User
Devices
EHR Hospital
Withings
+
Apple Watch
Apps
HealthKit
• Apple HealthKit has partnered with 14 of the 23 leading US hospitals
• A markedly faster move than the rival platforms Google Fit and S-Health
• The CIO of Beth Israel Deaconess:
“Many of our 250,000 patients are generating all kinds of data with wearables. We cannot provide an interface to every one of these devices, but Apple can.”
2015.2.5
Step 3. Insight from the Data
Data Overload
How to Analyze and Interpret the Big Data?
and/or
Two ways to get insights from the big data
Epic MyChart Epic EHR
Dexcom CGM
Patients/User
Devices
EHR Hospital
Withings
+
Apple Watch
Apps
HealthKit
…transfer from Share2 to HealthKit as mandated by Dexcom receiver Food and Drug Administration device classification. Once the glucose values reach HealthKit, they are passively shared with the Epic MyChart app (https://www.epic.com/software-phr.php). The MyChart patient portal is a component of the Epic EHR and uses the same database, and the CGM values populate a standard glucose flowsheet in the patient's chart. This connection is initially established when a provider places an order in a patient's electronic chart, resulting in a request to the patient within the MyChart app. Once the patient or patient proxy (parent) accepts this connection request on the mobile device, a communication bridge is established between HealthKit and MyChart enabling population of CGM data as frequently as every 5 minutes.

Participation required confirmation of Bluetooth pairing of the CGM receiver to a mobile device; updating the mobile device with the most recent version of the operating system, Dexcom Share2 app, and Epic MyChart app; and confirming or establishing a username and password for all accounts, including a parent's/adolescent's Epic MyChart account. Setup time averaged 45-60 minutes in addition to the scheduled clinic visit. During this time, there was specific verbal and written notification to the patients/parents that the diabetes healthcare team would not be actively monitoring or have real-time access to CGM data, which was out of scope for this pilot. The patients/parents were advised that they should continue to contact the diabetes care team by established means for any urgent questions/concerns. Additionally, patients/parents were advised to maintain updates…

Figure 1: Overview of the CGM data communication bridge architecture.
Kumar R B, et al. J Am Med Inform Assoc 2016;0:1-6. doi:10.1093/jamia/ocv206, Brief Communication
•Continuously monitored glucose data from a Dexcom CGM was integrated into the EHR via Apple HealthKit
•The study reports improved glycemic management for diabetes patients
•Conducted by Stanford Children's Health and Stanford Medicine with 10 pediatric type 1 diabetes patients (288 readings/day)
•EHR-based data analysis and visualization improved data review and patient communication
•Patients could respond to glucose changes in near real time, instead of waiting for a clinic visit
JAMIA 2016
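The consent-gated, passive data path described above (CGM → Share2 → HealthKit → MyChart → Epic flowsheet) can be mimicked with a toy publish/subscribe model. Every class and method name below is invented for illustration and corresponds to no real Apple or Epic API:

```python
# Toy simulation of the communication bridge: samples written to the
# "HealthKit" store are passively forwarded to subscribed apps, but the
# EHR flowsheet fills only after the patient accepts the connection request.

class HealthKitStore:
    def __init__(self):
        self.samples = []
        self.subscribers = []

    def write(self, sample):
        self.samples.append(sample)
        for callback in self.subscribers:   # passive sharing with apps
            callback(sample)

class MyChartBridge:
    """Populates an EHR flowsheet only after the patient (or proxy)
    accepts the provider-initiated connection request."""
    def __init__(self, store):
        self.flowsheet = []
        self.connected = False
        store.subscribers.append(self.on_sample)

    def accept_connection(self):
        self.connected = True

    def on_sample(self, sample):
        if self.connected:
            self.flowsheet.append(sample)

store = HealthKitStore()
bridge = MyChartBridge(store)
store.write({"glucose": 132})   # before consent: not forwarded to the EHR
bridge.accept_connection()
store.write({"glucose": 127})   # after consent: lands in the flowsheet
print(len(bridge.flowsheet))    # 1
```

The point of the design, mirrored here, is that the hospital never polls the device: data flows one way, and only with the patient's explicit opt-in.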
Remote Patient Monitoring
via Dexcom-HealthKit-Epic-Stanford
GluVue
https://gluvue.stanfordchildrens.org/dashboard/?src=DEMO
© 2017 by HURAYPOSITIVE INC., a Digital Healthcare Service Provider. This information is strictly privileged and confidential. All rights reserved.
Products & Services: clinical validation of Health Switch
(Chart: HbA1c (%), y-axis 7.0-8.2, over 0/3/6/9/12 months, for the two study arms.)
•Design: crossover trial
•Phase 1 (0-6M): intervention group (with mobile intervention) vs. control group (without)
•Phase 2 (6-12M): the two groups crossed over
•Results: HbA1c fell 0.63%p in the intervention group, while the control group showed no meaningful change (▼0.04%p); after crossover, the former control group fell 0.64%p and the former intervention group maintained its HbA1c level
•Participants: N = 148, mean age 52.2 years, people with type 2 diabetes; study period 2014.10-2015.12
•Effects of Health Switch validated in the trial:
1 A meaningful glucose-lowering effect from the mobile intervention service
2 Lifestyle habits maintained after about 6 months of service
3 A simple service usable even by elderly patients
No choice but to bring AI into medicine
Martin Duggan,“IBM Watson Health - Integrated Care & the Evolution to Cognitive Computing”
•Artificial Narrow Intelligence
• AI that performs well in one specific domain
• e.g., chess, quiz shows, mail filtering, product recommendation, autonomous driving
•Artificial General Intelligence
• human-level AI across all domains
• thinking, planning, problem solving, abstraction, learning complex concepts
•Artificial Super Intelligence
• AI that surpasses humans in every domain, including science, technology, and social skills
• “Any sufficiently advanced technology is indistinguishable from magic.” - Arthur C. Clarke
When will machines achieve human-level intelligence?
(Chart: cumulative share of respondents, at the 10%/50%/90% confidence levels, predicting human-level AI per decade from 2010 to 2100, across the PT-AI, AGI, EETN, and TOP100 surveys, and combined.)
•Philosophy and Theory of AI (2011)
•Artificial General Intelligence (2012)
•EETN: Greek Association for Artificial Intelligence
•Survey of the 100 most frequently cited authors (2013)
•Combined
Superintelligence, Nick Bostrom (2014)
Superintelligence: Science or Fiction?
Panelists: Elon Musk (Tesla, SpaceX), Bart Selman (Cornell), Ray Kurzweil (Google),
David Chalmers (NYU), Nick Bostrom (FHI), Demis Hassabis (DeepMind), Stuart
Russell (Berkeley), Sam Harris, and Jaan Tallinn (CSER/FLI)
January 6-8, 2017, Asilomar, CA
https://brunch.co.kr/@kakao-it/49
https://www.youtube.com/watch?v=h0962biiZa4
Superintelligence: Science or Fiction?
Panelists: Elon Musk (Tesla, SpaceX), Bart Selman (Cornell), Ray Kurzweil (Google),
David Chalmers (NYU), Nick Bostrom (FHI), Demis Hassabis (DeepMind), Stuart
Russell (Berkeley), Sam Harris, and Jaan Tallinn (CSER/FLI)
January 6-8, 2017, Asilomar, CA
Q: Is superintelligence an achievable domain?
Q: Do you think an entity with superintelligence can emerge?

Elon Musk, Stuart Russell, Bart Selman, Ray Kurzweil, David Chalmers, Nick Bostrom, Demis Hassabis, Sam Harris, Jaan Tallinn: YES (unanimous, for both questions)

Q: Do you hope superintelligence will actually be realized?

Ray Kurzweil, Nick Bostrom, Demis Hassabis: YES
Elon Musk, Stuart Russell, Bart Selman, David Chalmers, Sam Harris, Jaan Tallinn: Complicated
https://brunch.co.kr/@kakao-it/49
https://www.youtube.com/watch?v=h0962biiZa4
http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html
Superintelligence, Nick Bostrom (2014)
Once human-baseline artificial general intelligence is achieved,

the subsequent take-off to superintelligence

may happen in an extremely short time.
How far to superintelligence
•Analyzing complex medical data and deriving insights

•Analyzing and reading medical imaging and pathology data

•Monitoring continuous data for prevention and prediction

Medical applications of AI
Jeopardy!
In 2011, Watson competed against two human champions on the quiz show and won decisively
IBM Watson on Jeopardy!
600,000 pieces of medical evidence
2 million pages of text from 42 medical journals and clinical trials
69 guidelines, 61,540 clinical trials
IBM Watson on Medicine
Watson learned...
+
1,500 lung cancer cases
physician notes, lab results and clinical research
+
14,700 hours of hands-on training
Annals of Oncology (2016) 27 (suppl_9): ix179-ix180. 10.1093/annonc/mdw601
Validation study to assess performance of IBM cognitive computing system Watson for Oncology with Manipal multidisciplinary tumour board for 1000 consecutive cases: An Indian experience
• Treatment recommendations from the MMDT (Manipal multidisciplinary tumour board) and data for 1000 cases across 4 cancers, breast (638), colon (126), rectum (124), and lung (112), treated over the last 3 years were collected.
• Of the treatment recommendations given by the MMDT, WFO classified 50% as REC, 28% as FC, and 17% as NREC
• Nearly 80% of the recommendations were in WFO REC and FC group
• 5% of the treatment provided by MMDT was not available with WFO
• The degree of concordance varied depending on the type of cancer
• WFO-REC was highest in rectum (85%) and lowest in lung (17.8%)
• Among breast cancers: high with TNBC (67.9%); lower with HER2-negative (35%)

• WFO took a median of 40 sec to capture, analyze, and give the treatment recommendation (vs a median of 15 min for the MMDT)
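A concordance analysis of this kind reduces to tabulating each case's WFO bucket against the tumour board's decision. A minimal sketch (the 50/28/17/5 split below is wired in as illustrative data mirroring the reported figures, not the study's case-level data):

```python
# Sketch: tabulating concordance between tumour-board decisions and
# Watson for Oncology's three buckets (REC = recommended,
# FC = for consideration, NREC = not recommended,
# NA = MMDT's treatment not available in WFO).

from collections import Counter

def concordance(cases):
    """cases: list of labels in {'REC', 'FC', 'NREC', 'NA'}."""
    counts = Counter(cases)
    n = len(cases)
    pct = {k: 100 * counts[k] / n for k in ("REC", "FC", "NREC", "NA")}
    # REC + FC is what the study counts as concordant ("nearly 80%").
    pct["concordant"] = pct["REC"] + pct["FC"]
    return pct

# Illustrative 1000-case distribution mirroring the reported split.
cases = ["REC"] * 500 + ["FC"] * 280 + ["NREC"] * 170 + ["NA"] * 50
result = concordance(cases)
```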
WFO in ASCO 2017
• Early experience with the IBM WFO cognitive computing system for lung and colorectal cancer treatment (Manipal Hospital)

• Over the past 3 years: lung cancer (112), colon cancer (126), rectum cancer (124)
• Concordance, lung cancer: localized 88.9%, metastatic 97.9%
• colon cancer: localized 85.5%, metastatic 76.6%
• rectum cancer: localized 96.8%, metastatic 80.6%
Performance of WFO in India
2017 ASCO annual Meeting, J Clin Oncol 35, 2017 (suppl; abstr 8527)
San Antonio Breast Cancer Symposium—December 6-10, 2016
Concordance of WFO (@T2) with MMDT (@T1* vs T2**)
(N = 638 breast cancer cases)

Time point | REC n (%) | REC + FC n (%)
T1*  | 296 (46%) | 463 (73%)
T2** | 381 (60%) | 574 (90%)
* T1: time of original treatment decision by MMDT in the past (last 1-3 years)
** T2: time (2016) of WFO's treatment advice and of MMDT's treatment decision upon blinded re-review of non-concordant cases
Tentative conclusions

•The concordance between Watson for Oncology and physicians:

•differs by cancer type,

•differs by stage within the same cancer type,

•differs by hospital and country for the same cancer type,

•and may change over time.
Principles are needed

•For which patients should we ask Watson's opinion?

•How much should we trust Watson (per cancer type)?

•Should Watson's opinion be disclosed to the patient?

•What should be done when Watson and the medical team disagree?

•Can Watson's use be reimbursed by insurance?

The quality and outcomes of care may depend on these criteria,

yet today each hospital applies its own individual standards.
Empowering the Oncology Community for Cancer Care
Genomics
Oncology
Clinical
Trial
Matching
Watson Health’s oncology clients span more than 35 hospital systems
“Empowering the Oncology Community
for Cancer Care”
Andrew Norden, KOTRA Conference, March 2017, “The Future of Health is Cognitive”
IBM Watson Health
Watson for Clinical Trial Matching (CTM)
18
1. According to the National Comprehensive Cancer Network (NCCN)
2. http://csdd.tufts.edu/files/uploads/02_-_jan_15,_2013_-_recruitment-retention.pdf© 2015 International Business Machines Corporation
Current Challenges
• Searching across eligibility criteria of clinical trials is time consuming and labor intensive
• Fewer than 5% of adult cancer patients participate in clinical trials (1)
• 37% of sites fail to meet minimum enrollment targets; 11% of sites fail to enroll a single patient (2)

The Watson solution
• Uses structured and unstructured patient data to quickly check eligibility across relevant clinical trials
• Provides eligible trial considerations ranked by relevance
• Increases speed to qualify patients

Clinical Investigators (Opportunity)
• Trials to patient: perform feasibility analysis for a trial
• Identify sites with the most potential for patient enrollment
• Optimize inclusion/exclusion criteria in protocols
• Result: faster, more efficient recruitment strategies; better designed protocols

Point of Care (Offering)
• Patient to trials: quickly find the right trial a patient might be eligible for among the 100s of open trials available
• Result: improved patient care quality and consistency; increased efficiency
•Over 16 weeks, 2,620 lung and breast cancer patients at HOG (Highlands Oncology Group) were included

•90 patients were screened against 3 Novartis breast cancer trial protocols

•Clinical trial coordinator: 1 hour 50 minutes

•Watson CTM: 24 minutes (a 78% reduction in time)

•Watson CTM automatically screened out the 94% of patients who did not meet the trial criteria
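At its core, the pre-screening step is checking each patient against a protocol's inclusion/exclusion rules; Watson CTM's contribution is extracting those facts from structured and unstructured chart data. A rule-based sketch of the matching itself (all criteria fields and patient records below are hypothetical, not Novartis protocol criteria):

```python
# Sketch: rule-based trial eligibility pre-screening. Field names and
# thresholds are illustrative assumptions.

def eligible(patient, trial):
    """Check a patient dict against a trial's inclusion/exclusion rules."""
    inc = trial["inclusion"]
    if patient["diagnosis"] != inc["diagnosis"]:
        return False
    if not (inc["min_age"] <= patient["age"] <= inc["max_age"]):
        return False
    if patient["ecog"] > inc["max_ecog"]:
        return False
    # Any matching exclusion criterion disqualifies the patient.
    return not (set(patient["conditions"]) & set(trial["exclusion"]))

trial = {
    "inclusion": {"diagnosis": "breast cancer", "min_age": 18,
                  "max_age": 75, "max_ecog": 2},
    "exclusion": {"pregnancy", "prior anthracycline"},
}
patients = [
    {"age": 54, "diagnosis": "breast cancer", "ecog": 1, "conditions": []},
    {"age": 54, "diagnosis": "breast cancer", "ecog": 1,
     "conditions": ["prior anthracycline"]},
    {"age": 80, "diagnosis": "breast cancer", "ecog": 0, "conditions": []},
]
matches = [p for p in patients if eligible(p, trial)]
```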
Watson Genomics Overview

Watson Genomics Content
• 20+ content sources, including:
 • Medical articles (23 million)
 • Drug information
 • Clinical trial information
 • Genomic information

Pipeline: Case sequenced (VCF / MAF, Log2, DGE) → Encryption → Molecular profile analysis → Pathway analysis → Drug analysis → Service analysis, reports, & visualizations
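The front of that pipeline is ingesting sequencing output such as VCF. A minimal parser for a VCF data line, illustrative only (this is not Watson Genomics' actual ingestion code; the sample variant is a well-known EGFR locus used here as an example):

```python
# Sketch: parsing one tab-separated VCF data line into a variant dict,
# the kind of record a molecular-profile analysis step would consume.

def parse_vcf_line(line):
    """Parse the 8 fixed VCF columns: CHROM POS ID REF ALT QUAL FILTER INFO."""
    chrom, pos, vid, ref, alt, qual, filt, info = (
        line.rstrip("\n").split("\t")[:8]
    )
    return {
        "chrom": chrom,
        "pos": int(pos),
        "ref": ref,
        "alt": alt,
        "filter": filt,
        # INFO is a semicolon-separated list of KEY=VALUE pairs.
        "info": dict(kv.split("=", 1) for kv in info.split(";") if "=" in kv),
    }

record = parse_vcf_line("chr7\t55259515\t.\tT\tG\t60\tPASS\tGENE=EGFR;AA=L858R")
```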
•Analyzing complex medical data and deriving insights

•Analyzing and reading medical imaging and pathology data

•Monitoring continuous data for prevention and prediction

Medical applications of AI
Deep Learning
http://theanalyticsstore.ie/deep-learning/
Fig. 4 (Russakovsky et al.) Random selection of images in the ILSVRC detection validation set. The images in the top 4 rows were taken from the ILSVRC2012 single-object localization validation set, and the images in the bottom 4 rows were collected from Flickr using scene-level queries.
http://arxiv.org/pdf/1409.0575.pdf
• Main competition

 • Classification: classify the objects in the image

 • Localization: classify and localize 'one' object in the image

 • Object detection: classify and localize 'all' objects in the image
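For the localization and detection tasks, a predicted bounding box is scored against the ground truth by intersection-over-union (IoU); in ILSVRC a prediction typically counts as correct when IoU ≥ 0.5. A minimal sketch:

```python
# Sketch: intersection-over-union (IoU), the overlap criterion used to
# decide whether a predicted bounding box matches ground truth.

def iou(box_a, box_b):
    """Boxes as (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (may be empty).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

score = iou((0, 0, 10, 10), (5, 0, 15, 10))  # overlap 50 over union 150
```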
Fig. 7 (Russakovsky et al.) Tasks in ILSVRC. The first column shows the ground truth labeling on an example image, and the next three show three sample outputs with the corresponding evaluation score.
http://arxiv.org/pdf/1409.0575.pdf
Performance of winning entries in the ILSVRC2010-2015 competitions
in each of the three tasks
http://image-net.org/challenges/LSVRC/2015/results#loc
[Charts: winning-entry performance by year — image classification error (2010-2015), single-object localization error (2011-2015), object detection average precision (2013-2015)]
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun, “Deep Residual Learning for Image Recognition”, 2015
How deep is deep?
http://image-net.org/challenges/LSVRC/2015/results
Localization
Classification
http://image-net.org/challenges/LSVRC/2015/results
http://venturebeat.com/2015/12/25/5-deep-learning-startups-to-follow-in-2016/
DeepFace: Closing the Gap to Human-Level
Performance in FaceVerification
Taigman,Y. et al. (2014). DeepFace: Closing the Gap to Human-Level Performance in FaceVerification, CVPR’14.
Figure 2. Outline of the DeepFace architecture. A front-end of a single convolution-pooling-convolution filtering on the rectified input, followed by three
locally-connected layers and two fully-connected layers. Colors illustrate feature maps produced at each layer. The net includes more than 120 million
parameters, where more than 95% come from the local and fully connected layers.
These first layers have very few parameters; they merely expand the input into a set of simple local features.

The subsequent layers (L4, L5 and L6) are instead locally connected [13, 16]: like a convolutional layer they apply a filter bank, but every location in the feature map learns a different set of filters, since different regions of an aligned image have different local statistics.

The goal of training is to maximize the probability of the correct class (face id). This is achieved by minimizing the cross-entropy loss for each training sample. If k is the index of the true label for a given input, the loss is L = −log p_k, minimized over the parameters by computing the gradient of L w.r.t. the parameters.
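The cross-entropy objective mentioned above, L = −log p_k with p the softmax of the network's outputs, can be written out directly. A pure-Python sketch with toy logits (real training would compute this over mini-batches with autodiff):

```python
# Sketch: softmax cross-entropy, the classification loss described above.

import math

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(logits, true_index):
    """L = -log p_k, where p is the softmax of the logits and k the label."""
    return -math.log(softmax(logits)[true_index])

loss = cross_entropy([2.0, 1.0, 0.1], true_index=0)
```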
Human: 95% vs. DeepFace in Facebook: 97.35%
Recognition Accuracy for Labeled Faces in the Wild (LFW) dataset (13,233 images, 5,749 people)
FaceNet:A Unified Embedding for Face
Recognition and Clustering
Schroff, F. et al. (2015). FaceNet:A Unified Embedding for Face Recognition and Clustering
Human: 95% vs. FaceNet of Google: 99.63%
Recognition Accuracy for Labeled Faces in the Wild (LFW) dataset (13,233 images, 5,749 people)
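FaceNet verifies whether two faces belong to the same person by thresholding the Euclidean distance between their embeddings. A toy sketch of that decision rule (the 3-d vectors and threshold are illustrative assumptions; FaceNet embeddings are 128-dimensional):

```python
# Sketch: FaceNet-style verification — same person iff the embedding
# distance falls below a threshold. Vectors and threshold are toy values.

import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def same_person(emb_a, emb_b, threshold=1.0):
    return euclidean(emb_a, emb_b) < threshold

anchor   = [0.10, 0.90, 0.20]   # toy 3-d embeddings
positive = [0.12, 0.88, 0.19]   # same identity: close to the anchor
negative = [0.90, 0.10, 0.70]   # different identity: far from the anchor
```

The same distance is what agglomerative clustering uses in the face-clustering result discussed below the accuracy figures.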
False accept / False reject
Figure 6. LFW errors. This shows all pairs of images that were incorrectly classified on LFW. Only eight of the 13 errors shown here are actual errors; the other four are mislabeled in LFW.

5.7. Performance on Youtube Faces DB
We use the average similarity of all pairs of the first one hundred frames that our face detector detects in each video. This gives us a classification accuracy of 95.12%±0.39. Using the first one thousand frames results in 95.18%. Compared to [17] at 91.4%, who also evaluate one hundred frames per video, we reduce the error rate by almost half. DeepId2+ [15] achieved 93.2% and our method reduces this error by 30%, comparable to our improvement on LFW.

5.8. Face Clustering
Our compact embedding lends itself to be used to cluster a user's personal photos into groups of people with the same identity. The constraints in assignment imposed by clustering faces, compared to the pure verification task, lead to truly amazing results. Figure 7 shows one cluster in a user's personal photo collection, generated using agglomerative clustering. It is a clear showcase of the incredible invariance to occlusion, lighting, pose and even age.

Figure 7. Face Clustering. Shown is an exemplar cluster for one user. All these images in the user's personal photo collection were clustered together.

6. Summary
We provide a method to directly learn an embedding into a Euclidean space for face verification. This sets it apart from other methods [15, 17] that use the CNN bottleneck layer, or require additional post-processing such as concatenation of multiple models and PCA, as well as SVM classification. Our end-to-end training both simplifies the setup and shows that directly optimizing a loss relevant to the task at hand improves performance.
Show and Tell:
A Neural Image Caption Generator
Vinyals, O. et al. (2015). Show and Tell:A Neural Image Caption Generator, arXiv:1411.4555
Figure 1. NIC, our model, is based end-to-end on a neural network consisting of a vision CNN followed by a language-generating RNN. Example output: "A group of people shopping at an outdoor market. There are many vegetables at the fruit stand."
Show and Tell:
A Neural Image Caption Generator
Vinyals, O. et al. (2015). Show and Tell:A Neural Image Caption Generator, arXiv:1411.4555
Figure 5. A selection of evaluation results, grouped by human rating.
Radiologist
Medical Imaging AI Startups by Applications
Bone Age Assessment
• Male: 28 classes
• Female: 20 classes
• Method: G.P. (Greulich-Pyle)
• Top-3 accuracy: 95.28% (F), 81.55% (M)
Business Area
Medical Image Analysis
VUNOnet and our machine learning technology will help doctors and hospitals manage medical scans and images intelligently, making diagnosis faster and more accurate.
[Figure: original image vs automatic segmentation — Normal, Emphysema, Reticular Opacity]
Our system finds DILDs with the highest accuracy (*DILDs: diffuse interstitial lung disease)
Digital Radiologist
Collaboration with Prof. Joon Beom Seo (Asan Medical Center)
Analysed 1200 patients for 3 months
Med Phys. 2013 May;40(5):051912. doi: 10.1118/1.4802214.
Feature Engineering vs Feature Learning
• Visualization of hand-crafted features vs learned features in 2D
Bench to Bedside : Practical Applications
• Content-based Case Retrieval
–Finding similar cases with a clinically matching context: a search engine for medical images.
–Clinicians can refer to the diagnoses and prognoses of similar past patients to make better clinical decisions.
–Accepted for presentation at RSNA 2017
Digital Radiologist
•In October 2017, Zebra Medical Vision launched a service that reads radiology images for $1 each

•The final list of findings is not yet confirmed, but is expected to include Pulmonary Hypertension, Lung Nodule, Fatty Liver, Emphysema, Coronary Calcium Scoring, Bone Mineral Density, and Aortic Aneurysm
https://www.zebra-med.com/aione/
Zebra Medical Vision’s AI1: AI at Your Fingertips
https://www.youtube.com/watch?v=0PGgCpXa-Fs
Detection of Diabetic Retinopathy
Diabetic retinopathy

• A major complication of diabetes: develops in 90% of patients who have had diabetes for 30+ years

• Ophthalmologists photograph the fundus (the inside of the eye) and read the images

• Diagnosed by assessing retinal microvascular proliferation, hemorrhage, and exudates
Copyright 2016 American Medical Association. All rights reserved.
Development and Validation of a Deep Learning Algorithm
for Detection of Diabetic Retinopathy
in Retinal Fundus Photographs
Varun Gulshan, PhD; Lily Peng, MD, PhD; Marc Coram, PhD; Martin C. Stumpe, PhD; Derek Wu, BS; Arunachalam Narayanaswamy, PhD;
Subhashini Venugopalan, MS; Kasumi Widner, MS; Tom Madams, MEng; Jorge Cuadros, OD, PhD; Ramasamy Kim, OD, DNB;
Rajiv Raman, MS, DNB; Philip C. Nelson, BS; Jessica L. Mega, MD, MPH; Dale R. Webster, PhD
IMPORTANCE Deep learning is a family of computational methods that allow an algorithm to
program itself by learning from a large set of examples that demonstrate the desired
behavior, removing the need to specify rules explicitly. Application of these methods to
medical imaging requires further assessment and validation.
OBJECTIVE To apply deep learning to create an algorithm for automated detection of diabetic
retinopathy and diabetic macular edema in retinal fundus photographs.
DESIGN AND SETTING A specific type of neural network optimized for image classification
called a deep convolutional neural network was trained using a retrospective development
data set of 128 175 retinal images, which were graded 3 to 7 times for diabetic retinopathy,
diabetic macular edema, and image gradability by a panel of 54 US licensed ophthalmologists
and ophthalmology senior residents between May and December 2015. The resultant
algorithm was validated in January and February 2016 using 2 separate data sets, both
graded by at least 7 US board-certified ophthalmologists with high intragrader consistency.
EXPOSURE Deep learning–trained algorithm.
MAIN OUTCOMES AND MEASURES The sensitivity and specificity of the algorithm for detecting
referable diabetic retinopathy (RDR), defined as moderate and worse diabetic retinopathy,
referable diabetic macular edema, or both, were generated based on the reference standard
of the majority decision of the ophthalmologist panel. The algorithm was evaluated at 2
operating points selected from the development set, one selected for high specificity and
another for high sensitivity.
RESULTS The EyePACS-1 data set consisted of 9963 images from 4997 patients (mean age, 54.4 years; 62.2% women; prevalence of RDR, 683/8878 fully gradable images [7.8%]); the Messidor-2 data set had 1748 images from 874 patients (mean age, 57.6 years; 42.6% women; prevalence of RDR, 254/1745 fully gradable images [14.6%]). For detecting RDR, the algorithm had an area under the receiver operating curve of 0.991 (95% CI, 0.988-0.993) for EyePACS-1 and 0.990 (95% CI, 0.986-0.995) for Messidor-2. Using the first operating cut point with high specificity, for EyePACS-1, the sensitivity was 90.3% (95% CI, 87.5%-92.7%) and the specificity was 98.1% (95% CI, 97.8%-98.5%). For Messidor-2, the sensitivity was 87.0% (95% CI, 81.1%-91.0%) and the specificity was 98.5% (95% CI, 97.7%-99.1%). Using a second operating point with high sensitivity in the development set, for EyePACS-1 the sensitivity was 97.5% and specificity was 93.4% and for Messidor-2 the sensitivity was 96.1% and specificity was 93.9%.
CONCLUSIONS AND RELEVANCE In this evaluation of retinal fundus photographs from adults
with diabetes, an algorithm based on deep machine learning had high sensitivity and
specificity for detecting referable diabetic retinopathy. Further research is necessary to
determine the feasibility of applying this algorithm in the clinical setting and to determine
whether use of the algorithm could lead to improved care and outcomes compared with
current ophthalmologic assessment.
JAMA. doi:10.1001/jama.2016.17216
Published online November 29, 2016.
Author Affiliations: Google Inc,
Mountain View, California (Gulshan,
Peng, Coram, Stumpe, Wu,
Narayanaswamy, Venugopalan,
Widner, Madams, Nelson, Webster);
Department of Computer Science,
University of Texas, Austin
(Venugopalan); EyePACS LLC,
San Jose, California (Cuadros); School
of Optometry, Vision Science
Graduate Group, University of
California, Berkeley (Cuadros);
Aravind Medical Research
Foundation, Aravind Eye Care
System, Madurai, India (Kim); Shri
Bhagwan Mahavir Vitreoretinal
Services, Sankara Nethralaya,
Chennai, Tamil Nadu, India (Raman);
Verily Life Sciences, Mountain View,
California (Mega); Cardiovascular
Division, Department of Medicine,
Brigham and Women’s Hospital and
Harvard Medical School, Boston,
Massachusetts (Mega).
Corresponding Author: Lily Peng,
MD, PhD, Google Research, 1600
Amphitheatre Way, Mountain View,
CA 94043 (lhpeng@google.com).
JAMA | Original Investigation | INNOVATIONS IN HEALTH CARE DELIVERY
Copyright 2016 American Medical Association. All rights reserved.
Case Study: TensorFlow in Medicine - Retinal Imaging (TensorFlow Dev Summit 2017)
Inception-v3 (aka GoogleNet)
https://research.googleblog.com/2016/03/train-your-own-image-classifier-with.html
https://arxiv.org/abs/1512.00567
Training Set / Test Set
• A CNN was retrospectively trained on 128,175 fundus images

• The data were graded 3-7 times by 54 US-licensed ophthalmologists

• The algorithm's readings were compared against those of 7-8 leading ophthalmologists

• Validation sets: EyePACS-1 (9,963 images), Messidor-2 (1,748 images)

a) Fullscreen mode
b) Hit reset to reload this image. This will reset all of the grading.
c) Comment box for other pathologies you see
eFigure 2. Screenshot of the Second Screen of the Grading Tool, Which Asks Graders to Assess the
Image for DR, DME and Other Notable Conditions or Findings
• AUC of 0.991 on EyePACS-1 and 0.990 on Messidor-2

• Sensitivity and specificity on par with the 7-8 ophthalmologists

• F-score: 0.95 (vs 0.91 for the human ophthalmologists)
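The paper reports sensitivity and specificity at two operating points on the ROC curve. The computation behind those two numbers is a simple threshold tabulation; a sketch with toy scores and labels (not the study's data):

```python
# Sketch: sensitivity and specificity at one operating point (threshold)
# of a classifier's score. Scores/labels below are toy values.

def sens_spec(scores, labels, threshold):
    """labels: 1 = referable DR, 0 = not. Positive if score >= threshold."""
    tp = sum(s >= threshold and y == 1 for s, y in zip(scores, labels))
    fn = sum(s < threshold and y == 1 for s, y in zip(scores, labels))
    tn = sum(s < threshold and y == 0 for s, y in zip(scores, labels))
    fp = sum(s >= threshold and y == 0 for s, y in zip(scores, labels))
    return tp / (tp + fn), tn / (tn + fp)

scores = [0.95, 0.80, 0.60, 0.40, 0.20, 0.05]
labels = [1,    1,    0,    1,    0,    0]
sensitivity, specificity = sens_spec(scores, labels, threshold=0.5)
```

Lowering the threshold trades specificity for sensitivity, which is exactly why the study publishes a high-sensitivity and a high-specificity operating point.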
Additional sensitivity analyses were conducted for several subcategories, including detecting moderate or worse diabetic retinopathy. The effect of data set size on algorithm performance was examined and shown to plateau at around 60 000 images.
Figure 2. Validation Set Performance for Referable Diabetic Retinopathy
[ROC curves, sensitivity (%) vs 1 − specificity (%): A, EyePACS-1 (AUC, 99.1%; 95% CI, 98.8%-99.3%); B, Messidor-2 (AUC, 99.0%; 95% CI, 98.6%-99.5%); high-sensitivity and high-specificity operating points marked on each curve]
Performance of the algorithm (black curve) and ophthalmologists (colored
circles) for the presence of referable diabetic retinopathy (moderate or worse
diabetic retinopathy or referable diabetic macular edema) on A, EyePACS-1
(8788 fully gradable images) and B, Messidor-2 (1745 fully gradable images).
The black diamonds on the graph correspond to the sensitivity and specificity of
the algorithm at the high-sensitivity and high-specificity operating points.
In A, for the high-sensitivity operating point, specificity was 93.4% (95% CI,
92.8%-94.0%) and sensitivity was 97.5% (95% CI, 95.8%-98.7%); for the
high-specificity operating point, specificity was 98.1% (95% CI, 97.8%-98.5%)
and sensitivity was 90.3% (95% CI, 87.5%-92.7%). In B, for the high-sensitivity
operating point, specificity was 93.9% (95% CI, 92.4%-95.3%) and sensitivity
was 96.1% (95% CI, 92.4%-98.3%); for the high-specificity operating point,
specificity was 98.5% (95% CI, 97.7%-99.1%) and sensitivity was 87.0% (95%
CI, 81.1%-91.0%). There were 8 ophthalmologists who graded EyePACS-1 and 7
ophthalmologists who graded Messidor-2. AUC indicates area under the
receiver operating characteristic curve.
Skin Cancer
ABCDE checklist
LETTER doi:10.1038/nature21056
Dermatologist-level classification of skin cancer
with deep neural networks
Andre Esteva1*, Brett Kuprel1*, Roberto A. Novoa2,3, Justin Ko2, Susan M. Swetter2,4, Helen M. Blau5 & Sebastian Thrun6
Skin cancer, the most common human malignancy1–3
, is primarily
diagnosed visually, beginning with an initial clinical screening
and followed potentially by dermoscopic analysis, a biopsy and
histopathological examination. Automated classification of skin
lesions using images is a challenging task owing to the fine-grained
variability in the appearance of skin lesions. Deep convolutional
neural networks (CNNs)4,5
show potential for general and highly
variable tasks across many fine-grained object categories6–11
.
Here we demonstrate classification of skin lesions using a single
CNN, trained end-to-end from images directly, using only pixels
and disease labels as inputs. We train a CNN using a dataset of
129,450 clinical images—two orders of magnitude larger than
previous datasets12
—consisting of 2,032 different diseases. We
test its performance against 21 board-certified dermatologists on
biopsy-proven clinical images with two critical binary classification
use cases: keratinocyte carcinomas versus benign seborrheic
keratoses; and malignant melanomas versus benign nevi. The first
case represents the identification of the most common cancers, the
second represents the identification of the deadliest skin cancer.
The CNN achieves performance on par with all tested experts
across both tasks, demonstrating an artificial intelligence capable
of classifying skin cancer with a level of competence comparable to
dermatologists. Outfitted with deep neural networks, mobile devices
can potentially extend the reach of dermatologists outside of the
clinic. It is projected that 6.3 billion smartphone subscriptions will
exist by the year 2021 (ref. 13) and can therefore potentially provide
low-cost universal access to vital diagnostic care.
There are 5.4 million new cases of skin cancer in the United States2
every year. One in five Americans will be diagnosed with a cutaneous
malignancy in their lifetime. Although melanomas represent fewer than
5% of all skin cancers in the United States, they account for approxi-
mately 75% of all skin-cancer-related deaths, and are responsible for
over 10,000 deaths annually in the United States alone. Early detection
is critical, as the estimated 5-year survival rate for melanoma drops
from over 99% if detected in its earliest stages to about 14% if detected
in its latest stages. We developed a computational method which may
allow medical practitioners and patients to proactively track skin
lesions and detect cancer earlier. By creating a novel disease taxonomy,
and a disease-partitioning algorithm that maps individual diseases into
training classes, we are able to build a deep learning system for auto-
mated dermatology.
Previous work in dermatological computer-aided classification12,14,15
has lacked the generalization capability of medical practitioners
owing to insufficient data and a focus on standardized tasks such as
dermoscopy16–18
and histological image classification19–22
. Dermoscopy
images are acquired via a specialized instrument and histological
images are acquired via invasive biopsy and microscopy; whereby
both modalities yield highly standardized images. Photographic
images (for example, smartphone images) exhibit variability in factors
such as zoom, angle and lighting, making classification substantially
more challenging23,24
. We overcome this challenge by using a data-
driven approach—1.41 million pre-training and training images
make classification robust to photographic variability. Many previous
techniques require extensive preprocessing, lesion segmentation and
extraction of domain-specific visual features before classification. By
contrast, our system requires no hand-crafted features; it is trained
end-to-end directly from image labels and raw pixels, with a single
network for both photographic and dermoscopic images. The existing
body of work uses small datasets of typically less than a thousand
images of skin lesions16,18,19
, which, as a result, do not generalize well
to new images. We demonstrate generalizable classification with a new
dermatologist-labelled dataset of 129,450 clinical images, including
3,374 dermoscopy images.
Deep learning algorithms, powered by advances in computation
and very large datasets25
, have recently been shown to exceed human
performance in visual tasks such as playing Atari games26
, strategic
board games like Go27
and object recognition6
. In this paper we
outline the development of a CNN that matches the performance of
dermatologists at three key diagnostic tasks: melanoma classification,
melanoma classification using dermoscopy and carcinoma
classification. We restrict the comparisons to image-based classification.
We utilize a GoogleNet Inception v3 CNN architecture9
that was pre-
trained on approximately 1.28 million images (1,000 object categories)
from the 2014 ImageNet Large Scale Visual Recognition Challenge6
,
and train it on our dataset using transfer learning28
. Figure 1 shows the
working system. The CNN is trained using 757 disease classes. Our
dataset is composed of dermatologist-labelled images organized in a
tree-structured taxonomy of 2,032 diseases, in which the individual
diseases form the leaf nodes. The images come from 18 different
clinician-curated, open-access online repositories, as well as from
clinical data from Stanford University Medical Center. Figure 2a shows
a subset of the full taxonomy, which has been organized clinically and
visually by medical experts. We split our dataset into 127,463 training
and validation images and 1,942 biopsy-labelled test images.
To take advantage of fine-grained information contained within the
taxonomy structure, we develop an algorithm (Extended Data Table 1)
to partition diseases into fine-grained training classes (for example,
amelanotic melanoma and acrolentiginous melanoma). During
inference, the CNN outputs a probability distribution over these fine
classes. To recover the probabilities for coarser-level classes of interest
(for example, melanoma) we sum the probabilities of their descendants
(see Methods and Extended Data Fig. 1 for more details).
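The descendant-summing step described above is a small aggregation over the disease taxonomy. A sketch under an illustrative taxonomy fragment (class names taken from the paper's figure; the probability values are toy numbers):

```python
# Sketch: recovering coarse-class probabilities by summing the CNN's
# fine-grained (leaf) probabilities over each coarse class's descendants.

taxonomy = {  # coarse inference class -> fine-grained training classes
    "melanoma": ["amelanotic melanoma", "acrolentiginous melanoma",
                 "lentigo melanoma"],
    "benign nevus": ["blue nevus", "halo nevus", "mongolian spot"],
}

def coarse_probs(fine_probs):
    """Sum leaf probabilities up to each coarse class of interest."""
    return {coarse: sum(fine_probs.get(leaf, 0.0) for leaf in leaves)
            for coarse, leaves in taxonomy.items()}

fine_probs = {"amelanotic melanoma": 0.50, "acrolentiginous melanoma": 0.30,
              "lentigo melanoma": 0.12, "blue nevus": 0.05,
              "halo nevus": 0.02, "mongolian spot": 0.01}
probs = coarse_probs(fine_probs)
```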
We validate the effectiveness of the algorithm in two ways, using
nine-fold cross-validation. First, we validate the algorithm using a
three-class disease partition—the first-level nodes of the taxonomy,
which represent benign lesions, malignant lesions and non-neoplastic
1 Department of Electrical Engineering, Stanford University, Stanford, California, USA. 2 Department of Dermatology, Stanford University, Stanford, California, USA. 3 Department of Pathology, Stanford University, Stanford, California, USA. 4 Dermatology Service, Veterans Affairs Palo Alto Health Care System, Palo Alto, California, USA. 5 Baxter Laboratory for Stem Cell Biology, Department of Microbiology and Immunology, Institute for Stem Cell Biology and Regenerative Medicine, Stanford University, Stanford, California, USA. 6 Department of Computer Science, Stanford University, Stanford, California, USA.
*These authors contributed equally to this work.
© 2017 Macmillan Publishers Limited, part of Springer Nature. All rights reserved.
In this task, the CNN achieves 72.1 ± 0.9% (mean ± s.d.) overall accuracy (the average of individual inference class accuracies) and two dermatologists attain 65.56% and 66.0% accuracy on a subset of the validation set. Second, we validate the algorithm using a nine-class disease partition—the second-level nodes—so that the diseases of each class have similar medical treatment plans. The CNN achieves 55.4 ± 1.7% overall accuracy, and the same two dermatologists attain 53.3% and 55.0% accuracy. […] We test performance in two trials, one using standard images and the other using dermoscopy images, which reflect the two steps that a dermatologist might carry out to obtain a clinical impression. The same CNN is used for all images. Figure 2b shows a few example images, demonstrating the difficulty in distinguishing between malignant and benign lesions, which share many visual features. Our comparison metrics are sensitivity and specificity.
[Figure 1 schematic: a skin lesion image passes through the deep convolutional neural network (Inception v3) layers (convolution, AvgPool, MaxPool, concat, dropout, fully connected, softmax), producing probabilities over the 757 training classes (for example, acral-lentiginous melanoma, amelanotic melanoma, lentigo melanoma, …, blue nevus, halo nevus, Mongolian spot, …); these are aggregated into inference classes that vary by task, for example 92% malignant melanocytic lesion versus 8% benign melanocytic lesion.]
Figure 1 | Deep CNN layout. Our classification technique is a deep CNN. Data flow is from left to right: an image of a skin lesion (for example, melanoma) is sequentially warped into a probability distribution over clinical classes of skin disease using the Google Inception v3 CNN architecture, pretrained on the ImageNet dataset (1.28 million images over 1,000 generic object classes) and fine-tuned on our own dataset of 129,450 skin lesions comprising 2,032 different diseases. The 757 training classes are defined using a novel taxonomy of skin disease and a partitioning algorithm that maps diseases into training classes (for example, acrolentiginous melanoma, amelanotic melanoma, lentigo melanoma). Inference classes are more general and are composed of one or more training classes (for example, malignant melanocytic lesions—the class of melanomas). The probability of an inference class is calculated by summing the probabilities of the training classes according to taxonomy structure (see Methods). Inception v3 CNN architecture reprinted from https://research.googleblog.com/2016/03/train-your-own-image-classifier-with.html
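The caption's recipe (a backbone pretrained on ImageNet, fine-tuned on the target dataset) can be illustrated in miniature. The sketch below is a loose NumPy analogy under stated assumptions: a fixed random ReLU projection stands in for the frozen pretrained Inception v3 feature extractor, and only a small softmax head is trained on synthetic two-class data. Real fine-tuning would use a deep-learning framework and typically also update the backbone weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class "images": the two classes are shifted Gaussian clusters.
n, d, h, k = 200, 32, 64, 2
X = rng.normal(size=(n, d)) + np.where(np.arange(n) < n // 2, 2.0, -2.0)[:, None]
y = (np.arange(n) >= n // 2).astype(int)

# Frozen "backbone": a fixed random ReLU projection, never updated,
# standing in for the pretrained feature extractor.
W_frozen = rng.normal(size=(d, h)) / np.sqrt(d)
feats = np.maximum(X @ W_frozen, 0.0)

# Trainable "head": plain softmax regression on the frozen features.
W_head = np.zeros((h, k))
onehot = np.eye(k)[y]
for _ in range(300):
    logits = feats @ W_head
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W_head -= 0.5 * feats.T @ (p - onehot) / n  # gradient step on cross-entropy

accuracy = ((feats @ W_head).argmax(axis=1) == y).mean()
```

The point of the analogy is the division of labor: the expensive representation is reused, and only the task-specific classifier is fit to the new labels.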
GoogleNet Inception v3
• Built a dataset of 129,450 dermatology lesion images
• Images drawn from 18 clinician-curated, open-access online repositories plus Stanford clinical data, labelled by dermatologists
• Trained a CNN (Inception v3) on the images
• Compared the AI's readings against 21 board-certified dermatologists on three tasks:
• distinguishing keratinocyte carcinoma from benign seborrheic keratosis
• distinguishing malignant melanoma from benign lesions (standard images)
• distinguishing malignant melanoma from benign lesions (dermoscopy images)
Figure 2 | Skin cancer classification performance of the CNN and dermatologists. [ROC panels plotting sensitivity against specificity for six tasks: melanoma (130 and 225 images), melanoma dermoscopy (111 and 1,010 images), and carcinoma (135 and 707 images), with algorithm AUCs between 0.91 and 0.96; each panel also plots the individual dermatologists (21, 22, or 25 per task) and their average.]
A substantial number of the 21 dermatologists were less accurate than the AI,
and the dermatologists' average performance also fell short of the AI's.
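The comparison metrics here (sensitivity, specificity, and the AUC that summarizes their trade-off across thresholds) can be computed directly from raw classifier scores. A self-contained sketch on synthetic scores, not the study's data:

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC as the probability that a random positive case outscores a
    random negative case (ties counted half)."""
    pos = scores[labels == 1][:, None]
    neg = scores[labels == 0][None, :]
    return (pos > neg).mean() + 0.5 * (pos == neg).mean()

def sensitivity_specificity(scores, labels, threshold):
    pred = scores >= threshold
    sens = (pred & (labels == 1)).sum() / (labels == 1).sum()
    spec = (~pred & (labels == 0)).sum() / (labels == 0).sum()
    return sens, spec

# Synthetic scores: positives drawn from a higher-scoring distribution.
rng = np.random.default_rng(1)
labels = np.r_[np.ones(100, dtype=int), np.zeros(100, dtype=int)]
scores = np.r_[rng.normal(1.5, 1.0, 100), rng.normal(0.0, 1.0, 100)]

auc = roc_auc(scores, labels)
sens, spec = sensitivity_specificity(scores, labels, threshold=0.75)
```

Sweeping the threshold traces out the ROC curve on which each dermatologist appears as a single (sensitivity, specificity) point.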
Skin Cancer Image Classification (TensorFlow Dev Summit 2017)
https://www.youtube.com/watch?v=toK1OSLep3s&t=419s
http://www.rolls-royce.com/about/our-technology/enabling-technologies/engine-health-management.aspx#sense
250 sensors to monitor the “health” of the GE turbines
Fig 1. What can consumer wearables do? Heart rate can be measured with an oximeter built into a ring [3], muscle activity with an electromyographic sensor embedded into clothing [4], stress with an electrodermal sensor incorporated into a wristband [5], and physical activity or sleep patterns via an accelerometer in a watch [6,7]. In addition, a female's most fertile period can be identified with detailed body temperature tracking [8], while levels of mental attention can be monitored with a small number of non-gelled electroencephalogram (EEG) electrodes [9]. Levels of social interaction […]
PLOS Medicine 2016
AI applications in medicine
• Analyzing complex medical data and deriving insights
• Analyzing and reading medical images and pathology data
• Monitoring continuous data for prediction and prevention
Project Artemis at UOIT
SEPSIS
A targeted real-time early warning score (TREWScore)
for septic shock
Katharine E. Henry, David N. Hager, Peter J. Pronovost, Suchi Saria*
Sepsis is a leading cause of death in the United States, with mortality highest among patients who develop septic
shock. Early aggressive treatment decreases morbidity and mortality. Although automated screening tools can detect
patients currently experiencing severe sepsis and septic shock, none predict those at greatest risk of developing
shock. We analyzed routinely available physiological and laboratory data from intensive care unit patients and devel-
oped “TREWScore,” a targeted real-time early warning score that predicts which patients will develop septic shock.
TREWScore identified patients before the onset of septic shock with an area under the ROC (receiver operating
characteristic) curve (AUC) of 0.83 [95% confidence interval (CI), 0.81 to 0.85]. At a specificity of 0.67, TREWScore
achieved a sensitivity of 0.85 and identified patients a median of 28.2 [interquartile range (IQR), 10.6 to 94.2] hours
before onset. Of those identified, two-thirds were identified before any sepsis-related organ dysfunction. In compar-
ison, the Modified Early Warning Score, which has been used clinically for septic shock prediction, achieved a lower
AUC of 0.73 (95% CI, 0.71 to 0.76). A routine screening protocol based on the presence of two of the systemic inflam-
matory response syndrome criteria, suspicion of infection, and either hypotension or hyperlactatemia achieved a low-
er sensitivity of 0.74 at a comparable specificity of 0.64. Continuous sampling of data from the electronic health
records and calculation of TREWScore may allow clinicians to identify patients at risk for septic shock and provide
earlier interventions that would prevent or mitigate the associated morbidity and mortality.
INTRODUCTION
Seven hundred fifty thousand patients develop severe sepsis and septic
shock in the United States each year. More than half of them are
admitted to an intensive care unit (ICU), accounting for 10% of all
ICU admissions, 20 to 30% of hospital deaths, and $15.4 billion in an-
nual health care costs (1–3). Several studies have demonstrated that
morbidity, mortality, and length of stay are decreased when severe sep-
sis and septic shock are identified and treated early (4–8). In particular,
one study showed that mortality from septic shock increased by 7.6%
with every hour that treatment was delayed after the onset of hypo-
tension (9).
More recent studies comparing protocolized care, usual care, and
early goal-directed therapy (EGDT) for patients with septic shock sug-
gest that usual care is as effective as EGDT (10–12). Some have inter-
preted this to mean that usual care has improved over time and reflects
important aspects of EGDT, such as early antibiotics and early ag-
gressive fluid resuscitation (13). It is likely that continued early identi-
fication and treatment will further improve outcomes. However, the
Simplified Acute Physiology Score (SAPS II), Sequential Organ Failure Assessment
(SOFA) scores, Modified Early Warning Score (MEWS), and Simple
Clinical Score (SCS) have been validated to assess illness severity and
risk of death among septic patients (14–17). Although these scores
are useful for predicting general deterioration or mortality, they typical-
ly cannot distinguish with high sensitivity and specificity which patients
are at highest risk of developing a specific acute condition.
The increased use of electronic health records (EHRs), which can be
queried in real time, has generated interest in automating tools that
identify patients at risk for septic shock (18–20). A number of “early
warning systems,” “track and trigger” initiatives, “listening applica-
tions,” and “sniffers” have been implemented to improve detection
and timeliness of therapy for patients with severe sepsis and septic shock
(18, 20–23). Although these tools have been successful at detecting pa-
tients currently experiencing severe sepsis or septic shock, none predict
which patients are at highest risk of developing septic shock.
The adoption of the Affordable Care Act has added to the growing
excitement around predictive models derived from electronic health records. […]
(Research article downloaded from http://stm.sciencemag.org/ on November 3, 2016)
• 80 beds across three units of Ajou University Hospital: the trauma center, the emergency room, and the medical ICU
• Eight streams of patient vital signs (oxygen saturation, blood pressure, pulse, EEG, body temperature, and more) integrated into a single store
• The biosignals are monitored and analyzed in real time by AI to predict events 1 to 3 hours in advance
• Target conditions include arrhythmia, sepsis, acute respiratory distress syndrome (ARDS), and unplanned intubation
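At inference time, an early-warning score of this kind reduces to evaluating a fitted risk model on the latest vitals whenever new measurements arrive. A minimal logistic sketch, with hypothetical features and coefficients (the published TREWScore was fit with regularized Cox regression over many more candidate EHR features):

```python
import numpy as np

# Hypothetical features and weights, for illustration only.
FEATURES = ["heart_rate", "resp_rate", "systolic_bp", "lactate"]
COEF = np.array([0.03, 0.08, -0.04, 0.60])
INTERCEPT = -4.0

def risk_score(obs):
    """Predicted probability of impending septic shock from the latest
    measurements (logistic link over a linear risk index)."""
    x = np.array([obs[f] for f in FEATURES], dtype=float)
    return 1.0 / (1.0 + np.exp(-(x @ COEF + INTERCEPT)))

stable = {"heart_rate": 75, "resp_rate": 14, "systolic_bp": 120, "lactate": 1.0}
deteriorating = {"heart_rate": 125, "resp_rate": 28, "systolic_bp": 85, "lactate": 4.5}

# The score is re-evaluated whenever new measurements arrive;
# an alert fires when it crosses a chosen threshold.
assert risk_score(deteriorating) > risk_score(stable)
```

The alert threshold is where the sensitivity/specificity numbers reported in the abstract come from: lowering it catches more patients earlier at the cost of more false alarms.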
Glucose management
• Can we predict how blood glucose will change after a meal?
• Which foods raise blood glucose the most?
Glucose management
• Blood glucose is a key measure not only in diabetes but across many metabolic diseases
• To predict the postprandial glycemic response (PPGR), common practice relies on:
• the glycemic index of individual foods
• the carbohydrate content of individual foods
• But does the same food produce the same glycemic change in every person?
Article
Personalized Nutrition by Prediction of Glycemic
Responses
Graphical Abstract
Highlights
d High interpersonal variability in post-meal glucose observed
in an 800-person cohort
d Using personal and microbiome features enables accurate
glucose response prediction
d Prediction is accurate and superior to common practice in an
independent cohort
d Short-term personalized dietary interventions successfully
lower post-meal glucose
Authors
David Zeevi, Tal Korem, Niv Zmora, ...,
Zamir Halpern, Eran Elinav, Eran Segal
Correspondence
eran.elinav@weizmann.ac.il (E.E.),
eran.segal@weizmann.ac.il (E.S.)
In Brief
People eating identical meals present
high variability in post-meal blood
glucose response. Personalized diets
created with the help of an accurate
predictor of blood glucose response that
integrates parameters such as dietary
habits, physical activity, and gut
microbiota may successfully lower post-
meal blood glucose and its long-term
metabolic consequences.
Zeevi et al., 2015, Cell 163, 1079–1094, November 19, 2015. © 2015 Elsevier Inc.
http://www.cell.com/cell/abstract/S0092-8674(15)01481-6
[Figure 1: Study design and cohort profiling. Main cohort: 800 participants; validation cohort: 100 participants; dietary intervention: 26 participants. Each participant underwent a week of per-person profiling: continuous glucose monitoring with a subcutaneous sensor (iPro2), a diary of food, sleep, and physical activity logged through a smartphone-adjusted website, anthropometrics, blood tests, gut microbiome profiling (16S rRNA and metagenomics), and questionnaires (food frequency, lifestyle, medical). Standardized meals (50 g available carbohydrates) included bread, bread and butter, glucose, and fructose. The postprandial glycemic response (PPGR) is measured as the 2-hour incremental area under the glucose curve (iAUC). Totals: 5,435 days, 46,898 meals, 9.8M calories, 2,532 exercises, 130K hours, 1.56M glucose measurements. Overall energy documented (9,807,000 calories) by category: bread (919,000), dairy (730,000), sweets (639,000), vegetables (548,000), baked goods (542,000), nuts (456,000), beef (444,000), legumes (420,000), fruit (400,000), poultry (386,000), rice (331,000), other (4,010,000).]
http://www.cell.com/cell/abstract/S0092-8674(15)01481-6
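The study's outcome measure, PPGR, is the 2-hour incremental area under the CGM curve (iAUC). One common convention, sketched below on synthetic 5-minute CGM samples, is trapezoidal area above the pre-meal baseline with sub-baseline dips contributing zero; the paper's exact iAUC variant may differ in detail:

```python
def ppgr_iauc(times_h, glucose_mgdl, baseline=None, window_h=2.0):
    """2-hour incremental area under the glucose curve, in mg/dl*h.
    Trapezoids above the pre-meal baseline; dips below baseline add zero."""
    baseline = glucose_mgdl[0] if baseline is None else baseline
    area = 0.0
    for i in range(len(times_h) - 1):
        t0, t1 = times_h[i], times_h[i + 1]
        if t0 >= window_h:
            break
        dt = min(t1, window_h) - t0
        a = max(glucose_mgdl[i] - baseline, 0.0)
        b = max(glucose_mgdl[i + 1] - baseline, 0.0)
        area += 0.5 * (a + b) * dt
    return area

# Synthetic 5-minute CGM samples: a triangular excursion from a 90 mg/dl
# baseline, peaking at +60 mg/dl 45 minutes after the meal.
times = [i / 12 for i in range(25)]  # hours, 0 to 2
glucose = [90 + 60 * max(0.0, 1 - abs(t - 0.75) / 0.75) for t in times]
ppgr = ppgr_iauc(times, glucose)  # triangle: 0.5 * 1.5 h * 60 mg/dl = 45
```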
Figure 3 | Accurate Predictions of Personalized Postprandial Glycemic Responses. [Panel A illustrates the machine-learning pipeline for predicting PPGRs: personal features (including 16S rRNA and metagenomic microbiome data) and meal features (time, nutrients, previous exercise) feed a predictor of roughly 4,000 boosted decision trees, trained on the 800-participant main cohort with leave-one-person-out cross-validation and then applied to the independent 100-participant validation cohort. Panels B–E plot predicted (x axis) versus CGM-measured (y axis) PPGRs (iAUC, mg/dl·h) for each approach: prediction from meal carbohydrates only, R = 0.38; from meal calories only, R = 0.33; the full predictor under main-cohort cross-validation, R = 0.68; and on the validation cohort, R = 0.70 (Pearson correlations). The accompanying text notes that the partial dependence plot (PDP) for carbohydrates shows a higher predicted PPGR, on average, as meal carbohydrate content increases, and that the strength of this relationship varies across participants.]
http://www.cell.com/cell/abstract/S0092-8674(15)01481-6
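The model family behind Figure 3, gradient-boosted decision trees, can be demonstrated from scratch with depth-1 regression stumps fit to residuals. This is a toy on synthetic meal features; the published predictor used on the order of 4,000 trees over a far richer set of personal, microbiome, and meal features:

```python
import numpy as np

def fit_stump(X, r):
    """Depth-1 regression tree minimizing squared error on residuals r,
    searching candidate splits at feature quantiles."""
    best = None
    for j in range(X.shape[1]):
        for s in np.quantile(X[:, j], np.linspace(0.1, 0.9, 9)):
            left = X[:, j] <= s
            lv, rv = r[left].mean(), r[~left].mean()
            err = ((r - np.where(left, lv, rv)) ** 2).sum()
            if best is None or err < best[0]:
                best = (err, j, s, lv, rv)
    _, j, s, lv, rv = best
    return lambda Z: np.where(Z[:, j] <= s, lv, rv)

def boost(X, y, n_trees=60, lr=0.1):
    """Gradient boosting for squared loss: each stump fits the residuals."""
    base = y.mean()
    pred = np.full(len(y), base)
    stumps = []
    for _ in range(n_trees):
        stump = fit_stump(X, y - pred)
        stumps.append(stump)
        pred = pred + lr * stump(X)
    return lambda Z: np.full(len(Z), base) + lr * sum(t(Z) for t in stumps)

# Synthetic "meal features" (scaled carbs, fat, hour) and a synthetic PPGR
# in which carbohydrates dominate, as the PDP analysis above suggests.
rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(300, 3))
y = 80 * X[:, 0] - 20 * X[:, 1] + rng.normal(0, 5, size=300)

model = boost(X, y)
corr = np.corrcoef(model(X), y)[0, 1]
```

In the study the analogous quantity, Pearson R between predicted and measured PPGRs, is what the R = 0.68/0.70 panels report, computed out of sample via leave-one-person-out cross-validation rather than in sample as here.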
Figure 5 | Personally Tailored Dietary Interventions Improve Postprandial Glycemic Responses. [Study design: 26 participants each completed one week of profiling on dietitian-prescribed breakfasts, lunches, snacks, and dinners (meals B1–B6, L1–L6, S1–S6, D1–D6 over days 1–6). For each participant, a 'good' diet week and a 'bad' diet week were then assembled from these meals, selected either by the predictor (predictor-based arm, 12 participants P1–P12) or by a clinical expert (expert-based arm, 14 participants E1–E14), and glucose responses were measured during both intervention weeks (color-coded per-meal responses: blue low, yellow high). Measured intervention-week PPGRs correlated with profiling-week measurements (R = 0.70) and with predicted PPGRs (R = 0.80); in most participants the 'good' diet week yielded significantly lower PPGRs, lower maximal PPGRs, and smaller glucose fluctuations (noise, σ/μ) than the 'bad' diet week. Example foods appearing in the 'good' and 'bad' diet weeks include pizza, hummus, potatoes, chicken liver, and schnitzel.]
IBM Watson-Medtronic (Jan 7, 2016)
In an early research project involving 600 patient cases, the team was able to predict near-term hypoglycemic events up to 3 hours in advance of the symptoms.
Sugar.IQ
• Based on the user's past records of food intake, the resulting glucose changes, insulin delivery, and more,
• Watson predicts how the user's glucose will change after a meal
ADA 2017, San Diego, Courtesy of Taeho Kim (Seoul Medical Center)
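A deliberately naive sketch of the look-ahead idea behind such systems: fit a linear trend to recent CGM readings and alert if the extrapolation crosses the hypoglycemia threshold within the horizon. The Watson-Medtronic work uses far richer models over meals and insulin dosing; the numbers below are illustrative:

```python
def hypo_alert(times_min, glucose, horizon_min=180, threshold=70.0):
    """Fit a least-squares line to recent CGM readings and flag if the
    extrapolated glucose drops below `threshold` within `horizon_min`."""
    n = len(times_min)
    t_mean = sum(times_min) / n
    g_mean = sum(glucose) / n
    slope = (sum((t - t_mean) * (g - g_mean) for t, g in zip(times_min, glucose))
             / sum((t - t_mean) ** 2 for t in times_min))
    projected = g_mean + slope * (times_min[-1] + horizon_min - t_mean)
    return projected < threshold, projected

# 30 minutes of steadily falling readings, sampled every 5 minutes
alert, projected = hypo_alert(list(range(0, 35, 5)),
                              [120, 117, 114, 110, 107, 104, 100])
```

Linear extrapolation ignores meals, insulin on board, and circadian effects, which is exactly why models like the one above are trained on those signals instead.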
What is a drug?
Digital Therapeutics
Digital drugs
• PureTech Health
• A company that positions itself as a 'new kind of pharmaceutical company'
• Develops not only conventional new drugs but also digital therapeutics built from games and apps
• Digital therapeutics have recently begun receiving de novo clearance from the US FDA
• PureTech Health
• The pipeline includes conventional small molecules, but also:
• Akili: games (Project EVO) aimed at improving cognitive function in ADHD, depression, and Alzheimer's disease
• Sonde: voice biomarkers for diagnosing and monitoring depression and other mental health conditions
의료의 미래, 디지털 헬스케어: 당뇨와 내분비학을 중심으로 (The Future of Medicine, Digital Healthcare: Focusing on Diabetes and Endocrinology)

Yoon Sup Choi
 
디지털 헬스케어 파트너스 (DHP) 소개: 데모데이 2019
디지털 헬스케어 파트너스 (DHP) 소개: 데모데이 2019디지털 헬스케어 파트너스 (DHP) 소개: 데모데이 2019
디지털 헬스케어 파트너스 (DHP) 소개: 데모데이 2019
Yoon Sup Choi
 
When digital medicine becomes the medicine (2/2)
When digital medicine becomes the medicine (2/2)When digital medicine becomes the medicine (2/2)
When digital medicine becomes the medicine (2/2)
Yoon Sup Choi
 

More from Yoon Sup Choi (14)

한국 원격의료 산업의 주요 이슈
한국 원격의료 산업의 주요 이슈한국 원격의료 산업의 주요 이슈
한국 원격의료 산업의 주요 이슈
 
디지털 헬스케어 파트너스 (DHP) 소개 자료
디지털 헬스케어 파트너스 (DHP) 소개 자료디지털 헬스케어 파트너스 (DHP) 소개 자료
디지털 헬스케어 파트너스 (DHP) 소개 자료
 
[대한병리학회] 의료 인공지능 101: 병리를 중심으로
[대한병리학회] 의료 인공지능 101: 병리를 중심으로[대한병리학회] 의료 인공지능 101: 병리를 중심으로
[대한병리학회] 의료 인공지능 101: 병리를 중심으로
 
한국 디지털 헬스케어의 생존을 위한 규제 혁신에 대한 고언
한국 디지털 헬스케어의 생존을 위한 규제 혁신에 대한 고언한국 디지털 헬스케어의 생존을 위한 규제 혁신에 대한 고언
한국 디지털 헬스케어의 생존을 위한 규제 혁신에 대한 고언
 
원격의료에 대한 생각, 그리고 그 생각에 대한 생각
원격의료에 대한 생각, 그리고 그 생각에 대한 생각원격의료에 대한 생각, 그리고 그 생각에 대한 생각
원격의료에 대한 생각, 그리고 그 생각에 대한 생각
 
포스트 코로나 시대, 혁신적인 디지털 헬스케어 기업의 조건
포스트 코로나 시대, 혁신적인 디지털 헬스케어 기업의 조건포스트 코로나 시대, 혁신적인 디지털 헬스케어 기업의 조건
포스트 코로나 시대, 혁신적인 디지털 헬스케어 기업의 조건
 
디지털 치료제, 또 하나의 신약
디지털 치료제, 또 하나의 신약디지털 치료제, 또 하나의 신약
디지털 치료제, 또 하나의 신약
 
[ASGO 2019] Artificial Intelligence in Medicine
[ASGO 2019] Artificial Intelligence in Medicine[ASGO 2019] Artificial Intelligence in Medicine
[ASGO 2019] Artificial Intelligence in Medicine
 
인허가 이후에도 변화하는 AI/ML 기반 SaMD를 어떻게 규제할 것인가
인허가 이후에도 변화하는 AI/ML 기반 SaMD를 어떻게 규제할 것인가인허가 이후에도 변화하는 AI/ML 기반 SaMD를 어떻게 규제할 것인가
인허가 이후에도 변화하는 AI/ML 기반 SaMD를 어떻게 규제할 것인가
 
인공지능은 의료를 어떻게 혁신하는가 (2019년 7월) (상)
인공지능은 의료를 어떻게 혁신하는가 (2019년 7월) (상)인공지능은 의료를 어떻게 혁신하는가 (2019년 7월) (상)
인공지능은 의료를 어떻게 혁신하는가 (2019년 7월) (상)
 
인공지능은 의료를 어떻게 혁신하는가 (2019년 7월) (하)
인공지능은 의료를 어떻게 혁신하는가 (2019년 7월) (하)인공지능은 의료를 어떻게 혁신하는가 (2019년 7월) (하)
인공지능은 의료를 어떻게 혁신하는가 (2019년 7월) (하)
 
성공하는 디지털 헬스케어 스타트업을 위한 조언
성공하는 디지털 헬스케어 스타트업을 위한 조언성공하는 디지털 헬스케어 스타트업을 위한 조언
성공하는 디지털 헬스케어 스타트업을 위한 조언
 
디지털 헬스케어 파트너스 (DHP) 소개: 데모데이 2019
디지털 헬스케어 파트너스 (DHP) 소개: 데모데이 2019디지털 헬스케어 파트너스 (DHP) 소개: 데모데이 2019
디지털 헬스케어 파트너스 (DHP) 소개: 데모데이 2019
 
When digital medicine becomes the medicine (2/2)
When digital medicine becomes the medicine (2/2)When digital medicine becomes the medicine (2/2)
When digital medicine becomes the medicine (2/2)
 

Recently uploaded

Hemodialysis: Chapter 4, Dialysate Circuit - Dr.Gawad
Hemodialysis: Chapter 4, Dialysate Circuit - Dr.GawadHemodialysis: Chapter 4, Dialysate Circuit - Dr.Gawad
Hemodialysis: Chapter 4, Dialysate Circuit - Dr.Gawad
NephroTube - Dr.Gawad
 
Dehradun #ℂall #gIRLS Oyo Hotel 9719300533 #ℂall #gIRL in Dehradun
Dehradun #ℂall #gIRLS Oyo Hotel 9719300533 #ℂall #gIRL in DehradunDehradun #ℂall #gIRLS Oyo Hotel 9719300533 #ℂall #gIRL in Dehradun
Dehradun #ℂall #gIRLS Oyo Hotel 9719300533 #ℂall #gIRL in Dehradun
chandankumarsmartiso
 
Cardiac Assessment for B.sc Nursing Student.pdf
Cardiac Assessment for B.sc Nursing Student.pdfCardiac Assessment for B.sc Nursing Student.pdf
Cardiac Assessment for B.sc Nursing Student.pdf
shivalingatalekar1
 
Top 10 Best Ayurvedic Kidney Stone Syrups in India
Top 10 Best Ayurvedic Kidney Stone Syrups in IndiaTop 10 Best Ayurvedic Kidney Stone Syrups in India
Top 10 Best Ayurvedic Kidney Stone Syrups in India
SwastikAyurveda
 
Effective-Soaps-for-Fungal-Skin-Infections.pptx
Effective-Soaps-for-Fungal-Skin-Infections.pptxEffective-Soaps-for-Fungal-Skin-Infections.pptx
Effective-Soaps-for-Fungal-Skin-Infections.pptx
SwisschemDerma
 
ARTHROLOGY PPT NCISM SYLLABUS AYURVEDA STUDENTS
ARTHROLOGY PPT NCISM SYLLABUS AYURVEDA STUDENTSARTHROLOGY PPT NCISM SYLLABUS AYURVEDA STUDENTS
ARTHROLOGY PPT NCISM SYLLABUS AYURVEDA STUDENTS
Dr. Vinay Pareek
 
Novas diretrizes da OMS para os cuidados perinatais de mais qualidade
Novas diretrizes da OMS para os cuidados perinatais de mais qualidadeNovas diretrizes da OMS para os cuidados perinatais de mais qualidade
Novas diretrizes da OMS para os cuidados perinatais de mais qualidade
Prof. Marcus Renato de Carvalho
 
BRACHYTHERAPY OVERVIEW AND APPLICATORS
BRACHYTHERAPY OVERVIEW  AND  APPLICATORSBRACHYTHERAPY OVERVIEW  AND  APPLICATORS
BRACHYTHERAPY OVERVIEW AND APPLICATORS
Krishan Murari
 
Physiology of Chemical Sensation of smell.pdf
Physiology of Chemical Sensation of smell.pdfPhysiology of Chemical Sensation of smell.pdf
Physiology of Chemical Sensation of smell.pdf
MedicoseAcademics
 
Pictures of Superficial & Deep Fascia.ppt.pdf
Pictures of Superficial & Deep Fascia.ppt.pdfPictures of Superficial & Deep Fascia.ppt.pdf
Pictures of Superficial & Deep Fascia.ppt.pdf
Dr. Rabia Inam Gandapore
 
Best Ayurvedic medicine for Gas and Indigestion
Best Ayurvedic medicine for Gas and IndigestionBest Ayurvedic medicine for Gas and Indigestion
Best Ayurvedic medicine for Gas and Indigestion
SwastikAyurveda
 
Aortic Association CBL Pilot April 19 – 20 Bern
Aortic Association CBL Pilot April 19 – 20 BernAortic Association CBL Pilot April 19 – 20 Bern
Aortic Association CBL Pilot April 19 – 20 Bern
suvadeepdas911
 
NVBDCP.pptx Nation vector borne disease control program
NVBDCP.pptx Nation vector borne disease control programNVBDCP.pptx Nation vector borne disease control program
NVBDCP.pptx Nation vector borne disease control program
Sapna Thakur
 
Non-respiratory Functions of the Lungs.pdf
Non-respiratory Functions of the Lungs.pdfNon-respiratory Functions of the Lungs.pdf
Non-respiratory Functions of the Lungs.pdf
MedicoseAcademics
 
Light House Retreats: Plant Medicine Retreat Europe
Light House Retreats: Plant Medicine Retreat EuropeLight House Retreats: Plant Medicine Retreat Europe
Light House Retreats: Plant Medicine Retreat Europe
Lighthouse Retreat
 
Triangles of Neck and Clinical Correlation by Dr. RIG.pptx
Triangles of Neck and Clinical Correlation by Dr. RIG.pptxTriangles of Neck and Clinical Correlation by Dr. RIG.pptx
Triangles of Neck and Clinical Correlation by Dr. RIG.pptx
Dr. Rabia Inam Gandapore
 
Top-Vitamin-Supplement-Brands-in-India.pptx
Top-Vitamin-Supplement-Brands-in-India.pptxTop-Vitamin-Supplement-Brands-in-India.pptx
Top-Vitamin-Supplement-Brands-in-India.pptx
SwisschemDerma
 
How STIs Influence the Development of Pelvic Inflammatory Disease.pptx
How STIs Influence the Development of Pelvic Inflammatory Disease.pptxHow STIs Influence the Development of Pelvic Inflammatory Disease.pptx
How STIs Influence the Development of Pelvic Inflammatory Disease.pptx
FFragrant
 
Ophthalmology Clinical Tests for OSCE exam
Ophthalmology Clinical Tests for OSCE examOphthalmology Clinical Tests for OSCE exam
Ophthalmology Clinical Tests for OSCE exam
KafrELShiekh University
 
CDSCO and Phamacovigilance {Regulatory body in India}
CDSCO and Phamacovigilance {Regulatory body in India}CDSCO and Phamacovigilance {Regulatory body in India}
CDSCO and Phamacovigilance {Regulatory body in India}
NEHA GUPTA
 

Recently uploaded (20)

Hemodialysis: Chapter 4, Dialysate Circuit - Dr.Gawad
Hemodialysis: Chapter 4, Dialysate Circuit - Dr.GawadHemodialysis: Chapter 4, Dialysate Circuit - Dr.Gawad
Hemodialysis: Chapter 4, Dialysate Circuit - Dr.Gawad
 
Dehradun #ℂall #gIRLS Oyo Hotel 9719300533 #ℂall #gIRL in Dehradun
Dehradun #ℂall #gIRLS Oyo Hotel 9719300533 #ℂall #gIRL in DehradunDehradun #ℂall #gIRLS Oyo Hotel 9719300533 #ℂall #gIRL in Dehradun
Dehradun #ℂall #gIRLS Oyo Hotel 9719300533 #ℂall #gIRL in Dehradun
 
Cardiac Assessment for B.sc Nursing Student.pdf
Cardiac Assessment for B.sc Nursing Student.pdfCardiac Assessment for B.sc Nursing Student.pdf
Cardiac Assessment for B.sc Nursing Student.pdf
 
Top 10 Best Ayurvedic Kidney Stone Syrups in India
Top 10 Best Ayurvedic Kidney Stone Syrups in IndiaTop 10 Best Ayurvedic Kidney Stone Syrups in India
Top 10 Best Ayurvedic Kidney Stone Syrups in India
 
Effective-Soaps-for-Fungal-Skin-Infections.pptx
Effective-Soaps-for-Fungal-Skin-Infections.pptxEffective-Soaps-for-Fungal-Skin-Infections.pptx
Effective-Soaps-for-Fungal-Skin-Infections.pptx
 
ARTHROLOGY PPT NCISM SYLLABUS AYURVEDA STUDENTS
ARTHROLOGY PPT NCISM SYLLABUS AYURVEDA STUDENTSARTHROLOGY PPT NCISM SYLLABUS AYURVEDA STUDENTS
ARTHROLOGY PPT NCISM SYLLABUS AYURVEDA STUDENTS
 
Novas diretrizes da OMS para os cuidados perinatais de mais qualidade
Novas diretrizes da OMS para os cuidados perinatais de mais qualidadeNovas diretrizes da OMS para os cuidados perinatais de mais qualidade
Novas diretrizes da OMS para os cuidados perinatais de mais qualidade
 
BRACHYTHERAPY OVERVIEW AND APPLICATORS
BRACHYTHERAPY OVERVIEW  AND  APPLICATORSBRACHYTHERAPY OVERVIEW  AND  APPLICATORS
BRACHYTHERAPY OVERVIEW AND APPLICATORS
 
Physiology of Chemical Sensation of smell.pdf
Physiology of Chemical Sensation of smell.pdfPhysiology of Chemical Sensation of smell.pdf
Physiology of Chemical Sensation of smell.pdf
 
Pictures of Superficial & Deep Fascia.ppt.pdf
Pictures of Superficial & Deep Fascia.ppt.pdfPictures of Superficial & Deep Fascia.ppt.pdf
Pictures of Superficial & Deep Fascia.ppt.pdf
 
Best Ayurvedic medicine for Gas and Indigestion
Best Ayurvedic medicine for Gas and IndigestionBest Ayurvedic medicine for Gas and Indigestion
Best Ayurvedic medicine for Gas and Indigestion
 
Aortic Association CBL Pilot April 19 – 20 Bern
Aortic Association CBL Pilot April 19 – 20 BernAortic Association CBL Pilot April 19 – 20 Bern
Aortic Association CBL Pilot April 19 – 20 Bern
 
NVBDCP.pptx Nation vector borne disease control program
NVBDCP.pptx Nation vector borne disease control programNVBDCP.pptx Nation vector borne disease control program
NVBDCP.pptx Nation vector borne disease control program
 
Non-respiratory Functions of the Lungs.pdf
Non-respiratory Functions of the Lungs.pdfNon-respiratory Functions of the Lungs.pdf
Non-respiratory Functions of the Lungs.pdf
 
Light House Retreats: Plant Medicine Retreat Europe
Light House Retreats: Plant Medicine Retreat EuropeLight House Retreats: Plant Medicine Retreat Europe
Light House Retreats: Plant Medicine Retreat Europe
 
Triangles of Neck and Clinical Correlation by Dr. RIG.pptx
Triangles of Neck and Clinical Correlation by Dr. RIG.pptxTriangles of Neck and Clinical Correlation by Dr. RIG.pptx
Triangles of Neck and Clinical Correlation by Dr. RIG.pptx
 
Top-Vitamin-Supplement-Brands-in-India.pptx
Top-Vitamin-Supplement-Brands-in-India.pptxTop-Vitamin-Supplement-Brands-in-India.pptx
Top-Vitamin-Supplement-Brands-in-India.pptx
 
How STIs Influence the Development of Pelvic Inflammatory Disease.pptx
How STIs Influence the Development of Pelvic Inflammatory Disease.pptxHow STIs Influence the Development of Pelvic Inflammatory Disease.pptx
How STIs Influence the Development of Pelvic Inflammatory Disease.pptx
 
Ophthalmology Clinical Tests for OSCE exam
Ophthalmology Clinical Tests for OSCE examOphthalmology Clinical Tests for OSCE exam
Ophthalmology Clinical Tests for OSCE exam
 
CDSCO and Phamacovigilance {Regulatory body in India}
CDSCO and Phamacovigilance {Regulatory body in India}CDSCO and Phamacovigilance {Regulatory body in India}
CDSCO and Phamacovigilance {Regulatory body in India}
 

The Future of Medicine, Digital Healthcare: Focusing on Diabetes and Endocrinology

  • 1. Professor, SAHIST, Sungkyunkwan University Director, Digital Healthcare Institute Yoon Sup Choi, Ph.D. The Future of Medicine, Digital Healthcare: Focusing on Diabetes and Endocrinology
  • 2. “It's in Apple's DNA that technology alone is not enough. 
 It's technology married with liberal arts.”
  • 3. The Convergence of IT, BT and Medicine
  • 4.
  • 5.
  • 8. Vinod Khosla Founder, 1st CEO of Sun Microsystems Partner of KPCB, CEO of Khosla Ventures Legendary Venture Capitalist in Silicon Valley
  • 9. “Technology will replace 80% of doctors”
  • 10. https://www.youtube.com/watch?time_continue=70&v=2HMPRXstSvQ "We should stop training radiologists right now. It is self-evident that within five years deep learning will outperform radiologists." Hinton on Radiology
  • 12. • 2017 was the biggest year yet for digital health startup funding. • Deal counts and individual deal sizes also hit record highs. • There were 8 mega deals above $100M, • and accordingly a number of unicorns valued above $1B emerged. https://rockhealth.com/reports/2017-year-end-funding-report-the-end-of-the-beginning-of-digital-health/
  • 13. FUNDING SNAPSHOT: YEAR OVER YEAR (chart: quarterly deal counts and funding totals, 2010-2017; annual deal count grew from 155 in 2010 to 794 in 2017) "2017 was the most active year for digital health funding to date with more than $11.5B invested across a record-setting 794 deals. Q4 2017 also had record-breaking numbers, surpassing $2B across 227 deals (the most ever in one quarter). Given the global market opportunity, increasing demand for innovation, wave of high-quality entrepreneurs flocking to the sector, and early stage of this innovation cycle, we expect plentiful capital in 2018." Source: StartUp Health Insights | startuphealth.com/insights. Note: Report based on public data through 12/31/17 on seed (incl. accelerator), venture, corporate venture and private equity funding only. © 2018 StartUp Health LLC https://www.slideshare.net/StartUpHealth/2017-startup-health-insights-year-end-report
  • 15. • Over the last three years, pharma companies such as Merck, J&J, and GSK have sharply increased investment in digital healthcare • 22 deals in 2015-2016 (equal to the total for the five years 2010-2014) • Merck is the most active: 24 investments ($5-7M each) through its Global Health Innovation Fund since 2009 • GSK: 6 deals since 2014 (via its VC arm, SR One), including Propeller Health
  • 16.
  • 17. Map of healthcare-related fields (ver 0.3): • Healthcare: broad health management that involves neither digital technology nor professional medical care (e.g., exercise, nutrition, sleep) • Digital healthcare: health management that uses digital technology (e.g., IoT, artificial intelligence, 3D printing, VR/AR) • Mobile healthcare: the subset of digital healthcare that uses mobile technology (e.g., smartphones, IoT, SNS) • Personal genome analysis (e.g., cancer genomics, disease risk, carrier status, drug sensitivity; wellness, ancestry) • Medicine: the professional medical domain of disease prevention, treatment, prescription, and management • Telemedicine / remote care
  • 18. What is the most important factor in digital medicine?
  • 19. “Data! Data! Data!” he cried. “I can’t make bricks without clay!” - Sherlock Holmes, “The Adventure of the Copper Beeches”
  • 20.
  • 21. New data are measured, stored, integrated, and analyzed in new ways, by new players. Types of data; qualitative and quantitative aspects of data. Wearable devices, smartphones, genome analysis, artificial intelligence, SNS. Users/patients, the general public.
  • 22. Three Steps to Implement Digital Medicine • Step 1. Measure the Data • Step 2. Collect the Data • Step 3. Insight from the Data
  • 23. Digital Healthcare Industry Landscape (ver. 3) Data Measurement Data Integration Data Interpretation Treatment Smartphone Gadget/Apps DNA Artificial Intelligence 2nd Opinion Wearables / IoT EMR/EHR 3D Printer Counseling Data Platform Accelerator/early-VC Telemedicine Device On Demand (O2O) VR Digital Healthcare Institute Director, Yoon Sup Choi, Ph.D. yoonsup.choi@gmail.com
  • 24. Digital Healthcare Industry Landscape (ver. 3) Data Measurement Data Integration Data Interpretation Treatment Smartphone Gadget/Apps DNA Artificial Intelligence 2nd Opinion Device On Demand (O2O) Wearables / IoT EMR/EHR 3D Printer Counseling Data Platform Accelerator/early-VC VR Telemedicine Digital Healthcare Institute Director, Yoon Sup Choi, Ph.D. yoonsup.choi@gmail.com
  • 25. Step 1. Measure the Data
  • 26. Smartphone: the origin of healthcare innovation
  • 27. Smartphone: the origin of healthcare innovation
  • 28. 2013? The election of Pope Benedict The Election of Pope Francis
  • 29. The Election of Pope Francis The Election of Pope Benedict
  • 31.
  • 32.
  • 38.
  • 39. Skin Cancer Image Classification (TensorFlow Dev Summit 2017) Skin cancer classification performance of the CNN and dermatologists. https://www.youtube.com/watch?v=toK1OSLep3s&t=419s
  • 40. Smartphone video microscope automates detection of parasites in blood
  • 44.
  • 45.
  • 48.
  • 49. Digital Phenotype: Your smartphone knows if you are depressed Ginger.io
  • 50. Digital Phenotype: Your smartphone knows if you are depressed J Med Internet Res. 2015 Jul 15;17(7):e175. The correlation analysis between the features and the PHQ-9 scores revealed that 6 of the 10 features were significantly correlated to the scores: • strong correlation: circadian movement, normalized entropy, location variance • correlation: phone usage features, usage duration and usage frequency
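The analysis above boils down to Pearson correlations between per-participant smartphone features (e.g., location variance) and PHQ-9 depression scores. A minimal sketch with made-up illustrative numbers (the study's actual data and thresholds are not reproduced here):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative (fabricated) per-participant values: higher location variance
# tends to go with lower PHQ-9 scores, i.e., a negative correlation.
location_variance = [0.8, 0.5, 0.9, 0.3, 0.7, 0.2]
phq9_scores = [4, 12, 3, 18, 6, 20]

r = pearson_r(location_variance, phq9_scores)
print(f"location variance vs PHQ-9: r = {r:.2f}")
```

In the study this is repeated per feature, with significance testing deciding which of the 10 features count as correlated.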
  • 51.
  • 52. ResearchKit • Users can share medical/health data measured by iPhone sensors with the platform • Uses the accelerometer, microphone, gyroscope, GPS, and other sensors • Steps, activity, memory, voice tremor, and more • Addresses a core problem of conventional medical research: securing enough medical data • Removes physical and temporal barriers to participant enrollment (from once per 3 months to once per second) • Encourages public participation in medical research, increasing participant numbers • Tens of thousands of participants signed up within 24 hours of launch • Conducted with each user's own consent
  • 53. • An initial version, introducing five apps for five diseases ResearchKit
  • 58. Autism and Beyond: measuring facial expressions of young patients with autism; Mole Mapper: measuring morphological changes of moles; EpiWatch: measuring behavioral data of epilepsy patients
  • 59. • Stanford's cardiovascular research app, myHeart • 11,000 participants enrolled within a day of launch • Alan Yeung, the study lead at Stanford: "Recruiting 11,000 participants the conventional way would take a year at 50 hospitals across the US"
  • 60. • The Parkinson's disease research app, mPower • 5,589 participants enrolled within a day of launch • A previous effort that spent $60M over five years had recruited only 800 patients
  • 61.
  • 62.
  • 64.
  • 65.
  • 67. PLOS Medicine, 2016. Fig 1: What can consumer wearables do? Heart rate can be measured with an oximeter built into a ring, muscle activity with an electromyographic sensor embedded in clothing, stress with an electrodermal sensor incorporated into a wristband, and physical activity or sleep patterns via an accelerometer in a watch. A woman's most fertile period can be identified with detailed body-temperature tracking, and levels of mental attention can be monitored with a small number of non-gelled electroencephalogram (EEG) electrodes.
  • 68. PwC, Health wearables: early days, 2014. Wearables are not mainstream yet: 21% of US consumers own a wearable technology product; of these, 10% wear it every day, 7% a few times a week, 2% a few times a month, and 2% no longer use it.
  • 69. PwC, The Wearable Life 2.0, 2016. • 49% of US consumers own at least one wearable device (up from 21% in 2014) • 36% own more than one • Adoption declines with age; millennials are far more likely than older adults to own wearables • Ownership by type: fitness band 45%, smart video/photo device (e.g., GoPro) 27%, smart watch 15%, smart clothing 14%, smart glasses (incl. VR/AR) 12% ("Fitness runs away with it")
  • 72. https://clinicaltrials.gov/ct2/results?term=fitbit&Search=Search • Although not a medical device, Fitbit is already widely used in clinical research • Clinical researchers adopted it on their own, without Fitbit promoting such use • The number of clinical studies using Fitbit keeps growing (2016.3: 80; 2016.8: 113; 2017.7: 173)
  • 73.
  • 74. • Fitbit appears in clinical research in two main roles: • as the intervention itself, testing whether it increases activity or treatment effect • as a means of monitoring participants' activity
 • 1. Studies using Fitbit to increase patients' activity: • whether Fitbit increases activity in pediatric obesity patients • whether it increases activity in patients after sleeve gastrectomy • whether it increases activity in young cystic fibrosis patients • whether it motivates cancer patients to be more physically active • 2. Studies using Fitbit to monitor trial participants' activity: • assessing the health and prognosis of patients after chemotherapy • testing whether cash incentives increase children's/parents' activity • measuring quality of life in brain tumor patients alongside survey results • assessing activity levels in peripheral artery disease patients
  • 75. • A study of the effect of weight loss on breast cancer recurrence • 20% of breast cancer patients relapse, mostly with metastatic disease • Overweight is known to raise breast cancer risk, and obesity worsens the prognosis of early breast cancer • But there has been no study of the relationship between weight loss and recurrence risk • 3,200 overweight or early-obese breast cancer patients will participate for two years • Depending on the results, weight loss could become part of the standard of care for breast cancer worldwide • Fitbit supports the weight-loss program: • Fitbit Charge HR: activity, calories burned, heart rate • Fitbit Aria Wi-Fi Smart Scale • FitStar: personalized video exercise coaching 2016. 4. 27.
  • 76.
  • 78. • Biogen Idec uses Fitbit to monitor multiple sclerosis patients • Goal: demonstrate the effectiveness of an expensive drug to maintain its reimbursement price • Could finer-grained measurement enable early detection of MS prodromal symptoms? Dec 23, 2014
  • 82. WELT
  • 84.
  • 85. • $20 • the first and only 24-hour thermometer • constantly monitor baby’s temperature • FDA cleared
  • 88.
  • 89.
  • 90.
  • 92.
  • 93. Sensor and Transmitter. Sensor: a tiny wire inserted under the skin; converts glucose into an electrical current; glucose range 40-400 mg/dL; a reading every 5 minutes, for up to 7 days. Transmitter: converts sensor data into glucose readings (Software 505); glucose data broadcast via Bluetooth to a display device.
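The numbers on this slide pin down the sensor's operating envelope. A minimal sketch (not Dexcom's actual firmware) of what "every 5 minutes, up to 7 days, clipped to 40-400 mg/dL" implies:

```python
from datetime import timedelta

READING_INTERVAL = timedelta(minutes=5)   # one reading every 5 minutes
SENSOR_LIFETIME = timedelta(days=7)       # sensor wear period up to 7 days
GLUCOSE_MIN, GLUCOSE_MAX = 40, 400        # reportable range in mg/dL

def clamp_reading(mg_dl):
    """Clip a raw glucose estimate to the sensor's reportable range."""
    return max(GLUCOSE_MIN, min(GLUCOSE_MAX, mg_dl))

# A full 7-day wear at 5-minute intervals yields 2,016 readings.
total_readings = SENSOR_LIFETIME // READING_INTERVAL
print(total_readings)      # 2016
print(clamp_reading(35))   # 40 (below range, clipped)
print(clamp_reading(123))  # 123 (in range, unchanged)
```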
  • 94.
  • 95. CO-1 Dexcom G5 Mobile Continuous Glucose Monitoring (CGM) System for Non-Adjunctive Management of Diabetes July 21, 2016 Dexcom, Inc. Clinical Chemistry and Clinical Toxicology Devices Panel
  • 96. Dexcom G5 Mobile Continuous Glucose Monitoring (CGM) System for Non-Adjunctive Management of Diabetes • FDA's Clinical Chemistry and Clinical Toxicology Devices Panel • recommended that the Dexcom G5 can replace conventional SMBG • votes: safety (8:2), effectiveness (9:1), benefit vs. risk (8:2) • Dexcom G5 glucose values can differ from SMBG by about 9% • but SMBG meters from different makers already differ from one another by 4-9% • and a large share (69%) of patients already use CGM off-label in place of SMBG • so approving the use and formally educating/managing patients would be the better course
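Figures like the "about 9% difference" above are typically reported as a mean absolute relative difference (MARD) over paired CGM/reference readings. A hedged sketch of that calculation, with illustrative (fabricated) values:

```python
def mard(cgm_readings, reference_readings):
    """Mean absolute relative difference (%) between paired glucose readings."""
    diffs = [abs(c - r) / r for c, r in zip(cgm_readings, reference_readings)]
    return 100 * sum(diffs) / len(diffs)

# Illustrative paired readings in mg/dL: CGM vs. fingerstick (SMBG) reference.
cgm = [102, 148, 95, 210, 76]
ref = [110, 140, 100, 200, 80]
print(f"MARD = {mard(cgm, ref):.1f}%")
```

The lower the MARD, the closer the CGM tracks the reference method; the panel's point was that a ~9% gap is comparable to the spread already seen between different SMBG meters.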
  • 97. • Health Canada decided that the Dexcom G5 CGM can replace SMBG • Physicians can now prescribe Dexcom instead of conventional SMBG • Conventional SMBG is needed only twice a day, for calibration
  • 98. • In December 2016, the FDA likewise cleared the Dexcom G5 to replace conventional blood glucose meters
  • 99.
  • 100.
  • 101. Transmitter Receiver iPhone Apple Watch the user needed to have all four in reasonably close proximity
  • 102. Transmitter iPhone Apple Watch “not require the user to have a separate receiver box, though it will still require that the iPhone be in range” (2016.3)  http://www.mobihealthnews.com/content/dexcoms-next-generation-apple-watch-cgm-app-needs-one-less-device-work
  • 103. Transmitter Apple Watch “with Bluetooth built into the Watch, users won’t need to have anything on them but the CGM itself and their Apple Watch.” (2017.7) http://www.mobihealthnews.com/content/dexcom-propeller-and-resound-poised-make-use-apple-watch-native-bluetooth-launch
  • 104. FreeStyle Libre Flash Glucose Monitoring System Why prick when you can scan? http://www.freestylelibre.co.uk
  • 105. Temporary Tattoo Offers Needle-Free Way to Monitor Glucose Levels • A very mild electrical current applied to the skin for 10 minutes forces sodium ions in the fluid between skin cells to migrate toward the tattoo's electrodes. • These ions carry glucose molecules that are also found in the fluid. • A sensor built into the tattoo then measures the strength of the electrical charge produced by the glucose to determine a person's overall glucose levels.
  • 106. GlucoWatch • GlucoWatch 2 - Cygnus • FDA approved and marketed in 2002 • Provides a glucose reading every 10 minutes • … but the device was discontinued because it caused skin irritation
  • 107. Will the Apple Watch gain a glucose-monitoring feature?
  • 108. C8 Medisensor "From a technological standpoint, what we had done with this was a stellar achievement in that people thought even (what we achieved) was beyond possibility.” - Former C8 MediSensors CEO Rudy Hofmeister
  • 110. Soft, smart contact lenses with integrations of wireless circuits, glucose sensors, and displays (Park et al., Science Advances, 24 Jan 2018; 4:eaap9841) • Limitations of earlier work: opaque, brittle lens components that can block the wearer's vision and damage the eye; bulky, expensive measurement equipment that restricts everyday use • This work: a soft-material contact lens with a real-time glucose sensor • Display pixels show the sensing signal in the wearer's field of view in real time • Transparent, stretchable nanostructures allow full integration of the electronics into the lens • Tested in vivo, but the accuracy of the glucose measurement itself was not evaluated
• 111. Science Advances 24 Jan 2018 ...the elastic region was mainly stretched because of the significant difference in Young's modulus (24, 27, 28). In addition, Fig. 2B shows that there were no gaps at the interfaces between these heterogeneous regions even during the stretching states (30% in tensile strain)... Figure 2D and fig. S2 present the atomic force microscopy (AFM) and scanning electron microscopy (SEM) images of the hybrid substrate with continuous interfaces between the reinforced and elastic areas. Fig. 1. Stretchable, transparent smart contact lens system. (A) Schematic illustration of the soft, smart contact lens. The soft, smart contact lens is composed of a hybrid substrate, functional devices (rectifier, LED, and glucose sensor), and a transparent, stretchable conductor (for antenna and interconnects). (B) Circuit diagram of the smart contact lens system. (C) Operation of this soft, smart contact lens. Electric power is wirelessly transmitted to the lens through the antenna. This power activates the LED pixel and the glucose sensor. After detecting the glucose level in tear fluid above the threshold, this pixel turns off.
• 112. Science Advances 24 Jan 2018 Fig. 2. Properties of a stretchable and transparent hybrid substrate. (A) Schematic image of the hybrid substrate where the reinforced islands are embedded in the elastic substrate. (B) SEM images before (top) and during (bottom) 30% stretching. The arrow indicates the stretching direction. Scale bars, 500 μm. (C) Effective strains on each part along the stretching direction indicated in (B). (D) AFM image of the hybrid substrate. Black and blue arrows indicate the elastic region and the reinforced island, respectively. Scale bar, 5 μm. (E) Photograph of the hybrid substrates molded into contact lens shape. Scale bar, 1 cm. (F) Optical transmittance (black) and haze (red) spectra of the hybrid substrate. (G) Schematic diagram of the photographing method to identify the optical clarity of hybrid substrates. (H) Photographs taken by camera where the OP-LENS–based hybrid substrate (left) and the SU8-LENS–based hybrid substrate (right) are located on the camera lens.
• 113. Science Advances 24 Jan 2018 When the glucose concentration is above 0.9 mM, this pixel turns off because the bias applied to the LED becomes lower than its turn-off threshold. ...turned off because the glucose concentration was over the threshold, not because of damage to the circuit. The design is such that the LED... Fig. 5. Soft, smart contact lens for detecting glucose. (A) Schematic image of the soft, smart contact lens. The rectifier, the LED, and the glucose sensor are located on the reinforced regions. The transparent, stretchable AgNF-based antenna and interconnects are located on an elastic region. (B) Photograph of the fabricated soft, smart contact lens. Scale bar, 1 cm. (C) Photograph of the smart contact lens on an eye of a mannequin. Scale bar, 1 cm. (D) Photographs of the in vivo test on a live rabbit using the soft, smart contact lens. Left: Turn-on state of the LED in the soft, smart contact lens mounted on the rabbit's eye. Middle: Injection of tear fluids with the glucose concentration of 0.9 mM. Right: Turn-off state of the LED after detecting the increased glucose concentration. Scale bars, 1 cm. (E) Heat tests while a live rabbit is wearing the operating soft, smart contact lens. Scale bars, 1 cm.
  • 119.
• 120. Nightscout Project •Hacked continuous glucose monitors so that glucose readings can be uploaded to the cloud •Parents can check their child's glucose level anytime, anywhere on a smartphone, smartwatch, etc. •Developed voluntarily by parents of children with type 1 diabetes + distributed free as open source + installed at the user's own initiative •Not a commercial medical device, so not subject to FDA regulation
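The Nightscout data flow above can be sketched in a few lines. The endpoint path (`/api/v1/entries`), the SHA-1-hashed `api-secret` header, and the field names follow Nightscout's public API as I understand it, but treat them as assumptions to verify against your deployment; the snippet only builds the payload and does not perform any network call:

```python
import hashlib
import json
import time

def make_entry(sgv_mg_dl, direction="Flat", device="dexcom"):
    """Build one Nightscout-style 'entries' record for a sensor glucose value.

    Field names (sgv, date, dateString, direction, type) follow the
    Nightscout entries API; verify them against your Nightscout version.
    """
    now = time.time()
    return {
        "type": "sgv",                # sensor glucose value
        "sgv": sgv_mg_dl,             # glucose in mg/dl
        "date": int(now * 1000),      # epoch milliseconds
        "dateString": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime(now)),
        "direction": direction,       # trend arrow, e.g. "Flat", "SingleUp"
        "device": device,
    }

def api_secret_header(secret):
    """Nightscout authenticates uploads with a SHA-1 hex digest of the secret."""
    return {"api-secret": hashlib.sha1(secret.encode()).hexdigest()}

entry = make_entry(104)
headers = api_secret_header("my-secret-passphrase")
# An uploader would POST json.dumps([entry]) to <site>/api/v1/entries
# with these headers; here we only build and print the payload.
print(json.dumps(entry)[:40])
```

This is roughly what the Nightscout uploader apps automate: read the CGM, format an entry, POST it, and let any browser or watch face pull the data back out.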
• 124. Hood Thabit et al. Home Use of an Artificial Beta Cell in Type 1 Diabetes, NEJM (2015) Home Use of an Artificial Beta Cell in Type 1 Diabetes The proportion of time that the glucose level was in the target range (primary end point) was significantly greater during the intervention period than during the control period — by a mean of 11.0 percentage points (95% confidence interval [CI], 8.1 to 13.8; P<0.001).
• 125. Hood Thabit et al. Home Use of an Artificial Beta Cell in Type 1 Diabetes, NEJM (2015) The overnight mean glucose level was significantly lower with the closed-loop system than with the control system (P<0.001), and the proportion of time that the glucose level was within the overnight target range was greater with the closed-loop system (P<0.001) Home Use of an Artificial Beta Cell in Type 1 Diabetes
  • 127. • Self-reported data from a small group – 18 of the first 40 users • The positive glucose and quality of life impact this system has had • 0.9% improvement in A1c (from 7.1% to 6.2%) • a strong time-in-range improvement from 58% to 81% • near-unanimous improvements in sleep quality OpenAPS DIY Automated Insulin Delivery Users Report 81% Time in Range, Better Sleep, and a 0.9% A1c Improvement https://openaps.org/2016/06/11/real-world-use-of-open-source-artificial-pancreas-systems-poster-presented-at-american-diabetes-association-scientific-sessions/
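The time-in-range and A1c figures reported above can be reproduced from raw CGM readings with simple arithmetic. A minimal sketch with synthetic readings; the estimated-A1c conversion uses the published ADAG regression, not OpenAPS's own code:

```python
def time_in_range(readings_mg_dl, low=70, high=180):
    """Fraction of CGM readings inside the target range [low, high] mg/dl."""
    in_range = sum(low <= g <= high for g in readings_mg_dl)
    return in_range / len(readings_mg_dl)

def estimated_a1c(readings_mg_dl):
    """Estimated A1c (%) from mean glucose via the ADAG regression:
    eA1c = (mean_glucose + 46.7) / 28.7  (Nathan et al., 2008)."""
    mean_g = sum(readings_mg_dl) / len(readings_mg_dl)
    return (mean_g + 46.7) / 28.7

readings = [95, 110, 150, 210, 140, 85, 60, 130]   # synthetic mg/dl values
print(round(time_in_range(readings), 2))   # 0.75 (6 of 8 readings in range)
print(round(estimated_a1c(readings), 1))   # 5.9
```

The OpenAPS community reports exactly these two summary statistics (time in range and A1c), which is why a 58%→81% time-in-range change and a 0.9-point A1c change describe the same underlying shift in the glucose distribution.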
  • 128. #OpenAPS rigs are shrinking in size https://diyps.org
• 129. First FDA-approved Artificial Pancreas http://www.fda.gov/NewsEvents/Newsroom/PressAnnouncements/ucm522974.htm • Medtronic's MiniMed 670G became the first artificial pancreas approved by the FDA for patients with type 1 diabetes • Clinical trial of 123 patients with type 1 diabetes aged 14 and older • Over 3 months of follow-up, A1c improved significantly from 7.4% to 6.9% • No serious adverse events such as diabetic ketoacidosis or severe hypoglycemia occurred during the trial • Medtronic plans to further validate efficacy and safety in patients aged 7-13 (2016. 9. 28)
• 130. https://myglu.org/articles/a-pathway-to-an-artificial-pancreas-an-interview-with-jdrf-s-aaron-kowalski •Step 1: Suspend insulin delivery when the glucose level falls to a preset threshold •Step 2: Predict that the user's glucose will fall to the threshold, and suspend or reduce insulin delivery in advance •Step 3: Prevent glucose not only from falling too far below the threshold, but also from rising too far above it •Step 4: Target a specific glucose value rather than a range (hybrid closed-loop product) •Step 5: Going beyond Step 4, also automate separate premeal insulin boluses •Step 6: Regulate additional hormones such as glucagon, not just insulin Six Steps of Artificial Pancreas (JDRF)
• 131. https://myglu.org/articles/a-pathway-to-an-artificial-pancreas-an-interview-with-jdrf-s-aaron-kowalski •Step 1: Suspend insulin delivery when the glucose level falls to a preset threshold •Step 2: Predict that the user's glucose will fall to the threshold, and suspend or reduce insulin delivery in advance •Step 3: Prevent glucose not only from falling too far below the threshold, but also from rising too far above it •Step 4: Target a specific glucose value rather than a range (hybrid closed-loop product) •Step 5: Going beyond Step 4, also automate separate premeal insulin boluses •Step 6: Regulate additional hormones such as glucagon, not just insulin Six Steps of Artificial Pancreas (JDRF)
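Steps 1 and 2 of the JDRF pathway reduce to a small control rule: suspend basal insulin at the threshold, or suspend early when a trend extrapolation predicts a crossing. A toy sketch with invented parameter values (80 mg/dl threshold, 30-minute horizon), illustrative only and not any device's actual algorithm:

```python
def dose_decision(glucose_now, glucose_prev, basal_rate,
                  threshold=80, horizon_min=30, sample_min=5):
    """Toy sketch of JDRF Steps 1-2 (all parameter values are hypothetical).

    Step 1: suspend basal insulin when glucose is already at/below threshold.
    Step 2: linearly extrapolate the recent trend and suspend in advance
            if glucose is predicted to cross the threshold within the horizon.
    """
    if glucose_now <= threshold:                 # Step 1: threshold suspend
        return 0.0
    trend_per_min = (glucose_now - glucose_prev) / sample_min
    predicted = glucose_now + trend_per_min * horizon_min
    if predicted <= threshold:                   # Step 2: predictive suspend
        return 0.0
    return basal_rate                            # otherwise keep basal running

print(dose_decision(75, 80, 1.0))    # 0.0 (already below threshold)
print(dose_decision(100, 110, 1.0))  # 0.0 (falling 2 mg/dl/min, predicted 40)
print(dose_decision(120, 118, 1.0))  # 1.0 (stable/rising, no action)
```

Steps 3-6 extend the same loop with insulin increases above range, a fixed setpoint instead of a band, automated boluses, and a second hormone; each step adds control authority while keeping this basic predict-then-act structure.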
• 132. MiniMed 670G vs. OpenAPS http://www.fda.gov/NewsEvents/Newsroom/PressAnnouncements/ucm522974.htm •Difficult to set a target value other than 120 mg/dl •Cannot be used for patients aged 13 or younger •Not yet approved outside the United States •High maintenance cost (about 8 million KRW up front plus 400,000 KRW per month)
• 133. Courtesy of Miyeong Kim (aka 소명맘)
  • 134. Courtesy of Miyeong Kim (aka 소명맘)
  • 135. Courtesy of Miyeong Kim (aka 소명맘)
  • 136. Courtesy of Miyeong Kim (aka 소명맘)
  • 137. Step 2. Collect the Data
  • 138.
  • 139. Sci Transl Med 2015
  • 140.
• 143. Epic MyChart Epic EHR Dexcom CGM Patients/User Devices EHR Hospital Withings + Apple Watch Apps HealthKit
  • 144.
• 145. • Apple HealthKit has partnered with 14 of the 23 leading hospitals in the US • Moving notably faster than the competing platforms Google Fit and S-Health • CIO of Beth Israel Deaconess: "Many of our 250,000 patients are generating all kinds of data with wearables.
 Our hospital cannot provide an interface to every one of these devices.
 But Apple can." 2015.2.5
  • 146. Step 3. Insight from the Data
  • 147.
  • 149. How to Analyze and Interpret the Big Data?
  • 150. and/or Two ways to get insights from the big data
• 151. Epic MyChart Epic EHR Dexcom CGM Patients/User Devices EHR Hospital Withings + Apple Watch Apps HealthKit
• 152. ...transfer from Share2 to HealthKit as mandated by Dexcom receiver Food and Drug Administration device classification. Once the glucose values reach HealthKit, they are passively shared with the Epic MyChart app (https://www.epic.com/software-phr.php). The MyChart patient portal is a component of the Epic EHR and uses the same database, and the CGM values populate a standard glucose flowsheet in the patient's chart. This connection is initially established when a provider places an order in a patient's electronic chart, resulting in a request to the patient within the MyChart app. Once the patient or patient proxy (parent) accepts this connection request on the mobile device, a communication bridge is established between HealthKit and MyChart enabling population of CGM data as frequently as every 5 minutes. Participation required confirmation of Bluetooth pairing of the CGM receiver to a mobile device, updating the mobile device with the most recent version of the operating system, Dexcom Share2 app, Epic MyChart app, and confirming or establishing a username and password for all accounts, including a parent's/adolescent's Epic MyChart account. Setup time averaged 45–60 minutes in addition to the scheduled clinic visit. During this time, there was specific verbal and written notification to the patients/parents that the diabetes healthcare team would not be actively monitoring or have real-time access to CGM data, which was out of scope for this pilot. The patients/parents were advised that they should continue to contact the diabetes care team by established means for any urgent questions/concerns. Additionally, patients/parents were advised to maintain updates Figure 1: Overview of the CGM data communication bridge architecture. Kumar R B, et al. J Am Med Inform Assoc 2016;0:1–6. 
doi:10.1093/jamia/ocv206, Brief Communication •Glucose data continuously monitored via Apple HealthKit and a Dexcom CGM device were integrated into the EHR •Study reporting improved glucose management for diabetes patients •Conducted at Stanford Children's Health and Stanford Medicine with 10 pediatric type 1 diabetes patients (288 readings/day) •EHR-based data analysis and visualization improved data review and patient communication •Patients could respond to glucose changes in real time, compared with the conventional model of responding only at clinic visits JAMIA 2016 Remote Patients Monitoring via Dexcom-HealthKit-Epic-Stanford
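At one reading every 5 minutes, a CGM yields the 288 readings/day cited above. A sketch (illustrative only, not the actual Stanford/Epic pipeline) of reducing one day of readings to the kind of summary row a glucose flowsheet might display:

```python
from statistics import mean

def daily_flowsheet_row(readings_mg_dl, low=70, high=180):
    """Summarize one day of CGM readings (up to 288 at 5-min intervals)
    into a flowsheet-style row: count, mean, min, max, percent in range."""
    return {
        "n_readings": len(readings_mg_dl),
        "mean": round(mean(readings_mg_dl), 1),
        "min": min(readings_mg_dl),
        "max": max(readings_mg_dl),
        "pct_in_range": round(
            100 * sum(low <= g <= high for g in readings_mg_dl)
            / len(readings_mg_dl)
        ),
    }

# One reading every 5 minutes for 24 h -> 288 values (synthetic data here).
day = [100 + (i % 24) * 5 for i in range(288)]
row = daily_flowsheet_row(day)
print(row["n_readings"])  # 288
```

Compressing the raw stream into a few summary numbers per day is what makes a 5-minute feed reviewable inside an EHR chart rather than overwhelming it.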
  • 154.
  • 155.
• 156. 16© 2017 by HURAYPOSITIVE INC., a Digital Healthcare Service Provider. This information is strictly privileged and confidential. All rights reserved. [Chart: HbA1c (%) over months 0-12; ▼0.63%p. then ▼0.64%p.] Products & Services: Clinical effectiveness (a clinical trial using Health Switch) • Phase 1 (months 0-6): intervention group vs. control group • Phase 2 (months 6-12): intervention and control groups crossed over • Intervention group: HbA1c reduced by 0.63%p.; control group: no significant change • After crossover: the former intervention group maintained its HbA1c level (▼0.04%p.); the former control group's HbA1c was reduced by 0.64%p. • N = 148 • Mean age: 52.2 years Results: 1. Meaningful glucose-lowering effect of the mobile intervention service 2. Potential to maintain lifestyle changes after about 6 months of service 3. A simple service that even elderly patients can use. Effects of Health Switch validated in a clinical trial. Key facts • Subjects: patients with type 2 diabetes • Period: Oct 2014 - Dec 2015
  • 157.
• 158. No choice but to bring AI into medicine
  • 159. Martin Duggan,“IBM Watson Health - Integrated Care & the Evolution to Cognitive Computing”
• 160. •Weak AI (Artificial Narrow Intelligence) • AI that excels at a specific task • Chess, quiz shows, email filtering, product recommendation, autonomous driving •Strong AI (Artificial General Intelligence) • Human-level AI across all domains • Reasoning, planning, problem solving, abstraction, learning complex concepts •Artificial Superintelligence • AI that surpasses humans in every area, including science, technology, and social ability • "Any sufficiently advanced technology is indistinguishable from magic." - Arthur C. Clarke
  • 161.
• 162. [Chart: cumulative proportion of respondents (10%, 50%, 90%) vs. year, 2010-2100, for the PT-AI, AGI, EETN, TOP100, and Combined surveys] When will machines achieve human-level intelligence? Philosophy and Theory of AI (2011) Artificial General Intelligence (2012) Greek Association for Artificial Intelligence Survey of most frequently cited 100 authors (2013) Combined Superintelligence, Nick Bostrom (2014)
• 163. Superintelligence: Science or Fiction? Panelists: Elon Musk (Tesla, SpaceX), Bart Selman (Cornell), Ray Kurzweil (Google), David Chalmers (NYU), Nick Bostrom (FHI), Demis Hassabis (DeepMind), Stuart Russell (Berkeley), Sam Harris, and Jaan Tallinn (CSER/FLI) January 6-8, 2017, Asilomar, CA https://brunch.co.kr/@kakao-it/49 https://www.youtube.com/watch?v=h0962biiZa4
• 164. Superintelligence: Science or Fiction? Panelists: Elon Musk (Tesla, SpaceX), Bart Selman (Cornell), Ray Kurzweil (Google), David Chalmers (NYU), Nick Bostrom (FHI), Demis Hassabis (DeepMind), Stuart Russell (Berkeley), Sam Harris, and Jaan Tallinn (CSER/FLI) January 6-8, 2017, Asilomar, CA Q: Is superintelligence an achievable domain? Q: Do you think an entity with superintelligence can emerge? All nine panelists (Elon Musk, Stuart Russell, Bart Selman, Ray Kurzweil, David Chalmers, Nick Bostrom, Demis Hassabis, Sam Harris, Jaan Tallinn) answered YES to both questions. Q: Do you hope superintelligence will be realized? Ray Kurzweil: YES; Nick Bostrom: YES; Demis Hassabis: YES; Elon Musk, Stuart Russell, Bart Selman, David Chalmers, Sam Harris, Jaan Tallinn: Complicated https://brunch.co.kr/@kakao-it/49 https://www.youtube.com/watch?v=h0962biiZa4
• 167. Superintelligence, Nick Bostrom (2014) Once strong AI at the human baseline is achieved, the subsequent take-off to superintelligence may take an extremely short time. How far to superintelligence
• 168. •Weak AI (Artificial Narrow Intelligence) • AI that excels at a specific task • Chess, quiz shows, email filtering, product recommendation, autonomous driving •Strong AI (Artificial General Intelligence) • Human-level AI across all domains • Reasoning, planning, problem solving, abstraction, learning complex concepts •Artificial Superintelligence • AI that surpasses humans in every area, including science, technology, and social ability • "Any sufficiently advanced technology is indistinguishable from magic." - Arthur C. Clarke
  • 169.
  • 170.
  • 171.
  • 172.
  • 173.
  • 174.
  • 175.
• 176. •Analysis of complex medical data to derive insights •Analysis/reading of medical imaging and pathology data •Monitoring of continuous data for prevention and prediction Medical applications of AI
• 177. •Analysis of complex medical data to derive insights •Analysis/reading of medical imaging and pathology data •Monitoring of continuous data for prevention and prediction Medical applications of AI
• 178. Jeopardy! In 2011, Watson competed against two human champions in a quiz match and won decisively
  • 179. IBM Watson on Jeopardy!
  • 180. 600,000 pieces of medical evidence 2 million pages of text from 42 medical journals and clinical trials 69 guidelines, 61,540 clinical trials IBM Watson on Medicine Watson learned... + 1,500 lung cancer cases physician notes, lab results and clinical research + 14,700 hours of hands-on training
  • 181.
  • 182.
  • 183.
  • 184.
  • 185.
  • 186. Annals of Oncology (2016) 27 (suppl_9): ix179-ix180. 10.1093/annonc/mdw601 Validation study to assess performance of IBM cognitive computing system Watson for oncology with Manipal multidisciplinary tumour board for 1000 consecutive cases: 
 An Indian experience • MMDT(Manipal multidisciplinary tumour board) treatment recommendation and data of 1000 cases of 4 different cancers breast (638), colon (126), rectum (124) and lung (112) which were treated in last 3 years was collected. • Of the treatment recommendations given by MMDT, WFO provided 
 
 50% in REC, 28% in FC, 17% in NREC • Nearly 80% of the recommendations were in WFO REC and FC group • 5% of the treatment provided by MMDT was not available with WFO • The degree of concordance varied depending on the type of cancer • WFO-REC was high in Rectum (85%) and least in Lung (17.8%) • high with TNBC (67.9%); HER2 negative (35%)
 • WFO took a median of 40 sec to capture, analyze and give the treatment.
 
 (vs MMDT took the median time of 15 min)
  • 187. WFO in ASCO 2017 • Early experience with IBM WFO cognitive computing system for lung 
 
and colorectal cancer treatment (Manipal Hospital)
  • Over the past 3 years: lung cancer (112), colon cancer (126), rectum cancer (124) • lung cancer: localized 88.9%, meta 97.9% • colon cancer: localized 85.5%, meta 76.6% • rectum cancer: localized 96.8%, meta 80.6% Performance of WFO in India 2017 ASCO annual Meeting, J Clin Oncol 35, 2017 (suppl; abstr 8527)
  • 188. San Antonio Breast Cancer Symposium—December 6-10, 2016 Concordance WFO (@T2) and MMDT (@T1* v. T2**) (N= 638 Breast Cancer Cases) Time Point /Concordance REC REC + FC n % n % T1* 296 46 463 73 T2** 381 60 574 90 This presentation is the intellectual property of the author/presenter.Contact somusp@yahoo.com for permission to reprint and/or distribute.26 * T1 Time of original treatment decision by MMDT in the past (last 1-3 years) ** T2 Time (2016) of WFO’s treatment advice and of MMDT’s treatment decision upon blinded re-review of non-concordant cases
• 189. Tentative conclusions •Concordance between Watson for Oncology and physicians: •differs by cancer type; •differs by stage even within the same cancer type; •differs by hospital and country for the same cancer type; •and may change over time.
• 190. Principles are needed •For which patients should Watson's opinion be sought? •How much should Watson be trusted (by cancer type)? •Should Watson's opinion be disclosed to the patient? •What should be done when Watson and the care team disagree? •Can Watson's use be reimbursed by insurance? The quality and outcomes of care may depend on these criteria, yet at present each hospital applies its own individual standards.
  • 191. Empowering the Oncology Community for Cancer Care Genomics Oncology Clinical Trial Matching Watson Health’s oncology clients span more than 35 hospital systems “Empowering the Oncology Community for Cancer Care” Andrew Norden, KOTRA Conference, March 2017, “The Future of Health is Cognitive”
  • 192. IBM Watson Health Watson for Clinical Trial Matching (CTM) 18 1. According to the National Comprehensive Cancer Network (NCCN) 2. http://csdd.tufts.edu/files/uploads/02_-_jan_15,_2013_-_recruitment-retention.pdf© 2015 International Business Machines Corporation Searching across eligibility criteria of clinical trials is time consuming and labor intensive Current Challenges Fewer than 5% of adult cancer patients participate in clinical trials1 37% of sites fail to meet minimum enrollment targets. 11% of sites fail to enroll a single patient 2 The Watson solution • Uses structured and unstructured patient data to quickly check eligibility across relevant clinical trials • Provides eligible trial considerations ranked by relevance • Increases speed to qualify patients Clinical Investigators (Opportunity) • Trials to Patient: Perform feasibility analysis for a trial • Identify sites with most potential for patient enrollment • Optimize inclusion/exclusion criteria in protocols Faster, more efficient recruitment strategies, better designed protocols Point of Care (Offering) • Patient to Trials: Quickly find the right trial that a patient might be eligible for amongst 100s of open trials available Improve patient care quality, consistency, increased efficiencyIBM Confidential
• 193. •Over a total of 16 weeks, 2,620 lung and breast cancer patients of HOG (Highlands Oncology Group) were studied •90 patients were screened against three Novartis breast cancer trial protocols •Clinical trial coordinator: 1 hour 50 minutes •Watson CTM: 24 minutes (a 78% time reduction) •Watson CTM automatically screened out the 94% of patients who did not meet the trial eligibility criteria
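The screening task Watson CTM automates is, at its core, matching patient records against structured inclusion/exclusion criteria. A toy rule-based sketch with invented field names, far simpler than CTM's handling of unstructured notes:

```python
def eligible(patient, criteria):
    """Check one patient against a trial's inclusion/exclusion criteria.

    A toy rule-based sketch of the screening step Watson CTM performs at
    scale; all field names here are hypothetical, for illustration only.
    """
    inc, exc = criteria["include"], criteria["exclude"]
    if not (inc["min_age"] <= patient["age"] <= inc["max_age"]):
        return False
    if patient["diagnosis"] != inc["diagnosis"]:
        return False
    if patient["stage"] not in inc["stages"]:
        return False
    return not any(c in patient["conditions"] for c in exc["conditions"])

trial = {
    "include": {"min_age": 18, "max_age": 75, "diagnosis": "breast cancer",
                "stages": {"II", "III"}},
    "exclude": {"conditions": {"pregnancy", "prior chemotherapy"}},
}
patients = [
    {"age": 54, "diagnosis": "breast cancer", "stage": "II", "conditions": set()},
    {"age": 81, "diagnosis": "breast cancer", "stage": "III", "conditions": set()},
    {"age": 47, "diagnosis": "lung cancer", "stage": "II", "conditions": set()},
]
print([eligible(p, trial) for p in patients])  # [True, False, False]
```

The reported speed-up comes precisely from automating this filter: if 94% of patients fail a criterion, the coordinator only needs to review the small remainder by hand.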
  • 194. Watson Genomics Overview 20 Watson Genomics Content • 20+ Content Sources Including: • Medical Articles (23Million) • Drug Information • Clinical Trial Information • Genomic Information Case Sequenced VCF / MAF, Log2, Dge Encryption Molecular Profile Analysis Pathway Analysis Drug Analysis Service Analysis, Reports, & Visualizations
• 195. •Analysis of complex medical data to derive insights •Analysis/reading of medical imaging and pathology data •Monitoring of continuous data for prevention and prediction Medical applications of AI
  • 197.
  • 198. 12 Olga Russakovsky* et al. Fig. 4 Random selection of images in ILSVRC detection validation set. The images in the top 4 rows were taken from ILSVRC2012 single-object localization validation set, and the images in the bottom 4 rows were collected from Flickr using scene-level queries. tage of all the positive examples available. The second is images collected from Flickr specifically for the de- http://arxiv.org/pdf/1409.0575.pdf
• 199. • Main competition • Classification: classify the object in the image • Localization: classify a single object in the image and locate it • Object detection: classify and locate all objects in the image 16 Olga Russakovsky* et al. Fig. 7 Tasks in ILSVRC. The first column shows the ground truth labeling on an example image, and the next three show three sample outputs with the corresponding evaluation score. http://arxiv.org/pdf/1409.0575.pdf
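The classification task above is scored by top-k error: a prediction counts as correct if the true label appears among the model's k most confident guesses (ILSVRC reports top-5). A minimal sketch with toy data:

```python
def topk_error(predictions, labels, k=5):
    """ILSVRC-style classification error: a prediction is correct if the
    true label appears among the model's top-k ranked guesses."""
    wrong = sum(label not in ranked[:k]
                for ranked, label in zip(predictions, labels))
    return wrong / len(labels)

# Each row: class indices ranked by model confidence (toy data).
preds = [
    [3, 7, 1, 0, 9],
    [2, 4, 6, 8, 5],
    [1, 3, 5, 7, 9],
]
labels = [7, 0, 1]
print(topk_error(preds, labels, k=5))  # 1 of 3 missed
print(topk_error(preds, labels, k=1))  # 2 of 3 missed
```

Top-5 error is the metric behind the classification curve on the next slide: the drop from roughly 28% (2010) to under 5% (2015) is what "surpassing human-level" refers to.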
  • 200. Performance of winning entries in the ILSVRC2010-2015 competitions in each of the three tasks http://image-net.org/challenges/LSVRC/2015/results#loc Single-object localization Localizationerror 0 10 20 30 40 50 2011 2012 2013 2014 2015 Object detection Averageprecision 0.0 17.5 35.0 52.5 70.0 2013 2014 2015 Image classification Classificationerror 0 10 20 30 2010 2011 2012 2013 2014 2015
  • 201.
  • 202. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun, “Deep Residual Learning for Image Recognition”, 2015 How deep is deep?
• 206. DeepFace: Closing the Gap to Human-Level Performance in Face Verification Taigman, Y. et al. (2014). DeepFace: Closing the Gap to Human-Level Performance in Face Verification, CVPR'14. Figure 2. Outline of the DeepFace architecture. A front-end of a single convolution-pooling-convolution filtering on the rectified input, followed by three locally-connected layers and two fully-connected layers. Colors illustrate feature maps produced at each layer. The net includes more than 120 million parameters, where more than 95% come from the local and fully connected layers. ...very few parameters. These layers merely expand the input into a set of simple local features. The subsequent layers (L4, L5 and L6) are instead locally connected [13, 16]; like a convolutional layer they apply a filter bank, but every location in the feature map learns a different set of filters. Since different regions of an aligned image have different local statistics, the spatial stationarity... The goal of training is to maximize the probability of the correct class (face id). We achieve this by minimizing the cross-entropy loss for each training sample. If k is the index of the true label for a given input, the loss is: L = −log p_k. The loss is minimized over the parameters by computing the gradient of L w.r.t. the parameters and... Human: 95% vs. DeepFace in Facebook: 97.35% Recognition Accuracy for Labeled Faces in the Wild (LFW) dataset (13,233 images, 5,749 people)
• 207. FaceNet: A Unified Embedding for Face Recognition and Clustering Schroff, F. et al. (2015). FaceNet: A Unified Embedding for Face Recognition and Clustering Human: 95% vs. FaceNet of Google: 99.63% Recognition Accuracy for Labeled Faces in the Wild (LFW) dataset (13,233 images, 5,749 people) Figure 6. LFW errors. This shows all pairs of images that were incorrectly classified on LFW. Only eight of the 13 errors shown here are actual errors; the other four are mislabeled in LFW. 5.7. Performance on Youtube Faces DB We use the average similarity of all pairs of the first one hundred frames that our face detector detects in each video. This gives us a classification accuracy of 95.12%±0.39. Using the first one thousand frames results in 95.18%. Compared to [17] 91.4% who also evaluate one hundred frames per video we reduce the error rate by almost half. DeepId2+ [15] achieved 93.2% and our method reduces this error by 30%, comparable to our improvement on LFW. 5.8. Face Clustering Our compact embedding lends itself to be used in order to cluster a user's personal photos into groups of people with the same identity. The constraints in assignment imposed by clustering faces, compared to the pure verification task, lead to truly amazing results. Figure 7 shows one cluster in a user's personal photo collection, generated using agglomerative clustering. It is a clear showcase of the incredible invariance to occlusion, lighting, pose and even age. Figure 7. Face Clustering. Shown is an exemplar cluster for one user. All these images in the user's personal photo collection were clustered together. 6. 
Summary We provide a method to directly learn an embedding into a Euclidean space for face verification. This sets it apart from other methods [15, 17] who use the CNN bottleneck layer, or require additional post-processing such as concatenation of multiple models and PCA, as well as SVM classification. Our end-to-end training both simplifies the setup and shows that directly optimizing a loss relevant to the task at hand improves performance. Another strength of our model is that it only requires...
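FaceNet verifies a face pair by thresholding the squared Euclidean distance between L2-normalized embeddings. A sketch with toy 4-dimensional vectors standing in for the 128-dimensional CNN embeddings; the 1.1 threshold is an illustrative value (in the paper the cutoff is tuned on a validation set):

```python
from math import sqrt

def l2_normalize(v):
    """Project an embedding onto the unit hypersphere, as FaceNet does."""
    norm = sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def same_identity(emb_a, emb_b, threshold=1.1):
    """FaceNet-style verification: two faces match when the squared L2
    distance between their normalized embeddings falls below a threshold."""
    a, b = l2_normalize(emb_a), l2_normalize(emb_b)
    dist_sq = sum((x - y) ** 2 for x, y in zip(a, b))
    return dist_sq < threshold

# Toy 4-d "embeddings"; real FaceNet embeddings are 128-d CNN outputs.
anchor = [0.9, 0.1, 0.2, 0.1]
close = [0.8, 0.2, 0.2, 0.1]
far = [0.1, 0.9, 0.1, 0.8]
print(same_identity(anchor, close))  # True
print(same_identity(anchor, far))    # False
```

Because distance in embedding space directly encodes identity similarity, the same representation supports verification (thresholding), recognition (nearest neighbor), and clustering (agglomerative grouping) without retraining.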
• 208. Show and Tell: A Neural Image Caption Generator Vinyals, O. et al. (2015). Show and Tell: A Neural Image Caption Generator, arXiv:1411.4555. Authors include Samy Bengio and Dumitru Erhan (Google). A group of people shopping at an outdoor market. There are many vegetables at the fruit stand. Vision: Deep CNN / Language: Generating RNN Figure 1. NIC, our model, is based end-to-end on a neural network consisting of a vision CNN followed by a language generating RNN
  • 209. Show and Tell: A Neural Image Caption Generator Vinyals, O. et al. (2015). Show and Tell:A Neural Image Caption Generator, arXiv:1411.4555 Figure 5. A selection of evaluation results, grouped by human rating.
  • 211. Medical Imaging AI Startups by Applications
  • 212. Bone Age Assessment • M: 28 Classes • F: 20 Classes • Method: G.P. • Top3-95.28% (F) • Top3-81.55% (M)
  • 213.
• 214. Business Area Medical Image Analysis VUNOnet and our machine learning technology will help doctors and hospitals manage medical scans and images intelligently to make diagnosis faster and more accurately. Original Image / Automatic Segmentation. Classes: Normal, Emphysema, Reticular Opacity. Our system finds DILDs at the highest accuracy * DILDs: Diffuse Interstitial Lung Disease Digital Radiologist Collaboration with Prof. Joon Beom Seo (Asan Medical Center) Analysed 1200 patients for 3 months
  • 215. Digital Radiologist Collaboration with Prof. Joon Beom Seo (Asan Medical Center) Analysed 1200 patients for 3 months
  • 216. Digital Radiologist Med Phys. 2013 May;40(5):051912. doi: 10.1118/1.4802214. Collaboration with Prof. Joon Beom Seo (Asan Medical Center) Analysed 1200 patients for 3 months
  • 217. Digital Radiologist Med Phys. 2013 May;40(5):051912. doi: 10.1118/1.4802214. Collaboration with Prof. Joon Beom Seo (Asan Medical Center) Analysed 1200 patients for 3 months
• 218. Digital Radiologist Med Phys. 2013 May;40(5):051912. doi: 10.1118/1.4802214. Collaboration with Prof. Joon Beom Seo (Asan Medical Center) Analysed 1200 patients for 3 months Feature Engineering vs Feature Learning • Visualization of Hand-crafted Feature vs Learned Feature in 2D
  • 219. Bench to Bedside : Practical Applications • Contents-based Case Retrieval –Finding similar cases with the clinically matching context - Search engine for medical images. –Clinicians can refer the diagnosis, prognosis of past similar patients to make better clinical decision. –Accepted to present at RSNA 2017 Digital Radiologist
• 220. •Zebra Medical Vision launched a service that reads radiology images for $1 each (October 2017) •The exact list of findings is not yet final, but is expected to include Pulmonary Hypertension, Lung Nodule, Fatty Liver, Emphysema, Coronary Calcium Scoring, Bone Mineral Density, Aortic Aneurysm, and others https://www.zebra-med.com/aione/
  • 221. Zebra Medical Vision’s AI1: AI at Your Fingertips https://www.youtube.com/watch?v=0PGgCpXa-Fs
  • 222. Detection of Diabetic Retinopathy
• 223. Diabetic retinopathy • A major complication of diabetes: occurs in 90% of patients who have had diabetes for 30 years or more • Ophthalmologists photograph the fundus (the inside of the eye) and read the images • Diagnosed by assessing retinal microvascular proliferation, hemorrhage, and exudates
  • 224. Copyright 2016 American Medical Association. All rights reserved. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs Varun Gulshan, PhD; Lily Peng, MD, PhD; Marc Coram, PhD; Martin C. Stumpe, PhD; Derek Wu, BS; Arunachalam Narayanaswamy, PhD; Subhashini Venugopalan, MS; Kasumi Widner, MS; Tom Madams, MEng; Jorge Cuadros, OD, PhD; Ramasamy Kim, OD, DNB; Rajiv Raman, MS, DNB; Philip C. Nelson, BS; Jessica L. Mega, MD, MPH; Dale R. Webster, PhD IMPORTANCE Deep learning is a family of computational methods that allow an algorithm to program itself by learning from a large set of examples that demonstrate the desired behavior, removing the need to specify rules explicitly. Application of these methods to medical imaging requires further assessment and validation. OBJECTIVE To apply deep learning to create an algorithm for automated detection of diabetic retinopathy and diabetic macular edema in retinal fundus photographs. DESIGN AND SETTING A specific type of neural network optimized for image classification called a deep convolutional neural network was trained using a retrospective development data set of 128 175 retinal images, which were graded 3 to 7 times for diabetic retinopathy, diabetic macular edema, and image gradability by a panel of 54 US licensed ophthalmologists and ophthalmology senior residents between May and December 2015. The resultant algorithm was validated in January and February 2016 using 2 separate data sets, both graded by at least 7 US board-certified ophthalmologists with high intragrader consistency. EXPOSURE Deep learning–trained algorithm. MAIN OUTCOMES AND MEASURES The sensitivity and specificity of the algorithm for detecting referable diabetic retinopathy (RDR), defined as moderate and worse diabetic retinopathy, referable diabetic macular edema, or both, were generated based on the reference standard of the majority decision of the ophthalmologist panel. 
The algorithm was evaluated at 2 operating points selected from the development set, one selected for high specificity and another for high sensitivity. RESULTS TheEyePACS-1datasetconsistedof9963imagesfrom4997patients(meanage,54.4 years;62.2%women;prevalenceofRDR,683/8878fullygradableimages[7.8%]);the Messidor-2datasethad1748imagesfrom874patients(meanage,57.6years;42.6%women; prevalenceofRDR,254/1745fullygradableimages[14.6%]).FordetectingRDR,thealgorithm hadanareaunderthereceiveroperatingcurveof0.991(95%CI,0.988-0.993)forEyePACS-1and 0.990(95%CI,0.986-0.995)forMessidor-2.Usingthefirstoperatingcutpointwithhigh specificity,forEyePACS-1,thesensitivitywas90.3%(95%CI,87.5%-92.7%)andthespecificity was98.1%(95%CI,97.8%-98.5%).ForMessidor-2,thesensitivitywas87.0%(95%CI,81.1%- 91.0%)andthespecificitywas98.5%(95%CI,97.7%-99.1%).Usingasecondoperatingpoint withhighsensitivityinthedevelopmentset,forEyePACS-1thesensitivitywas97.5%and specificitywas93.4%andforMessidor-2thesensitivitywas96.1%andspecificitywas93.9%. CONCLUSIONS AND RELEVANCE In this evaluation of retinal fundus photographs from adults with diabetes, an algorithm based on deep machine learning had high sensitivity and specificity for detecting referable diabetic retinopathy. Further research is necessary to determine the feasibility of applying this algorithm in the clinical setting and to determine whether use of the algorithm could lead to improved care and outcomes compared with current ophthalmologic assessment. JAMA. doi:10.1001/jama.2016.17216 Published online November 29, 2016. 
Author Affiliations: Google Inc, Mountain View, California (Gulshan, Peng, Coram, Stumpe, Wu, Narayanaswamy, Venugopalan, Widner, Madams, Nelson, Webster); Department of Computer Science, University of Texas, Austin (Venugopalan); EyePACS LLC, San Jose, California (Cuadros); School of Optometry, Vision Science Graduate Group, University of California, Berkeley (Cuadros); Aravind Medical Research Foundation, Aravind Eye Care System, Madurai, India (Kim); Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya, Chennai, Tamil Nadu, India (Raman); Verily Life Sciences, Mountain View, California (Mega); Cardiovascular Division, Department of Medicine, Brigham and Women’s Hospital and Harvard Medical School, Boston, Massachusetts (Mega). Corresponding Author: Lily Peng, MD, PhD, Google Research, 1600 Amphitheatre Way, Mountain View, CA 94043 (lhpeng@google.com). Research JAMA | Original Investigation | INNOVATIONS IN HEALTH CARE DELIVERY
  • 225. Case Study: TensorFlow in Medicine - Retinal Imaging (TensorFlow Dev Summit 2017)
  • 227. Training Set / Test Set • A CNN was trained retrospectively on 128,175 retinal fundus images • Each image was graded 3-7 times by a panel of 54 US licensed ophthalmologists • The algorithm's reads were compared with those of 7-8 highly skilled ophthalmologists • Validation sets: EyePACS-1 (9,963 images), Messidor-2 (1,748 images). eFigure 2. Screenshot of the Second Screen of the Grading Tool, Which Asks Graders to Assess the Image for DR, DME and Other Notable Conditions or Findings
  • 228. • AUC = 0.991 on EyePACS-1 and 0.990 on Messidor-2 • Sensitivity and specificity on par with the panel of 7-8 ophthalmologists • F-score: 0.95 (vs. 0.91 for the human graders) • The effect of data set size on algorithm performance was also examined and shown to plateau at around 60,000 images. Figure 2. Validation Set Performance for Referable Diabetic Retinopathy. Performance of the algorithm (black curve) and ophthalmologists (colored circles) for the presence of referable diabetic retinopathy (moderate or worse diabetic retinopathy or referable diabetic macular edema) on A, EyePACS-1 (8788 fully gradable images; AUC, 99.1%; 95% CI, 98.8%-99.3%) and B, Messidor-2 (1745 fully gradable images; AUC, 99.0%; 95% CI, 98.6%-99.5%). The black diamonds on the graph correspond to the sensitivity and specificity of the algorithm at the high-sensitivity and high-specificity operating points. In A, for the high-sensitivity operating point, specificity was 93.4% (95% CI, 92.8%-94.0%) and sensitivity was 97.5% (95% CI, 95.8%-98.7%); for the high-specificity operating point, specificity was 98.1% (95% CI, 97.8%-98.5%) and sensitivity was 90.3% (95% CI, 87.5%-92.7%). In B, for the high-sensitivity operating point, specificity was 93.9% (95% CI, 92.4%-95.3%) and sensitivity was 96.1% (95% CI, 92.4%-98.3%); for the high-specificity operating point, specificity was 98.5% (95% CI, 97.7%-99.1%) and sensitivity was 87.0% (95% CI, 81.1%-91.0%). There were 8 ophthalmologists who graded EyePACS-1 and 7 ophthalmologists who graded Messidor-2. 
AUC indicates area under the receiver operating characteristic curve.
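The two operating points quoted above (high sensitivity for screening, high specificity for referral) are simply two thresholds on the same ROC curve. A minimal sketch with toy scores and labels (not the study's data) of how sensitivity and specificity trade off as the threshold moves:

```python
# Sketch: sensitivity/specificity at two operating points on one score scale.
# Scores and labels below are illustrative, not the EyePACS-1 data.

def sens_spec(scores, labels, threshold):
    """Sensitivity and specificity of the rule `score >= threshold` vs binary labels."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

scores = [0.05, 0.10, 0.20, 0.35, 0.60, 0.70, 0.80, 0.95]
labels = [0,    0,    0,    1,    0,    1,    1,    1]

# Lower threshold -> high-sensitivity operating point (miss fewer cases).
print(sens_spec(scores, labels, 0.30))   # (1.0, 0.75)
# Higher threshold -> high-specificity operating point (fewer false referrals).
print(sens_spec(scores, labels, 0.65))   # (0.75, 1.0)
```

The choice between the two points is a clinical policy decision, not a modeling one: the same trained model serves both by shifting the cut point.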
  • 231. LETTER doi:10.1038/nature21056 Dermatologist-level classification of skin cancer with deep neural networks Andre Esteva1 *, Brett Kuprel1 *, Roberto A. Novoa2,3 , Justin Ko2 , Susan M. Swetter2,4 , Helen M. Blau5 & Sebastian Thrun6 Skin cancer, the most common human malignancy1–3 , is primarily diagnosed visually, beginning with an initial clinical screening and followed potentially by dermoscopic analysis, a biopsy and histopathological examination. Automated classification of skin lesions using images is a challenging task owing to the fine-grained variability in the appearance of skin lesions. Deep convolutional neural networks (CNNs)4,5 show potential for general and highly variable tasks across many fine-grained object categories6–11 . Here we demonstrate classification of skin lesions using a single CNN, trained end-to-end from images directly, using only pixels and disease labels as inputs. We train a CNN using a dataset of 129,450 clinical images—two orders of magnitude larger than previous datasets12 —consisting of 2,032 different diseases. We test its performance against 21 board-certified dermatologists on biopsy-proven clinical images with two critical binary classification use cases: keratinocyte carcinomas versus benign seborrheic keratoses; and malignant melanomas versus benign nevi. The first case represents the identification of the most common cancers, the second represents the identification of the deadliest skin cancer. The CNN achieves performance on par with all tested experts across both tasks, demonstrating an artificial intelligence capable of classifying skin cancer with a level of competence comparable to dermatologists. Outfitted with deep neural networks, mobile devices can potentially extend the reach of dermatologists outside of the clinic. It is projected that 6.3 billion smartphone subscriptions will exist by the year 2021 (ref. 
13) and can therefore potentially provide low-cost universal access to vital diagnostic care. There are 5.4 million new cases of skin cancer in the United States2 every year. One in five Americans will be diagnosed with a cutaneous malignancy in their lifetime. Although melanomas represent fewer than 5% of all skin cancers in the United States, they account for approximately 75% of all skin-cancer-related deaths, and are responsible for over 10,000 deaths annually in the United States alone. Early detection is critical, as the estimated 5-year survival rate for melanoma drops from over 99% if detected in its earliest stages to about 14% if detected in its latest stages. We developed a computational method which may allow medical practitioners and patients to proactively track skin lesions and detect cancer earlier. By creating a novel disease taxonomy, and a disease-partitioning algorithm that maps individual diseases into training classes, we are able to build a deep learning system for automated dermatology. Previous work in dermatological computer-aided classification12,14,15 has lacked the generalization capability of medical practitioners owing to insufficient data and a focus on standardized tasks such as dermoscopy16–18 and histological image classification19–22 . Dermoscopy images are acquired via a specialized instrument and histological images are acquired via invasive biopsy and microscopy; whereby both modalities yield highly standardized images. Photographic images (for example, smartphone images) exhibit variability in factors such as zoom, angle and lighting, making classification substantially more challenging23,24 . We overcome this challenge by using a data-driven approach—1.41 million pre-training and training images make classification robust to photographic variability. Many previous techniques require extensive preprocessing, lesion segmentation and extraction of domain-specific visual features before classification. 
By contrast, our system requires no hand-crafted features; it is trained end-to-end directly from image labels and raw pixels, with a single network for both photographic and dermoscopic images. The existing body of work uses small datasets of typically less than a thousand images of skin lesions16,18,19 , which, as a result, do not generalize well to new images. We demonstrate generalizable classification with a new dermatologist-labelled dataset of 129,450 clinical images, including 3,374 dermoscopy images. Deep learning algorithms, powered by advances in computation and very large datasets25 , have recently been shown to exceed human performance in visual tasks such as playing Atari games26 , strategic board games like Go27 and object recognition6 . In this paper we outline the development of a CNN that matches the performance of dermatologists at three key diagnostic tasks: melanoma classification, melanoma classification using dermoscopy and carcinoma classification. We restrict the comparisons to image-based classification. We utilize a GoogleNet Inception v3 CNN architecture9 that was pretrained on approximately 1.28 million images (1,000 object categories) from the 2014 ImageNet Large Scale Visual Recognition Challenge6 , and train it on our dataset using transfer learning28 . Figure 1 shows the working system. The CNN is trained using 757 disease classes. Our dataset is composed of dermatologist-labelled images organized in a tree-structured taxonomy of 2,032 diseases, in which the individual diseases form the leaf nodes. The images come from 18 different clinician-curated, open-access online repositories, as well as from clinical data from Stanford University Medical Center. Figure 2a shows a subset of the full taxonomy, which has been organized clinically and visually by medical experts. We split our dataset into 127,463 training and validation images and 1,942 biopsy-labelled test images. 
To take advantage of fine-grained information contained within the taxonomy structure, we develop an algorithm (Extended Data Table 1) to partition diseases into fine-grained training classes (for example, amelanotic melanoma and acrolentiginous melanoma). During inference, the CNN outputs a probability distribution over these fine classes. To recover the probabilities for coarser-level classes of interest (for example, melanoma) we sum the probabilities of their descendants (see Methods and Extended Data Fig. 1 for more details). We validate the effectiveness of the algorithm in two ways, using nine-fold cross-validation. First, we validate the algorithm using a three-class disease partition—the first-level nodes of the taxonomy, which represent benign lesions, malignant lesions and non-neoplastic 1 Department of Electrical Engineering, Stanford University, Stanford, California, USA. 2 Department of Dermatology, Stanford University, Stanford, California, USA. 3 Department of Pathology, Stanford University, Stanford, California, USA. 4 Dermatology Service, Veterans Affairs Palo Alto Health Care System, Palo Alto, California, USA. 5 Baxter Laboratory for Stem Cell Biology, Department of Microbiology and Immunology, Institute for Stem Cell Biology and Regenerative Medicine, Stanford University, Stanford, California, USA. 6 Department of Computer Science, Stanford University, Stanford, California, USA. *These authors contributed equally to this work. © 2017 Macmillan Publishers Limited, part of Springer Nature. All rights reserved.
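The inference scheme described above recovers coarse-class probabilities by summing the CNN's fine-grained training-class probabilities over each node's descendants in the taxonomy. A minimal sketch with a hypothetical two-node taxonomy and made-up softmax outputs (chosen so the sums match the 92%/8% example shown in the paper's Figure 1; the class lists and numbers are illustrative, not the actual 757-class partition):

```python
# Sketch: recovering inference-class probabilities by summing the
# probabilities of descendant training classes in the disease taxonomy.

taxonomy = {  # coarse inference class -> its fine-grained training classes
    "malignant melanocytic lesion": ["acral-lentiginous melanoma",
                                     "amelanotic melanoma",
                                     "lentigo melanoma"],
    "benign melanocytic lesion":    ["blue nevus", "halo nevus",
                                     "mongolian spot"],
}

fine_probs = {  # hypothetical CNN softmax output over the fine classes
    "acral-lentiginous melanoma": 0.40,
    "amelanotic melanoma":        0.35,
    "lentigo melanoma":           0.17,
    "blue nevus":                 0.05,
    "halo nevus":                 0.02,
    "mongolian spot":             0.01,
}

coarse_probs = {coarse: sum(fine_probs[f] for f in fines)
                for coarse, fines in taxonomy.items()}
print(coarse_probs)  # malignant ~0.92, benign ~0.08 (cf. Figure 1's example)
```

Training on the fine partition lets the network exploit within-class visual structure, while the sum at inference time yields the clinically relevant coarse decision.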
  • 232. In this task, the CNN achieves 72.1 ± 0.9% (mean ± s.d.) overall accuracy (the average of individual inference class accuracies) and two dermatologists attain 65.56% and 66.0% accuracy on a subset of the validation set. Second, we validate the algorithm using a nine-class disease partition—the second-level nodes—so that the diseases of each class have similar medical treatment plans. The CNN is compared with dermatologists in two trials, one using standard images and the other using dermoscopy images, which reflect the two steps that a dermatologist may use to obtain a clinical impression; the same CNN is used for all tasks. Figure 2b shows a few example images, demonstrating the difficulty of distinguishing between malignant and benign lesions, which share many visual features; the comparison metrics are sensitivity and specificity. Figure 1. Deep CNN layout. Data flow is from left to right: an image of a skin lesion (for example, melanoma) is sequentially warped into a probability distribution over clinical classes of skin disease using the Google Inception v3 CNN architecture pretrained on the ImageNet dataset (1.28 million images, 1,000 generic object classes) and fine-tuned on our own dataset of 129,450 skin lesions comprising 2,032 different diseases. The 757 training classes are defined using a novel taxonomy of skin disease and a partitioning algorithm that maps diseases into training classes (for example, acral-lentiginous melanoma, amelanotic melanoma, lentigo melanoma). Inference classes are more general and are composed of one or more training classes (for example, malignant melanocytic lesions, the class of melanomas). The probability of an inference class is calculated by summing the probabilities of its training classes according to the taxonomy structure (see Methods); example output: 92% malignant melanocytic lesion, 8% benign melanocytic lesion. 
Inception v3 CNN architecture reproduced from https://research.googleblog.com/2016/03/train-your-own-image-classifier-with.html GoogleNet Inception v3 • Built their own dataset of 129,450 skin-lesion images • Data curated by 18 US dermatologists • Trained a CNN (Inception v3) on the images • Compared the algorithm's reads with those of 21 board-certified dermatologists • Keratinocyte carcinoma vs. benign seborrheic keratosis • Malignant melanoma vs. benign lesions (standard photographic images) • Malignant melanoma vs. benign lesions (dermoscopy images)
  • 233. Skin cancer classification performance of the CNN and dermatologists. [ROC panels comparing the algorithm (AUC 0.91-0.96) with individual dermatologists and the average dermatologist: melanoma (130 and 225 images), melanoma dermoscopy (111 and 1,010 images), carcinoma (135 and 707 images).] A substantial number of the 21 dermatologists were less accurate than the algorithm, and the dermatologists' average performance was also worse than the algorithm's.
  • 234. Skin cancer classification performance of the CNN and dermatologists. [Same ROC panels, shown without annotations.]
  • 235. Skin Cancer Image Classification (TensorFlow Dev Summit 2017) Skin cancer classification performance of the CNN and dermatologists. https://www.youtube.com/watch?v=toK1OSLep3s&t=419s
  • 236.
  • 237.
  • 239. Fig 1. What can consumer wearables do? Heart rate can be measured with an oximeter built into a ring [3], muscle activity with an electromyographic sensor embedded into clothing [4], stress with an electrodermal sensor incorporated into a wristband [5], and physical activity or sleep patterns via an accelerometer in a watch [6,7]. In addition, a female's most fertile period can be identified with detailed body temperature tracking [8], while levels of mental attention can be monitored with a small number of non-gelled electroencephalogram (EEG) electrodes [9]. Levels of social interaction (also known to a… PLOS Medicine 2016
  • 240. Medical applications of AI • Analysis of complex medical data and derivation of insights • Analysis and reading of medical imaging and pathology data • Monitoring of continuous data for prevention and prediction
  • 242.
  • 243. SEPSIS A targeted real-time early warning score (TREWScore) for septic shock Katharine E. Henry,1 David N. Hager,2 Peter J. Pronovost,3,4,5 Suchi Saria1,3,5,6 * Sepsis is a leading cause of death in the United States, with mortality highest among patients who develop septic shock. Early aggressive treatment decreases morbidity and mortality. Although automated screening tools can detect patients currently experiencing severe sepsis and septic shock, none predict those at greatest risk of developing shock. We analyzed routinely available physiological and laboratory data from intensive care unit patients and developed “TREWScore,” a targeted real-time early warning score that predicts which patients will develop septic shock. TREWScore identified patients before the onset of septic shock with an area under the ROC (receiver operating characteristic) curve (AUC) of 0.83 [95% confidence interval (CI), 0.81 to 0.85]. At a specificity of 0.67, TREWScore achieved a sensitivity of 0.85 and identified patients a median of 28.2 [interquartile range (IQR), 10.6 to 94.2] hours before onset. Of those identified, two-thirds were identified before any sepsis-related organ dysfunction. In comparison, the Modified Early Warning Score, which has been used clinically for septic shock prediction, achieved a lower AUC of 0.73 (95% CI, 0.71 to 0.76). A routine screening protocol based on the presence of two of the systemic inflammatory response syndrome criteria, suspicion of infection, and either hypotension or hyperlactatemia achieved a lower sensitivity of 0.74 at a comparable specificity of 0.64. Continuous sampling of data from the electronic health records and calculation of TREWScore may allow clinicians to identify patients at risk for septic shock and provide earlier interventions that would prevent or mitigate the associated morbidity and mortality. 
INTRODUCTION Seven hundred fifty thousand patients develop severe sepsis and septic shock in the United States each year. More than half of them are admitted to an intensive care unit (ICU), accounting for 10% of all ICU admissions, 20 to 30% of hospital deaths, and $15.4 billion in annual health care costs (1–3). Several studies have demonstrated that morbidity, mortality, and length of stay are decreased when severe sepsis and septic shock are identified and treated early (4–8). In particular, one study showed that mortality from septic shock increased by 7.6% with every hour that treatment was delayed after the onset of hypotension (9). More recent studies comparing protocolized care, usual care, and early goal-directed therapy (EGDT) for patients with septic shock suggest that usual care is as effective as EGDT (10–12). Some have interpreted this to mean that usual care has improved over time and reflects important aspects of EGDT, such as early antibiotics and early aggressive fluid resuscitation (13). It is likely that continued early identification and treatment will further improve outcomes. However, the Acute Physiology Score (SAPS II), Sequential Organ Failure Assessment (SOFA) scores, Modified Early Warning Score (MEWS), and Simple Clinical Score (SCS) have been validated to assess illness severity and risk of death among septic patients (14–17). Although these scores are useful for predicting general deterioration or mortality, they typically cannot distinguish with high sensitivity and specificity which patients are at highest risk of developing a specific acute condition. The increased use of electronic health records (EHRs), which can be queried in real time, has generated interest in automating tools that identify patients at risk for septic shock (18–20). 
A number of “early warning systems,” “track and trigger” initiatives, “listening applications,” and “sniffers” have been implemented to improve detection and timeliness of therapy for patients with severe sepsis and septic shock (18, 20–23). Although these tools have been successful at detecting patients currently experiencing severe sepsis or septic shock, none predict which patients are at highest risk of developing septic shock. The adoption of the Affordable Care Act has added to the growing excitement around predictive models derived from electronic health records.
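TREWScore's headline number is an AUC of 0.83. As a reminder of what that statistic means, here is a minimal sketch (toy risk scores, not the study's data) computing AUC via its rank interpretation: the probability that a randomly chosen case that develops septic shock is scored above a randomly chosen case that does not.

```python
# Sketch: AUC as the Mann-Whitney rank statistic over risk scores.
# Scores/labels are illustrative, not TREWScore outputs.

def auc(scores, labels):
    """AUC = P(score of a positive > score of a negative); ties count 1/2."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.45, 0.2, 0.15, 0.1]
labels = [1,   1,   0,   1,   1,    0,   1,    0,   0,    0]
print(auc(scores, labels))  # 0.84
```

An AUC of 0.83 therefore says the model ranks a future shock patient above a non-shock patient about 83% of the time; the operating threshold (e.g. sensitivity 0.85 at specificity 0.67) is chosen separately on that ranking.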
  • 244. • 80 beds across three units at Ajou University Hospital: the trauma center, emergency room, and medical ICU • Eight kinds of patient vital-sign data (oxygen saturation, blood pressure, pulse, EEG, body temperature, etc.) integrated into a single store • The vital signs are monitored and analyzed in real time by AI to predict events 1-3 hours in advance • Target conditions include arrhythmia, sepsis, acute respiratory distress syndrome (ARDS), and unplanned intubation
  • 245.
  • 246. Blood glucose management • Can we predict post-meal changes in blood glucose? • Which foods raise blood glucose the most?
  • 247. Blood glucose management • Blood glucose is an important value related not only to diabetes but to many metabolic diseases • To predict the postprandial glycemic response (PPGR), common practice relies on • the glycemic index of each food • the carbohydrate content of each food • But do different people show the same glucose response to the same food?
  • 249. Article Personalized Nutrition by Prediction of Glycemic Responses Graphical Abstract Highlights • High interpersonal variability in post-meal glucose observed in an 800-person cohort • Using personal and microbiome features enables accurate glucose response prediction • Prediction is accurate and superior to common practice in an independent cohort • Short-term personalized dietary interventions successfully lower post-meal glucose Authors David Zeevi, Tal Korem, Niv Zmora, ..., Zamir Halpern, Eran Elinav, Eran Segal Correspondence eran.elinav@weizmann.ac.il (E.E.), eran.segal@weizmann.ac.il (E.S.) In Brief People eating identical meals present high variability in post-meal blood glucose response. Personalized diets created with the help of an accurate predictor of blood glucose response that integrates parameters such as dietary habits, physical activity, and gut microbiota may successfully lower post-meal blood glucose and its long-term metabolic consequences. Zeevi et al., 2015, Cell 163, 1079–1094 November 19, 2015 ©2015 Elsevier Inc. http://www.cell.com/cell/abstract/S0092-8674(15)01481-6
  • 250. [Figure 1: study design and cohort profiling. Main cohort of 800 participants, each profiled for one week: continuous glucose monitoring with a subcutaneous sensor (iPro2; 130K hours, 1.56M glucose measurements over 5,435 days), food/sleep/physical-activity diaries logged via a smartphone-adjusted website (46,898 meals, 9.8M Calories documented, 2,532 exercises), standardized meals with 50 g available carbohydrates (glucose, fructose, bread, bread and butter), anthropometrics, blood tests, gut microbiome (16S rRNA and metagenomics), and questionnaires (food frequency, lifestyle, medical). Postprandial glycemic response (PPGR) quantified as the 2-hour iAUC. Validation cohort: 100 participants; dietary intervention: 26 participants. Most-documented food categories: bread, dairy, sweets, vegetables, baked goods, nuts, beef, legumes, fruit, poultry, rice.] http://www.cell.com/cell/abstract/S0092-8674(15)01481-6
  • 252. http://www.cell.com/cell/abstract/S0092-8674(15)01481-6 [Figure 3: accurate prediction of personalized postprandial glycemic responses. A machine-learning pipeline (boosted decision trees over meal, personal, and microbiome features) is trained on the 800-person main cohort with leave-one-person-out cross-validation and evaluated on the 100-person validation cohort. Dots show predicted (x axis) vs. CGM-measured PPGRs: a carbohydrate-only model achieves R=0.38 and a calories-only model R=0.33, while the full predictor achieves R=0.68 in cross-validation on the main cohort and R=0.70 on the independent validation cohort (Pearson correlations).]
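The Cell study's predictor is an ensemble of boosted decision trees over meal, personal, and microbiome features. A self-contained sketch of gradient boosting with single-split regression stumps on synthetic data (the features, targets, and hyperparameters here are illustrative, not the paper's):

```python
# Sketch: gradient boosting with regression stumps, the model family
# (boosted decision trees) used for PPGR prediction in the Cell paper.

def fit_stump(x, residual):
    """Best single-feature threshold split minimizing squared error."""
    best = None
    for j in range(len(x[0])):                       # each feature
        for t in sorted({row[j] for row in x}):      # each candidate threshold
            left = [r for row, r in zip(x, residual) if row[j] <= t]
            right = [r for row, r in zip(x, residual) if row[j] > t]
            if not left or not right:
                continue
            lm, rm = sum(left) / len(left), sum(right) / len(right)
            err = (sum((r - lm) ** 2 for r in left)
                   + sum((r - rm) ** 2 for r in right))
            if best is None or err < best[0]:
                best = (err, j, t, lm, rm)
    _, j, t, lm, rm = best
    return lambda row, j=j, t=t, lm=lm, rm=rm: lm if row[j] <= t else rm

def boost(x, y, rounds=20, lr=0.3):
    """Fit stumps to residuals; final model is the shrunken sum of stumps."""
    pred, stumps = [0.0] * len(y), []
    for _ in range(rounds):
        resid = [yi - pi for yi, pi in zip(y, pred)]
        s = fit_stump(x, resid)
        stumps.append(s)
        pred = [p + lr * s(row) for p, row in zip(pred, x)]
    return lambda row: sum(lr * s(row) for s in stumps)

# Toy data: PPGR roughly driven by carbs (feature 0), modulated by a
# hypothetical "microbiome" indicator (feature 1).
X = [[10, 0], [20, 0], [30, 0], [40, 0], [10, 1], [20, 1], [30, 1], [40, 1]]
y = [12.0, 25.0, 38.0, 50.0, 20.0, 40.0, 60.0, 80.0]
model = boost(X, y)
```

Stump ensembles like this can capture per-person modulation that a carbohydrate-only rule cannot, which is the study's central point (R=0.68 vs. R=0.38).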
  • 253. http://www.cell.com/cell/abstract/S0092-8674(15)01481-6 [Figure 5: personally tailored dietary interventions improve postprandial glycemic responses. 26 participants were profiled for one week on dietitian-prescribed meals (breakfast, lunch, snack, dinner over 6 days), then assigned to a predictor-based arm (12 participants, P1-P12) or an expert-based arm (14 participants, E1-E14). For each participant, the predictor or the expert selected a 'good' diet week and a 'bad' diet week from the profiled meals; measured PPGRs, maximum PPGRs, and glucose fluctuations were significantly lower during the 'good' diet weeks. Profiling-week measured PPGRs correlated with intervention-week predictions (R=0.70 and R=0.80 for the example participants P3 and E7).]
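Throughout the study, the postprandial glycemic response (PPGR) is quantified as the 2-hour incremental area under the glucose curve (iAUC). A minimal sketch of that calculation with made-up CGM readings (the clip-at-baseline convention is a common iAUC choice and an assumption here; the paper's exact definition may differ in details):

```python
# Sketch: PPGR as 2-hour incremental AUC (iAUC) of post-meal glucose,
# i.e. trapezoidal area above the pre-meal baseline, in mg/dl·h.

def iauc(times_min, glucose_mgdl, baseline=None):
    """Incremental AUC above baseline via the trapezoid rule; negative
    increments are clipped to zero (common iAUC convention, assumed here)."""
    if baseline is None:
        baseline = glucose_mgdl[0]  # first (pre-meal) reading as baseline
    inc = [max(g - baseline, 0.0) for g in glucose_mgdl]
    area = 0.0
    for i in range(1, len(times_min)):
        dt_h = (times_min[i] - times_min[i - 1]) / 60.0
        area += (inc[i] + inc[i - 1]) / 2.0 * dt_h
    return area

# Illustrative readings every 15 min for 2 h after a meal (CGM-style):
t = [0, 15, 30, 45, 60, 75, 90, 105, 120]
g = [90, 110, 140, 150, 140, 120, 105, 95, 90]
print(iauc(t, g))  # 57.5 (mg/dl·h)
```

This single number per meal is what the study's predictor is trained to estimate and what the 'good' vs. 'bad' diet weeks are compared on.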
  • 254.
  • 256. In an early research project involving 600 patient cases, the team was able to predict near-term hypoglycemic events up to 3 hours in advance of the symptoms. IBM Watson-Medtronic, Jan 7, 2016
  • 257. Sugar.IQ • Based on the user's past records of food intake, the resulting blood glucose changes, and insulin delivery • Watson predicts how the user's blood glucose will change after a meal
  • 258. ADA 2017, San Diego, Courtesy of Taeho Kim (Seoul Medical Center)
  • 259. ADA 2017, San Diego, Courtesy of Taeho Kim (Seoul Medical Center)
  • 260. ADA 2017, San Diego, Courtesy of Taeho Kim (Seoul Medical Center)
  • 261. ADA 2017, San Diego, Courtesy of Taeho Kim (Seoul Medical Center)
  • 264. • Puretech Health • A company that positions itself as a 'new kind of pharmaceutical company' • Develops not only conventional new drugs but also digital therapeutics based on games, apps, and similar software • Digital therapeutics have recently received de novo clearance from the US FDA
  • 265.
  • 266.
  • 267. • Puretech Health • The drug pipeline includes conventional small molecules, but also • Akili: a game (Project EVO) aimed at improving cognitive function in ADHD, depression, Alzheimer's disease, and other conditions • Sonde: voice biomarkers for diagnosing and monitoring depression and other mental health conditions