Professor, SAHIST, Sungkyunkwan University
Director, Digital Healthcare Institute
Yoon Sup Choi, Ph.D.
Digital Healthcare: The Future of Medicine

Focusing on Drug Development
“It's in Apple's DNA that technology alone is not enough. 

It's technology married with liberal arts.”
The Convergence of IT, BT and Medicine
Inevitable Tsunami of Change
http://rockhealth.com/2015/01/digital-health-funding-tops-4-1b-2014-year-review/
•2017 was the biggest year ever for digital healthcare startup funding.

•Both the number of deals and the size of individual deals hit record highs.

•There were 8 mega deals of over $100M each,

•and as a result a number of unicorns valued at over $1B have emerged.
https://rockhealth.com/reports/2017-year-end-funding-report-the-end-of-the-beginning-of-digital-health/
https://rockhealth.com/reports/digital-health-funding-2015-year-in-review/
•Over the past three years, pharma companies such as Merck, J&J, and GSK have sharply increased their investment in digital healthcare.

•22 deals in total in 2015-2016 (equal to the number of deals in the five years 2010-2014)

•Merck is the most active: 24 investments ($5-7M each) via its Global Health Innovation Fund since 2009

•GSK: 6 deals since 2014 (via its VC arm, SR One), including Propeller Health
Target Discovery → Lead Discovery → Clinical Trial → Post Market Surveillance
Digital Healthcare in Drug Development
•Personal genome analysis

•A blockchain-based genomic data trading platform
A little spit is all it takes! Results within 6-8 weeks.
Direct-To-Consumer (DTC) Genetic Testing
120 Disease Risks
21 Drug Responses
49 Carrier Statuses
57 Traits
$99
Health Risks
Drug Response
Inherited Conditions
Hemochromatosis is a genetic disorder of iron metabolism in which too much of the iron ingested through food is absorbed. The excess iron accumulates in multiple organs, particularly the liver, heart, and pancreas, damaging them and causing liver disease, heart disease, and malignant tumors.
Traits
Facial flushing after drinking alcohol
Ability to taste bitterness
Earwax type
Eye color
Curly hair
Lactose tolerance
Malaria resistance
Likelihood of balding
Muscle performance
Blood type
Norovirus resistance
HIV resistance
Likelihood of nicotine addiction
genetic factor vs. environmental factor
Customer growth of 23andMe
[Chart: cumulative 23andMe customers from launch in November 2007 to April 2017, reaching 1,000,000 around 2015 and 2,000,000 by April 2017]
Digital Healthcare Institute
Director, Yoon Sup Choi, PhD
yoonsup.choi@gmail.com
https://www.23andme.com/slideshow/research/
Genetic research powered by customers' voluntary participation
Which thumb is on top when you clasp your hands?
Morning person or evening person?
Do you sneeze when exposed to bright light?
Muscle performance
Ability to taste bitterness
Does your face flush after drinking alcohol?
Lactase deficiency?
81% of customers voluntarily answer 10 or more questions

1 million data points accumulated every week

The More Data, The Higher Accuracy!
January 6, 2015 · January 13, 2015
Data Business
NATURE BIOTECHNOLOGY VOLUME 35 NUMBER 10 OCTOBER 2017 897
23andMe wades further into drug discovery
Direct-to-consumer genetics testing company 23andMe is advancing its drug discovery efforts with a $250 million financing round announced in September. The Mountain View, California–based firm plans to use the funds for its own therapeutics division aimed at mining the company's database for novel drug targets, in addition to its existing consumer genomics business and genetic research platform. At the same time, the company has strengthened ongoing partnerships with Pfizer and Roche, and inked a new collaboration with Lundbeck—all are keen to incorporate 23andMe's human genetics data cache into their discovery and clinical programs.

It was over a decade ago that Icelandic company deCODE Genetics pioneered genetics-driven drug discovery. The Reykjavik-based biotech's DNA database of 140,000 Icelanders, which Amgen bought in 2012 (Nat. Biotechnol. 31, 87–88, 2013), was set up to identify genes associated with disease. But whereas the bedrock of deCODE's platform was the health records stretching back over a century, the value in 23andMe's platform lies instead in its database of more than 2 million genotyped customers, and the reams of phenotypic information participants collect at home by online surveys of mood, cognition and even food intake.

For Danish pharma Lundbeck, a partnership signed in August with 23andMe and think-tank Milken Institute will provide a fresh look at major depressive disorder and bipolar depression. The collaboration studying 25,000 participants will link genomics with complete cognitive tests and surveys taken over nine months, providing an almost continuous monitoring of participants' symptoms. "Cognition is a key symptom in depression," says Niels Plath, vice president for synaptic transmission at Copenhagen-based Lundbeck. But the biological processes leading to depression are poorly understood, and the condition is difficult to classify as it includes a broad population of patients. "If we could use genetic profiling to sort people into groups and link to biology, we could identify new drug targets, novel pathways and protein networks. With 23andMe, we can combine the genetic profiling with symptomatic presentation," says Plath. An approach like this leapfrogs the traditional paradigm of mouse models and cell-based assays for drug discovery. "Our scientific hypotheses must come from patient-derived information," says Plath. "It could be phenotype, it could be genetic."

Drug maker Roche has been taking advantage of 23andMe's data cache for several years, and its collaborations are yielding results. In September, researchers from the Basel-based pharma's wholly owned Genentech subsidiary, in partnership with 23andMe and others, published a paper showcasing 17 new Parkinson's disease risk loci that could be potential targets for therapeutics (Nat. Genet. http://dx.doi.org/10.1038/ng.3955, 2017). A year earlier, in August 2016, scientists at New York–based Pfizer, 23andMe and Massachusetts General Hospital announced that they had identified 15 genetic regions linked to depression (Nat. Genet. 48, 1031–1036, 2016). A 23andMe spokesperson this week called that paper a "landmark," because it was the first study to uncover 17 variants associated with major depressive disorder.

Ashley Winslow, who was corresponding author on the 2016 Nature Genetics paper, and who used to work at Pfizer, says, "Initially, the focus was on using the database to either confirm [or refute] the findings established by traditional, clinical methods of ascertainment." It soon occurred to the investigators that they could move beyond traditional association studies and do discovery work in indications that to date had "not been well powered," such as major depression, especially since some of 23andMe's questionnaires specifically asked if subjects had once been clinically diagnosed.

"I think [the database is] of particular interest for psychiatric disorders because the medications just have such a poor track record of not working," says Winslow, now senior director of translational research and portfolio development at the University of Pennsylvania's Orphan Disease Center in Philadelphia. "23andMe offered us a fresh new look."

Winslow thinks there is a "powerful shift" under way in pharma as it recognizes the benefits of rooting target discovery in human-derived data. "You still have to do the work-up through cell-line screening or animals at some point, but the starting point being human-derived data is hugely important."

Justin Petrone, Tartu, Estonia

Photo caption: Beyond consumer genetics: 23andMe sells access to its database to drug companies. (Kristoffer Tripplaar / Alamy Stock Photo)

© 2017 Nature America, Inc., part of Springer Nature. All rights reserved.
Human genomes are being sequenced at an ever-increasing rate. The 1000 Genomes Project has
aggregated hundreds of genomes; The Cancer Genome Atlas (TCGA) has gathered several thousand; and
the Exome Aggregation Consortium (ExAC) has sequenced more than 60,000 exomes. Dotted lines show
three possible future growth curves.
DNA SEQUENCING SOARS
[Chart: cumulative number of human genomes, 2001–2025 (log scale), marking the Human Genome Project, the 1st personal genome, 1000 Genomes, TCGA, ExAC, and the current amount. Recorded growth is extended with three projections: doubling every 7 months (historical growth rate), every 12 months (Illumina estimate), or every 18 months (Moore's law).]
Michael Eisenstein, Nature, 2015
Why slow?
More DNA, More Meaning
Extracting more meaning requires more DNA;
to get more people to sequence, you have to offer them more value.
Dilemma in Sequencing
opportunities, we conducted two surveys. First, we surveyed people with diverse backgrounds and determined factors that deter them from sequencing their genomes. Second, we interviewed researchers at many pharma and biotech companies and identified challenges that they face when working with genomic data.

Figure 3. Survey results (sample size = 402).

4.1. Individuals
Only 2% of people who participated in our survey have genotyped or sequenced their ...
Dilemma in Sequencing
•Why people do not sequence: it is too expensive & privacy concerns (control over one's own data)

•Willingness to pay for sequencing is low: for most people, under $250 (below cost)
Blockchain-enabled genomic data sharing and analysis platform

Dennis Grishin
Kamal Obbad
The traditional business model of direct-to-consumer personal genomics companies is illustrated in Figure 4. People pay to sequence or genotype their genomes and receive analysis results. Personal genomics companies keep the genomic data and sell it to pharma and biotech companies that use the data for research and development. This model addresses none of the challenges detailed in the previous sections.

Figure 4. Traditional business model of personal genomics companies.

The Nebula model, shown in Figure 5, eliminates personal genomics companies as middlemen between data owners and data buyers. Instead, data owners can acquire their personal genomic data from Nebula sequencing facilities or other sources, join the Nebula blockchain-based, peer-to-peer network and directly connect with data buyers. As detailed in the following sections, this model reduces effective sequencing costs and enhances protection of personal genomic data. It also satisfies the needs of data buyers in regards to data availability, data acquisition logistics and resources needed for genomic big data.
•Sequencing cost: the user has to pay for sequencing up front.

•Data ownership: the middleman vendor, not the user, decides which pharma company the data is sold to, and at what price.

•Privacy: the user has no way of knowing how the data is used after it is sold.

•Incentives: the user receives no financial reward from the sale.
This limits both sequence production and data exchange.
 
 
 
 
Figure 5. The Nebula model.

5.1.1. Lower sequencing costs
Nebula reduces effective sequencing costs in two ways. First, individuals who have not yet sequenced their personal genomes can join the Nebula network and participate in paid surveys. Thereby data buyers can identify individuals with phenotypes of interest, such as particular medical conditions, and offer to subsidize their genome sequencing costs. As sequencing technology advances and sequencing costs decrease, buyers will be increasingly able to fully pay for personal genome sequencing of many people. Second, individuals who acquired their personal genomic data from Nebula sequencing facilities or other personal genomics companies, can join the Nebula network and profit from selling access to their data. Lowering sequencing costs will incentivize more people to sequence their genomes and result in growth of genomic data that will fuel medical research.
A Blockchain-Based Genomic Data Platform
•Sequencing cost: sequencing is performed without the user paying up front.

•Data ownership: the user decides which pharma company to sell to, and at what price.

•Privacy: the blockchain prevents data tampering and tracks how the data is used.

•Incentives: users receive financial incentives in the form of Nebula tokens.
A Blockchain-Based Genomic Data Platform
Nebula tokens will be the currency of the Nebula network. The growth of the Nebula network will set in motion a circular flow of Nebula tokens as illustrated in Figure 6B. Individuals will buy personal genome sequencing at Nebula sequencing facilities and pay with Nebula tokens, data buyers will use Nebula tokens to purchase access to genomic and phenotypic data, and Nebula Genomics will sell Nebula tokens to data buyers for fiat money.

Figure 6. (A) Growth of the Nebula network. (B) Circular flow of Nebula tokens.
 
7. Personal genomics companies in comparison
•All data transactions are based on a private token (the Nebula token).

•Because this decentralized model addresses the sequencing-cost, privacy, and incentive problems,

•it can ultimately solve the chicken-and-egg problem in sequencing.
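The circular token flow of Figure 6B can be illustrated as a toy ledger. The party names and amounts below are made up for illustration; they are not part of the Nebula design.

```python
# Toy ledger for the circular token flow: Nebula sells tokens to data buyers
# for fiat, buyers pay individuals for data access, and individuals pay
# sequencing facilities in tokens. All balances are illustrative.
balances = {"individual": 0, "data_buyer": 0, "nebula": 1000, "facility": 0}

def transfer(ledger, src, dst, amount):
    """Move tokens between parties; total supply is conserved."""
    assert ledger[src] >= amount, "insufficient tokens"
    ledger[src] -= amount
    ledger[dst] += amount

transfer(balances, "nebula", "data_buyer", 100)    # buyer acquires tokens (for fiat)
transfer(balances, "data_buyer", "individual", 60) # buyer pays for data access
transfer(balances, "individual", "facility", 50)   # individual pays for sequencing
```

Because every step is a transfer inside one ledger, total token supply stays fixed while value circulates among the three roles.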
Target Discovery → Lead Discovery → Clinical Trial → Post Market Surveillance
Digital Healthcare in Drug Development
•Deep-learning-based lead discovery

•AI + pharma partnerships
No choice but to bring AI into medicine
Fig. 4 Random selection of images in ILSVRC detection validation set. The images in the top 4 rows were taken from ILSVRC2012 single-object localization validation set, and the images in the bottom 4 rows were collected from Flickr using scene-level queries.
http://arxiv.org/pdf/1409.0575.pdf
• Main competition

• Classification: classify the objects in an image

• Localization: classify a 'single' object in the image and find its location

• Object detection: classify 'all' objects in the image and find their locations
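The classification task is scored by top-k error (top-5 in ILSVRC): an image counts as correct if the true label appears among the model's k highest-scoring classes. A minimal numpy sketch, with toy scores that are purely illustrative:

```python
import numpy as np

def top_k_error(scores, labels, k=5):
    """Fraction of images whose true label is NOT among the k
    highest-scoring classes (the ILSVRC classification metric)."""
    scores = np.asarray(scores)
    labels = np.asarray(labels)
    topk = np.argsort(scores, axis=1)[:, -k:]   # k best classes per image
    hit = (topk == labels[:, None]).any(axis=1)
    return 1.0 - hit.mean()

# toy example: 3 images, 4 classes
scores = np.array([[0.1, 0.2, 0.6, 0.1],     # true class 2: top-1 hit
                   [0.5, 0.3, 0.1, 0.1],     # true class 1: second-highest
                   [0.6, 0.05, 0.05, 0.3]])  # true class 3: second-highest
labels = np.array([2, 1, 3])
print(top_k_error(scores, labels, k=1))  # 2 of 3 images missed at k=1
```

At k=2 all three true labels fall inside the top two scores, so the error drops to zero; this is why top-5 numbers are always lower than top-1.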
Fig. 7 Tasks in ILSVRC. The first column shows the ground truth labeling on an example image, and the next three show
three sample outputs with the corresponding evaluation score.
http://arxiv.org/pdf/1409.0575.pdf
Performance of winning entries in the ILSVRC2010-2015 competitions
in each of the three tasks
http://image-net.org/challenges/LSVRC/2015/results#loc
[Charts: Single-object localization (localization error, 2011–2015); Object detection (average precision, 2013–2015); Image classification (classification error, 2010–2015)]
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun, “Deep Residual Learning for Image Recognition”, 2015
How deep is deep?
http://image-net.org/challenges/LSVRC/2015/results
Localization
Classification
http://image-net.org/challenges/LSVRC/2015/results
http://venturebeat.com/2015/12/25/5-deep-learning-startups-to-follow-in-2016/
Deep Learning
http://theanalyticsstore.ie/deep-learning/
DeepFace: Closing the Gap to Human-Level Performance in Face Verification
Taigman, Y. et al. (2014). DeepFace: Closing the Gap to Human-Level Performance in Face Verification, CVPR'14.
Figure 2. Outline of the DeepFace architecture. A front-end of a single convolution-pooling-convolution filtering on the rectified input, followed by three
locally-connected layers and two fully-connected layers. Colors illustrate feature maps produced at each layer. The net includes more than 120 million
parameters, where more than 95% come from the local and fully connected layers.
very few parameters. These layers merely expand the input into a set of simple local features.
The subsequent layers (L4, L5 and L6) are instead locally connected [13, 16]; like a convolutional layer they apply a filter bank, but every location in the feature map learns a different set of filters. Since different regions of an aligned image have different local statistics, the spatial stationarity ...
The goal of training is to maximize the probability of the correct class (face id). We achieve this by minimizing the cross-entropy loss for each training sample. If k is the index of the true label for a given input, the loss is: L = -log p_k. The loss is minimized over the parameters by computing the gradient of L w.r.t. the parameters ...
Human: 95% vs. DeepFace in Facebook: 97.35%
Recognition Accuracy for Labeled Faces in the Wild (LFW) dataset (13,233 images, 5,749 people)
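The training objective quoted above, softmax cross-entropy with L = -log p_k for true class k, can be sketched in a few lines of numpy; the logits below are illustrative, not from any real network:

```python
import numpy as np

def softmax(z):
    z = z - z.max()              # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(logits, k):
    """DeepFace-style objective: L = -log p_k, where p is the softmax
    over class scores and k is the index of the true face id."""
    return -np.log(softmax(logits)[k])

logits = np.array([2.0, 1.0, 0.1])
# the gradient w.r.t. the logits is p - onehot(k): it pushes p_k toward 1
grad = softmax(logits) - np.eye(len(logits))[0]
```

A confident, correct prediction yields a small loss; putting the same logits under the wrong label makes the loss large, which is exactly the pressure that trains the face-id classifier.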
FaceNet: A Unified Embedding for Face Recognition and Clustering
Schroff, F. et al. (2015). FaceNet: A Unified Embedding for Face Recognition and Clustering
Human: 95% vs. FaceNet of Google: 99.63%
Recognition Accuracy for Labeled Faces in the Wild (LFW) dataset (13,233 images, 5,749 people)
False accept / False reject
Figure 6. LFW errors. This shows all pairs of images that were incorrectly classified on LFW. Only eight of the 13 errors shown here are actual errors; the other four are mislabeled in LFW.

5.7. Performance on Youtube Faces DB
We use the average similarity of all pairs of the first one hundred frames that our face detector detects in each video. This gives us a classification accuracy of 95.12%±0.39. Using the first one thousand frames results in 95.18%. Compared to [17] 91.4% who also evaluate one hundred frames per video we reduce the error rate by almost half. DeepId2+ [15] achieved 93.2% and our method reduces this error by 30%, comparable to our improvement on LFW.

5.8. Face Clustering
Our compact embedding lends itself to be used in order to cluster a user's personal photos into groups of people with the same identity. The constraints in assignment imposed by clustering faces, compared to the pure verification task, lead to truly amazing results. Figure 7 shows one cluster in a user's personal photo collection, generated using agglomerative clustering. It is a clear showcase of the incredible invariance to occlusion, lighting, pose and even age.

Figure 7. Face Clustering. Shown is an exemplar cluster for one user. All these images in the user's personal photo collection were clustered together.

6. Summary
We provide a method to directly learn an embedding into a Euclidean space for face verification. This sets it apart from other methods [15, 17] who use the CNN bottleneck layer, or require additional post-processing such as concatenation of multiple models and PCA, as well as SVM classification. Our end-to-end training both simplifies the setup and shows that directly optimizing a loss relevant to the task at hand improves performance. Another strength of our model is that it only requires ...
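FaceNet's verification step reduces to thresholding the squared L2 distance between unit-norm embeddings. A sketch with made-up embedding vectors; the threshold value here is illustrative, not the paper's tuned value:

```python
import numpy as np

def l2_normalize(v):
    """Project an embedding onto the unit hypersphere."""
    return v / np.linalg.norm(v)

def same_person(emb_a, emb_b, threshold=1.1):
    """FaceNet-style verification: a pair is 'same identity' when the
    squared L2 distance between unit-norm embeddings falls below a
    threshold tuned on a validation set (1.1 is a placeholder)."""
    d2 = np.sum((l2_normalize(emb_a) - l2_normalize(emb_b)) ** 2)
    return d2 < threshold

a = np.array([0.9, 0.1, 0.1])    # toy embedding of one photo
b = np.array([0.8, 0.2, 0.1])    # nearby embedding: same person
c = np.array([-0.9, 0.3, 0.1])   # distant embedding: different person
```

The same distance also drives the clustering use case: agglomerative clustering simply merges photos whose embeddings sit close on the sphere.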
Targeting Ultimate Accuracy: Face Recognition via Deep Embedding
Jingtuo Liu (2015). Targeting Ultimate Accuracy: Face Recognition via Deep Embedding
Human: 95% vs. Baidu: 99.77%
Recognition Accuracy for Labeled Faces in the Wild (LFW) dataset (13,233 images, 5,749 people)
Although several algorithms have achieved nearly perfect accuracy in the 6000-pair verification task, a more practical ... can achieve 95.8% identification rate, relatively reducing the error rate by about 77%.

[Figure: LFW pairs misjudged by the model, with similarity scores: pair #113 (-0.060), #202 (-0.022), #656 (-0.034), #1230 (-0.031), #1862 (-0.073), #2499 (-0.091), #2551 (-0.024), #2552 (-0.036), #2610 (-0.089)]

TABLE 3. COMPARISONS WITH OTHER METHODS ON SEVERAL EVALUATION TASKS

Method | Pair-wise Accuracy (%) | Rank-1 (%) | DIR (%) @ FAR=1% | Verification (%) @ FAR=0.1% | Open-set Identification (%) @ Rank=1, FAR=0.1%
IDL Ensemble Model | 99.77 | 98.03 | 95.8 | 99.41 | 92.09
IDL Single Model | 99.68 | 97.60 | 94.12 | 99.11 | 89.08
FaceNet [12] | 99.63 | NA | NA | NA | NA
DeepID3 [9] | 99.53 | 96.00 | 81.40 | NA | NA
Face++ [2] | 99.50 | NA | NA | NA | NA
Facebook [15] | 98.37 | 82.5 | 61.9 | NA | NA
Learning from Scratch [4] | 97.73 | NA | NA | 80.26 | 28.90
HighDimLBP [10] | 95.17 | NA | NA | 41.66 (reported in [4]) | 18.07 (reported in [4])
• Of 6,000 face pairs, Baidu's AI misjudged only 14.

• It turned out that for 5 of those 14 pairs the ground-truth labels themselves were wrong,

• so the AI was actually correct (red box).
Show and Tell: A Neural Image Caption Generator
Vinyals, O. et al. (2015). Show and Tell: A Neural Image Caption Generator, arXiv:1411.4555
"A group of people shopping at an outdoor market." → "There are many vegetables at the fruit stand."
Vision: Deep CNN → Language: Generating RNN
Figure 1. NIC, our model, is based end-to-end on a neural network consisting of a vision CNN followed by a language generating RNN.
Show and Tell: A Neural Image Caption Generator
Vinyals, O. et al. (2015). Show and Tell: A Neural Image Caption Generator, arXiv:1411.4555
Figure 5. A selection of evaluation results, grouped by human rating.
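The NIC architecture in Figure 1, a vision CNN whose feature initializes a language-generating RNN, can be sketched as a greedy decoding loop. The weights below are random stand-ins for a trained model, so the emitted caption is meaningless; only the control flow is the point.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["<s>", "</s>", "a", "group", "of", "people"]
H, V = 8, len(vocab)

# random stand-ins for the trained vision CNN projection and language RNN
W_img = rng.normal(size=(H, 4))   # projects a CNN image feature into h0
W_h = rng.normal(size=(H, H))     # hidden-to-hidden recurrence
W_emb = rng.normal(size=(H, V))   # word embeddings (one column per word)
W_out = rng.normal(size=(V, H))   # hidden-to-vocabulary readout

def caption(image_feature, max_len=10):
    """Greedy NIC-style decoding: the image conditions the initial hidden
    state, then each step feeds back the previous word and emits the
    argmax word until </s> (or max_len)."""
    h = np.tanh(W_img @ image_feature)
    word, words = 0, []           # start from the <s> token
    for _ in range(max_len):
        h = np.tanh(W_h @ h + W_emb[:, word])
        word = int(np.argmax(W_out @ h))
        if vocab[word] == "</s>":
            break
        words.append(vocab[word])
    return " ".join(words)

print(caption(np.ones(4)))  # gibberish here: the weights are untrained
```

In the real model the greedy argmax is usually replaced by beam search, which keeps several candidate sentences alive at each step.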
Radiologist
Bone Age Assessment
• M: 28 classes
• F: 20 classes
• Method: G.P. (Greulich-Pyle)
• Top-3 accuracy: 95.28% (F)
• Top-3 accuracy: 81.55% (M)
[Chart: accuracy (%). AI vs. doctors: AI 69.5%, Doctor A 63%, Doctor B 49.5%. AI + doctors: Doctor A + AI 72.5%, Doctor B + AI 57.5%]
Doctor A: radiology fellow (subspecialty training in pediatric imaging); Doctor B: 2nd-year radiology resident
AJR Am J Roentgenol. 2017 Dec;209(6):1374-1380.
• Total number of patients: 200

• Doctor A: board-certified radiologist subspecialized in pediatric imaging (has read more than 500 cases)

• Doctor B: 2nd-year radiology resident (one day of training in the reading method + 20 practice cases)

• Reference standard: consensus of two experienced pediatric radiologists (18 and 4 years of experience)

• AI: VUNO's deep learning model for bone age assessment
Synergy between human doctors and AI in bone age assessment
Total reading time (min)
[Chart: Doctor A: 188 min without AI vs. 154 min with AI (saving 18% of time); Doctor B: 180 min without AI vs. 108 min with AI (saving 40% of time)]
Using AI for bone age assessment can also reduce reading time.
AJR Am J Roentgenol. 2017 Dec;209(6):1374-1380.
Detection of Diabetic Retinopathy
Diabetic Retinopathy
• A hallmark complication of diabetes: develops in 90% of patients who have had diabetes for 30 or more years

• Ophthalmologists photograph the fundus (the interior of the eye) and read the images

• Diagnosed by assessing retinal microvascular proliferation, hemorrhage, and exudates
Training Set / Test Set
• A CNN was trained retrospectively on 128,175 fundus images

• Each image was graded 3-7 times by a panel of 54 US ophthalmologists

• The algorithm's readings were compared with those of 7-8 top ophthalmologists

• EyePACS-1 (9,963 images), Messidor-2 (1,748 images)
• AUC = 0.991 on EyePACS-1 and 0.990 on Messidor-2

• Sensitivity and specificity on par with the 7-8 ophthalmologists

• F-score: 0.95 (vs. 0.91 for the human doctors)
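An AUC such as 0.991 is the area under the ROC curve, which equals the probability that a randomly chosen diseased eye receives a higher referral score than a randomly chosen healthy one. A minimal numpy computation, with made-up scores for illustration:

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve via the rank statistic:
    P(score of a random positive > score of a random negative),
    counting ties as half."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# made-up referral scores for 4 diseased (label 1) and 4 healthy (label 0) eyes
scores = [0.95, 0.9, 0.8, 0.4, 0.35, 0.3, 0.2, 0.1]
labels = [1, 1, 1, 0, 1, 0, 0, 0]
print(roc_auc(scores, labels))  # 0.9375
```

Choosing a threshold on the scores then fixes one operating point on the curve, which is exactly the high-sensitivity vs. high-specificity trade-off shown in Figure 2.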
Additional sensitivity analyses were conducted for several subcategories: (1) detecting moderate or worse diabetic retinopathy ... effects of data set size on algorithm performance were examined and shown to plateau at around 60,000 images ...
Figure 2. Validation Set Performance for Referable Diabetic Retinopathy
[ROC curves: sensitivity (%) vs. 1 - specificity (%). A, EyePACS-1: AUC, 99.1%; 95% CI, 98.8%-99.3%. B, Messidor-2: AUC, 99.0%; 95% CI, 98.6%-99.5%. High-sensitivity and high-specificity operating points are marked on each curve.]
Performance of the algorithm (black curve) and ophthalmologists (colored
circles) for the presence of referable diabetic retinopathy (moderate or worse
diabetic retinopathy or referable diabetic macular edema) on A, EyePACS-1
(8788 fully gradable images) and B, Messidor-2 (1745 fully gradable images).
The black diamonds on the graph correspond to the sensitivity and specificity of
the algorithm at the high-sensitivity and high-specificity operating points.
In A, for the high-sensitivity operating point, specificity was 93.4% (95% CI,
92.8%-94.0%) and sensitivity was 97.5% (95% CI, 95.8%-98.7%); for the
high-specificity operating point, specificity was 98.1% (95% CI, 97.8%-98.5%)
and sensitivity was 90.3% (95% CI, 87.5%-92.7%). In B, for the high-sensitivity
operating point, specificity was 93.9% (95% CI, 92.4%-95.3%) and sensitivity
was 96.1% (95% CI, 92.4%-98.3%); for the high-specificity operating point,
specificity was 98.5% (95% CI, 97.7%-99.1%) and sensitivity was 87.0% (95%
CI, 81.1%-91.0%). There were 8 ophthalmologists who graded EyePACS-1 and 7
ophthalmologists who graded Messidor-2. AUC indicates area under the
receiver operating characteristic curve.
Research Original Investigation Accuracy of a Deep Learning Algorithm for Detection of Diabetic Retinopathy
Results
Skin Cancer
ABCDE checklist
LETTER doi:10.1038/nature21056
Dermatologist-level classification of skin cancer with deep neural networks
Andre Esteva*, Brett Kuprel*, Roberto A. Novoa, Justin Ko, Susan M. Swetter, Helen M. Blau & Sebastian Thrun
Skin cancer, the most common human malignancy1–3
, is primarily
diagnosed visually, beginning with an initial clinical screening
and followed potentially by dermoscopic analysis, a biopsy and
histopathological examination. Automated classification of skin
lesions using images is a challenging task owing to the fine-grained
variability in the appearance of skin lesions. Deep convolutional
neural networks (CNNs)4,5
show potential for general and highly
variable tasks across many fine-grained object categories6–11
.
Here we demonstrate classification of skin lesions using a single
CNN, trained end-to-end from images directly, using only pixels
and disease labels as inputs. We train a CNN using a dataset of
129,450 clinical images—two orders of magnitude larger than
previous datasets12
—consisting of 2,032 different diseases. We
test its performance against 21 board-certified dermatologists on
biopsy-proven clinical images with two critical binary classification
use cases: keratinocyte carcinomas versus benign seborrheic
keratoses; and malignant melanomas versus benign nevi. The first
case represents the identification of the most common cancers, the
second represents the identification of the deadliest skin cancer.
The CNN achieves performance on par with all tested experts
across both tasks, demonstrating an artificial intelligence capable
of classifying skin cancer with a level of competence comparable to
dermatologists. Outfitted with deep neural networks, mobile devices
can potentially extend the reach of dermatologists outside of the
clinic. It is projected that 6.3 billion smartphone subscriptions will
exist by the year 2021 (ref. 13) and can therefore potentially provide
low-cost universal access to vital diagnostic care.
There are 5.4 million new cases of skin cancer in the United States2
every year. One in five Americans will be diagnosed with a cutaneous
malignancy in their lifetime. Although melanomas represent fewer than
5% of all skin cancers in the United States, they account for approxi-
mately 75% of all skin-cancer-related deaths, and are responsible for
over 10,000 deaths annually in the United States alone. Early detection
is critical, as the estimated 5-year survival rate for melanoma drops
from over 99% if detected in its earliest stages to about 14% if detected
in its latest stages. We developed a computational method which may
allow medical practitioners and patients to proactively track skin
lesions and detect cancer earlier. By creating a novel disease taxonomy,
and a disease-partitioning algorithm that maps individual diseases into
training classes, we are able to build a deep learning system for auto-
mated dermatology.
Previous work in dermatological computer-aided classification12,14,15
has lacked the generalization capability of medical practitioners
owing to insufficient data and a focus on standardized tasks such as
dermoscopy16–18
and histological image classification19–22
. Dermoscopy
images are acquired via a specialized instrument and histological
images are acquired via invasive biopsy and microscopy; whereby
both modalities yield highly standardized images. Photographic
images (for example, smartphone images) exhibit variability in factors
such as zoom, angle and lighting, making classification substantially
more challenging23,24. We overcome this challenge by using a data-
driven approach—1.41 million pre-training and training images
make classification robust to photographic variability. Many previous
techniques require extensive preprocessing, lesion segmentation and
extraction of domain-specific visual features before classification. By
contrast, our system requires no hand-crafted features; it is trained
end-to-end directly from image labels and raw pixels, with a single
network for both photographic and dermoscopic images. The existing
body of work uses small datasets of typically less than a thousand
images of skin lesions16,18,19, which, as a result, do not generalize well
to new images. We demonstrate generalizable classification with a new
dermatologist-labelled dataset of 129,450 clinical images, including
3,374 dermoscopy images.
Deep learning algorithms, powered by advances in computation
and very large datasets25, have recently been shown to exceed human
performance in visual tasks such as playing Atari games26, strategic
board games like Go27 and object recognition6. In this paper we
outline the development of a CNN that matches the performance of
dermatologists at three key diagnostic tasks: melanoma classification,
melanoma classification using dermoscopy and carcinoma
classification. We restrict the comparisons to image-based classification.
We utilize a GoogleNet Inception v3 CNN architecture9 that was
pretrained on approximately 1.28 million images (1,000 object categories)
from the 2014 ImageNet Large Scale Visual Recognition Challenge6,
and train it on our dataset using transfer learning28. Figure 1 shows the
working system. The CNN is trained using 757 disease classes. Our
dataset is composed of dermatologist-labelled images organized in a
tree-structured taxonomy of 2,032 diseases, in which the individual
diseases form the leaf nodes. The images come from 18 different
clinician-curated, open-access online repositories, as well as from
clinical data from Stanford University Medical Center. Figure 2a shows
a subset of the full taxonomy, which has been organized clinically and
visually by medical experts. We split our dataset into 127,463 training
and validation images and 1,942 biopsy-labelled test images.
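The transfer-learning recipe described here (reuse a network pretrained on ImageNet, then fine-tune on the target dataset) can be sketched in miniature. This is a minimal stand-in, not the paper's pipeline: the "backbone" is a frozen random projection rather than Inception v3 and the data are synthetic; only the shape of the procedure (frozen features, trainable classification head) is the point.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "backbone": a stand-in for a pretrained feature extractor.
W_backbone = rng.normal(size=(64, 32)) / np.sqrt(64)   # never updated

def features(x):
    return np.maximum(x @ W_backbone, 0.0)             # fixed ReLU features

# Toy labelled dataset standing in for the skin-lesion images.
X = rng.normal(size=(300, 64))
y = rng.integers(0, 3, size=300)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Trainable head: the only parameters updated during fine-tuning.
W_head = np.zeros((32, 3))
F = features(X)
onehot = np.eye(3)[y]
losses = []
for _ in range(200):
    p = softmax(F @ W_head)
    losses.append(-np.mean(np.log(p[np.arange(len(y)), y])))
    W_head -= 0.1 * F.T @ (p - onehot) / len(y)        # head-only gradient step

print(round(losses[0], 3), round(losses[-1], 3))       # cross-entropy falls
```

Freezing the backbone is what lets a 1.28-million-image pretraining corpus carry over to a much smaller medical dataset.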
To take advantage of fine-grained information contained within the
taxonomy structure, we develop an algorithm (Extended Data Table 1)
to partition diseases into fine-grained training classes (for example,
amelanotic melanoma and acrolentiginous melanoma). During
inference, the CNN outputs a probability distribution over these fine
classes. To recover the probabilities for coarser-level classes of interest
(for example, melanoma) we sum the probabilities of their descendants
(see Methods and Extended Data Fig. 1 for more details).
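The descendant-summing rule can be stated in a few lines of Python; the two-level taxonomy and the probabilities below are invented for illustration.

```python
# Map each fine-grained training class to its parent inference class
# (a two-level stand-in for the paper's tree-structured taxonomy).
parent = {
    "amelanotic melanoma": "melanoma",
    "acrolentiginous melanoma": "melanoma",
    "blue nevus": "benign melanocytic lesion",
    "halo nevus": "benign melanocytic lesion",
}

# CNN output: a probability distribution over fine training classes.
fine_probs = {
    "amelanotic melanoma": 0.55,
    "acrolentiginous melanoma": 0.25,
    "blue nevus": 0.15,
    "halo nevus": 0.05,
}

def coarse_probs(fine_probs, parent):
    """Probability of a coarse class = sum over its descendant leaves."""
    out = {}
    for leaf, p in fine_probs.items():
        out[parent[leaf]] = out.get(parent[leaf], 0.0) + p
    return out

print(coarse_probs(fine_probs, parent))
# melanoma ≈ 0.80, benign melanocytic lesion ≈ 0.20
```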
We validate the effectiveness of the algorithm in two ways, using
nine-fold cross-validation. First, we validate the algorithm using a
three-class disease partition—the first-level nodes of the taxonomy,
which represent benign lesions, malignant lesions and non-neoplastic
1Department of Electrical Engineering, Stanford University, Stanford, California, USA. 2Department of Dermatology, Stanford University, Stanford, California, USA. 3Department of Pathology, Stanford University, Stanford, California, USA. 4Dermatology Service, Veterans Affairs Palo Alto Health Care System, Palo Alto, California, USA. 5Baxter Laboratory for Stem Cell Biology, Department of Microbiology and Immunology, Institute for Stem Cell Biology and Regenerative Medicine, Stanford University, Stanford, California, USA. 6Department of Computer Science, Stanford University, Stanford, California, USA.
*These authors contributed equally to this work.
© 2017 Macmillan Publishers Limited, part of Springer Nature. All rights reserved.
For this task, the CNN achieves 72.1 ± 0.9% (mean ± s.d.) overall accuracy (the average of individual inference class accuracies) and two dermatologists attain 65.56% and 66.0% accuracy on a subset of the validation set. Second, we validate the algorithm using a nine-class disease partition—the second-level nodes—so that the diseases of each class have similar medical treatment plans. The CNN achieves […] two trials, one using standard images and the other using dermoscopy images, which reflect the two steps that a dermatologist might use to obtain a clinical impression. The same CNN is used for all three tasks. Figure 2b shows a few example images, demonstrating the difficulty in distinguishing between malignant and benign lesions, which share many visual features. Our comparison metrics are sensitivity and specificity.
Figure 1 | Deep CNN layout. Our classification technique is a deep CNN. Data flow is from left to right: an image of a skin lesion (for example, melanoma) is sequentially warped into a probability distribution over clinical classes of skin disease using Google Inception v3 CNN architecture pretrained on the ImageNet dataset (1.28 million images, 1,000 generic object classes) and fine-tuned on our own dataset of 129,450 skin lesions comprising 2,032 different diseases. The 757 training classes are defined using a novel taxonomy of skin disease and a partitioning algorithm that maps diseases into training classes (for example, acral-lentiginous melanoma, amelanotic melanoma, lentigo melanoma; blue nevus, halo nevus, Mongolian spot). Inference classes vary by task and are composed of one or more training classes (for example, malignant melanocytic lesion, the class of melanomas); a sample output is 92% malignant melanocytic lesion versus 8% benign melanocytic lesion. The probability of an inference class is computed by summing the probabilities of its training classes according to the taxonomy structure (see Methods). The network is built from convolution, average-pool, max-pool, concatenation, dropout, fully connected and softmax layers. Inception v3 CNN architecture reprinted from https://research.googleblog.com/2016/03/train-your-own-image-classifier-with.html
GoogleNet Inception v3
• Built a dataset of 129,450 clinical images of skin lesions

• Data curated by dermatologists (drawn from 18 clinician-curated, open-access repositories plus Stanford clinical data)

• Trained a CNN (Inception v3) on the images

• Compared the algorithm's readings against 21 dermatologists on three tasks:

• distinguishing keratinocyte carcinoma from benign seborrheic keratosis

• distinguishing malignant melanoma from benign lesions (standard photographic images)

• distinguishing malignant melanoma from benign lesions (dermoscopy images)
Skin cancer classification performance of the CNN and dermatologists. Each panel plots the algorithm's sensitivity-specificity (ROC) curve against individual dermatologists and their average. Recovered panel data: carcinoma, 135 images (algorithm AUC = 0.96; 25 dermatologists); melanoma, 130 images (AUC = 0.94; 22 dermatologists); melanoma, 111 dermoscopy images (AUC = 0.91; 21 dermatologists); carcinoma, 707 images (AUC = 0.96); melanoma, 225 images (AUC = 0.96); melanoma, 1,010 dermoscopy images (AUC = 0.94).
A substantial number of the 21 dermatologists were less accurate than the algorithm

The dermatologists' average performance was also below the algorithm's
Skin Cancer Image Classification (TensorFlow Dev Summit 2017)
Skin cancer classification performance of
the CNN and dermatologists.
https://www.youtube.com/watch?v=toK1OSLep3s&t=419s
WSJ, 2017 June
• Multinational pharmaceutical companies are experimenting widely with AI for drug development

• Recent AI approaches differ from older techniques such as virtual screening and docking
https://research.googleblog.com/2017/12/deepvariant-highly-accurate-genomes.html
DeepVariant: Highly Accurate Genomes
With Deep Neural Networks
•Verily won the SNP performance category of the 2016 PrecisionFDA challenge

•The algorithm was improved and released under the name DeepVariant

•Aligned reads (pileups) are themselves encoded as 'images' and learned with a CNN
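A toy version of the pileup-as-image encoding, assuming a simplified one-channel-per-base scheme (DeepVariant's real encoding also packs base quality, strand and other read features into channels):

```python
import numpy as np

BASES = "ACGT"

def pileup_to_tensor(reads, width):
    """Encode aligned reads as a (rows, width, 4) one-hot 'image':
    one row per read, one channel per base; gaps stay all-zero."""
    img = np.zeros((len(reads), width, len(BASES)), dtype=np.float32)
    for r, read in enumerate(reads):
        for c, base in enumerate(read[:width]):
            if base in BASES:               # '-' (gap / no coverage) -> zeros
                img[r, c, BASES.index(base)] = 1.0
    return img

reads = ["ACGTA",
         "ACGTA",
         "ACTTA"]                           # candidate variant at column 2
img = pileup_to_tensor(reads, width=5)
print(img.shape)                            # (3, 5, 4)
print(img[:, 2, :].argmax(axis=1))          # per-read base at column 2
```

Once the pileup is a tensor, variant calling becomes an image-classification problem, which is what lets a CNN be applied at all.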
To overcome these limitations we take an indirect approach. Instead of directly visualizing filters
in order to understand their specialization, we apply filters to input data and examine the location
where they maximally fire. Using this technique we were able to map filters to chemical functions.
For example, Figure 5 illustrates the 3D locations at which a particular filter from our first convolutional layer fires. Visual inspection of the locations at which that filter is active reveals that this
filter specializes as a sulfonyl/sulfonamide detector. This demonstrates the ability of the model to
learn complex chemical features from simpler ones. In this case, the filter has inferred a meaningful
spatial arrangement of input atom types without any chemical prior knowledge.
Figure 5: Sulfonyl/sulfonamide detection with autonomously trained convolutional filters.
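The activation-mapping technique in this passage, applying a filter to input data and locating where it maximally fires, reduces in 1D to the following sketch; the filter and signal are synthetic, not AtomNet's learned sulfonyl/sulfonamide detector.

```python
import numpy as np

def max_fire_location(signal, filt):
    """Cross-correlate a filter with the input and return the position
    of its maximal activation (the 1D analogue of mapping a 3D conv
    filter to the region of a complex where it fires)."""
    n = len(signal) - len(filt) + 1
    acts = np.array([signal[i:i + len(filt)] @ filt for i in range(n)])
    return int(acts.argmax()), acts

# Synthetic input containing the motif the filter matches, at position 6.
filt = np.array([1.0, -1.0, 1.0])
signal = np.zeros(12)
signal[6:9] = [1.0, -1.0, 1.0]

pos, acts = max_fire_location(signal, filt)
print(pos)   # 6
```

Mapping each filter to the inputs that drive it hardest is how the authors linked individual filters to chemical functions without visualizing the weights directly.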
Protein-Compound Complex Structure
Binding, or non-binding?
AtomNet: A Deep Convolutional Neural Network for
Bioactivity Prediction in Structure-based Drug
Discovery
Izhar Wallach
Atomwise, Inc.
izhar@atomwise.com
Michael Dzamba
Atomwise, Inc.
misko@atomwise.com
Abraham Heifets
Atomwise, Inc.
abe@atomwise.com
Abstract
Deep convolutional neural networks comprise a subclass of deep neural networks
(DNN) with a constrained architecture that leverages the spatial and temporal
structure of the domain they model. Convolutional networks achieve the best pre-
dictive performance in areas such as speech and image recognition by hierarchi-
cally composing simple local features into complex models. Although DNNs have
been used in drug discovery for QSAR and ligand-based bioactivity predictions,
none of these models have benefited from this powerful convolutional architec-
ture. This paper introduces AtomNet, the first structure-based, deep convolutional
neural network designed to predict the bioactivity of small molecules for drug dis-
covery applications. We demonstrate how to apply the convolutional concepts of
feature locality and hierarchical composition to the modeling of bioactivity and
chemical interactions. In further contrast to existing DNN techniques, we show
that AtomNet’s application of local convolutional filters to structural target infor-
mation successfully predicts new active molecules for targets with no previously
known modulators. Finally, we show that AtomNet outperforms previous docking
approaches on a diverse set of benchmarks by a large margin, achieving an AUC
greater than 0.9 on 57.8% of the targets in the DUDE benchmark.
1 Introduction
Fundamentally, biological systems operate through the physical interaction of molecules. The ability
to determine when molecular binding occurs is therefore critical for the discovery of new medicines
and for furthering of our understanding of biology. Unfortunately, despite thirty years of compu-
tational efforts, computer tools remain too inaccurate for routine binding prediction, and physical
experiments remain the state of the art for binding determination. The ability to accurately pre-
dict molecular binding would reduce the time-to-discovery of new treatments, help eliminate toxic
molecules early in development, and guide medicinal chemistry efforts [1, 2].
In this paper, we introduce a new predictive architecture, AtomNet, to help address these challenges.
AtomNet is novel in two regards: AtomNet is the first deep convolutional neural network for molec-
ular binding affinity prediction. It is also the first deep learning system that incorporates structural
information about the target to make its predictions.
Deep convolutional neural networks (DCNN) are currently the best performing predictive models
for speech and vision [3, 4, 5, 6]. DCNN is a class of deep neural network that constrains its model
architecture to leverage the spatial and temporal structure of its domain. For example, a low-level
image feature, such as an edge, can be described within a small spatially-proximate patch of pixels.
Such a feature detector can share evidence across the entire receptive field by “tying the weights”
of the detector neurons, as the recognition of the edge does not depend on where it is found within
arXiv:1510.02855v1 [cs.LG] 10 Oct 2015
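The "tying the weights" idea described in the introduction is exactly what a convolution implements: one detector's weights are reused at every position, so a feature is recognized wherever it occurs. A minimal 1D illustration:

```python
import numpy as np

# One shared edge detector: the same weights are applied at every
# position ("tying the weights" of the detector neurons).
kernel = np.array([-1.0, 1.0])              # responds to an upward step

def conv1d(x, k):
    return np.array([x[i:i + len(k)] @ k
                     for i in range(len(x) - len(k) + 1)])

x = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0])
resp = conv1d(x, kernel)
print(resp)                                 # peaks wherever a step occurs
print(np.flatnonzero(resp == resp.max()))   # both step locations detected
```

Weight sharing is why a convolutional network needs far fewer parameters than a fully connected one over the same receptive field.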
Table 3: The number of targets on which AtomNet and Smina exceed given adjusted-logAUC thresholds. For example, on the ChEMBL-20 PMD set, AtomNet achieves an adjusted-logAUC of 0.3 or better for 27 targets (out of 50 possible targets). ChEMBL-20 PMD contains 50 targets, DUDE-30 contains 30 targets, DUDE-102 contains 102 targets, and ChEMBL-20 inactives contains 149 targets. (Only one flattened row survives extraction: Smina: 123, 35, 5, 0, 0.)
• Trains a deep CNN on known 3D protein-ligand binding structures

• Predicts whether a protein and ligand bind, without explicit calculations of chemical bonding

• Predicted binding more accurately than conventional structure-based (docking) approaches
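The input representation these bullets imply, a voxel grid over the binding site with one channel per atom type, can be sketched as follows; the grid size, channel scheme and toy atom list are illustrative assumptions rather than AtomNet's exact featurization.

```python
import numpy as np

CHANNELS = {"C": 0, "N": 1, "O": 2, "S": 3}   # simplistic atom-type channels

def voxelize(atoms, grid=8, cell=1.0):
    """Rasterize (element, x, y, z) atoms of a protein-ligand complex
    into a (channels, grid, grid, grid) occupancy tensor, the kind of
    input a 3D convolutional network consumes."""
    vol = np.zeros((len(CHANNELS), grid, grid, grid), dtype=np.float32)
    for elem, x, y, z in atoms:
        i, j, k = int(x // cell), int(y // cell), int(z // cell)
        if 0 <= i < grid and 0 <= j < grid and 0 <= k < grid:
            vol[CHANNELS[elem], i, j, k] += 1.0
    return vol

# A toy binding-site fragment: three protein atoms and two ligand atoms.
atoms = [("C", 1.2, 0.4, 3.3), ("N", 2.7, 1.1, 3.0), ("O", 2.9, 1.8, 2.2),
         ("C", 4.1, 2.0, 2.8), ("S", 4.4, 2.3, 3.1)]
vol = voxelize(atoms)
print(vol.shape)        # (4, 8, 8, 8)
print(vol.sum())        # 5.0 -> every atom landed inside the grid
```

On such a grid, 3D convolutional filters can learn local chemical motifs (like the sulfonyl detector above) the same way 2D filters learn image edges.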
604 VOLUME 35 NUMBER 7 JULY 2017 NATURE BIOTECHNOLOGY
AI-powered drug discovery captures pharma interest
A drug-hunting deal inked last month, between Numerate, of San Bruno, California, and Takeda Pharmaceutical to use Numerate's artificial intelligence (AI) suite to discover small-molecule
therapies for oncology, gastroenterology and
central nervous system disorders, is the latest in
a growing number of research alliances involv-
ing AI-powered computational drug develop-
ment firms. Also last month, GNS Healthcare
of Cambridge, Massachusetts announced a deal
with Roche subsidiary Genentech of South San
Francisco, California to use GNS’s AI platform
to better understand what affects the efficacy of
known therapies in oncology. In May, Exscientia
of Dundee, Scotland, signed a deal with Paris-
based Sanofi that includes up to €250 ($280)
million in milestone payments. Exscientia will
provide the compound design and Sanofi the
chemical synthesis of new drugs for diabetes
and cardiovascular disease. The trend indicates
that the pharma industry's long-running skepticism about AI is softening into genuine interest,
driven by AI’s promise to address the industry’s
principal pain point: clinical failure rates.
The industry’s willingness to consider AI
approaches reflects the reality that drug discovery is laborious, time consuming and not particularly effective. A two-decade-long downward
trend in clinical success rates has only recently
improved (Nat. Rev. Drug Disc. 15, 379–380,
2016). Still, today, only about one in ten drugs
that enter phase 1 clinical trials reaches patients.
Half those failures are due to a lack of efficacy,
says Jackie Hunter, CEO of BenevolentBio, a
division of BenevolentAI of London. “That tells
you we’re not picking the right targets,” she says.
“Even a 5 or 10% reduction in efficacy failure
would be amazing.” Hunter’s views on AI in
drug discovery are featured in Ernst & Young’s
Biotechnology Report 2017 released last month.
Companies that have been watching AI from
the sidelines are now jumping in. The best-
known machine-learning model for drug dis-
covery is perhaps IBM’s Watson. IBM signed a
deal in December 2016 with Pfizer to aid the
pharma giant's immuno-oncology drug discovery efforts, adding to a string of previous deals in the biopharma space (Nat. Biotechnol. 33, 1219–
1220, 2015). IBM’s Watson hunts for drugs by
sorting through vast amounts of textual data to
provide quick analyses, and tests hypotheses by
sorting through massive amounts of laboratory
data, clinical reports and scientific publications.
BenevolentAI takes a similar approach with
algorithms that mine the research literature and
proprietary research databases.
The explosion of biomedical data has driven
much of industry’s interest in AI (Table 1). The
confluence of ever-increasing computational
horsepower and the proliferation of large data
sets has prompted scientists to seek learning
algorithms that can help them navigate such
massive volumes of information.
A lot of the excitement about AI in drug
discovery has spilled over from other fields.
Machine vision, which allows, among other
things, self-driving cars, and language process-
ing have given rise to sophisticated multilevel
artificial neural networks known as deep-
learning algorithms that can be used to model
biological processes from assay data as well as
textual data.
In the past people didn’t have enough data
to properly train deep-learning algorithms,
says Mark Gerstein, a biomedical informat-
ics professor at Yale University in New Haven,
Connecticut. Now researchers have been able to
build massive databases and harness them with
these algorithms, he says. “I think that excite-
ment is justified.”
Numerate is one of a growing number of AI
companies founded to take advantage of that
data onslaught as applied to drug discovery. "We
apply AI to chemical design at every stage,” says
Guido Lanza, Numerate’s CEO. It will provide
Tokyo-based Takeda with candidates for clinical
trials by virtual compound screenings against
targets, designing and optimizing compounds,
and modeling absorption, distribution, metabolism and excretion, and toxicity. The agreement
includes undisclosed milestone payments and
royalties.
Academic laboratories are also embracing
AI tools. In April, Atomwise of San Francisco
launched its Artificial Intelligence Molecular
Screen awards program, which will deliver 72
potentially therapeutic compounds to as many
as 100 university research labs at no charge.
Atomwise is a University of Toronto spinout
that in 2015 secured an alliance with Merck of
Kenilworth, New Jersey. For this new endeavor,
it will screen 10 million molecules using its
AtomNet platform to provide each lab with
72 compounds aimed at a specific target of the
laboratory’s choosing.
The Japanese government launched in
2016 a research consortium centered on
using Japan’s K supercomputer to ramp up
drug discovery efficiency across dozens of
local companies and institutions. Among
those involved are Takeda and tech giants
Fujitsu of Tokyo, Japan, and NEC, also of
Tokyo, as well as Kyoto University Hospital
and Riken, Japan’s National Research and
Development Institute, which will provide
clinical data.
Deep learning is starting to gain acolytes in the drug discovery space.
KTSDESIGN/Science Photo Library
NEWS © 2017 Nature America, Inc., part of Springer Nature. All rights reserved.
Genomics data analytics startup WuXi
NextCode Genomics of Shanghai; Cambridge,
Massachusetts; and Reykjavík, Iceland, collab-
orated with researchers at Yale University on a
study that used the company’s deep-learning
algorithm to identify a key mechanism in
blood vessel growth. The result could aid drug
discovery efforts aimed at inhibiting blood
vessel growth in tumors (Nature doi:10.1038/
nature22322, 2017).
In the US, during the Obama administration,
industry and academia joined forces to apply
AI to accelerate drug discovery as part of the
Cancer Moonshot initiative (Nat. Biotechnol. 34,
119, 2016). The Accelerating Therapeutics for
Opportunities in Medicine (ATOM), launched
in January 2016, marries computational and
experimental approaches, with Brentford,
UK-based GlaxoSmithKline, participating
with Lawrence Livermore National Laboratory
in Livermore, California, and the US National
Cancer Institute. The computational portion
of the process, which includes deep-learning
and other AI algorithms, will be tested in the
first two years. In the third year, “we hope to
start on day one with a disease hypothesis and
on day 365 to deliver a drug candidate,” says
Martha Head, GlaxoSmithKline's head, insights
from data.
Table 1. Selected collaborations in the AI-drug discovery space

| AI company / location | Technology | Announced partner / location | Indication(s) | Deal date |
|---|---|---|---|---|
| Atomwise | Deep-learning screening from molecular structure data | Merck | Malaria | 2015 |
| BenevolentAI | Deep learning and natural language processing of research literature | Janssen Pharmaceutica (Johnson & Johnson), Beerse, Belgium | Multiple | November 8, 2016 |
| Berg, Framingham, Massachusetts | Deep-learning screening of biomarkers from patient data | None | Multiple | N/A |
| Exscientia | Bispecific compounds via Bayesian models of ligand activity from drug discovery data | Sanofi | Metabolic diseases | May 9, 2017 |
| GNS Healthcare | Bayesian probabilistic inference for investigating efficacy | Genentech | Oncology | June 19, 2017 |
| Insilico Medicine | Deep-learning screening from drug and disease databases | None | Age-related diseases | N/A |
| Numerate | Deep learning from phenotypic data | Takeda | Oncology, gastroenterology and central nervous system disorders | June 12, 2017 |
| Recursion, Salt Lake City, Utah | Cellular phenotyping via image analysis | Sanofi | Rare genetic diseases | April 25, 2016 |
| twoXAR, Palo Alto, California | Deep-learning screening from literature and assay data | Santen Pharmaceuticals, Osaka, Japan | Glaucoma | February 23, 2017 |

N/A, none announced. Source: companies' websites.
•Currently able to screen 10 million compounds per day
•10,000x faster than wet-lab experiments and 100x faster than ultra-HTS
•Also used to characterize toxicity, side effects, mechanism of action, and efficacy
•Projects underway with 10 pharmaceutical companies including Merck, and 40 research institutions including Harvard
•Target diseases: Alzheimer's disease, bacterial infections, antibiotics, nephrology, ophthalmology, immuno-oncology, metabolic and childhood liver diseases, and more
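The screening funnel described above can be sketched in a few lines: score every compound in a library with a predictive model and keep only the top-ranked hits. This is an illustrative sketch under stated assumptions; `predicted_activity` is a hypothetical placeholder, not AtomNet, and real pipelines score 3-D structures with deep networks at vastly larger scale.

```python
# Illustrative virtual-screening sketch: rank a compound library by a
# predicted activity score and keep the top hits, mimicking the funnel
# a deep-learning screen applies to millions of molecules per day.
# The scoring function is a hypothetical placeholder, NOT AtomNet.

def predicted_activity(fingerprint):
    """Hypothetical model stub: score = fraction of 'on' bits."""
    return sum(fingerprint) / len(fingerprint)

def screen(library, top_n):
    """Score every compound and return the top_n names by score."""
    scored = [(predicted_activity(fp), name) for name, fp in library.items()]
    scored.sort(reverse=True)
    return [name for _, name in scored[:top_n]]

library = {
    "cmpd-A": [1, 0, 1, 1],
    "cmpd-B": [0, 0, 1, 0],
    "cmpd-C": [1, 1, 1, 1],
}
print(screen(library, 2))  # the two best-scoring compounds
```

The same loop scales to arbitrarily large libraries; only the scoring model changes.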
Standigm
®
Standard + Next Paradigm
Giant’s shoulder Artificial Intelligence
Gangnam, Seoul, Founded in May 2015
www.standigm.com
Standigm AI for drug repositioning
•New indication prediction: Compound | Disease
•Prediction interpretation: Compound | Pathways | Disease
•Target protein prioritization: Compound | Binding Targets on Pathways | Disease
LINCS L1000
•A deep learning algorithm trained on millions of drug-perturbed gene expression responses across various cell lines
•A massive biological knowledge graph database integrated automatically from various drug-disease-target resources
•A drug-structure-embedding machine learning algorithm for binding affinity prediction
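The knowledge-graph module above can be caricatured as path scoring: count compound → target → pathway → disease chains and treat well-connected pairs as repositioning candidates. The tiny graph and the scoring rule below are hypothetical illustrations, not Standigm's actual model.

```python
# Illustrative repositioning sketch over a toy knowledge graph:
# score a compound-disease pair by counting connecting
# compound -> target -> pathway -> disease chains.
# Edges and entities here are invented for illustration.

GRAPH = {
    ("aspirin", "COX2"): "binds",
    ("COX2", "inflammation-pathway"): "member_of",
    ("inflammation-pathway", "colorectal-cancer"): "implicated_in",
    ("metformin", "AMPK"): "binds",
    ("AMPK", "energy-pathway"): "member_of",
}

def paths_to_disease(compound, disease):
    """Count compound -> target -> pathway -> disease chains."""
    count = 0
    for (a, target) in GRAPH:
        if a != compound:
            continue
        for (c, pathway) in GRAPH:
            if c != target:
                continue
            if (pathway, disease) in GRAPH:
                count += 1
    return count

print(paths_to_disease("aspirin", "colorectal-cancer"))   # connected
print(paths_to_disease("metformin", "colorectal-cancer"))  # no path
```

Real systems weight edges by evidence strength instead of simply counting paths.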
Outcomes
Standigm has generated dozens of drug candidates for diverse diseases. The candidates have been experimentally validated with collaboration partners.
•Cancer, with CrystalGenomics, Inc.: toward lead optimization (2 hits out of 10 initial candidates)
•Parkinson's disease, with Ajou University (College of Pharmacy): under validation in an animal model (1 hit out of 7 initial candidates)
•Autism, with Korea Institute of Science and Technology: under validation in an animal model (10 initial candidates)
•Fatty liver disease (in-house project): validated with a gut-liver on a chip (7 hits out of 7 initial candidates)
•Mitochondrial diseases (in-house project): establishing experimental plans with domain experts (3 initial candidates)
•Small projects with a Japanese pharmaceutical company
Collaboration
Standigm primarily aims at exclusive partnerships with its collaborators.
Basic pipeline: new indication prediction → prediction interpretation → target protein prioritization
*Additional customized modules can be developed to pursue the best results upon discussion
The total service fee depends on:
• The number of compounds
• Range of the selected disease area
• Marketability of the selected disease area
The up-front rate depends on:
• Ownership of the developed product
• Ownership of the produced information during collaboration
(Exclusive for collaborator or joint ownership)
* L1000 profiling service fee by Genometry is not included.
Digital Healthcare in Drug Development
Target Discovery → Lead Discovery → Clinical Trial → Post-Market Surveillance
•Patient recruitment
•Data measurement: sensors & wearables
•Digital phenotyping
•Medication adherence
•Analyzing complex medical data and deriving insights
•Analyzing and reading medical imaging and pathology data
•Monitoring continuous data for prevention and prediction
Medical applications of artificial intelligence
Annals of Oncology (2016) 27 (suppl_9): ix179-ix180. 10.1093/annonc/mdw601
Validation study to assess performance of IBM cognitive
computing system Watson for oncology with Manipal
multidisciplinary tumour board for 1000 consecutive cases: An Indian experience
• MMDT (Manipal multidisciplinary tumour board) treatment recommendations and data for 1,000 cases of 4 different cancers, breast (638), colon (126), rectum (124) and lung (112), treated over the last 3 years, were collected.
• Of the treatment recommendations given by the MMDT, WFO classified 50% as REC (recommended), 28% as FC (for consideration), and 17% as NREC (not recommended)
• Nearly 80% of the MMDT recommendations fell into the WFO REC and FC groups
• 5% of the treatments provided by the MMDT were not available in WFO
• The degree of concordance varied depending on the type of cancer
• WFO-REC concordance was highest in rectal cancer (85%) and lowest in lung cancer (17.8%)
• high with TNBC (67.9%); HER2-negative (35%)
• WFO took a median of 40 seconds to capture, analyze, and recommend treatment (vs. a median of 15 minutes for the MMDT)
WFO in ASCO 2017
• Early experience with the IBM WFO cognitive computing system for lung and colorectal cancer treatment (Manipal Hospital)
• Over the past 3 years: lung cancer (112), colon cancer (126), rectal cancer (124)
• lung cancer: localized 88.9%, metastatic 97.9%
• colon cancer: localized 85.5%, metastatic 76.6%
• rectal cancer: localized 96.8%, metastatic 80.6%
Performance of WFO in India
2017 ASCO annual Meeting, J Clin Oncol 35, 2017 (suppl; abstr 8527)
Empowering the Oncology Community for Cancer Care
Genomics
Oncology
Clinical
Trial
Matching
Watson Health’s oncology clients span more than 35 hospital systems
“Empowering the Oncology Community
for Cancer Care”
Andrew Norden, KOTRA Conference, March 2017, “The Future of Health is Cognitive”
IBM Watson Health
Watson for Clinical Trial Matching (CTM)
1. According to the National Comprehensive Cancer Network (NCCN)
2. http://csdd.tufts.edu/files/uploads/02_-_jan_15,_2013_-_recruitment-retention.pdf
Searching across
eligibility criteria of clinical
trials is time consuming
and labor intensive
Current
Challenges
Fewer than 5% of
adult cancer patients
participate in clinical
trials1
37% of sites fail to meet
minimum enrollment
targets. 11% of sites fail
to enroll a single patient 2
The Watson solution
• Uses structured and unstructured
patient data to quickly check
eligibility across relevant clinical
trials
• Provides eligible trial
considerations ranked by
relevance
• Increases speed to qualify
patients
Clinical Investigators
(Opportunity)
• Trials to Patient: Perform
feasibility analysis for a trial
• Identify sites with most
potential for patient enrollment
• Optimize inclusion/exclusion
criteria in protocols
Faster, more efficient
recruitment strategies,
better designed protocols
Point of Care
(Offering)
• Patient to Trials:
Quickly find the
right trial that a
patient might be
eligible for
amongst 100s of
open trials
available
Improve patient care
quality, consistency,
increased efficiency
•Over a total of 16 weeks, 2,620 lung and breast cancer patients at HOG (Highlands Oncology Group) were screened
•90 patients were matched against 3 Novartis breast cancer trial protocols
•Clinical trial coordinator: 1 hour 50 minutes
•Watson CTM: 24 minutes (a 78% time reduction)
•Watson CTM automatically screened out the 94% of patients who did not meet trial eligibility criteria
•Mayo Clinic reported an 80% increase in enrollment in its breast cancer drug trials
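The eligibility-screening idea behind Watson CTM can be illustrated with a minimal rule-based sketch: encode inclusion/exclusion criteria as predicates and filter patients automatically. The criteria and patient fields below are hypothetical; the real system also parses unstructured clinical notes.

```python
# Minimal sketch of automated trial-eligibility screening in the
# spirit of Watson CTM: each criterion is a named predicate, and a
# patient qualifies only if every criterion passes. All criteria and
# patient records here are hypothetical.

CRITERIA = [
    ("age 18-75", lambda p: 18 <= p["age"] <= 75),
    ("HER2 positive", lambda p: p["her2"] == "positive"),
    ("no prior chemo", lambda p: not p["prior_chemo"]),
]

def eligible(patient):
    """True only if the patient satisfies every criterion."""
    return all(check(patient) for _, check in CRITERIA)

patients = [
    {"id": 1, "age": 54, "her2": "positive", "prior_chemo": False},
    {"id": 2, "age": 80, "her2": "positive", "prior_chemo": False},
    {"id": 3, "age": 45, "her2": "negative", "prior_chemo": False},
]
matched = [p["id"] for p in patients if eligible(p)]
print(matched)  # only patient 1 passes all three criteria
```

Screening out clear non-matches this way is what lets a system reject ineligible patients in bulk before human review.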
Digital Healthcare in Drug Development
Target Discovery → Lead Discovery → Clinical Trial → Post-Market Surveillance
•Patient recruitment
•Data measurement: sensors & wearables
•Digital phenotyping
•Medication adherence
Fitbit
Apple Watch
https://clinicaltrials.gov/ct2/results?term=fitbit&Search=Search
•Although it is not a medical device, Fitbit is already widely used in clinical research
•Clinical researchers adopted it on their own, without any encouragement from Fitbit
•The number of clinical studies using Fitbit keeps growing (Mar 2016: 80, Aug 2016: 113, Jul 2017: 173)
•Fitbit is used in clinical research in two main ways:
•as an intervention itself, to test whether it increases activity or improves treatment outcomes
•as a means of monitoring study participants' activity levels

•1. Studies using Fitbit to increase patients' activity levels
•whether Fitbit increases activity in pediatric obesity patients
•whether Fitbit increases activity in patients who underwent sleeve gastrectomy
•whether Fitbit increases activity in young cystic fibrosis patients
•whether Fitbit motivates cancer patients to increase physical activity

•2. Studies using Fitbit to monitor the activity of enrolled patients
•assessing the health and prognosis of patients who received chemotherapy
•testing whether cash incentives increase children's/parents' activity
•measuring quality of life of brain tumor patients alongside other surveys
•assessing activity levels of patients with peripheral artery disease
•Studying the effect of weight loss on breast cancer recurrence
•20% of breast cancer patients relapse, most with metastatic disease
•Being overweight has long been known to raise breast cancer risk,
•and obesity is known to worsen the prognosis of early-stage breast cancer patients
•But no study has yet linked weight loss to recurrence risk
•3,200 overweight or obese early-stage breast cancer patients will participate for 2 years
•Depending on the results, weight loss could become part of the worldwide standard of care for breast cancer
•Fitbit supports the weight-loss program with:
•Fitbit Charge HR: activity, calories burned, and heart rate tracking
•Fitbit Aria Wi-Fi Smart Scale: a smart scale
•FitStar: a personalized video exercise-coaching service
2016. 4. 27.
http://nurseslabs.tumblr.com/post/82438508492/medical-surgical-nursing-mnemonics-and-tips-2
•Biogen Idec uses Fitbit to monitor multiple sclerosis patients
•Goal: validate the effectiveness of an expensive drug in order to maintain its reimbursement price
•Could precise measurement enable early detection of MS prodromal symptoms?
Dec 23, 2014
Zikto: Your Walking Coach
(“FREE VERTICAL MOMENTS AND TRANSVERSE FORCES IN HUMAN WALKING AND
THEIR ROLE IN RELATION TO ARM-SWING”, 	
YU LI*, WEIJIE WANG, ROBIN H. CROMPTON AND MICHAEL M. GUNTHER) 	
(“SYNTHESIS OF NATURAL ARM SWING MOTION IN HUMAN BIPEDAL WALKING”,
JAEHEUNG PARK)
︎
[Diagram: the right arm swings with the left foot; the left arm swings with the right foot]
"During walking, arm swing is an automatic movement that maintains the body's mechanical balance, and serves as an indicator for observing the movement of the opposite foot."
Changes in body-motion trajectories by gait type
Foot placement / arm-swing trajectory
Normal gait
Out-toed gait
Stooped gait
Data collected by Zikto Walk

| Type | Description | Notes |
|---|---|---|
| Impact | Analysis of the impact delivered to the foot | Impact Score |
| Gait cycle | Analysis of the walking cycle | Interval Score |
| Stride | Distance per step | Stride (for future gait-analysis upgrades) |
| 3-D arm trajectory | Arm movement trajectory while walking | Aggregated arm accelerometer/gyro data |
| Walking posture | Posture classification based on the above data | 8 categories |
| Asymmetry index | Asymmetry scores by body part (shoulder, waist, pelvis) | Requires wearing the band on the opposite wrist once a week |
| Gait template | Per-user template built from gait signatures | For biometric authentication |
with the courtesy of ZIKTO, Inc
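The asymmetry index above can be illustrated with a toy calculation: compare left- and right-arm swing amplitudes and report the percent difference. The amplitude traces below are made up; the real device fuses accelerometer and gyro data over many strides.

```python
# Hedged sketch of a body-asymmetry score like the one Zikto Walk
# reports: compare left- and right-arm swing amplitudes over a walk
# and express the difference as a percentage. The traces are invented.

def swing_amplitude(trace):
    """Peak-to-peak amplitude of one arm-swing signal."""
    return max(trace) - min(trace)

def asymmetry_score(left, right):
    """Percent difference between arm swings (0 = perfectly symmetric)."""
    l, r = swing_amplitude(left), swing_amplitude(right)
    return abs(l - r) / max(l, r) * 100

left_arm = [0.1, 0.9, 0.2, 1.0, 0.1]
right_arm = [0.2, 0.7, 0.3, 0.8, 0.2]
print(round(asymmetry_score(left_arm, right_arm), 1))
```

A perfectly symmetric walker would score 0; growing values flag shoulder, waist, or pelvic imbalance.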
Smart Band detecting seizure
https://www.empatica.com/science
Monitoring the Autonomic Nervous System
"Sympathetic activation increases when you experience excitement or stress, whether physical, emotional, or cognitive. The skin is the only organ that is purely innervated by the sympathetic nervous system."
https://www.empatica.com/science
from the talk of Professor Rosalind W. Picard @ Univ of Michigan 2015
https://www.empatica.com/science
CellScope’s iPhone-enabled otoscope
SpiroSmart: spirometer using iPhone
Sleep Cycle
• Users can share their own medical/health data, measured with iPhone sensors, on the platform
• Uses the accelerometer, microphone, gyroscope, GPS, and other sensors
• Steps, activity, memory, voice tremor, and more
• Addresses a longstanding problem in medical research: securing enough medical data
• Removes physical and temporal barriers to enrollment (once per 3 months → once per second)
• Encourages public participation in medical research, increasing participant numbers
• Tens of thousands of participants signed up within 24 hours of launch
• All data sharing requires the user's consent
ResearchKit
•The initial release introduced five apps covering five diseases
ResearchKit
http://www.roche.com/media/store/roche_stories/roche-stories-2015-08-10.htm
pRED app to track Parkinson’s symptoms in drug trial
Autism and Beyond: measuring facial expressions of young patients with autism
Mole Mapper: measuring morphological changes of moles
EpiWatch: measuring behavioral data of epilepsy patients
•Stanford's cardiovascular research app, myHeart
• 11,000 participants enrolled within one day of launch
• Alan Yeung, the study lead at Stanford: "Recruiting 11,000 participants the conventional way would take a year across 50 hospitals nationwide"

•Parkinson's disease research app, mPower
• 5,589 participants enrolled within one day of launch
• A previous effort enrolled only 800 patients over 5 years at a cost of $60 million
the manifestations of disease by providing a
more comprehensive and nuanced view of the
experience of illness. Through the lens of the
digital phenotype, an individual’s interaction
The digital phenotype
Sachin H Jain, Brian W Powers, Jared B Hawkins & John S Brownstein
In the coming years, patient phenotypes captured to enhance health and wellness will extend to human interactions with
digital technology.
In 1982, the evolutionary biologist Richard Dawkins introduced the concept of the "extended phenotype"1, the idea that phenotypes should not be limited just to biological processes, such as protein biosynthesis or tissue growth, but extended to include all effects that a gene has on its environment inside or outside of the body of the individual organism. Dawkins stressed that many delineations of phenotypes are arbitrary. Animals and humans can modify their environments, and these modifications and associated behaviors are expressions of one's genome and, thus, part of their extended phenotype. In the animal kingdom, he cites dam building by beavers as an example of the beaver's extended phenotype1.
As personal technology becomes increasingly embedded in human lives, we think there is an important extension of Dawkins's theory: the notion of a 'digital phenotype'. Can aspects of our interface with technology be somehow diagnostic and/or prognostic for certain conditions? Can one's clinical data be linked and analyzed together with online activity and behavior data to create a unified, nuanced view of human disease? Here, we describe the concept of the digital phenotype. Although several disparate studies have touched on this notion, the framework for medicine has yet to be described. We attempt to define digital phenotype and further describe the opportunities and challenges in incorporating these data into healthcare.
[Figure 1: Timeline of insomnia-related tweets from representative individuals. Density distributions (probability density functions) are shown for seven individual users over a two-year period (Jan. 2013 to July 2014). Density on the y axis highlights periods of relative activity for each user; a representative tweet from each user is shown as an example.]
http://www.nature.com/nbt/journal/v33/n5/full/nbt.3223.html
"Extended Phenotype" (확장된 표현형)
Digital Phenotype:
Your smartphone knows if you are depressed
Ginger.io
•How often you text
•How long you talk on the phone
•Who you call
•How far you travel
•How much you move

• UCSF, McLean Hospital: mental illness research
• Novant Health: diabetes and postpartum depression research
• UCSF, Duke: post-surgical recovery monitoring
J Med Internet Res. 2015 Jul 15;17(7):e175.
The correlation analysis between the features and the PHQ-9 scores revealed that 6 of the 10 features were significantly correlated with the scores:
• strong correlation: circadian movement, normalized entropy, location variance
• correlation: phone usage features (usage duration and usage frequency)
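The reported analysis can be reproduced in miniature: compute a plain Pearson correlation between a phone-derived feature (here, circadian movement) and PHQ-9 depression scores. The sample values below are illustrative, not the study's data.

```python
# Sketch of the reported analysis: Pearson correlation between a
# phone-derived feature and PHQ-9 scores. Sample values are invented
# to show the direction of the effect, not taken from the study.

import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

circadian_movement = [0.9, 0.7, 0.6, 0.4, 0.2]  # higher = more regular routine
phq9 = [3, 6, 8, 12, 18]                        # higher = more depressive symptoms
r = pearson(circadian_movement, phq9)
print(round(r, 2))  # strong negative correlation, consistent with the study
```

A strongly negative r here mirrors the paper's finding that less regular daily routines track with higher depression scores.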
Figure 4. Comparison of location and usage feature statistics between participants with no symptoms of depression (blue) and the ones with (red).
Feature values are scaled between 0 and 1 for easier comparison. Boxes extend between 25th and 75th percentiles, and whiskers show the range.
Horizontal solid lines inside the boxes are medians. One, two, and three asterisks show significant differences at P<.05, P<.01, and P<.001 levels,
respectively (ENT, entropy; ENTN, normalized entropy; LV, location variance; HS, home stay; TT, transition time; TD, total distance; CM, circadian
movement; NC, number of clusters; UF, usage frequency; UD, usage duration).
Figure 5. Coefficients of correlation between location features. One, two, and three asterisks indicate significant correlation levels at P<.05, P<.01,
and P<.001, respectively (ENT, entropy; ENTN, normalized entropy; LV, location variance; HS, home stay; TT, transition time; TD, total distance;
CM, circadian movement; NC, number of clusters).
Saeb et al., Journal of Medical Internet Research
Feature annotations from the figure:
• the variability of the time the participant spent at the location clusters
• the extent to which the participants' sequence of locations followed a circadian rhythm
• home stay
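Two of the location features can be sketched directly: the entropy of the time distribution across GPS location clusters, and its normalized form (entropy divided by the log of the cluster count). The time-per-cluster values below are made up for illustration.

```python
# Sketch of two location features from the study: entropy over time
# spent at GPS clusters, and normalized entropy. The hours-per-cluster
# values are illustrative, not study data.

import math

def location_entropy(hours_per_cluster):
    """Shannon entropy of the time distribution across clusters."""
    total = sum(hours_per_cluster)
    probs = [h / total for h in hours_per_cluster if h > 0]
    return -sum(p * math.log(p) for p in probs)

def normalized_entropy(hours_per_cluster):
    """Entropy scaled to [0, 1] by the maximum possible entropy."""
    n = len(hours_per_cluster)
    return location_entropy(hours_per_cluster) / math.log(n)

hours = [12, 8, 3, 1]  # e.g., time spent at home, work, gym, cafe
print(round(normalized_entropy(hours), 3))
```

A value near 1 means time is spread evenly across places; a value near 0 means nearly all time is spent in one location.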
Reece & Danforth, “Instagram photos reveal predictive markers of depression” (2016)
higher Hue (bluer)
lower Saturation (grayer)
lower Brightness (darker)
Can Instagram tell whether you are depressed?
Digital Phenotype:
Your Instagram knows if you are depressed
Rao (MVR) (24).

Results
Both All-data and Pre-diagnosis models were decisively superior to a null model (K_All = 157.5; K_Pre = 149.8). All-data predictors were significant with 99% probability. Pre-diagnosis and All-data confidence levels were largely identical, with two exceptions: Pre-diagnosis Brightness decreased to 90% confidence, and Pre-diagnosis posting frequency dropped to 30% confidence, suggesting a null predictive value in the latter case.
Increased hue, along with decreased brightness and saturation, predicted depression. This means that photos posted by depressed individuals tended to be bluer, darker, and grayer (see Fig. 2). The more comments Instagram posts received, the more likely they were posted by depressed participants, but the opposite was true for likes received. In the All-data model, higher posting frequency was also associated with depression. Depressed participants were more likely to post photos with faces, but had a lower average face count per photograph than healthy participants. Finally, depressed participants were less likely to apply Instagram filters to their posted photos.
 
Fig. 2. Magnitude and direction of regression coefficients in All-data (N=24,713) and Pre-diagnosis (N=18,513) models. X-axis values represent the adjustment in odds of an observation belonging to depressed individuals, per
Reece & Danforth, “Instagram photos reveal predictive markers of depression” (2016)
 
 
Fig. 1. Comparison of HSV values. The right photograph has higher hue (bluer), lower saturation (grayer), and lower brightness (darker) than the left photograph. Instagram photos posted by depressed individuals had HSV values shifted toward those in the right photograph, compared with photos posted by healthy individuals.
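The HSV markers can be computed with nothing but the standard library: average hue, saturation, and value over an image's RGB pixels. The two tiny "photos" below are invented to show the direction of the reported shift (bluer, grayer, darker for depressed users' photos).

```python
# Sketch of the HSV markers used in the study: compute mean hue,
# saturation, and brightness (value) over an image's RGB pixels.
# The two tiny pixel lists are made up for illustration.

import colorsys

def mean_hsv(pixels):
    """Average (H, S, V) over a list of (r, g, b) pixels in 0-255."""
    hsv = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255) for r, g, b in pixels]
    n = len(hsv)
    return tuple(round(sum(c[i] for c in hsv) / n, 3) for i in range(3))

bright_warm = [(250, 200, 120), (240, 190, 110)]  # sunny, saturated, bright
dark_blue = [(60, 65, 90), (50, 55, 80)]          # bluer, grayer, darker

print(mean_hsv(bright_warm))
print(mean_hsv(dark_blue))
```

Run over whole photo libraries, these three averages are exactly the kind of per-image features the study's classifier consumed.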
 
Units of observation
In determining the best time span for this analysis, we encountered a difficult question: When and for how long does depression occur? A diagnosis of depression does not indicate the persistence of a depressive state for every moment of every day, and to conduct analysis using an individual's entire posting history as a single unit of observation is therefore rather specious. At the other extreme, to take each individual photograph as a unit of observation runs the risk of being too granular. De Choudhury et al. (5) looked at all of a given user's posts in a single day, and aggregated those data into per-person, per-day units of observation. We adopted this precedent of "user-days" as a unit of analysis.

Statistical framework
We used Bayesian logistic regression with uninformative priors to determine the strength of individual predictors. Two separate models were trained. The All-data model used all collected data to address Hypothesis 1. The Pre-diagnosis model used all data collected from
. In particular, depressedχ2 07.84, p .17e 64;( All  = 9   = 9 − 1 13.80, p .87e 44)χ2Pre  = 8   = 2 − 1  
participants were less likely than healthy participants to use any filters at all. When depressed 
participants did employ filters, they most disproportionately favored the “Inkwell” filter, which 
converts color photographs to black­and­white images. Conversely, healthy participants most 
disproportionately favored the Valencia filter, which lightens the tint of photos. Examples of 
filtered photographs are provided in SI Appendix VIII.  
 
Fig. 3. Instagram filter usage among depressed and healthy participants. Bars indicate difference between observed 
and expected usage frequencies, based on a Chi­squared analysis of independence. Blue bars indicate 
disproportionate use of a filter by depressed compared to healthy participants, orange bars indicate the reverse. 
 
VIII. Instagram filter examples 
 
Fig. S8. Examples of Inkwell and Valencia Instagram filters.  Inkwell converts 
color photos to black­and­white, Valencia lightens tint.  Depressed participants 
most favored Inkwell compared to healthy participants, Healthy participants 
Digital Healthcare in Drug Development
Target Discovery → Lead Discovery → Clinical Trial → Post-Market Surveillance
•Patient recruitment
•Data measurement: sensors & wearables
•Digital phenotyping
•Medication adherence
Ingestible Sensor, Proteus Digital Health
IEEE Trans Biomed Eng. 2014 Jul
An Ingestible Sensor
for Measuring Medication Adherence
The 0.9% of devices that went undetected represent
contributions from all components of the system. For the
sensor, the most likely contribution is due to physiological
corner cases, where a combination of stomach environment
and receiver-sensor orientation may result in a small
proportion of devices (no greater than 0.9%) being missed.
Table IV. Exposure and performance in clinical trials
• 412 subjects
• 20,993 ingestions
• Maximum daily ingestion: 34
• Maximum use days: 90 days
• 99.1% detection accuracy
• 100% correct identification
• 0% false positives
• No SAEs/UADEs related to the system
Trials were conducted in the following patient populations; the number of patients in each study is indicated in parentheses: healthy volunteers (296), cardiovascular disease (53), tuberculosis (30), psychiatry (28). (SAE = serious adverse event; UADE = unanticipated adverse device effect)
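The headline rates in the table can be recomputed from the counts. Since per-event logs are not public, the detected-ingestion count below is back-calculated from the published 99.1% figure and serves only to show how the metrics relate.

```python
# Recompute the ingestible-sensor performance figures: detection
# accuracy is detected ingestions over true ingestions; the missed
# fraction is its complement. The 'detected' count is back-calculated
# from the published 99.1% rate (per-event logs are not public).

def detection_stats(true_ingestions, detected, false_positives):
    accuracy = detected / true_ingestions
    return {
        "detection_accuracy_pct": round(accuracy * 100, 1),
        "missed_pct": round((1 - accuracy) * 100, 1),
        "false_positive_rate": false_positives / true_ingestions,
    }

stats = detection_stats(true_ingestions=20993,
                        detected=20804,  # ~99.1% of 20,993
                        false_positives=0)
print(stats)
```

The 0.9% missed fraction recovered here is the same figure the text attributes to physiological corner cases.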
Jan 12, 2015
Clinical trial researchers using Oracle’s
software will now be able to track
patients’ medication adherence with
Proteus’s technology.
- Measuring participant adherence to drug protocols
- Identifying the optimum dosing regimen for recommended use
Sep 10, 2015
Proteus and Otsuka have submitted a sensor-embedded version
of the antidepressant Abilify for FDA approval.
Jan 11, 2016
Nov 13, 2017
•In November 2017, the FDA approved Abilify MyCite for marketing
•Patient consent is required before prescription
•Some have raised concerns about patient privacy
•Up to four people, including the attending physician and caregivers, can receive the adherence data
Digital Healthcare in Drug Development
Target Discovery → Lead Discovery → Clinical Trial → Post-Market Surveillance
•SNS-based post-market surveillance
•Blockchain-based post-market surveillance
‘Facebook for Patients’, PatientsLikeMe.com
의료의 미래, 디지털 헬스케어: 신약개발을 중심으로

Benefits of Big Data in Health Care A Revolution
ijtsrd
 
IBM Terkko Pop-up Presentation by Pekka Leppänen
IBM Terkko Pop-up Presentation by Pekka LeppänenIBM Terkko Pop-up Presentation by Pekka Leppänen
IBM Terkko Pop-up Presentation by Pekka Leppänen
TerkkoHub
 
La Médecine du futur !
La Médecine du futur !La Médecine du futur !
La Médecine du futur !
Geeks Anonymes
 
Volar Health PharmaVOICE Blogs 2018
Volar Health PharmaVOICE Blogs 2018Volar Health PharmaVOICE Blogs 2018
Volar Health PharmaVOICE Blogs 2018
Carlos Rodarte
 
eBook - Data Analytics in Healthcare
eBook - Data Analytics in HealthcareeBook - Data Analytics in Healthcare
eBook - Data Analytics in Healthcare
NextGen Healthcare
 
3 Round Stones at the New England Health Datapalooza Oct 3, 2012
3 Round Stones at the New England Health Datapalooza Oct 3, 20123 Round Stones at the New England Health Datapalooza Oct 3, 2012
3 Round Stones at the New England Health Datapalooza Oct 3, 2012
3 Round Stones
 
Thomas Willkens-El impacto de las ciencias ómicas en la medicina, la nutrició...
Thomas Willkens-El impacto de las ciencias ómicas en la medicina, la nutrició...Thomas Willkens-El impacto de las ciencias ómicas en la medicina, la nutrició...
Thomas Willkens-El impacto de las ciencias ómicas en la medicina, la nutrició...
Fundación Ramón Areces
 
Slides for rare disorders meeting
Slides for rare disorders meetingSlides for rare disorders meeting
Slides for rare disorders meeting
Sean Ekins
 

Similar to 의료의 미래, 디지털 헬스케어: 신약개발을 중심으로 (20)

04 Exec Summ_Shivom
04 Exec Summ_Shivom04 Exec Summ_Shivom
04 Exec Summ_Shivom
 
Big Data and the Future by Sherri Rose
Big Data and the Future by Sherri RoseBig Data and the Future by Sherri Rose
Big Data and the Future by Sherri Rose
 
K Bobyk - %22A Primer on Personalized Medicine - The Imminent Systemic Shift%...
K Bobyk - %22A Primer on Personalized Medicine - The Imminent Systemic Shift%...K Bobyk - %22A Primer on Personalized Medicine - The Imminent Systemic Shift%...
K Bobyk - %22A Primer on Personalized Medicine - The Imminent Systemic Shift%...
 
Emerging collaboration models for academic medical centers _ our place in the...
Emerging collaboration models for academic medical centers _ our place in the...Emerging collaboration models for academic medical centers _ our place in the...
Emerging collaboration models for academic medical centers _ our place in the...
 
Possible Solution for Managing the Worlds Personal Genetic Data - DNA Guide, ...
Possible Solution for Managing the Worlds Personal Genetic Data - DNA Guide, ...Possible Solution for Managing the Worlds Personal Genetic Data - DNA Guide, ...
Possible Solution for Managing the Worlds Personal Genetic Data - DNA Guide, ...
 
Health IT Summit Austin 2013 - Presentation "The Impact of All Data on Health...
Health IT Summit Austin 2013 - Presentation "The Impact of All Data on Health...Health IT Summit Austin 2013 - Presentation "The Impact of All Data on Health...
Health IT Summit Austin 2013 - Presentation "The Impact of All Data on Health...
 
The reality of moving towards precision medicine
The reality of moving towards precision medicineThe reality of moving towards precision medicine
The reality of moving towards precision medicine
 
Big Data in Biomedicine: Where is the NIH Headed
Big Data in Biomedicine: Where is the NIH HeadedBig Data in Biomedicine: Where is the NIH Headed
Big Data in Biomedicine: Where is the NIH Headed
 
Big Data and the Promise and Pitfalls when Applied to Disease Prevention and ...
Big Data and the Promise and Pitfalls when Applied to Disease Prevention and ...Big Data and the Promise and Pitfalls when Applied to Disease Prevention and ...
Big Data and the Promise and Pitfalls when Applied to Disease Prevention and ...
 
Rock Report: Big Data by @Rock_Health
Rock Report: Big Data by @Rock_HealthRock Report: Big Data by @Rock_Health
Rock Report: Big Data by @Rock_Health
 
Pattern diagnostics 2015
Pattern diagnostics 2015Pattern diagnostics 2015
Pattern diagnostics 2015
 
Pattern diagnostics 2015
Pattern diagnostics 2015Pattern diagnostics 2015
Pattern diagnostics 2015
 
Benefits of Big Data in Health Care A Revolution
Benefits of Big Data in Health Care A RevolutionBenefits of Big Data in Health Care A Revolution
Benefits of Big Data in Health Care A Revolution
 
IBM Terkko Pop-up Presentation by Pekka Leppänen
IBM Terkko Pop-up Presentation by Pekka LeppänenIBM Terkko Pop-up Presentation by Pekka Leppänen
IBM Terkko Pop-up Presentation by Pekka Leppänen
 
La Médecine du futur !
La Médecine du futur !La Médecine du futur !
La Médecine du futur !
 
Volar Health PharmaVOICE Blogs 2018
Volar Health PharmaVOICE Blogs 2018Volar Health PharmaVOICE Blogs 2018
Volar Health PharmaVOICE Blogs 2018
 
eBook - Data Analytics in Healthcare
eBook - Data Analytics in HealthcareeBook - Data Analytics in Healthcare
eBook - Data Analytics in Healthcare
 
3 Round Stones at the New England Health Datapalooza Oct 3, 2012
3 Round Stones at the New England Health Datapalooza Oct 3, 20123 Round Stones at the New England Health Datapalooza Oct 3, 2012
3 Round Stones at the New England Health Datapalooza Oct 3, 2012
 
Thomas Willkens-El impacto de las ciencias ómicas en la medicina, la nutrició...
Thomas Willkens-El impacto de las ciencias ómicas en la medicina, la nutrició...Thomas Willkens-El impacto de las ciencias ómicas en la medicina, la nutrició...
Thomas Willkens-El impacto de las ciencias ómicas en la medicina, la nutrició...
 
Slides for rare disorders meeting
Slides for rare disorders meetingSlides for rare disorders meeting
Slides for rare disorders meeting
 

More from Yoon Sup Choi

한국 원격의료 산업의 주요 이슈
한국 원격의료 산업의 주요 이슈한국 원격의료 산업의 주요 이슈
한국 원격의료 산업의 주요 이슈
Yoon Sup Choi
 
디지털 헬스케어 파트너스 (DHP) 소개 자료
디지털 헬스케어 파트너스 (DHP) 소개 자료디지털 헬스케어 파트너스 (DHP) 소개 자료
디지털 헬스케어 파트너스 (DHP) 소개 자료
Yoon Sup Choi
 
[대한병리학회] 의료 인공지능 101: 병리를 중심으로
[대한병리학회] 의료 인공지능 101: 병리를 중심으로[대한병리학회] 의료 인공지능 101: 병리를 중심으로
[대한병리학회] 의료 인공지능 101: 병리를 중심으로
Yoon Sup Choi
 
한국 디지털 헬스케어의 생존을 위한 규제 혁신에 대한 고언
한국 디지털 헬스케어의 생존을 위한 규제 혁신에 대한 고언한국 디지털 헬스케어의 생존을 위한 규제 혁신에 대한 고언
한국 디지털 헬스케어의 생존을 위한 규제 혁신에 대한 고언
Yoon Sup Choi
 
원격의료에 대한 생각, 그리고 그 생각에 대한 생각
원격의료에 대한 생각, 그리고 그 생각에 대한 생각원격의료에 대한 생각, 그리고 그 생각에 대한 생각
원격의료에 대한 생각, 그리고 그 생각에 대한 생각
Yoon Sup Choi
 
포스트 코로나 시대, 혁신적인 디지털 헬스케어 기업의 조건
포스트 코로나 시대, 혁신적인 디지털 헬스케어 기업의 조건포스트 코로나 시대, 혁신적인 디지털 헬스케어 기업의 조건
포스트 코로나 시대, 혁신적인 디지털 헬스케어 기업의 조건
Yoon Sup Choi
 
디지털 치료제, 또 하나의 신약
디지털 치료제, 또 하나의 신약디지털 치료제, 또 하나의 신약
디지털 치료제, 또 하나의 신약
Yoon Sup Choi
 
[ASGO 2019] Artificial Intelligence in Medicine
[ASGO 2019] Artificial Intelligence in Medicine[ASGO 2019] Artificial Intelligence in Medicine
[ASGO 2019] Artificial Intelligence in Medicine
Yoon Sup Choi
 
인허가 이후에도 변화하는 AI/ML 기반 SaMD를 어떻게 규제할 것인가
인허가 이후에도 변화하는 AI/ML 기반 SaMD를 어떻게 규제할 것인가인허가 이후에도 변화하는 AI/ML 기반 SaMD를 어떻게 규제할 것인가
인허가 이후에도 변화하는 AI/ML 기반 SaMD를 어떻게 규제할 것인가
Yoon Sup Choi
 
인공지능은 의료를 어떻게 혁신하는가 (2019년 7월) (상)
인공지능은 의료를 어떻게 혁신하는가 (2019년 7월) (상)인공지능은 의료를 어떻게 혁신하는가 (2019년 7월) (상)
인공지능은 의료를 어떻게 혁신하는가 (2019년 7월) (상)
Yoon Sup Choi
 
인공지능은 의료를 어떻게 혁신하는가 (2019년 7월) (하)
인공지능은 의료를 어떻게 혁신하는가 (2019년 7월) (하)인공지능은 의료를 어떻게 혁신하는가 (2019년 7월) (하)
인공지능은 의료를 어떻게 혁신하는가 (2019년 7월) (하)
Yoon Sup Choi
 
성공하는 디지털 헬스케어 스타트업을 위한 조언
성공하는 디지털 헬스케어 스타트업을 위한 조언성공하는 디지털 헬스케어 스타트업을 위한 조언
성공하는 디지털 헬스케어 스타트업을 위한 조언
Yoon Sup Choi
 
디지털 헬스케어, 그리고 예상되는 법적 이슈들
디지털 헬스케어, 그리고 예상되는 법적 이슈들디지털 헬스케어, 그리고 예상되는 법적 이슈들
디지털 헬스케어, 그리고 예상되는 법적 이슈들
Yoon Sup Choi
 
디지털 헬스케어 파트너스 (DHP) 소개: 데모데이 2019
디지털 헬스케어 파트너스 (DHP) 소개: 데모데이 2019디지털 헬스케어 파트너스 (DHP) 소개: 데모데이 2019
디지털 헬스케어 파트너스 (DHP) 소개: 데모데이 2019
Yoon Sup Choi
 
When digital medicine becomes the medicine (2/2)
When digital medicine becomes the medicine (2/2)When digital medicine becomes the medicine (2/2)
When digital medicine becomes the medicine (2/2)
Yoon Sup Choi
 

More from Yoon Sup Choi (15)

한국 원격의료 산업의 주요 이슈
한국 원격의료 산업의 주요 이슈한국 원격의료 산업의 주요 이슈
한국 원격의료 산업의 주요 이슈
 
디지털 헬스케어 파트너스 (DHP) 소개 자료
디지털 헬스케어 파트너스 (DHP) 소개 자료디지털 헬스케어 파트너스 (DHP) 소개 자료
디지털 헬스케어 파트너스 (DHP) 소개 자료
 
[대한병리학회] 의료 인공지능 101: 병리를 중심으로
[대한병리학회] 의료 인공지능 101: 병리를 중심으로[대한병리학회] 의료 인공지능 101: 병리를 중심으로
[대한병리학회] 의료 인공지능 101: 병리를 중심으로
 
한국 디지털 헬스케어의 생존을 위한 규제 혁신에 대한 고언
한국 디지털 헬스케어의 생존을 위한 규제 혁신에 대한 고언한국 디지털 헬스케어의 생존을 위한 규제 혁신에 대한 고언
한국 디지털 헬스케어의 생존을 위한 규제 혁신에 대한 고언
 
원격의료에 대한 생각, 그리고 그 생각에 대한 생각
원격의료에 대한 생각, 그리고 그 생각에 대한 생각원격의료에 대한 생각, 그리고 그 생각에 대한 생각
원격의료에 대한 생각, 그리고 그 생각에 대한 생각
 
포스트 코로나 시대, 혁신적인 디지털 헬스케어 기업의 조건
포스트 코로나 시대, 혁신적인 디지털 헬스케어 기업의 조건포스트 코로나 시대, 혁신적인 디지털 헬스케어 기업의 조건
포스트 코로나 시대, 혁신적인 디지털 헬스케어 기업의 조건
 
디지털 치료제, 또 하나의 신약
디지털 치료제, 또 하나의 신약디지털 치료제, 또 하나의 신약
디지털 치료제, 또 하나의 신약
 
[ASGO 2019] Artificial Intelligence in Medicine
[ASGO 2019] Artificial Intelligence in Medicine[ASGO 2019] Artificial Intelligence in Medicine
[ASGO 2019] Artificial Intelligence in Medicine
 
인허가 이후에도 변화하는 AI/ML 기반 SaMD를 어떻게 규제할 것인가
인허가 이후에도 변화하는 AI/ML 기반 SaMD를 어떻게 규제할 것인가인허가 이후에도 변화하는 AI/ML 기반 SaMD를 어떻게 규제할 것인가
인허가 이후에도 변화하는 AI/ML 기반 SaMD를 어떻게 규제할 것인가
 
인공지능은 의료를 어떻게 혁신하는가 (2019년 7월) (상)
인공지능은 의료를 어떻게 혁신하는가 (2019년 7월) (상)인공지능은 의료를 어떻게 혁신하는가 (2019년 7월) (상)
인공지능은 의료를 어떻게 혁신하는가 (2019년 7월) (상)
 
인공지능은 의료를 어떻게 혁신하는가 (2019년 7월) (하)
인공지능은 의료를 어떻게 혁신하는가 (2019년 7월) (하)인공지능은 의료를 어떻게 혁신하는가 (2019년 7월) (하)
인공지능은 의료를 어떻게 혁신하는가 (2019년 7월) (하)
 
성공하는 디지털 헬스케어 스타트업을 위한 조언
성공하는 디지털 헬스케어 스타트업을 위한 조언성공하는 디지털 헬스케어 스타트업을 위한 조언
성공하는 디지털 헬스케어 스타트업을 위한 조언
 
디지털 헬스케어, 그리고 예상되는 법적 이슈들
디지털 헬스케어, 그리고 예상되는 법적 이슈들디지털 헬스케어, 그리고 예상되는 법적 이슈들
디지털 헬스케어, 그리고 예상되는 법적 이슈들
 
디지털 헬스케어 파트너스 (DHP) 소개: 데모데이 2019
디지털 헬스케어 파트너스 (DHP) 소개: 데모데이 2019디지털 헬스케어 파트너스 (DHP) 소개: 데모데이 2019
디지털 헬스케어 파트너스 (DHP) 소개: 데모데이 2019
 
When digital medicine becomes the medicine (2/2)
When digital medicine becomes the medicine (2/2)When digital medicine becomes the medicine (2/2)
When digital medicine becomes the medicine (2/2)
 

The Future of Medicine, Digital Healthcare: Focusing on Drug Development

  • 1. Professor, SAHIST, Sungkyunkwan University; Director, Digital Healthcare Institute. Yoon Sup Choi, Ph.D. Digital Healthcare, the Future of Medicine: Focusing on Drug Development
  • 2. “It's in Apple's DNA that technology alone is not enough. It's technology married with liberal arts.”
  • 3. The Convergence of IT, BT and Medicine
  • 4.
  • 5.
  • 8. 2017 was the biggest year on record for digital health startup funding. Both the number of deals and the size of individual investments reached record highs. There were eight mega deals of more than $100M, and as a result a substantial number of unicorns valued above $1B emerged. https://rockhealth.com/reports/2017-year-end-funding-report-the-end-of-the-beginning-of-digital-health/
  • 10. Over the past three years, investment in digital healthcare by pharmaceutical companies such as Merck, J&J, and GSK has surged. There were 22 deals in 2015–2016 in total (equal to the number of deals in the five years 2010–2014). Merck has been the most active, with 24 investments ($5–7M each) through its Global Health Innovation Fund since 2009. GSK has made 6 investments since 2014 (via its VC arm, SR One), including Propeller Health.
  • 11. Digital Healthcare in Drug Development: Target Discovery → Lead Discovery → Clinical Trial → Post-Market Surveillance (with analysis between stages).
  • 12. Digital Healthcare in Drug Development: Target Discovery → Lead Discovery → Clinical Trial → Post-Market Surveillance. Personal genome analysis; blockchain-based genomic data trading platform.
  • 13.
  • 14. Direct-To-Consumer (DTC) Genetic Testing: A little spit is all it takes! Results within 6–8 weeks.
  • 15. 120 Disease Risks, 21 Drug Responses, 49 Carrier Statuses, 57 Traits, for $99.
  • 20. Inherited Conditions. Hemochromatosis is a genetic disorder in which the body's iron metabolism is impaired and too much dietary iron is absorbed. The excess iron accumulates in multiple organs, particularly the liver, heart, and pancreas, damaging them and leading to liver disease, heart disease, and malignant tumors.
  • 21. Traits: facial flushing after drinking, ability to taste bitterness, earwax type, eye color, curly hair, lactose tolerance, malaria resistance, likelihood of baldness, muscle performance, blood type, norovirus resistance, HIV resistance, likelihood of nicotine addiction.
  • 22. genetic factor vs. environmental factor
  • 24. https://www.23andme.com/slideshow/research/ Genetic research driven by customers' voluntary participation: Which thumb is on top when you clasp your hands? Are you a morning person or an evening person? Do you sneeze when exposed to bright light? Muscle performance, bitter-taste perception, facial flushing after drinking, lactase deficiency. 81% of customers voluntarily answered 10 or more questions; 1 million data points are accumulated every week. The More Data, The Higher Accuracy!
  • 25. Data Business (January 6, 2015; January 13, 2015)
  • 26. NATURE BIOTECHNOLOGY, VOLUME 35, NUMBER 10, OCTOBER 2017, p. 897. "23andMe wades further into drug discovery." Direct-to-consumer genetics testing company 23andMe is advancing its drug discovery efforts with a $250 million financing round announced in September. The Mountain View, California–based firm plans to use the funds for its own therapeutics division aimed at mining the company's database for novel drug targets, in addition to its existing consumer genomics business and genetic research platform. At the same time, the company has strengthened ongoing partnerships with Pfizer and Roche, and inked a new collaboration with Lundbeck; all are keen to incorporate 23andMe's human genetics data cache into their discovery and clinical programs. It was over a decade ago that Icelandic company deCODE Genetics pioneered genetics-driven drug discovery. The Reykjavik-based biotech's DNA database of 140,000 Icelanders, which Amgen bought in 2012 (Nat. Biotechnol. 31, 87–88, 2013), was set up to identify genes associated with disease. But whereas the bedrock of deCODE's platform was the health records stretching back over a century, the value in 23andMe's platform lies instead in its database of more than 2 million genotyped customers, and the reams of phenotypic information participants collect at home by online surveys of mood, cognition and even food intake. For Danish pharma Lundbeck, a partnership signed in August with 23andMe and think-tank Milken Institute will provide a fresh look at major depressive disorder and bipolar depression. The collaboration studying 25,000 participants will link genomics with complete cognitive tests and surveys taken over nine months, providing an almost continuous monitoring of participants' symptoms. "Cognition is a key symptom in depression," says Niels Plath, vice president for synaptic transmission at Copenhagen-based Lundbeck.
But the biological processes leading to depression are poorly understood, and the condition is difficult to classify as it includes a broad population of patients. "If we could use genetic profiling to sort people into groups and link to biology, we could identify new drug targets, novel pathways and protein networks. With 23andMe, we can combine the genetic profiling with symptomatic presentation," says Plath. An approach like this leapfrogs the traditional paradigm of mouse models and cell-based assays for drug discovery. "Our scientific hypotheses must come from patient-derived information," says Plath. "It could be phenotype, it could be genetic." Drug maker Roche has been taking advantage of 23andMe's data cache for several years, and its collaborations are yielding results. In September, researchers from the Basel-based pharma's wholly owned Genentech subsidiary, in partnership with 23andMe and others, published a paper showcasing 17 new Parkinson's disease risk loci that could be potential targets for therapeutics (Nat. Genet. http://dx.doi.org/10.1038/ng.3955, 2017). A year earlier, in August 2016, scientists at New York–based Pfizer, 23andMe and Massachusetts General Hospital announced that they had identified 15 genetic regions linked to depression (Nat. Genet. 48, 1031–1036, 2016). A 23andMe spokesperson this week called that paper a "landmark," because it was the first study to uncover 17 variants associated with major depressive disorder.
Ashley Winslow, who was corresponding author on the 2016 Nature Genetics paper, and who used to work at Pfizer, says, "Initially, the focus was on using the database to either confirm [or refute] the findings established by traditional, clinical methods of ascertainment." It soon occurred to the investigators that they could move beyond traditional association studies and do discovery work in indications that to date had "not been well powered," such as major depression, especially since some of 23andMe's questionnaires specifically asked if subjects had once been clinically diagnosed. "I think [the database is] of particular interest for psychiatric disorders because the medications just have such a poor track record of not working," says Winslow, now senior director of translational research and portfolio development at the University of Pennsylvania's Orphan Disease Center in Philadelphia. "23andMe offered us a fresh new look." Winslow thinks there is a "powerful shift" under way in pharma as it recognizes the benefits of rooting target discovery in human-derived data. "You still have to do the work-up through cell-line screening or animals at some point, but the starting point being human-derived data is hugely important." Justin Petrone, Tartu, Estonia. [Photo caption: Beyond consumer genetics: 23andMe sells access to its database to drug companies.]
  • 27. Human genomes are being sequenced at an ever-increasing rate. The 1000 Genomes Project has aggregated hundreds of genomes; The Cancer Genome Atlas (TCGA) has gathered several thousand; and the Exome Aggregation Consortium (ExAC) has sequenced more than 60,000 exomes. Dotted lines show three possible future growth curves. [Figure: "DNA Sequencing Soars" — cumulative number of human genomes, 2001–2025, from the Human Genome Project and the first personal genome through 1000 Genomes, TCGA, and ExAC, with projections doubling every 7 months (historical growth rate), every 12 months (Illumina estimate), and every 18 months (Moore's law).] Michael Eisenstein, Nature, 2015.
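The three projection curves in the figure are plain doubling laws, so their divergence is easy to check with a few lines of arithmetic. A minimal sketch; the 2015 baseline of roughly 10^6 genomes is an assumption read off the chart, not a figure stated in the text:

```python
# Project cumulative genome counts under a given doubling time (illustrative).
def projected_genomes(baseline: float, doubling_months: float, months_elapsed: float) -> float:
    """Exponential growth: the count doubles once per `doubling_months`."""
    return baseline * 2 ** (months_elapsed / doubling_months)

baseline_2015 = 1e6        # assumed ~1 million genomes in 2015
months_to_2025 = 10 * 12   # horizon of the figure's dotted lines

for label, doubling in [("7 months (historical)", 7),
                        ("12 months (Illumina estimate)", 12),
                        ("18 months (Moore's law)", 18)]:
    n = projected_genomes(baseline_2015, doubling, months_to_2025)
    print(f"doubling every {label}: ~{n:.1e} genomes by 2025")
```

Even over a single decade, the three doubling assumptions end up several orders of magnitude apart, which is why the figure's dotted lines fan out so widely.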
  • 29. Dilemma in Sequencing: More DNA, More Meaning. Extracting more meaning requires more DNA, but persuading people to sequence more requires offering them more value.
  • 30. ...opportunities, we conducted two surveys. First, we surveyed people with diverse backgrounds and determined factors that deter them from sequencing their genomes. Second, we interviewed researchers at many pharma and biotech companies and identified challenges that they face when working with genomic data. Figure 3. Survey results (sample size = 402). 4.1. Individuals: Only 2% of people who participated in our survey have genotyped or sequenced their genomes. Dilemma in Sequencing: Why people do not sequence: it is too expensive, and there are privacy concerns (control over the data). Willingness to pay for sequencing is low: most would pay no more than $250 (below cost).
  • 31. Blockchain-enabled genomic data sharing and analysis platform. Dennis Grishin, Kamal Obbad.
  • 32. The traditional business model of direct-to-consumer personal genomics companies is illustrated in Figure 4. People pay to sequence or genotype their genomes and receive analysis results. Personal genomics companies keep the genomic data and sell it to pharma and biotech companies that use the data for research and development. This model addresses none of the challenges detailed in the previous sections. Figure 4. Traditional business model of personal genomics companies. The Nebula model, shown in Figure 5, eliminates personal genomics companies as middlemen between data owners and data buyers. Instead, data owners can acquire their personal genomic data from Nebula sequencing facilities or other sources, join the Nebula blockchain-based, peer-to-peer network and directly connect with data buyers. As detailed in the following sections, this model reduces effective sequencing costs and enhances protection of personal genomic data. It also satisfies the needs of data buyers in regards to data availability, data acquisition logistics and resources needed for genomic big data. In the traditional model: Sequencing cost: the user has to pay for sequencing up front. Data ownership: which pharma company the data is sold to, and at what price, is decided by the middleman vendor, not by the user. Privacy: once the data is sold, the user cannot know how it is used. Incentives: the user receives no financial reward from the sale. These limits constrain sequence generation and data exchange.
  • 33. The traditional business model of direct-to-consumer personal genomics companies is                    illustrated in Figure 4. People pay to sequence or genotype their genomes and receive analysis                              results. Personal genomics companies keep the genomic data and sell it to pharma and biotech                              companies that use the data for research and development. This model addresses none of the                              challenges​ ​detailed​ ​in​ ​the​ ​previous​ ​sections.        Figure​ ​4.​ ​Traditional​ ​business​ ​model​ ​of​ ​personal​ ​genomics​ ​companies.    The Nebula model, shown in FIgure 5, eliminates personal genomics companies as                        middlemen between data owners and data buyers. Instead, data owners can acquire their                          personal genomic data from Nebula sequencing facilities or other sources, join the Nebula                          blockchain-based, peer-to-peer network and directly connect with data buyers. As detailed in the                          following sections, this model reduces effective sequencing costs and enhances protection of                        personal genomic data. It also satisfies the needs of data buyers in regards to data availability,                                data​ ​acquisition​ ​logistics​ ​and​ ​resources​ ​needed​ ​for​ ​genomic​ ​big​ ​data.            ​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​11      Figure​ ​5.​ ​The​ ​Nebula​ ​model.    5.1.1. Lower​ ​sequencing​ ​costs  Nebula reduces effective sequencing costs in two ways. First, individuals who have not                          yet sequenced their personal genomes can join the Nebula network and participate in paid                           
  • 36. 5.1.1. Lower sequencing costs. Nebula reduces effective sequencing costs in two ways. First, individuals who have not yet sequenced their personal genomes can join the Nebula network and participate in paid surveys. Thereby data buyers can identify individuals with phenotypes of interest, such as particular medical conditions, and offer to subsidize their genome sequencing costs. As sequencing technology advances and sequencing costs decrease, buyers will be increasingly able to fully pay for personal genome sequencing of many people. Second, individuals who acquired their personal genomic data from Nebula sequencing facilities or other personal genomics companies can join the Nebula network and profit from selling access to their data. Lowering sequencing costs will incentivize more people to sequence their genomes and result in growth of genomic data that will fuel medical research.
A blockchain-based genomic data platform:
• Sequencing cost: sequencing is performed without the user paying for it up front.
• Data ownership: the user, not a middleman, decides which pharma company to sell to and at what price.
• Privacy: the blockchain prevents tampering with the data and tracks how it is used.
• Incentive: users receive financial incentives in the form of Nebula tokens.
  • 37. A blockchain-based genomic data platform. Nebula tokens will be the currency of the Nebula network. The growth of the Nebula network will set in motion a circular flow of Nebula tokens, as illustrated in Figure 6B. Individuals will buy personal genome sequencing at Nebula sequencing facilities and pay with Nebula tokens, data buyers will use Nebula tokens to purchase access to genomic and phenotypic data, and Nebula Genomics will sell Nebula tokens to data buyers for fiat money. Figure 6. (A) Growth of the Nebula network. (B) Circular flow of Nebula tokens. 7. Personal genomics companies in comparison.
• Every data transaction is settled in a private token (the Nebula token).
• This decentralized approach addresses the sequencing-cost, privacy and incentive problems,
• and can therefore resolve the chicken-and-egg problem of the sequencing field.
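The circular token flow described on this slide can be sketched as a toy simulation. The party names, token amounts and prices below are illustrative assumptions for the sketch, not actual Nebula protocol parameters.

```python
# Hypothetical sketch of the circular Nebula-token flow: fiat enters via
# Nebula Genomics, tokens circulate buyer -> individual -> facility.

def run_cycle(balances, price_fiat=100, tokens=10):
    """One illustrative cycle of the token economy (toy amounts)."""
    # 1. A data buyer purchases tokens from Nebula Genomics for fiat money.
    balances["nebula_genomics"]["fiat"] += price_fiat
    balances["data_buyer"]["fiat"] -= price_fiat
    balances["nebula_genomics"]["tokens"] -= tokens
    balances["data_buyer"]["tokens"] += tokens
    # 2. The buyer pays tokens for access to an individual's genomic data.
    balances["data_buyer"]["tokens"] -= tokens
    balances["individual"]["tokens"] += tokens
    # 3. The individual spends tokens on sequencing at a Nebula facility.
    balances["individual"]["tokens"] -= tokens
    balances["nebula_genomics"]["tokens"] += tokens
    return balances

start = {
    "nebula_genomics": {"fiat": 0, "tokens": 100},
    "data_buyer": {"fiat": 500, "tokens": 0},
    "individual": {"fiat": 0, "tokens": 0},
}
end = run_cycle(start)
# Tokens only circulate: total supply is conserved across the cycle.
assert sum(p["tokens"] for p in end.values()) == 100
```

The point of the sketch is the closed loop: after a full cycle the tokens are back where they started, while fiat has flowed in from the data buyer.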
  • 38.
  • 39. Target Discovery / Analysis → Lead Discovery / Analysis → Clinical Trial → Post-Market Surveillance: Digital Healthcare in Drug Development
• Deep-learning-based lead discovery
• AI + pharmaceutical companies
  • 40. No choice but to bring AI into medicine
  • 41.
  • 42. Olga Russakovsky et al., Fig. 4: Random selection of images in the ILSVRC detection validation set. The images in the top 4 rows were taken from the ILSVRC2012 single-object localization validation set, and the images in the bottom 4 rows were collected from Flickr using scene-level queries. http://arxiv.org/pdf/1409.0575.pdf
  • 43. Main competition
• Classification: classify the objects in the image
• Localization: classify and locate a single object in the image
• Object detection: classify and locate all objects in the image
Olga Russakovsky et al., Fig. 7: Tasks in ILSVRC. The first column shows the ground truth labeling on an example image, and the next three show three sample outputs with the corresponding evaluation score. http://arxiv.org/pdf/1409.0575.pdf
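The classification task above is scored by top-5 error: a prediction counts as correct if the true label appears among the five highest-scoring classes. A minimal sketch of that metric, with toy scores rather than ILSVRC data:

```python
# Top-5 classification error, as used in the ILSVRC classification task.

def top5_error(predictions, labels):
    """predictions: one list of class scores per image; labels: true class ids."""
    wrong = 0
    for scores, label in zip(predictions, labels):
        # Indices of the five highest-scoring classes for this image.
        top5 = sorted(range(len(scores)), key=lambda c: scores[c], reverse=True)[:5]
        if label not in top5:
            wrong += 1
    return wrong / len(labels)

# Two toy images over 10 classes: the first true label (2) is inside the
# top five scores, the second (9) is not.
preds = [
    [0.1, 0.5, 0.2, 0.05, 0.05, 0.03, 0.03, 0.02, 0.01, 0.01],
    [0.9, 0.05, 0.02, 0.01, 0.01, 0.003, 0.003, 0.002, 0.001, 0.001],
]
print(top5_error(preds, [2, 9]))  # → 0.5
```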
  • 44. Performance of winning entries in the ILSVRC 2010-2015 competitions in each of the three tasks (image classification error, single-object localization error, and object detection average precision; chart axis values omitted). http://image-net.org/challenges/LSVRC/2015/results#loc
  • 45.
  • 46. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun, “Deep Residual Learning for Image Recognition”, 2015 How deep is deep?
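The answer to "how deep is deep" in He et al. is the residual connection: each block outputs F(x) + x, so extra layers can default to the identity and very deep stacks remain trainable. A tiny pure-Python sketch with toy weights (not the paper's actual architecture):

```python
# Minimal residual block: out = ReLU(W2 * ReLU(W1 * x) + x).

def relu(v):
    return [max(0.0, x) for x in v]

def linear(v, w):
    # w is a square weight matrix given as a list of rows.
    return [sum(wi * xi for wi, xi in zip(row, v)) for row in w]

def residual_block(x, w1, w2):
    out = relu(linear(x, w1))
    out = linear(out, w2)
    # The shortcut: add the block input back before the final activation.
    return relu([o + xi for o, xi in zip(out, x)])

# With all-zero weights the block reduces to the identity (for non-negative
# input), which is why adding residual layers cannot hurt in principle.
zero = [[0.0, 0.0], [0.0, 0.0]]
print(residual_block([1.0, 2.0], zero, zero))  # → [1.0, 2.0]
```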
  • 51. DeepFace: Closing the Gap to Human-Level Performance in Face Verification. Taigman, Y. et al. (2014), CVPR'14. Figure 2: Outline of the DeepFace architecture. A front-end of a single convolution-pooling-convolution filtering on the rectified input, followed by three locally-connected layers and two fully-connected layers. Colors illustrate feature maps produced at each layer. The net includes more than 120 million parameters, where more than 95% come from the local and fully connected layers. The early layers have very few parameters and merely expand the input into a set of simple local features. The subsequent layers (L4, L5 and L6) are instead locally connected: like a convolutional layer they apply a filter bank, but every location in the feature map learns a different set of filters, since different regions of an aligned face have different local statistics. The goal of training is to maximize the probability of the correct class (face id) by minimizing the cross-entropy loss for each training sample: if k is the index of the true label for a given input, the loss is L = −log p_k, minimized by computing the gradient of L w.r.t. the parameters. Human: 95% vs. DeepFace (Facebook): 97.35% recognition accuracy on the Labeled Faces in the Wild (LFW) dataset (13,233 images, 5,749 people).
  • 52. FaceNet: A Unified Embedding for Face Recognition and Clustering. Schroff, F. et al. (2015). Human: 95% vs. FaceNet (Google): 99.63% recognition accuracy on the Labeled Faces in the Wild (LFW) dataset (13,233 images, 5,749 people). Figure 6. LFW errors: all pairs of images that were incorrectly classified on LFW; only eight of the 13 errors shown are actual errors, the others are mislabeled in LFW. 5.7. Performance on YouTube Faces DB: using the average similarity of all pairs of the first one hundred frames that the face detector detects in each video gives a classification accuracy of 95.12%±0.39 (95.18% with the first one thousand frames). Compared to [17] (91.4%, also evaluating one hundred frames per video) this reduces the error rate by almost half; DeepId2+ [15] achieved 93.2%, so the error is reduced by 30%, comparable to the improvement on LFW. 5.8. Face Clustering: the compact embedding lends itself to clustering a user's personal photos into groups of people with the same identity. Figure 7 shows one cluster in a user's personal photo collection, generated using agglomerative clustering; it is a clear showcase of the incredible invariance to occlusion, lighting, pose and even age. 6. Summary: the paper provides a method to directly learn an embedding into a Euclidean space for face verification, which sets it apart from other methods [15, 17] that use the CNN bottleneck layer or require additional post-processing such as concatenation of multiple models and PCA, as well as SVM classification. End-to-end training both simplifies the setup and shows that directly optimizing a loss relevant to the task at hand improves performance.
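Verification in a Euclidean embedding space, as FaceNet does, reduces to thresholding a distance between two embedding vectors. In the sketch below the embeddings and the threshold are made-up toy values, not outputs of a trained model:

```python
# FaceNet-style verification: two faces are "the same person" when the
# squared Euclidean distance between their embeddings is below a threshold.

def squared_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def same_person(emb_a, emb_b, threshold=1.0):
    return squared_distance(emb_a, emb_b) < threshold

anchor   = [0.1, 0.9, 0.2]   # embedding of person A, photo 1 (toy values)
positive = [0.2, 0.8, 0.3]   # person A, photo 2 (toy values)
negative = [0.9, 0.1, 0.8]   # person B (toy values)

assert same_person(anchor, positive)       # close in embedding space
assert not same_person(anchor, negative)   # far in embedding space
```

The same distance, with clustering constraints added, is what drives the face-clustering result in Figure 7.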
  • 53. Targeting Ultimate Accuracy: Face Recognition via Deep Embedding. Jingtuo Liu (2015). Human: 95% vs. Baidu: 99.77% recognition accuracy on the Labeled Faces in the Wild (LFW) dataset (13,233 images, 5,749 people). Although several algorithms have achieved nearly perfect accuracy on the 6,000-pair verification task, on the more practical identification task the system achieves a 95.8% identification rate, relatively reducing the error rate by about 77%.
TABLE 3. Comparisons with other methods on several evaluation tasks (pair-wise accuracy % / rank-1 % / DIR % @ FAR=1% / verification % @ FAR=0.1% / open-set identification % @ rank=1, FAR=0.1%):
IDL Ensemble Model: 99.77 / 98.03 / 95.8 / 99.41 / 92.09
IDL Single Model: 99.68 / 97.60 / 94.12 / 99.11 / 89.08
FaceNet [12]: 99.63 / NA / NA / NA / NA
DeepID3 [9]: 99.53 / 96.00 / 81.40 / NA / NA
Face++ [2]: 99.50 / NA / NA / NA / NA
Facebook [15]: 98.37 / 82.5 / 61.9 / NA / NA
Learning from Scratch [4]: 97.73 / NA / NA / 80.26 / 28.90
HighDimLBP [10]: 95.17 / NA / NA / 41.66 / 18.07 (as reported in [4])
(Figure: the misclassified LFW pairs with their similarity scores, e.g. pairs #113, #202, #656, #1230, #1862, #2499, #2551, #2552, #2610.)
• Of the 6,000 face pairs, Baidu's AI misjudged only 14 pairs.
• On inspection, 5 of those 14 pairs turned out to have label errors, and the AI was actually correct (red boxes).
  • 54. Show and Tell: A Neural Image Caption Generator. Vinyals, O. et al. (2015), arXiv:1411.4555, with Samy Bengio (Google) and Dumitru Erhan (Google). Example caption: "A group of people shopping at an outdoor market. There are many vegetables at the fruit stand." Figure 1: NIC, the model, is based end-to-end on a neural network consisting of a vision deep CNN followed by a language-generating RNN.
  • 55. Show and Tell: A Neural Image Caption Generator Vinyals, O. et al. (2015). Show and Tell:A Neural Image Caption Generator, arXiv:1411.4555 Figure 5. A selection of evaluation results, grouped by human rating.
  • 57. Bone Age Assessment • M: 28 Classes • F: 20 Classes • Method: G.P. • Top3-95.28% (F) • Top3-81.55% (M)
  • 58.
  • 59. Synergy between human physicians and AI in bone-age reading. Accuracy (%): AI 69.5%, Doctor A 63%, Doctor B 49.5%; Doctor A + AI 72.5%, Doctor B + AI 57.5%. (AI vs. doctor, and doctor + AI; Doctor A: radiology fellow subspecialized in pediatric imaging, Doctor B: second-year radiology resident.) AJR Am J Roentgenol. 2017 Dec;209(6):1374-1380.
• Total number of patients: 200
• Doctor A: radiologist subspecialized in pediatric imaging (over 500 readings of experience)
• Doctor B: second-year radiology resident (one day of training in the reading method + 20 readings)
• Reference standard: consensus of two experienced pediatric radiologists (18 and 4 years of experience)
• AI: VUNO's deep learning for bone-age reading
Digital Healthcare Institute Director, Yoon Sup Choi, PhD yoonsup.choi@gmail.com
  • 60. Using AI in bone-age reading can also cut reading time. Total reading time: Doctor A: 188 min without AI → 154 min with AI (saving 18% of time); Doctor B: 180 min without AI → 108 min with AI (saving 40% of time).
• Total number of patients: 200
• Doctor A: radiologist subspecialized in pediatric imaging (over 500 readings of experience)
• Doctor B: second-year radiology resident (one day of training in the reading method + 20 readings)
• Reference standard: consensus of two experienced pediatric radiologists (18 and 4 years of experience)
• AI: VUNO's deep learning for bone-age reading
AJR Am J Roentgenol. 2017 Dec;209(6):1374-1380. Digital Healthcare Institute Director, Yoon Sup Choi, PhD yoonsup.choi@gmail.com
  • 61. Detection of Diabetic Retinopathy
  • 62. Diabetic retinopathy
• A representative complication of diabetes: it develops in 90% of patients who have had diabetes for 30 years or more
• Ophthalmologists photograph the fundus (the back of the eye) and read the images
• Diagnosis is made by assessing retinal microvascular proliferation, hemorrhages and exudates
  • 63. Training Set / Test Set
• A CNN was trained retrospectively on 128,175 fundus images
• Each image was graded 3-7 times by a panel of 54 US ophthalmologists
• The AI's readings were compared with those of 7-8 top ophthalmologists
• Test sets: EyePACS-1 (9,963 images) and Messidor-2 (1,748 images)
eFigure 2. Screenshot of the second screen of the grading tool, which asks graders to assess the image for DR, DME and other notable conditions or findings (a: fullscreen mode; b: reset to reload the image and its grading; c: comment box for other pathologies).
  • 64.
• AUC on EyePACS-1 and Messidor-2: 0.991 and 0.990
• Sensitivity and specificity on par with the 7-8 ophthalmologists
• F-score: 0.95 (vs. 0.91 for the human physicians)
Figure 2. Validation set performance for referable diabetic retinopathy: performance of the algorithm (ROC curve) and ophthalmologists (individual points) for the presence of referable diabetic retinopathy (moderate or worse diabetic retinopathy or referable diabetic macular edema) on (A) EyePACS-1 (8,788 fully gradable images; AUC 99.1%, 95% CI 98.8%-99.3%) and (B) Messidor-2 (1,745 fully gradable images; AUC 99.0%, 95% CI 98.6%-99.5%). In A, the high-sensitivity operating point had specificity 93.4% (95% CI, 92.8%-94.0%) and sensitivity 97.5% (95% CI, 95.8%-98.7%); the high-specificity operating point had specificity 98.1% (95% CI, 97.8%-98.5%) and sensitivity 90.3% (95% CI, 87.5%-92.7%). In B, the high-sensitivity operating point had specificity 93.9% (95% CI, 92.4%-95.3%) and sensitivity 96.1% (95% CI, 92.4%-98.3%); the high-specificity operating point had specificity 98.5% (95% CI, 97.7%-99.1%) and sensitivity 87.0% (95% CI, 81.1%-91.0%). There were 8 ophthalmologists who graded EyePACS-1 and 7 who graded Messidor-2. AUC indicates area under the receiver operating characteristic curve. Additional sensitivity analyses were conducted for several subcategories, and the effect of data set size on algorithm performance was examined and shown to plateau at around 60,000 images. (From: Accuracy of a Deep Learning Algorithm for Detection of Diabetic Retinopathy, Research, Original Investigation.)
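The high-sensitivity and high-specificity operating points quoted above come from thresholding the model's score at different cutoffs. A small sketch of how sensitivity and specificity are computed at an operating point, with toy scores and labels rather than the study's data:

```python
# Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP) at a score threshold.

def sens_spec(scores, labels, threshold):
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

scores = [0.95, 0.80, 0.60, 0.40, 0.20, 0.05]   # toy model outputs
labels = [1,    1,    0,    1,    0,    0   ]   # toy ground truth

# Lowering the threshold trades specificity for sensitivity, which is
# exactly the high-sensitivity vs. high-specificity operating-point choice.
print(sens_spec(scores, labels, 0.7))   # stricter cutoff: high specificity
print(sens_spec(scores, labels, 0.3))   # looser cutoff: high sensitivity
```

Sweeping the threshold over all values traces out the ROC curve whose area is the AUC reported above.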
  • 67. Nature (2017), doi:10.1038/nature21056. Dermatologist-level classification of skin cancer with deep neural networks. Andre Esteva*, Brett Kuprel*, Roberto A. Novoa, Justin Ko, Susan M. Swetter, Helen M. Blau & Sebastian Thrun. Skin cancer, the most common human malignancy, is primarily diagnosed visually, beginning with an initial clinical screening and followed potentially by dermoscopic analysis, a biopsy and histopathological examination. Automated classification of skin lesions using images is a challenging task owing to the fine-grained variability in the appearance of skin lesions. Deep convolutional neural networks (CNNs) show potential for general and highly variable tasks across many fine-grained object categories. Here we demonstrate classification of skin lesions using a single CNN, trained end-to-end from images directly, using only pixels and disease labels as inputs. We train a CNN using a dataset of 129,450 clinical images (two orders of magnitude larger than previous datasets) consisting of 2,032 different diseases. We test its performance against 21 board-certified dermatologists on biopsy-proven clinical images with two critical binary classification use cases: keratinocyte carcinomas versus benign seborrheic keratoses; and malignant melanomas versus benign nevi. The first case represents the identification of the most common cancers, the second represents the identification of the deadliest skin cancer. The CNN achieves performance on par with all tested experts across both tasks, demonstrating an artificial intelligence capable of classifying skin cancer with a level of competence comparable to dermatologists. Outfitted with deep neural networks, mobile devices can potentially extend the reach of dermatologists outside of the clinic. It is projected that 6.3 billion smartphone subscriptions will exist by the year 2021 and can therefore potentially provide low-cost universal access to vital diagnostic care. There are 5.4 million new cases of skin cancer in the United States every year. One in five Americans will be diagnosed with a cutaneous malignancy in their lifetime. Although melanomas represent fewer than 5% of all skin cancers in the United States, they account for approximately 75% of all skin-cancer-related deaths, and are responsible for over 10,000 deaths annually in the United States alone. Early detection is critical, as the estimated 5-year survival rate for melanoma drops from over 99% if detected in its earliest stages to about 14% if detected in its latest stages. We developed a computational method which may allow medical practitioners and patients to proactively track skin lesions and detect cancer earlier. By creating a novel disease taxonomy, and a disease-partitioning algorithm that maps individual diseases into training classes, we are able to build a deep learning system for automated dermatology. Previous work in dermatological computer-aided classification has lacked the generalization capability of medical practitioners owing to insufficient data and a focus on standardized tasks such as dermoscopy and histological image classification. Dermoscopy images are acquired via a specialized instrument and histological images are acquired via invasive biopsy and microscopy, whereby both modalities yield highly standardized images. Photographic images (for example, smartphone images) exhibit variability in factors such as zoom, angle and lighting, making classification substantially more challenging. We overcome this challenge by using a data-driven approach: 1.41 million pre-training and training images make classification robust to photographic variability. Many previous techniques require extensive preprocessing, lesion segmentation and extraction of domain-specific visual features before classification.
By contrast, our system requires no hand-crafted features; it is trained end-to-end directly from image labels and raw pixels, with a single network for both photographic and dermoscopic images. The existing body of work uses small datasets of typically less than a thousand images of skin lesions, which, as a result, do not generalize well to new images. We demonstrate generalizable classification with a new dermatologist-labelled dataset of 129,450 clinical images, including 3,374 dermoscopy images. Deep learning algorithms, powered by advances in computation and very large datasets, have recently been shown to exceed human performance in visual tasks such as playing Atari games, strategic board games like Go and object recognition. In this paper we outline the development of a CNN that matches the performance of dermatologists at three key diagnostic tasks: melanoma classification, melanoma classification using dermoscopy and carcinoma classification. We restrict the comparisons to image-based classification. We utilize a GoogleNet Inception v3 CNN architecture that was pre-trained on approximately 1.28 million images (1,000 object categories) from the 2014 ImageNet Large Scale Visual Recognition Challenge, and train it on our dataset using transfer learning. Figure 1 shows the working system. The CNN is trained using 757 disease classes. Our dataset is composed of dermatologist-labelled images organized in a tree-structured taxonomy of 2,032 diseases, in which the individual diseases form the leaf nodes. The images come from 18 different clinician-curated, open-access online repositories, as well as from clinical data from Stanford University Medical Center. Figure 2a shows a subset of the full taxonomy, which has been organized clinically and visually by medical experts. We split our dataset into 127,463 training and validation images and 1,942 biopsy-labelled test images.
To take advantage of fine-grained information contained within the taxonomy structure, we develop an algorithm (Extended Data Table 1) to partition diseases into fine-grained training classes (for example, amelanotic melanoma and acrolentiginous melanoma). During inference, the CNN outputs a probability distribution over these fine classes. To recover the probabilities for coarser-level classes of interest (for example, melanoma) we sum the probabilities of their descendants (see Methods and Extended Data Fig. 1 for more details). We validate the effectiveness of the algorithm in two ways, using nine-fold cross-validation. First, we validate the algorithm using a three-class disease partition: the first-level nodes of the taxonomy, which represent benign lesions, malignant lesions and non-neoplastic lesions. (Author affiliations: Departments of Electrical Engineering, Dermatology, Pathology and Computer Science and the Baxter Laboratory for Stem Cell Biology, Stanford University; Dermatology Service, Veterans Affairs Palo Alto Health Care System. *These authors contributed equally.)
  • 68. On this task, the CNN achieves 72.1±0.9% (mean±s.d.) overall accuracy (the average of individual inference class accuracies), while two dermatologists attain 65.56% and 66.0% accuracy on a subset of the validation set. Second, we validate the algorithm using a nine-class disease partition (the second-level nodes), so that the diseases of each class have similar medical treatment plans. The same CNN is used for two trials, one using standard images and the other using dermoscopy images, which reflect the two steps that a dermatologist may use to obtain a clinical impression. Figure 2b shows a few example images, demonstrating the difficulty of distinguishing between malignant and benign lesions, which share many visual features. The comparison metrics are sensitivity and specificity. Figure 1: Deep CNN layout. Data flow is from left to right: an image of a skin lesion (for example, melanoma) is sequentially warped into a probability distribution over clinical classes of skin disease using a GoogleNet Inception v3 architecture pretrained on the ImageNet dataset (1.28 million images, 1,000 generic object classes) and fine-tuned on a dataset of 129,450 skin lesions comprising 2,032 different diseases. The 757 training classes are defined using a novel taxonomy of skin disease and a partitioning algorithm that maps diseases into training classes (for example, acrolentiginous melanoma, amelanotic melanoma, lentigo melanoma). Inference classes are more general and are composed of one or more training classes (for example, malignant melanocytic lesion, the class of melanomas); the probability of an inference class is computed by summing the probabilities of its training classes (see Methods). Example output: skin lesion image → 92% malignant melanocytic lesion, 8% benign melanocytic lesion. Inception v3 CNN architecture reproduced from https://research.googleblog.com/2016/03/train-your-own-image-classifier-with.html
• A dataset of 129,450 skin-lesion images was built in-house
• 18 US dermatologists curated the data
• A CNN (Inception v3) was trained on the images
• The AI's readings were compared with those of 21 dermatologists on three tasks:
• keratinocyte carcinoma vs. benign seborrheic keratosis
• malignant melanoma vs. benign lesions (standard images)
• malignant melanoma vs. benign lesions (dermoscopy images)
  • 69. Skin cancer classification performance of the CNN and dermatologists. (Figure: ROC-style sensitivity-specificity curves of the algorithm, with each dermatologist's operating point overlaid, across six test sets: melanoma 130 images, melanoma 225 images, melanoma 111 dermoscopy images, melanoma 1,010 dermoscopy images, carcinoma 135 images, and carcinoma 707 images; the algorithm's AUCs range from 0.91 to 0.96, with 21-25 dermatologists per panel.)
• A considerable number of the 21 dermatologists were less accurate than the AI
• The dermatologists' average performance was also below that of the AI
  • 70. Skin Cancer Image Classification (TensorFlow Dev Summit 2017) Skin cancer classification performance of the CNN and dermatologists. https://www.youtube.com/watch?v=toK1OSLep3s&t=419s
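The transfer-learning recipe the slides describe (a pretrained Inception v3 body with a new head fine-tuned on skin-lesion labels) can be sketched framework-free. Everything below is a toy stand-in: the frozen "feature extractor" and the training data are invented for illustration; only the small classification head is trained, and the extractor is never updated.

```python
# Toy transfer learning: frozen feature extractor + trainable logistic head.
import math

def frozen_features(x):
    # Stand-in for the pretrained CNN body: a fixed, untrainable mapping.
    return [x[0] + x[1], x[0] - x[1]]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_head(data, lr=0.5, epochs=200):
    """SGD on log-loss for a logistic-regression head over frozen features."""
    w, b = [0.0, 0.0], 0.0            # only the head's parameters change
    for _ in range(epochs):
        for x, y in data:
            f = frozen_features(x)
            p = sigmoid(w[0] * f[0] + w[1] * f[1] + b)
            g = p - y                  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b

def predict(x, w, b):
    f = frozen_features(x)
    return sigmoid(w[0] * f[0] + w[1] * f[1] + b)

# Tiny invented "benign (0) vs. malignant (1)" training set.
data = [([0.0, 0.0], 0), ([1.0, 1.0], 1), ([0.2, 0.1], 0), ([0.9, 0.8], 1)]
w, b = train_head(data)
print(predict([1.0, 0.9], w, b) > 0.5)   # unseen malignant-like example
print(predict([0.1, 0.1], w, b) < 0.5)   # unseen benign-like example
```

The design point is the same as in the paper's setup: the pretrained features carry over, so only a small head has to be learned from the (comparatively scarce) labeled medical images.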
  • 71. WSJ, June 2017
• Multinational pharmaceutical companies are making various attempts to apply AI to drug development
• Recent AI approaches differ from the virtual screening and docking methods of the past
  • 72. DeepVariant: Highly Accurate Genomes With Deep Neural Networks. https://research.googleblog.com/2017/12/deepvariant-highly-accurate-genomes.html
• In 2016, Verily won the SNP performance category of the PrecisionFDA challenge
• The improved algorithm has since been released under the name DeepVariant
• Aligned reads are themselves rendered as 'images' and learned with a CNN
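The idea of treating aligned reads as an image can be sketched as follows. Note this is a drastic simplification: the single match/mismatch channel and the toy reference below are illustrative, not DeepVariant's actual multi-channel pileup encoding.

```python
# Rasterize aligned reads around a site into an image-like tensor:
# rows = reads, columns = reference positions, values = match status.

REF = "ACGTACGT"  # toy reference sequence

def pileup_image(reads):
    """reads: list of (start_position, sequence). Returns one row per read:
    1 where the read base matches the reference, 0 on mismatch,
    -1 where the read does not cover the position."""
    image = []
    for start, seq in reads:
        row = [-1] * len(REF)
        for i, base in enumerate(seq):
            pos = start + i
            if 0 <= pos < len(REF):
                row[pos] = 1 if base == REF[pos] else 0
        image.append(row)
    return image

img = pileup_image([(0, "ACGA"), (2, "GTAC"), (3, "TAGT")])
for row in img:
    print(row)
```

A CNN can then classify such tensors (in DeepVariant's case, into genotype calls at the candidate site), exactly as it would classify ordinary images.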
  • 73. Protein-compound complex structure: binding, or non-binding? To overcome the limitations of directly visualizing filters in order to understand their specialization, the authors apply filters to input data and examine the locations where they maximally fire; using this technique they were able to map filters to chemical functions. For example, Figure 5 illustrates the 3D locations at which a particular filter from the first convolutional layer fires. Visual inspection of the locations at which that filter is active reveals that it specializes as a sulfonyl/sulfonamide detector. This demonstrates the ability of the model to learn complex chemical features from simpler ones; the filter has inferred a meaningful spatial arrangement of input atom types without any chemical prior knowledge. Figure 5: Sulfonyl/sulfonamide detection with autonomously trained convolutional filters.
  • 74.
  • 75. AtomNet: A Deep Convolutional Neural Network for Bioactivity Prediction in Structure-based Drug Discovery. Izhar Wallach, Michael Dzamba, Abraham Heifets (Atomwise, Inc.). arXiv:1510.02855. Abstract: Deep convolutional neural networks comprise a subclass of deep neural networks (DNN) with a constrained architecture that leverages the spatial and temporal structure of the domain they model. Convolutional networks achieve the best predictive performance in areas such as speech and image recognition by hierarchically composing simple local features into complex models. Although DNNs have been used in drug discovery for QSAR and ligand-based bioactivity predictions, none of these models have benefited from this powerful convolutional architecture. This paper introduces AtomNet, the first structure-based, deep convolutional neural network designed to predict the bioactivity of small molecules for drug discovery applications. We demonstrate how to apply the convolutional concepts of feature locality and hierarchical composition to the modeling of bioactivity and chemical interactions. In further contrast to existing DNN techniques, we show that AtomNet's application of local convolutional filters to structural target information successfully predicts new active molecules for targets with no previously known modulators. Finally, we show that AtomNet outperforms previous docking approaches on a diverse set of benchmarks by a large margin, achieving an AUC greater than 0.9 on 57.8% of the targets in the DUDE benchmark. 1. Introduction: Fundamentally, biological systems operate through the physical interaction of molecules. The ability to determine when molecular binding occurs is therefore critical for the discovery of new medicines and for furthering our understanding of biology. Unfortunately, despite thirty years of computational efforts, computer tools remain too inaccurate for routine binding prediction, and physical experiments remain the state of the art for binding determination. The ability to accurately predict molecular binding would reduce the time-to-discovery of new treatments, help eliminate toxic molecules early in development, and guide medicinal chemistry efforts [1, 2]. In this paper, we introduce a new predictive architecture, AtomNet, to help address these challenges. AtomNet is novel in two regards: it is the first deep convolutional neural network for molecular binding affinity prediction, and it is the first deep learning system that incorporates structural information about the target to make its predictions. Deep convolutional neural networks (DCNN) are currently the best performing predictive models for speech and vision [3, 4, 5, 6]. A DCNN is a class of deep neural network that constrains its model architecture to leverage the spatial and temporal structure of its domain. For example, a low-level image feature, such as an edge, can be described within a small spatially-proximate patch of pixels. Such a feature detector can share evidence across the entire receptive field by "tying the weights" of the detector neurons, as the recognition of the edge does not depend on where it is found within the image.
  • 76. AtomNet: A Deep Convolutional Neural Network for Bioactivity Prediction in Structure-based Drug Discovery Izhar Wallach Atomwise, Inc. izhar@atomwise.com Michael Dzamba Atomwise, Inc. misko@atomwise.com Abraham Heifets Atomwise, Inc. abe@atomwise.com Abstract Deep convolutional neural networks comprise a subclass of deep neural networks (DNN) with a constrained architecture that leverages the spatial and temporal structure of the domain they model. Convolutional networks achieve the best pre- dictive performance in areas such as speech and image recognition by hierarchi- cally composing simple local features into complex models. Although DNNs have been used in drug discovery for QSAR and ligand-based bioactivity predictions, none of these models have benefited from this powerful convolutional architec- ture. This paper introduces AtomNet, the first structure-based, deep convolutional neural network designed to predict the bioactivity of small molecules for drug dis- covery applications. We demonstrate how to apply the convolutional concepts of feature locality and hierarchical composition to the modeling of bioactivity and chemical interactions. In further contrast to existing DNN techniques, we show that AtomNet’s application of local convolutional filters to structural target infor- mation successfully predicts new active molecules for targets with no previously known modulators. Finally, we show that AtomNet outperforms previous docking approaches on a diverse set of benchmarks by a large margin, achieving an AUC greater than 0.9 on 57.8% of the targets in the DUDE benchmark. 1 Introduction Fundamentally, biological systems operate through the physical interaction of molecules. The ability to determine when molecular binding occurs is therefore critical for the discovery of new medicines and for furthering of our understanding of biology. 
Table 3: The number of targets on which AtomNet and Smina exceed given adjusted-logAUC thresholds (Smina row: 123, 35, 5, 0, 0). For example, on the ChEMBL-20 PMD set, AtomNet achieves an adjusted-logAUC of 0.3 or better for 27 targets (out of 50 possible targets). ChEMBL-20 PMD contains 50 targets, DUDE-30 contains 30 targets, DUDE-102 contains 102 targets, and ChEMBL-20 inactives contains 149 targets. To overcome these limitations we take an indirect approach.
Instead of directly visualizing filters in order to understand their specialization, we apply filters to input data and examine the locations where they maximally fire. Using this technique we were able to map filters to chemical functions. For example, Figure 5 illustrates the 3D locations at which a particular filter from our first convolutional layer fires. Visual inspection of the locations at which that filter is active reveals that this filter specializes as a sulfonyl/sulfonamide detector. This demonstrates the ability of the model to learn complex chemical features from simpler ones. In this case, the filter has inferred a meaningful spatial arrangement of input atom types without any chemical prior knowledge. Figure 5: Sulfonyl/sulfonamide detection with autonomously trained convolutional filters. • Trains a deep CNN on known 3D protein-ligand binding structures • Predicts whether a protein and a ligand bind, without explicitly computing chemical bonding • Predicted binding more accurately than conventional structure-based methods
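The recipe summarized above, rasterizing a protein-ligand complex into a 3D grid of atom-type channels and applying 3D convolutions, can be sketched minimally as follows. This is an illustration only, not the published AtomNet architecture: the grid size, the four atom channels, and the single hand-made filter are assumptions.

```python
import numpy as np

def voxelize(atoms, grid=20, resolution=1.0, n_channels=4):
    """Rasterize atoms into a (n_channels, grid, grid, grid) occupancy box.

    `atoms` is a list of (x, y, z, channel) tuples, coordinates in Angstroms
    relative to the binding-site center; atoms outside the box are ignored.
    Nearest-voxel occupancy and the channel scheme (e.g. 0=C, 1=N, 2=O, 3=S)
    are simplifying assumptions.
    """
    box = np.zeros((n_channels, grid, grid, grid), dtype=np.float32)
    half = grid * resolution / 2.0
    for x, y, z, c in atoms:
        i, j, k = (int((v + half) / resolution) for v in (x, y, z))
        if 0 <= i < grid and 0 <= j < grid and 0 <= k < grid:
            box[c, i, j, k] = 1.0
    return box

def conv3d_single(box, kernel):
    """Valid 3D convolution of one multi-channel volume with one filter.

    kernel shape: (n_channels, k, k, k). A real network stacks many learned
    filters with nonlinearities and pooling; this shows only the sliding
    local dot product that makes the detector translation-invariant.
    """
    n_ch, k, _, _ = kernel.shape
    g = box.shape[1]
    out = np.zeros((g - k + 1,) * 3, dtype=np.float32)
    for i in range(g - k + 1):
        for j in range(g - k + 1):
            for l in range(g - k + 1):
                patch = box[:, i:i+k, j:j+k, l:l+k]
                out[i, j, l] = float((patch * kernel).sum())
    return out

# A toy "complex": a carbon and a nitrogen near the pocket center.
box = voxelize([(0.0, 0.0, 0.0, 0), (1.0, 0.0, 0.0, 1)])
fmap = conv3d_single(box, np.ones((4, 3, 3, 3), dtype=np.float32))
print(box.shape, fmap.shape)  # (4, 20, 20, 20) (18, 18, 18)
```

The all-ones kernel simply counts atoms in each 3x3x3 neighborhood; a trained filter would instead weight particular atom-type arrangements, which is how the sulfonyl detector in Figure 5 arises.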
  • 77.
  • 78.
  • 79. AI-powered drug discovery captures pharma interest (Nature Biotechnology 35(7), 604, July 2017). A drug-hunting deal inked last month, between Numerate, of San Bruno, California, and Takeda Pharmaceutical to use Numerate’s artificial intelligence (AI) suite to discover small-molecule therapies for oncology, gastroenterology and central nervous system disorders, is the latest in a growing number of research alliances involving AI-powered computational drug development firms. Also last month, GNS Healthcare of Cambridge, Massachusetts announced a deal with Roche subsidiary Genentech of South San Francisco, California to use GNS’s AI platform to better understand what affects the efficacy of known therapies in oncology. In May, Exscientia of Dundee, Scotland, signed a deal with Paris-based Sanofi that includes up to €250 ($280) million in milestone payments. Exscientia will provide the compound design and Sanofi the chemical synthesis of new drugs for diabetes and cardiovascular disease. The trend indicates that the pharma industry’s long-running skepticism about AI is softening into genuine interest, driven by AI’s promise to address the industry’s principal pain point: clinical failure rates. The industry’s willingness to consider AI approaches reflects the reality that drug discovery is laborious, time consuming and not particularly effective. A two-decade-long downward trend in clinical success rates has only recently improved (Nat. Rev. Drug Disc. 15, 379–380, 2016). Still, today, only about one in ten drugs that enter phase 1 clinical trials reaches patients. Half those failures are due to a lack of efficacy, says Jackie Hunter, CEO of BenevolentBio, a division of BenevolentAI of London. “That tells you we’re not picking the right targets,” she says. “Even a 5 or 10% reduction in efficacy failure would be amazing.” Hunter’s views on AI in drug discovery are featured in Ernst & Young’s Biotechnology Report 2017 released last month.
Companies that have been watching AI from the sidelines are now jumping in. The best-known machine-learning model for drug discovery is perhaps IBM’s Watson. IBM signed a deal in December 2016 with Pfizer to aid the pharma giant’s immuno-oncology drug discovery efforts, adding to a string of previous deals in the biopharma space (Nat. Biotechnol. 33, 1219–1220, 2015). IBM’s Watson hunts for drugs by sorting through vast amounts of textual data to provide quick analyses, and tests hypotheses by sorting through massive amounts of laboratory data, clinical reports and scientific publications. BenevolentAI takes a similar approach with algorithms that mine the research literature and proprietary research databases. The explosion of biomedical data has driven much of industry’s interest in AI (Table 1). The confluence of ever-increasing computational horsepower and the proliferation of large data sets has prompted scientists to seek learning algorithms that can help them navigate such massive volumes of information. A lot of the excitement about AI in drug discovery has spilled over from other fields. Machine vision, which allows, among other things, self-driving cars, and language processing have given rise to sophisticated multilevel artificial neural networks known as deep-learning algorithms that can be used to model biological processes from assay data as well as textual data. In the past people didn’t have enough data to properly train deep-learning algorithms, says Mark Gerstein, a biomedical informatics professor at Yale University in New Haven, Connecticut. Now researchers have been able to build massive databases and harness them with these algorithms, he says. “I think that excitement is justified.” Numerate is one of a growing number of AI companies founded to take advantage of that data onslaught as applied to drug discovery. “We apply AI to chemical design at every stage,” says Guido Lanza, Numerate’s CEO.
It will provide Tokyo-based Takeda with candidates for clinical trials by virtual compound screenings against targets, designing and optimizing compounds, and modeling absorption, distribution, metabolism and excretion, and toxicity. The agreement includes undisclosed milestone payments and royalties. Academic laboratories are also embracing AI tools. In April, Atomwise of San Francisco launched its Artificial Intelligence Molecular Screen awards program, which will deliver 72 potentially therapeutic compounds to as many as 100 university research labs at no charge. Atomwise is a University of Toronto spinout that in 2015 secured an alliance with Merck of Kenilworth, New Jersey. For this new endeavor, it will screen 10 million molecules using its AtomNet platform to provide each lab with 72 compounds aimed at a specific target of the laboratory’s choosing. The Japanese government launched in 2016 a research consortium centered on using Japan’s K supercomputer to ramp up drug discovery efficiency across dozens of local companies and institutions. Among those involved are Takeda and tech giants Fujitsu of Tokyo, Japan, and NEC, also of Tokyo, as well as Kyoto University Hospital and Riken, Japan’s National Research and Development Institute, which will provide clinical data. Deep learning is starting to gain acolytes in the drug discovery space. (Photo: KTSDESIGN/Science Photo Library)
  • 80. Genomics data analytics startup WuXi NextCode Genomics (Shanghai; Cambridge, Massachusetts; and Reykjavík, Iceland) collaborated with researchers at Yale University on a study that used the company’s deep-learning algorithm to identify a key mechanism in blood vessel growth. The result could aid drug discovery efforts aimed at inhibiting blood vessel growth in tumors (Nature doi:10.1038/nature22322, 2017).
In the US, during the Obama administration, industry and academia joined forces to apply AI to accelerate drug discovery as part of the Cancer Moonshot initiative (Nat. Biotechnol. 34, 119, 2016). The Accelerating Therapeutics for Opportunities in Medicine (ATOM) consortium, launched in January 2016, marries computational and experimental approaches, with Brentford, UK-based GlaxoSmithKline participating with Lawrence Livermore National Laboratory in Livermore, California, and the US National Cancer Institute. The computational portion of the process, which includes deep-learning and other AI algorithms, will be tested in the first two years. In the third year, “we hope to start on day one with a disease hypothesis and on day 365 to deliver a drug candidate,” says Martha Head, GlaxoSmithKline’s head, insights from data. Table 1 Selected collaborations in the AI-drug discovery space (AI company/location | technology | announced partner/location | indication(s) | deal date): • Atomwise | deep-learning screening from molecular structure data | Merck | malaria | 2015 • BenevolentAI | deep learning and natural language processing of research literature | Janssen Pharmaceutica (Johnson & Johnson), Beerse, Belgium | multiple | November 8, 2016 • Berg, Framingham, Massachusetts | deep-learning screening of biomarkers from patient data | none | multiple | N/A • Exscientia | bispecific compounds via Bayesian models of ligand activity from drug discovery data | Sanofi | metabolic diseases | May 9, 2017 • GNS Healthcare | Bayesian probabilistic inference for investigating efficacy | Genentech | oncology | June 19, 2017 • Insilico Medicine | deep-learning screening from drug and disease databases | none | age-related diseases | N/A • Numerate | deep learning from phenotypic data | Takeda | oncology, gastroenterology and central nervous system disorders | June 12, 2017 • Recursion, Salt Lake City, Utah | cellular phenotyping via image analysis | Sanofi | rare genetic diseases | April 25, 2016 • twoXAR, Palo Alto, California | deep-learning screening from literature and assay data | Santen Pharmaceuticals, Osaka, Japan | glaucoma | February 23, 2017. (N/A, none announced. Source: companies’ websites.)
  • 81. • Currently able to screen 10M compounds per day • 10,000x faster than physical experiments and 100x faster than ultra-HTS • Also used to characterize toxicity, side effects, mechanism of action, and efficacy • Projects under way with 10 pharmaceutical companies, including Merck, and 40 research institutions, including Harvard • Target diseases: Alzheimer's disease, bacterial infections, antibiotics, nephrology, ophthalmology, immuno-oncology, metabolic and childhood liver diseases, etc.
  • 82. Standigm® (Standard + Next Paradigm; "giant's shoulder" + artificial intelligence). Gangnam, Seoul; founded in May 2015. www.standigm.com
  • 83. Standigm AI for drug repositioning: • New indication prediction (Compound | Disease): the deep learning algorithm trained with millions of drug-perturbed gene expression responses on various cell lines (LINCS L1000) • Prediction interpretation (Compound | Pathways | Disease): the massive biological knowledge graph database integrated automatically from various drug-disease-target resources • Target protein prioritization (Compound | Binding Targets on Pathways | Disease): the drug structure embedded machine learning algorithm for binding affinity prediction
  • 84.
  • 85. Outcomes: Standigm generated tens of drug candidates for diverse diseases. The candidates have been experimentally validated with our collaboration partners. • Cancer (with CrystalGenomics, Inc.): toward lead optimization (2 hits out of 10 initial candidates) • Parkinson's disease (with Ajou University, College of Pharmacy): being validated in an animal model (1 hit out of 7 initial candidates) • Autism (with Korea Institute of Science and Technology): being validated in an animal model (10 initial candidates) • Fatty liver disease (in-house project): validated with a gut-liver on a chip (7 hits out of 7 initial candidates) • Mitochondrial diseases (in-house project): establishing experimental plans with domain experts (3 initial candidates) • Small projects with a Japanese pharmaceutical company
  • 86. Collaboration: new indication prediction, prediction interpretation, target protein prioritization. Standigm primarily aims at exclusive partnerships with its collaborators (basic pipeline; additional customized modules can be developed to pursue the best results upon discussion). The total service fee depends on: • the number of compounds • the range of the selected disease area • the marketability of the selected disease area. The rate of up-front payment depends on: • ownership of the developed product • ownership of the information produced during the collaboration (exclusive to the collaborator, or joint ownership). * L1000 profiling service fee by Genometry is not included.
  • 87. Digital Healthcare in Drug Development: Target Discovery | Analysis | Lead Discovery | Analysis | Clinical Trial | Post Market Surveillance. • Patient recruitment • Data measurement: sensors & wearables • Digital phenotyping • Medication adherence
  • 88.
  • 89. Medical applications of AI: • Analysis of complex medical data and derivation of insights • Analysis/interpretation of medical imaging and pathology data • Monitoring of continuous data for prevention/prediction
  • 90.
  • 91.
  • 92.
  • 93.
  • 94.
  • 95. Annals of Oncology (2016) 27 (suppl_9): ix179-ix180. 10.1093/annonc/mdw601. Validation study to assess performance of IBM cognitive computing system Watson for Oncology with Manipal multidisciplinary tumour board for 1000 consecutive cases: an Indian experience. • MMDT (Manipal multidisciplinary tumour board) treatment recommendations and data for 1000 cases of 4 different cancers, breast (638), colon (126), rectum (124) and lung (112), treated in the last 3 years were collected. • Of the treatment recommendations given by the MMDT, WFO classified 50% as REC (recommended), 28% as FC (for consideration), and 17% as NREC (not recommended). • Nearly 80% of the recommendations fell in the WFO REC and FC groups. • 5% of the treatments provided by the MMDT were not available in WFO. • The degree of concordance varied depending on the type of cancer. • WFO-REC concordance was highest in rectum (85%) and lowest in lung (17.8%); it was high with TNBC (67.9%) and lower with HER2-negative (35%). • WFO took a median of 40 seconds to capture, analyze and give the treatment recommendation (vs. a median of 15 minutes for the MMDT).
  • 96. WFO in ASCO 2017 • Early experience with the IBM WFO cognitive computing system for lung and colorectal cancer treatment (Manipal Hospitals) • Over the past 3 years: lung cancer (112), colon cancer (126), rectum cancer (124) • Concordance for lung cancer: localized 88.9%, metastatic 97.9% • colon cancer: localized 85.5%, metastatic 76.6% • rectum cancer: localized 96.8%, metastatic 80.6%. Performance of WFO in India, 2017 ASCO Annual Meeting, J Clin Oncol 35, 2017 (suppl; abstr 8527)
  • 97. Empowering the Oncology Community for Cancer Care: Genomics, Oncology, Clinical Trial Matching. Watson Health's oncology clients span more than 35 hospital systems. (Andrew Norden, KOTRA Conference, March 2017, "The Future of Health is Cognitive")
  • 98. IBM Watson Health: Watson for Clinical Trial Matching (CTM). Current challenges: searching across eligibility criteria of clinical trials is time consuming and labor intensive; fewer than 5% of adult cancer patients participate in clinical trials (per the National Comprehensive Cancer Network); 37% of sites fail to meet minimum enrollment targets, and 11% of sites fail to enroll a single patient (http://csdd.tufts.edu/files/uploads/02_-_jan_15,_2013_-_recruitment-retention.pdf). The Watson solution: • uses structured and unstructured patient data to quickly check eligibility across relevant clinical trials • provides eligible trial considerations ranked by relevance • increases speed to qualify patients. Clinical investigators (opportunity): • trials to patient: perform feasibility analysis for a trial • identify sites with the most potential for patient enrollment • optimize inclusion/exclusion criteria in protocols; faster, more efficient recruitment strategies, better designed protocols. Point of care (offering): • patient to trials: quickly find the right trial that a patient might be eligible for among 100s of open trials available; improve patient care quality and consistency, increased efficiency. (© 2015 International Business Machines Corporation)
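At its core, the matching task filters a patient record against per-trial inclusion/exclusion criteria and ranks the surviving trials by relevance. A toy rule-engine sketch of that idea follows; the criteria fields and trial names are hypothetical, and a real system such as Watson CTM also extracts criteria from unstructured notes, which is omitted here.

```python
from dataclasses import dataclass, field

@dataclass
class Trial:
    """Hypothetical structured eligibility criteria for one trial."""
    name: str
    min_age: int = 0
    max_age: int = 200
    diagnoses: set = field(default_factory=set)   # required diagnoses (any-of)
    exclusions: set = field(default_factory=set)  # disqualifying conditions

def match_trials(patient, trials):
    """Return names of trials the patient is eligible for, ranked by how
    many diagnosis criteria they satisfy (a crude relevance proxy)."""
    eligible = []
    for t in trials:
        if not (t.min_age <= patient["age"] <= t.max_age):
            continue                          # fails age criterion
        if t.exclusions & patient["conditions"]:
            continue                          # hits an exclusion criterion
        overlap = len(t.diagnoses & patient["conditions"])
        if t.diagnoses and overlap == 0:
            continue                          # no required diagnosis present
        eligible.append((overlap, t.name))
    return [name for _, name in sorted(eligible, reverse=True)]

patient = {"age": 54, "conditions": {"breast cancer", "her2+"}}
trials = [
    Trial("T-01", min_age=18, diagnoses={"breast cancer"}),
    Trial("T-02", min_age=18, diagnoses={"breast cancer", "her2+"}),
    Trial("T-03", diagnoses={"lung cancer"}),
    Trial("T-04", diagnoses={"breast cancer"}, exclusions={"her2+"}),
]
print(match_trials(patient, trials))  # ['T-02', 'T-01']
```

The ranking step is what turns a binary screen into the "considerations ranked by relevance" described above; the 94% auto-exclusion figure in the HOG pilot corresponds to the `continue` branches.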
  • 99. •A 16-week study of 2,620 lung and breast cancer patients at HOG (Highlands Oncology Group) •90 patients were screened against 3 Novartis breast cancer trial protocols •Clinical trial coordinator: 1 hour 50 minutes •Watson CTM: 24 minutes (a 78% time reduction) •Watson CTM automatically screened out the 94% of patients who did not meet the trial criteria
  • 100. •Mayo Clinic reported an 80% increase in enrollment in its breast cancer drug clinical trials
  • 101. Digital Healthcare in Drug Development: Target Discovery | Analysis | Lead Discovery | Analysis | Clinical Trial | Post Market Surveillance. • Patient recruitment • Data measurement: sensors & wearables • Digital phenotyping • Medication adherence
  • 102. Fitbit
  • 103.
  • 105. https://clinicaltrials.gov/ct2/results?term=fitbit&Search=Search •Although it is not a medical device, Fitbit is already widely used in clinical research •Clinical researchers adopted it on their own, without encouragement from Fitbit •The number of clinical studies using Fitbit keeps growing: 80 (Mar 2016), 113 (Aug 2016), 173 (Jul 2017)
  • 106.
  • 107. •Fitbit is used in clinical research in two main ways: •as an intervention itself, to test whether it increases activity levels or improves treatment outcomes •as a means of monitoring study participants' activity levels
•1. Studies using Fitbit to increase patients' activity: •whether Fitbit increases activity in pediatric obesity patients •whether Fitbit increases activity in patients who underwent sleeve gastrectomy •whether Fitbit increases activity in young cystic fibrosis patients •whether Fitbit motivates cancer patients to increase physical activity •2. Studies using Fitbit to monitor the activity of trial participants: •using Fitbit to assess the health and prognosis of patients who received chemotherapy •using Fitbit to determine whether cash incentives increase children's/parents' activity •using Fitbit alongside other survey results to measure quality of life in brain tumor patients •using Fitbit to assess activity levels in patients with peripheral artery disease
  • 108. (2016. 4. 27.) •A study of the effect of weight loss on breast cancer recurrence •About 20% of breast cancer patients relapse, mostly with metastatic disease •Being overweight has long been known to raise breast cancer risk, and obesity is known to worsen the prognosis of early-stage breast cancer •However, there has been no study of the relationship between weight loss and recurrence risk •3,200 overweight or obese early-stage breast cancer patients will participate for 2 years •Depending on the results, weight loss could become part of the standard of care for breast cancer patients worldwide •Fitbit is supporting the weight-loss program: •Fitbit Charge HR: activity, calorie expenditure, and heart-rate tracking •Fitbit Aria Wi-Fi Smart Scale •FitStar: a personalized video exercise coaching service
  • 109.
  • 111. (Dec 23, 2014) •Biogen Idec uses Fitbit to monitor multiple sclerosis patients •The goal is to demonstrate the effectiveness of its expensive drugs and thereby maintain reimbursement prices •Could finer-grained measurement enable early detection of MS prodromal symptoms?
  • 113.
  • 114.
  • 115.
  • 116.
  • 117. (“Free vertical moments and transverse forces in human walking and their role in relation to arm-swing”, Yu Li, Weijie Wang, Robin H. Crompton and Michael M. Gunther) (“Synthesis of natural arm swing motion in human bipedal walking”, Jaeheung Park) ︎ Right Arm / Left Foot, Left Arm / Right Foot. “Arm movement during walking is an automatic action that maintains the body's mechanical balance, and serves as an indicator of the opposite foot's movement.” Changes in body-motion trajectories by gait type (foot orientation and arm-swing trajectory): normal gait, out-toed gait, stooped gait. Data collected by the Zikto Walk: • Impact: analysis of the impact delivered to the foot (Impact Score) • Gait cycle: analysis of the walking cycle (Interval Score) • Stride length: distance per step (Stride; for future, more advanced gait analysis) • 3D arm trajectory: the arm's movement path while walking, aggregated from arm accelerometer/gyro data • Walking posture: gait-posture classification based on the above data, in 8 categories • Asymmetry index: asymmetry scores by body part (shoulder, waist, pelvis); requires wearing the band on the opposite wrist once a week • Gait template: extracts distinctive features of each person's walk and stores a per-user template, for biometric authentication (with the courtesy of ZIKTO, Inc.)
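The band's scores start from wrist inertial data. A minimal version of the first processing step, detecting strides as peaks in accelerometer magnitude and turning interval regularity into a score, might look like this. It is an illustrative sketch only: the threshold, refractory period, and scoring formula are assumptions, not Zikto's algorithm.

```python
import math

def detect_steps(accel, threshold=1.2, refractory=3):
    """Return indices of step peaks in a stream of (ax, ay, az) samples.

    A step is counted when the acceleration magnitude (in g) crosses the
    threshold, at most once per `refractory` samples. Both values are
    illustrative, not tuned.
    """
    steps, last = [], -refractory
    for i, (ax, ay, az) in enumerate(accel):
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        if mag > threshold and i - last >= refractory:
            steps.append(i)
            last = i
    return steps

def interval_score(steps, sample_rate=50.0):
    """Score gait regularity on a 0-100 scale: lower variance in the
    step-to-step intervals gives a higher score (hypothetical rule)."""
    if len(steps) < 3:
        return 0.0
    intervals = [(b - a) / sample_rate for a, b in zip(steps, steps[1:])]
    mean = sum(intervals) / len(intervals)
    var = sum((x - mean) ** 2 for x in intervals) / len(intervals)
    return 100.0 / (1.0 + var / mean ** 2 * 100.0)

# Synthetic walk at 50 Hz: one spike every 25 samples (~2 steps/s).
accel = [(0.0, 0.0, 1.0)] * 200
for i in range(10, 200, 25):
    accel[i] = (0.0, 0.0, 2.0)
steps = detect_steps(accel)
print(len(steps), round(interval_score(steps), 1))  # 8 100.0
```

The asymmetry index would then compare such per-side statistics between the two wrists, which is why the slide notes a weekly opposite-wrist wearing session.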
  • 119. https://www.empatica.com/science Monitoring the Autonomic Nervous System: “Sympathetic activation increases when you experience excitement or stress, whether physical, emotional, or cognitive. The skin is the only organ that is purely innervated by the sympathetic nervous system.”
  • 120. from the talk of Professor Rosalind W. Picard @ Univ of Michigan 2015
  • 123.
  • 125.
  • 128.
  • 129. ResearchKit • Users can share medical/health data about themselves, measured with the iPhone's sensors, on the platform • Uses the accelerometer, microphone, gyroscope, GPS, and other sensors • Steps, activity, memory, voice tremor, and more • Addresses a long-standing problem of medical research: securing enough medical data • Removes physical and temporal barriers to participant enrollment (once per 3 months → once per second) • Encourages public participation in medical research, increasing the number of participants • Tens of thousands of participants signed up within 24 hours of launch • Conducted with the user's own consent
  • 130. ResearchKit •The initial release introduced 5 apps for 5 diseases
  • 135. Autism and Beyond: measuring facial expressions of young patients with autism • Mole Mapper: measuring morphological changes of moles • EpiWatch: measuring behavioral data of epilepsy patients
  • 136. •Stanford's cardiovascular disease research app, myHeart •11,000 participants enrolled within a day of release •Alan Yeung, the study lead at Stanford: "To get 11,000 participants the traditional way, we would have to recruit at 50 hospitals across the US for a year"
  • 137. •Parkinson's disease research app, mPower •5,589 participants enrolled within a day of release •A previous $60 million, 5-year effort had recruited only 800 patients
  • 138. The digital phenotype. Sachin H Jain, Brian W Powers, Jared B Hawkins & John S Brownstein. In the coming years, patient phenotypes captured to enhance health and wellness will extend to human interactions with digital technology. In 1982, the evolutionary biologist Richard Dawkins introduced the concept of the “extended phenotype”1, the idea that phenotypes should not be limited just to biological processes, such as protein biosynthesis or tissue growth, but extended to include all effects that a gene has on its environment inside or outside of the body of the individual organism. Dawkins stressed that many delineations of phenotypes are arbitrary. Animals and humans can modify their environments, and these modifications and associated behaviors are expressions of one’s genome and, thus, part of their extended phenotype. In the animal kingdom, he cites dam building by beavers as an example of the beaver’s extended phenotype1. As personal technology becomes increasingly embedded in human lives, we think there is an important extension of Dawkins’s theory—the notion of a ‘digital phenotype’. Can aspects of our interface with technology be somehow diagnostic and/or prognostic for certain conditions? Can one’s clinical data be linked and analyzed together with online activity and behavior data to create a unified, nuanced view of human disease? Here, we describe the concept of the digital phenotype. Although several disparate studies have touched on this notion, the framework for medicine has yet to be described. We attempt to define digital phenotype and further describe the opportunities and challenges in incorporating these data into healthcare. […the manifestations of disease by providing a more comprehensive and nuanced view of the experience of illness. Through the lens of the digital phenotype, an individual’s interaction…] Figure 1: Timeline of insomnia-related tweets from representative individuals. Density distributions (probability density functions) are shown for seven individual users over a two-year period. Density on the y axis highlights periods of relative activity for each user. A representative tweet from each user is shown as an example. http://www.nature.com/nbt/journal/v33/n5/full/nbt.3223.html
  • 143.
  • 144. Digital Phenotype: Your smartphone knows if you are depressed Ginger.io
  • 145. Ginger.io •How often you text •How long you talk on the phone •Whom you call •How far you travel •How much you move • UCSF, McLean Hospital: mental illness research • Novant Health: diabetes and postpartum depression research • UCSF, Duke: post-surgical recovery monitoring
  • 146. Digital Phenotype: Your smartphone knows if you are depressed J Med Internet Res. 2015 Jul 15;17(7):e175. The correlation analysis between the features and the PHQ-9 scores revealed that 6 of the 10 features were significantly correlated to the scores: • strong correlation: circadian movement, normalized entropy, location variance • correlation: phone usage features, usage duration and usage frequency
  • 147. Digital Phenotype: Your smartphone knows if you are depressed. J Med Internet Res. 2015 Jul 15;17(7):e175. Figure 4: Comparison of location and usage feature statistics between participants with no symptoms of depression (blue) and those with symptoms (red). Feature values are scaled between 0 and 1 for easier comparison. Boxes extend between the 25th and 75th percentiles, and whiskers show the range; horizontal solid lines inside the boxes are medians. One, two, and three asterisks show significant differences at the P<.05, P<.01, and P<.001 levels, respectively. Figure 5: Coefficients of correlation between location features, with asterisks indicating the same significance levels. (ENT, entropy; ENTN, normalized entropy; LV, location variance, i.e., the variability of the time the participant spent at the location clusters; HS, home stay; TT, transition time; TD, total distance; CM, circadian movement, i.e., to what extent the participant's sequence of locations followed a circadian rhythm; NC, number of clusters; UF, usage frequency; UD, usage duration.)
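The correlations reported on these slides are straightforward to compute once features and PHQ-9 scores are paired per participant. A sketch with hypothetical values (the numbers are invented for illustration; the paper reports that lower circadian movement accompanies higher depression scores):

```python
import math

# Hypothetical per-participant data: a location feature (circadian
# movement) paired with the same participants' PHQ-9 depression scores.
circadian_movement = [0.9, 0.7, 0.8, 0.4, 0.3, 0.5, 0.2]
phq9_scores        = [2,   4,   3,   9,   12,  8,   14]

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(circadian_movement, phq9_scores)
# A strongly negative r here mirrors the paper's finding: the less
# regular a participant's daily movement, the higher the PHQ-9 score.
```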
  • 148. Reece & Danforth, “Instagram photos reveal predictive markers of depression” (2016): higher Hue (bluer), lower Saturation (grayer), lower Brightness (darker). Can Instagram tell whether you are depressed?
  • 149. Digital Phenotype: Your Instagram knows if you are depressed. Results: Both All-data and Pre-diagnosis models were decisively superior to a null model (K_All = 157.5; K_Pre = 149.8). All-data predictors were significant with 99% probability. Pre-diagnosis and All-data confidence levels were largely identical, with two exceptions: Pre-diagnosis Brightness decreased to 90% confidence, and Pre-diagnosis posting frequency dropped to 30% confidence, suggesting a null predictive value in the latter case. Increased hue, along with decreased brightness and saturation, predicted depression. This means that photos posted by depressed individuals tended to be bluer, darker, and grayer (see Fig. 2). The more comments Instagram posts received, the more likely they were posted by depressed participants, but the opposite was true for likes received. In the All-data model, higher posting frequency was also associated with depression. Depressed participants were more likely to post photos with faces, but had a lower average face count per photograph than healthy participants. Finally, depressed participants were less likely to apply Instagram filters to their posted photos. Fig. 2: Magnitude and direction of regression coefficients in All-data (N=24,713) and Pre-diagnosis (N=18,513) models; x-axis values represent the adjustment in odds of an observation belonging to depressed individuals. Reece & Danforth, “Instagram photos reveal predictive markers of depression” (2016). Fig. 1: Comparison of HSV values. The right photograph has higher Hue (bluer), lower Saturation (grayer), and lower Brightness (darker) than the left photograph. Instagram photos posted by depressed individuals had HSV values shifted towards those in the right photograph, compared with photos posted by healthy individuals.
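The HSV color features the study regresses on are simple per-photo pixel averages. A minimal sketch using the standard-library `colorsys` module on a hypothetical four-pixel "photo" (the pixel values are invented; a real pipeline would read image files):

```python
import colorsys

# Hypothetical 2x2 "photo" as (R, G, B) pixels in [0, 1] -- bluish and dark.
photo = [
    (0.20, 0.30, 0.60),
    (0.25, 0.35, 0.55),
    (0.30, 0.30, 0.50),
    (0.22, 0.28, 0.58),
]

def mean_hsv(pixels):
    """Average Hue/Saturation/Value over all pixels -- the three
    color features used as depression predictors in the study."""
    hsv = [colorsys.rgb_to_hsv(r, g, b) for r, g, b in pixels]
    n = len(hsv)
    return tuple(sum(px[i] for px in hsv) / n for i in range(3))

h, s, v = mean_hsv(photo)
# Higher h (toward blue, ~0.6), lower s (grayer), and lower v (darker)
# were the color signature of depressed users' photos.
```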
Units of observation: In determining the best time span for this analysis, we encountered a difficult question: when and for how long does depression occur? A diagnosis of depression does not indicate the persistence of a depressive state for every moment of every day, and to conduct analysis using an individual's entire posting history as a single unit of observation is therefore rather specious. At the other extreme, taking each individual photograph as a unit of observation runs the risk of being too granular. De Choudhury et al. (5) looked at all of a given user's posts in a single day, and aggregated those data into per-person, per-day units of observation. We adopted this precedent of “user-days” as a unit of analysis. Statistical framework: We used Bayesian logistic regression with uninformative priors to determine the strength of individual predictors. Two separate models were trained. The All-data model used all collected data to address Hypothesis 1. The Pre-diagnosis model used only data collected before the date of each depressed participant's first diagnosis.
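The "user-days" aggregation described above can be sketched in a few lines; the post tuples and the choice of brightness as the aggregated feature are hypothetical stand-ins for the study's full feature set:

```python
from collections import defaultdict
from datetime import date

# Hypothetical posts: (user_id, post date, mean brightness of the photo).
posts = [
    ("u1", date(2014, 3, 1), 0.62),
    ("u1", date(2014, 3, 1), 0.58),
    ("u1", date(2014, 3, 2), 0.40),
    ("u2", date(2014, 3, 1), 0.71),
]

def to_user_days(posts):
    """Collapse per-photo features into per-user, per-day observations,
    following the 'user-days' unit adopted from De Choudhury et al."""
    buckets = defaultdict(list)
    for user, day, brightness in posts:
        buckets[(user, day)].append(brightness)
    return {key: sum(vals) / len(vals) for key, vals in buckets.items()}

user_days = to_user_days(posts)
# Four posts collapse into three user-day observations; u1's two photos
# on March 1 are averaged into a single data point.
```

Each user-day row would then become one observation in the logistic regression.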
  • 150. Digital Phenotype: Your Instagram knows if you are depressed. Reece & Danforth, “Instagram photos reveal predictive markers of depression” (2016). Filter usage differed significantly between groups (χ²_All = 907.84, p = 9.17e−164; χ²_Pre = 813.80, p = 2.87e−144). In particular, depressed participants were less likely than healthy participants to use any filters at all. When depressed participants did employ filters, they most disproportionately favored the “Inkwell” filter, which converts color photographs to black-and-white images. Conversely, healthy participants most disproportionately favored the Valencia filter, which lightens the tint of photos. Examples of filtered photographs are provided in SI Appendix VIII. Fig. 3: Instagram filter usage among depressed and healthy participants. Bars indicate the difference between observed and expected usage frequencies, based on a chi-squared analysis of independence. Blue bars indicate disproportionate use of a filter by depressed compared to healthy participants; orange bars indicate the reverse.
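The chi-squared analysis behind Fig. 3 compares observed filter counts against the frequencies expected if filter choice were independent of depression status. A self-contained sketch with invented counts (the table values are hypothetical, chosen only to echo the Inkwell-vs-Valencia pattern):

```python
# Hypothetical filter-usage counts: rows are participant groups,
# columns are filter choices.
observed = {
    "depressed": {"Inkwell": 40, "Valencia": 10, "none": 150},
    "healthy":   {"Inkwell": 15, "Valencia": 60, "none": 125},
}

def chi_squared(table):
    """Chi-squared statistic for independence of rows and columns,
    computed from marginal totals."""
    rows = list(table)
    cols = list(next(iter(table.values())))
    row_tot = {r: sum(table[r].values()) for r in rows}
    col_tot = {c: sum(table[r][c] for r in rows) for c in cols}
    total = sum(row_tot.values())
    stat = 0.0
    for r in rows:
        for c in cols:
            expected = row_tot[r] * col_tot[c] / total
            stat += (table[r][c] - expected) ** 2 / expected
    return stat

stat = chi_squared(observed)
# A large statistic relative to the degrees of freedom
# ((rows-1)*(cols-1) = 2 here) indicates filter choice and
# depression status are not independent.
```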
  • 151. Digital Phenotype: Your Instagram knows if you are depressed. Reece & Danforth, “Instagram photos reveal predictive markers of depression” (2016). VIII. Instagram filter examples. Fig. S8: Examples of the Inkwell and Valencia Instagram filters. Inkwell converts color photos to black-and-white; Valencia lightens tint. Compared to healthy participants, depressed participants most favored Inkwell; healthy participants most favored Valencia.
  • 152. Target Discovery → Analysis → Lead Discovery → Analysis → Clinical Trial → Post Market Surveillance. Digital Healthcare in Drug Development • Patient recruitment • Data measurement: sensors & wearables • Digital phenotyping • Medication adherence
  • 153. Ingestible Sensor, Proteus Digital Health
  • 154. Ingestible Sensor, Proteus Digital Health
  • 155.
  • 156. IEEE Trans Biomed Eng. 2014 Jul. An Ingestible Sensor for Measuring Medication Adherence. The 0.9% of devices that went undetected represent contributions from all components of the system. For the sensor, the most likely contribution is due to physiological corner cases, where a combination of stomach environment and receiver-sensor orientation may result in a small proportion of devices (no greater than 0.9%) being missed. Table IV - Exposure and performance in clinical trials: 412 subjects; 20,993 ingestions; maximum daily ingestion: 34; maximum use days: 90; 99.1% detection accuracy; 100% correct identification; 0% false positives; no SAEs/UADEs related to the system. Trials were conducted in the following patient populations (number of patients in parentheses): Healthy Volunteers (296), Cardiovascular disease (53), Tuberculosis (30), Psychiatry (28). (SAE = Serious Adverse Event; UADE = Unanticipated Adverse Device Effect)
  • 157. Jan 12, 2015. Clinical trial researchers using Oracle’s software will now be able to track patients’ medication adherence with Proteus’s technology: measuring participant adherence to drug protocols, and identifying the optimum dosing regimen for recommended use.
  • 158. Sep 10, 2015. Proteus and Otsuka have submitted a sensor-embedded version of the antipsychotic Abilify for FDA approval.
  • 160. Nov 13, 2017 • In November 2017, the FDA approved Abilify MyCite for marketing • Patient consent is required before prescription • Some have raised concerns about invasion of patient privacy • Up to four people, including the physician and caregivers, can receive the patient's adherence data
  • 162. Target Discovery → Analysis → Lead Discovery → Analysis → Clinical Trial → Post Market Surveillance. Digital Healthcare in Drug Development • SNS-based post-market surveillance • Blockchain-based post-market surveillance
  • 163. ‘Facebook for Patients’, PatientsLikeMe.com