Professor, SAHIST, Sungkyunkwan University
Director, Digital Healthcare Institute
Yoon Sup Choi, Ph.D.
인공지능은 의료를 어떻게 혁신하는가
“It's in Apple's DNA that technology alone is not enough. It's technology married with liberal arts.”
The Convergence of IT, BT and Medicine
최윤섭 지음
의료인공지능
표지디자인•최승협
최윤섭
의료 인공지능은 보수적인 의료 시스템을 재편할 혁신을 일으키고 있다. 의료 인공지능의 빠른 발전과
광범위한 영향은 전문화, 세분화되며 발전해 온 현대 의료 전문가들이 이해하기가 어려우며, 어디서부
터 공부해야 할지도 막연하다. 이런 상황에서 의료 인공지능의 개념과 적용, 그리고 의사와의 관계를 쉽
게 풀어내는 이 책은 좋은 길라잡이가 될 것이다. 특히 미래의 주역이 될 의학도와 젊은 의료인에게 유용
한 소개서이다.
━ 서준범, 서울아산병원 영상의학과 교수, 의료영상인공지능사업단장
인공지능이 의료의 패러다임을 크게 바꿀 것이라는 것에 동의하지 않는 사람은 거의 없다. 하지만 인공
지능이 처리해야 할 의료의 난제는 많으며 그 해결 방안도 천차만별이다. 흔히 생각하는 만병통치약 같
은 의료 인공지능은 존재하지 않는다. 이 책은 다양한 의료 인공지능의 개발, 활용 및 가능성을 균형 있
게 분석하고 있다. 인공지능을 도입하려는 의료인, 생소한 의료 영역에 도전할 인공지능 연구자 모두에
게 일독을 권한다.
━ 정지훈, 경희사이버대 미디어커뮤니케이션학과 선임강의교수, 의사
서울의대 기초의학교육을 책임지고 있는 교수의 입장에서, 산업화 이후 변하지 않은 현재의 의학 교육
으로는 격변하는 인공지능 시대에 의대생을 대비시키지 못한다는 한계를 절실히 느낀다. 저와 함께 의
대 인공지능 교육을 개척하고 있는 최윤섭 소장의 전문적 분석과 미래 지향적 안목이 담긴 책이다. 인공
지능이라는 미래를 대비할 의대생과 교수, 그리고 의대 진학을 고민하는 학생과 학부모에게 추천한다.
━ 최형진, 서울대학교 의과대학 해부학교실 교수, 내과 전문의
최근 의료 인공지능의 도입에 대해서 극단적인 시각과 태도가 공존하고 있다. 이 책은 다양한 사례와 깊
은 통찰을 통해 의료 인공지능의 현황과 미래에 대해 균형적인 시각을 제공하여, 인공지능이 의료에 본
격적으로 도입되기 위한 토론의 장을 마련한다. 의료 인공지능이 일상화된 10년 후 돌아보았을 때, 이 책
이 그런 시대를 이끄는 길라잡이 역할을 하였음을 확인할 수 있기를 기대한다.
━ 정규환, 뷰노 CTO
의료 인공지능은 다른 분야 인공지능보다 더 본질적인 이해가 필요하다. 단순히 인간의 일을 대신하는
수준을 넘어 의학의 패러다임을 데이터 기반으로 변화시키기 때문이다. 따라서 인공지능을 균형있게 이
해하고, 어떻게 의사와 환자에게 도움을 줄 수 있을지 깊은 고민이 필요하다. 세계적으로 일어나고 있는
이러한 노력의 결과물을 집대성한 이 책이 반가운 이유다.
━ 백승욱, 루닛 대표
의료 인공지능의 최신 동향뿐만 아니라, 의의와 한계, 전망, 그리고 다양한 생각거리까지 주는 책이다.
논쟁이 되는 여러 이슈에 대해서도 저자는 자신의 시각을 명확한 근거에 기반하여 설득력 있게 제시하
고 있다. 개인적으로는 이 책을 대학원 수업 교재로 활용하려 한다.
━ 신수용, 성균관대학교 디지털헬스학과 교수
최윤섭 지음
의료인공지능
값 20,000원
ISBN 979-11-86269-99-2
미래의료학자 최윤섭 박사가 제시하는
의료 인공지능의 현재와 미래
의료 딥러닝과 IBM 왓슨의 현주소
인공지능은 의사를 대체하는가
의료 인공지능
•1부: 제 2의 기계시대와 의료 인공지능

•2부: 의료 인공지능의 과거와 현재

•3부: 미래를 어떻게 맞이할 것인가
Inevitable Tsunami of Change
대한영상의학회 춘계학술대회 2017.6
Vinod Khosla
Founder, 1st CEO of Sun Microsystems
Partner of KPCB, CEO of Khosla Ventures
Legendary Venture Capitalist in Silicon Valley
“Technology will replace 80% of doctors”
https://www.youtube.com/watch?time_continue=70&v=2HMPRXstSvQ
“영상의학과 전문의를 양성하는 것을 당장 그만둬야 한다.
5년 안에 딥러닝이 영상의학과 전문의를 능가할 것은 자명하다.”
Hinton on Radiology
Luddites in the 1810’s
and/or
• AP 통신: 로봇이 인간 대신 기사를 작성
• 초당 2,000 개의 기사 작성 가능
• 기존에 300개 기업의 실적 ➞ 3,000 개 기업을 커버
• 1978
• As part of the obscure task of “discovery” —
providing documents relevant to a lawsuit — the
studios examined six million documents at a
cost of more than $2.2 million, much of it to pay
for a platoon of lawyers and paralegals who
worked for months at high hourly rates.
• 2011
• Now, thanks to advances in artificial intelligence,
“e-discovery” software can analyze documents
in a fraction of the time for a fraction of the
cost.
• In January, for example, Blackstone Discovery of
Palo Alto, Calif., helped analyze 1.5 million
documents for less than $100,000.
“At its height back in 2000, the U.S. cash equities trading desk at
Goldman Sachs’s New York headquarters employed 600 traders,
buying and selling stock on the orders of the investment bank’s
large clients. Today there are just two equity traders left”
• 일본의 Fukoku 생명보험에서는 보험금 지급 여부를 심사
하는 사람을 30명 이상 해고하고, IBM Watson Explorer
에게 맡기기로 결정
• 의료 기록을 바탕으로 Watson이 보험금 지급 여부를 판단
• 인공지능으로 교체하여 생산성을 30% 향상
• 2년 안에 ROI 가 나올 것이라고 예상
• 1년차: 140m yen
• 2년차: 200m yen
No choice but to bring AI into medicine
Martin Duggan,“IBM Watson Health - Integrated Care & the Evolution to Cognitive Computing”
• 약한 인공 지능 (Artificial Narrow Intelligence)
• 특정 방면에서 잘하는 인공지능
• 체스, 퀴즈, 메일 필터링, 상품 추천, 자율 운전
• 강한 인공 지능 (Artificial General Intelligence)
• 모든 방면에서 인간 급의 인공 지능
• 사고, 계획, 문제해결, 추상화, 복잡한 개념 학습
• 초 인공 지능 (Artificial Super Intelligence)
• 과학기술, 사회적 능력 등 모든 영역에서 인간보다 뛰어난 인공 지능
• “충분히 발달한 과학은 마법과 구분할 수 없다” - 아서 C. 클라크
언제쯤 기계가 인간 수준의 지능을 획득할 것인가?
전문가 설문 그래프: 2010~2100년 사이에 인간 수준의 인공지능이 등장할 것이라는 응답자 누적 비율이 10%, 50%, 90%에 도달하는 시점을 설문별로 표시
• PT-AI: Philosophy and Theory of AI (2011)
• AGI: Artificial General Intelligence (2012)
• EETN: Greek Association for Artificial Intelligence
• TOP100: Survey of most frequently cited 100 authors (2013)
• Combined: 위 설문들의 종합
출처: Superintelligence, Nick Bostrom (2014)
Superintelligence: Science of fiction?
Panelists: Elon Musk (Tesla, SpaceX), Bart Selman (Cornell), Ray Kurzweil (Google),
David Chalmers (NYU), Nick Bostrom(FHI), Demis Hassabis (Deep Mind), Stuart
Russell (Berkeley), Sam Harris, and Jaan Tallinn (CSER/FLI)
January 6-8, 2017, Asilomar, CA
https://brunch.co.kr/@kakao-it/49
https://www.youtube.com/watch?v=h0962biiZa4
Q: 초인공지능이란 영역은 도달 가능한 것인가?
Q: 초지능을 가진 개체의 출현이 가능할 것이라고 생각하는가?
→ 패널 9명 전원(Elon Musk, Stuart Russell, Bart Selman, Ray Kurzweil, David Chalmers, Nick Bostrom, Demis Hassabis, Sam Harris, Jaan Tallinn): YES

Q: 초지능의 실현이 일어나기를 희망하는가?
→ YES: Ray Kurzweil, Nick Bostrom, Demis Hassabis
→ Complicated: Elon Musk, Stuart Russell, Bart Selman, David Chalmers, Sam Harris, Jaan Tallinn
http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html
의료 인공지능
•1부: 제 2의 기계시대와 의료 인공지능

•2부: 의료 인공지능의 과거와 현재

•3부: 미래를 어떻게 맞이할 것인가
•복잡한 의료 데이터의 분석 및 insight 도출

•영상 의료/병리 데이터의 분석/판독

•연속 데이터의 모니터링 및 예방/예측
의료 인공지능의 세 유형
Jeopardy!
2011년 인간 챔피언 두 명과 퀴즈 대결을 벌여 압도적인 우승을 차지
600,000 pieces of medical evidence
2 million pages of text from 42 medical journals and clinical trials
69 guidelines, 61,540 clinical trials
IBM Watson on Medicine
Watson learned...
+
1,500 lung cancer cases
physician notes, lab results and clinical research
+
14,700 hours of hands-on training
메이요 클리닉 협력
(임상 시험 매칭)
전남대병원
도입
인도 마니팔 병원
WFO 도입
식약처 인공지능
가이드라인 초안
메드트로닉과
혈당관리 앱 시연
2011 2012 2013 2014 2015
뉴욕 MSK암센터 협력
(폐암)
MD앤더슨 협력
(백혈병)
MD앤더슨
파일럿 결과 발표
@ASCO
왓슨 펀드,
웰톡에 투자
뉴욕게놈센터 협력
(교모세포종 분석)
GeneMD,
왓슨 모바일 디벨로퍼
챌린지 우승
클리블랜드 클리닉 협력
(암 유전체 분석)
한국 IBM
왓슨 사업부 신설
Watson Health 출범
피텔, 익스플로리스 인수
J&J, 애플, 메드트로닉 협력
에픽 시스템즈, 메이요클리닉
제휴 (EHR 분석)
동경대 도입
( WFO)
왓슨 펀드,
모더나이징 메디슨
투자
학계/의료계
산업계
패쓰웨이 지노믹스 OME
클로즈드 알파 서비스 시작
트루븐 헬스
인수
애플 리서치 키트
통한 수면 연구 시작
2017
가천대
길병원
도입
메드트로닉
Sugar.IQ 출시
제약사
테바와 제휴
태국 범룽랏 국제 병원,
WFO 도입
머지
헬스케어
인수
2016
언더 아머 제휴
브로드 연구소 협력 발표
(유전체 분석-항암제 내성)
마니팔 병원의 WFO 정확성 발표
대구가톨릭병원
대구동산병원
도입
부산대병원
도입
왓슨 펀드,
패쓰웨이 지노믹스
투자
제퍼디! 우승
조선대병원
도입
한국 왓슨
컨소시움 출범
쥬피터 메디컬 센터 도입
식약처 인공지능
가이드라인
메이요 클리닉
임상시험매칭
결과발표
2018
건양대병원
도입
IBM Watson Health Chronicle
Annals of Oncology (2016) 27 (suppl_9): ix179–ix180. doi:10.1093/annonc/mdw601
Validation study to assess performance of IBM cognitive computing system Watson for oncology with Manipal multidisciplinary tumour board for 1000 consecutive cases: An Indian experience
•인도 마니팔 병원의 암환자 1,000명에 대해 의사와 WFO 권고안의 ‘일치율’을 비교

•유방암 638명, 대장암 126명, 직장암 124명, 폐암 112명

•의사-왓슨 일치율

•추천(50%), 고려(28%), 비추천(17%)

•의사의 진료안 중 5%는 왓슨의 권고안으로 제시되지 않음

•일치율이 암의 종류마다 달랐음

•직장암(85%), 폐암(17.8%)

•삼중음성 유방암(67.9%), HER2 음성 유방암 (35%)
San Antonio Breast Cancer Symposium—December 6-10, 2016
Concordance of WFO (@T2) and MMDT (@T1* vs. T2**), N = 638 Breast Cancer Cases

Time Point | REC, n (%) | REC + FC, n (%)
T1*        | 296 (46)   | 463 (73)
T2**       | 381 (60)   | 574 (90)

* T1: Time of original treatment decision by MMDT in the past (last 1–3 years)
** T2: Time (2016) of WFO's treatment advice and of MMDT's treatment decision upon blinded re-review of non-concordant cases
WFO in ASCO 2017
• Early experience with IBM WFO cognitive computing system for lung and colorectal cancer treatment (마니팔 병원)

• 지난 3년간: lung cancer(112), colon cancer(126), rectum cancer(124)
• lung cancer: localized 88.9%, meta 97.9%
• colon cancer: localized 85.5%, meta 76.6%
• rectum cancer: localized 96.8%, meta 80.6%
Performance of WFO in India
2017 ASCO annual Meeting, J Clin Oncol 35, 2017 (suppl; abstr 8527)
WFO in ASCO 2017
•가천대 길병원의 대장암과 위암 환자에 왓슨 적용 결과

• 대장암 환자(stage II-IV) 340명

• 진행성 위암 환자 185명 (Retrospective)

• 의사와의 일치율

• 대장암 환자: 73%

• 보조 (adjuvant) 항암치료를 받은 250명: 85%

• 전이성 환자 90명: 40%

• 위암 환자: 49%

• Trastuzumab/FOLFOX가 국민 건강 보험 수가를 받지 못함

• S-1(tegafur, gimeracil, oteracil) + cisplatin:

• 국내에서는 매우 흔히 쓰이는 요법이지만, 미국에서는 쓰이지 않음
ORIGINAL ARTICLE
Watson for Oncology and breast cancer treatment
recommendations: agreement with an expert
multidisciplinary tumor board
S. P. Somashekhar1*, M.-J. Sepúlveda2, S. Puglielli3, A. D. Norden3, E. H. Shortliffe4, C. Rohit Kumar1, A. Rauthan1, N. Arun Kumar1, P. Patil1, K. Rhee3 & Y. Ramya1
1 Manipal Comprehensive Cancer Centre, Manipal Hospital, Bangalore, India; 2 IBM Research (Retired), Yorktown Heights; 3 Watson Health, IBM Corporation, Cambridge; 4 Department of Surgical Oncology, College of Health Solutions, Arizona State University, Phoenix, USA
*Correspondence to: Prof. Sampige Prasannakumar Somashekhar, Manipal Comprehensive Cancer Centre, Manipal Hospital, Old Airport Road, Bangalore 560017, Karnataka, India. Tel: +91-9845712012; Fax: +91-80-2502-3759; E-mail: somashekhar.sp@manipalhospitals.com
Background: Breast cancer oncologists are challenged to personalize care with rapidly changing scientific evidence, drug
approvals, and treatment guidelines. Artificial intelligence (AI) clinical decision-support systems (CDSSs) have the potential to
help address this challenge. We report here the results of examining the level of agreement (concordance) between treatment
recommendations made by the AI CDSS Watson for Oncology (WFO) and a multidisciplinary tumor board for breast cancer.
Patients and methods: Treatment recommendations were provided for 638 breast cancers between 2014 and 2016 at the
Manipal Comprehensive Cancer Center, Bengaluru, India. WFO provided treatment recommendations for the identical cases in
2016. A blinded second review was carried out by the center’s tumor board in 2016 for all cases in which there was not
agreement, to account for treatments and guidelines not available before 2016. Treatment recommendations were considered
concordant if the tumor board recommendations were designated ‘recommended’ or ‘for consideration’ by WFO.
Results: Treatment concordance between WFO and the multidisciplinary tumor board occurred in 93% of breast cancer cases.
Subgroup analysis found that patients with stage I or IV disease were less likely to be concordant than patients with stage II or III
disease. Increasing age was found to have a major impact on concordance. Concordance declined significantly (P 0.02;
P < 0.001) in all age groups compared with patients <45 years of age, except for the age group 55–64 years. Receptor status
was not found to affect concordance.
Conclusion: Treatment recommendations made by WFO and the tumor board were highly concordant for breast cancer cases
examined. Breast cancer stage and patient age had significant influence on concordance, while receptor status alone did not.
This study demonstrates that the AI clinical decision-support system WFO may be a helpful tool for breast cancer treatment
decision making, especially at centers where expert breast cancer resources are limited.
Key words: Watson for Oncology, artificial intelligence, cognitive clinical decision-support systems, breast cancer,
concordance, multidisciplinary tumor board
Introduction
Oncologists who treat breast cancer are challenged by a large and
rapidly expanding knowledge base [1, 2]. As of October 2017, for
example, there were 69 FDA-approved drugs for the treatment of
breast cancer, not including combination treatment regimens
[3]. The growth of massive genetic and clinical databases, along
with computing systems to exploit them, will accelerate the speed
of breast cancer treatment advances and shorten the cycle time
for changes to breast cancer treatment guidelines [4, 5]. In add-
ition, these information management challenges in cancer care
are occurring in a practice environment where there is little time
available for tracking and accessing relevant information at the
point of care [6]. For example, a study that surveyed 1117 oncolo-
gists reported that on average 4.6 h per week were spent keeping
Annals of Oncology 29: 418–423, 2018. doi:10.1093/annonc/mdx781. Published online 9 January 2018
Table 2. MMDT and WFO recommendations after the initial and blinded second reviews (N = 638 breast cancer cases)

Review | Recommended, n (%) | For consideration, n (%) | Concordant total, n (%) | Not recommended, n (%) | Not available, n (%) | Non-concordant total, n (%)
Initial review (T1 MMDT vs. T2 WFO) | 296 (46) | 167 (26) | 463 (73) | 137 (21) | 38 (6) | 175 (27)
Second review (T2 MMDT vs. T2 WFO) | 397 (62) | 194 (30) | 591 (93) | 36 (5) | 11 (2) | 47 (7)

T1 MMDT: original MMDT recommendation from 2014 to 2016; T2 WFO: WFO advisor treatment recommendation in 2016; T2 MMDT: MMDT treatment recommendation in 2016; MMDT, Manipal multidisciplinary tumor board; WFO, Watson for Oncology.
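위 표의 ‘일치율’이 어떻게 계산되는지 감을 잡기 위한 간단한 예시 코드이다. 수치는 Table 2의 값을 그대로 옮긴 것이며, ‘추천(Recommended)’ 또는 ‘고려(For consideration)’로 분류된 비율을 일치율로 본다는 정의만 구현한 것이다.

```python
# Table 2의 수치로 일치율(concordance)을 계산해 보는 예시
cases_total = 638

def concordance(recommended, for_consideration, total=cases_total):
    """'추천' 또는 '고려'로 분류된 사례의 비율을 일치율로 계산한다."""
    return (recommended + for_consideration) / total

print(f"초기 검토 (T1 MMDT vs T2 WFO): {concordance(296, 167):.0%}")  # 약 73%
print(f"재검토   (T2 MMDT vs T2 WFO): {concordance(397, 194):.0%}")  # 약 93%
```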
Figure 1. Treatment concordance between WFO and the MMDT overall and by stage (MMDT, Manipal multidisciplinary tumor board; WFO, Watson for Oncology): Overall (n=638) 93%, Stage I (n=61) 80%, Stage II (n=262) 97%, Stage III (n=191) 95%, Stage IV (n=124) 86%
Figure 2. Treatment concordance between WFO and the MMDT by stage and receptor status (HER2/neu, human epidermal growth factor receptor 2; HR, hormone receptor): concordance across the receptor-status/metastasis subgroups ranged from 75% to 98%
잠정적 결론
•왓슨 포 온콜로지와 의사의 일치율: 

•암종별로 다르다.

•같은 암종에서도 병기별로 다르다.

•같은 암종에 대해서도 병원별/국가별로 다르다.

•시간이 흐름에 따라 달라질 가능성이 있다.
원칙이 필요하다
•어떤 환자의 경우, 왓슨에게 의견을 물을 것인가?

•왓슨을 (암종별로) 얼마나 신뢰할 것인가?

•왓슨의 의견을 환자에게 공개할 것인가?

•왓슨과 의료진의 판단이 다른 경우 어떻게 할 것인가?

•왓슨에게 보험 급여를 매길 수 있는가?
이러한 기준에 따라 의료의 질/치료효과가 달라질 수 있으나,

현재 개별 병원이 개별적인 기준으로 활용하게 됨
Empowering the Oncology Community for Cancer Care
Genomics
Oncology
Clinical
Trial
Matching
Watson Health’s oncology clients span more than 35 hospital systems
“Empowering the Oncology Community
for Cancer Care”
Andrew Norden, KOTRA Conference, March 2017, “The Future of Health is Cognitive”
IBM Watson Health
Watson for Clinical Trial Matching (CTM)
Current Challenges
• Searching across eligibility criteria of clinical trials is time consuming and labor intensive
• Fewer than 5% of adult cancer patients participate in clinical trials [1]
• 37% of sites fail to meet minimum enrollment targets; 11% of sites fail to enroll a single patient [2]

The Watson solution
• Uses structured and unstructured patient data to quickly check eligibility across relevant clinical trials
• Provides eligible trial considerations ranked by relevance
• Increases speed to qualify patients

Clinical Investigators (Opportunity)
• Trials to Patient: Perform feasibility analysis for a trial
• Identify sites with most potential for patient enrollment
• Optimize inclusion/exclusion criteria in protocols
→ Faster, more efficient recruitment strategies, better designed protocols

Point of Care (Offering)
• Patient to Trials: Quickly find the right trial that a patient might be eligible for amongst 100s of open trials available
→ Improve patient care quality, consistency, increased efficiency

[1] According to the National Comprehensive Cancer Network (NCCN)
[2] http://csdd.tufts.edu/files/uploads/02_-_jan_15,_2013_-_recruitment-retention.pdf
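IBM의 실제 구현과는 무관하게, ‘포함/배제 기준으로 환자에게 맞는 임상시험을 걸러내고 정렬한다’는 개념만을 보여주는 가상의 예시 코드이다. Trial, Patient 같은 클래스와 기준 항목은 모두 설명을 위해 임의로 가정한 것이다.

```python
from dataclasses import dataclass, field

@dataclass
class Trial:
    name: str
    cancer_type: str
    min_age: int
    max_age: int
    excluded_conditions: set = field(default_factory=set)

@dataclass
class Patient:
    age: int
    cancer_type: str
    conditions: set

def eligible_trials(patient, trials):
    """단순한 포함/배제 기준만으로 환자가 지원 가능한 임상시험을 골라낸다."""
    matches = []
    for t in trials:
        if (t.cancer_type == patient.cancer_type
                and t.min_age <= patient.age <= t.max_age
                and not (t.excluded_conditions & patient.conditions)):
            matches.append(t)
    # 실제 시스템은 비정형 기록까지 분석해 '관련도' 순으로 정렬하지만, 여기서는 이름순으로 대신한다.
    return sorted(matches, key=lambda t: t.name)

p = Patient(age=58, cancer_type="breast", conditions={"diabetes"})
trials = [
    Trial("BC-001", "breast", 18, 75, {"renal failure"}),
    Trial("BC-002", "breast", 18, 55),
    Trial("LC-001", "lung", 18, 80),
]
print([t.name for t in eligible_trials(p, trials)])  # ['BC-001']
```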
•총 16주간 HOG(Highlands Oncology Group)의 폐암과 유방암 환자 2,620명을 대상으로 진행

•90명의 환자를 3개의 노바티스 유방암 임상 프로토콜에 따라 선별

•임상 시험 코디네이터: 1시간 50분

•Watson CTM: 24분 (78% 시간 단축)

•Watson CTM은 임상 시험 기준에 해당되지 않는 환자 94%를 자동으로 스크리닝
•메이요 클리닉의 유방암 신약 임상시험에 등록자의 수가 80% 증가하였다는 결과 발표
•2018년 1월 구글이 전자의무기록(EMR)을 분석하여, 환자 치료 결과를 예측하는 인공지능 발표

•환자가 입원 중에 사망할 것인지

•장기간 입원할 것인지

•퇴원 후에 30일 내에 재입원할 것인지

•퇴원 시의 진단명

•이번 연구의 특징: 확장성

•과거의 다른 연구와 달리 EMR의 일부 데이터만 골라 전처리(pre-processing)하지 않고,

•전체 EMR을 통째로 모두 분석하였음: UCSF, UCM (시카고 대학병원)

•특히, 비정형 데이터인 의사의 진료 노트도 분석
Figure 4: The patient record shows a woman with metastatic breast cancer with malignant pleural
effusions and empyema. The patient timeline at the top of the figure contains circles for every
time-step for which at least a single token exists for the patient, and the horizontal lines show the
data-type. There is a close-up view of the most recent data-points immediately preceding a prediction
made 24 hours after admission. We trained models for each data-type and highlighted in red the
tokens which the models attended to – the non-highlighted text was not attended to but is shown for
context. The models pick up features in the medications, nursing flowsheets, and clinical notes to
make the prediction.
• TAAN(Time-Aware Neural Network)를 이용하여, 

• 전이성 유방암 환자의 EMR에서 어떤 부분을 인공지능이 더 유의하게 보았는지를 표시해본 결과, 

• 실제로 사망 위험도와 관계가 높은 데이터를 더 중요하게 보았음

• 진료 기록: 농흉(empyema), 흉수(pleural effusion) 등

• 간호 기록: 반코마이신, 메트로니다졸 등의 항생제 투약, 욕창(pressure ulcer)의 위험이 높음

• 흉부에 삽입하는 튜브(카테터)의 상표인 'PleurX'도 중요 단어로 파악
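논문이 사용한 실제 모델 구조와는 무관하게, ‘어텐션 가중치가 큰 토큰을 중요 단어로 하이라이트한다’는 개념만을 보여주는 간단한 예시 코드이다. 토큰과 벡터 값은 모두 임의로 가정한 것이다.

```python
import numpy as np

np.random.seed(0)
tokens = ["empyema", "pleural", "effusion", "vancomycin", "blood", "pressure"]
embeddings = np.random.randn(len(tokens), 8)   # 토큰 임베딩 (가상의 값)
query = np.random.randn(8)                     # 예측 시점의 질의 벡터 (가상의 값)

scores = embeddings @ query                      # 토큰별 점수
weights = np.exp(scores) / np.exp(scores).sum()  # softmax로 어텐션 가중치 계산

# 가중치가 큰 토큰일수록 예측에 더 크게 기여한 것으로 해석(하이라이트)한다.
for tok, w in sorted(zip(tokens, weights), key=lambda x: -x[1]):
    print(f"{tok:12s} {w:.2f}")
```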
• 복잡한 의료 데이터의 분석 및 insight 도출
• 영상 의료/병리 데이터의 분석/판독
• 연속 데이터의 모니터링 및 예방/예측
의료 인공지능의 세 유형
Deep Learning
http://theanalyticsstore.ie/deep-learning/
인공지능과 딥러닝의 관계
• 인공지능: 가장 넓은 개념으로, 전문가 시스템, 사이버네틱스, 기계학습 등을 포괄
• 기계학습: 인공지능의 한 분야로, 인공신경망, 결정트리, 서포트 벡터 머신 등의 방법론을 포함
• 딥러닝: 기계학습 중에서도 심층 인공신경망을 사용하는 분야로, 컨볼루션 신경망(CNN), 순환신경망(RNN) 등이 대표적
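딥러닝 모델(예: CNN)이 코드로는 어떤 모습인지 감을 잡기 위한 최소한의 예시이다. 본문에 나오는 특정 의료 인공지능 제품과는 무관하며, 입력 크기와 층 구성은 모두 임의로 가정한 것이다.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# 흑백 영상 패치(예: 64x64)를 정상/병변 두 클래스로 분류한다고 가정한 간단한 CNN
model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(16, 3, activation="relu"),   # 컨볼루션: 국소 패턴(엣지, 질감) 추출
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(2, activation="softmax"),     # 클래스별 확률 출력
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```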
페이스북의 딥페이스
Taigman, Y. et al. (2014). DeepFace: Closing the Gap to Human-Level Performance in Face Verification, CVPR'14.
Figure 2. Outline of the DeepFace architecture. A front-end of a single convolution-pooling-convolution filtering on the rectified input, followed by three
locally-connected layers and two fully-connected layers. Colors illustrate feature maps produced at each layer. The net includes more than 120 million
parameters, where more than 95% come from the local and fully connected layers.
very few parameters. These layers merely expand the input
into a set of simple local features.
The subsequent layers (L4, L5 and L6) are instead lo-
cally connected [13, 16], like a convolutional layer they ap-
ply a filter bank, but every location in the feature map learns
a different set of filters. Since different regions of an aligned
image have different local statistics, the spatial stationarity
The goal of training is to maximize the probability of the correct class (face id). We achieve this by minimizing the cross-entropy loss for each training sample. If k is the index of the true label for a given input, the loss is: L = −log p_k. The loss is minimized over the parameters by computing the gradient of L w.r.t. the parameters and …
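위 손실 함수 L = −log p_k 가 어떻게 동작하는지 임의의 숫자로 확인해 보는 간단한 예시이다(DeepFace의 실제 구현과는 무관하다).

```python
import numpy as np

logits = np.array([2.0, 0.5, -1.0])             # 네트워크 마지막 층의 출력 (가상의 값)
probs = np.exp(logits) / np.exp(logits).sum()   # softmax로 클래스 확률 계산
k = 0                                           # 정답 클래스(얼굴 ID)의 인덱스
loss = -np.log(probs[k])                        # 정답 확률이 1에 가까울수록 손실은 0에 가까워진다
print(probs, loss)
```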
Human: 95% vs. DeepFace in Facebook: 97.35%
Recognition Accuracy for Labeled Faces in the Wild (LFW) dataset (13,233 images, 5,749 people)
Schroff, F. et al. (2015). FaceNet:A Unified Embedding for Face Recognition and Clustering
Human: 95% vs. FaceNet of Google: 99.63%
Recognition Accuracy for Labeled Faces in the Wild (LFW) dataset (13,233 images, 5,749 people)
False accept
False reject
Figure 6. LFW errors. This shows all pairs of images that were
incorrectly classified on LFW. Only eight of the 13 errors shown
here are actual errors the other four are mislabeled in LFW.
5.7. Performance on Youtube Faces DB
We use the average similarity of all pairs of the first one
hundred frames that our face detector detects in each video.
This gives us a classification accuracy of 95.12%±0.39.
Using the first one thousand frames results in 95.18%.
Compared to [17] 91.4% who also evaluate one hundred
frames per video we reduce the error rate by almost half.
DeepId2+ [15] achieved 93.2% and our method reduces this
error by 30%, comparable to our improvement on LFW.
5.8. Face Clustering
Our compact embedding lends itself to be used in order
to cluster a users personal photos into groups of people with
the same identity. The constraints in assignment imposed
by clustering faces, compared to the pure verification task,
lead to truly amazing results. Figure 7 shows one cluster in
a users personal photo collection, generated using agglom-
erative clustering. It is a clear showcase of the incredible
invariance to occlusion, lighting, pose and even age.
Figure 7. Face Clustering. Shown is an exemplar cluster for one
user. All these images in the users personal photo collection were
clustered together.
6. Summary
We provide a method to directly learn an embedding into
an Euclidean space for face verification. This sets it apart
from other methods [15, 17] who use the CNN bottleneck
layer, or require additional post-processing such as concate-
nation of multiple models and PCA, as well as SVM clas-
sification. Our end-to-end training both simplifies the setup
and shows that directly optimizing a loss relevant to the task
at hand improves performance.
Another strength of our model is that it only requires
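FaceNet이 말하는 ‘임베딩 공간에서의 거리로 동일 인물 여부를 판정한다’는 아이디어를 개념적으로 보여주는 예시 코드이다. 임베딩 값과 임계값은 모두 설명을 위해 임의로 가정한 것이다.

```python
import numpy as np

def l2_distance(a, b):
    return np.linalg.norm(a - b)

def same_person(emb1, emb2, threshold=1.0):
    """두 얼굴 임베딩 사이의 유클리드 거리가 임계값보다 작으면 동일 인물로 판정한다."""
    return l2_distance(emb1, emb2) < threshold

face_a = np.array([0.10, 0.90, 0.20])   # 가상의 임베딩 벡터
face_b = np.array([0.15, 0.85, 0.25])
face_c = np.array([0.90, 0.10, 0.70])

print(same_person(face_a, face_b))   # True  (거리가 약 0.09로 임계값보다 작음)
print(same_person(face_a, face_c))   # False (거리가 약 1.24로 임계값보다 큼)
```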
구글의 페이스넷
바이두의 얼굴 인식 인공지능
Jingtuo Liu (2015) Targeting Ultimate Accuracy: Face Recognition via Deep Embedding
Human: 95% vs. Baidu: 99.77%
Recognition Accuracy for Labeled Faces in the Wild (LFW) dataset (13,233 images, 5,749 people)
“Although several algorithms have achieved nearly perfect accuracy in the 6000-pair verification task, a more practical […] can achieve 95.8% identification rate, relatively reducing the error rate by about 77%.”

TABLE 3. COMPARISONS WITH OTHER METHODS ON SEVERAL EVALUATION TASKS

Method | Pair-wise Accuracy(%) | Rank-1(%) | DIR(%) @ FAR=1% | Verification(%) @ FAR=0.1% | Open-set Identification(%) @ Rank=1, FAR=0.1%
IDL Ensemble Model | 99.77 | 98.03 | 95.8 | 99.41 | 92.09
IDL Single Model | 99.68 | 97.60 | 94.12 | 99.11 | 89.08
FaceNet [12] | 99.63 | NA | NA | NA | NA
DeepID3 [9] | 99.53 | 96.00 | 81.40 | NA | NA
Face++ [2] | 99.50 | NA | NA | NA | NA
Facebook [15] | 98.37 | 82.5 | 61.9 | NA | NA
Learning from Scratch [4] | 97.73 | NA | NA | 80.26 | 28.90
HighDimLBP [10] | 95.17 | NA | NA | 41.66 (reported in [4]) | 18.07 (reported in [4])
• 6,000쌍의 얼굴 사진 중에 바이두의 인공지능은 불과 14쌍만을 잘못 판단

• 알고 보니 이 14쌍 중 5쌍은 오히려 정답(레이블) 쪽에 오류가 있었고, 실제로는 인공지능의 판단이 정확했음
Radiologist
•손 엑스레이 영상을 판독하여 환자의 골연령(뼈 나이)을 계산해주는 인공지능

• 기존에 의사는 그룰리히-파일(Greulich-Pyle)법 등으로 표준 사진과 엑스레이를 비교하여 판독

• 인공지능은 참조표준영상에서 성별/나이별 패턴을 찾아서 유사성을 확률로 표시 + 표준 영상 검색

•의사가 성조숙증이나 저성장을 진단하는데 도움을 줄 수 있음
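식약처 보도자료가 설명하는 방식(참조표준영상과의 유사성을 성별·나이별 확률로 제시)을 개념적으로만 흉내 낸 예시 코드이다. 실제 뷰노메드 본에이지의 구현과는 무관하며, 클래스 범위와 확률값은 임의로 가정한 것이다.

```python
import numpy as np

np.random.seed(1)
bone_ages = np.arange(5, 19)                           # 5세~18세 골연령 클래스 (임의로 가정한 범위)
probs = np.random.dirichlet(np.ones(len(bone_ages)))   # 모델이 출력했다고 가정한 클래스별 확률

# 확률이 높은 상위 3개 골연령 후보를 제시한다.
top3 = np.argsort(probs)[::-1][:3]
for idx in top3:
    print(f"골연령 {bone_ages[idx]}세: 유사 확률 {probs[idx]:.1%}")
# 의사는 이 확률값에 호르몬 수치 등 임상 정보를 더해 최종적으로 성조숙증/저성장 여부를 판단한다.
```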
보도자료
국내에서 개발한 인공지능(AI) 기반 의료기기 첫 허가
- 인공지능 기술 활용하여 뼈 나이 판독한다 -
식품의약품안전처 처장 류영진 는 국내 의료기기업체 주 뷰노가
개발한 인공지능 기술이 적용된 의료영상분석장치소프트웨어
뷰노메드 본에이지 를 월 일 허가했다고
밝혔습니다
이번에 허가된 뷰노메드 본에이지 는 인공지능 이 엑스레이 영상을
분석하여 환자의 뼈 나이를 제시하고 의사가 제시된 정보 등으로
성조숙증이나 저성장을 진단하는데 도움을 주는 소프트웨어입니다
그동안 의사가 환자의 왼쪽 손 엑스레이 영상을 참조표준영상
과 비교하면서 수동으로 뼈 나이를 판독하던 것을 자동화하여
판독시간을 단축하였습니다
이번 허가 제품은 년 월부터 빅데이터 및 인공지능 기술이
적용된 의료기기의 허가 심사 가이드라인 적용 대상으로 선정되어
임상시험 설계에서 허가까지 맞춤 지원하였습니다
뷰노메드 본에이지 는 환자 왼쪽 손 엑스레이 영상을 분석하여 의
료인이 환자 뼈 나이를 판단하는데 도움을 주기 위한 목적으로
허가되었습니다
분석은 인공지능이 촬영된 엑스레이 영상의 패턴을 인식하여 성별
남자 개 여자 개 로 분류된 뼈 나이 모델 참조표준영상에서
성별 나이별 패턴을 찾아 유사성을 확률로 표시하면 의사가 확률값
호르몬 수치 등의 정보를 종합하여 성조숙증이나 저성장을 진단합
니다
임상시험을 통해 제품 정확도 성능 를 평가한 결과 의사가 판단한
뼈 나이와 비교했을 때 평균 개월 차이가 있었으며 제조업체가
해당 제품 인공지능이 스스로 인지 학습할 수 있도록 영상자료를
주기적으로 업데이트하여 의사와의 오차를 좁혀나갈 수 있도록
설계되었습니다
인공지능 기반 의료기기 임상시험계획 승인건수는 이번에 허가받은
뷰노메드 본에이지 를 포함하여 현재까지 건입니다
임상시험이 승인된 인공지능 기반 의료기기는 자기공명영상으로
뇌경색 유형을 분류하는 소프트웨어 건 엑스레이 영상을 통해
폐결절 진단을 도와주는 소프트웨어 건 입니다
참고로 식약처는 인공지능 가상현실 프린팅 등 차 산업과
관련된 의료기기 신속한 개발을 지원하기 위하여 제품 연구 개발부터
임상시험 허가에 이르기까지 전 과정을 맞춤 지원하는 차세대
프로젝트 신개발 의료기기 허가도우미 등을 운영하고 있
습니다
식약처는 이번 제품 허가를 통해 개개인의 뼈 나이를 신속하게
분석 판정하는데 도움을 줄 수 있을 것이라며 앞으로도 첨단 의료기기
개발이 활성화될 수 있도록 적극적으로 지원해 나갈 것이라고
밝혔습니다
저는 뷰노의 자문을 맡고 있으며, 지분 관계가 있음을 밝힙니다
Since 1992, concerns regarding interob-
server variability in manual bone age esti-
mation [4] have led to the establishment of
several automatic computerized methods for
bone age estimation, including computer-as-
sisted skeletal age scores, computer-aided
skeletal maturation assessment systems, and
BoneXpert (Visiana) [5–14]. BoneXpert was
developed according to traditional machine-
learning techniques and has been shown to
have a good performance for patients of var-
ious ethnicities and in various clinical set-
tings [10–14]. The deep-learning technique
is an improvement in artificial neural net-
works. Unlike traditional machine-learning
techniques, deep-learning techniques allow
an algorithm to program itself by learning
from the images given a large dataset of la-
beled examples, thus removing the need to
specify rules [15].
Deep-learning techniques permit higher
levels of abstraction and improved predic-
tions from data. Deep-learning techniques
Computerized Bone Age
Estimation Using Deep Learning–
Based Program: Evaluation of the
Accuracy and Efficiency
Jeong Rye Kim1, Woo Hyun Shim1, Hee Mang Yoon1, Sang Hyup Hong1, Jin Seong Lee1, Young Ah Cho1, Sangki Kim2
Kim JR, Shim WH, Yoon MH, et al.
1 Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-ro 43-gil, Songpa-gu, Seoul 05505, South Korea. Address correspondence to H. M. Yoon (espoirhm@gmail.com).
2 Vuno Research Center, Vuno Inc., Seoul, South Korea.
Pediatric Imaging • Original Research
AJR 2017; 209:1–7
Bone age estimation is crucial for
developmental status determina-
tions and ultimate height predic-
tions in the pediatric population,
particularly for patients with growth disor-
ders and endocrine abnormalities [1]. Two
major left-hand wrist radiograph-based
methods for bone age estimation are current-
ly used: the Greulich-Pyle [2] and Tanner-
Whitehouse [3] methods. The former is much
more frequently used in clinical practice.
Greulich-Pyle–based bone age estimation is
performed by comparing a patient’s left-hand
radiograph to standard radiographs in the
Greulich-Pyle atlas and is therefore simple
and easily applied in clinical practice. How-
ever, the process of bone age estimation,
which comprises a simple comparison of
multiple images, can be repetitive and time
consuming and is thus sometimes burden-
some to radiologists. Moreover, the accuracy
depends on the radiologist’s experience and
tends to be subjective.
Keywords: bone age, children, deep learning, neural
network model
DOI:10.2214/AJR.17.18224
J. R. Kim and W. H. Shim contributed equally to this work.
Received March 12, 2017; accepted after revision
July 7, 2017.
S. Kim is employed by Vuno, Inc., which created the deep
learning–based automatic software system for bone
age determination. J. R. Kim, W. H. Shim, H. M. Yoon,
S. H. Hong, J. S. Lee, and Y. A. Cho are employed by
Asan Medical Center, which holds patent rights for the
deep learning–based automatic software system for
bone age assessment.
OBJECTIVE. The purpose of this study is to evaluate the accuracy and efficiency of a
new automatic software system for bone age assessment and to validate its feasibility in clini-
cal practice.
MATERIALS AND METHODS. A Greulich-Pyle method–based deep-learning tech-
nique was used to develop the automatic software system for bone age determination. Using
this software, bone age was estimated from left-hand radiographs of 200 patients (3–17 years
old) using first-rank bone age (software only), computer-assisted bone age (two radiologists
with software assistance), and Greulich-Pyle atlas–assisted bone age (two radiologists with
Greulich-Pyle atlas assistance only). The reference bone age was determined by the consen-
sus of two experienced radiologists.
RESULTS. First-rank bone ages determined by the automatic software system showed a
69.5% concordance rate and significant correlations with the reference bone age (r = 0.992;
p < 0.001). Concordance rates increased with the use of the automatic software system for
both reviewer 1 (63.0% for Greulich-Pyle atlas–assisted bone age vs 72.5% for computer-as-
sisted bone age) and reviewer 2 (49.5% for Greulich-Pyle atlas–assisted bone age vs 57.5% for
computer-assisted bone age). Reading times were reduced by 18.0% and 40.0% for reviewers
1 and 2, respectively.
CONCLUSION. Automatic software system showed reliably accurate bone age estima-
tions and appeared to enhance efficiency by reducing reading times without compromising
the diagnostic accuracy.
Kim et al.
Accuracy and Efficiency of Computerized Bone Age Estimation
Pediatric Imaging
Original Research
• 총 환자의 수: 200명

• 레퍼런스: 경험 많은 소아영상의학과 전문의 2명(18년, 4년 경력)의 컨센서스

• 의사A: 소아영상 세부전공한 영상의학 전문의 (500례 이상의 판독 경험)

• 의사B: 영상의학과 2년차 전공의 (판독법 하루 교육 이수 + 20례 판독)

• 인공지능: VUNO의 골연령 판독 딥러닝
AJR Am J Roentgenol. 2017 Dec;209(6):1374-1380.
인공지능 vs 의사: 골연령 판독 정확도(%)
• 인공지능: 69.5%
• 의사 A (영상의학과 펠로우, 소아영상 세부전공): 63%
• 의사 B (영상의학과 2년차 전공의): 49.5%
AJR Am J Roentgenol. 2017 Dec;209(6):1374-1380.
골연령 판독에 인간 의사와 인공지능의 시너지 효과
인공지능 vs 의사, 인공지능 + 의사: 골연령 판독 정확도(%)
• 인공지능: 69.5%
• 의사 A (영상의학과 펠로우, 소아영상 세부전공): 63% → 의사 A + 인공지능: 72.5%
• 의사 B (영상의학과 2년차 전공의): 49.5% → 의사 B + 인공지능: 57.5%
AJR Am J Roentgenol. 2017 Dec;209(6):1374-1380.
총 판독 시간(m)
• 의사 A: 인공지능 없이 188분 → 인공지능 활용 시 154분 (약 18% 절감)
• 의사 B: 인공지능 없이 180분 → 인공지능 활용 시 108분 (약 40% 절감)
골연령 판독에서 인공지능을 활용하면 판독 시간의 절감도 가능
AJR Am J Roentgenol. 2017 Dec;209(6):1374-1380.
ORIGINAL RESEARCH • THORACIC IMAGING
Development and Validation of Deep
Learning–based Automatic Detection
Algorithm for Malignant Pulmonary Nodules
on Chest Radiographs
Ju Gang Nam, MD* • Sunggyun Park, PhD* • Eui Jin Hwang, MD • Jong Hyuk Lee, MD • Kwang-Nam Jin, MD,
PhD • KunYoung Lim, MD, PhD • Thienkai HuyVu, MD, PhD • Jae Ho Sohn, MD • Sangheum Hwang, PhD • Jin
Mo Goo, MD, PhD • Chang Min Park, MD, PhD
From the Department of Radiology and Institute of Radiation Medicine, Seoul National University Hospital and College of Medicine, 101 Daehak-ro, Jongno-gu, Seoul
03080, Republic of Korea (J.G.N., E.J.H., J.M.G., C.M.P.); Lunit Incorporated, Seoul, Republic of Korea (S.P.); Department of Radiology, Armed Forces Seoul Hospital,
Seoul, Republic of Korea (J.H.L.); Department of Radiology, Seoul National University Boramae Medical Center, Seoul, Republic of Korea (K.N.J.); Department of
Radiology, National Cancer Center, Goyang, Republic of Korea (K.Y.L.); Department of Radiology and Biomedical Imaging, University of California, San Francisco,
San Francisco, Calif (T.H.V., J.H.S.); and Department of Industrial & Information Systems Engineering, Seoul National University of Science and Technology, Seoul,
Republic of Korea (S.H.). Received January 30, 2018; revision requested March 20; revision received July 29; accepted August 6. Address correspondence to C.M.P.
(e-mail: cmpark.morphius@gmail.com).
Study supported by SNUH Research Fund and Lunit (06–2016–3000) and by Seoul Research and Business Development Program (FI170002).
*J.G.N. and S.P. contributed equally to this work.
Conflicts of interest are listed at the end of this article.
Radiology 2018; 00:1–11 • https://doi.org/10.1148/radiol.2018180237 • Content codes:
Purpose: To develop and validate a deep learning–based automatic detection algorithm (DLAD) for malignant pulmonary nodules
on chest radiographs and to compare its performance with physicians including thoracic radiologists.
Materials and Methods: For this retrospective study, DLAD was developed by using 43292 chest radiographs (normal radiograph–
to–nodule radiograph ratio, 34067:9225) in 34676 patients (healthy-to-nodule ratio, 30784:3892; 19230 men [mean age, 52.8
years; age range, 18–99 years]; 15446 women [mean age, 52.3 years; age range, 18–98 years]) obtained between 2010 and 2015,
which were labeled and partially annotated by 13 board-certified radiologists, in a convolutional neural network. Radiograph clas-
sification and nodule detection performances of DLAD were validated by using one internal and four external data sets from three
South Korean hospitals and one U.S. hospital. For internal and external validation, radiograph classification and nodule detection
performances of DLAD were evaluated by using the area under the receiver operating characteristic curve (AUROC) and jackknife
alternative free-response receiver-operating characteristic (JAFROC) figure of merit (FOM), respectively. An observer performance
test involving 18 physicians, including nine board-certified radiologists, was conducted by using one of the four external validation
data sets. Performances of DLAD, physicians, and physicians assisted with DLAD were evaluated and compared.
Results: According to one internal and four external validation data sets, radiograph classification and nodule detection perfor-
mances of DLAD were a range of 0.92–0.99 (AUROC) and 0.831–0.924 (JAFROC FOM), respectively. DLAD showed a higher
AUROC and JAFROC FOM at the observer performance test than 17 of 18 and 15 of 18 physicians, respectively (P < .05), and
all physicians showed improved nodule detection performances with DLAD (mean JAFROC FOM improvement, 0.043; range,
0.006–0.190; P < .05).
Conclusion: This deep learning–based automatic detection algorithm outperformed physicians in radiograph classification and nod-
ule detection performance for malignant pulmonary nodules on chest radiographs, and it enhanced physicians’ performances when
used as a second reader.
©RSNA, 2018
Online supplemental material is available for this article.
• 43,292 chest PA (normal:nodule=34,067:9225)
• labeled/annotated by 13 board-certified radiologists.
• DLAD were validated 1 internal + 4 external datasets
• 서울대병원 / 보라매병원 / 국립암센터 / UCSF
• Classification / Lesion localization
• 인공지능 vs. 의사 vs. 인공지능+의사
• 다양한 수준의 의사와 비교
• non-radiology / radiology residents
• board-certified radiologist / Thoracic radiologists
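이 연구가 사용한 평가지표 중 AUROC를 계산하는 일반적인 방법을 보여주는 예시이다. 데이터는 임의로 가정한 값이며, 병변 위치까지 평가하는 JAFROC FOM은 별도의 분석 도구(예: R의 RJafroc 패키지)가 필요하다.

```python
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1, 0, 1, 0, 1]                       # 1 = 악성 결절이 있는 영상, 0 = 정상 영상 (가상의 정답)
y_score = [0.1, 0.4, 0.8, 0.35, 0.2, 0.9, 0.05, 0.7]    # 모델이 출력한 결절 존재 확률 (가상의 값)

print(f"AUROC = {roc_auc_score(y_true, y_score):.2f}")
```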
“아산인공지능워크샵”, 박창민 교수님, 서울아산병원 2018.7.28
Table 3: Patient Classification and Nodule Detection at the Observer Performance Test
(Test 1 = 의사 단독, Test 2 = 의사 + 인공지능(DLAD). 각 항목은 radiograph classification AUROC / nodule detection JAFROC FOM이며, P 값은 classification / detection 순이다.)

비영상의학과 의사 (Nonradiology physicians: 산부인과·정형외과·내과 4년차 전공의)
• Observer 1: Test 1 = 0.77 / 0.716; DLAD vs Test 1 P < .001 / < .001; Test 2 = 0.91 / 0.853; Test 1 vs Test 2 P < .001 / < .001
• Observer 2: Test 1 = 0.78 / 0.657; P < .001 / < .001; Test 2 = 0.90 / 0.846; P < .001 / < .001
• Observer 3: Test 1 = 0.80 / 0.700; P < .001 / < .001; Test 2 = 0.88 / 0.783; P < .001 / < .001
• Group (FOM): 0.691 (P < .001*) → 0.828 (P < .001*)

영상의학과 전공의 (Radiology residents: Observer 4 = 1년차, 5–6 = 2년차, 7–9 = 3년차)
• Observer 4: 0.78 / 0.767; P < .001 / < .001; 0.80 / 0.785; P = .02 / .03
• Observer 5: 0.86 / 0.772; P = .001 / < .001; 0.91 / 0.837; P = .02 / < .001
• Observer 6: 0.86 / 0.789; P = .05 / .002; 0.86 / 0.799; P = .08 / .54
• Observer 7: 0.84 / 0.807; P = .01 / .003; 0.91 / 0.843; P = .003 / .02
• Observer 8: 0.87 / 0.797; P = .10 / .003; 0.90 / 0.845; P = .03 / .001
• Observer 9: 0.90 / 0.847; P = .52 / .12; 0.92 / 0.867; P = .04 / .03
• Group (FOM): 0.790 (P < .001*) → 0.867 (P < .001*)

영상의학과 전문의 (Board-certified radiologists: Observer 10–12 = 7년, 13–14 = 8년 경력)
• Observer 10: 0.87 / 0.836; P = .05 / .01; 0.90 / 0.865; P = .004 / .002
• Observer 11: 0.83 / 0.804; P < .001 / < .001; 0.84 / 0.817; P = .03 / .04
• Observer 12: 0.88 / 0.817; P = .18 / .005; 0.91 / 0.841; P = .01 / .01
• Observer 13: 0.91 / 0.824; P > .99 / .02; 0.92 / 0.836; P = .51 / .24
• Observer 14: 0.88 / 0.834; P = .14 / .03; 0.88 / 0.840; P = .87 / .23
• Group (FOM): 0.821 (P = .02*) → 0.840 (P = .01*)

흉부 영상의학과 전문의 (Thoracic radiologists: Observer 15 = 26년, 16 = 13년, 17–18 = 9년 경력)
• Observer 15: 0.94 / 0.856; P = .15 / .21; 0.96 / 0.878; P = .08 / .03
• Observer 16: 0.92 / 0.854; P = .60 / .17; 0.93 / 0.872; P = .34 / .02
• Observer 17: 0.86 / 0.820; P = .02 / .01; 0.88 / 0.838; P = .14 / .12
• Observer 18: 0.84 / 0.800; P < .001 / < .001; 0.87 / 0.827; P = .02 / .02
• Group (FOM): 0.833 (P = .08*) → 0.854 (P < .001*)

인공지능(DLAD) 단독: AUROC 0.91 / JAFROC FOM 0.885

Note: Observers 1–3 were 4th-year residents from obstetrics and gynecology, orthopedic surgery, and internal medicine. *: 그룹 단위 비교의 P 값.
•인공지능(DLAD)을 second reader로 활용하면(Test 2) 판독 정확도가 개선
•radiograph classification: 18명 중 17명이 개선 (이 중 15명은 P < .05로 유의)
•nodule detection: 18명 전원이 개선 (이 중 14명은 P < .05로 유의)
인공지능(DLAD) 단독: radiograph classification AUROC 0.91, nodule detection JAFROC FOM 0.885
•“인공지능 단독” 판독이 “영상의학과 전문의 + 인공지능” 판독보다 대부분 더 정확
•radiograph classification: 전문의 9명 중 6명보다 나음
•nodule detection: 전문의 9명 전원보다 나음
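이러한 observer performance test의 기본 지표인 radiograph classification AUROC가 어떻게 계산되고, second reader 결합이 점수에 어떤 식으로 반영될 수 있는지를 보여주는 최소한의 파이썬 스케치다. 데이터와 결합 방식(단순 최댓값)은 설명을 위한 가정이며, 실제 연구의 데이터나 JAFROC 분석과는 무관하다.

```python
# "의사 단독" 점수와 "의사 + 인공지능 second reader" 점수의 AUROC를 비교해 보는 스케치입니다.
# (numpy, scikit-learn 가정; 데이터와 결합 방식은 설명용 가정이며 실제 연구 방법과는 무관합니다)
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=300)                               # 1 = 악성 폐결절이 있는 흉부 X선 (가상)
reader = np.clip(0.55 * y + rng.normal(0.30, 0.25, 300), 0, 1) # Test 1: 의사 단독 판독 점수 (가상)
dlad = np.clip(0.75 * y + rng.normal(0.15, 0.20, 300), 0, 1)   # 인공지능(DLAD) 단독 점수 (가상)
with_ai = np.maximum(reader, dlad)                             # second reader 결합의 매우 단순한 예시

print("Test 1 (reader alone) AUROC :", round(roc_auc_score(y, reader), 3))
print("DLAD alone AUROC            :", round(roc_auc_score(y, dlad), 3))
print("Test 2 (reader + AI) AUROC  :", round(roc_auc_score(y, with_ai), 3))
```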
당뇨성 망막병증 판독 인공지능
당뇨성 망막병증
• 당뇨병의 대표적 합병증: 당뇨병력이 30년 이상인 환자의 90%에서 발병

• 안과 전문의들이 안저(안구의 안쪽)를 사진으로 찍어서 판독

• 망막 내 미세혈관 생성, 출혈, 삼출물 정도를 파악하여 진단
Case Study: TensorFlow in Medicine - Retinal Imaging (TensorFlow Dev Summit 2017)
Development and Validation of a Deep Learning Algorithm
for Detection of Diabetic Retinopathy
in Retinal Fundus Photographs
Varun Gulshan, PhD; Lily Peng, MD, PhD; Marc Coram, PhD; Martin C. Stumpe, PhD; Derek Wu, BS; Arunachalam Narayanaswamy, PhD;
Subhashini Venugopalan, MS; Kasumi Widner, MS; Tom Madams, MEng; Jorge Cuadros, OD, PhD; Ramasamy Kim, OD, DNB;
Rajiv Raman, MS, DNB; Philip C. Nelson, BS; Jessica L. Mega, MD, MPH; Dale R. Webster, PhD
IMPORTANCE Deep learning is a family of computational methods that allow an algorithm to
program itself by learning from a large set of examples that demonstrate the desired
behavior, removing the need to specify rules explicitly. Application of these methods to
medical imaging requires further assessment and validation.
OBJECTIVE To apply deep learning to create an algorithm for automated detection of diabetic
retinopathy and diabetic macular edema in retinal fundus photographs.
DESIGN AND SETTING A specific type of neural network optimized for image classification
called a deep convolutional neural network was trained using a retrospective development
data set of 128 175 retinal images, which were graded 3 to 7 times for diabetic retinopathy,
diabetic macular edema, and image gradability by a panel of 54 US licensed ophthalmologists
and ophthalmology senior residents between May and December 2015. The resultant
algorithm was validated in January and February 2016 using 2 separate data sets, both
graded by at least 7 US board-certified ophthalmologists with high intragrader consistency.
EXPOSURE Deep learning–trained algorithm.
MAIN OUTCOMES AND MEASURES The sensitivity and specificity of the algorithm for detecting
referable diabetic retinopathy (RDR), defined as moderate and worse diabetic retinopathy,
referable diabetic macular edema, or both, were generated based on the reference standard
of the majority decision of the ophthalmologist panel. The algorithm was evaluated at 2
operating points selected from the development set, one selected for high specificity and
another for high sensitivity.
RESULTS The EyePACS-1 data set consisted of 9963 images from 4997 patients (mean age, 54.4 years; 62.2% women; prevalence of RDR, 683/8878 fully gradable images [7.8%]); the Messidor-2 data set had 1748 images from 874 patients (mean age, 57.6 years; 42.6% women; prevalence of RDR, 254/1745 fully gradable images [14.6%]). For detecting RDR, the algorithm had an area under the receiver operating curve of 0.991 (95% CI, 0.988-0.993) for EyePACS-1 and 0.990 (95% CI, 0.986-0.995) for Messidor-2. Using the first operating cut point with high specificity, for EyePACS-1, the sensitivity was 90.3% (95% CI, 87.5%-92.7%) and the specificity was 98.1% (95% CI, 97.8%-98.5%). For Messidor-2, the sensitivity was 87.0% (95% CI, 81.1%-91.0%) and the specificity was 98.5% (95% CI, 97.7%-99.1%). Using a second operating point with high sensitivity in the development set, for EyePACS-1 the sensitivity was 97.5% and specificity was 93.4% and for Messidor-2 the sensitivity was 96.1% and specificity was 93.9%.
CONCLUSIONS AND RELEVANCE In this evaluation of retinal fundus photographs from adults
with diabetes, an algorithm based on deep machine learning had high sensitivity and
specificity for detecting referable diabetic retinopathy. Further research is necessary to
determine the feasibility of applying this algorithm in the clinical setting and to determine
whether use of the algorithm could lead to improved care and outcomes compared with
current ophthalmologic assessment.
JAMA. doi:10.1001/jama.2016.17216
Published online November 29, 2016.
안저 판독 인공지능의 개발
• CNN으로 후향적으로 128,175개의 안저 이미지 학습

• 미국의 안과전문의 54명이 3-7회 판독한 데이터

• 우수한 안과전문의들 7-8명의 판독 결과와 인공지능의 판독 결과 비교

• EyePACS-1 (9,963개), Messidor-2 (1,748개)
(eFigure 2. Screenshot of the Second Screen of the Grading Tool, Which Asks Graders to Assess the Image for DR, DME and Other Notable Conditions or Findings)
• EyePACS-1 과 Messidor-2 의 AUC = 0.991, 0.990
• 7-8명의 안과 전문의와 민감도와 특이도가 동일한 수준
• F-score: 0.95 (vs. 인간 의사는 0.91)
Figure 2. Validation Set Performance for Referable Diabetic Retinopathy
A, EyePACS-1: AUC, 99.1%; 95% CI, 98.8%-99.3%. B, Messidor-2: AUC, 99.0%; 95% CI, 98.6%-99.5%.
Performance of the algorithm (black curve) and ophthalmologists (colored circles) for the presence of referable diabetic retinopathy (moderate or worse diabetic retinopathy or referable diabetic macular edema) on A, EyePACS-1 (8788 fully gradable images) and B, Messidor-2 (1745 fully gradable images). The black diamonds correspond to the sensitivity and specificity of the algorithm at the high-sensitivity and high-specificity operating points. In A, for the high-sensitivity operating point, specificity was 93.4% (95% CI, 92.8%-94.0%) and sensitivity was 97.5% (95% CI, 95.8%-98.7%); for the high-specificity operating point, specificity was 98.1% (95% CI, 97.8%-98.5%) and sensitivity was 90.3% (95% CI, 87.5%-92.7%). In B, for the high-sensitivity operating point, specificity was 93.9% (95% CI, 92.4%-95.3%) and sensitivity was 96.1% (95% CI, 92.4%-98.3%); for the high-specificity operating point, specificity was 98.5% (95% CI, 97.7%-99.1%) and sensitivity was 87.0% (95% CI, 81.1%-91.0%). There were 8 ophthalmologists who graded EyePACS-1 and 7 ophthalmologists who graded Messidor-2. AUC indicates area under the receiver operating characteristic curve.
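위 그림의 high-sensitivity / high-specificity operating point는 같은 ROC 곡선 위에서 임계값만 다르게 고른 두 지점이다. 아래는 그 선택 과정을 보여주는 최소한의 파이썬 스케치로, 예측 확률과 라벨, 목표 민감도·특이도 수치는 모두 설명을 위한 가상의 값이며 논문의 실제 데이터와는 무관하다.

```python
# ROC 곡선 위에서 high-sensitivity / high-specificity operating point를 고르는 스케치입니다.
# (numpy, scikit-learn 가정; 데이터와 목표치는 가상이며 논문의 실제 수치와는 무관합니다)
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, size=1000)                               # 1 = referable DR (가상)
y_score = np.clip(0.7 * y_true + rng.normal(0.2, 0.2, 1000), 0, 1)   # 모델의 예측 확률 (가상)

fpr, tpr, thresholds = roc_curve(y_true, y_score)
specificity = 1 - fpr

# 1) high-sensitivity operating point: 민감도 >= 97.5%를 만족하는 지점 중 특이도가 가장 높은 임계값
hs = np.where(tpr >= 0.975)[0][0]
# 2) high-specificity operating point: 특이도 >= 98%를 만족하는 지점 중 민감도가 가장 높은 임계값
hsp = np.where(specificity >= 0.98)[0][-1]

print("high-sensitivity point: threshold=%.3f, sens=%.3f, spec=%.3f" % (thresholds[hs], tpr[hs], specificity[hs]))
print("high-specificity point: threshold=%.3f, sens=%.3f, spec=%.3f" % (thresholds[hsp], tpr[hsp], specificity[hsp]))
```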
안저 판독 인공지능의 정확도
•2018년 4월 FDA는 안저사진을 판독하여 당뇨성 망막병증(DR)을 진단하는 인공지능 시판 허가

•IDx-DR: 클라우드 기반의 소프트웨어로, Topcon NW400 로 찍은 사진을 판독

•의사의 개입 없이 안저 사진을 판독하여 DR 여부를 진단

•두 가지 답 중에 하나를 준다

•1) mild DR 이상이 detection 되었으니, 의사에게 가봐라

•2) mild DR 이상은 없는 것 같으니, 12개월 이후에 다시 검사 받아봐라

•임상시험 및 성능

•10개의 병원에서 멀티센터로 900명 환자의 데이터를 분석

•민감도와 특이도가 각각 87.4%, 89.5% (JAMA 논문의 구글 인공지능 보다 낮음)

•FDA가 de novo premarket review pathway로 진행
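IDx-DR이 제공하는 두 가지 출력(안과 의뢰 vs. 12개월 후 재검)을 개념적으로 흉내 낸 최소한의 스케치다. 임계값과 함수·변수 이름은 모두 설명을 위한 가정이며, 실제 제품의 내부 구현과는 무관하다.

```python
# IDx-DR이 제공하는 두 가지 출력(의뢰 vs. 12개월 후 재검)을 개념적으로 흉내 낸 스케치입니다.
# (threshold, 함수/변수 이름은 모두 설명용 가정이며 실제 제품 구현과는 무관합니다)

def triage(dr_probability: float, threshold: float = 0.5) -> str:
    """모델이 출력한 'mild DR 이상' 확률을 받아 두 가지 권고 중 하나를 돌려준다."""
    if dr_probability >= threshold:
        return "mild DR 이상이 검출됨: 안과 전문의에게 의뢰"
    return "mild DR 이상 소견 없음: 12개월 후 재검 권고"

print(triage(0.82))  # -> 의뢰
print(triage(0.10))  # -> 12개월 후 재검
```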
병리과
조직검사; 확진을 내리는 대법관
A B DC
Benign without atypia / Atypic / DCIS (ductal carcinoma in situ) / Invasive Carcinoma
Interpretation?
Elmore et al. JAMA 2015
Diagnostic Concordance Among Pathologists
유방암 병리 데이터 판독하기
Figure 4. Participating Pathologists’ Interpretations of Each of the 240 Breast Biopsy Test Cases
• A: Benign without atypia: 72 cases, 2070 total interpretations
• B: Atypia: 72 cases, 2070 total interpretations
• C: DCIS: 73 cases, 2097 total interpretations
• D: Invasive carcinoma: 23 cases, 663 total interpretations
(각 케이스별로 참여 병리학과 전문의들의 판독이 benign without atypia / atypia / DCIS / invasive carcinoma 네 범주에 어떻게 분포하는지(%)를 보여줌. DCIS indicates ductal carcinoma in situ.)
Elmore et al. JAMA 2015
유방암 판독에 대한 병리학과 전문의들의 불일치도
Elmore et al. JAMA 2015
•정확도: 75.3%

(정답은 경험이 많은 세 명의 병리학과 전문의가 협의를 통해 정하였음)
…spent on this activity was 16 (95% CI, 15-17); 43 participants were awarded the maximum 20 hours.
Pathologists’ Diagnoses Compared With Consensus-Derived
Reference Diagnoses
The 115 participants each interpreted 60 cases, providing 6900
total individual interpretations for comparison with the con-
sensus-derived reference diagnoses (Figure 3). Participants
agreed with the consensus-derived reference diagnosis for
75.3% of the interpretations (95% CI, 73.4%-77.0%). Partici-
pants (n = 94) who completed the CME activity reported that
Patient and Pathologist Characteristics Associated With
Overinterpretation and Underinterpretation
The association of breast density with overall pathologists’
concordance (as well as both overinterpretation and under-
interpretation rates) was statistically significant, as shown
in Table 3 when comparing mammographic density grouped
into 2 categories (low density vs high density). The overall
concordance estimates also decreased consistently with
increasing breast density across all 4 Breast Imaging-
Reporting and Data System (BI-RADS) density categories:
BI-RADS A, 81% (95% CI, 75%-86%); BI-RADS B, 77% (95%
Figure 3. Comparison of 115 Participating Pathologists’ Interpretations vs the Consensus-Derived Reference Diagnosis for 6900 Total Case Interpretations(a)

Consensus Reference Diagnosis(b) | Benign without atypia | Atypia | DCIS | Invasive carcinoma | Total
Benign without atypia | 1803 | 200 | 46 | 21 | 2070
Atypia | 719 | 990 | 353 | 8 | 2070
DCIS | 133 | 146 | 1764 | 54 | 2097
Invasive carcinoma | 3 | 0 | 23 | 637 | 663
Total | 2658 | 1336 | 2186 | 720 | 6900

(열: 참여 병리학과 전문의들의 판독. DCIS indicates ductal carcinoma in situ.
a: Concordance noted in 5194 of 6900 case interpretations, or 75.3%.
b: Reference diagnosis was obtained from consensus of 3 experienced breast pathologists.)
총 240개의 병리 샘플에 대해서,

115명의 병리학과 전문의들이 판독한 총 6900건의 사례를 정답과 비교
유방암 판독에 대한 병리학과 전문의들의 불일치도
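위 Figure 3의 confusion matrix(행: consensus 기준 진단, 열: 참여 병리 전문의 판독)로부터 전체 일치도 75.3%와 진단별 일치도를 직접 계산해 보는 최소한의 파이썬 스케치다.

```python
# Figure 3의 confusion matrix(행: consensus 진단, 열: 병리 전문의 판독)에서
# 전체 일치도와 진단별 일치도를 계산하는 간단한 스케치입니다. (numpy 가정)
import numpy as np

labels = ["Benign without atypia", "Atypia", "DCIS", "Invasive carcinoma"]
cm = np.array([
    [1803, 200,   46,  21],
    [ 719, 990,  353,   8],
    [ 133, 146, 1764,  54],
    [   3,   0,   23, 637],
])

overall = np.trace(cm) / cm.sum()          # (1803+990+1764+637) / 6900 = 0.753
per_class = np.diag(cm) / cm.sum(axis=1)   # 진단별 일치도 (행 기준)

print(f"overall concordance: {overall:.3f}")   # ≈ 0.753 (75.3%)
for name, acc in zip(labels, per_class):
    print(f"{name}: {acc:.3f}")
```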
Fig. 1. C-Path image processing pipeline (개요)
• A: Basic image processing and feature construction: H&E image를 superpixel로 분할하고, 각 superpixel 안에서 핵(nuclei)을 식별
• B: Building an epithelial/stromal classifier: epithelial vs. stroma classifier 구축
• C: Constructing higher-level contextual/relational features: epithelial nuclear neighbor 간의 관계, morphologically regular/irregular nuclei 간의 관계, epithelial–stromal object 간의 관계, epithelial nuclei와 cytoplasm의 관계, stromal nuclei/stromal matrix의 특성, epithelial nuclei/cytoplasm의 특성, contiguous epithelial region과 그 안의 nuclear object의 관계 등
• D: Learning an image-based model to predict survival: 환자들의 processed image로부터 생존 예측 모델 학습
TMAs contain 0.6-mm-diameter cores (median
of two cores per case) that represent only a small
sample of the full tumor. We acquired data from
two separate and independent cohorts: Nether-
lands Cancer Institute (NKI; 248 patients) and
Vancouver General Hospital (VGH; 328 patients).
Unlike previous work in cancer morphom-
etry (18–21), our image analysis pipeline was
not limited to a predefined set of morphometric
features selected by pathologists. Rather, C-Path
measures an extensive, quantitative feature set
from the breast cancer epithelium and the stro-
ma (Fig. 1). Our image processing system first
performed an automated, hierarchical scene seg-
mentation that generated thousands of measure-
ments, including both standard morphometric
descriptors of image objects and higher-level
contextual, relational, and global image features.
The pipeline consisted of three stages (Fig. 1, A
to C, and tables S8 and S9). First, we used a set of
processing steps to separate the tissue from the
background, partition the image into small regions
of coherent appearance known as superpixels,
find nuclei within the superpixels, and construct
(Fig. 1 캡션, 계속) 각 image object는 기본적인 세포 형태학적 속성에 따라 분류된다 (epithelial regular nuclei = red; epithelial atypical nuclei = pale blue; epithelial cytoplasm = purple; stromal matrix = green; stromal round nuclei = dark green; stromal spindled nuclei = teal blue; unclassified regions = dark gray; spindled nuclei in unclassified regions = yellow; round nuclei in unclassified regions = gray; background = white). After the classification of each image object, a rich feature set is constructed. (D) Learning an image-based model to predict survival: processed images from patients alive at 5 years after surgery and from patients deceased at 5 years after surgery were used to construct an image-based prognostic model (L1-regularized logistic regression model building → 5YS predictive model). After construction of the model, it was applied to a test set of breast cancer images (not used in model building) to classify patients as high or low risk of death by 5 years.
Digital Pathologist
•유방암 조직 이미지에서 6,642가지의 정량적인 feature를 추출하여 사용
•이 feature들은 표준적인 morphometric descriptor를 포함할 뿐만 아니라,
•higher-level contextual, relational, global image feature들을 포함
Sci Transl Med. 2011 Nov 9;3(108):108ra113
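C-Path가 사용한 것과 같은 형태의 L1-regularized logistic regression으로 5년 생존 여부(5YS)를 예측하는 모델을 만드는 최소한의 스케치다. feature 행렬과 라벨은 가상의 값이며, 실제 C-Path 구현과는 무관하다.

```python
# C-Path식 접근을 흉내 낸 스케치: 수천 개의 정량적 image feature로부터
# L1-regularized logistic regression으로 5년 생존(5YS) 여부를 예측합니다.
# (numpy, scikit-learn 가정; 데이터는 가상이며 실제 C-Path 구현과는 무관합니다)
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_patients, n_features = 400, 6642             # 논문과 같은 수의 feature를 가정
X = rng.normal(size=(n_patients, n_features))  # 각 환자의 정량적 image feature (가상)
w = np.zeros(n_features); w[:20] = rng.normal(size=20)       # 일부 feature만 예후와 연관된다고 가정
y = (X @ w + rng.normal(size=n_patients) > 0).astype(int)    # 1 = 5년 내 사망 (가상)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# L1 패널티가 대부분의 feature 계수를 0으로 만들어, 예후와 관련된 소수의 feature만 남깁니다.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X_tr, y_tr)

print("test AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
print("selected features:", int(np.sum(model.coef_ != 0)))
```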
Fig. 5. Top epithelial features (bootstrap 분석에서 상위에 오른 8가지 epithelial feature; left panels, improved prognosis; right panels, worse prognosis):
• A: SD of the ratio of the pixel intensity SD to the mean intensity for pixels within a ring of the center of epithelial nuclei
• B: The sum of the number of unclassified objects
• C: SD of the maximum blue pixel value for atypical epithelial nuclei
• D: Maximum distance between atypical epithelial nuclei
• E: Minimum elliptic fit of epithelial contiguous regions
• F: SD of distance between epithelial cytoplasmic and nuclear objects
• G: Average border between epithelial cytoplasmic objects
• H: Maximum value of the minimum green pixel intensity value in epithelial contiguous regions
…intensity value of stromal-contiguous regions. This feature received a
value of zero when stromal regions contained dark pixels (such as
inflammatory nuclei). The feature received a positive value when
stromal objects were devoid of dark pixels. This feature provided in-
formation about the relationship between stromal cellular composi-
tion and prognosis and suggested that the presence of inflammatory
cells in the stroma is associated with poor prognosis, a finding con-
sistent with previous observations (32). The third most significant
stromal feature (Fig. 4C) was a measure of the relative border between
spindled stromal nuclei to round stromal nuclei, with an increased rel-
ative border of spindled stromal nuclei to round stromal nuclei asso-
ciated with worse overall survival. Although the biological underpinning
of this morphologic feature is currently not known, this analysis sug-
gested that spatial relationships between different populations of stro-
mal cell types are associated with breast cancer progression.
Reproducibility of C-Path 5YS model predictions on
samples with multiple TMA cores
For the C-Path 5YS model (which was trained on the full NKI data
set), we assessed the intrapatient agreement of model predictions when
predictions were made separately on each image contributed by pa-
tients in the VGH data set. For the 190 VGH patients who contributed
two images with complete image data, the binary predictions (high
or low risk) on the individual images agreed with each other for 69%
(131 of 190) of the cases and agreed with the prediction on the aver-
aged data for 84% (319 of 380) of the images. Using the continuous
prediction score (which ranged from 0 to 100), the median of the ab-
solute difference in prediction score among the patients with replicate
images was 5%, and the Spearman correlation among replicates was
0.27 (P = 0.0002) (fig. S3). This degree of intrapatient agreement is
only moderate, and these findings suggest significant intrapatient tumor
heterogeneity, which is a cardinal feature of breast carcinomas (33–35).
Qualitative visual inspection of images receiving discordant scores
suggested that intrapatient variability in both the epithelial and the
stromal components is likely to contribute to discordant scores for
the individual images. These differences appeared to relate both to
the proportions of the epithelium and stroma and to the appearance
of the epithelium and stroma. Last, we sought to analyze whether sur-
vival predictions were more accurate on the VGH cases that contributed
multiple cores compared to the cases that contributed only a single
core. This analysis showed that the C-Path 5YS model showed signif-
icantly improved prognostic prediction accuracy on the VGH cases
for which we had multiple images compared to the cases that con-
tributed only a single image (Fig. 7). Together, these findings show
a significant degree of intrapatient variability and indicate that increased
tumor sampling is associated with improved model performance.
DISCUSSION
We have developed a system for the automatic hierarchical segmen-
tation of microscopic breast cancer images and the generation of a
rich set of quantitative features to characterize the image. On the
basis of these features, we built an image-based model to predict pa-
tient outcome and to identify clinically significant morphologic
features. Most previous work in quantitative pathology has required
Fig. 4. Top stromal features associated with survival: (A) Variability in absolute difference in intensity between stromal matrix regions and neighbors; (B) Presence of stromal regions without nuclei; (C) Average relative border of stromal spindle nuclei to stromal round nuclei. 각 패널은 H&E image를 epithelial/stromal object로 분리한 뒤, 해당 feature의 높은 점수와 낮은 점수의 예시를 보여준다.
Top epithelial features. The eight panels in the figure (A to H) each shows one of the top-ranking epithelial features from the bootstrap analysis. Left panels, improved prognosis; right panels, worse prognosis.
•C-Path를 이용한 예후 예측 모델이 두 개의 독립적인 유방암 코호트(NKI, VGH)에서 생존과 강한 상관관계
•3개의 stromal feature가 오히려 epithelial feature보다 더 강한 상관관계
•Stromal morphologic structure가 유방암의 새로운 예후 예측 인자가 될 수도 있음
Sci Transl Med. 2011 Nov 9;3(108):108ra113
ISBI Grand Challenge on
Cancer Metastases Detection in Lymph Node
Camelyon16 (>200 registrants)
International Symposium on Biomedical Imaging 2016
H&E Image Processing Framework
• Train: whole-slide image에서 tumor / normal 영역의 patch를 샘플링하여 training data를 구성하고, Convolutional Neural Network를 학습
• Test: whole-slide image를 overlapping image patch로 나누어 CNN으로 각 patch의 P(tumor)를 계산하고, tumor probability map (0.0–1.0)을 생성
https://blogs.nvidia.com/blog/2016/09/19/deep-learning-breast-cancer-diagnosis/
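위 framework의 test 단계(겹치는 patch 단위 추론 → tumor probability map)를 개념적으로 보여주는 스케치다. model은 patch를 입력받아 P(tumor)를 출력하는 학습된 CNN이라고 가정하며, 함수·변수 이름은 설명을 위한 가정이다.

```python
# 학습된 CNN(model)이 있다고 가정하고, whole-slide image를 겹치는 patch로 잘라
# patch별 P(tumor)를 계산해 tumor probability map을 만드는 스케치입니다.
# (numpy 가정; slide는 [H, W, 3] 배열, model.predict는 patch 배치를 받아 확률을 돌려준다고 가정)
import numpy as np

def tumor_probability_map(slide, model, patch=256, stride=128):
    h, w, _ = slide.shape
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heatmap = np.zeros((rows, cols), dtype=np.float32)
    for i in range(rows):
        for j in range(cols):
            y, x = i * stride, j * stride
            tile = slide[y:y + patch, x:x + patch]                  # overlapping patch
            heatmap[i, j] = float(model.predict(tile[None, ...]))   # P(tumor) ∈ [0, 1]
    return heatmap  # 0.0 ~ 1.0 사이의 tumor probability map
```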
Clinical study on ISBI dataset
Error Rate
Pathologist in competition setting 3.5%
Pathologists in clinical practice (n = 12) 13% - 26%
Pathologists on micro-metastasis(small tumors) 23% - 42%
Beck Lab Deep Learning Model 0.65%
Beck Lab’s deep learning model now outperforms pathologist
Andrew Beck, Machine Learning for Healthcare, MIT 2017
구글의 유방 병리 판독 인공지능
• The localization score(FROC) for the algorithm reached 89%, which significantly
exceeded the score of 73% for a pathologist with no time constraint.
인공지능의 민감도 + 인간의 특이도
Yun Liu et al. Detecting Cancer Metastases on Gigapixel Pathology Images (2017)
• 구글의 인공지능은 민감도에서 큰 개선 (92.9%, 88.5%)

•@8FP: FP를 8개까지 봐주면서, 달성할 수 있는 민감도

•FROC: FP를 슬라이드당 1/4, 1/2, 1, 2, 4, 8개를 허용한 민감도의 평균

•즉, FP를 조금 봐준다면, 인공지능은 매우 높은 민감도를 달성 가능

• 인간 병리학자는 민감도 73%에 반해, 특이도는 거의 100% 달성
•인간 병리학자와 인공지능 병리학자는 서로 잘하는 것이 다름 

•양쪽이 협력하면 판독 효율성, 일관성, 민감도 등에서 개선 기대 가능
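본문에서 설명한 FROC(슬라이드당 false positive를 1/4, 1/2, 1, 2, 4, 8개 허용했을 때의 민감도 평균)를 계산하는 방식을 보여주는 스케치다. 민감도 값은 설명을 위한 가상의 수치다.

```python
# FROC 점수 계산 스케치: 슬라이드당 허용 false positive 수(1/4, 1/2, 1, 2, 4, 8)마다의
# 민감도를 구한 뒤 평균을 냅니다. (여기서는 가상의 민감도 값을 사용)
fp_allowed = [0.25, 0.5, 1, 2, 4, 8]                        # 슬라이드당 허용 FP 개수
sensitivity_at_fp = [0.78, 0.83, 0.87, 0.90, 0.93, 0.95]    # 각 지점에서의 민감도 (가상)

froc = sum(sensitivity_at_fp) / len(sensitivity_at_fp)
print(f"FROC score = {froc:.3f}")   # 여섯 지점 민감도의 평균
```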
ARTICLES
https://doi.org/10.1038/s41591-018-0177-5
According to the American Cancer Society and the Cancer Statistics Center (see URLs), over 150,000 patients with lung
cancer succumb to the disease each year (154,050 expected
for 2018), while another 200,000 new cases are diagnosed on a
yearly basis (234,030 expected for 2018). It is one of the most widely
spread cancers in the world because of not only smoking, but also
exposure to toxic chemicals like radon, asbestos and arsenic. LUAD
and LUSC are the two most prevalent types of non–small cell lung
cancer1
, and each is associated with discrete treatment guidelines. In
the absence of definitive histologic features, this important distinc-
tion can be challenging and time-consuming, and requires confir-
matory immunohistochemical stains.
Classification of lung cancer type is a key diagnostic process
because the available treatment options, including conventional
chemotherapy and, more recently, targeted therapies, differ for
LUAD and LUSC2
. Also, a LUAD diagnosis will prompt the search
for molecular biomarkers and sensitizing mutations and thus has
a great impact on treatment options3,4
. For example, epidermal
growth factor receptor (EGFR) mutations, present in about 20% of
LUAD, and anaplastic lymphoma receptor tyrosine kinase (ALK)
rearrangements, present in<5% of LUAD5
, currently have tar-
geted therapies approved by the Food and Drug Administration
(FDA)6,7
. Mutations in other genes, such as KRAS and tumor pro-
tein P53 (TP53) are very common (about 25% and 50%, respec-
tively) but have proven to be particularly challenging drug targets
so far5,8
. Lung biopsies are typically used to diagnose lung cancer
type and stage. Virtual microscopy of stained images of tissues is
typically acquired at magnifications of 20×to 40×, generating very
large two-dimensional images (10,000 to>100,000 pixels in each
dimension) that are oftentimes challenging to visually inspect in
an exhaustive manner. Furthermore, accurate interpretation can be
difficult, and the distinction between LUAD and LUSC is not always
clear, particularly in poorly differentiated tumors; in this case, ancil-
lary studies are recommended for accurate classification9,10
. To assist
experts, automatic analysis of lung cancer whole-slide images has
been recently studied to predict survival outcomes11
and classifica-
tion12
. For the latter, Yu et al.12
combined conventional thresholding
and image processing techniques with machine-learning methods,
such as random forest classifiers, support vector machines (SVM) or
Naive Bayes classifiers, achieving an AUC of ~0.85 in distinguishing
normal from tumor slides, and ~0.75 in distinguishing LUAD from
LUSC slides. More recently, deep learning was used for the classi-
fication of breast, bladder and lung tumors, achieving an AUC of
0.83 in classification of lung tumor types on tumor slides from The
Cancer Genome Atlas (TCGA)13
. Analysis of plasma DNA values
was also shown to be a good predictor of the presence of non–small
cell cancer, with an AUC of ~0.94 (ref. 14
) in distinguishing LUAD
from LUSC, whereas the use of immunochemical markers yields an
AUC of ~0.94115
.
Here, we demonstrate how the field can further benefit from deep
learning by presenting a strategy based on convolutional neural
networks (CNNs) that not only outperforms methods in previously
Classification and mutation prediction from
non–small cell lung cancer histopathology
images using deep learning
Nicolas Coudray, Paolo Santiago Ocampo, Theodore Sakellaropoulos, Navneet Narula, Matija Snuderl, David Fenyö, Andre L. Moreira, Narges Razavian and Aristotelis Tsirigos
Visual inspection of histopathology slides is one of the main methods used by pathologists to assess the stage, type and sub-
type of lung tumors. Adenocarcinoma (LUAD) and squamous cell carcinoma (LUSC) are the most prevalent subtypes of lung
cancer, and their distinction requires visual inspection by an experienced pathologist. In this study, we trained a deep con-
volutional neural network (inception v3) on whole-slide images obtained from The Cancer Genome Atlas to accurately and
automatically classify them into LUAD, LUSC or normal lung tissue. The performance of our method is comparable to that of
pathologists, with an average area under the curve (AUC) of 0.97. Our model was validated on independent datasets of frozen
tissues, formalin-fixed paraffin-embedded tissues and biopsies. Furthermore, we trained the network to predict the ten most
commonly mutated genes in LUAD. We found that six of them—STK11, EGFR, FAT1, SETBP1, KRAS and TP53—can be pre-
dicted from pathology images, with AUCs from 0.733 to 0.856 as measured on a held-out population. These findings suggest
that deep-learning models can assist pathologists in the detection of cancer subtype or gene mutations. Our approach can be
applied to any cancer type, and the code is available at https://github.com/ncoudray/DeepPATH.
• NYU 연구팀
• TCGA의 병리 이미지(whole-slide image)
• 구글넷(Inception v3)로 학습
…and that of each pathologist, of their consensus and finally of our deep-learning model (with an optimal threshold leading to sensitivity and specificity of 89% and 93%) using Cohen’s Kappa statistic… Each TCGA image is almost exclusively composed of either LUAD cells, LUSC cells, or normal lung tissue. As a result, several images in the two new datasets contain features that the…
Fig. 1 | Data and strategy. a, Number of whole-slide images per class (Normal 459, LUAD 567, LUSC 609). b, Strategy for training: (b, i) images of lung cancer tissues were first downloaded from the Genomic Data Commons database; (b, ii) slides were then separated into a training (70%), a validation (15%) and a test set (15%); (b, iii) slides were tiled by nonoverlapping 512 × 512-pixel windows, omitting those with over 50% background; (b, iv) the Inception v3 architecture was used and partially or fully retrained using the training and validation tiles; (b, v) classifications were performed on tiles from an independent test set, and the results were finally aggregated per slide to extract the heatmaps and the AUC statistics. c, Size distribution of the image widths (gray) and heights (black). d, Distribution of the number of tiles per slide.
Classification and mutation prediction from
non-small cell lung cancer histopathology
images using deep learning
•Normal, adenocarcinoma (LUAD), squamous cell carcinoma
(LUSC)를 매우 정확하게 구분
• Tumor vs. normal, LUAD vs. LUSC 의 구분에 AUC 0.99, 0.95 이상
• Normal, LUAD, LUSC 중 하나를 다른 두 가지와 구분하는 것도 5×, 20× 배율 모두에서 AUC 0.9 이상
•이 정확도는 세 명의 병리과 전문의와 동등한 수준
• 딥러닝이 틀린 것 중에 50%는, 병리과 전문의 세 명 중 적어도 한 명이 틀렸고,
• 병리과 전문의 세 명 중 적어도 한 명이 틀린 케이스 중, 83%는 딥러닝이 정확히 분류했다.
• 더 나아가서 TCGA를 바탕으로 개발된 인공지능을,
• 완전히 독립적인, 특히 fresh frozen, FFPE, biopsy 의 세 가지 방식으로 얻은
• LUAD, LUSC 데이터에 적용해보았을 때에도 대부분 AUC 0.9 이상으로 정확하게 판독
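위에서 설명한 학습 전략(슬라이드를 512×512 타일로 분할한 뒤 Inception v3를 재학습)을 개념적으로 보여주는 스케치다. tf.keras 기반의 단순화된 예시이며, 논문의 실제 코드(https://github.com/ncoudray/DeepPATH)와는 무관하다.

```python
# 슬라이드를 512x512 타일로 잘라 Inception v3를 재학습하는 개념 스케치입니다.
# (tensorflow/keras 가정; 단순화된 예시로, 논문의 실제 구현과는 다를 수 있습니다)
import numpy as np
import tensorflow as tf

def tile_slide(slide, tile=512, background_threshold=0.5):
    """슬라이드([H, W, 3], 0~255)를 겹치지 않는 512x512 타일로 자르고,
    배경(밝은 픽셀)이 50%를 넘는 타일은 버립니다."""
    h, w, _ = slide.shape
    tiles = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            t = slide[y:y + tile, x:x + tile]
            if (t.mean(axis=-1) > 220).mean() < background_threshold:
                tiles.append(t)
    return np.stack(tiles) if tiles else np.empty((0, tile, tile, 3))

# ImageNet으로 사전학습된 Inception v3 위에 3-class(Normal/LUAD/LUSC) 분류층을 얹습니다.
base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet",
                                          input_shape=(512, 512, 3), pooling="avg")
model = tf.keras.Sequential([base, tf.keras.layers.Dense(3, activation="softmax")])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(tile_dataset, ...)  # 타일 단위로 학습한 뒤, 테스트 시에는 타일 예측을 슬라이드 단위로 집계
```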
…fibrosis, inflammation or blood was also present, but also in very poorly differentiated tumors. Sections obtained from biopsies are usually much smaller, which reduces the number of tiles per slide, but the performance of our model remains consistent for the 102 samples tested (AUC ~0.834–0.861 using 20× magnification and 0.871–0.928 using 5× magnification; Fig. 2c), and the accuracy of the classification does not correlate with the sample size or the size of the area selected by our pathologist (Supplementary Fig. 4; …). … the tumor area on the frozen and FFPE samples, then applied this model to the biopsies and finally applied the TCGA-trained three-way classifier on the tumor area selected by the automatic tumor selection model. The per-tile AUC of the automatic tumor selection model (using the pathologist’s tumor selection as reference) was 0.886 (CI, 0.880–0.891) for the biopsies, 0.797 (CI, 0.795–0.800) for the frozen samples, and 0.852 (CI, 0.808–0.895) for the FFPE samples. As demonstrated in Supplementary Fig. 3a…
Fig. 2 | Classification of presence and type of tumor on alternative cohorts (NYU Langone Medical Center). ROC AUCs of the three-way classifier:
• Frozen sections (n = 98): LUAD 0.919 (CI 0.861–0.949) at 5×, 0.913 (CI 0.849–0.963) at 20×; LUSC 0.977 (CI 0.949–0.995) at 5×, 0.941 (CI 0.894–0.977) at 20×
• FFPE sections (n = 140): LUAD 0.861 (CI 0.792–0.919) at 5×, 0.833 (CI 0.762–0.894) at 20×; LUSC 0.975 (CI 0.945–0.996) at 5×, 0.932 (CI 0.884–0.971) at 20×
• Biopsies (n = 102): LUAD 0.871 (CI 0.784–0.938) at 5×, 0.834 (CI 0.743–0.909) at 20×; LUSC 0.928 (CI 0.871–0.972) at 5×, 0.861 (CI 0.780–0.928) at 20×
On the right of each plot, examples of raw images are shown with an overlap in light gray of the mask generated by a pathologist and the corresponding heatmaps obtained with the three-way classifier. Scale bars, 1 mm.
…serine/threonine kinase 11 (STK11), EGFR, FAT atypical cadherin 1 (FAT1), SET binding protein 1 (SETBP1), KRAS and TP53 were between 0.733 and 0.856 (Table 1). Availability of more data for training is expected to substantially improve the performance.
As mentioned earlier, EGFR already has targeted therapies. STK11, also known as liver kinase B1 (LKB1), is a tumor suppressor inactivated in 15–30% of non–small cell lung cancers [36] and is also a potential therapeutic target: it has been reported that phenformin, a mitochondrial inhibitor, increases survival in mice [37]. Also, it has been shown that STK11 mutations in combination with KRAS…
Fig. 3 | Gene mutation prediction from histopathology slides give promising results for at least six genes. a, Distribution of probability of mutation in genes from slides where each mutation is present or absent (tile aggregation by averaging output probability). b, ROC curves associated with the top four predictions in a (EGFR, SETBP1, STK11, TP53). c, Allele frequency as a function of slides classified by the deep-learning network as having a certain gene mutation (P ≥ 0.5) or the wild type (P < 0.5).
Table 1 | AUC achieved by the network trained on mutations (with 95% CIs)

Mutation | Per-tile AUC | Per-slide AUC (average predicted probability) | Per-slide AUC (percentage of positively classified tiles)
STK11 | 0.845 (0.838–0.852) | 0.856 (0.709–0.964) | 0.842 (0.683–0.967)
EGFR | 0.754 (0.746–0.761) | 0.826 (0.628–0.979) | 0.782 (0.516–0.979)
SETBP1 | 0.785 (0.776–0.794) | 0.775 (0.595–0.931) | 0.752 (0.550–0.927)
TP53 | 0.674 (0.666–0.681) | 0.760 (0.626–0.872) | 0.754 (0.627–0.870)
FAT1 | 0.739 (0.732–0.746) | 0.750 (0.512–0.940) | 0.750 (0.491–0.946)
KRAS | 0.814 (0.807–0.829) | 0.733 (0.580–0.857) | 0.716 (0.552–0.854)
KEAP1 | 0.684 (0.670–0.694) | 0.675 (0.466–0.865) | 0.659 (0.440–0.856)
LRP1B | 0.640 (0.633–0.647) | 0.656 (0.513–0.797) | 0.657 (0.512–0.799)
FAT4 | 0.768 (0.760–0.775) | 0.642 (0.470–0.799) | 0.640 (0.440–0.856)
NF1 | 0.714 (0.704–0.723) | 0.640 (0.419–0.845) | 0.632 (0.405–0.845)

n = 62 slides from 59 patients.
•Radiogenomics
•병리 이미지만 보고, LUAD에서 호발하는 EGFR, TP53, KRAS 등 6개 유전자의 mutation 존재 여부를 AUC 0.7–0.8 수준으로 판독
•심지어는 allele frequency 도 통계적으로 유의미하게 맞췄다
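Table 1의 두 가지 per-slide 집계 방식(타일 확률의 평균, 양성으로 분류된 타일의 비율)을 보여주는 최소한의 스케치다. 타일 확률 값은 설명을 위한 가상의 예시다.

```python
# 타일 단위 예측을 슬라이드 단위로 집계하는 두 가지 방식(Table 1 참조)의 스케치입니다.
# (numpy 가정; tile_probs는 한 슬라이드의 타일별 mutation 예측 확률로, 여기서는 가상의 값)
import numpy as np

tile_probs = np.array([0.1, 0.3, 0.7, 0.8, 0.2, 0.9, 0.6, 0.4])  # 가상의 타일별 확률

# 1) average predicted probability: 타일 확률의 평균을 슬라이드 점수로 사용
slide_score_avg = tile_probs.mean()

# 2) percentage of positively classified tiles: 0.5 이상으로 분류된 타일의 비율을 점수로 사용
slide_score_pct = (tile_probs >= 0.5).mean()

print(f"average probability   : {slide_score_avg:.3f}")   # 0.500
print(f"positive tile fraction: {slide_score_pct:.3f}")   # 0.500
```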
https://www.facebook.com/groups/TensorFlowKR/permalink/633902253617503/
구글 엔지니어들이 AACR 2018 에서

의료 인공지능 기조 연설
AACR 2018
AACR 2018: 인공지능을 이용하면 총 판독 시간을 줄일 수 있다
AACR 2018: 인공지능을 이용하면 판독 정확도를 (micro에서 특히) 높일 수 있다
Access to Pathology AI algorithms is limited
Adoption barriers for digital pathology
• Expensive scanners
• IT infrastructure required
• Disrupt existing workflows
• Not all clinical needs addressed (speed, focus, etc)
 
 
Figure 1: System overview. (1) Schematic sketch of the whole device. (2) A photo of the actual implementation.
An Augmented Reality Microscope for
Realtime Automated Detection of Cancer
https://research.googleblog.com/2018/04/an-augmented-reality-microscope.html
An Augmented Reality Microscope for Cancer Detection
https://www.youtube.com/watch?v=9Mz84cwVmS0
 
 
 
 
 
 
Figure 3: Sample views through the lens. Top: Lymph node metastasis detection at 4X, 10X, 20X, and 40X. Bottom: Prostate cancer detection at 4X, 10X, and 20X.
An Augmented Reality Microscope for
Realtime Automated Detection of Cancer
https://research.googleblog.com/2018/04/an-augmented-reality-microscope.html
An Augmented Reality Microscope for
Realtime Automated Detection of Cancer
 
 
• PR quantification / Ki67 quantification / P53 quantification / CD8 quantification
• Mitosis counting on H&E slide / Measurement of tumor size
• Identification of H. pylori / Identification of Mycobacterium
• Identification of prostate cancer region with estimation of percentage tumor involvement
https://research.googleblog.com/2018/04/an-augmented-reality-microscope.html
http://www.rolls-royce.com/about/our-technology/enabling-technologies/engine-health-management.aspx#sense
250 sensors to monitor the “health” of the GE turbines
Fig 1. What can consumer wearables do? Heart rate can be measured with an oximeter built into a ring [3], muscle activity with an electromyographic sensor embedded into clothing [4], stress with an electrodermal sensor incorporated into a wristband [5], and physical activity or sleep patterns via an accelerometer in a watch [6,7]. In addition, a female’s most fertile period can be identified with detailed body temperature tracking [8], while levels of mental attention can be monitored with a small number of non-gelled electroencephalogram (EEG) electrodes [9]. Levels of social interaction (also known to…
PLOS Medicine 2016
• 복잡한 의료 데이터의 분석 및 insight 도출
• 영상 의료/병리 데이터의 분석/판독
• 연속 데이터의 모니터링 및 예방/예측
의료 인공지능의 세 유형
Project Artemis at UOIT
S E P S I S
A targeted real-time early warning score (TREWScore)
for septic shock
Katharine E. Henry,1
David N. Hager,2
Peter J. Pronovost,3,4,5
Suchi Saria1,3,5,6
*
Sepsis is a leading cause of death in the United States, with mortality highest among patients who develop septic
shock. Early aggressive treatment decreases morbidity and mortality. Although automated screening tools can detect
patients currently experiencing severe sepsis and septic shock, none predict those at greatest risk of developing
shock. We analyzed routinely available physiological and laboratory data from intensive care unit patients and devel-
oped “TREWScore,” a targeted real-time early warning score that predicts which patients will develop septic shock.
TREWScore identified patients before the onset of septic shock with an area under the ROC (receiver operating
characteristic) curve (AUC) of 0.83 [95% confidence interval (CI), 0.81 to 0.85]. At a specificity of 0.67, TREWScore
achieved a sensitivity of 0.85 and identified patients a median of 28.2 [interquartile range (IQR), 10.6 to 94.2] hours
before onset. Of those identified, two-thirds were identified before any sepsis-related organ dysfunction. In compar-
ison, the Modified Early Warning Score, which has been used clinically for septic shock prediction, achieved a lower
AUC of 0.73 (95% CI, 0.71 to 0.76). A routine screening protocol based on the presence of two of the systemic inflam-
matory response syndrome criteria, suspicion of infection, and either hypotension or hyperlactatemia achieved a low-
er sensitivity of 0.74 at a comparable specificity of 0.64. Continuous sampling of data from the electronic health
records and calculation of TREWScore may allow clinicians to identify patients at risk for septic shock and provide
earlier interventions that would prevent or mitigate the associated morbidity and mortality.
INTRODUCTION
Seven hundred fifty thousand patients develop severe sepsis and septic
shock in the United States each year. More than half of them are
admitted to an intensive care unit (ICU), accounting for 10% of all
ICU admissions, 20 to 30% of hospital deaths, and $15.4 billion in an-
nual health care costs (1–3). Several studies have demonstrated that
morbidity, mortality, and length of stay are decreased when severe sep-
sis and septic shock are identified and treated early (4–8). In particular,
one study showed that mortality from septic shock increased by 7.6%
with every hour that treatment was delayed after the onset of hypo-
tension (9).
More recent studies comparing protocolized care, usual care, and
early goal-directed therapy (EGDT) for patients with septic shock sug-
gest that usual care is as effective as EGDT (10–12). Some have inter-
preted this to mean that usual care has improved over time and reflects
important aspects of EGDT, such as early antibiotics and early ag-
gressive fluid resuscitation (13). It is likely that continued early identi-
fication and treatment will further improve outcomes. However, the
Acute Physiology Score (SAPS II), Sequential Organ Failure Assessment
(SOFA) scores, Modified Early Warning Score (MEWS), and Simple
Clinical Score (SCS) have been validated to assess illness severity and
risk of death among septic patients (14–17). Although these scores
are useful for predicting general deterioration or mortality, they typical-
ly cannot distinguish with high sensitivity and specificity which patients
are at highest risk of developing a specific acute condition.
The increased use of electronic health records (EHRs), which can be
queried in real time, has generated interest in automating tools that
identify patients at risk for septic shock (18–20). A number of “early
warning systems,” “track and trigger” initiatives, “listening applica-
tions,” and “sniffers” have been implemented to improve detection
and timeliness of therapy for patients with severe sepsis and septic shock
(18, 20–23). Although these tools have been successful at detecting pa-
tients currently experiencing severe sepsis or septic shock, none predict
which patients are at highest risk of developing septic shock.
The adoption of the Affordable Care Act has added to the growing
excitement around predictive models derived from electronic health
TREWScore was recomputed as new data became available, and a patient was identified as at risk when his or her score crossed the detection threshold. In the validation set, the AUC was 0.83 (95% CI, 0.81 to 0.85) (Fig. 2). At a specificity of 0.67, TREWScore achieved a sensitivity of 0.85 and identified patients a median of 28.2 hours (IQR, 10.6 to 94.2) before the onset of septic shock. More than two-thirds (68.8%) of the patients identified were identified before any sepsis-related organ dysfunction (Fig. 3B). TREWScore was also compared with MEWS, a general metric used for detection of catastrophic deterioration that was not developed specifically for tracking sepsis.
Fig. 2. ROC for detection of septic shock before onset in the validation
set. The ROC curve for TREWScore is shown in blue, with the ROC curve for
MEWS in red. The sensitivity and specificity performance of the routine
screening criteria is indicated by the purple dot. Normal 95% CIs are shown
for TREWScore and MEWS. TPR, true-positive rate; FPR, false-positive rate.
A targeted real-time early warning score (TREWScore)
for septic shock
AUC=0.83
At a specificity of 0.67, TREWScore achieved a sensitivity of 0.85

and identified patients a median of 28.2 hours before onset.
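TREWScore와 같은 실시간 조기 경보 점수의 동작 방식을 개념적으로 보여주는 스케치다. 특성(feature), 계수, 임계값은 모두 설명을 위한 가정이며 실제 TREWScore 모델과는 무관하다.

```python
# EHR에서 주기적으로 들어오는 생체징후/검사 수치로 위험 점수를 갱신하고,
# 점수가 임계값을 넘으면 경보를 울리는 실시간 조기 경보의 개념 스케치입니다.
# (특성, 계수, 임계값은 모두 설명용 가정이며 실제 TREWScore 모델과는 무관합니다)
import math

# 가정: 로지스틱 회귀 형태의 위험 모델 (계수는 임의의 예시 값)
coefficients = {"heart_rate": 0.03, "respiratory_rate": 0.08, "lactate": 0.6, "sbp": -0.02}
intercept = -4.0
ALERT_THRESHOLD = 0.7   # 검증 데이터에서 원하는 민감도/특이도에 맞춰 고르는 값 (가정)

def risk_score(measurements: dict) -> float:
    """가장 최근 측정값으로 패혈성 쇼크 위험 확률을 계산한다."""
    z = intercept + sum(coefficients[k] * v for k, v in measurements.items())
    return 1 / (1 + math.exp(-z))

# 새로운 측정값이 들어올 때마다 점수를 다시 계산
stream = [
    {"heart_rate": 88,  "respiratory_rate": 16, "lactate": 1.1, "sbp": 120},
    {"heart_rate": 112, "respiratory_rate": 24, "lactate": 3.8, "sbp": 95},
]
for t, obs in enumerate(stream):
    score = risk_score(obs)
    print(f"t={t}: risk={score:.2f}" + ("  -> ALERT" if score >= ALERT_THRESHOLD else ""))
```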
Sugar.IQ
사용자의 음식 섭취와 그에 따른 혈당 변화,
인슐린 주입 등의 과거 기록 기반
식후 사용자의 혈당이 어떻게 변화할지
Watson 이 예측
ADA 2017, San Diego, Courtesy of Taeho Kim (Seoul Medical Center)
ADA 2017, San Diego, Courtesy of Taeho Kim (Seoul Medical Center)
ADA 2017, San Diego, Courtesy of Taeho Kim (Seoul Medical Center)
ADA 2017, San Diego, Courtesy of Taeho Kim (Seoul Medical Center)
•미국에서 아이폰 앱으로 출시

•사용이 얼마나 번거로울지가 관건

•어느 정도의 기간을 활용해야 효과가 있는가: 2주? 평생?

•Food logging 등을 어떻게 할 것인가?

•과금 방식도 아직 공개되지 않은듯
애플워치4: 심전도, 부정맥, 낙상 측정
FDA 의료기기 인허가
Cardiogram
• 실리콘밸리의 Cardiogram 은 애플워치로 측정한 심박수 데이터를 바탕으로 서비스
• 2016년 10월 Andreessen Horowitz 에서 $2m의 투자 유치
https://blog.cardiogr.am/what-do-normal-and-abnormal-heart-rhythms-look-like-on-apple-watch-7b33b4a8ecfa
• Cardiogram은 심박수에 운동, 수면, 감정, 의료적인 상태가 반영된다고 주장
• 특히, 심박 데이터를 기반으로 심방세동(atrial fibrillation)과 심방 조동(atrial flutter)의 detection 시도
Cardiogram
• Cardiogram은 심박 데이터만으로 심방세동을 detection할 수 있다고 주장
• “Irregularly irregular”
• high absolute variability (a range of 30+ bpm)
• a higher fraction missing measurements
• a lack of periodicity in heart rate variability
• 심방세동 특유의 불규칙적인 리듬을 detection 하는 정도로 생각하면 될 듯
• “불규칙적인 리듬을 가지는 (심방세동이 아닌) 다른 부정맥과 구분 가능한가?” (쉽지 않을듯)
• 따라서, 심박으로 detection한 환자를 심전도(ECG)로 confirm 하는 것이 필요
Cardiogram for A.Fib
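위에서 언급한 것과 같은 심박수 기반 feature(절대 변동 폭, 결측 측정 비율, 주기성 부족)를 간단히 계산해 보는 스케치다. feature 정의와 값은 본문 내용을 단순화한 가정이며, Cardiogram의 실제 알고리즘과는 무관하다.

```python
# 스마트워치 심박수 시계열에서 심방세동 탐지에 쓰일 법한 간단한 feature들을 계산하는 스케치입니다.
# (feature 정의는 본문 내용을 단순화한 가정이며, Cardiogram의 실제 알고리즘과는 무관합니다)
import numpy as np

def heart_rate_features(bpm, expected_samples):
    """bpm: 일정 시간 창 안에서 측정된 심박수 목록(결측은 np.nan), expected_samples: 기대되는 측정 횟수"""
    bpm = np.asarray(bpm, dtype=float)
    valid = bpm[~np.isnan(bpm)]
    return {
        "absolute_range": float(valid.max() - valid.min()),                # 절대 변동 폭 (예: 30 bpm 이상이면 의심)
        "missing_fraction": 1 - len(valid) / expected_samples,             # 결측 측정 비율
        "lag1_autocorr": float(np.corrcoef(valid[:-1], valid[1:])[0, 1]),  # 주기성(규칙성)의 간단한 대용치
    }

window = [72, 95, np.nan, 64, 110, 88, np.nan, 101, 69, 93]
print(heart_rate_features(window, expected_samples=12))
```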
Cardiogram for Aflutter
• Cardiogram은 심박 데이터만으로 심방조동을 detection할 수 있다고 주장
• “Mechanically Regular”
• high absolute variability (a range of 30+ bpm)
• a higher fraction missing measurements
• a lack of periodicity in heart rate variability
• 심방세동 특유의 불규칙적인 리듬을 detection 하는 정도로 생각하면 될 듯
• “불규칙적인 리듬을 가지는 (심방세동이 아닌) 다른 부정맥과 구분 가능한가?” (쉽지 않을듯)
• 따라서, 심박으로 detection한 환자를 심전도(ECG)로 confirm 하는 것이 필요
Passive Detection of Atrial Fibrillation
Using a Commercially Available Smartwatch
Geoffrey H. Tison, MD, MPH; José M. Sanchez, MD; Brandon Ballinger, BS; Avesh Singh, MS; Jeffrey E. Olgin, MD;
Mark J. Pletcher, MD, MPH; Eric Vittinghoff, PhD; Emily S. Lee, BA; Shannon M. Fan, BA; Rachel A. Gladstone, BA;
Carlos Mikell, BS; Nimit Sohoni, BS; Johnson Hsieh, MS; Gregory M. Marcus, MD, MAS
IMPORTANCE Atrial fibrillation (AF) affects 34 million people worldwide and is a leading cause
of stroke. A readily accessible means to continuously monitor for AF could prevent large
numbers of strokes and death.
OBJECTIVE To develop and validate a deep neural network to detect AF using smartwatch
data.
DESIGN, SETTING, AND PARTICIPANTS In this multinational cardiovascular remote cohort study
coordinated at the University of California, San Francisco, smartwatches were used to obtain
heart rate and step count data for algorithm development. A total of 9750 participants
enrolled in the Health eHeart Study and 51 patients undergoing cardioversion at the
University of California, San Francisco, were enrolled between February 2016 and March 2017.
A deep neural network was trained using a method called heuristic pretraining in which the
network approximated representations of the R-R interval (ie, time between heartbeats)
without manual labeling of training data. Validation was performed against the reference
standard 12-lead electrocardiography (ECG) in a separate cohort of patients undergoing
cardioversion. A second exploratory validation was performed using smartwatch data from
ambulatory individuals against the reference standard of self-reported history of persistent
AF. Data were analyzed from March 2017 to September 2017.
MAIN OUTCOMES AND MEASURES The sensitivity, specificity, and receiver operating
characteristic C statistic for the algorithm to detect AF were generated based on the
reference standard of 12-lead ECG–diagnosed AF.
RESULTS Of the 9750 participants enrolled in the remote cohort, including 347 participants
with AF, 6143 (63.0%) were male, and the mean (SD) age was 42 (12) years. There were more
than 139 million heart rate measurements on which the deep neural network was trained. The
deep neural network exhibited a C statistic of 0.97 (95% CI, 0.94-1.00; P < .001) to detect AF
against the reference standard 12-lead ECG–diagnosed AF in the external validation cohort of
51 patients undergoing cardioversion; sensitivity was 98.0% and specificity was 90.2%. In an
exploratory analysis relying on self-report of persistent AF in ambulatory participants, the C
statistic was 0.72 (95% CI, 0.64-0.78); sensitivity was 67.7% and specificity was 67.6%.
CONCLUSIONS AND RELEVANCE This proof-of-concept study found that smartwatch
photoplethysmography coupled with a deep neural network can passively detect AF but with
some loss of sensitivity and specificity against a criterion-standard ECG. Further studies will
help identify the optimal role for smartwatch-guided rhythm assessment.
JAMA Cardiol. doi:10.1001/jamacardio.2018.0136
Published online March 21, 2018.
• Health eHeart Study at UCSF
• A total of 9,750 participants
• 51 patients undergoing cardioversion
• Validated against the standard 12-lead ECG
Passive Detection of Atrial Fibrillation
Using a Commercially Available Smartwatch
Geoffrey H. Tison, MD, MPH; José M. Sanchez, MD; Brandon Ballinger, BS; Avesh Singh, MS; Jeffrey E. Olgin, MD;
Mark J. Pletcher, MD, MPH; Eric Vittinghoff, PhD; Emily S. Lee, BA; Shannon M. Fan, BA; Rachel A. Gladstone, BA;
Carlos Mikell, BS; Nimit Sohoni, BS; Johnson Hsieh, MS; Gregory M. Marcus, MD, MAS
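As a reading aid for the metrics quoted above, the short sketch below shows how a C statistic (ROC AUC), sensitivity, and specificity would be computed from algorithm scores against 12-lead-ECG reference labels. The toy arrays, the 0.5 cut-off, and the use of scikit-learn are assumptions for illustration only; this is not the study's analysis code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# Hypothetical validation data: y_true is the 12-lead-ECG reference
# (1 = AF, 0 = sinus rhythm), y_score is the network's AF probability.
y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])
y_score = np.array([0.92, 0.81, 0.40, 0.35, 0.10, 0.22, 0.77, 0.55])

auc = roc_auc_score(y_true, y_score)                 # C statistic (ROC AUC)
tn, fp, fn, tp = confusion_matrix(y_true, y_score >= 0.5).ravel()
sensitivity = tp / (tp + fn)                         # true-positive rate
specificity = tn / (tn + fp)                         # true-negative rate
print(auc, sensitivity, specificity)
```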
Figure 2. Accuracy of Detecting Atrial Fibrillation in the Cardioversion Cohort. Receiver operating characteristic curves (sensitivity, % vs 1 - specificity, %). A, Cardioversion cohort: curve among 51 individuals undergoing in-hospital cardioversion; the curve demonstrates a C statistic of 0.97 (95% CI, 0.94-1.00), and the marked point indicates a sensitivity of 98.0% and a specificity of 90.2%. B, Ambulatory subset of the remote cohort: curve among 1617 individuals; the curve demonstrates a C statistic of 0.72 (95% CI, 0.64-0.78), and the marked point indicates a sensitivity of 67.7% and a specificity of 67.6%.
Table 3. Performance Characteristics of Deep Neural Network in Validation Cohorts

Cohort                                   Sensitivity, %   Specificity, %   PPV, %   NPV, %   AUC
Cardioversion cohort (sedentary)         98.0             90.2             90.9     97.8     0.97
Subset of remote cohort (ambulatory)     67.7             67.6             7.9      98.1     0.72

Abbreviations: AUC, area under the receiver operating characteristic curve; NPV, negative predictive value; PPV, positive predictive value.
Note: In the cardioversion cohort, the atrial fibrillation reference standard was 12-lead electrocardiography diagnosis; in the remote cohort, the atrial fibrillation reference standard was limited to self-reported history of persistent atrial fibrillation.
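The large gap between the two PPV values in Table 3 is mostly a prevalence effect rather than a property of the network itself. As a rough check (the ambulatory AF prevalence is not reported in this excerpt, so the 4% figure below is an assumption chosen for illustration), Bayes' rule shows how a sensitivity and specificity near 68% at a low prevalence yield a PPV of about 8% while the NPV stays high:

```python
def ppv(sens, spec, prevalence):
    """Positive predictive value from sensitivity, specificity and prevalence (Bayes' rule)."""
    true_pos = sens * prevalence
    false_pos = (1 - spec) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

def npv(sens, spec, prevalence):
    """Negative predictive value from the same three quantities."""
    true_neg = spec * (1 - prevalence)
    false_neg = (1 - sens) * prevalence
    return true_neg / (true_neg + false_neg)

# Ambulatory subset: sensitivity 67.7%, specificity 67.6%; a 4% prevalence is assumed here.
print(round(ppv(0.677, 0.676, 0.04), 3))  # ~0.08, close to the reported PPV of 7.9%
print(round(npv(0.677, 0.676, 0.04), 3))  # ~0.98, close to the reported NPV of 98.1%
```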
AUC = 0.97 (cardioversion cohort), AUC = 0.72 (ambulatory subset)
• In external validation using standard 12-lead ECG, algorithm
performance achieved a C statistic of 0.97.
• The passive detection of AF from free-living smartwatch data
has substantial clinical implications.
• Importantly, the accuracy of detecting self-reported AF in an ambulatory setting was more modest (C statistic of 0.72).
Prediction of Ventricular Arrhythmia
An Algorithm Based on Deep Learning for Predicting In-Hospital
Cardiac Arrest
Joon-myoung Kwon, MD;* Youngnam Lee, MS;* Yeha Lee, PhD; Seungwoo Lee, BS; Jinsik Park, MD, PhD
Background: In-hospital cardiac arrest is a major burden to public health, which affects patient safety. Although traditional track-and-trigger systems are used to predict cardiac arrest early, they have limitations, with low sensitivity and high false-alarm rates. We propose a deep learning–based early warning system that shows higher performance than the existing track-and-trigger systems.
Methods and Results: This retrospective cohort study reviewed patients who were admitted to 2 hospitals from June 2010 to July 2017. A total of 52 131 patients were included. Specifically, a recurrent neural network was trained using data from June 2010 to January 2017. The result was tested using the data from February to July 2017. The primary outcome was cardiac arrest, and the secondary outcome was death without attempted resuscitation. As comparative measures, we used the area under the receiver operating characteristic curve (AUROC), the area under the precision–recall curve (AUPRC), and the net reclassification index. Furthermore, we evaluated sensitivity while varying the number of alarms. The deep learning–based early warning system (AUROC: 0.850; AUPRC: 0.044) significantly outperformed a modified early warning score (AUROC: 0.603; AUPRC: 0.003), a random forest algorithm (AUROC: 0.780; AUPRC: 0.014), and logistic regression (AUROC: 0.613; AUPRC: 0.007). Furthermore, the deep learning–based early warning system reduced the number of alarms by 82.2%, 13.5%, and 42.1% compared with the modified early warning system, random forest, and logistic regression, respectively, at the same sensitivity.
Conclusions: An algorithm based on deep learning had high sensitivity and a low false-alarm rate for detection of patients with cardiac arrest in the multicenter study. (J Am Heart Assoc. 2018;7:e008678. DOI: 10.1161/JAHA.118.008678.)
Key Words: artificial intelligence • cardiac arrest • deep learning • machine learning • rapid response system • resuscitation
In-hospital cardiac arrest is a major burden to public health, which affects patient safety [1–3]. More than a half of cardiac arrests result from respiratory failure or hypovolemic shock, and 80% of patients with cardiac arrest show signs of deterioration in the 8 hours before cardiac arrest [4–9]. However, 209 000 in-hospital cardiac arrests occur in the United States each year, and the survival discharge rate for patients with cardiac arrest is <20% worldwide [10,11]. Rapid response systems (RRSs) have been introduced in many hospitals to detect cardiac arrest using the track-and-trigger system (TTS) [12,13].
Two types of TTS are used in RRSs. For the single-parameter TTS (SPTTS), cardiac arrest is predicted if any single vital sign (eg, heart rate [HR], blood pressure) is out of the normal range [14]. The aggregated weighted TTS calculates a weighted score for each vital sign and then finds patients with cardiac arrest based on the sum of these scores [15]. The modified early warning score (MEWS) is one of the most widely used approaches among all aggregated weighted TTSs (Table 1) [16]; however, traditional TTSs including MEWS have limitations, with low sensitivity or high false-alarm rates [14,15,17]. Sensitivity and false-alarm rate interact: increased sensitivity creates higher false-alarm rates and vice versa.
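To make the "aggregated weighted score" idea concrete, here is a minimal sketch of such a track-and-trigger calculation over four common vital signs. The point bands and the alert threshold are illustrative assumptions, not the published MEWS cut-offs.

```python
def aggregated_warning_score(hr, rr, temp_c, sbp):
    """Toy aggregated weighted track-and-trigger score: each vital sign adds
    0-3 points and an alert fires when the total crosses a threshold.
    The bands below are illustrative assumptions, not the published MEWS cut-offs."""
    score = 0
    if hr < 40 or hr > 130:              # heart rate, beats/min
        score += 3
    elif hr > 110 or hr < 50:
        score += 2
    elif hr > 100:
        score += 1
    if rr < 9 or rr > 29:                # respiratory rate, breaths/min
        score += 3
    elif rr > 20:
        score += 2
    if temp_c < 35.0 or temp_c > 38.5:   # body temperature, deg C
        score += 2
    if sbp < 70:                         # systolic blood pressure, mm Hg
        score += 3
    elif sbp < 80:
        score += 2
    elif sbp < 100:
        score += 1
    return score

# A deteriorating patient crosses an (assumed) alert threshold of 5 points.
print(aggregated_warning_score(hr=125, rr=26, temp_c=38.8, sbp=92) >= 5)
```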
Current RRSs suffer from low sensitivity or a high false-alarm rate. An RRS was used for only 30% of patients before unplanned intensive care unit admission and was not used for 22.8% of patients, even if they met the criteria [18,19].
• Number of patients: 86,290
• Cardiac arrest events: 633
• Input: Heart rate, Respiratory rate, Body temperature, Systolic Blood Pressure
(source: VUNO)
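For readers who want to picture how a recurrent network consumes these four vital-sign channels, below is a minimal sketch of a GRU-based risk model over hourly observations. The layer sizes, input shape, and even the choice of GRU are assumptions for illustration; this is not the published DEWS architecture.

```python
import torch
import torch.nn as nn

class VitalSignRNN(nn.Module):
    """Minimal recurrent classifier over sequences of the four vital signs
    (heart rate, respiratory rate, body temperature, systolic blood pressure).
    Illustrative sketch only; not the published DEWS model."""
    def __init__(self, n_features: int = 4, hidden_size: int = 64):
        super().__init__()
        self.rnn = nn.GRU(input_size=n_features, hidden_size=hidden_size,
                          batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time steps, 4 vital signs)
        _, h_n = self.rnn(x)                      # h_n: (1, batch, hidden)
        return torch.sigmoid(self.head(h_n[-1]))  # arrest risk in [0, 1]

# Toy usage: 8 patients, 24 hourly observations of the 4 vital signs.
risk = VitalSignRNN()(torch.randn(8, 24, 4))      # shape (8, 1)
```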
Cardiac Arrest Prediction Accuracy
• At the alarm volumes that a university-hospital rapid response team can realistically handle (points A and B), the accuracy gap is even larger
• A: DEWS 33.0%, MEWS 0.3%
• B: DEWS 42.7%, MEWS 4.0%
(source: VUNO)
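The comparison at points A and B boils down to fixing an alarm budget and asking what share of arrests each score catches within it. The sketch below, on synthetic data, shows that "sensitivity at a given number of alarms" calculation; the data and the alarm counts are made up for illustration.

```python
import numpy as np

def sensitivity_vs_alarms(scores, labels, n_alarms):
    """Alarm on the n_alarms highest-risk scores and report the resulting
    sensitivity; sweeping n_alarms reproduces a 'sensitivity vs number of
    alarms' comparison like the DEWS-vs-MEWS curves. Illustrative sketch."""
    top = np.argsort(scores)[::-1][:n_alarms]   # indices of the alarms raised
    detected = labels[top].sum()                # true arrests among the alarms
    return detected / labels.sum()

# Toy example: 1000 monitored patients, 20 of whom arrest.
rng = np.random.default_rng(0)
labels = np.zeros(1000)
labels[:20] = 1
scores = rng.random(1000) + 0.5 * labels        # arrests tend to score higher
for k in (10, 50, 200):                         # three alarm budgets
    print(k, round(sensitivity_vs_alarms(scores, labels, k), 2))
```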
APPH (Alarms Per Patient Per Hour)
(source: VUNO)
Fewer False Alarms
(source: VUNO)
Changes in DEWS predictions over time
• Analysis of complex medical data and derivation of insights
• Analysis and interpretation of medical imaging and pathology data
• Monitoring of continuous data for prevention and prediction
The three types of medical artificial intelligence
Medical Artificial Intelligence
• Part 1: The Second Machine Age and Medical AI
• Part 2: The Past and Present of Medical AI
• Part 3: How Should We Face the Future?
• Will artificial intelligence replace doctors?
• What is the new role of the human doctor?
• Who is responsible for the outcomes?
• The black-box problem
• The deskilling problem
• How should it be approved and regulated?
• How can its medical utility be proven?
Issues
Medical Artificial Intelligence
• Part 1: The Second Machine Age and Medical AI
• Part 2: The Past and Present of Medical AI
• Part 3: How Should We Face the Future?
Feedback/Questions
• Email: yoonsup.choi@gmail.com
• Blog: http://www.yoonsupchoi.com
• Facebook: Yoon Sup Choi

의료 인공지능: 인공지능은 의료를 어떻게 혁신하는가 - 최윤섭 (updated 18년 10월)

  • 1.
    Professor, SAHIST, SungkyunkwanUniversity Director, Digital Healthcare Institute Yoon Sup Choi, Ph.D. 인공지능은 의료를 어떻게 혁신하는가
  • 2.
    “It's in Apple'sDNA that technology alone is not enough. 
 It's technology married with liberal arts.”
  • 3.
    The Convergence ofIT, BT and Medicine
  • 5.
    최윤섭 지음 의료인공지능 표지디자인•최승협 컴퓨터 털 헬 치를만드는 것을 화두로 기업가, 엔젤투자가, 에반 의 대표적인 전문가로, 활 이 분야를 처음 소개한 장 포항공과대학교에서 컴 동 대학원 시스템생명공 취득하였다. 스탠퍼드대 조교수, KT 종합기술원 컨 구원 연구조교수 등을 거 저널에 10여 편의 논문을 국내 최초로 디지털 헬스 윤섭 디지털 헬스케어 연 국내 유일의 헬스케어 스 어 파트너스’의 공동 창업 스타트업을 의료 전문가 관대학교 디지털헬스학과 뷰노, 직토, 3billion, 서지 소울링, 메디히어, 모바일 자문을 맡아 한국에서도 고 있다. 국내 최초의 디 케어 이노베이션』에 활발 을 연재하고 있다. 저서로 와 『그렇게 나는 스스로 •블로그_ http://www •페이스북_ https://w •이메일_ yoonsup.c 최윤섭 의료 인공지능은 보수적인 의료 시스템을 재편할 혁신을 일으키고 있다. 의료 인공지능의 빠른 발전과 광범위한 영향은 전문화, 세분화되며 발전해 온 현대 의료 전문가들이 이해하기가 어려우며, 어디서부 터 공부해야 할지도 막연하다. 이런 상황에서 의료 인공지능의 개념과 적용, 그리고 의사와의 관계를 쉽 게 풀어내는 이 책은 좋은 길라잡이가 될 것이다. 특히 미래의 주역이 될 의학도와 젊은 의료인에게 유용 한 소개서이다. ━ 서준범, 서울아산병원 영상의학과 교수, 의료영상인공지능사업단장 인공지능이 의료의 패러다임을 크게 바꿀 것이라는 것에 동의하지 않는 사람은 거의 없다. 하지만 인공 지능이 처리해야 할 의료의 난제는 많으며 그 해결 방안도 천차만별이다. 흔히 생각하는 만병통치약 같 은 의료 인공지능은 존재하지 않는다. 이 책은 다양한 의료 인공지능의 개발, 활용 및 가능성을 균형 있 게 분석하고 있다. 인공지능을 도입하려는 의료인, 생소한 의료 영역에 도전할 인공지능 연구자 모두에 게 일독을 권한다. ━ 정지훈, 경희사이버대 미디어커뮤니케이션학과 선임강의교수, 의사 서울의대 기초의학교육을 책임지고 있는 교수의 입장에서, 산업화 이후 변하지 않은 현재의 의학 교육 으로는 격변하는 인공지능 시대에 의대생을 대비시키지 못한다는 한계를 절실히 느낀다. 저와 함께 의 대 인공지능 교육을 개척하고 있는 최윤섭 소장의 전문적 분석과 미래 지향적 안목이 담긴 책이다. 인공 지능이라는 미래를 대비할 의대생과 교수, 그리고 의대 진학을 고민하는 학생과 학부모에게 추천한다. ━ 최형진, 서울대학교 의과대학 해부학교실 교수, 내과 전문의 최근 의료 인공지능의 도입에 대해서 극단적인 시각과 태도가 공존하고 있다. 이 책은 다양한 사례와 깊 은 통찰을 통해 의료 인공지능의 현황과 미래에 대해 균형적인 시각을 제공하여, 인공지능이 의료에 본 격적으로 도입되기 위한 토론의 장을 마련한다. 의료 인공지능이 일상화된 10년 후 돌아보았을 때, 이 책 이 그런 시대를 이끄는 길라잡이 역할을 하였음을 확인할 수 있기를 기대한다. ━ 정규환, 뷰노 CTO 의료 인공지능은 다른 분야 인공지능보다 더 본질적인 이해가 필요하다. 단순히 인간의 일을 대신하는 수준을 넘어 의학의 패러다임을 데이터 기반으로 변화시키기 때문이다. 따라서 인공지능을 균형있게 이 해하고, 어떻게 의사와 환자에게 도움을 줄 수 있을지 깊은 고민이 필요하다. 세계적으로 일어나고 있는 이러한 노력의 결과물을 집대성한 이 책이 반가운 이유다. ━ 백승욱, 루닛 대표 의료 인공지능의 최신 동향뿐만 아니라, 의의와 한계, 전망, 그리고 다양한 생각거리까지 주는 책이다. 논쟁이 되는 여러 이슈에 대해서도 저자는 자신의 시각을 명확한 근거에 기반하여 설득력 있게 제시하 고 있다. 개인적으로는 이 책을 대학원 수업 교재로 활용하려 한다. ━ 신수용, 성균관대학교 디지털헬스학과 교수 최윤섭지음 의료인공지능 값 20,000원 ISBN 979-11-86269-99-2 최초의 책! 계 안팎에서 제기 고 있다. 현재 의 분 커버했다고 자 것인가, 어느 진료 제하고 효용과 안 누가 지는가, 의학 쉬운 언어로 깊이 들이 의료 인공지 적인 용어를 최대 서 다른 곳에서 접 를 접하게 될 것 너무나 빨리 발전 책에서 제시하는 술을 공부하며, 앞 란다. 의사 면허를 취득 저가 도움되면 좋 를 불러일으킬 것 화를 일으킬 수도 슈에 제대로 대응 분은 의학 교육의 예비 의사들은 샌 지능과 함께하는 레이닝 방식도 이 전에 진료실과 수 겠지만, 여러분들 도생하는 수밖에 미래의료학자 최윤섭 박사가 제시하는 의료 인공지능의 현재와 미래 의료 딥러닝과 IBM 왓슨의 현주소 인공지능은 의사를 대체하는가 값 20,000원 ISBN 979-11-86269-99-2 레이닝 방식도 이 전에 진료실과 수 겠지만, 여러분들 도생하는 수밖에 소울링, 메디히어, 모바일 자문을 맡아 한국에서도 고 있다. 국내 최초의 디 케어 이노베이션』에 활발 을 연재하고 있다. 저서로 와 『그렇게 나는 스스로 •블로그_ http://www •페이스북_ https://w •이메일_ yoonsup.c
  • 6.
    의료 인공지능 •1부: 제2의 기계시대와 의료 인공지능 •2부: 의료 인공지능의 과거와 현재 •3부: 미래를 어떻게 맞이할 것인가
  • 7.
    의료 인공지능 •1부: 제2의 기계시대와 의료 인공지능 •2부: 의료 인공지능의 과거와 현재 •3부: 미래를 어떻게 맞이할 것인가
  • 9.
  • 10.
  • 11.
    Vinod Khosla Founder, 1stCEO of Sun Microsystems Partner of KPCB, CEO of KhoslaVentures LegendaryVenture Capitalist in SiliconValley
  • 12.
    “Technology will replace80% of doctors”
  • 13.
    https://www.youtube.com/watch?time_continue=70&v=2HMPRXstSvQ “영상의학과 전문의를 양성하는것을 당장 그만둬야 한다. 5년 안에 딥러닝이 영상의학과 전문의를 능가할 것은 자명하다.” Hinton on Radiology
  • 16.
  • 17.
  • 19.
    • AP 통신:로봇이 인간 대신 기사를 작성 • 초당 2,000 개의 기사 작성 가능 • 기존에 300개 기업의 실적 ➞ 3,000 개 기업을 커버
  • 20.
    • 1978 • Aspart of the obscure task of “discovery” — providing documents relevant to a lawsuit — the studios examined six million documents at a cost of more than $2.2 million, much of it to pay for a platoon of lawyers and paralegals who worked for months at high hourly rates. • 2011 • Now, thanks to advances in artificial intelligence, “e-discovery” software can analyze documents in a fraction of the time for a fraction of the cost. • In January, for example, Blackstone Discovery of Palo Alto, Calif., helped analyze 1.5 million documents for less than $100,000.
  • 21.
    “At its heightback in 2000, the U.S. cash equities trading desk at Goldman Sachs’s New York headquarters employed 600 traders, buying and selling stock on the orders of the investment bank’s large clients. Today there are just two equity traders left”
  • 22.
    • 일본의 Fukoku생명보험에서는 보험금 지급 여부를 심사 하는 사람을 30명 이상 해고하고, IBM Watson Explorer 에게 맡기기로 결정 • 의료 기록을 바탕으로 Watson이 보험금 지급 여부를 판단 • 인공지능으로 교체하여 생산성을 30% 향상 • 2년 안에 ROI 가 나올 것이라고 예상 • 1년차: 140m yen • 2년차: 200m yen
  • 25.
    No choice butto bring AI into the medicine
  • 26.
    Martin Duggan,“IBM WatsonHealth - Integrated Care & the Evolution to Cognitive Computing”
  • 27.
    • 약한 인공지능 (Artificial Narrow Intelligence) • 특정 방면에서 잘하는 인공지능 • 체스, 퀴즈, 메일 필터링, 상품 추천, 자율 운전 • 강한 인공 지능 (Artificial General Intelligence) • 모든 방면에서 인간 급의 인공 지능 • 사고, 계획, 문제해결, 추상화, 복잡한 개념 학습 • 초 인공 지능 (Artificial Super Intelligence) • 과학기술, 사회적 능력 등 모든 영역에서 인간보다 뛰어난 인공 지능 • “충분히 발달한 과학은 마법과 구분할 수 없다” - 아서 C. 클라크
  • 29.
    2010 2020 20302040 2050 2060 2070 2080 2090 2100 90% 50% 10% PT-AI AGI EETNTOP100 Combined 언제쯤 기계가 인간 수준의 지능을 획득할 것인가? Philosophy and Theory of AI (2011) Artificial General Intelligence (2012) Greek Association for Artificial Intelligence Survey of most frequently cited 100 authors (2013) Combined 응답자 누적 비율 Superintelligence, Nick Bostrom (2014)
  • 30.
    Superintelligence: Science offiction? Panelists: Elon Musk (Tesla, SpaceX), Bart Selman (Cornell), Ray Kurzweil (Google), David Chalmers (NYU), Nick Bostrom(FHI), Demis Hassabis (Deep Mind), Stuart Russell (Berkeley), Sam Harris, and Jaan Tallinn (CSER/FLI) January 6-8, 2017, Asilomar, CA https://brunch.co.kr/@kakao-it/49 https://www.youtube.com/watch?v=h0962biiZa4
  • 31.
    Superintelligence: Science offiction? Panelists: Elon Musk (Tesla, SpaceX), Bart Selman (Cornell), Ray Kurzweil (Google), David Chalmers (NYU), Nick Bostrom(FHI), Demis Hassabis (Deep Mind), Stuart Russell (Berkeley), Sam Harris, and Jaan Tallinn (CSER/FLI) January 6-8, 2017, Asilomar, CA Q: 초인공지능이란 영역은 도달 가능한 것인가? Q: 초지능을 가진 개체의 출현이 가능할 것이라고 생각하는가? Table 1 Elon Musk Start Russell Bart Selman Ray Kurzweil David Chalmers Nick Bostrom DemisHassabis Sam Harris Jaan Tallinn YES YES YES YES YES YES YES YES YES Table 1-1 Elon Musk Start Russell Bart Selman Ray Kurzweil David Chalmers Nick Bostrom DemisHassabis Sam Harris Jaan Tallinn YES YES YES YES YES YES YES YES YES Q: 초지능의 실현이 일어나기를 희망하는가? Table 1-1-1 Elon Musk Start Russell Bart Selman Ray Kurzweil David Chalmers Nick Bostrom DemisHassabis Sam Harris Jaan Tallinn Complicated Complicated Complicated YES Complicated YES YES Complicated Complicated https://brunch.co.kr/@kakao-it/49 https://www.youtube.com/watch?v=h0962biiZa4
  • 32.
  • 33.
  • 34.
    • 약한 인공지능 (Artificial Narrow Intelligence) • 특정 방면에서 잘하는 인공지능 • 체스, 퀴즈, 메일 필터링, 상품 추천, 자율 운전 • 강한 인공 지능 (Artificial General Intelligence) • 모든 방면에서 인간 급의 인공 지능 • 사고, 계획, 문제해결, 추상화, 복잡한 개념 학습 • 초 인공 지능 (Artificial Super Intelligence) • 과학기술, 사회적 능력 등 모든 영역에서 인간보다 뛰어난 인공 지능 • “충분히 발달한 과학은 마법과 구분할 수 없다” - 아서 C. 클라크
  • 42.
    의료 인공지능 •1부: 제2의 기계시대와 의료 인공지능 •2부: 의료 인공지능의 과거와 현재 •3부: 미래를 어떻게 맞이할 것인가
  • 43.
    •복잡한 의료 데이터의분석 및 insight 도출 •영상 의료/병리 데이터의 분석/판독 •연속 데이터의 모니터링 및 예방/예측 의료 인공지능의 세 유형
  • 44.
    •복잡한 의료 데이터의분석 및 insight 도출 •영상 의료/병리 데이터의 분석/판독 •연속 데이터의 모니터링 및 예방/예측 의료 인공지능의 세 유형
  • 45.
    Jeopardy! 2011년 인간 챔피언두 명 과 퀴즈 대결을 벌여서 압도적인 우승을 차지
  • 46.
    600,000 pieces ofmedical evidence 2 million pages of text from 42 medical journals and clinical trials 69 guidelines, 61,540 clinical trials IBM Watson on Medicine Watson learned... + 1,500 lung cancer cases physician notes, lab results and clinical research + 14,700 hours of hands-on training
  • 50.
    메이요 클리닉 협력 (임상시험 매칭) 전남대병원 도입 인도 마니팔 병원 WFO 도입 식약처 인공지능 가이드라인 초안 메드트로닉과 혈당관리 앱 시연 2011 2012 2013 2014 2015 뉴욕 MSK암센터 협력 (폐암) MD앤더슨 협력 (백혈병) MD앤더슨 파일럿 결과 발표 @ASCO 왓슨 펀드, 웰톡에 투자 뉴욕게놈센터 협력 (교모세포종 분석) GeneMD, 왓슨 모바일 디벨로퍼 챌린지 우승 클리블랜드 클리닉 협력 (암 유전체 분석) 한국 IBM 왓슨 사업부 신설 Watson Health 출범 피텔, 익스플로리스 인수 J&J, 애플, 메드트로닉 협력 에픽 시스템즈, 메이요클리닉 제휴 (EHR 분석) 동경대 도입 ( WFO) 왓슨 펀드, 모더나이징 메디슨 투자 학계/의료계 산업계 패쓰웨이 지노믹스 OME 클로즈드 알파 서비스 시작 트루븐 헬스 인수 애플 리서치 키트 통한 수면 연구 시작 2017 가천대 길병원 도입 메드트로닉 Sugar.IQ 출시 제약사 테바와 제휴 태국 범룽랏 국제 병원, WFO 도입 머지 헬스케어 인수 2016 언더 아머 제휴 브로드 연구소 협력 발표 (유전체 분석-항암제 내성) 마니팔 병원의 
 WFO 정확성 발표 대구가톨릭병원 대구동산병원 도입 부산대병원 도입 왓슨 펀드, 패쓰웨이 지노믹스 투자 제퍼디! 우승 조선대병원 도입 한국 왓슨 컨소시움 출범 쥬피터 
 메디컬 
 센터 도입 식약처 인공지능 가이드라인 메이요 클리닉 임상시험매칭 결과발표 2018 건양대병원 도입 IBM Watson Health Chronicle WFO 최초 논문
  • 51.
    메이요 클리닉 협력 (임상시험 매칭) 전남대병원 도입 인도 마니팔 병원 WFO 도입 식약처 인공지능 가이드라인 초안 메드트로닉과 혈당관리 앱 시연 2011 2012 2013 2014 2015 뉴욕 MSK암센터 협력 (폐암) MD앤더슨 협력 (백혈병) MD앤더슨 파일럿 결과 발표 @ASCO 왓슨 펀드, 웰톡에 투자 뉴욕게놈센터 협력 (교모세포종 분석) GeneMD, 왓슨 모바일 디벨로퍼 챌린지 우승 클리블랜드 클리닉 협력 (암 유전체 분석) 한국 IBM 왓슨 사업부 신설 Watson Health 출범 피텔, 익스플로리스 인수 J&J, 애플, 메드트로닉 협력 에픽 시스템즈, 메이요클리닉 제휴 (EHR 분석) 동경대 도입 ( WFO) 왓슨 펀드, 모더나이징 메디슨 투자 학계/의료계 산업계 패쓰웨이 지노믹스 OME 클로즈드 알파 서비스 시작 트루븐 헬스 인수 애플 리서치 키트 통한 수면 연구 시작 2017 가천대 길병원 도입 메드트로닉 Sugar.IQ 출시 제약사 테바와 제휴 태국 범룽랏 국제 병원, WFO 도입 머지 헬스케어 인수 2016 언더 아머 제휴 브로드 연구소 협력 발표 (유전체 분석-항암제 내성) 마니팔 병원의 
 WFO 정확성 발표 대구가톨릭병원 대구동산병원 도입 부산대병원 도입 왓슨 펀드, 패쓰웨이 지노믹스 투자 제퍼디! 우승 조선대병원 도입 한국 왓슨 컨소시움 출범 쥬피터 
 메디컬 
 센터 도입 식약처 인공지능 가이드라인 2018 건양대병원 도입 메이요 클리닉 임상시험매칭 결과발표 WFO 최초 논문 IBM Watson Health Chronicle
  • 53.
    Annals of Oncology(2016) 27 (suppl_9): ix179-ix180. 10.1093/annonc/mdw601 Validation study to assess performance of IBM cognitive computing system Watson for oncology with Manipal multidisciplinary tumour board for 1000 consecutive cases: 
 An Indian experience •인도 마니팔 병원의 1,000명의 암환자 에 대해 의사와 WFO의 권고안의 ‘일치율’을 비교 •유방암 638명, 대장암 126명, 직장암 124명, 폐암 112명 •의사-왓슨 일치율 •추천(50%), 고려(28%), 비추천(17%) •의사의 진료안 중 5%는 왓슨의 권고안으로 제시되지 않음 •일치율이 암의 종류마다 달랐음 •직장암(85%), 폐암(17.8%) •삼중음성 유방암(67.9%), HER2 음성 유방암 (35%)
  • 54.
    San Antonio BreastCancer Symposium—December 6-10, 2016 Concordance WFO (@T2) and MMDT (@T1* v. T2**) (N= 638 Breast Cancer Cases) Time Point /Concordance REC REC + FC n % n % T1* 296 46 463 73 T2** 381 60 574 90 This presentation is the intellectual property of the author/presenter.Contact somusp@yahoo.com for permission to reprint and/or distribute.26 * T1 Time of original treatment decision by MMDT in the past (last 1-3 years) ** T2 Time (2016) of WFO’s treatment advice and of MMDT’s treatment decision upon blinded re-review of non-concordant cases
  • 55.
    WFO in ASCO2017 • Early experience with IBM WFO cognitive computing system for lung 
 
 and colorectal cancer treatment (마니팔 병원)
 • 지난 3년간: lung cancer(112), colon cancer(126), rectum cancer(124) • lung cancer: localized 88.9%, meta 97.9% • colon cancer: localized 85.5%, meta 76.6% • rectum cancer: localized 96.8%, meta 80.6% Performance of WFO in India 2017 ASCO annual Meeting, J Clin Oncol 35, 2017 (suppl; abstr 8527)
  • 56.
    WFO in ASCO2017 •가천대 길병원의 대장암과 위암 환자에 왓슨 적용 결과 • 대장암 환자(stage II-IV) 340명 • 진행성 위암 환자 185명 (Retrospective)
 • 의사와의 일치율 • 대장암 환자: 73% • 보조 (adjuvant) 항암치료를 받은 250명: 85% • 전이성 환자 90명: 40%
 • 위암 환자: 49% • Trastzumab/FOLFOX 가 국민 건강 보험 수가를 받지 못함 • S-1(tegafur, gimeracil and oteracil)+cisplatin): • 국내는 매우 루틴; 미국에서는 X
  • 57.
    ORIGINAL ARTICLE Watson forOncology and breast cancer treatment recommendations: agreement with an expert multidisciplinary tumor board S. P. Somashekhar1*, M.-J. Sepu´lveda2 , S. Puglielli3 , A. D. Norden3 , E. H. Shortliffe4 , C. Rohit Kumar1 , A. Rauthan1 , N. Arun Kumar1 , P. Patil1 , K. Rhee3 & Y. Ramya1 1 Manipal Comprehensive Cancer Centre, Manipal Hospital, Bangalore, India; 2 IBM Research (Retired), Yorktown Heights; 3 Watson Health, IBM Corporation, Cambridge; 4 Department of Surgical Oncology, College of Health Solutions, Arizona State University, Phoenix, USA *Correspondence to: Prof. Sampige Prasannakumar Somashekhar, Manipal Comprehensive Cancer Centre, Manipal Hospital, Old Airport Road, Bangalore 560017, Karnataka, India. Tel: þ91-9845712012; Fax: þ91-80-2502-3759; E-mail: somashekhar.sp@manipalhospitals.com Background: Breast cancer oncologists are challenged to personalize care with rapidly changing scientific evidence, drug approvals, and treatment guidelines. Artificial intelligence (AI) clinical decision-support systems (CDSSs) have the potential to help address this challenge. We report here the results of examining the level of agreement (concordance) between treatment recommendations made by the AI CDSS Watson for Oncology (WFO) and a multidisciplinary tumor board for breast cancer. Patients and methods: Treatment recommendations were provided for 638 breast cancers between 2014 and 2016 at the Manipal Comprehensive Cancer Center, Bengaluru, India. WFO provided treatment recommendations for the identical cases in 2016. A blinded second review was carried out by the center’s tumor board in 2016 for all cases in which there was not agreement, to account for treatments and guidelines not available before 2016. Treatment recommendations were considered concordant if the tumor board recommendations were designated ‘recommended’ or ‘for consideration’ by WFO. Results: Treatment concordance between WFO and the multidisciplinary tumor board occurred in 93% of breast cancer cases. Subgroup analysis found that patients with stage I or IV disease were less likely to be concordant than patients with stage II or III disease. Increasing age was found to have a major impact on concordance. Concordance declined significantly (P 0.02; P < 0.001) in all age groups compared with patients <45 years of age, except for the age group 55–64 years. Receptor status was not found to affect concordance. Conclusion: Treatment recommendations made by WFO and the tumor board were highly concordant for breast cancer cases examined. Breast cancer stage and patient age had significant influence on concordance, while receptor status alone did not. This study demonstrates that the AI clinical decision-support system WFO may be a helpful tool for breast cancer treatment decision making, especially at centers where expert breast cancer resources are limited. Key words: Watson for Oncology, artificial intelligence, cognitive clinical decision-support systems, breast cancer, concordance, multidisciplinary tumor board Introduction Oncologists who treat breast cancer are challenged by a large and rapidly expanding knowledge base [1, 2]. As of October 2017, for example, there were 69 FDA-approved drugs for the treatment of breast cancer, not including combination treatment regimens [3]. 
The growth of massive genetic and clinical databases, along with computing systems to exploit them, will accelerate the speed of breast cancer treatment advances and shorten the cycle time for changes to breast cancer treatment guidelines [4, 5]. In add- ition, these information management challenges in cancer care are occurring in a practice environment where there is little time available for tracking and accessing relevant information at the point of care [6]. For example, a study that surveyed 1117 oncolo- gists reported that on average 4.6 h per week were spent keeping VC The Author(s) 2018. Published by Oxford University Press on behalf of the European Society for Medical Oncology. All rights reserved. For permissions, please email: journals.permissions@oup.com. Annals of Oncology 29: 418–423, 2018 doi:10.1093/annonc/mdx781 Published online 9 January 2018 Downloaded from https://academic.oup.com/annonc/article-abstract/29/2/418/4781689 by guest
  • 58.
    ORIGINAL ARTICLE Watson forOncology and breast cancer treatment recommendations: agreement with an expert multidisciplinary tumor board S. P. Somashekhar1*, M.-J. Sepu´lveda2 , S. Puglielli3 , A. D. Norden3 , E. H. Shortliffe4 , C. Rohit Kumar1 , A. Rauthan1 , N. Arun Kumar1 , P. Patil1 , K. Rhee3 & Y. Ramya1 1 Manipal Comprehensive Cancer Centre, Manipal Hospital, Bangalore, India; 2 IBM Research (Retired), Yorktown Heights; 3 Watson Health, IBM Corporation, Cambridge; 4 Department of Surgical Oncology, College of Health Solutions, Arizona State University, Phoenix, USA *Correspondence to: Prof. Sampige Prasannakumar Somashekhar, Manipal Comprehensive Cancer Centre, Manipal Hospital, Old Airport Road, Bangalore 560017, Karnataka, India. Tel: þ91-9845712012; Fax: þ91-80-2502-3759; E-mail: somashekhar.sp@manipalhospitals.com Background: Breast cancer oncologists are challenged to personalize care with rapidly changing scientific evidence, drug approvals, and treatment guidelines. Artificial intelligence (AI) clinical decision-support systems (CDSSs) have the potential to help address this challenge. We report here the results of examining the level of agreement (concordance) between treatment recommendations made by the AI CDSS Watson for Oncology (WFO) and a multidisciplinary tumor board for breast cancer. Patients and methods: Treatment recommendations were provided for 638 breast cancers between 2014 and 2016 at the Manipal Comprehensive Cancer Center, Bengaluru, India. WFO provided treatment recommendations for the identical cases in 2016. A blinded second review was carried out by the center’s tumor board in 2016 for all cases in which there was not agreement, to account for treatments and guidelines not available before 2016. Treatment recommendations were considered concordant if the tumor board recommendations were designated ‘recommended’ or ‘for consideration’ by WFO. Results: Treatment concordance between WFO and the multidisciplinary tumor board occurred in 93% of breast cancer cases. Subgroup analysis found that patients with stage I or IV disease were less likely to be concordant than patients with stage II or III disease. Increasing age was found to have a major impact on concordance. Concordance declined significantly (P 0.02; P < 0.001) in all age groups compared with patients <45 years of age, except for the age group 55–64 years. Receptor status was not found to affect concordance. Conclusion: Treatment recommendations made by WFO and the tumor board were highly concordant for breast cancer cases examined. Breast cancer stage and patient age had significant influence on concordance, while receptor status alone did not. This study demonstrates that the AI clinical decision-support system WFO may be a helpful tool for breast cancer treatment decision making, especially at centers where expert breast cancer resources are limited. Key words: Watson for Oncology, artificial intelligence, cognitive clinical decision-support systems, breast cancer, concordance, multidisciplinary tumor board Introduction Oncologists who treat breast cancer are challenged by a large and rapidly expanding knowledge base [1, 2]. As of October 2017, for example, there were 69 FDA-approved drugs for the treatment of breast cancer, not including combination treatment regimens [3]. 
The growth of massive genetic and clinical databases, along with computing systems to exploit them, will accelerate the speed of breast cancer treatment advances and shorten the cycle time for changes to breast cancer treatment guidelines [4, 5]. In add- ition, these information management challenges in cancer care are occurring in a practice environment where there is little time available for tracking and accessing relevant information at the point of care [6]. For example, a study that surveyed 1117 oncolo- gists reported that on average 4.6 h per week were spent keeping VC The Author(s) 2018. Published by Oxford University Press on behalf of the European Society for Medical Oncology. All rights reserved. For permissions, please email: journals.permissions@oup.com. Annals of Oncology 29: 418–423, 2018 doi:10.1093/annonc/mdx781 Published online 9 January 2018 Downloaded from https://academic.oup.com/annonc/article-abstract/29/2/418/4781689 by guest Table 2. MMDT and WFO recommendations after the initial and blinded second reviews Review of breast cancer cases (N 5 638) Concordant cases, n (%) Non-concordant cases, n (%) Recommended For consideration Total Not recommended Not available Total Initial review (T1MMDT versus T2WFO) 296 (46) 167 (26) 463 (73) 137 (21) 38 (6) 175 (27) Second review (T2MMDT versus T2WFO) 397 (62) 194 (30) 591 (93) 36 (5) 11 (2) 47 (7) T1MMDT, original MMDT recommendation from 2014 to 2016; T2WFO, WFO advisor treatment recommendation in 2016; T2MMDT, MMDT treatment recom- mendation in 2016; MMDT, Manipal multidisciplinary tumor board; WFO, Watson for Oncology. 31% 18% 1% 2% 33% 5% 31% 6% 0% 10% 20% Not available Not recommended RecommendedFor consideration 30% 40% 50% 60% 70% 80% 90% 100% 8% 25% 61% 64% 64% 29% 51% 62% Concordance, 93% Concordance, 80% Concordance, 97% Concordance, 95% Concordance, 86% 2% 2% Overall (n=638) Stage I (n=61) Stage II (n=262) Stage III (n=191) Stage IV (n=124) 5% Figure 1. Treatment concordance between WFO and the MMDT overall and by stage. MMDT, Manipal multidisciplinary tumor board; WFO, Watson for Oncology. 5%Non-metastatic HR(+)HER2/neu(+)Triple(–) Metastatic Non-metastatic Metastatic Non-metastatic Metastatic 10% 1% 2% 1% 5% 20% 20%10% 0% Not applicable Not recommended For consideration Recommended 20% 40% 60% 80% 100% 5% 74% 65% 34% 64% 5% 38% 56% 15% 20% 55% 36% 59% Concordance, 95% Concordance, 75% Concordance, 94% Concordance, 98% Concordance, 94% Concordance, 85% Figure 2. Treatment concordance between WFO and the MMDT by stage and receptor status. HER2/neu, human epidermal growth factor receptor 2; HR, hormone receptor; MMDT, Manipal multidisciplinary tumor board; WFO, Watson for Oncology. Annals of Oncology Original article
  • 59.
    잠정적 결론 •왓슨 포온콜로지와 의사의 일치율: •암종별로 다르다. •같은 암종에서도 병기별로 다르다. •같은 암종에 대해서도 병원별/국가별로 다르다. •시간이 흐름에 따라 달라질 가능성이 있다.
  • 60.
    원칙이 필요하다 •어떤 환자의경우, 왓슨에게 의견을 물을 것인가? •왓슨을 (암종별로) 얼마나 신뢰할 것인가? •왓슨의 의견을 환자에게 공개할 것인가? •왓슨과 의료진의 판단이 다른 경우 어떻게 할 것인가? •왓슨에게 보험 급여를 매길 수 있는가? 이러한 기준에 따라 의료의 질/치료효과가 달라질 수 있으나, 현재 개별 병원이 개별적인 기준으로 활용하게 됨
  • 61.
    Empowering the OncologyCommunity for Cancer Care Genomics Oncology Clinical Trial Matching Watson Health’s oncology clients span more than 35 hospital systems “Empowering the Oncology Community for Cancer Care” Andrew Norden, KOTRA Conference, March 2017, “The Future of Health is Cognitive”
  • 62.
    IBM Watson Health Watsonfor Clinical Trial Matching (CTM) 18 1. According to the National Comprehensive Cancer Network (NCCN) 2. http://csdd.tufts.edu/files/uploads/02_-_jan_15,_2013_-_recruitment-retention.pdf© 2015 International Business Machines Corporation Searching across eligibility criteria of clinical trials is time consuming and labor intensive Current Challenges Fewer than 5% of adult cancer patients participate in clinical trials1 37% of sites fail to meet minimum enrollment targets. 11% of sites fail to enroll a single patient 2 The Watson solution • Uses structured and unstructured patient data to quickly check eligibility across relevant clinical trials • Provides eligible trial considerations ranked by relevance • Increases speed to qualify patients Clinical Investigators (Opportunity) • Trials to Patient: Perform feasibility analysis for a trial • Identify sites with most potential for patient enrollment • Optimize inclusion/exclusion criteria in protocols Faster, more efficient recruitment strategies, better designed protocols Point of Care (Offering) • Patient to Trials: Quickly find the right trial that a patient might be eligible for amongst 100s of open trials available Improve patient care quality, consistency, increased efficiencyIBM Confidential
  • 63.
    •총 16주간 HOG(Highlands Oncology Group)의 폐암과 유방암 환자 2,620명을 대상 •90명의 환자를 3개의 노바티스 유방암 임상 프로토콜에 따라 선별 •임상 시험 코디네이터: 1시간 50분 •Watson CTM: 24분 (78% 시간 단축) •Watson CTM은 임상 시험 기준에 해당되지 않는 환자 94%를 자동으로 스크리닝
  • 64.
    •메이요 클리닉의 유방암신약 임상시험에 등록자의 수가 80% 증가하였다는 결과 발표
  • 65.
    •2018년 1월 구글이전자의무기록(EMR)을 분석하여, 환자 치료 결과를 예측하는 인공지능 발표 •환자가 입원 중에 사망할 것인지 •장기간 입원할 것인지 •퇴원 후에 30일 내에 재입원할 것인지 •퇴원 시의 진단명
 •이번 연구의 특징: 확장성 •과거 다른 연구와 달리 EMR의 일부 데이터를 pre-processing 하지 않고, •전체 EMR 를 통채로 모두 분석하였음: UCSF, UCM (시카고 대학병원) •특히, 비정형 데이터인 의사의 진료 노트도 분석
  • 66.
    Figure 4: Thepatient record shows a woman with metastatic breast cancer with malignant pleural e usions and empyema. The patient timeline at the top of the figure contains circles for every time-step for which at least a single token exists for the patient, and the horizontal lines show the data-type. There is a close-up view of the most recent data-points immediately preceding a prediction made 24 hours after admission. We trained models for each data-type and highlighted in red the tokens which the models attended to – the non-highlighted text was not attended to but is shown for context. The models pick up features in the medications, nursing flowsheets, and clinical notes to make the prediction. • TAAN(Time-Aware Neural Nework)를 이용하여, • 전이성 유방암 환자의 EMR에서 어떤 부분을 인공지능이 더 유의하게 보았는지를 표시해본 결과, • 실제로 사망 위험도와 관계가 높은 데이터를 더 중요하게 보았음 • 진료 기록: 농양(empyema), 흉수(pleural effusions) 등 • 간호 기록: 반코마이신, 메트로니다졸 등의 항생제 투약, 욕창(pressure ulcer)의 위험이 높음 • 흉부에 삽입하는 튜브(카테터)의 상표인 'PleurX'도 중요 단어로 파악
  • 67.
    • 복잡한 의료데이터의 분석 및 insight 도출 • 영상 의료/병리 데이터의 분석/판독 • 연속 데이터의 모니터링 및 예방/예측 의료 인공지능의 세 유형
  • 68.
  • 69.
    인공지능 기계학습 딥러닝 전문가 시스템 사이버네틱스 … 인공신경망 결정트리 서포트 벡터머신 … 컨볼루션 신경망 (CNN) 순환신경망(RNN) … 인공지능과 딥러닝의 관계
  • 70.
    페이스북의 딥페이스 Taigman,Y. etal. (2014). DeepFace: Closing the Gap to Human-Level Performance in FaceVerification, CVPR’14. Figure 2. Outline of the DeepFace architecture. A front-end of a single convolution-pooling-convolution filtering on the rectified input, followed by three locally-connected layers and two fully-connected layers. Colors illustrate feature maps produced at each layer. The net includes more than 120 million parameters, where more than 95% come from the local and fully connected layers. very few parameters. These layers merely expand the input into a set of simple local features. The subsequent layers (L4, L5 and L6) are instead lo- cally connected [13, 16], like a convolutional layer they ap- ply a filter bank, but every location in the feature map learns a different set of filters. Since different regions of an aligned image have different local statistics, the spatial stationarity The goal of training is to maximize the probability of the correct class (face id). We achieve this by minimiz- ing the cross-entropy loss for each training sample. If k is the index of the true label for a given input, the loss is: L = log pk. The loss is minimized over the parameters by computing the gradient of L w.r.t. the parameters and Human: 95% vs. DeepFace in Facebook: 97.35% Recognition Accuracy for Labeled Faces in the Wild (LFW) dataset (13,233 images, 5,749 people)
  • 71.
    Schroff, F. etal. (2015). FaceNet:A Unified Embedding for Face Recognition and Clustering Human: 95% vs. FaceNet of Google: 99.63% Recognition Accuracy for Labeled Faces in the Wild (LFW) dataset (13,233 images, 5,749 people) False accept False reject s. This shows all pairs of images that were on LFW. Only eight of the 13 errors shown he other four are mislabeled in LFW. on Youtube Faces DB ge similarity of all pairs of the first one our face detector detects in each video. False accept False reject Figure 6. LFW errors. This shows all pairs of images that were incorrectly classified on LFW. Only eight of the 13 errors shown here are actual errors the other four are mislabeled in LFW. 5.7. Performance on Youtube Faces DB We use the average similarity of all pairs of the first one hundred frames that our face detector detects in each video. This gives us a classification accuracy of 95.12%±0.39. Using the first one thousand frames results in 95.18%. Compared to [17] 91.4% who also evaluate one hundred frames per video we reduce the error rate by almost half. DeepId2+ [15] achieved 93.2% and our method reduces this error by 30%, comparable to our improvement on LFW. 5.8. Face Clustering Our compact embedding lends itself to be used in order to cluster a users personal photos into groups of people with the same identity. The constraints in assignment imposed by clustering faces, compared to the pure verification task, lead to truly amazing results. Figure 7 shows one cluster in a users personal photo collection, generated using agglom- erative clustering. It is a clear showcase of the incredible invariance to occlusion, lighting, pose and even age. Figure 7. Face Clustering. Shown is an exemplar cluster for one user. All these images in the users personal photo collection were clustered together. 6. Summary We provide a method to directly learn an embedding into an Euclidean space for face verification. This sets it apart from other methods [15, 17] who use the CNN bottleneck layer, or require additional post-processing such as concate- nation of multiple models and PCA, as well as SVM clas- sification. Our end-to-end training both simplifies the setup and shows that directly optimizing a loss relevant to the task at hand improves performance. Another strength of our model is that it only requires False accept False reject Figure 6. LFW errors. This shows all pairs of images that were incorrectly classified on LFW. Only eight of the 13 errors shown here are actual errors the other four are mislabeled in LFW. 5.7. Performance on Youtube Faces DB We use the average similarity of all pairs of the first one hundred frames that our face detector detects in each video. This gives us a classification accuracy of 95.12%±0.39. Using the first one thousand frames results in 95.18%. Compared to [17] 91.4% who also evaluate one hundred frames per video we reduce the error rate by almost half. DeepId2+ [15] achieved 93.2% and our method reduces this error by 30%, comparable to our improvement on LFW. 5.8. Face Clustering Our compact embedding lends itself to be used in order to cluster a users personal photos into groups of people with the same identity. The constraints in assignment imposed by clustering faces, compared to the pure verification task, Figure 7. Face Clustering. Shown is an exemplar cluster for one user. All these images in the users personal photo collection were clustered together. 6. 
Summary We provide a method to directly learn an embedding into an Euclidean space for face verification. This sets it apart from other methods [15, 17] who use the CNN bottleneck layer, or require additional post-processing such as concate- nation of multiple models and PCA, as well as SVM clas- 구글의 페이스넷
  • 72.
    바이두의 얼굴 인식인공지능 Jingtuo Liu (2015) Targeting Ultimate Accuracy: Face Recognition via Deep Embedding Human: 95% vs.Baidu: 99.77% Recognition Accuracy for Labeled Faces in the Wild (LFW) dataset (13,233 images, 5,749 people) 3 Although several algorithms have achieved nearly perfect accuracy in the 6000-pair verification task, a more practical can achieve 95.8% identification rate, relatively reducing the error rate by about 77%. TABLE 3. COMPARISONS WITH OTHER METHODS ON SEVERAL EVALUATION TASKS Score = -0.060 (pair #113) Score = -0.022 (pair #202) Score = -0.034 (pair #656) Score = -0.031 (pair #1230) Score = -0.073 (pair #1862) Score = -0.091(pair #2499) Score = -0.024 (pair #2551) Score = -0.036 (pair #2552) Score = -0.089 (pair #2610) Method Performance on tasks Pair-wise Accuracy(%) Rank-1(%) DIR(%) @ FAR =1% Verification(% )@ FAR=0.1% Open-set Identification(% )@ Rank = 1,FAR = 0.1% IDL Ensemble Model 99.77 98.03 95.8 99.41 92.09 IDL Single Model 99.68 97.60 94.12 99.11 89.08 FaceNet[12] 99.63 NA NA NA NA DeepID3[9] 99.53 96.00 81.40 NA NA Face++[2] 99.50 NA NA NA NA Facebook[15] 98.37 82.5 61.9 NA NA Learning from Scratch[4] 97.73 NA NA 80.26 28.90 HighDimLBP[10] 95.17 NA NA 41.66(reported in [4]) 18.07(reported in [4]) • 6,000쌍의 얼굴 사진 중에 바이두의 인공지능은 불과 14쌍만을 잘못 판단 • 알고 보니 이 14쌍 중의 5쌍의 사진은 오히려 정답에 오류가 있었고, 
 
 실제로는 인공지능이 정확 (red box)
  • 73.
  • 74.
    •손 엑스레이 영상을판독하여 환자의 골연령 (뼈 나이)를 계산해주는 인공지능 • 기존에 의사는 그룰리히-파일(Greulich-Pyle)법 등으로 표준 사진과 엑스레이를 비교하여 판독 • 인공지능은 참조표준영상에서 성별/나이별 패턴을 찾아서 유사성을 확률로 표시 + 표준 영상 검색 •의사가 성조숙증이나 저성장을 진단하는데 도움을 줄 수 있음
  • 75.
    - 1 - 보도 자 료 국내에서 개발한 인공지능(AI) 기반 의료기기 첫 허가 - 인공지능 기술 활용하여 뼈 나이 판독한다 - 식품의약품안전처 처장 류영진 는 국내 의료기기업체 주 뷰노가 개발한 인공지능 기술이 적용된 의료영상분석장치소프트웨어 뷰노메드 본에이지 를 월 일 허가했다고 밝혔습니다 이번에 허가된 뷰노메드 본에이지 는 인공지능 이 엑스레이 영상을 분석하여 환자의 뼈 나이를 제시하고 의사가 제시된 정보 등으로 성조숙증이나 저성장을 진단하는데 도움을 주는 소프트웨어입니다 그동안 의사가 환자의 왼쪽 손 엑스레이 영상을 참조표준영상 과 비교하면서 수동으로 뼈 나이를 판독하던 것을 자동화하여 판독시간을 단축하였습니다 이번 허가 제품은 년 월부터 빅데이터 및 인공지능 기술이 적용된 의료기기의 허가 심사 가이드라인 적용 대상으로 선정되어 임상시험 설계에서 허가까지 맞춤 지원하였습니다 뷰노메드 본에이지 는 환자 왼쪽 손 엑스레이 영상을 분석하여 의 료인이 환자 뼈 나이를 판단하는데 도움을 주기 위한 목적으로 허가되었습니다 - 2 - 분석은 인공지능이 촬영된 엑스레이 영상의 패턴을 인식하여 성별 남자 개 여자 개 로 분류된 뼈 나이 모델 참조표준영상에서 성별 나이별 패턴을 찾아 유사성을 확률로 표시하면 의사가 확률값 호르몬 수치 등의 정보를 종합하여 성조숙증이나 저성장을 진단합 니다 임상시험을 통해 제품 정확도 성능 를 평가한 결과 의사가 판단한 뼈 나이와 비교했을 때 평균 개월 차이가 있었으며 제조업체가 해당 제품 인공지능이 스스로 인지 학습할 수 있도록 영상자료를 주기적으로 업데이트하여 의사와의 오차를 좁혀나갈 수 있도록 설계되었습니다 인공지능 기반 의료기기 임상시험계획 승인건수는 이번에 허가받은 뷰노메드 본에이지 를 포함하여 현재까지 건입니다 임상시험이 승인된 인공지능 기반 의료기기는 자기공명영상으로 뇌경색 유형을 분류하는 소프트웨어 건 엑스레이 영상을 통해 폐결절 진단을 도와주는 소프트웨어 건 입니다 참고로 식약처는 인공지능 가상현실 프린팅 등 차 산업과 관련된 의료기기 신속한 개발을 지원하기 위하여 제품 연구 개발부터 임상시험 허가에 이르기까지 전 과정을 맞춤 지원하는 차세대 프로젝트 신개발 의료기기 허가도우미 등을 운영하고 있 습니다 식약처는 이번 제품 허가를 통해 개개인의 뼈 나이를 신속하게 분석 판정하는데 도움을 줄 수 있을 것이라며 앞으로도 첨단 의료기기 개발이 활성화될 수 있도록 적극적으로 지원해 나갈 것이라고 밝혔습니다
  • 76.
    저는 뷰노의 자문을맡고 있으며, 지분 관계가 있음을 밝힙니다
  • 77.
    AJR:209, December 20171 Since 1992, concerns regarding interob- server variability in manual bone age esti- mation [4] have led to the establishment of several automatic computerized methods for bone age estimation, including computer-as- sisted skeletal age scores, computer-aided skeletal maturation assessment systems, and BoneXpert (Visiana) [5–14]. BoneXpert was developed according to traditional machine- learning techniques and has been shown to have a good performance for patients of var- ious ethnicities and in various clinical set- tings [10–14]. The deep-learning technique is an improvement in artificial neural net- works. Unlike traditional machine-learning techniques, deep-learning techniques allow an algorithm to program itself by learning from the images given a large dataset of la- beled examples, thus removing the need to specify rules [15]. Deep-learning techniques permit higher levels of abstraction and improved predic- tions from data. Deep-learning techniques Computerized Bone Age Estimation Using Deep Learning– Based Program: Evaluation of the Accuracy and Efficiency Jeong Rye Kim1 Woo Hyun Shim1 Hee Mang Yoon1 Sang Hyup Hong1 Jin Seong Lee1 Young Ah Cho1 Sangki Kim2 Kim JR, Shim WH, Yoon MH, et al. 1 Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-ro 43-gil, Songpa-gu, Seoul 05505, South Korea. Address correspondence to H. M. Yoon (espoirhm@gmail.com). 2 Vuno Research Center, Vuno Inc., Seoul, South Korea. Pediatric Imaging • Original Research Supplemental Data Available online at www.ajronline.org. AJR 2017; 209:1–7 0361–803X/17/2096–1 © American Roentgen Ray Society B one age estimation is crucial for developmental status determina- tions and ultimate height predic- tions in the pediatric population, particularly for patients with growth disor- ders and endocrine abnormalities [1]. Two major left-hand wrist radiograph-based methods for bone age estimation are current- ly used: the Greulich-Pyle [2] and Tanner- Whitehouse [3] methods. The former is much more frequently used in clinical practice. Greulich-Pyle–based bone age estimation is performed by comparing a patient’s left-hand radiograph to standard radiographs in the Greulich-Pyle atlas and is therefore simple and easily applied in clinical practice. How- ever, the process of bone age estimation, which comprises a simple comparison of multiple images, can be repetitive and time consuming and is thus sometimes burden- some to radiologists. Moreover, the accuracy depends on the radiologist’s experience and tends to be subjective. Keywords: bone age, children, deep learning, neural network model DOI:10.2214/AJR.17.18224 J. R. Kim and W. H. Shim contributed equally to this work. Received March 12, 2017; accepted after revision July 7, 2017. S. Kim is employed by Vuno, Inc., which created the deep learning–based automatic software system for bone age determination. J. R. Kim, W. H. Shim, H. M. Yoon, S. H. Hong, J. S. Lee, and Y. A. Cho are employed by Asan Medical Center, which holds patent rights for the deep learning–based automatic software system for bone age assessment. OBJECTIVE. The purpose of this study is to evaluate the accuracy and efficiency of a new automatic software system for bone age assessment and to validate its feasibility in clini- cal practice. MATERIALS AND METHODS. A Greulich-Pyle method–based deep-learning tech- nique was used to develop the automatic software system for bone age determination. 
Using this software, bone age was estimated from left-hand radiographs of 200 patients (3–17 years old) using first-rank bone age (software only), computer-assisted bone age (two radiologists with software assistance), and Greulich-Pyle atlas–assisted bone age (two radiologists with Greulich-Pyle atlas assistance only). The reference bone age was determined by the consen- sus of two experienced radiologists. RESULTS. First-rank bone ages determined by the automatic software system showed a 69.5% concordance rate and significant correlations with the reference bone age (r = 0.992; p < 0.001). Concordance rates increased with the use of the automatic software system for both reviewer 1 (63.0% for Greulich-Pyle atlas–assisted bone age vs 72.5% for computer-as- sisted bone age) and reviewer 2 (49.5% for Greulich-Pyle atlas–assisted bone age vs 57.5% for computer-assisted bone age). Reading times were reduced by 18.0% and 40.0% for reviewers 1 and 2, respectively. CONCLUSION. Automatic software system showed reliably accurate bone age estima- tions and appeared to enhance efficiency by reducing reading times without compromising the diagnostic accuracy. Kim et al. Accuracy and Efficiency of Computerized Bone Age Estimation Pediatric Imaging Original Research Downloadedfromwww.ajronline.orgbyFloridaAtlanticUnivon09/13/17fromIPaddress131.91.169.193.CopyrightARRS.Forpersonaluseonly;allrightsreserved • 총 환자의 수: 200명 • 레퍼런스: 경험 많은 소아영상의학과 전문의 2명(18년, 4년 경력)의 컨센서스 • 의사A: 소아영상 세부전공한 영상의학 전문의 (500례 이상의 판독 경험) • 의사B: 영상의학과 2년차 전공의 (판독법 하루 교육 이수 + 20례 판독) • 인공지능: VUNO의 골연령 판독 딥러닝 AJR Am J Roentgenol. 2017 Dec;209(6):1374-1380.
  • 78.
    40 50 60 70 80 인공지능 의사 A의사 B 69.5% 63% 49.5% 정확도(%) 영상의학과 펠로우 (소아영상 세부전공) 영상의학과 2년차 전공의 인공지능 vs 의사 AJR Am J Roentgenol. 2017 Dec;209(6):1374-1380. • 총 환자의 수: 200명 • 의사A: 소아영상 세부전공한 영상의학 전문의 (500례 이상의 판독 경험) • 의사B: 영상의학과 2년차 전공의 (판독법 하루 교육 이수 + 20례 판독) • 레퍼런스: 경험 많은 소아영상의학과 전문의 2명(18년, 4년 경력)의 컨센서스 • 인공지능: VUNO의 골연령 판독 딥러닝 골연령 판독에 인간 의사와 인공지능의 시너지 효과 Digital Healthcare Institute Director,Yoon Sup Choi, PhD yoonsup.choi@gmail.com
  • 79.
    40 50 60 70 80 인공지능 의사 A의사 B 40 50 60 70 80 의사 A 
 + 인공지능 의사 B 
 + 인공지능 69.5% 63% 49.5% 72.5% 57.5% 정확도(%) 영상의학과 펠로우 (소아영상 세부전공) 영상의학과 2년차 전공의 인공지능 vs 의사 인공지능 + 의사 AJR Am J Roentgenol. 2017 Dec;209(6):1374-1380. • 총 환자의 수: 200명 • 의사A: 소아영상 세부전공한 영상의학 전문의 (500례 이상의 판독 경험) • 의사B: 영상의학과 2년차 전공의 (판독법 하루 교육 이수 + 20례 판독) • 레퍼런스: 경험 많은 소아영상의학과 전문의 2명(18년, 4년 경력)의 컨센서스 • 인공지능: VUNO의 골연령 판독 딥러닝 골연령 판독에 인간 의사와 인공지능의 시너지 효과 Digital Healthcare Institute Director,Yoon Sup Choi, PhD yoonsup.choi@gmail.com
  • 80.
    총 판독 시간(m) 0 50 100 150 200 w/o AI w/ AI 0 50 100 150 200 w/o AI w/ AI 188m 154m 180m 108m saving 40% of time saving 18% of time 의사 A 의사 B 골연령 판독에서 인공지능을 활용하면 판독 시간의 절감도 가능 • 총 환자의 수: 200명 • 의사A: 소아영상 세부전공한 영상의학 전문의 (500례 이상의 판독 경험) • 의사B: 영상의학과 2년차 전공의 (판독법 하루 교육 이수 + 20례 판독) • 레퍼런스: 경험 많은 소아영상의학과 전문의 2명(18년, 4년 경력)의 컨센서스 • 인공지능: VUNO의 골연령 판독 딥러닝 AJR Am J Roentgenol. 2017 Dec;209(6):1374-1380. Digital Healthcare Institute Director,Yoon Sup Choi, PhD yoonsup.choi@gmail.com
  • 81.
Development and Validation of Deep Learning–based Automatic Detection Algorithm for Malignant Pulmonary Nodules on Chest Radiographs
Ju Gang Nam, MD* • Sunggyun Park, PhD* • Eui Jin Hwang, MD • Jong Hyuk Lee, MD • Kwang-Nam Jin, MD, PhD • Kun Young Lim, MD, PhD • Thienkai Huy Vu, MD, PhD • Jae Ho Sohn, MD • Sangheum Hwang, PhD • Jin Mo Goo, MD, PhD • Chang Min Park, MD, PhD
Radiology 2018; 00:1–11 • https://doi.org/10.1148/radiol.2018180237
Purpose: To develop and validate a deep learning–based automatic detection algorithm (DLAD) for malignant pulmonary nodules on chest radiographs and to compare its performance with physicians including thoracic radiologists.
Materials and Methods: For this retrospective study, DLAD was developed by using 43292 chest radiographs (normal radiograph–to–nodule radiograph ratio, 34067:9225) in 34676 patients (healthy-to-nodule ratio, 30784:3892; 19230 men [mean age, 52.8 years; age range, 18–99 years]; 15446 women [mean age, 52.3 years; age range, 18–98 years]) obtained between 2010 and 2015, which were labeled and partially annotated by 13 board-certified radiologists, in a convolutional neural network. Radiograph classification and nodule detection performances of DLAD were validated by using one internal and four external data sets from three South Korean hospitals and one U.S. hospital. For internal and external validation, radiograph classification and nodule detection performances of DLAD were evaluated by using the area under the receiver operating characteristic curve (AUROC) and jackknife alternative free-response receiver-operating characteristic (JAFROC) figure of merit (FOM), respectively. An observer performance test involving 18 physicians, including nine board-certified radiologists, was conducted by using one of the four external validation data sets. Performances of DLAD, physicians, and physicians assisted with DLAD were evaluated and compared.
Results: According to one internal and four external validation data sets, radiograph classification and nodule detection performances of DLAD were a range of 0.92–0.99 (AUROC) and 0.831–0.924 (JAFROC FOM), respectively. DLAD showed a higher AUROC and JAFROC FOM at the observer performance test than 17 of 18 and 15 of 18 physicians, respectively (P < .05), and all physicians showed improved nodule detection performances with DLAD (mean JAFROC FOM improvement, 0.043; range, 0.006–0.190; P < .05).
Conclusion: This deep learning–based automatic detection algorithm outperformed physicians in radiograph classification and nodule detection performance for malignant pulmonary nodules on chest radiographs, and it enhanced physicians' performances when used as a second reader.
©RSNA, 2018
  • 82.
(Nam et al., Radiology 2018: 앞 슬라이드의 논문)
• 43,292 chest PA (normal:nodule = 34,067:9,225)
• labeled/annotated by 13 board-certified radiologists
• DLAD was validated on 1 internal + 4 external datasets
• 서울대병원 / 보라매병원 / 국립암센터 / UCSF
• Classification / Lesion localization
• 인공지능 vs. 의사 vs. 인공지능 + 의사
• 다양한 수준의 의사와 비교
• non-radiology physicians / radiology residents
• board-certified radiologists / thoracic radiologists
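이 연구의 평가 틀(인공지능 단독 vs. 의사 단독의 분류 성능 비교)을 구체적으로 이해할 수 있도록, AUROC를 계산하는 최소한의 예시 코드를 덧붙인다. 아래 코드는 논문이나 Lunit의 실제 구현이 아니라 scikit-learn과 가상의 난수 데이터를 가정한 스케치이며, y_true, ai_score, reader_score 같은 변수명도 설명을 위해 임의로 붙인 것이다.

```python
# 가정: 결절 유무(y_true), 인공지능의 확률 출력(ai_score),
# 판독의의 confidence rating(reader_score)이 환자 단위로 주어졌다고 가정한 스케치.
# 실제 논문(Radiology 2018)의 JAFROC 분석은 병변 단위 localization까지 포함하므로 훨씬 복잡하다.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200
y_true = rng.integers(0, 2, size=n)                                    # 1 = 결절 있음, 0 = 정상 (가상 데이터)
ai_score = np.clip(y_true * 0.6 + rng.normal(0.3, 0.20, n), 0, 1)      # 가상의 AI 확률
reader_score = np.clip(y_true * 0.5 + rng.normal(0.3, 0.25, n), 0, 1)  # 가상의 판독의 점수

print("AI standalone AUROC:", round(roc_auc_score(y_true, ai_score), 3))
print("Reader alone AUROC :", round(roc_auc_score(y_true, reader_score), 3))
```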
  • 83.
  • 84.
  • 85.
Deep Learning Automatic Detection Algorithm for Malignant Pulmonary Nodules
Table 3: Patient Classification and Nodule Detection at the Observer Performance Test
(각 행: Test 1 [의사만] AUROC / JAFROC FOM | 인공지능 vs. 의사만 P value [classification, detection] | Test 2 [의사+인공지능] AUROC / JAFROC FOM | 의사 vs. 의사+인공지능 P value [classification, detection])

Nonradiology physicians (비영상의학과 의사: 산부인과·정형외과·내과 4년차 전공의)
• Observer 1: 0.77 / 0.716 | <.001, <.001 | 0.91 / 0.853 | <.001, <.001
• Observer 2: 0.78 / 0.657 | <.001, <.001 | 0.90 / 0.846 | <.001, <.001
• Observer 3: 0.80 / 0.700 | <.001, <.001 | 0.88 / 0.783 | <.001, <.001
• Group: FOM 0.691 (<.001*) → 0.828 (<.001*)

Radiology residents (영상의학과 1~3년차 전공의)
• Observer 4: 0.78 / 0.767 | <.001, <.001 | 0.80 / 0.785 | .02, .03
• Observer 5: 0.86 / 0.772 | .001, <.001 | 0.91 / 0.837 | .02, <.001
• Observer 6: 0.86 / 0.789 | .05, .002 | 0.86 / 0.799 | .08, .54
• Observer 7: 0.84 / 0.807 | .01, .003 | 0.91 / 0.843 | .003, .02
• Observer 8: 0.87 / 0.797 | .10, .003 | 0.90 / 0.845 | .03, .001
• Observer 9: 0.90 / 0.847 | .52, .12 | 0.92 / 0.867 | .04, .03
• Group: FOM 0.790 (<.001*) → 0.867 (<.001*)

Board-certified radiologists (영상의학과 전문의: 7~8년 경력)
• Observer 10: 0.87 / 0.836 | .05, .01 | 0.90 / 0.865 | .004, .002
• Observer 11: 0.83 / 0.804 | <.001, <.001 | 0.84 / 0.817 | .03, .04
• Observer 12: 0.88 / 0.817 | .18, .005 | 0.91 / 0.841 | .01, .01
• Observer 13: 0.91 / 0.824 | >.99, .02 | 0.92 / 0.836 | .51, .24
• Observer 14: 0.88 / 0.834 | .14, .03 | 0.88 / 0.840 | .87, .23
• Group: FOM 0.821 (.02*) → 0.840 (.01*)

Thoracic radiologists (흉부 영상의학과 전문의: 26년, 13년, 9년 경력)
• Observer 15: 0.94 / 0.856 | .15, .21 | 0.96 / 0.878 | .08, .03
• Observer 16: 0.92 / 0.854 | .60, .17 | 0.93 / 0.872 | .34, .02
• Observer 17: 0.86 / 0.820 | .02, .01 | 0.88 / 0.838 | .14, .12
• Observer 18: 0.84 / 0.800 | <.001, <.001 | 0.87 / 0.827 | .02, .02
• Group: FOM 0.833 (.08*) → 0.854 (<.001*)

Note.—Observer 4 had 1 year of experience; observers 5 and 6 had 2 years; observers 7–9 had 3 years; observers 10–12 had 7 years; observers 13 and 14 had 8 years; observer 15 had 26 years; observer 16 had 13 years; and observers 17 and 18 had 9 years. Observers 1–3 were 4th-year residents from obstetrics and gynecology, orthopedic surgery, and internal medicine.
  • 86.
• 인공지능을 second reader로 활용하면 정확도가 개선
• Classification: 17 of 18명이 개선 (15 of 18은 P<0.05)
• Nodule detection: 18 of 18명이 개선 (14 of 18은 P<0.05)
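인공지능을 second reader로 활용했을 때 성능이 어떻게 달라지는지 감을 잡기 위한 개념적 스케치를 덧붙인다. 실제 observer performance test에서는 판독의가 AI의 표시를 보고 스스로 판독을 수정하지만, 아래 코드에서는 단순화를 위해 '판독의 점수와 AI 점수의 가중 평균'이라는 가상의 결합 규칙과 난수 데이터를 가정했다.

```python
# 가정: 같은 환자들에 대한 정답(y_true), 판독의 점수(reader_score), AI 점수(ai_score)가 있을 때,
# "AI를 참고한 판독"을 두 점수의 가중 평균으로 근사해 보는 가상의 예시.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200
y_true = rng.integers(0, 2, size=n)
ai_score = np.clip(y_true * 0.6 + rng.normal(0.3, 0.20, n), 0, 1)      # 가상의 AI 확률
reader_score = np.clip(y_true * 0.4 + rng.normal(0.3, 0.25, n), 0, 1)  # 가상의 판독의 점수

def assisted_score(reader, ai, weight=0.5):
    """판독의 점수와 AI 점수를 가중 평균하는 '가상의' 결합 규칙 (실제 연구 방식 아님)."""
    return weight * np.asarray(reader) + (1 - weight) * np.asarray(ai)

print("Reader alone AUROC:", round(roc_auc_score(y_true, reader_score), 3))
print("Reader + AI AUROC :", round(roc_auc_score(y_true, assisted_score(reader_score, ai_score)), 3))
```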
  • 87.
• 인공지능(DLAD) 단독: 0.91 / 0.885 (Table 3과 같은 기준의 AUROC / JAFROC FOM)
  • 88.
• 인공지능(DLAD) 단독: 0.91 / 0.885 (AUROC / JAFROC FOM)
• "인공지능 혼자" 판독한 것이 "영상의학과 전문의 + 인공지능"보다 대부분 더 정확
• Classification: 전문의 9명 중 6명보다 나음
• Nodule detection: 전문의 9명 전원보다 나음
  • 89.
  • 90.
당뇨성 망막병증
• 당뇨병의 대표적 합병증: 당뇨병력이 30년 이상인 환자의 90%에서 발병
• 안과 전문의들이 안저(안구의 안쪽)를 사진으로 찍어서 판독
• 망막 내 미세혈관 생성, 출혈, 삼출물 정도를 파악하여 진단
  • 91.
Case Study: TensorFlow in Medicine - Retinal Imaging (TensorFlow Dev Summit 2017)
  • 92.
Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs
Varun Gulshan, PhD; Lily Peng, MD, PhD; Marc Coram, PhD; Martin C. Stumpe, PhD; Derek Wu, BS; Arunachalam Narayanaswamy, PhD; Subhashini Venugopalan, MS; Kasumi Widner, MS; Tom Madams, MEng; Jorge Cuadros, OD, PhD; Ramasamy Kim, OD, DNB; Rajiv Raman, MS, DNB; Philip C. Nelson, BS; Jessica L. Mega, MD, MPH; Dale R. Webster, PhD
IMPORTANCE Deep learning is a family of computational methods that allow an algorithm to program itself by learning from a large set of examples that demonstrate the desired behavior, removing the need to specify rules explicitly. Application of these methods to medical imaging requires further assessment and validation.
OBJECTIVE To apply deep learning to create an algorithm for automated detection of diabetic retinopathy and diabetic macular edema in retinal fundus photographs.
DESIGN AND SETTING A specific type of neural network optimized for image classification called a deep convolutional neural network was trained using a retrospective development data set of 128,175 retinal images, which were graded 3 to 7 times for diabetic retinopathy, diabetic macular edema, and image gradability by a panel of 54 US licensed ophthalmologists and ophthalmology senior residents between May and December 2015. The resultant algorithm was validated in January and February 2016 using 2 separate data sets, both graded by at least 7 US board-certified ophthalmologists with high intragrader consistency.
EXPOSURE Deep learning–trained algorithm.
MAIN OUTCOMES AND MEASURES The sensitivity and specificity of the algorithm for detecting referable diabetic retinopathy (RDR), defined as moderate and worse diabetic retinopathy, referable diabetic macular edema, or both, were generated based on the reference standard of the majority decision of the ophthalmologist panel. The algorithm was evaluated at 2 operating points selected from the development set, one selected for high specificity and another for high sensitivity.
RESULTS The EyePACS-1 data set consisted of 9963 images from 4997 patients (mean age, 54.4 years; 62.2% women; prevalence of RDR, 683/8878 fully gradable images [7.8%]); the Messidor-2 data set had 1748 images from 874 patients (mean age, 57.6 years; 42.6% women; prevalence of RDR, 254/1745 fully gradable images [14.6%]). For detecting RDR, the algorithm had an area under the receiver operating curve of 0.991 (95% CI, 0.988-0.993) for EyePACS-1 and 0.990 (95% CI, 0.986-0.995) for Messidor-2. Using the first operating cut point with high specificity, for EyePACS-1, the sensitivity was 90.3% (95% CI, 87.5%-92.7%) and the specificity was 98.1% (95% CI, 97.8%-98.5%). For Messidor-2, the sensitivity was 87.0% (95% CI, 81.1%-91.0%) and the specificity was 98.5% (95% CI, 97.7%-99.1%). Using a second operating point with high sensitivity in the development set, for EyePACS-1 the sensitivity was 97.5% and specificity was 93.4% and for Messidor-2 the sensitivity was 96.1% and specificity was 93.9%.
CONCLUSIONS AND RELEVANCE In this evaluation of retinal fundus photographs from adults with diabetes, an algorithm based on deep machine learning had high sensitivity and specificity for detecting referable diabetic retinopathy. Further research is necessary to determine the feasibility of applying this algorithm in the clinical setting and to determine whether use of the algorithm could lead to improved care and outcomes compared with current ophthalmologic assessment.
JAMA. doi:10.1001/jama.2016.17216. Published online November 29, 2016.
  • 93.
안저 판독 인공지능의 개발
• CNN으로 후향적으로 수집한 128,175개의 안저 이미지를 학습
• 미국의 안과 전문의 54명이 3~7회 판독한 데이터
• 우수한 안과 전문의 7~8명의 판독 결과와 인공지능의 판독 결과를 비교
• 검증 데이터셋: EyePACS-1 (9,963개), Messidor-2 (1,748개)
(eFigure 2. Screenshot of the Second Screen of the Grading Tool, Which Asks Graders to Assess the Image for DR, DME and Other Notable Conditions or Findings)
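참고로, 구글 연구진이 사용한 것과 같은 계열의 Inception-v3 아키텍처로 안저 사진의 referable DR 여부를 이진 분류하는 모델을 구성한다면 대략 아래와 같은 형태가 된다. 입력 해상도, 데이터 파이프라인(train_ds, val_ds), 학습 설정은 모두 가정이며, JAMA 2016 논문 모델의 재현이 아니다.

```python
# 가정: referable DR(이진 라벨)이 붙은 안저 사진으로 Inception-v3를 미세조정하는 스케치.
# 실제 논문의 학습 설정(해상도, 증강, 다중 출력, 앙상블 등)과는 다르다.
import tensorflow as tf

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # 우선 특징 추출기로만 사용 (이후 단계적으로 미세조정 가능)

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # referable DR 확률
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auroc")])

# train_ds / val_ds 는 (이미지, 라벨) 배치를 내는 tf.data.Dataset이라고 가정한 자리표시자.
# model.fit(train_ds, validation_data=val_ds, epochs=10)
model.summary()
```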
  • 94.
안저 판독 인공지능의 정확도
• EyePACS-1과 Messidor-2에서 AUC = 0.991, 0.990
• 7~8명의 안과 전문의와 민감도·특이도가 동일한 수준
• F-score: 0.95 (vs. 인간 의사는 0.91)
Figure 2. Validation Set Performance for Referable Diabetic Retinopathy. Performance of the algorithm (black curve) and ophthalmologists (colored circles) for the presence of referable diabetic retinopathy (moderate or worse diabetic retinopathy or referable diabetic macular edema) on A, EyePACS-1 (8788 fully gradable images; AUC, 99.1%; 95% CI, 98.8%-99.3%) and B, Messidor-2 (1745 fully gradable images; AUC, 99.0%; 95% CI, 98.6%-99.5%). The black diamonds correspond to the sensitivity and specificity of the algorithm at the high-sensitivity and high-specificity operating points. In A, for the high-sensitivity operating point, specificity was 93.4% (95% CI, 92.8%-94.0%) and sensitivity was 97.5% (95% CI, 95.8%-98.7%); for the high-specificity operating point, specificity was 98.1% (95% CI, 97.8%-98.5%) and sensitivity was 90.3% (95% CI, 87.5%-92.7%). In B, for the high-sensitivity operating point, specificity was 93.9% (95% CI, 92.4%-95.3%) and sensitivity was 96.1% (95% CI, 92.4%-98.3%); for the high-specificity operating point, specificity was 98.5% (95% CI, 97.7%-99.1%) and sensitivity was 87.0% (95% CI, 81.1%-91.0%). There were 8 ophthalmologists who graded EyePACS-1 and 7 ophthalmologists who graded Messidor-2. AUC indicates area under the receiver operating characteristic curve.
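논문처럼 하나의 검증 세트 위에서 high-sensitivity / high-specificity 두 개의 operating point를 고르는 과정은, ROC 곡선에서 목표 성능을 만족하는 임계값을 선택하는 문제로 볼 수 있다. 아래는 가상의 데이터와 임의의 목표치(민감도 97% 이상, 특이도 98% 이상)를 가정한 스케치이다.

```python
# 가정: 검증 세트의 정답(y_true)과 모델 확률(p)이 있을 때,
# ROC 곡선에서 고민감도/고특이도 operating point를 고르는 예시 (목표치는 임의로 설정).
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=5000)
p = np.clip(y_true * 0.7 + rng.normal(0.2, 0.15, 5000), 0, 1)  # 가상의 모델 출력

fpr, tpr, thr = roc_curve(y_true, p)

hi_sens_idx = int(np.argmax(tpr >= 0.97))          # 민감도 97% 이상이 되는 첫 지점
hi_spec_idx = int(np.max(np.where(1 - fpr >= 0.98)))  # 특이도 98% 이상을 만족하는 마지막 지점

for name, i in [("high-sensitivity", hi_sens_idx), ("high-specificity", hi_spec_idx)]:
    print(f"{name}: threshold={thr[i]:.3f}, "
          f"sensitivity={tpr[i]:.3f}, specificity={1 - fpr[i]:.3f}")
```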
  • 95.
• 2018년 4월, FDA는 안저 사진을 판독하여 당뇨성 망막병증(DR)을 진단하는 인공지능을 시판 허가
• IDx-DR: 클라우드 기반의 소프트웨어로, Topcon NW400으로 찍은 안저 사진을 판독
• 의사의 개입 없이 안저 사진을 판독하여 DR 여부를 진단
• 두 가지 답 중에 하나를 준다
  • 1) mild DR 이상이 detection되었으니, 의사에게 가봐라
  • 2) mild DR 이상은 없는 것 같으니, 12개월 이후에 다시 검사를 받아봐라
• 임상시험 및 성능
  • 10개 병원에서 멀티센터로 900명 환자의 데이터를 분석
  • 민감도와 특이도가 각각 87.4%, 89.5% (JAMA 논문의 구글 인공지능보다 낮음)
  • FDA가 de novo premarket review pathway로 진행
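IDx-DR처럼 의사의 개입 없이 두 가지 권고 중 하나만 반환하는 '자율 진단' 소프트웨어의 출력 로직을 아주 단순화하면 아래와 같은 형태가 된다. 함수명과 임계값은 설명을 위한 가정일 뿐, 실제 제품의 내부 로직과는 무관하다.

```python
# 가정: 모델이 'mild 이상 DR'의 확률을 출력한다고 보고, 검증 단계에서 정한
# 운영 임계값(threshold)에 따라 두 가지 권고 중 하나를 돌려주는 단순화된 가상의 예시.
def idx_dr_style_recommendation(p_more_than_mild_dr: float,
                                threshold: float = 0.5) -> str:
    """IDx-DR 방식의 이분화된 권고를 흉내 낸 가상의 함수 (실제 제품 로직 아님)."""
    if p_more_than_mild_dr >= threshold:
        return "More than mild DR detected: refer to an eye care professional."
    return "Negative for more than mild DR: rescreen in 12 months."

for p in (0.1, 0.8):
    print(p, "->", idx_dr_style_recommendation(p))
```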
  • 96.
  • 97.
Diagnostic Concordance Among Pathologists - 유방암 병리 데이터 판독하기
[그림 A~D: 네 가지 판독 범주에 해당하는 유방 조직 사례]
Benign without atypia / Atypia / DCIS (ductal carcinoma in situ) / Invasive carcinoma
Interpretation?
Elmore et al. JAMA 2015
  • 98.
유방암 판독에 대한 병리학과 전문의들의 불일치도
Figure 4. Participating Pathologists' Interpretations of Each of the 240 Breast Biopsy Test Cases
• A: Benign without atypia - 72 cases, 2070 total interpretations
• B: Atypia - 72 cases, 2070 total interpretations
• C: DCIS - 73 cases, 2097 total interpretations
• D: Invasive carcinoma - 23 cases, 663 total interpretations
(각 증례에 대한 판독 결과를 Benign without atypia / Atypia / DCIS / Invasive carcinoma 비율(%)로 표시. DCIS indicates ductal carcinoma in situ.)
Elmore et al. JAMA 2015
  • 99.
유방암 판독에 대한 병리학과 전문의들의 불일치도 (Elmore et al. JAMA 2015)
• 정확도: 75.3% (정답은 경험이 많은 세 명의 병리학과 전문의가 협의를 통해 정하였음)
• 총 240개의 병리 샘플에 대해서, 115명의 병리학과 전문의가 판독한 총 6,900건의 사례를 정답과 비교
Figure 3. Comparison of 115 Participating Pathologists' Interpretations vs the Consensus-Derived Reference Diagnosis for 6900 Total Case Interpretations
(행: 합의 기준 진단, 열: 참여 병리학자의 판독. 열 순서: Benign without atypia / Atypia / DCIS / Invasive carcinoma / Total)
• Benign without atypia: 1803 / 200 / 46 / 21 / 2070
• Atypia: 719 / 990 / 353 / 8 / 2070
• DCIS: 133 / 146 / 1764 / 54 / 2097
• Invasive carcinoma: 3 / 0 / 23 / 637 / 663
• Total: 2658 / 1336 / 2186 / 720 / 6900
(일치: 6900건 중 5194건 = 75.3%. 기준 진단은 경험 많은 유방 병리 전문의 3명의 합의로 결정. DCIS indicates ductal carcinoma in situ.)
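위 Figure 3의 혼동행렬 값만으로 75.3%라는 일치도가 어떻게 나오는지 확인해 볼 수 있다. 아래 코드의 숫자는 본문 표의 값을 그대로 옮긴 것이고, 계산 방식(대각선 합 / 전체 판독 수)은 일반적인 일치도 정의를 따른 것이다.

```python
# JAMA 2015 (Elmore et al.) Figure 3의 혼동행렬로 일치도(75.3%)를 재계산하는 예시.
# 행: 합의 기준 진단(consensus reference), 열: 참여 병리학자의 판독.
import numpy as np

labels = ["Benign without atypia", "Atypia", "DCIS", "Invasive carcinoma"]
confusion = np.array([
    [1803, 200,   46,  21],   # reference: Benign without atypia (2070)
    [ 719, 990,  353,   8],   # reference: Atypia (2070)
    [ 133, 146, 1764,  54],   # reference: DCIS (2097)
    [   3,   0,   23, 637],   # reference: Invasive carcinoma (663)
])

total = confusion.sum()              # 6900 interpretations
concordant = np.trace(confusion)     # 5194 (대각선 합)
print(f"Concordance: {concordant}/{total} = {concordant/total:.1%}")

# 과소판독이 특히 심한 범주 확인: Atypia의 약 35%가 Benign으로 판독됨
print("Atypia read as benign:", round(confusion[1, 0] / confusion[1].sum(), 3))
```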
  • 100.
Digital Pathologist (C-Path)
[Figure 1. C-Path 파이프라인 개요 - A: 기본 이미지 처리와 feature 구성 (H&E 이미지를 superpixel로 분할하고 각 superpixel 내에서 핵을 식별), B: epithelial/stromal classifier 구축, C: 상위 수준의 contextual/relational feature 구성 (epithelial 핵 이웃 간의 관계, 형태학적으로 규칙적/비정형인 핵 사이의 관계, epithelium과 stroma 객체의 관계, stromal nuclei와 stromal matrix의 특성 등), D: 수술 후 5년 시점의 생존/사망이 알려진 환자들의 이미지로 L1-regularized logistic regression 기반의 5년 생존(5YS) 예측 모델을 학습하고, 학습에 쓰이지 않은 이미지에 적용해 고위험/저위험을 분류]
• 6,642가지의 유방암의 다양한 정량적인 feature를 사용
• 이 feature들은 표준 morphometric descriptor를 포함할 뿐만 아니라,
• higher-level contextual, relational, global image feature들을 포함
• 두 개의 독립적인 코호트의 TMA 데이터로 검증: NKI (248명), VGH (328명)
Sci Transl Med. 2011 Nov 9;3(108):108ra113
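C-Path의 마지막 단계(D)는 수천 개의 형태학적 feature에 L1 정규화 로지스틱 회귀를 적용해 5년 생존(5YS) 예측 모델을 만드는 것이다. 아래는 이 아이디어를 가상의 난수 데이터로 흉내 낸 스케치로, 실제 C-Path의 feature나 학습 설정과는 무관하다.

```python
# 가정: C-Path처럼 수천 개의 정량적 형태학 feature로 5년 생존 여부를 예측하는
# L1 정규화 로지스틱 회귀의 개념적 스케치 (가상의 난수 데이터, 실제 C-Path 구현 아님).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n_patients, n_features = 300, 6642            # feature 수는 논문에서 언급된 값을 차용
X = rng.normal(size=(n_patients, n_features))
true_w = np.zeros(n_features)
true_w[:20] = rng.normal(size=20)             # 소수의 feature만 실제 예후와 관련 있다고 가정
y = (X @ true_w + rng.normal(scale=2.0, size=n_patients) > 0).astype(int)  # 1 = 5년 내 사망(가상)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# L1 패널티로 대부분의 feature 계수를 0으로 만들어 소수의 예후 feature만 남긴다.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X_tr, y_tr)

print("AUC on held-out set:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))
print("Nonzero coefficients:", int((model.coef_ != 0).sum()), "/", n_features)
```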
  • 101.
[Figure 4. Top stromal features associated with survival - (A) stromal matrix 영역과 주변 영역 간 intensity 차이의 변동성, (B) 핵이 없는 stromal region의 존재, (C) stromal spindle 핵과 stromal round 핵 사이의 평균 상대적 경계. 각 패널의 왼쪽은 좋은 예후, 오른쪽은 나쁜 예후와 연관된 모습]
[Figure 5. Top epithelial features - bootstrap 분석에서 상위에 오른 8가지 epithelial feature (epithelial 핵 주변 픽셀 intensity의 변동성, 비정형 epithelial 핵 간 최대 거리, epithelial 영역의 elliptic fit 등). 왼쪽 패널: 좋은 예후, 오른쪽 패널: 나쁜 예후]
• C-Path를 이용한 5년 생존 예측 모델이 두 유방암 코호트(NKI, VGH) 모두에서 생존과 강한 연관성
• 3개의 stromal feature가 오히려 epithelial feature보다 더 강한 연관성
• Stromal morphologic structure가 유방암의 새로운 예후 예측 인자가 될 수도 있음
Sci Transl Med. 2011 Nov 9;3(108):108ra113
  • 102.
ISBI Grand Challenge on Cancer Metastases Detection in Lymph Node
  • 104.
  • 105.
International Symposium on Biomedical Imaging 2016
H&E Image Processing Framework
• 학습: whole slide image에서 normal / tumor 패치를 샘플링하여 training data를 만들고 Convolutional Neural Network를 학습
• 테스트: whole slide image를 겹치는 image patch로 잘라 패치별 P(tumor)를 계산하고, 0.0~1.0 범위의 tumor probability map을 생성
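위 프레임워크의 테스트 단계, 즉 학습된 분류기를 슬라이드 전체에 패치 단위로 적용해 tumor probability map을 만드는 과정을 개념적으로 옮기면 아래와 같다. patch_model, 패치 크기, stride는 모두 설명을 위한 가정이다.

```python
# 가정: patch_model(patch) -> P(tumor) 를 내는 학습된 분류기가 있다고 보고,
# whole slide image를 겹치는 패치로 잘라 확률 지도를 채우는 개념적 예시.
import numpy as np

def predict_tumor_probability_map(slide: np.ndarray, patch_model,
                                  patch_size: int = 256, stride: int = 128) -> np.ndarray:
    """slide: (H, W, 3) RGB 배열. 반환: 패치 격자 위의 P(tumor) 지도."""
    h, w = slide.shape[:2]
    rows = (h - patch_size) // stride + 1
    cols = (w - patch_size) // stride + 1
    heatmap = np.zeros((rows, cols), dtype=np.float32)
    for i in range(rows):
        for j in range(cols):
            patch = slide[i * stride:i * stride + patch_size,
                          j * stride:j * stride + patch_size]
            heatmap[i, j] = patch_model(patch)   # 0.0 (normal) ~ 1.0 (tumor)
    return heatmap

# 사용 예: 픽셀 평균값에 반응하는 가짜 모델로 동작만 확인
fake_slide = np.random.default_rng(0).random((1024, 1024, 3))
fake_model = lambda patch: float(patch.mean())
print(predict_tumor_probability_map(fake_slide, fake_model).shape)  # (7, 7)
```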
  • 106.
  • 107.
Clinical study on ISBI dataset
• Error rate
  • Pathologist in competition setting: 3.5%
  • Pathologists in clinical practice (n = 12): 13% - 26%
  • Pathologists on micro-metastasis (small tumors): 23% - 42%
  • Beck Lab deep learning model: 0.65%
• Beck Lab's deep learning model now outperforms pathologists
Andrew Beck, Machine Learning for Healthcare, MIT 2017
  • 108.
구글의 유방 병리 판독 인공지능
• The localization score (FROC) for the algorithm reached 89%, which significantly exceeded the score of 73% for a pathologist with no time constraint.
  • 109.
인공지능의 민감도 + 인간의 특이도
Yun Liu et al., Detecting Cancer Metastases on Gigapixel Pathology Images (2017)
• 구글의 인공지능은 민감도에서 큰 개선 (92.9%, 88.5%)
  • @8FP: 슬라이드당 FP를 8개까지 봐주면서 달성할 수 있는 민감도
  • FROC: FP를 슬라이드당 1/4, 1/2, 1, 2, 4, 8개를 허용한 민감도의 평균
  • 즉, FP를 조금 봐준다면, 인공지능은 매우 높은 민감도를 달성 가능
• 인간 병리학자는 민감도 73%인 데 반해, 특이도는 거의 100% 달성
  • 인간 병리학자와 인공지능 병리학자는 서로 잘하는 것이 다름
  • 양쪽이 협력하면 판독 효율성, 일관성, 민감도 등에서 개선 기대 가능
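위에서 설명한 FROC(슬라이드당 허용 FP 수 1/4, 1/2, 1, 2, 4, 8에서의 민감도 평균)를 계산하는 과정을 단순화한 스케치를 덧붙인다. Camelyon16의 실제 평가 코드와 달리 같은 병변에 대한 중복 검출 처리 등은 생략했고, 검출 결과도 가상의 난수 데이터를 가정했다.

```python
# 가정: 검출 후보마다 (score, is_true_positive)가 있고 전체 병변 수와 슬라이드 수를 안다고 할 때,
# 허용 FP 수별 민감도를 구해 평균하는 단순화된 FROC 계산 예시.
import numpy as np

def froc_score(scores, is_tp, n_lesions, n_slides,
               fp_rates=(0.25, 0.5, 1, 2, 4, 8)):
    scores = np.asarray(scores, dtype=float)
    is_tp = np.asarray(is_tp, dtype=bool)
    order = np.argsort(-scores)                  # 점수 내림차순으로 임계값을 낮춰가며 누적
    tp_cum = np.cumsum(is_tp[order])
    fp_cum = np.cumsum(~is_tp[order])
    sens = tp_cum / n_lesions                    # (중복 검출을 무시한) 병변 단위 민감도
    avg_fp = fp_cum / n_slides                   # 슬라이드당 평균 FP 수
    # 각 허용 FP 수 이하에서 도달 가능한 최대 민감도를 취해 평균
    sens_at = [float(sens[avg_fp <= r].max()) if np.any(avg_fp <= r) else 0.0
               for r in fp_rates]
    return float(np.mean(sens_at)), dict(zip(fp_rates, sens_at))

# 가상의 검출 결과로 동작 확인
rng = np.random.default_rng(0)
scores = rng.random(200)
is_tp = rng.random(200) < 0.4
print(froc_score(scores, is_tp, n_lesions=80, n_slides=50))
```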
  • 110.
Classification and mutation prediction from non–small cell lung cancer histopathology images using deep learning
Nicolas Coudray, Paolo Santiago Ocampo, Theodore Sakellaropoulos, Navneet Narula, Matija Snuderl, David Fenyö, Andre L. Moreira, Narges Razavian, Aristotelis Tsirigos
Nature Medicine (2018), https://doi.org/10.1038/s41591-018-0177-5
Abstract: Visual inspection of histopathology slides is one of the main methods used by pathologists to assess the stage, type and subtype of lung tumors. Adenocarcinoma (LUAD) and squamous cell carcinoma (LUSC) are the most prevalent subtypes of lung cancer, and their distinction requires visual inspection by an experienced pathologist. In this study, we trained a deep convolutional neural network (inception v3) on whole-slide images obtained from The Cancer Genome Atlas to accurately and automatically classify them into LUAD, LUSC or normal lung tissue. The performance of our method is comparable to that of pathologists, with an average area under the curve (AUC) of 0.97. Our model was validated on independent datasets of frozen tissues, formalin-fixed paraffin-embedded tissues and biopsies. Furthermore, we trained the network to predict the ten most commonly mutated genes in LUAD. We found that six of them (STK11, EGFR, FAT1, SETBP1, KRAS and TP53) can be predicted from pathology images, with AUCs from 0.733 to 0.856 as measured on a held-out population. These findings suggest that deep-learning models can assist pathologists in the detection of cancer subtype or gene mutations. Our approach can be applied to any cancer type, and the code is available at https://github.com/ncoudray/DeepPATH.
According to the American Cancer Society and the Cancer Statistics Center (see URLs), over 150,000 patients with lung cancer succumb to the disease each year (154,050 expected for 2018), while another 200,000 new cases are diagnosed on a yearly basis (234,030 expected for 2018). It is one of the most widely spread cancers in the world because of not only smoking, but also exposure to toxic chemicals like radon, asbestos and arsenic. LUAD and LUSC are the two most prevalent types of non–small cell lung cancer, and each is associated with discrete treatment guidelines. In the absence of definitive histologic features, this important distinction can be challenging and time-consuming, and requires confirmatory immunohistochemical stains. Classification of lung cancer type is a key diagnostic process because the available treatment options, including conventional chemotherapy and, more recently, targeted therapies, differ for LUAD and LUSC. Also, a LUAD diagnosis will prompt the search for molecular biomarkers and sensitizing mutations and thus has a great impact on treatment options. For example, epidermal growth factor receptor (EGFR) mutations, present in about 20% of LUAD, and anaplastic lymphoma receptor tyrosine kinase (ALK) rearrangements, present in <5% of LUAD, currently have targeted therapies approved by the Food and Drug Administration (FDA). Mutations in other genes, such as KRAS and tumor protein P53 (TP53), are very common (about 25% and 50%, respectively) but have proven to be particularly challenging drug targets so far. Lung biopsies are typically used to diagnose lung cancer type and stage. Virtual microscopy of stained images of tissues is typically acquired at magnifications of 20× to 40×, generating very large two-dimensional images (10,000 to >100,000 pixels in each dimension) that are oftentimes challenging to visually inspect in an exhaustive manner. Furthermore, accurate interpretation can be difficult, and the distinction between LUAD and LUSC is not always clear, particularly in poorly differentiated tumors; in this case, ancillary studies are recommended for accurate classification. To assist experts, automatic analysis of lung cancer whole-slide images has been recently studied to predict survival outcomes and classification. For the latter, Yu et al. combined conventional thresholding and image processing techniques with machine-learning methods, such as random forest classifiers, support vector machines (SVM) or Naive Bayes classifiers, achieving an AUC of ~0.85 in distinguishing normal from tumor slides, and ~0.75 in distinguishing LUAD from LUSC slides. More recently, deep learning was used for the classification of breast, bladder and lung tumors, achieving an AUC of 0.83 in classification of lung tumor types on tumor slides from The Cancer Genome Atlas (TCGA). Analysis of plasma DNA values was also shown to be a good predictor of the presence of non–small cell cancer, with an AUC of ~0.94 (ref. 14) in distinguishing LUAD from LUSC, whereas the use of immunochemical markers yields an AUC of ~0.941. Here, we demonstrate how the field can further benefit from deep learning by presenting a strategy based on convolutional neural networks (CNNs) that not only outperforms methods in previously
  • 111.
• NYU 연구팀
• TCGA의 병리 이미지(whole-slide image)를
• 구글의 인셉션(Inception v3) 아키텍처로 학습
Fig. 1 | Data and strategy. a, Number of whole-slide images per class: Normal 459, LUAD 567, LUSC 609. b, Strategy for training: (i) images of lung cancer tissues were first downloaded from the Genomic Data Commons database; (ii) slides were then separated into a training (70%), a validation (15%) and a test set (15%); (iii) slides were tiled by nonoverlapping 512 × 512-pixel windows, omitting those with over 50% background; (iv) the Inception v3 architecture was used and partially or fully retrained using the training and validation tiles; (v) classifications were performed on tiles from an independent test set, and the results were finally aggregated per slide to extract the heatmaps and the AUC statistics. c, Size distribution of the image widths (gray) and heights (black). d, Distribution of the number of tiles per slide.
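Fig. 1의 b-iii 단계, 즉 whole-slide image를 512×512 비중첩 타일로 자르고 배경이 50%를 넘는 타일을 버리는 전처리를 개념적으로 옮기면 아래와 같다. '배경'을 거의 흰색인 픽셀로 정의한 것과 white_level 값은 설명을 위한 가정이며, 실제 논문 구현(DeepPATH)과는 다를 수 있다.

```python
# 가정: 슬라이드를 512x512 비중첩 타일로 자르고, '배경'(거의 흰색) 픽셀이 50%를 넘는
# 타일을 버리는 전처리의 개념적 스케치. 실제로는 openslide 등으로 기가픽셀 슬라이드를
# 레벨별로 읽지만, 여기서는 메모리에 올라온 배열로 단순화했다.
import numpy as np

def tile_slide(slide: np.ndarray, tile_size: int = 512,
               background_threshold: float = 0.5, white_level: int = 220):
    """배경 비율이 threshold 이하인 (top, left, tile) 목록을 반환."""
    h, w = slide.shape[:2]
    tiles = []
    for top in range(0, h - tile_size + 1, tile_size):
        for left in range(0, w - tile_size + 1, tile_size):
            tile = slide[top:top + tile_size, left:left + tile_size]
            background_fraction = (tile.min(axis=-1) > white_level).mean()
            if background_fraction <= background_threshold:
                tiles.append((top, left, tile))
    return tiles

fake_slide = np.full((2048, 2048, 3), 255, dtype=np.uint8)   # 전부 배경인 가짜 슬라이드
fake_slide[:1024, :1024] = 120                               # 왼쪽 위 영역만 '조직'이라고 가정
print(len(tile_slide(fake_slide)), "tiles kept out of", (2048 // 512) ** 2)
```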
  • 112.
Classification and mutation prediction from non-small cell lung cancer histopathology images using deep learning
• Normal, adenocarcinoma (LUAD), squamous cell carcinoma (LUSC)를 매우 정확하게 구분
  • Tumor vs. normal, LUAD vs. LUSC의 구분에서 각각 AUC 0.99, 0.95 이상
  • Normal, LUAD, LUSC 중 하나를 다른 두 가지와 구분하는 것도 5x, 20x 배율 모두에서 AUC 0.9 이상
• 이 정확도는 세 명의 병리과 전문의와 동등한 수준
  • 딥러닝이 틀린 것 중 50%는, 병리과 전문의 세 명 중 적어도 한 명이 틀렸고,
  • 병리과 전문의 세 명 중 적어도 한 명이 틀린 케이스 중 83%는 딥러닝이 정확히 분류했다.
  • 113.
• 더 나아가서, TCGA를 바탕으로 개발된 인공지능을,
• 완전히 독립적인, 특히 fresh frozen, FFPE, biopsy의 세 가지 방식으로 얻은
• LUAD, LUSC 데이터에 적용해 보았을 때에도 대부분 AUC 0.9 이상으로 정확하게 판독
Fig. 2 | Classification of presence and type of tumor on alternative cohorts. a–c, ROC curves from tests on frozen sections (n=98 biologically independent slides) (a), FFPE sections (n=140 biologically independent slides) (b) and biopsies (n=102 biologically independent slides) from NYU Langone Medical Center (c).
• Frozen: LUAD AUC 0.919 (5x) / 0.913 (20x), LUSC AUC 0.977 (5x) / 0.941 (20x)
• FFPE: LUAD AUC 0.861 (5x) / 0.833 (20x), LUSC AUC 0.975 (5x) / 0.932 (20x)
• Biopsy: LUAD AUC 0.871 (5x) / 0.834 (20x), LUSC AUC 0.928 (5x) / 0.861 (20x)
On the right of each plot are examples of raw images with an overlay of the mask generated by a pathologist and the corresponding heatmaps obtained with the three-way classifier.
  • 114.
Fig. 3 | Gene mutation prediction from histopathology slides gives promising results for at least six genes. a, Distribution of probability of mutation in genes from slides where each mutation is present or absent (tile aggregation by averaging output probability). b, ROC curves associated with the top four predictions in a. c, Allele frequency as a function of slides classified by the deep-learning network as having a certain gene mutation (P ≥ 0.5) or the wild type (P < 0.5).
Table 1 | AUC achieved by the network trained on mutations (with 95% CIs); n = 62 slides from 59 patients
(Mutation: Per-tile AUC | Per-slide AUC, aggregation by average predicted probability | Per-slide AUC, aggregation by percentage of positively classified tiles)
• STK11: 0.845 (0.838–0.852) | 0.856 (0.709–0.964) | 0.842 (0.683–0.967)
• EGFR: 0.754 (0.746–0.761) | 0.826 (0.628–0.979) | 0.782 (0.516–0.979)
• SETBP1: 0.785 (0.776–0.794) | 0.775 (0.595–0.931) | 0.752 (0.550–0.927)
• TP53: 0.674 (0.666–0.681) | 0.760 (0.626–0.872) | 0.754 (0.627–0.870)
• FAT1: 0.739 (0.732–0.746) | 0.750 (0.512–0.940) | 0.750 (0.491–0.946)
• KRAS: 0.814 (0.807–0.829) | 0.733 (0.580–0.857) | 0.716 (0.552–0.854)
• KEAP1: 0.684 (0.670–0.694) | 0.675 (0.466–0.865) | 0.659 (0.440–0.856)
• LRP1B: 0.640 (0.633–0.647) | 0.656 (0.513–0.797) | 0.657 (0.512–0.799)
• FAT4: 0.768 (0.760–0.775) | 0.642 (0.470–0.799) | 0.640 (0.440–0.856)
• NF1: 0.714 (0.704–0.723) | 0.640 (0.419–0.845) | 0.632 (0.405–0.845)
• Radiogenomics
• 병리 이미지만 보고 EGFR, TP53, KRAS 등 LUAD에서 호발하는 6개 유전자의 mutation 존재 여부를 AUC 0.7~0.8 수준으로 판독
• 심지어는 allele frequency도 통계적으로 유의미하게 맞췄다
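Table 1이 비교하는 두 가지 슬라이드 단위 집계 방식(타일 확률의 평균 vs. 양성으로 분류된 타일의 비율)을 가상의 데이터로 흉내 낸 스케치이다. 데이터 생성 방식과 임계값 0.5는 모두 설명을 위한 가정이다.

```python
# 가정: 타일 단위 예측 확률을 슬라이드 단위로 모으는 두 가지 집계 방식을 비교하는 예시
# (논문 Table 1의 'average predicted probability' vs. 'percentage of positively classified tiles').
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n_slides = 100
slide_labels = rng.integers(0, 2, size=n_slides)          # 1 = mutation 존재 (가상 라벨)

aggregated_mean, aggregated_frac = [], []
for label in slide_labels:
    n_tiles = int(rng.integers(50, 300))                   # 슬라이드마다 타일 수가 다르다고 가정
    tile_probs = np.clip(rng.normal(0.4 + 0.2 * label, 0.2, n_tiles), 0, 1)
    aggregated_mean.append(tile_probs.mean())              # 집계 1: 평균 예측 확률
    aggregated_frac.append((tile_probs >= 0.5).mean())     # 집계 2: 양성 타일 비율

print("AUC (average probability)    :", round(roc_auc_score(slide_labels, aggregated_mean), 3))
print("AUC (fraction positive tiles):", round(roc_auc_score(slide_labels, aggregated_frac), 3))
```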
  • 115.
  • 116.
  • 117.
AACR 2018: 인공지능을 이용하면 총 판독 시간을 줄일 수 있다
  • 118.
AACR 2018: 인공지능을 이용하면 판독 정확도를 (micro에서 특히) 높일 수 있다
  • 119.
Access to Pathology AI algorithms is limited. Adoption barriers for digital pathology: • Expensive scanners • IT infrastructure required • Disrupt existing workflows • Not all clinical needs addressed (speed, focus, etc.)
  • 121.
Figure 1: System overview. 1: Schematic sketch of the whole device. 2: A photo of the actual implementation. An Augmented Reality Microscope for Realtime Automated Detection of Cancer. https://research.googleblog.com/2018/04/an-augmented-reality-microscope.html
  • 122.
An Augmented Reality Microscope for Cancer Detection. https://www.youtube.com/watch?v=9Mz84cwVmS0
  • 123.
  • 124.
An Augmented Reality Microscope for Realtime Automated Detection of Cancer • PR quantification • Mitosis Counting on H&E slide • Measurement of tumor size • Identification of H. pylori • Identification of Mycobacterium • Identification of prostate cancer region with estimation of percentage tumor involvement • Ki67 quantification • P53 quantification • CD8 quantification. https://research.googleblog.com/2018/04/an-augmented-reality-microscope.html
  • 127.
  • 128.
Fig 1. What can consumer wearables do? Heart rate can be measured with an oximeter built into a ring [3], muscle activity with an electromyographic sensor embedded into clothing [4], stress with an electrodermal sensor incorporated into a wristband [5], and physical activity or sleep patterns via an accelerometer in a watch [6,7]. In addition, a female's most fertile period can be identified with detailed body temperature tracking [8], while levels of mental attention can be monitored with a small number of non-gelled electroencephalogram (EEG) electrodes [9]. Levels of social interaction (also known to… ) PLOS Medicine 2016
  • 129.
    • 복잡한 의료데이터의 분석 및 insight 도출 • 영상 의료/병리 데이터의 분석/판독 • 연속 데이터의 모니터링 및 예방/예측 의료 인공지능의 세 유형
  • 130.
  • 134.
SEPSIS — A targeted real-time early warning score (TREWScore) for septic shock. Katharine E. Henry, David N. Hager, Peter J. Pronovost, Suchi Saria.
Sepsis is a leading cause of death in the United States, with mortality highest among patients who develop septic shock. Early aggressive treatment decreases morbidity and mortality. Although automated screening tools can detect patients currently experiencing severe sepsis and septic shock, none predict those at greatest risk of developing shock. We analyzed routinely available physiological and laboratory data from intensive care unit patients and developed “TREWScore,” a targeted real-time early warning score that predicts which patients will develop septic shock. TREWScore identified patients before the onset of septic shock with an area under the ROC (receiver operating characteristic) curve (AUC) of 0.83 [95% confidence interval (CI), 0.81 to 0.85]. At a specificity of 0.67, TREWScore achieved a sensitivity of 0.85 and identified patients a median of 28.2 [interquartile range (IQR), 10.6 to 94.2] hours before onset. Of those identified, two-thirds were identified before any sepsis-related organ dysfunction. In comparison, the Modified Early Warning Score, which has been used clinically for septic shock prediction, achieved a lower AUC of 0.73 (95% CI, 0.71 to 0.76). A routine screening protocol based on the presence of two of the systemic inflammatory response syndrome criteria, suspicion of infection, and either hypotension or hyperlactatemia achieved a lower sensitivity of 0.74 at a comparable specificity of 0.64. Continuous sampling of data from the electronic health records and calculation of TREWScore may allow clinicians to identify patients at risk for septic shock and provide earlier interventions that would prevent or mitigate the associated morbidity and mortality.
INTRODUCTION: Seven hundred fifty thousand patients develop severe sepsis and septic shock in the United States each year. More than half of them are admitted to an intensive care unit (ICU), accounting for 10% of all ICU admissions, 20 to 30% of hospital deaths, and $15.4 billion in annual health care costs (1–3). Several studies have demonstrated that morbidity, mortality, and length of stay are decreased when severe sepsis and septic shock are identified and treated early (4–8). In particular, one study showed that mortality from septic shock increased by 7.6% with every hour that treatment was delayed after the onset of hypotension (9). More recent studies comparing protocolized care, usual care, and early goal-directed therapy (EGDT) for patients with septic shock suggest that usual care is as effective as EGDT (10–12). Some have interpreted this to mean that usual care has improved over time and reflects important aspects of EGDT, such as early antibiotics and early aggressive fluid resuscitation (13). It is likely that continued early identification and treatment will further improve outcomes. However, the Acute Physiology Score (SAPS II), Sequential Organ Failure Assessment (SOFA) scores, Modified Early Warning Score (MEWS), and Simple Clinical Score (SCS) have been validated to assess illness severity and risk of death among septic patients (14–17). Although these scores are useful for predicting general deterioration or mortality, they typically cannot distinguish with high sensitivity and specificity which patients are at highest risk of developing a specific acute condition.
The increased use of electronic health records (EHRs), which can be queried in real time, has generated interest in automating tools that identify patients at risk for septic shock (18–20). A number of “early warning systems,” “track and trigger” initiatives, “listening applications,” and “sniffers” have been implemented to improve detection and timeliness of therapy for patients with severe sepsis and septic shock (18, 20–23). Although these tools have been successful at detecting patients currently experiencing severe sepsis or septic shock, none predict which patients are at highest risk of developing septic shock. The adoption of the Affordable Care Act has added to the growing excitement around predictive models derived from electronic health…
  • 135.
A targeted real-time early warning score (TREWScore) for septic shock
• AUC = 0.83
• At a specificity of 0.67, TREWScore achieved a sensitivity of 0.85 and identified patients a median of 28.2 hours before onset.
Fig. 2. ROC for detection of septic shock before onset in the validation set. The ROC curve for TREWScore is shown in blue, with the ROC curve for MEWS in red. The sensitivity and specificity performance of the routine screening criteria is indicated by the purple dot. Normal 95% CIs are shown for TREWScore and MEWS. TPR, true-positive rate; FPR, false-positive rate.
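TREWScore처럼 연속적인 위험 점수를 쓰는 시스템에서 "특이도 0.67에서 민감도 0.85"와 같은 작동점이 어떻게 정해지는지를 보여주는 최소한의 스케치다. 저자들의 실제 코드가 아니며, scores와 labels는 가상의 데이터다.

```python
# 최소한의 스케치: 검증 세트에서 목표 특이도에 가장 가까운 임계값을 고르고,
# 그 지점에서의 민감도를 확인한다. 논문의 실제 구현이 아니다.
import numpy as np

def operating_point(scores, labels, target_specificity=0.67):
    """목표 특이도에 가장 가까운 임계값과 그때의 (특이도, 민감도)를 반환."""
    scores, labels = np.asarray(scores), np.asarray(labels).astype(bool)
    best = None
    for t in np.unique(scores):
        pred = scores >= t
        spec = np.mean(~pred[~labels])   # TN / (TN + FP)
        sens = np.mean(pred[labels])     # TP / (TP + FN)
        if best is None or abs(spec - target_specificity) < abs(best[1] - target_specificity):
            best = (t, spec, sens)
    return best

# 사용 예 (가상의 데이터)
rng = np.random.default_rng(0)
labels = rng.random(1000) < 0.1
scores = rng.normal(loc=labels.astype(float), scale=1.0)
thr, spec, sens = operating_point(scores, labels)
print(f"threshold={thr:.2f}, specificity={spec:.2f}, sensitivity={sens:.2f}")
```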
  • 137.
Sugar.IQ: 사용자의 음식 섭취와 그에 따른 혈당 변화, 인슐린 주입 등의 과거 기록을 기반으로, 식후 사용자의 혈당이 어떻게 변화할지 Watson이 예측
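Sugar.IQ처럼 과거의 식사·인슐린·혈당 기록으로 식후 혈당을 예측하는 문제를 아주 단순화하면 아래와 같은 회귀 문제로 볼 수 있다. Watson의 실제 방식이 아니라 개념만 보여주는 스케치이며, 입력 특징과 데이터, 계수는 모두 가정이다.

```python
# 개념 스케치: (식전 혈당, 탄수화물 섭취량, 인슐린 주입량) -> 식후 혈당 예측.
# Sugar.IQ/Watson의 실제 모델이 아니며, 특징·데이터·계수는 모두 가정이다.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
pre_glucose = rng.normal(120, 25, n)   # mg/dL
carbs = rng.uniform(10, 90, n)         # g
insulin = rng.uniform(0, 8, n)         # units

# 가상의 "정답": 탄수화물은 혈당을 올리고 인슐린은 낮춘다고 가정
post_glucose = pre_glucose + 1.8 * carbs - 12 * insulin + rng.normal(0, 10, n)

X = np.column_stack([pre_glucose, carbs, insulin])
model = LinearRegression().fit(X, post_glucose)

# 새로운 식사에 대한 예측: 식전 130 mg/dL, 탄수화물 60 g, 인슐린 4 U (가상의 입력)
print(model.predict([[130, 60, 4]]))
```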
  • 138.
    ADA 2017, SanDiego, Courtesy of Taeho Kim (Seoul Medical Center)
  • 139.
    ADA 2017, SanDiego, Courtesy of Taeho Kim (Seoul Medical Center)
  • 140.
    ADA 2017, SanDiego, Courtesy of Taeho Kim (Seoul Medical Center)
  • 141.
    ADA 2017, SanDiego, Courtesy of Taeho Kim (Seoul Medical Center)
  • 142.
• 미국에서 아이폰 앱으로 출시 • 사용이 얼마나 번거로울지가 관건 • 어느 정도의 기간을 활용해야 효과가 있는가: 2주? 평생? • Food logging 등을 어떻게 할 것인가? • 과금 방식도 아직 공개되지 않은 듯
  • 143.
애플워치4: 심전도, 부정맥, 낙상 측정. FDA 의료기기 인허가
  • 144.
Cardiogram • 실리콘밸리의 Cardiogram은 애플워치로 측정한 심박수 데이터를 바탕으로 서비스 • 2016년 10월 Andreessen Horowitz에서 $2m의 투자 유치
  • 145.
https://blog.cardiogr.am/what-do-normal-and-abnormal-heart-rhythms-look-like-on-apple-watch-7b33b4a8ecfa • Cardiogram은 심박수에 운동, 수면, 감정, 의료적인 상태가 반영된다고 주장 • 특히, 심박 데이터를 기반으로 심방세동(atrial fibrillation)과 심방조동(atrial flutter)의 detection 시도 Cardiogram
  • 146.
Cardiogram for A.Fib • Cardiogram은 심박 데이터만으로 심방세동을 detection할 수 있다고 주장 • “Irregularly irregular” • high absolute variability (a range of 30+ bpm) • a higher fraction missing measurements • a lack of periodicity in heart rate variability • 심방세동 특유의 불규칙적인 리듬을 detection하는 정도로 생각하면 될 듯 • “불규칙적인 리듬을 가지는 (심방세동이 아닌) 다른 부정맥과 구분 가능한가?” (쉽지 않을 듯) • 따라서, 심박으로 detection한 환자를 심전도(ECG)로 confirm하는 것이 필요
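위에 나열된 휴리스틱(높은 변동 폭, 결측 비율, 주기성 부족)을 그대로 코드로 옮겨 보면 아래와 같다. Cardiogram의 실제 알고리즘이 아니라 개념을 보여주는 스케치이며, 임계값들은 설명을 위한 가정이다.

```python
# 슬라이드에 나열된 세 가지 휴리스틱을 단순한 규칙으로 옮긴 스케치.
# Cardiogram의 실제 알고리즘이 아니며, 임계값(30 bpm, 0.2, 0.3)은 가정이다.
import numpy as np

def irregular_rhythm_flags(hr, range_bpm=30, max_missing_frac=0.2, min_autocorr=0.3):
    """hr: 일정 간격으로 샘플링된 심박수 배열 (결측은 np.nan)."""
    hr = np.asarray(hr, dtype=float)
    missing_frac = np.mean(np.isnan(hr))
    valid = hr[~np.isnan(hr)]
    hr_range = valid.max() - valid.min() if valid.size else 0.0

    # 주기성: 1-샘플 지연 자기상관이 낮으면 "불규칙"으로 본다 (단순화된 기준)
    if valid.size > 2:
        x = valid - valid.mean()
        autocorr = np.dot(x[:-1], x[1:]) / (np.dot(x, x) + 1e-9)
    else:
        autocorr = 1.0

    return {
        "high_variability": hr_range >= range_bpm,
        "many_missing": missing_frac > max_missing_frac,
        "low_periodicity": autocorr < min_autocorr,
    }

# 사용 예 (가상의 데이터): 플래그가 모두 참이면 ECG로 확진이 필요한 후보로 본다
flags = irregular_rhythm_flags(np.random.default_rng(1).normal(75, 20, 300))
print(flags, "candidate:", all(flags.values()))
```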
  • 147.
Cardiogram for Aflutter • Cardiogram은 심박 데이터만으로 심방조동을 detection할 수 있다고 주장 • “Mechanically Regular” • high absolute variability (a range of 30+ bpm) • a higher fraction missing measurements • a lack of periodicity in heart rate variability • 심방세동 특유의 불규칙적인 리듬을 detection하는 정도로 생각하면 될 듯 • “불규칙적인 리듬을 가지는 (심방세동이 아닌) 다른 부정맥과 구분 가능한가?” (쉽지 않을 듯) • 따라서, 심박으로 detection한 환자를 심전도(ECG)로 confirm하는 것이 필요
  • 148.
Passive Detection of Atrial Fibrillation Using a Commercially Available Smartwatch
Geoffrey H. Tison, MD, MPH; José M. Sanchez, MD; Brandon Ballinger, BS; Avesh Singh, MS; Jeffrey E. Olgin, MD; Mark J. Pletcher, MD, MPH; Eric Vittinghoff, PhD; Emily S. Lee, BA; Shannon M. Fan, BA; Rachel A. Gladstone, BA; Carlos Mikell, BS; Nimit Sohoni, BS; Johnson Hsieh, MS; Gregory M. Marcus, MD, MAS
IMPORTANCE Atrial fibrillation (AF) affects 34 million people worldwide and is a leading cause of stroke. A readily accessible means to continuously monitor for AF could prevent large numbers of strokes and death.
OBJECTIVE To develop and validate a deep neural network to detect AF using smartwatch data.
DESIGN, SETTING, AND PARTICIPANTS In this multinational cardiovascular remote cohort study coordinated at the University of California, San Francisco, smartwatches were used to obtain heart rate and step count data for algorithm development. A total of 9750 participants enrolled in the Health eHeart Study and 51 patients undergoing cardioversion at the University of California, San Francisco, were enrolled between February 2016 and March 2017. A deep neural network was trained using a method called heuristic pretraining in which the network approximated representations of the R-R interval (ie, time between heartbeats) without manual labeling of training data. Validation was performed against the reference standard 12-lead electrocardiography (ECG) in a separate cohort of patients undergoing cardioversion. A second exploratory validation was performed using smartwatch data from ambulatory individuals against the reference standard of self-reported history of persistent AF. Data were analyzed from March 2017 to September 2017.
MAIN OUTCOMES AND MEASURES The sensitivity, specificity, and receiver operating characteristic C statistic for the algorithm to detect AF were generated based on the reference standard of 12-lead ECG–diagnosed AF.
RESULTS Of the 9750 participants enrolled in the remote cohort, including 347 participants with AF, 6143 (63.0%) were male, and the mean (SD) age was 42 (12) years. There were more than 139 million heart rate measurements on which the deep neural network was trained. The deep neural network exhibited a C statistic of 0.97 (95% CI, 0.94-1.00; P < .001) to detect AF against the reference standard 12-lead ECG–diagnosed AF in the external validation cohort of 51 patients undergoing cardioversion; sensitivity was 98.0% and specificity was 90.2%. In an exploratory analysis relying on self-report of persistent AF in ambulatory participants, the C statistic was 0.72 (95% CI, 0.64-0.78); sensitivity was 67.7% and specificity was 67.6%.
CONCLUSIONS AND RELEVANCE This proof-of-concept study found that smartwatch photoplethysmography coupled with a deep neural network can passively detect AF but with some loss of sensitivity and specificity against a criterion-standard ECG. Further studies will help identify the optimal role for smartwatch-guided rhythm assessment.
JAMA Cardiol. doi:10.1001/jamacardio.2018.0136. Published online March 21, 2018.
  • 149.
Passive Detection of Atrial Fibrillation Using a Commercially Available Smartwatch (Tison GH, et al. JAMA Cardiology, 2018)
• Health eHeart Study (UCSF)
• A total of 9,750 participants
• 51 patients undergoing cardioversion
• Validated against standard 12-lead ECG
  • 150.
Passive Detection of Atrial Fibrillation Using a Commercially Available Smartwatch (Tison GH, et al. JAMA Cardiology, 2018)
Figure 2. Accuracy of Detecting Atrial Fibrillation in the Cardioversion Cohort. A, Receiver operating characteristic curve among 51 individuals undergoing in-hospital cardioversion: C statistic of 0.97 (95% CI, 0.94-1.00); the marked point indicates a sensitivity of 98.0% and a specificity of 90.2%. B, Receiver operating characteristic curve among 1617 individuals in the ambulatory subset of the remote cohort: C statistic of 0.72 (95% CI, 0.64-0.78); the marked point indicates a sensitivity of 67.7% and a specificity of 67.6%.
Table 3. Performance Characteristics of Deep Neural Network in Validation Cohorts
• Cardioversion cohort (sedentary): Sensitivity 98.0%, Specificity 90.2%, PPV 90.9%, NPV 97.8%, AUC 0.97
• Subset of remote cohort (ambulatory): Sensitivity 67.7%, Specificity 67.6%, PPV 7.9%, NPV 98.1%, AUC 0.72
(In the cardioversion cohort, the AF reference standard was 12-lead ECG diagnosis; in the remote cohort, it was limited to self-reported history of persistent AF. AUC, area under the ROC curve; PPV/NPV, positive/negative predictive value.)
• In external validation using standard 12-lead ECG, algorithm performance achieved a C statistic of 0.97.
• The passive detection of AF from free-living smartwatch data has substantial clinical implications.
• Importantly, the accuracy of detecting self-reported AF in an ambulatory setting was more modest (C statistic of 0.72).
  • 151.
  • 153.
An Algorithm Based on Deep Learning for Predicting In-Hospital Cardiac Arrest
Joon-myoung Kwon, MD; Youngnam Lee, MS; Yeha Lee, PhD; Seungwoo Lee, BS; Jinsik Park, MD, PhD
Background: In-hospital cardiac arrest is a major burden to public health, which affects patient safety. Although traditional track-and-trigger systems are used to predict cardiac arrest early, they have limitations, with low sensitivity and high false-alarm rates. We propose a deep learning–based early warning system that shows higher performance than the existing track-and-trigger systems.
Methods and Results: This retrospective cohort study reviewed patients who were admitted to 2 hospitals from June 2010 to July 2017. A total of 52 131 patients were included. Specifically, a recurrent neural network was trained using data from June 2010 to January 2017. The result was tested using the data from February to July 2017. The primary outcome was cardiac arrest, and the secondary outcome was death without attempted resuscitation. As comparative measures, we used the area under the receiver operating characteristic curve (AUROC), the area under the precision–recall curve (AUPRC), and the net reclassification index. Furthermore, we evaluated sensitivity while varying the number of alarms. The deep learning–based early warning system (AUROC: 0.850; AUPRC: 0.044) significantly outperformed a modified early warning score (AUROC: 0.603; AUPRC: 0.003), a random forest algorithm (AUROC: 0.780; AUPRC: 0.014), and logistic regression (AUROC: 0.613; AUPRC: 0.007). Furthermore, the deep learning–based early warning system reduced the number of alarms by 82.2%, 13.5%, and 42.1% compared with the modified early warning system, random forest, and logistic regression, respectively, at the same sensitivity.
Conclusions: An algorithm based on deep learning had high sensitivity and a low false-alarm rate for detection of patients with cardiac arrest in the multicenter study. (J Am Heart Assoc. 2018;7:e008678. DOI: 10.1161/JAHA.118.008678.)
Key Words: artificial intelligence • cardiac arrest • deep learning • machine learning • rapid response system • resuscitation
In-hospital cardiac arrest is a major burden to public health, which affects patient safety. More than a half of cardiac arrests result from respiratory failure or hypovolemic shock, and 80% of patients with cardiac arrest show signs of deterioration in the 8 hours before cardiac arrest. However, 209 000 in-hospital cardiac arrests occur in the United States each year, and the survival discharge rate for patients with cardiac arrest is <20% worldwide. Rapid response systems (RRSs) have been introduced in many hospitals to detect cardiac arrest using the track-and-trigger system (TTS). Two types of TTS are used in RRSs. For the single-parameter TTS (SPTTS), cardiac arrest is predicted if any single vital sign (eg, heart rate [HR], blood pressure) is out of the normal range. The aggregated weighted TTS calculates a weighted score for each vital sign and then finds patients with cardiac arrest based on the sum of these scores. The modified early warning score (MEWS) is one of the most widely used approaches among all aggregated weighted TTSs (Table 1); however, traditional TTSs including MEWS have limitations, with low sensitivity or high false-alarm rates. Sensitivity and false-alarm rate interact: increased sensitivity creates higher false-alarm rates and vice versa. Current RRSs suffer from low sensitivity or a high false-alarm rate.
An RRS was used for only 30% of patients before unplanned intensive care unit admission and was not used for 22.8% of patients, even if they met the criteria.
  • 154.
Cardiac Arrest Prediction Accuracy • 환자 수: 86,290 • cardiac arrest: 633 • Input: Heart rate, Respiratory rate, Body temperature, Systolic Blood Pressure (source: VUNO)
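DEWS처럼 활력징후 4종의 시계열을 입력으로 받아 심정지 위험도를 출력하는 순환신경망의 기본 구조를 보여주는 스케치다. VUNO의 실제 모델이 아니며, GRU 사용 여부와 층 구성, 차원은 모두 설명을 위한 가정이다.

```python
# 스케치: 활력징후 4종(심박수, 호흡수, 체온, 수축기혈압)의 시계열을 받아
# 심정지 위험 점수를 출력하는 GRU 기반 모델. 실제 DEWS 구현이 아니다.
import torch
import torch.nn as nn

class VitalSignRiskModel(nn.Module):
    def __init__(self, n_features=4, hidden_size=64):
        super().__init__()
        self.rnn = nn.GRU(input_size=n_features, hidden_size=hidden_size,
                          batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):
        # x: (batch, time, 4) — 일정 간격으로 측정된 활력징후
        _, h = self.rnn(x)                       # h: (1, batch, hidden)
        return torch.sigmoid(self.head(h[-1]))   # (batch, 1) 위험 확률

# 사용 예: 환자 8명, 최근 24개 시점의 활력징후 (가상의 데이터)
model = VitalSignRiskModel()
risk = model(torch.randn(8, 24, 4))
print(risk.shape)  # torch.Size([8, 1])
```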
  • 155.
Less False Alarm • 대학병원 신속대응팀에서 처리 가능한 알림 수(A, B 지점)에서 더 큰 정확도 차이를 보임 • A: DEWS 33.0%, MEWS 0.3% • B: DEWS 42.7%, MEWS 4.0% • APPH (Alarms Per Patients Per Hour) (source: VUNO)
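"같은 민감도에서 경보 수가 얼마나 줄어드는가"를 계산하는 방법을 단순화한 스케치다. 슬라이드의 실제 수치(DEWS/MEWS)를 재현하는 것이 아니며, 점수와 레이블은 가상의 데이터다.

```python
# 스케치: 목표 민감도를 고정했을 때 두 점수가 만들어내는 경보(양성 판정) 수 비교.
# 실제 DEWS/MEWS 구현·수치가 아니며, 데이터는 가정이다.
import numpy as np

def alarms_at_sensitivity(scores, labels, target_sensitivity=0.9):
    """목표 민감도를 만족하는 가장 높은 임계값에서의 총 경보 수와 임계값을 반환."""
    scores, labels = np.asarray(scores), np.asarray(labels).astype(bool)
    # 높은 임계값부터 낮추며 민감도가 목표에 도달하는 지점을 찾는다
    for t in np.sort(np.unique(scores))[::-1]:
        pred = scores >= t
        if pred[labels].mean() >= target_sensitivity:
            return int(pred.sum()), float(t)
    return int(len(scores)), float(scores.min())

rng = np.random.default_rng(0)
labels = rng.random(5000) < 0.01              # 심정지 환자 1% (가정)
good = rng.normal(labels * 2.0, 1.0)          # 분리가 잘 되는 점수
weak = rng.normal(labels * 0.5, 1.0)          # 분리가 약한 점수
for name, s in [("model A", good), ("model B", weak)]:
    n_alarm, thr = alarms_at_sensitivity(s, labels)
    print(f"{name}: alarms={n_alarm} at threshold {thr:.2f}")
```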
  • 156.
  • 157.
• 복잡한 의료 데이터의 분석 및 insight 도출 • 영상 의료/병리 데이터의 분석/판독 • 연속 데이터의 모니터링 및 예방/예측 의료 인공지능의 세 유형
  • 158.
    의료 인공지능 •1부: 제2의 기계시대와 의료 인공지능 •2부: 의료 인공지능의 과거와 현재 •3부: 미래를 어떻게 맞이할 것인가
  • 159.
    •인공지능은 의사를 대체하는가 •인간의사의 새로운 역할은 •결과에 대한 책임은 누가 지는가 •블랙박스 문제 •탈숙련화 문제 •어떻게 인허가/규제할 것인가 •의학적 효용을 어떻게 증명할 것인가 Issues
  • 160.
    의료 인공지능 •1부: 제2의 기계시대와 의료 인공지능 •2부: 의료 인공지능의 과거와 현재 •3부: 미래를 어떻게 맞이할 것인가
  • 162.
    Feedback/Questions • Email: yoonsup.choi@gmail.com •Blog: http://www.yoonsupchoi.com • Facebook: Yoon Sup Choi