
Unsupervised Deep Learning Applied to Breast Density Segmentation and Mammographic Risk Scoring



  1. Journal Review: Unsupervised Deep Learning Applied to Breast Density Segmentation and Mammographic Risk Scoring. Jinseob Kim, December 27, 2017.
  2. Outline: 1 Introduction, 2 Method, 3 Result, 4 Discussion.
  3. Introduction
  4. Introduction: the goal of this paper. Automatically learn features from images, which in our case are mammograms. CSAE: convolutional sparse autoencoder, i.e., a sparse autoencoder within a convolutional architecture. Learn abstract features from unlabelled data, then classify using the label information: two separate optimization processes. This makes it possible to exploit large amounts of unlabelled data and is faster and more stable than training everything at once.
  5. Introduction: two tasks. (1) Automated segmentation of percentage mammographic density (PMD). (2) Characterization of mammographic texture (MT) patterns with the goal of predicting whether a woman will develop breast cancer; texture captures structural information of the breast tissue, emphasizing heterogeneity rather than density, and replaces manual with automated scoring. MT scoring is harder than MD scoring because the label of interest (healthy vs. diseased) is defined per image, not per pixel (e.g., fatty vs. dense).
  6. Introduction: model summary. (1) Multiscale denoising autoencoders. (2) Convolutional architecture. (3) A novel sparsity term to control the model capacity.
  7. Introduction: Figure 1 (multiscale input).
  8. Method
  9. Method: three parts. (1) Generating the input data: multiscale patches. (2) Model representation: a CNN. (3) Parameter learning: a sparse autoencoder with a novel sparsity regularizer.
  10. Method: problem setting. (1) Using the entire image as input is impractical because of the computational burden. (2) Downsampling is not an option, since some features are visible only at fine scale. The solution is to learn a compact representation for local neighborhoods (patches) of the image.
  11. Method: Figure 2 (patch extraction).
  12. Method: patch creation (a sampling sketch follows below). (1) Resize images to a resolution of 50 pixels/mm. (2) Sample 48,000 patches of size 24 × 24 pixels from the whole dataset. (3) For density scoring: 10% from the background and pectoral muscle, 45% from fatty breast tissue, 45% from dense breast tissue. (4) For texture scoring: 50% from the breast tissue of controls, 50% from the breast tissue of cancer cases.
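A minimal Python sketch of the stratified patch sampling described above, assuming a per-pixel label map is available; the function name, class ids, and proportions dictionary are illustrative, not taken from the paper.

```python
import numpy as np

def sample_patch_centers(label_map, proportions, n_total=48000, seed=0):
    """Draw patch centers so that each tissue class contributes a fixed share.

    label_map assigns every pixel a class id; proportions maps class id to the
    desired share of patches, e.g. {BACKGROUND: 0.10, FATTY: 0.45, DENSE: 0.45}
    for density scoring (names are hypothetical).
    """
    rng = np.random.default_rng(seed)
    centers = []
    for cls, share in proportions.items():
        rows, cols = np.where(label_map == cls)           # candidate pixels of this class
        idx = rng.integers(len(rows), size=int(round(share * n_total)))
        centers.extend(zip(rows[idx], cols[idx]))         # 24 x 24 patches are later cut here
    return centers
```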
  13. Method: architecture settings. m = 24, M = 1; four layers: Conv + Maxpool + Conv + Conv. c = 1 (grayscale; it would be 3 for color images). If t = 1, the patch is an adjacent m × m block; if t = 4, every 8th pixel is taken from a larger, smoothed image. K = {50, (50), 50, 100}, kernel sizes = {7, 2, 5, 5}.
  14. Method: multiscale input data, used to capture long-range interactions via a Gaussian scale space (a sketch follows below): $I(u; \sigma_t) = [I * G_{\sigma_t}](u)$, with $G_{\sigma_t}(x, y) = \frac{1}{2\pi\sigma_t} \exp\!\left(-\frac{x^2 + y^2}{\sigma_t}\right)$ and $\sigma_t = \sum_{i=0}^{t-1} \delta^{2i}$, where $\delta$ is the downsampling factor (= 2).
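A minimal sketch of the Gaussian scale-space patch extraction, assuming $\sigma_t$ plays the role of a variance (so the filter standard deviation is $\sqrt{\sigma_t}$) and that scale t samples every $\delta^{t-1}$-th pixel, as on slide 13; function and parameter names are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_patch(image, center, m=24, delta=2.0, num_scales=4):
    """Extract an m x m patch around `center` at each scale t = 1..num_scales.

    sigma_t = sum_{i=0}^{t-1} delta^(2i) is treated as a variance; at scale t
    every delta^(t-1)-th pixel is kept, so the patch covers a larger smoothed
    neighbourhood. Border handling is omitted in this sketch.
    """
    r, c = center
    patches = []
    for t in range(1, num_scales + 1):
        sigma_t = sum(delta ** (2 * i) for i in range(t))         # scale-space variance
        smoothed = gaussian_filter(image, sigma=np.sqrt(sigma_t))  # I * G_sigma_t
        step = int(delta ** (t - 1))                               # every 1st, 2nd, 4th, 8th pixel
        half = (m * step) // 2
        patch = smoothed[r - half:r + half:step, c - half:c + half:step]
        patches.append(patch)
    return np.stack(patches)                                       # (num_scales, m, m)
```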
  15.–19. Method (figure-only slides).
  20. Method: three steps from z(l) to z(l+1). (1) Extract sub-patches (called local receptive fields). (2) Feature learning: learn the transformation parameters (features) by autoencoding the local receptive fields. (3) Feature encoding: transform all local receptive fields using the features learned in step 2. The final layer is a softmax classifier (multinomial logistic regression); a sketch follows below.
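A minimal sketch of the final softmax (multinomial logistic regression) layer; the flattened top-layer feature vector and the trained W, b are assumptions standing in for the CSAE pipeline's output, and training is not shown.

```python
import numpy as np

def softmax_predict(features, W, b):
    """Map a top-layer feature vector to class probabilities.

    W has shape (n_classes, n_features); b has shape (n_classes,).
    """
    z = W @ features + b
    z = z - z.max()              # subtract the max for numerical stability
    p = np.exp(z)
    return p / p.sum()           # class probabilities summing to 1
```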
  21. Method: sparse autoencoder. The whole architecture could be trained end to end, but unsupervised learning with an autoencoder is used instead. The sparsity makes it possible to learn a sparse overcomplete representation, i.e., a feature vector larger than the input.
  22. Method: standard autoencoder (https://wikidocs.net/3413).
  23. Method: tied-weight autoencoder (a sketch follows below). Input r: a sub-patch of size d × d with c channels. Encoder: represent r with K features (K < cd²), $a \equiv g(r) = \phi(Wr + b)$ with $\phi(x) = \max(0, x)$. Decoder: reconstruct the input from the K features, $f(a) = \psi(W^{\top} a + \tilde{b})$. The encoder and decoder share (tied) weights.
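A minimal NumPy sketch of the tied-weight autoencoder above with φ = ReLU; taking ψ to be the identity, and the class and parameter names, are assumptions made for illustration.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

class TiedAutoencoder:
    """Tied-weight autoencoder on flattened sub-patches (r has dimension c*d*d)."""

    def __init__(self, input_dim, K, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.01, size=(K, input_dim))  # shared (tied) weights
        self.b = np.zeros(K)                 # encoder bias
        self.b_tilde = np.zeros(input_dim)   # decoder bias

    def encode(self, r):
        return relu(self.W @ r + self.b)     # a = phi(W r + b)

    def decode(self, a):
        return self.W.T @ a + self.b_tilde   # f(a) = psi(W^T a + b~), psi = identity here

    def reconstruction_error(self, r):
        return float(np.sum((self.decode(self.encode(r)) - r) ** 2))
```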
  24. Method: learned features as input to the next layer. Each sub-patch yields a K-dimensional feature, so each patch yields an (m − d + 1) × (m − d + 1) × K feature map, which becomes the input of the next layer (see the encoding sketch below).
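A sketch of the feature-encoding step that produces the (m − d + 1) × (m − d + 1) × K map, reusing the TiedAutoencoder sketch from the previous slide; a single-channel patch is assumed.

```python
import numpy as np

def encode_patch(patch, ae, d):
    """Slide the learned encoder over every d x d sub-patch of an m x m patch.

    Returns an (m - d + 1, m - d + 1, K) feature map, the next layer's input.
    """
    m = patch.shape[0]
    K = ae.W.shape[0]
    out = np.zeros((m - d + 1, m - d + 1, K))
    for i in range(m - d + 1):
        for j in range(m - d + 1):
            r = patch[i:i + d, j:j + d].ravel()   # local receptive field
            out[i, j] = ae.encode(r)              # K features per sub-patch
    return out
```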
  25. Method: estimation (equation slide).
  26. Method: sparse overcomplete representations. The number of basis vectors exceeds the dimensionality of the input: K > cd². In this study, K = {50, (50), 50, 100} and kernel sizes = {7, 2, 5, 5}. Reference: http://mlsp.cs.cmu.edu/courses/fall2013/lectures/slides/class15.sparseovercomplete.pdf
  27. Method: types of sparsity (an illustrative sketch follows below). (1) Population sparsity: each input is described by only a few features, giving a compact encoding per example. (2) Lifetime sparsity: any single feature is used to describe only a few inputs, giving example-specific features.
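An illustrative sketch of the two notions of sparsity, using a simple "fraction of active units" statistic on an activation matrix; this is an assumption made for intuition only, not the paper's actual sparsity regularizer.

```python
import numpy as np

def sparsity_stats(A, eps=1e-8):
    """A: activation matrix of shape (n_examples, n_features).

    Population sparsity is high when each example activates few features;
    lifetime sparsity is high when each feature is activated by few examples.
    """
    active = A > eps
    population = 1.0 - active.mean(axis=1)   # per example: 1 - fraction of active features
    lifetime = 1.0 - active.mean(axis=0)     # per feature: 1 - fraction of activating examples
    return population.mean(), lifetime.mean()
```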
  28. Method: estimation with the novel sparsity term (equation slide).
  29.–30. Method (figure-only slides).
  31. Method: experiments and datasets. Two different tasks (MD and MT). Mammograms were first segmented into background, pectoral muscle, and breast tissue regions. Three mammographic datasets were used: the Density Dataset (493 mammograms of healthy women), scored by a radiologist; the Texture Dataset (MMHS), scored by a trained observer; and the Dutch Breast Cancer Screening Dataset, scored by software (Volpara).
  32. Result
  33. Result, MD, Density dataset. The initial output is the probability that a given pixel (patch) belongs to the dense-tissue class. Because dense tissue was over-represented in the training patches (to speed up training), density may be overestimated; the threshold for labelling a pixel as dense was therefore raised from 0.5 to 0.75.
  34. Result: image-wise average of the Dice coefficients, $\frac{2\,|A \cap B|}{|A| + |B|}$, where A is the automated segmentation and B the radiologist's segmentation (a sketch follows below).
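A minimal sketch of the Dice coefficient between the automated and the radiologist's dense-tissue masks; the mask argument names are illustrative.

```python
import numpy as np

def dice(auto_mask, radiologist_mask):
    """Dice coefficient 2|A ∩ B| / (|A| + |B|) between two boolean masks."""
    A = np.asarray(auto_mask, dtype=bool)
    B = np.asarray(radiologist_mask, dtype=bool)
    denom = A.sum() + B.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(A, B).sum() / denom
```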
  35. Result (figure-only slide).
  36. Result, MD, Dutch Breast Cancer Screening Dataset. The model trained on the Density dataset was applied to this dataset.
  37. Result, MT, Texture Dataset. The initial output is the probability that a given pixel (patch) belongs to the cancer class. The per-image MT score is obtained by randomly sampling 500 patches from the breast area, scoring each, and averaging (a sketch follows below); the random sampling was verified to change the AUC by less than 0.01.
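A minimal sketch of the per-image MT score, assuming `patch_score` stands in for the trained CSAE + softmax pipeline mapping a 24 × 24 patch to a cancer-class probability; treating the sampled pixel as the patch's top-left corner and the simple border handling are simplifications.

```python
import numpy as np

def mt_score(image, breast_mask, patch_score, m=24, n_patches=500, seed=0):
    """Average the patch-level cancer probability over n_patches random patches
    drawn from within the breast mask."""
    rng = np.random.default_rng(seed)
    rows, cols = np.where(breast_mask)
    scores = []
    for _ in range(n_patches):
        k = rng.integers(len(rows))
        r, c = rows[k], cols[k]                 # sampled pixel = patch top-left corner
        patch = image[r:r + m, c:c + m]
        if patch.shape == (m, m):               # skip patches that run off the image
            scores.append(patch_score(patch))
    return float(np.mean(scores))
```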
  38. Result, MT: Dutch Breast Cancer Screening Dataset.
  39. Discussion
  40. Discussion: summary. The paper presents an unsupervised feature learning method for breast density segmentation and automatic texture scoring, and shows that it can learn useful features. After adapting a small set of hyperparameters (feature scales, output size, and label classes), the CSAE model achieved state-of-the-art results on each of the tasks.
  41. Discussion: CSAE vs. classical CNN. The use of unsupervised pre-training improved performance.
  42. Discussion, limitation of MT scoring. Every location within a mammogram receives the same label (case vs. control), which assumes that texture changes are systemic and occur at many locations in the tissue.
  43. Discussion, conclusion and future ideas. Combine information from several parts (patches) of an image and map them to a single label; this requires more data and more computing power. The method can also be adjusted easily to support 3D data.
