Using MapReduce for Large-scale Medical Image Analysis
The growth of the amount of medical image data produced on a daily basis in modern hospitals forces the adaptation of traditional medical image analysis and indexing approaches towards scalable solutions. In this work, MapReduce is used to speed up and make possible three large-scale medical image processing use cases: (i) parameter optimization for lung texture classification using support vector machines (SVM), (ii) content-based medical image indexing, and (iii) three-dimensional directional wavelet analysis for solid texture classification.

Presentation Transcript

  • Using MapReduce for Large-scale Medical Image Analysis, HISB 2012. Presented by: Roger Schaer (HES-SO Valais)
  • Summary: Introduction, Methods, Results & Interpretation, Conclusions
  • Introduction
  • Introduction Exponential growth of imaging data over the past 20 years. [chart: number of images produced per day at the HUG, by year]
  • Introduction (continued) Mainly caused by: modern imaging techniques (3D, 4D) producing large files, and large collections available on the Internet. Increasingly complex algorithms make processing this data more challenging, requiring a lot of computation power, storage and network bandwidth.
  • Introduction (continued) Flexible and scalable infrastructures are needed. Several approaches exist: a single powerful machine; a local cluster/grid; alternative infrastructures (graphics cards); cloud computing solutions. The first two approaches were tested and compared.
  • Introduction (continued) Three large-scale medical image processing use cases: parameter optimization for support vector machines; content-based image feature extraction & indexing; 3D texture feature extraction using the Riesz transform. Note: I mostly handled the infrastructure aspects!
  • Methods
  • Methods: MapReduce, Hadoop Cluster, Support Vector Machines, Image Indexing, Solid 3D Texture Analysis Using the Riesz Transform
  • MapReduce MapReduce is a programming model developed by Google. Map phase: takes key/value pairs as input and produces intermediate key/value output. Reduce phase: for each intermediate key, processes the list of associated values. Trivial example: the WordCount application.
  • MapReduce: WordCount (see the Java sketch below)
    INPUT: #1 hello world, #2 goodbye world, #3 hello hadoop, #4 bye hadoop, ...
    MAP: (hello, 1), (world, 1), (goodbye, 1), (world, 1), (hello, 1), (hadoop, 1), (bye, 1), (hadoop, 1)
    REDUCE: (hello, 2), (world, 2), (goodbye, 1), (hadoop, 2), (bye, 1)
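The same WordCount example, expressed against the Hadoop Java MapReduce API, is sketched below. This is the classic textbook form of the example, not code taken from the presentation:

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCount {

  // Map phase: called once per input line; emits (word, 1) for every token,
  // e.g. "hello world" -> (hello, 1), (world, 1).
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: receives one intermediate key together with the list of all
  // its values, e.g. (hello, [1, 1]) -> (hello, 2).
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }
}
```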
  • Hadoop Apache's implementation of MapReduce. Consists of: a distributed storage system (HDFS) and an execution framework (Hadoop MapReduce). A master node orchestrates the task distribution; worker nodes perform the tasks. A typical worker node runs a DataNode and a TaskTracker (see the driver sketch below for how a job is submitted).
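To make the orchestration concrete, here is a minimal, hypothetical driver that submits the WordCount job above using the standard org.apache.hadoop.mapreduce API; the class name is illustrative and input/output paths come from the command line:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Minimal driver sketch: configures and submits the WordCount job defined
// above. The master node then schedules the map and reduce tasks across the
// worker nodes' TaskTrackers.
public class WordCountDriver {
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCountDriver.class);
    job.setMapperClass(WordCount.TokenizerMapper.class);
    job.setCombinerClass(WordCount.IntSumReducer.class); // local pre-aggregation
    job.setReducerClass(WordCount.IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Reusing the reducer as a combiner pre-aggregates counts on each worker before the shuffle, which reduces the amount of data sent over the network.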
  • Support Vector Machines Computes a decision boundary (hyperplane) that separates inputs of different classes, represented in a given feature space transformed by a given kernel. The values of two parameters need to be adapted to the data: the cost C of errors and the σ of the Gaussian kernel. [figure: toy 2D example of two classes separated by a hyperplane, with a query point to classify]
  • SVM (continued) Goal: find the optimal value couple (C, σ) for training an SVM, allowing the best classification performance over 5 lung texture patterns. Execution on one PC (without Hadoop) can take weeks, due to the extensive leave-one-patient-out cross-validation with 86 patients. Parallelization: split the job by parameter value couples (see the mapper sketch below).
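A hypothetical mapper for this grid search is sketched below. Treating each "C,sigma" couple as one input record lets Hadoop schedule the couples independently across the cluster; trainAndEvaluate() is a placeholder for the SVM training and cross-validation code, which the slides do not show:

```java
import java.io.IOException;

import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical sketch (not the presenter's code): each input record holds one
// "C,sigma" couple, so the grid search is distributed one couple per map call.
public class GridSearchMapper
    extends Mapper<Object, Text, Text, DoubleWritable> {

  @Override
  public void map(Object key, Text value, Context context)
      throws IOException, InterruptedException {
    String[] couple = value.toString().split(",");
    double cost = Double.parseDouble(couple[0].trim());
    double sigma = Double.parseDouble(couple[1].trim());

    double accuracy = trainAndEvaluate(cost, sigma);

    // Emit (couple, accuracy); a single reducer can then pick the maximum.
    context.write(new Text(cost + "," + sigma), new DoubleWritable(accuracy));
  }

  // Placeholder: train an SVM with (C, sigma) and return the leave-one-
  // patient-out cross-validated accuracy over the 5 lung texture classes.
  private double trainAndEvaluate(double cost, double sigma) {
    throw new UnsupportedOperationException("SVM code not shown in the slides");
  }
}
```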
  • Image Indexing [pipeline: Image Files → Feature Extractor → Feature Vector Files → Bag of Visual Words Factory (with Vocabulary File) → Index File] Two phases: extract features from images, then construct bags of visual words by quantization. Component-based approach vs. monolithic approach (Feature Extractor + Bag of Visual Words Factory combined). Parallelization: each task processes N images (see the sketch below).
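A hypothetical mapper for the monolithic approach might look as follows; grouping N file-name lines into one task (e.g. with Hadoop's NLineInputFormat) matches the "each task processes N images" scheme, and the helper methods are placeholders for the actual feature extractor and bag-of-visual-words factory:

```java
import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical sketch of the monolithic variant: each input line is the HDFS
// path of one image; the mapper extracts features and quantizes them against
// the visual vocabulary in a single step.
public class IndexingMapper extends Mapper<Object, Text, Text, Text> {

  @Override
  public void map(Object key, Text value, Context context)
      throws IOException, InterruptedException {
    String imagePath = value.toString().trim();
    double[] features = extractFeatures(imagePath);
    String bagOfVisualWords = quantize(features);
    context.write(new Text(imagePath), new Text(bagOfVisualWords));
  }

  // Placeholder for the feature extractor component.
  private double[] extractFeatures(String imagePath) {
    throw new UnsupportedOperationException("feature extractor not shown");
  }

  // Placeholder for quantization against the vocabulary file.
  private String quantize(double[] features) {
    throw new UnsupportedOperationException("bag-of-visual-words factory not shown");
  }
}
```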
  • 3D Texture Analysis (Riesz) Features are extracted from 3D images [figure: 3D texture feature extraction]. Parallelization: each task processes N images.
  • Results & Interpretation
  • Hadoop Cluster Minimally invasive setup (≥ 2 free cores per node)
  • Support Vector Machines Optimization: longer tasks = bad performance, because the optimization of the hyperplane is more difficult to compute (more iterations needed). After 2 patients (out of 86), check whether t_i ≥ F · t_ref; if a task's time exceeds the average (plus a margin), terminate the task (see the sketch below).
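A minimal sketch of this early-termination check, assuming t_ref is the average task time observed so far and F is a margin factor (the actual value of F is not given in the slides):

```java
// Hypothetical sketch of the early-termination heuristic: after cross-
// validating the first 2 patients (out of 86), a task is killed if its
// elapsed time t_i exceeds the reference time t_ref scaled by a margin F.
public final class EarlyTermination {

  /** @return true if the task should be terminated. */
  static boolean shouldTerminate(double tI, double tRef, double f) {
    return tI >= f * tRef;
  }

  public static void main(String[] args) {
    double tRef = 120.0; // assumed average time for 2 patients, in seconds
    double f = 1.5;      // assumed margin factor; not given in the slides
    System.out.println(shouldTerminate(200.0, tRef, f)); // true  -> kill task
    System.out.println(shouldTerminate(150.0, tRef, f)); // false -> keep going
  }
}
```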
  • Support Vector Machines [surface plot: accuracy (%) as a function of C (cost) and σ (sigma); black: tasks to be interrupted by the new algorithm] Optimized algorithm: ~50h → ~9h15min. None of the best tasks (highest accuracy) are killed.
  • Image Indexing [plots: calculation time as a function of the number of tasks, for 1K, 10K and 100K images] The experiments were executed using Hadoop, once on a single computer and then on our cluster of PCs.
  • Riesz 3D Particularity: the code was a series of Matlab® scripts. Instead of rewriting the whole application, Hadoop's streaming feature was used (communicates via stdin/stdout); to maximize scalability, GNU Octave was used, with great compatibility between Matlab® and Octave (see the example invocation below). Results:

        1 task (no Hadoop)     131h32m42s
        42 tasks (idle)          6h29m51s
        42 tasks (normal)        5h51m31s
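Hadoop Streaming lets any executable act as a mapper, reading input records on stdin and writing results to stdout. A hypothetical invocation for this use case could look like the following; the jar location, HDFS paths and script name are illustrative, not taken from the slides:

```
# Map-only job: each map task runs the Octave script on its share of the
# volume list (no reduce phase is needed for feature extraction).
hadoop jar hadoop-streaming.jar \
    -input /data/volume_list.txt \
    -output /data/riesz_features \
    -mapper "octave --silent extract_riesz_features.m" \
    -file extract_riesz_features.m \
    -numReduceTasks 0
```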
  • Conclusions
  • Conclusions MapReduce is: flexible (worked with very varied use cases), easy to use (the 2-phase programming model is simple) and efficient (≥ 20x speedup for all use cases). Hadoop is: easy to deploy & manage, and user-friendly (nice Web UIs).
  • Conclusions (continued) Speedups for the different use cases:

                              SVMs            Image Indexing   3D Feature Extraction
        Single task           990h*           21h*             131h30
        42 tasks on Hadoop    50h / 9h15**    1h               5h50
        Speedup               20x / 107x**    21x              22.5x

        * estimate   ** using the optimized algorithm
  • Lessons Learned It is important to use physically distributed resources: overloading a single machine hurts performance, while data locality notably speeds up jobs. Not every application is infinitely scalable; performance improvements level off at some point.
  • Future work Take it to the next level: the Cloud. Amazon Elastic Compute Cloud (IaaS), Amazon Elastic MapReduce (PaaS). Cloudbursting: use both local resources and the Cloud (for peak usage).
  • Thank you! Questions?