
Using Deep Learning on Apache Spark to Diagnose Thoracic Pathology from Chest X-rays


Overview and extended description: AI is expected to be the engine of technological advancement in the healthcare industry, especially in the areas of radiology and image processing. The purpose of this session is to demonstrate how we can build an AI-based radiologist system using Apache Spark and Analytics Zoo to detect pneumonia and other diseases from chest X-ray images. The dataset, released by the NIH, contains around 112,000 X-ray images of around 30,000 unique patients, annotated with up to 14 different thoracic pathology labels. Stanford University developed a state-of-the-art CNN model that exceeds average radiologist performance on the F1 metric. This talk focuses on how we can build a multi-label image classification model on a distributed Apache Spark infrastructure, and demonstrates how to build complex image transformations and deep learning pipelines using BigDL and Analytics Zoo with scalability and ease of use. Some practical image pre-processing procedures and evaluation metrics are introduced. We will also discuss runtime configuration, near-linear scalability for training and model serving, and other general performance topics.


  1. 1. WIFI SSID:SparkAISummit | Password: UnifiedAnalytics
  2. 2. Yuhao Yang, Intel Bala Chandrasekaran, Dell EMC Using Deep Learning on Apache Spark to diagnose thoracic pathology from chest X-rays #UnifiedAnalytics #SparkAISummit
  3. 3. Agenda • Problem statement • Analytics Zoo • Chest X-ray model architecture • Results and observations 3#UnifiedAnalytics #SparkAISummit
  4. 4. Predicting diseases in chest X-rays 4#UnifiedAnalytics #SparkAISummit
      • Develop an ML pipeline in Apache Spark and train a deep learning model to predict diseases in chest X-rays
        ▪ An integrated ML pipeline with Analytics Zoo on Apache Spark
        ▪ Demonstrate feature engineering and transfer learning APIs in Analytics Zoo
        ▪ Use Spark worker nodes to train at scale
      • CheXNet
        – Developed at Stanford University, CheXNet is a model for identifying thoracic pathologies from the NIH ChestX-ray14 dataset
        – https://stanfordmlgroup.github.io/projects/chexnet/
  5. 5. X-ray dataset from NIH 5#UnifiedAnalytics #SparkAISummit
      • 112,120 images from over 30,000 patients
      • Multi-label (14 diseases)
      • Unbalanced dataset: close to 50% of the images have 'No Findings'
      • Infiltration has the most positive samples (19,894) and Hernia has the fewest (227)
      [Chart: frequency of each of the 14 disease labels and 'No Findings']
      Sample entries from the label file:
      00000013_005.png,Emphysema|Infiltration|Pleural_Thickening
      00000013_006.png,Effusion|Infiltration
      00000013_007.png,Infiltration
      00000013_008.png,No Finding
  6. 6. Analytics Zoo
  7. 7. Analytics Zoo: Distributed TensorFlow, Keras and BigDL on Apache Spark 7#UnifiedAnalytics #SparkAISummit
      • Reference Use Cases: anomaly detection, sentiment analysis, fraud detection, chatbot, sequence prediction, etc.
      • Built-In Deep Learning Models: image classification, object detection, text classification, recommendations, sequence-to-sequence, anomaly detection, etc.
      • Feature Engineering: feature transformations for image, text, 3D imaging, time series, speech, etc.
      • High-Level Pipeline APIs: distributed TensorFlow and Keras on Spark; native deep learning support in Spark DataFrames and ML Pipelines; model serving API for model serving/inference pipelines
      • Backends: Spark, TensorFlow, Keras, BigDL, OpenVINO, MKL-DNN, etc.
      https://github.com/intel-analytics/analytics-zoo/
  8. 8. BigDL: Bringing Deep Learning to the Big Data Platform 8#UnifiedAnalytics #SparkAISummit
      • Distributed deep learning framework for Apache Spark
      • Makes deep learning more accessible to big data users and data scientists
        – Write deep learning applications as standard Spark programs
        – Run on existing Spark/Hadoop clusters (no changes needed)
      • Feature parity with popular deep learning frameworks
        – e.g., Caffe, Torch, TensorFlow, etc.
      • High performance (on CPU)
        – Built-in Intel MKL and multi-threaded programming
      • Efficient scale-out
        – Leverages Spark for distributed training & inference
      https://github.com/intel-analytics/BigDL https://bigdl-project.github.io/
  9. 9. Architecture • Training using synchronous mini-batch SGD • Distributed training iterations run as standard Spark jobs
  10. 10. Parameter Synchronization 10#UnifiedAnalytics #SparkAISummit [Diagram: each task computes a local gradient, which is split into n partitions; a "Parameter Synchronization" job aggregates (∑) each gradient partition across tasks, updates the corresponding weight partition, and the tasks fetch the updated weights for the next iteration]
  11. 11. Integrated Pipeline for Machine Learning and Deep Learning [Pipeline diagram: Images in HDFS → ImageReader → Images DataFrame → Image Pre-processing → Images as RDD → Image Classification .fit() → Image Classification Model]
  12. 12. Building the X-ray Model
  13. 13. Let's build the model 13#UnifiedAnalytics #SparkAISummit Read the X-ray images → Feature engineering → Define the model → Define the optimizer → Train the model
  14. 14. Read the X-ray images as Spark DataFrames 14#UnifiedAnalytics #SparkAISummit
      • Initialize NNContext and load X-ray images into a DataFrame using NNImageReader
      • Process the loaded X-ray images and add labels (from another DataFrame) using Spark transformations
      from zoo.pipeline.nnframes import NNImageReader

      imageDF = NNImageReader.readImages(image_path, sc, resizeH=256, resizeW=256, image_codec=1)
      trainingDF = imageDF.join(labelDF, on="Image_Index", how="inner")
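      The join above assumes a labelDF keyed by image file name. Below is a minimal sketch of how such a DataFrame could be built from the NIH label file shown on slide 5; the CSV column names ("Image Index", "Finding Labels"), the spark SparkSession handle, and the multi-hot encoding helper are assumptions for illustration, not code from the talk.

      from pyspark.sql.functions import udf
      from pyspark.sql.types import ArrayType, FloatType

      diseases = ["Atelectasis", "Consolidation", "Infiltration", "Pneumothorax",
                  "Edema", "Emphysema", "Fibrosis", "Effusion", "Pneumonia",
                  "Pleural_Thickening", "Cardiomegaly", "Nodule", "Mass", "Hernia"]

      # Encode the pipe-separated findings as a 14-element multi-hot vector.
      def to_multi_hot(findings):
          present = set(findings.split("|"))
          return [1.0 if d in present else 0.0 for d in diseases]

      encode = udf(to_multi_hot, ArrayType(FloatType()))

      labelDF = (spark.read.option("header", True).csv(label_csv_path)
                 .withColumnRenamed("Image Index", "Image_Index")
                 .withColumn("label", encode("Finding Labels")))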
  15. 15. Feature Engineering – Image Pre-processing 15#UnifiedAnalytics #SparkAISummit Built-in APIs for feature engineering using the ChainedPreprocessing API:
      • ImageCenterCrop() – ImageCenterCrop(224, 224)
      • ImageFlip() – random flip and brightness
      • ImageChannelNormalize() – normalize the channels
      • ImageMatToTensor() – convert to tensor
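      As a rough illustration, the chain above could be expressed as follows. The transform class names are the ones listed on the slide, while the import paths, the RowToImageFeature/ImageFeatureToTensor adapters, and the ImageNet channel means are assumptions based on typical Analytics Zoo usage, not code from the talk.

      from zoo.feature.common import ChainedPreprocessing
      from zoo.feature.image import (RowToImageFeature, ImageCenterCrop, ImageFlip,
                                     ImageChannelNormalize, ImageMatToTensor,
                                     ImageFeatureToTensor)

      # Chain per-image transforms into one preprocessor that NNEstimator
      # can apply to the image column of the training DataFrame.
      transformer = ChainedPreprocessing([
          RowToImageFeature(),                              # DataFrame row -> ImageFeature
          ImageCenterCrop(224, 224),                        # crop to ResNet-50 input size
          ImageFlip(),                                      # random flip augmentation
          ImageChannelNormalize(123.68, 116.779, 103.939),  # assumed ImageNet channel means
          ImageMatToTensor(),                               # OpenCV Mat -> tensor
          ImageFeatureToTensor()])                          # ImageFeature -> model input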
  16. 16. Transfer Learning using ResNet-50 trained with ImageNet 16#UnifiedAnalytics #SparkAISummit [Diagram: pre-trained ResNet-50 with a new input layer added for the X-ray images and the existing classification layer replaced by a new layer predicting conditions A–E for a patient] • Add a new input layer • Remove the existing classification layer and add a new classification layer
  17. 17. Defining the model with Transfer Learning APIs 17#UnifiedAnalytics #SparkAISummit
      • Load a pre-trained model using Net.load_bigdl; the available models are trained on the ImageNet dataset
        – Inception
        – ResNet-50
        – DenseNet
      • Remove the final softmax layer of ResNet-50
      • Add a new input layer (for the resized X-ray images) and a new output layer (to predict the 14 diseases); the activation function is sigmoid
      • Avoid overfitting with
        – Regularization
        – Dropout
  18. 18. Defining the model with Transfer Learning APIs 18#UnifiedAnalytics #SparkAISummit
      def get_resnet_model(model_path, label_length):
          full_model = Net.load_bigdl(model_path)
          model = full_model.new_graph(["pool5"])
          inputNode = Input(name="input", shape=(3, 224, 224))
          resnet = model.to_keras()(inputNode)
          flatten = GlobalAveragePooling2D(dim_ordering='th')(resnet)
          dropout = Dropout(0.2)(flatten)
          logits = Dense(label_length, W_regularizer=L2Regularizer(1e-1),
                         b_regularizer=L2Regularizer(1e-1),
                         activation="sigmoid")(dropout)
          lrModel = Model(inputNode, logits)
          return lrModel
  19. 19. Define the Optimizer 19#UnifiedAnalytics #SparkAISummit
      • Evaluated two optimizers: SGD and Adam
      • The learning rate scheduler is implemented in two phases (warmup + plateau schedule):
        – Warmup: gradually increase the learning rate for 5 epochs
        – Plateau: Plateau("Loss", factor=0.1, patience=1, mode="min", epsilon=0.01, cooldown=0, min_lr=1e-15)
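      A minimal sketch of that two-phase schedule, assuming BigDL's SequentialSchedule/Warmup utilities and the schedule-aware Adam variant from Analytics Zoo. The Plateau settings come from the slide, and the 1e-7 starting rate and roughly 1e-4 peak rate are read off the chart on slide 23; the iteration counts and everything else are assumptions.

      from math import ceil
      from bigdl.optim.optimizer import SequentialSchedule, Warmup, Plateau
      from zoo.pipeline.api.keras.optimizers import Adam  # Adam variant accepting a schedule

      iterations_per_epoch = int(ceil(float(training_count) / batch_size))
      warmup_iterations = 5 * iterations_per_epoch           # warm up for 5 epochs
      init_lr, max_lr = 1e-7, 1e-4
      warmup_delta = (max_lr - init_lr) / warmup_iterations  # per-iteration LR increase

      schedule = SequentialSchedule(iterations_per_epoch)
      schedule.add(Warmup(warmup_delta), warmup_iterations)  # phase 1: warmup
      schedule.add(Plateau("Loss", factor=0.1, patience=1, mode="min",
                           epsilon=0.01, cooldown=0, min_lr=1e-15),
                   1000000)                                  # phase 2: plateau schedule
      optim_method = Adam(lr=init_lr, schedule=schedule)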
  20. 20. Train the model using ML Pipelines 20#UnifiedAnalytics #SparkAISummit
      • Use the Analytics Zoo NNEstimator API to build and train the model
      • .fit() produces a neural network model, which is a Spark ML Transformer
      • You can then run the model for inference via .transform()
      • AUC-ROC is used to measure the accuracy of the model; the Spark ML Pipelines BinaryClassificationEvaluator determines the AUC-ROC for each disease
      estimator = NNEstimator(xray_model, BinaryCrossEntropy(), transformer) \
          .setBatchSize(batch_size).setMaxEpoch(num_epoch) \
          .setFeaturesCol("image").setCachingSample(False) \
          .setValidation(EveryEpoch(), validationDF, [AUC()], batch_size) \
          .setOptimMethod(optim_method)
      xray_nnmodel = estimator.fit(trainingDF)
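      To compute the per-disease AUC-ROC described above, something like the following sketch could be used. The "prediction"/"label" array columns and their element types are assumptions about NNModel's output layout, and diseases is the label list defined earlier; this is illustrative, not code from the talk.

      from pyspark.ml.evaluation import BinaryClassificationEvaluator
      from pyspark.sql.functions import col

      predictionDF = xray_nnmodel.transform(validationDF).cache()
      evaluator = BinaryClassificationEvaluator(rawPredictionCol="raw",
                                                labelCol="truth",
                                                metricName="areaUnderROC")
      for i, disease in enumerate(diseases):
          # Score the i-th element of the 14-dim prediction/label vectors.
          perDiseaseDF = predictionDF.select(
              col("prediction")[i].cast("double").alias("raw"),
              col("label")[i].cast("double").alias("truth"))
          print(disease, evaluator.evaluate(perDiseaseDF))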
  21. 21. Results and observations 21#UnifiedAnalytics #SparkAISummit
  22. 22. Spark parameter recommendations 22#UnifiedAnalytics #SparkAISummit
      • Number of executors: number of compute nodes
      • Number of executor cores: number of physical cores minus two
      • Executor memory: number of cores per executor * (memory required per core + size of a data partition)
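      As a concrete illustration of those rules of thumb, here is a SparkConf sketch for a hypothetical cluster of 4 nodes, each with 34 physical cores; every value is an assumption for illustration, not a configuration from the talk.

      from pyspark import SparkConf
      from zoo.common.nncontext import init_nncontext

      conf = (SparkConf()
              .set("spark.executor.instances", "4")   # one executor per compute node
              .set("spark.executor.cores", "32")      # physical cores minus two
              .set("spark.executor.memory", "180g"))  # cores * (per-core memory + partition)
      sc = init_nncontext(conf)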
  23. 23. Impact of the Learning Rate Scheduler 23#UnifiedAnalytics #SparkAISummit
      • Adam (with the learning rate scheduler) outperforms SGD
        – Warmup
        – Plateau
      • The learning rate scheduler helps in converging the model much faster
      [Chart: learning rate per epoch (warming up from 1e-7 toward 1e-4, then decaying) and average AUC (higher is better) over 15 epochs, with and without the learning rate scheduler]
  24. 24. Base model comparison 24#UnifiedAnalytics #SparkAISummit [Chart: per-disease AUC (higher is better, roughly 0.5 to 0.95) for Inception-V3, DenseNet-121, and ResNet-50 base models]
  25. 25. Batch size and scalability 25#UnifiedAnalytics #SparkAISummit
      • Two factors:
        – Batch per thread (mini-batch): the number of images in an iteration per worker
        – Global batch size: batch per thread * number of workers. For example, with 16 nodes * 32 cores = 512 workers and 4 images per thread, the global batch size is 2,048
      • Batch size is a critical parameter for model accuracy as well as for distributed performance
        – Increasing the batch size increases parallelism
        – Increasing the batch size may lead to a loss in generalization performance, especially at very large batch sizes
      [Chart: average AUC (higher is better) versus batch per thread, from 1 to 8]
  26. 26. Scalability 26#UnifiedAnalytics #SparkAISummit
      • ~3x speed-up from 4 nodes to 16 nodes
      • ~2.5 hours to train the model
      [Chart: speed-up (higher is better) versus number of nodes (4, 8, 12, 16) for global batch sizes of 512, 1024, 1536, and 2048; 15 epochs, 32 cores per executor]
  27. 27. Future Work • Continue to improve model performance and scalability – Implement LARS and other advanced learning rate schedulers for large-scale training • Increase the feature set – Incorporate patient profile information • Develop performance characterization and sizing guidelines for image classification problems 27#UnifiedAnalytics #SparkAISummit
  28. 28. Links Code and white paper available at https://github.com/dell-ai-engineering/BigDL-ImageProcessing-Examples 28#UnifiedAnalytics #SparkAISummit
  29. 29. Acknowledgements (alphabetical) Mehmood Abd, Michael Bennett, Sajan Govindan, Jenwei Hsieh, Andrew Kipp, Dharmesh Patel, Leela Uppuluri, and Luke Wilson 29#UnifiedAnalytics #SparkAISummit
  30. 30. Thank you! DON’T FORGET TO RATE AND REVIEW THE SESSIONS SEARCH SPARK + AI SUMMIT
