Deep learning is useful for enterprise tasks such as speech recognition, image classification, AI chatbots, and machine translation, to name a few.
To train deep learning and machine learning models, frameworks such as TensorFlow, MXNet, Caffe, and XGBoost can be leveraged, and sometimes several of them are used together to solve different problems.
To make distributed deep learning/machine learning applications easy to launch, manage, and monitor, Apache Hadoop 3.x introduces YARN native services along with other improvements such as first-class GPU support, container DNS support, and scheduling improvements. These improvements make running distributed deep learning/machine learning applications on YARN as simple as running them locally, which lets machine learning engineers focus on algorithms instead of worrying about the underlying infrastructure. With these improvements, YARN can also better manage a shared cluster that runs deep learning/machine learning workloads alongside other services and ETL jobs.
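To make this concrete, a YARN native service is described by a declarative spec. The fragment below is a sketch of such a spec for a containerized TensorFlow worker requesting a GPU; the field names follow the Hadoop 3.1 YARN service API, and the service name, Docker image id, and launch command are placeholders, not values from this talk.

```json
{
  "name": "tf-worker",
  "version": "1.0",
  "components": [
    {
      "name": "worker",
      "number_of_containers": 2,
      "launch_command": "python train.py",
      "artifact": {
        "id": "tf-image",
        "type": "DOCKER"
      },
      "resource": {
        "cpus": 4,
        "memory": "8192",
        "additional": {
          "yarn.io/gpu": { "value": 1 }
        }
      }
    }
  ]
}
```

A spec like this can be submitted with the YARN CLI, e.g. `yarn app -launch tf-worker spec.json`; check the YARN service documentation for your Hadoop version, as exact field names may differ.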
In this session, we will take a closer look at these improvements and show, with demos, how to run these applications on YARN. Attendees can start running these applications on YARN after this talk.
Speakers
Wangda Tan, Staff Software Engineer, Hortonworks
Sunil Govindan, Staff Engineer, Hortonworks
Wangda Tan (Hadoop PMC member @Hortonworks)
Sunil Govind (Hadoop PMC member @Hortonworks)
Deep learning on YARN: running TensorFlow, etc. on Hadoop clusters
Model training is the most important step of the whole pipeline.
As the workflow shows, only a tiny fraction of the code is actually devoted to model learning. The machine learning workflow usually needs a lot of support from the big data platform, such as data collection from different data sources, feature extraction, feature transformation, and so on.
Let's find out, step by step, how big data infrastructure can help machine learning.
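The stages named above (data collection, feature extraction, feature transformation, then model training) can be sketched as a tiny pure-Python pipeline. This is an illustrative toy, not code from the talk: the record fields, features, and the trivial "model" are all hypothetical stand-ins for what real cluster jobs would compute.

```python
# Sketch of the workflow: collect -> extract features -> transform -> train.
# In a real pipeline each stage would be a distributed job on the cluster.

def collect(sources):
    """Merge records from several (hypothetical) data sources."""
    return [record for source in sources for record in source]

def extract_features(records):
    """Turn raw records into numeric feature vectors (toy features)."""
    return [(len(r["text"]), r["clicks"]) for r in records]

def transform(features):
    """Min-max normalize each feature column to the [0, 1] range."""
    cols = list(zip(*features))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [
        tuple((v - l) / (h - l) if h > l else 0.0
              for v, l, h in zip(row, lo, hi))
        for row in features
    ]

def train(features):
    """Stand-in for model training: return the mean feature vector."""
    cols = list(zip(*features))
    return [sum(c) / len(c) for c in cols]

sources = [
    [{"text": "hadoop yarn", "clicks": 3}],
    [{"text": "tensorflow on yarn", "clicks": 9}],
]
model = train(transform(extract_features(collect(sources))))
print(model)  # mean of the normalized feature columns
```

Note that only the last stage is "machine learning"; everything before it is data plumbing, which is exactly where the big data platform helps.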
To Do:
Add Oozie/Azkaban to control the workflow
Even though TF provides options to use less GPU memory than the whole device offers, we cannot enforce this from outside the process.
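For reference, the TensorFlow 1.x option being referred to looks like the sketch below (requires TensorFlow 1.x and a GPU to have any effect). The key point of the note above is that this is cooperative: the limit lives inside the TF process's own session config, so YARN cannot enforce it externally. The 0.25 fraction is an arbitrary example value.

```python
import tensorflow as tf

# Ask TF to grab at most ~25% of each visible GPU's memory.
# This is voluntary, per-process behavior; nothing outside the
# process (e.g. YARN) can force a TF job to honor it.
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.25)
config = tf.ConfigProto(gpu_options=gpu_options)

with tf.Session(config=config) as sess:
    pass  # build and run the training graph here
```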