Gurpreet Singh from Microsoft gave a talk on scaling Python for data analysis and machine learning using DASK and Apache Spark. He discussed the challenges of scaling the Python data stack and compared options like DASK, Spark, and Spark MLlib. He provided examples of using DASK and PySpark DataFrames for parallel processing and showed how DASK-ML can be used to parallelize Scikit-Learn models. Distributed deep learning with tools like Project Hydrogen was also covered.
3. #UnifiedAnalytics #SparkAISummit
…this talk is also about Scaling Python for Data Analysis & Machine Learning! We'll start with a brief review of the scalability challenge in the PyData stack, before comparing and contrasting …
6. DASK DataFrame & PySpark

DASK DataFrames [Parallel Pandas]
Challenges:
§ DASK DataFrames API is not identical to the Pandas API
§ Performance concerns with operations involving shuffling
§ Inefficiencies of Pandas are carried over
Recommendations:
§ Follow the Pandas performance tips
§ Avoid shuffling; use pre-sorting and persist the results

PySpark
Challenges:
§ Performance concerns due to the PySpark design
Recommendations:
§ Use the DataFrames API
§ Use Vectorized/Pandas UDFs (Spark v2.3 onwards)
8. Code Review

While Pandas displays a sample of the data, DASK and PySpark show metadata of the DataFrame. The npartitions value shows how many partitions the DataFrame is split into. DASK created a DAG with 99 nodes to process the data.
10. How does DASK-ML work?

Scalable Machine Learning with DASK-ML (released Oct '17) takes three approaches:
§ Parallelize Scikit-Learn
§ Re-implement Algorithms
§ Partner with existing Libraries
Parallelize Scikit-Learn - Distributed JobLib:

# sklearn.externals.joblib was removed in scikit-learn 0.23; the modern
# equivalent of the slide's backend registration is joblib's Dask backend:
import joblib
from dask.distributed import Client
client = Client()  # connect to (or start) a DASK Distributed cluster
with joblib.parallel_backend('dask'):
    pass  # scikit-learn fit/cross-validation calls here run on the cluster

Re-implement Algorithms - e.g. distributed XGBoost via DASK-ML:

from dask_ml.xgboost import XGBRegressor
est = XGBRegressor(...)
est.fit(train, train_labels)
prediction = est.predict(test)

Spark MLlib - As of Spark 2.0, the primary machine learning API for Spark is the DataFrame-based API in the spark.ml package. It provides:
§ Algorithms - common algorithms, e.g. classification, regression, & clustering
§ Featurization - feature extraction, transformation, dimensionality reduction
§ Pipelines - constructing, evaluating, and tuning ML pipelines
§ Persistence - save/load algorithms, models, and pipelines
§ Utilities - linear algebra, statistics, and data handling

The spark.ml building blocks:
§ Transformer - an algorithm that transforms one DataFrame into another
§ Estimator - an algorithm that trains on a DataFrame and produces a model
§ Pipeline - chains multiple Transformers and Estimators as per the ML flow

Scalable ML Approaches:
§ Spark for Feature Engineering + Scikit-learn etc. for Learning
§ Distributed ML Algorithms from Spark MLlib
§ Train/evaluate Scikit-learn models in parallel (spark-sklearn)
11. Distributed Deep Learning

With DASK: peer a DASK Distributed cluster with TensorFlow running in distributed mode.

With Deep Learning Pipelines:
§ APIs for scalable deep learning in Python from Databricks
§ Provides a suite of tools covering loading, training, tuning and deploying

With Project Hydrogen:
§ Simplifies distributed deep learning training
§ Supports TensorFlow, Keras and PyTorch
§ Integration with PySpark
§ New scheduling option called Gang Scheduler
12. Other Dev Considerations…
§ Workloads/APIs
  § Custom algorithms (only in DASK)
  § SQL, Graph (only in Spark)
§ Debugging Challenges
  § DASK Distributed may not align with normal Python debugging tools/practices
  § PySpark errors may mix JVM and Python stack traces
§ Visualization Options
  § Down-sample and use Pandas DataFrames
  § Use open source libraries, e.g. D3, Seaborn, Datashader (only for DASK)
  § Use the Databricks visualization feature
13. Which one to Use?

"There are no solutions, there are only trade-offs." – Thomas Sowell