Deeplearning4j
Data-parallel deep learning on Spark
The JVM is too slow for numerical compute
Great at network I/O and data access
Great streaming infrastructure
Hardware acceleration required
Spark - data access layer
CUDA - compute layer
Current Landscape
Spark assumes columnar data
Binary (audio/images) is becoming more important
HDFS is great for storing blobs
SQL doesn’t work for pixels and audio frames
The ingredients are here for something great
The solution
JavaCPP (Cython for Java)
64-bit pointers for efficient contiguous access to image and audio data
Leverage Java's distributed-systems ecosystem
Add new numerical compute layer (libnd4j)
Allow for heterogeneous compute
Off heap memory
Easy deployment
Data pipelines as a first concern
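The off-heap bullet can be illustrated with plain JDK code: a direct buffer lives outside the garbage-collected heap, so the GC never moves it and native code can read it in place. This is a minimal sketch of the idea only, not ND4J's actual buffer API; JavaCPP generalizes the same pattern with true 64-bit pointers.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class OffHeapDemo {
    // Sum n floats stored in a direct (off-heap) buffer.
    // The data never touches the Java heap, so the GC cannot move it --
    // the property needed when handing memory to native BLAS/CUDA code.
    static float sumOffHeap(int n) {
        ByteBuffer buf = ByteBuffer.allocateDirect(n * Float.BYTES)
                                   .order(ByteOrder.nativeOrder());
        for (int i = 0; i < n; i++) {
            buf.putFloat(i * Float.BYTES, i * 1.5f); // absolute put, no heap copy
        }
        float sum = 0f;
        for (int i = 0; i < n; i++) {
            sum += buf.getFloat(i * Float.BYTES);
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumOffHeap(4)); // prints 9.0
    }
}
```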
SKIL (Skymind Intelligence Layer)
JavaCPP
Auto-generates JNI bindings for C++ by parsing C++ headers
Allows for easy maintenance and deployment of C++ binaries in Java
Write efficient ETL pipelines for images via OpenCV (JavaCV)
Integrate other C++ deep learning frameworks (TensorFlow, Caffe, ...)
Allows productionizing fast (but academic) C++ code, using Java (Kafka, Spark) for ETL
64-bit pointers (not previously possible on the JVM)
“Actual” Streaming frameworks
Kafka
Flink
Spark Streaming
Apex
ND4J
Heterogeneous codebase
Supports CUDA and x86, with POWER support coming soon
Shared indexing logic for writing ndarray routines
Memory management in Java (even CUDA memory!)
OpenMP on CPU + routines for common operations such as reduce
Pinned memory and async operations
JIT allocation
Spark friendly (runs on multiple threads and devices)
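The "shared indexing logic" bullet comes down to one formula: the flat offset into a contiguous buffer is the dot product of a multi-index with the array's strides, so the same routine can drive CPU and CUDA backends. A minimal sketch in plain Java, with illustrative names rather than ND4J's actual API:

```java
public class Strides {
    // Row-major ("C" order) strides for a given shape:
    // the stride of a dimension is the product of all later dimensions.
    static long[] cStrides(long[] shape) {
        long[] strides = new long[shape.length];
        long acc = 1;
        for (int i = shape.length - 1; i >= 0; i--) {
            strides[i] = acc;
            acc *= shape[i];
        }
        return strides;
    }

    // Flat offset of a multi-index: dot product of index and strides.
    static long offset(long[] index, long[] strides) {
        long off = 0;
        for (int i = 0; i < index.length; i++) off += index[i] * strides[i];
        return off;
    }

    public static void main(String[] args) {
        long[] s = cStrides(new long[]{2, 3, 4});            // {12, 4, 1}
        System.out.println(offset(new long[]{1, 2, 3}, s));  // prints 23
    }
}
```

Note the `long` arithmetic throughout: with 64-bit offsets the same logic addresses buffers far larger than Java's `int`-indexed arrays allow.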
Deployment
Juju
Runs as a Spark job
Easy to embed in production
Canova
One interface for ETL
Integrates with Spark
Easy to extend to write your own custom data pipelines
One interface for generating NDArrays
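The "one interface for ETL" idea means every data source (CSV, images, audio) is adapted to a single record iterator that downstream code converts to NDArrays. A hedged sketch of that pattern — the interface and CSV reader below are illustrative stand-ins, not Canova's actual API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Hypothetical common interface: every source yields numeric records.
interface RecordSource extends Iterator<List<Double>> {}

// One concrete source: comma-separated lines of numbers.
class CsvRecordSource implements RecordSource {
    private final Iterator<String> lines;
    CsvRecordSource(List<String> rawLines) { this.lines = rawLines.iterator(); }
    public boolean hasNext() { return lines.hasNext(); }
    public List<Double> next() {
        List<Double> rec = new ArrayList<>();
        for (String cell : lines.next().split(",")) {
            rec.add(Double.parseDouble(cell.trim()));
        }
        return rec;
    }
}

public class CanovaSketch {
    public static void main(String[] args) {
        RecordSource src = new CsvRecordSource(Arrays.asList("1,2", "3,4"));
        while (src.hasNext()) System.out.println(src.next()); // [1.0, 2.0] then [3.0, 4.0]
    }
}
```

A new pipeline only needs a new `RecordSource` implementation; everything that turns records into NDArrays stays unchanged.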
Conclusion
Built to be friendly to the JVM ecosystem
Allows Java to do what it's good at
NumPy in Java makes it easy to port things like scikit-learn
Data parallelism means it runs on the commodity hardware the JVM assumes
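The data-parallel pattern behind the conclusion can be sketched in a few lines: each worker trains on its own partition, then a driver averages the parameters. This is a toy illustration of the synchronous averaging step only (plain arrays stand in for real model parameters), not DL4J's actual Spark code.

```java
public class ParamAveraging {
    // Synchronous averaging step: combine each worker's parameter
    // vector into a single averaged model after a round of training.
    static double[] average(double[][] workerParams) {
        double[] avg = new double[workerParams[0].length];
        for (double[] w : workerParams) {
            for (int i = 0; i < avg.length; i++) {
                avg[i] += w[i] / workerParams.length;
            }
        }
        return avg;
    }

    public static void main(String[] args) {
        double[][] workers = {{1.0, 2.0}, {3.0, 4.0}};
        System.out.println(java.util.Arrays.toString(average(workers))); // prints [2.0, 3.0]
    }
}
```

Because each worker only needs its own data partition and a copy of the parameters, this scales out on the commodity hardware a JVM cluster assumes.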
Future
Model Parallelism
OpenCL
Sparse support
Reinforcement learning
Questions?
help@skymind.io
