Introduction to GPU Development for Java Developers. View the video at https://youtu.be/sOj8LsuSMFg - and find out more about the Seattle Java User Group (SeaJUG) at http://seajug.org/
1. Intro To GPU Development
for a Java Developer
Bonus JDK 10 Quick Review (Time Permitting)
Will Iverson
2. Speaker
• Will Iverson
• Frequent SeaJUG speaker since ~2001
• Professional developer/technologist since 1990
• Diverse background includes…
• Statistical analysis of data from NASA Space Shuttle
• Product management for Apple
• Developer relations for Symantec's Visual Cafe
• Clients over the last two decades include Sun, BEA, Canal+ Technologies, AT&T, T-Mobile, the State of Washington & many, many more…
• 2010-2016, founder of Dev9, a premier local consulting firm
5. Disclaimer
• GPU development is huge
• “Let’s cover software development in an hour…”
• Goals
• Cover lots of things from high level
• Conceptual framework
• Ideas and leads for more research
• If you are an experienced GPU/AI/ML/Crypto/etc dev
• Hold feedback to end
• Please do contribute at end!
6. Brief History
CISC CPU
• 8086, 68000
CISC CPU + FPU
• 8086+8087, 68020+68881
RISC CPU
• PowerPC, ARM
CISC as RISC CPU
• Modern Intel/AMD CPU
CISC as RISC + GPU
• Modern PC
10. GPU Conceptual Overview
[Diagram] A regular CPU app hands code/scripts for the GPU to the driver, which acts as a “compiler”. Data assets (e.g. 3D geometry, 2D texture data) are moved onto the video card via driver data loading. The video card provides parallel processing and fast dedicated memory, renders into the video buffer for video output, and results come back through driver data retrieval.
18. WebGL
•OpenGL ES
• Target mobile
• Most of what you need for (non-cutting-edge) 3D
•WebGL
• (Basically) OpenGL ES for the Web
• Target rendering to an HTML canvas
• https://www.shadertoy.com/browse
• https://www.shadertoy.com/howto
• https://www.construct.net/
26. Java Compute
•Deep Learning 4J
•https://deeplearning4j.org/gpu
•Deeplearning4j is a Java-based toolkit for building,
training and deploying deep neural networks, as well as
regressions and KNN.
27. Deeplearning4j Components
• DataVec performs data ingestion, normalization & transformation into feature vectors
• Deeplearning4j provides tools to configure neural networks & build computation graphs
• DL4J-Examples contains working examples for classification and clustering of images, time series & text.
• As of 5/12/18, a Lombok incompatibility breaks the build on JDK 10; use JDK 8 instead
• Keras Model Import helps import trained models from Python & Keras to DeepLearning4J & Java.
• ND4J gives Java access to native libraries to quickly process matrix data on CPUs or GPUs.
• Choose GPUs or native CPUs for your backend linear algebra operations by changing the dependencies in ND4J’s POM.xml file
• CUDA, not OpenCL!
• ScalNet is a Scala wrapper for Deeplearning4j, inspired by Keras.
• Runs on multi-GPUs with Spark.
• RL4J implements Deep Q Learning, A3C and other reinforcement learning algorithms for the JVM.
• Arbiter helps search the hyperparameter space to find the best neural net configuration.
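The backend swap mentioned above is just a Maven dependency change. A sketch of the relevant `pom.xml` fragment — note that the artifact IDs and the version shown here are illustrative for the 2018-era releases; check the ND4J docs for current coordinates:

```xml
<!-- CPU backend: native BLAS on the host -->
<dependency>
  <groupId>org.nd4j</groupId>
  <artifactId>nd4j-native-platform</artifactId>
  <version>1.0.0-beta</version>
</dependency>

<!-- ...or comment the above out and swap in the CUDA backend (CUDA, not OpenCL!) -->
<!--
<dependency>
  <groupId>org.nd4j</groupId>
  <artifactId>nd4j-cuda-9.1-platform</artifactId>
  <version>1.0.0-beta</version>
</dependency>
-->
```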
29. Local & Cloud Options
•NVIDIA cards support OpenCL & CUDA
•Mac OS X, eGPU…
• https://github.com/marnovo/macOS-eGPU-CUDA-guide
•NVIDIA Product Line Exploding
•http://www.nvidia.com/page/home.html
33. Google Cloud
• ML, AI, Big Data, etc…
• This is a tiny subset of the services…
• Just learning what they all do would take…
34. Challenges
•Very difficult to predict & manage performance
•Could see 10x or 100x perf gains
•…or not.
•One small change could blow up parallel execution performance
•Relatively difficult to test in advance
35. Suggested Strategies
• GPU Shaders
• Very specialized, visual effects
• Mock up in Photoshop, Motion, Final Cut, etc.
• Look to existing implementations & tweak
• Compute
• Think of kernels as specialized drivers, or stored procs, or whatever
• Specialist field
• Use existing kernels where possible
• Get really clear about modeling data movement
• Get really clear about how minor algorithm tweaks can blow things up
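The “kernel” mindset above can be sketched in plain Java (class and array names here are illustrative): each index touches only its own slot, with no shared mutable state and no data-dependent branching — exactly the shape that moves cleanly to a GPU kernel.

```java
import java.util.stream.IntStream;

public class KernelSketch {
    public static void main(String[] args) {
        int n = 1_000_000;
        float[] a = new float[n];
        float[] b = new float[n];
        float[] out = new float[n];
        for (int i = 0; i < n; i++) { a[i] = i; b[i] = 2 * i; }

        // Each index i is an independent "work item": pure function of
        // its inputs, writes only its own output slot. A GPU runs
        // thousands of these in lockstep.
        IntStream.range(0, n).parallel().forEach(i -> out[i] = a[i] + b[i]);

        System.out.println(out[10]); // 30.0
    }
}
```

The loop body, rewritten in OpenCL C or CUDA, would be the kernel; the surrounding setup corresponds to the driver-mediated data loading and retrieval.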
43. What About Compiling to GPU?
• Fundamental problem
• JVM emulates a traditional CPU
• Probably a bad fit as a general solution
• Too many differences
• Reminds me of bad ORM abstractions
• Seems to simplify, but actually makes things horrible
• What is the purpose?
• Lots of fast parallel data processing
• IBM inline GPU
• https://www.ibm.com/support/knowledgecenter/en/SSYKE2_8.0.0/com.ibm.java.lnx.80.doc/diag/understanding/gpu_jit.html
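The IBM inline-GPU JIT targets exactly this shape of code. A minimal sketch in plain Java (it runs anywhere; only on the IBM SDK, with the JIT’s GPU option enabled as described on the linked page, would a simple side-effect-free parallel IntStream body like this be a candidate for automatic offload to a CUDA device):

```java
import java.util.stream.IntStream;

public class GpuJitCandidate {
    public static void main(String[] args) {
        int n = 1 << 20;
        double[] x = new double[n];
        double[] y = new double[n];
        for (int i = 0; i < n; i++) { x[i] = i; }

        // A data-parallel loop with a simple, side-effect-free body:
        // on a regular JVM this is an ordinary parallel CPU loop, but
        // it is the pattern an offloading JIT can move to the GPU.
        IntStream.range(0, n).parallel().forEach(i -> y[i] = x[i] * 2.0 + 1.0);

        System.out.println(y[5]); // 11.0
    }
}
```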
44. Cloud Execution Options
• AWS…
• Dedicated cloud “GPU” systems
• GPUs with no… graphics output
• https://aws.amazon.com/ec2/elastic-gpus/
• Or… use higher level APIs focused on task
• https://cloud.google.com/gpu/
• Complicated math to figure out approach
• Data transfer costs
• For learning, check out already uploaded public data sets
• Pricing is impacted by things like cryptomining
CPU Introduced
• Focus on simple integer processing
• Floating point added later
• For a brief time, floating point coprocessors
• Simple single-threaded model
• Multi-threading “hacked” in later
GPU Introduced
• Lots and lots of transistors
• Unlike the CPU, the GPU just keeps adding cores
• CPU: multithreading as an afterthought
• GPU: multicore as… uhh… the core
• Totally different programming model
Future History
• Specialized, different dev models
• CPU, GPU, Q-Bit…?
Key Points
• Drivers are a lot more complicated than simple memory mapping and event triggers
• Effectively operating systems and compilers, with support for multiple APIs
• Huge variety in capabilities, including specialist support for various image and data formats
• This is why an NVIDIA driver update may weigh in at 500MB – closer to a giant OS, with lots and lots of legacy system support
• Drivers appear to create their own IR format for the various supported APIs, which is then processed by the video card
• Easy to imagine tweaking hardware for different use cases. For example, no need for video output for compute-only tasks
• Also easy to imagine tweaking for different uses. For example, less memory and more processing for AI or crypto
• Leaky abstraction – those blue arrows are (relatively) slow bus movement. Still wicked, wicked fast, but (relatively) slow