Julia: A modern language for Software 2.0
Dr. Viral B. Shah and Keno Fischer
Co-founders, Julia Computing
SAP, Oct 29, 2020
Julia solved the two language problem
Compress the innovation cycle with Julia.
The last 25 years:
• Experts develop algorithms (Python, R, SAS, Matlab) → DEPLOY
• Computer scientists prepare for production (C++, C#, Java) → DEPLOY
With Julia:
• Experts and computer scientists collaborate on one platform - JULIA
Software 2.0 also suffers from the two language problem.
Origin and Journey
1. 2009 Discussion about a new language
2. 2012 Why we created Julia
3. 2013 Julia becomes the “Ju” in Jupyter
4. 2015 Julia co-creators found Julia Computing, Inc.
5. 2017 1M Julia downloads
6. 2018 Julia 1.0 released
7. 2019 10M Julia downloads. Wilkinson Prize. Sidney Fernbach Prize.
8. 2020 20M downloads. Julia 1.5 released
Dr. Viral Shah Prof. Alan Edelman
Dr. Jeff Bezanson Stefan Karpinski
The Software 2.0 programming stack
Andrej Karpathy
Director of AI, Tesla
Software 2.0: Write a rough skeleton of the code, and use the computational resources at our disposal to search this space for a program that works.
Yann LeCun
Director of AI Research, Facebook
OK, Deep Learning has outlived its usefulness as a buzz-phrase. Deep Learning est mort. Vive Differentiable Programming! (“Deep Learning is dead. Long live Differentiable Programming!”)
Chris Lattner
Senior Director,
TensorFlow, TPU (Google)
Julia is another great language with an open and active community. They are currently investing in machine learning techniques. The Julia community shares many common values with our project.
Google.ai Head Jeff Dean and
Fast.ai co-author Jeremy Howard on Julia
Performance matters: Comparison with Python
• The Computer Language Benchmarks Game: Julia is 30x faster
• Graph processing: Julia is 100x faster
• H2O DataFrames benchmark (GroupBy): Julia is 2x faster
• CSV loading: Julia is 30x faster
• K-means: Julia is 100x faster
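A flavor of where these numbers come from: straightforward, type-stable Julia loops compile to native code, so there is no penalty for writing the loop yourself. A minimal sketch (assumes the BenchmarkTools.jl package; sumsq is a made-up example function):

using BenchmarkTools

function sumsq(v)
    s = 0.0
    for x in v          # a plain loop; no vectorization tricks needed
        s += x * x
    end
    return s
end

@btime sumsq($(rand(10^6)))   # timed with the input interpolated in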
Customers, Partners & Companies Using Julia
10,000+ companies using Julia
Many case studies here:
https://juliacomputing.com/case-studies/
Universities Using and Teaching Julia
Noteworthy: https://computationalthinking.mit.edu
Rapid adoption for a young language
20M downloads; 4,000 packages; 10,000 companies; 1,500 universities
IEEE Spectrum Language Rankings
Books on Julia
Best in class packages in many domains
• Differential Equations
• Robotics
• Operations Research
• Graph Processing
• Signal Processing
• Image Processing
• Computational Biology
• Data Science
Pluto Notebooks
Writing a notebook is not just about writing the final document — Pluto empowers the experiments and discoveries that are essential to getting there.
Explore models and share results in a notebook that is:
• Reactive - when changing a function or variable, Pluto automatically updates all affected cells.
• Lightweight - Pluto is written in pure Julia and is easy to install.
• Simple - no hidden workspace state; friendly UI.
JuliaCon 2020 talk: https://www.youtube.com/watch?v=IAF8DjrQSSk
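A sketch of the reactive model: in Pluto, each snippet below would live in its own cell, and editing the cell that defines r automatically re-runs every cell that depends on it (a toy example, not the on-disk notebook format):

# cell 1
r = 2.0                       # edit this value and the cells below re-run

# cell 2
area = π * r^2                # recomputed automatically whenever r changes

# cell 3
"A circle of radius $r has area $(round(area, digits=2))"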
Julia on GPUs: https://juliagpu.org
Supports NVIDIA GPUs. Nascent support for AMD and Intel GPUs.
Benchmarks compared to CUDA C
Noteworthy new capabilities
• Multi-GPU programming
• Support for CUDA 11 (and CUDA 10 also)
• CUDNN support
• Multi-tasking and multi-threading
Noteworthy applications
• 300x improvement in pharmaceutical workloads
• 1,000 GPU parallel deployment at CSCS (Switzerland)
• Clima Project – Oceananigans.jl
• Multi-physics simulations
• Reinforcement learning – AlphaZero.jl
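A minimal CUDA.jl sketch (assumes an NVIDIA GPU and the CUDA.jl package): broadcasts over GPU arrays fuse into generated kernels, and hand-written kernels are plain Julia functions:

using CUDA

x = CUDA.rand(Float32, 10_000)        # arrays allocated on the GPU
y = CUDA.rand(Float32, 10_000)
z = tanh.(x) .+ 2f0 .* y              # broadcast fuses into one GPU kernel

# A hand-written kernel (axpy! is a made-up name for this sketch)
function axpy!(z, a, x, y)
    i = threadIdx().x + (blockIdx().x - 1) * blockDim().x
    if i <= length(z)
        @inbounds z[i] = a * x[i] + y[i]
    end
    return nothing
end
@cuda threads=256 blocks=cld(length(z), 256) axpy!(z, 2f0, x, y)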
Data Science with DataFrames.jl and CSV.jl
DataFrames.jl
• High performance native Julia package for data
manipulation
• GroupBy operations 2x faster than Pandas and R
• DataFrames.jl status and roadmap
CSV.jl
• Native Julia package for loading CSV files
• Significantly faster than Python (pandas) and R
• Multi-threaded
• The great CSV showdown (TowardsDataScience)
H2O DataFrames benchmark: GroupBy
Julia is 2x faster
CSV loading: Julia is 30x faster
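A minimal sketch of the workflow (sales.csv and its :region and :amount columns are hypothetical):

using CSV, DataFrames

df = CSV.read("sales.csv", DataFrame)    # parsing runs multi-threaded

# GroupBy-style aggregation, as in the H2O benchmark
by_region = combine(groupby(df, :region), :amount => sum => :total)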
Machine Learning with MLJ.jl
A scikit-learn-like package for Julia
MLJ.jl: https://github.com/alan-turing-institute/MLJ.jl
Package                   Maturity
Clustering.jl             high
DecisionTree.jl           high
EvoTrees.jl               medium
GLM.jl                    medium
LIBSVM.jl                 high
LightGBM.jl               high
MLJFlux.jl                experimental
MLJLinearModels.jl        experimental
MLJModels.jl (builtins)   medium
MultivariateStats.jl      high
NaiveBayes.jl             experimental
NearestNeighbors.jl       high
ParallelKMeans.jl         experimental
ScikitLearn.jl            high
XGBoost.jl                high
Packages supported by MLJ.jl
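A minimal MLJ.jl sketch (assumes MLJ plus the DecisionTree.jl interface package are installed; uses the demo iris dataset bundled with MLJ):

using MLJ

X, y = @load_iris
Tree = @load DecisionTreeClassifier pkg=DecisionTree
mach = machine(Tree(), X, y)
evaluate!(mach, resampling=CV(nfolds=5), measure=log_loss)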
Deep Learning with Flux.jl
Flux.jl
• Simple Keras-like syntax
• Native Julia implementation
• Easy to look under the hood and modify
• Model Zoo to get started
Pre-trained models
• Metalhead.jl
• YOLO.jl
• Darknet.jl
• ObjectDetector.jl
• Transformers.jl
• TextAnalysis.jl
• GeometricFlux.jl
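A tiny Flux.jl sketch using the API current around this talk (shapes, data and hyperparameters are arbitrary):

using Flux

model = Chain(Dense(10, 32, relu), Dense(32, 1))   # plain Julia callables
x = rand(Float32, 10, 100)                         # 100 samples, 10 features
y = rand(Float32, 1, 100)

loss(x, y) = Flux.mse(model(x), y)
Flux.train!(loss, Flux.params(model), [(x, y)], ADAM(1e-3))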
Image Processing with Images.jl
https://juliaimages.org
Images.jl
JuliaImages is focused on a clean architecture and hopes to unify the "machine vision" and "biomedical 3D image processing" communities.
• Native Julia image processing package
• Used at MIT in a class taught in collaboration with 3blue1brown
• Native Julia datatypes such as RGB
• Extensive documentation
https://www.youtube.com/watch?v=DGojI9xcCfg
https://www.youtube.com/watch?v=8rrHTtUzyZA
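A small Images.jl sketch (assumes the Images and TestImages packages):

using Images, TestImages

img = testimage("mandrill")     # an Array of RGB pixel values
gray = Gray.(img)               # element-type conversion by broadcasting
edges = imfilter(gray, Kernel.sobel()[2])   # Sobel edge filter

Because pixels are ordinary Julia values (RGB, Gray), the generic array ecosystem applies to images unchanged.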
Optimization
INFORMS Computing Society Prize
JuMP.jl Sudoku demo: https://github.com/IainNZ/SudokuService
JuMP.jl, Optim.jl
• JuMP is a DSL for mathematical optimization
• Received the INFORMS Computing Society Prize
• Supports over 25 backend solvers
• Optim.jl provides univariate and multivariate optimization in Julia
• JuMP-dev 2020 is in progress
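A minimal JuMP sketch of a linear program (assumes the open-source GLPK solver; any of the 25+ supported solvers could be swapped in):

using JuMP, GLPK

m = Model(GLPK.Optimizer)
@variable(m, x >= 0)
@variable(m, y >= 0)
@constraint(m, x + 2y <= 14)
@objective(m, Max, 3x + 4y)
optimize!(m)
value(x), value(y)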
Scientific Machine Learning
https://sciml.ai
Noteworthy new capabilities
• Combine Science and Machine Learning
• Comprehensive Differential Equation Solvers
• GPU acceleration
• ModelingToolkit.jl (MTK): a DSL for modeling and simulation
• Automatic differentiation
Noteworthy applications
• ACED: Computational Electrochemical Systems
• Clima: Climate Modelling
• NYFed DSGE modelling
• Pumas.jl: Pharma simulations
• MADS: Decision Support System
• QuantumOptics
• Robotics
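A minimal DifferentialEquations.jl sketch, the package at the core of this stack:

using DifferentialEquations

f(u, p, t) = -u                         # exponential decay du/dt = -u
prob = ODEProblem(f, 1.0, (0.0, 5.0))   # u(0) = 1 on t ∈ [0, 5]
sol = solve(prob, Tsit5())              # adaptive 5th-order Runge-Kutta
sol(2.5)                                # dense output; ≈ exp(-2.5)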
Julia Computing Product Offerings
Applications made possible with Software 2.0
Reinforcement Learning with AlphaZero.jl
• Simple: core algorithm is 2,000 lines of Julia
• Extensible: generic interfaces allow new games and learning algorithms
• Fast: 10-100x faster than other high-level alternatives
• Scalable: combines distributed, multi-threaded and multi-GPU parallelism
Jonathan Laurent
https://github.com/jonathan-laurent/AlphaZero.jl
Accelerated building energy efficiency models
● Automation of model order reduction on DAEs via neural DAE surrogate dimensional reductions
● Interaction with component-based modeling to allow for generating accelerated building models with transferred learning components
Automated model discovery in pre-clinical contexts to predict drug efficacy
(Diagram: candidate neural-network terms NN(2), NN(3), NN(4) fitted against data)
Find neural networks so the model matches the data, then find the equations, which imply new chemical reactions.
Optimization of materials for battery-powered aircraft
● GPU-accelerate small (30) DAE battery models
● Utilize neural surrogates for global sensitivities
● Automatically refine equations from data
● Use these models to identify material properties
● Propose optimal experimental design
● Closed loop: direct collaboration with materials scientists, restructure model with new data
GPS-free navigation using magnetic sensors and neural partial differential equations
● Neural partial differential equation models which incorporate Maxwell’s equations for magnetic field denoising
● Prediction of location for GPS-free navigation
Celeste: Machine Learning to build a sky atlas, run on 650,000+ cores and 1.3 million threads in parallel at 1.54 petaflops
Climate modeling and Energy Optimization
Personalized Medicine - Pumas.ai
Joga Gobburu
Professor, School of Pharmacy (UMB)
ex-Director, Division of Pharmacometrics, US FDA
Vijay Ivaturi
Professor, School of Pharmacy (UMB)
Models are really programs, and
ML problems are language problems
Idea:
Enormous Datasets
Automatic Differentiation
Novel Hardware
Deployment
Distributed Compute
Machine Learning
Fashionable Modelling with Flux
(arXiv:1811.01457)
Building a Language and Compiler for Machine
Learning (julialang:ml-language-compiler)
Differentiable Programming
AD is a compiler problem
function foo(W, Y, x)
    Z = W * Y
    a = Z * x
    b = Y * x
    c = tanh.(b)
    r = a + c
    return r
end
function ∇foo(W, Y, x)
    Z = W * Y
    a = Z * x
    b = Y * x
    c, 𝒥tanh = ∇tanh(b)            # value plus pullback for tanh.(b)
    a + c, function (Δr)
        Δc = Δa = Δr               # r = a + c: Δr flows to both summands
        Δb = 𝒥tanh(Δc)             # pull back through the elementwise tanh
        ΔY = Δb * x';  Δx = Y' * Δb     # adjoint of b = Y * x
        ΔZ = Δa * x';  Δx += Z' * Δa    # adjoint of a = Z * x
        ΔW = ΔZ * Y';  ΔY += W' * ΔZ    # adjoint of Z = W * Y
        (nothing, ΔW, ΔY, Δx)      # nothing: cotangent for foo itself
    end
end
Note: simplified to assume * and + are compiler primitives (not the case in the actual implementation).
Note: reverse-mode AD shown (the actual implementation uses a mixed mode).
Don't Unroll Adjoint: Differentiating SSA-Form
Programs (arXiv:1810.07951)
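From the user's side this transformation is invisible: Zygote derives the pullback from the source of foo. A sketch (assumes the foo defined above and the Zygote package):

using Zygote

W, Y, x = rand(3, 3), rand(3, 3), rand(3)
r, back = Zygote.pullback(foo, W, Y, x)   # forward value plus pullback
ΔW, ΔY, Δx = back(ones(3))                # cotangents for all three inputs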
Zygote.jl - AD is a compiler problem
function foo(W, Y, x)
    Z = W * Y
    a = Z * x
    b = Y * x
    c = tanh.(b)
    r = a + c
    return r
end
function ∇foo(W, Y, x)
    Z = W * Y
    a = Z * x
    b = Y * x
    c, 𝒥tanh = ∇tanh(b)
    a + c, function (Δr)
        Δc = Δa = Δr
        Δb = 𝒥tanh(Δc)
        ΔY = Δb * x';  Δx = Y' * Δb
        ΔZ = Δa * x';  Δx += Z' * Δa
        ΔW = ΔZ * Y';  ΔY += W' * ΔZ
        (nothing, ΔW, ΔY, Δx)
    end
end
In the backwards pass
- Inputs become Outputs
- Outputs become Inputs
AD is a compiler problem
struct 𝒥_foo
    W
    Y
    x
    Z
    𝒥tanh
end
(::𝒥_foo)(Δr) = ....               # the backwards pass, reading captured fields
function ∇foo(W, Y, x)
    Z = W * Y
    a = Z * x
    b = Y * x
    c, 𝒥tanh = ∇tanh(b)
    r = a + c
    (r, 𝒥_foo(W, Y, x, Z, 𝒥tanh))
end
function ∇foo(W, Y, x)
    Z = W * Y
    a = Z * x
    b = Y * x
    c, 𝒥tanh = ∇tanh(b)
    a + c, function (Δr)
        Δc = Δa = Δr
        Δb = 𝒥tanh(Δc)
        ΔY = Δb * x';  Δx = Y' * Δb
        ΔZ = Δa * x';  Δx += Z' * Δa
        ΔW = ΔZ * Y';  ΔY += W' * ΔZ
        (nothing, ΔW, ΔY, Δx)
    end
end
Closure conversion: the compiler builds the “tape” for us.
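The same mechanism is visible in ordinary Julia: a closure is a compiler-generated callable struct whose fields are the captured variables (illustrative sketch):

make_adder(a) = x -> x + a
add2 = make_adder(2)
typeof(add2)    # a compiler-generated struct type capturing a
add2.a          # == 2: the captured value is stored as a field
add2(40)        # == 42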
AD as a compiler problem
Simple (Syntactic) - but requires optimizing compiler for performance
Partial Specialization (/DCE) => Partial Gradients
Better Compiler Optimizations ⇔ Faster AD
Nested AD for free
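"Nested AD for free" in practice: differentiating code that itself calls gradient (Zygote sketch):

using Zygote

f(x) = x^3
df(x) = gradient(f, x)[1]     # first derivative: 3x^2
ddf(x) = gradient(df, x)[1]   # second derivative by differentiating df
ddf(2.0)                      # == 12.0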
AD on programs ⇔ Co-Tangent Bundles
Machine Learning
Fashionable Modelling with Flux
(arXiv:1811.01457)
Building a Language and Compiler for Machine
Learning (julialang:ml-language-compiler)
Compiler Backends
Source: Computer Architecture, Sixth Edition: A Quantitative Approach
Hardware Diversity
Convergence of HPC and ML
● Climate segmentation on Summit (ORNL)
Any sufficiently complicated machine learning system
contains an ad-hoc, informally-specified, bug-ridden, slow
implementation of half of a programming language.
