"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
1. Machine Learning & Artificial
Intelligence based Cloud Compiler
GLOW, MLIR, ELL
(Content Beyond Syllabus)
Mrs.B.Vijayalakshmi
AP(SG)/CSE,
Ramco Institute of Technology
2. Machine Learning Compiler
• In 2018, Facebook introduced Glow (the Graph Lowering compiler) as an
open source community project.
• It “lowers” neural network graphs through a two-phase intermediate
representation (IR), generating machine code that is specially tuned
to the features and memory of a variety of embedded and server-class
hardware targets.
• It also performs ahead-of-time (AOT) compilation, which minimizes
runtime overhead, saving disk space, memory, startup time, and so on.
• Use a framework to create an ML model in the form of a computation
graph, and an ML compiler can generate machine-native code for whatever
hardware you run on.
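The "computation graph" consumed by an ML compiler can be sketched in a few lines. The toy `Node` class and `evaluate()` walk below are invented for illustration (they are not Glow's actual API); they only show how a model becomes a dataflow graph that a compiler can traverse.

```python
# Illustrative sketch only: a toy dataflow graph of the kind an ML
# compiler consumes. Node names and the evaluate() walk are invented
# for this example; they are not Glow's real API.

class Node:
    def __init__(self, op, inputs=()):
        self.op = op          # operation name, e.g. "mul", "add"
        self.inputs = inputs  # upstream Node objects

def evaluate(node, env):
    """Walk the graph bottom-up, the way a compiler traverses it."""
    if node.op == "input":
        return env[node]
    args = [evaluate(i, env) for i in node.inputs]
    if node.op == "mul":
        return args[0] * args[1]
    if node.op == "add":
        return args[0] + args[1]
    raise ValueError(f"unknown op {node.op}")

# y = (x * w) + b, expressed as a graph rather than eager code
x, w, b = Node("input"), Node("input"), Node("input")
y = Node("add", (Node("mul", (x, w)), b))
print(evaluate(y, {x: 3.0, w: 2.0, b: 1.0}))  # 7.0
```

A real compiler would walk the same structure, but instead of computing values it would emit optimized machine code for each node.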
3. Glow
• Glow is a machine learning compiler and execution engine for
hardware accelerators.
• It is designed to be used as a backend for high-level machine
learning frameworks.
• The compiler is designed to allow state-of-the-art compiler
optimizations and code generation for neural network graphs.
• It accelerates the performance of deep learning frameworks on
different hardware platforms.
• Glow can be used to compile neural networks into object files
containing native code.
4. Powerful hardware optimizations
• Glow accepts a computation graph from deep learning frameworks,
such as PyTorch, and generates highly optimized code for machine
learning accelerators.
• It contains many machine learning and hardware optimizations like
kernel fusion to accelerate model development.
• Glow is currently in active development.
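Kernel fusion, mentioned above, can be illustrated with a small sketch. The kernel names below are invented for this example (this is not Glow's actual fusion pass); the point is that two elementwise passes over memory collapse into one, eliminating the intermediate buffer.

```python
# Hedged sketch of the idea behind kernel fusion: two elementwise
# kernels (scale, then relu) are combined into a single pass over
# the data. Kernel names are illustrative, not Glow's.

def scale_kernel(xs, a):
    return [a * x for x in xs]          # one full pass over memory

def relu_kernel(xs):
    return [max(0.0, x) for x in xs]    # a second full pass

def fused_scale_relu(xs, a):
    # One pass: each element is scaled and clamped in turn, so the
    # intermediate result never materializes in memory.
    return [max(0.0, a * x) for x in xs]

data = [-1.0, 0.5, 2.0]
assert fused_scale_relu(data, 2.0) == relu_kernel(scale_kernel(data, 2.0))
```

On real accelerators the win is memory bandwidth: the fused kernel reads and writes each tensor once instead of twice.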
5. How Glow Works
• Glow lowers a traditional neural network dataflow graph into a two-phase
strongly-typed intermediate representation (IR).
• The high-level IR allows the optimizer to perform domain-specific
optimizations.
• The lower-level instruction-based address-only IR allows the compiler to
perform memory-related optimizations, such as instruction scheduling,
static memory allocation and copy elimination.
• At the lowest level, the optimizer performs machine-specific code
generation to take advantage of specialized hardware features.
• Glow features a lowering phase that enables the compiler to support a
large number of input operators as well as a large number of hardware
targets by eliminating the need to implement all operators on all targets.
• The lowering phase is designed to reduce the input space and allow new
hardware backends to focus on a small number of linear algebra
primitives.
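The lowering idea can be sketched as a rewrite of one high-level node into the linear algebra primitives a backend must actually implement. The function and node names below are illustrative, not Glow's real node classes; the example lowers a fully-connected layer into a matrix multiply plus a broadcast add.

```python
# Sketch of lowering: a high-level FullyConnected node is rewritten
# into two linear-algebra primitives (matrix multiply and broadcast
# add). Names are invented for illustration, not Glow's real classes.

def matmul(a, b):
    # a: m x k, b: k x n -> m x n
    return [[sum(a[i][t] * b[t][j] for t in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def broadcast_add(m, bias):
    # Add the bias vector to every row of m.
    return [[m[i][j] + bias[j] for j in range(len(bias))]
            for i in range(len(m))]

def lower_fully_connected(x, w, bias):
    # FullyConnected(x, w, bias) -> BroadcastAdd(MatMul(x, w), bias)
    return broadcast_add(matmul(x, w), bias)

x = [[1.0, 2.0]]
w = [[1.0, 0.0], [0.0, 1.0]]
bias = [10.0, 20.0]
print(lower_fully_connected(x, w, bias))  # [[11.0, 22.0]]
```

A backend that implements only the two primitives still supports every operator the compiler can lower onto them, which is exactly why the lowering phase shrinks the work needed to bring up new hardware.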
7. MLIR & ELL
• Glow is not the only neural network compiler available.
• Google’s Multi-Level Intermediate Representation (MLIR) is a
compiler infrastructure that focuses on tensor processors and has
been absorbed by LLVM.
• Microsoft’s Embedded Learning Library (ELL) is another cross-
compiling toolchain for resource-constrained AI devices.
• However, Glow is more mature than either, having been open-sourced
in 2018.
• It’s also more performant than many existing AI compiler options.
8. ELL
• While ELL is used to deploy software onto resource-constrained
platforms and small single-board computers, most of the
interaction with ELL occurs on a laptop or desktop computer.
• ELL acts as a cross-compiler for embedded intelligence: the
compiler itself runs on your laptop or desktop computer, and
the machine code that it generates runs on your single-board
computer.