Dictionary Learning for Games
Manny Ko
Principal Engineer, Activision R&D
Graphics Research and Development
Outline
● K-SVD and dictionary learning
● Linear Blend Skinning
● Brief survey on automatic skinning and compression
● Dictionary learning for LBS
● Two-layer sparse compression of Le & Deng.
● This talk is about compressing skinned animations.
Frames, Sparsity and Global Illumination:
New Math for Games
GDC 2012
Robin Green – Microsoft Corp
Manny Ko – PDI/Dreamworks
Orthogonal Matching Pursuit
and K-SVD for Sparse Encoding
Manny Ko
Senior Software Engineer, Imagination Technologies
Robin Green
SSDE, Microsoft Xbox ATG
Representing Signals
● We represent signals as linear combinations of things we already know –
the ‘basis’
(figure: the signal is a weighted sum of basis images)
$x = \alpha_0 b_0 + \alpha_1 b_1 + \alpha_2 b_2 + \alpha_3 b_3 + \cdots$
Orthonormal Bases (ONBs)
● The simplest way to represent signals is using a set of orthonormal bases
$\int_{-\infty}^{+\infty} b_i(t)\, b_j(t)\, dt = \begin{cases} 0 & i \neq j \\ 1 & i = j \end{cases}$
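As a quick numerical illustration of this condition (my own example, not from the slides), the sketch below builds an orthonormal DCT-II basis in NumPy and checks that all pairwise inner products form the identity; the basis choice and sizes are assumptions for the example.

```python
import numpy as np

# Build an orthonormal DCT-II basis (one basis vector per row) and verify
# <b_i, b_j> = 1 when i == j and 0 otherwise.
N = 8
n = np.arange(N)
B = np.cos(np.pi * (n[None, :] + 0.5) * n[:, None] / N)  # B[k, t]
B[0] *= np.sqrt(1.0 / N)
B[1:] *= np.sqrt(2.0 / N)

gram = B @ B.T          # matrix of all pairwise inner products
assert np.allclose(gram, np.eye(N))
```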
Example ONBs
● Fourier Basis: $b_k(t) = e^{i 2\pi k t}$
● Wavelets: $b_{m,n}(t) = a^{-m/2}\, x(a^{-m} t - bn)$
● Gabor Functions: $b_{k,n}(t) = \omega(t - bn)\, e^{i 2\pi k t}$
● Contourlet: $b_{j,k,\mathbf{n}}(t) = \lambda_{j,k}(t - 2^{j-1} \mathbf{S}_k \mathbf{n})$
Benefits of ONB
● Analytic formulations
● Well understood mathematical properties
● Fast and simple algorithms for projection
Problems with ONB
● One-size-fits-all – not data adaptive
● Global support cannot adapt to data locally
● Fourier support is infinite, SH support spans the sphere
● Try using Fourier to represent a step-function
● Not sparse – very few zero coefficients
● Not additive - relies on destructive cancellation.
Gibbs Ringing – Fourier and SH
What is an Overcomplete Dictionary?
● Overcomplete means the dictionary has more atoms (columns) than the minimum required for the dimension of the signal
● In 3D, an ONB only needs 3 basis vectors
● A 3D dictionary can have dozens or hundreds of atoms
The Sparse Signal Model
$\mathbf{D}\,\alpha = x$
● $\mathbf{D}$: a fixed dictionary, size $N \times K$
● $\alpha$: a sparse vector of $K$ coefficients
● $x$: the resulting signal, dimension $N$
Why so many atoms?
● More atoms give our algorithm a better chance to
find a small subset that matches a given signal
● Let’s look at some patches from Barbara
Patches from Barbara
Domain Specific Compression
● Just 550 bytes per image
1. Original
2. JPEG
3. JPEG2000
4. PCA
5. K-SVD per block
Project onto Dictionaries
● Dictionaries are overcomplete and non-orthogonal
● Interactions among atoms cannot be ignored
● How do we project?
● This is the Sparse Coding problem
Matching Pursuit
1. Set the residual $r = x$
2. Find an unselected atom that best matches the residual, minimizing $\|\mathbf{D}\alpha - r\|$
3. Re-calculate the residual from the matched atoms: $r = x - \mathbf{D}\alpha$
4. Repeat until $\|r\| \leq \epsilon$
Greedy Methods
$\mathbf{D}\,\alpha = x$
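A minimal NumPy sketch of the greedy loop above (my own illustration, not code from the talk); it assumes D has unit-norm columns and x is a 1-D signal.

```python
import numpy as np

def matching_pursuit(D, x, eps=1e-6, max_iter=100):
    """Greedy MP: repeatedly pick the atom most correlated with the residual."""
    r = x.copy()                         # 1. residual starts as the signal
    alpha = np.zeros(D.shape[1])
    for _ in range(max_iter):
        c = D.T @ r                      # correlation of every atom with r
        k = int(np.argmax(np.abs(c)))    # 2. best-matching atom
        alpha[k] += c[k]                 # accumulate its coefficient
        r = x - D @ alpha                # 3. recompute the residual
        if np.linalg.norm(r) <= eps:     # 4. stop when the residual is small
            break
    return alpha
```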
Orthogonal Matching Pursuit (OMP)
● Add an Orthogonal Projection to the residual calculation
1. set $I := \emptyset$, $r := x$, $\gamma := 0$
2. while (stopping test false) do
3.   $k := \arg\max_k \; d_k^T r$
4.   $I := I \cup \{k\}$
5.   $\gamma_I := \mathbf{D}_I^{+}\, x$
6.   $r := x - \mathbf{D}_I\, \gamma_I$
7. end while
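The same loop with the orthogonal projection of step 5, again as a NumPy sketch under the same assumptions (unit-norm atoms, pseudo-inverse computed via least squares):

```python
import numpy as np

def omp(D, x, eps=1e-6, max_atoms=8):
    """OMP: re-project x onto all selected atoms after every selection."""
    r, I = x.copy(), []                              # 1. I = {}, r = x
    gamma = np.zeros(0)
    while np.linalg.norm(r) > eps and len(I) < max_atoms:     # 2. stopping test
        k = int(np.argmax(np.abs(D.T @ r)))          # 3. most correlated atom
        I.append(k)                                  # 4. I = I U {k}
        gamma, *_ = np.linalg.lstsq(D[:, I], x, rcond=None)   # 5. gamma_I = D_I^+ x
        r = x - D[:, I] @ gamma                      # 6. r = x - D_I gamma_I
    return I, gamma
```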
What is Dictionary Learning?
● Select a few atoms for each signal – e.g. with OMP
● Adjust the atoms to better fit those signals
● Repeat (sketched below)
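To make the select/adjust/repeat loop concrete, here is a toy alternation sketch (mine, not from the slides); it reuses the omp() sketch above and uses a simple least-squares (MOD-style) atom update rather than the per-atom SVD update that K-SVD performs.

```python
import numpy as np

def learn_dictionary(X, K, n_iter=20, max_atoms=4, seed=0):
    """X holds one training signal per column; returns dictionary D and codes A."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[0], K))
    D /= np.linalg.norm(D, axis=0)                   # unit-norm atoms
    A = np.zeros((K, X.shape[1]))
    for _ in range(n_iter):
        # 1. sparse coding: select a few atoms for each signal
        A[:] = 0.0
        for i in range(X.shape[1]):
            I, gamma = omp(D, X[:, i], max_atoms=max_atoms)
            A[I, i] = gamma
        # 2. atom update: least-squares refit of D to the current codes
        D = X @ np.linalg.pinv(A)
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)
    return D, A
```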
K-SVD
● One of the two best-known dictionary learning methods
● Check out our GDC 2013 talk
● our GDC13 slides "OMP and K-SVD for Sparse Coding"
● See Jim's talk just before this session
● Mairal's Online Learning is the other.
Overcomplete Dictionary Recap
● Importance of overcomplete dictionaries
● OMP for efficient projection onto dictionaries
● K-SVD for learning a better dictionary using samples
from the real data
Part 2: Skinning
Linear Blend Skinning
● $v_i = \sum_{j=1}^{|B|} w_{ij}\,(R_j\, p_i + T_j)$  (sketched below)
● $p_i$ is the position of the $i$th vertex in the rest pose
● $w_{ij} \geq 0$ and the weights sum to one (affinity). The non-negative constraint makes the blend additive; the affinity constraint prevents over-fitting and artifacts.
● $R_j$ is usually orthogonal to avoid shearing or scaling
● $|B|$ is the number of weights (usually <= 6)
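A direct NumPy transcription of the formula above (an illustration, not the talk's code); the array shapes are my own assumptions.

```python
import numpy as np

def linear_blend_skinning(p, w, R, T):
    """p: (n, 3) rest-pose vertices, w: (n, |B|) weights (rows sum to 1),
    R: (|B|, 3, 3) bone rotations, T: (|B|, 3) bone translations."""
    # R_j p_i + T_j for every bone j and vertex i -> shape (n, |B|, 3)
    posed = np.einsum('jab,ib->ija', R, p) + T[None, :, :]
    # v_i = sum_j w_ij (R_j p_i + T_j)
    return np.einsum('ij,ija->ia', w, posed)
```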
Blending Weights
Blending Weights
Blend Skinning on GPU
LBS on GPUs
● $w_{ij}$ is typically very sparse – 4-6 weights or fewer per vertex
● Ideally a group of vertices all have the same weights, to avoid thread divergence or splitting drawcalls
● These are fairly serious constraints
a) Some vertices might need more weights – e.g. very smooth meshes or complex topology (hands)
Weights Reduction
Poisson-based Weight Reduction of Animated Meshes [Landreneau and Schaefer 2010]
● Discrete optimization:
– Impossible to find the optimum solution
– Very high cost for a non-optimum solution
• Fracture
• Significant increase in computing cost: nK non-zeros → n(K+1) non-zeros
K-Largest - fracturing
K-Largest - normals
Vertex Normal in Shader
Magic 4
● Why 4 weights are too few to generate smooth weights
● 4 vertices specify an affine transform exactly
● A simplex in 3D has 4 vertices, matching barycentric coordinates
Two-Layer Sparse Compression of Dense-Weight Blend Skinning
Binh Le and Zhigang Deng
SIGGRAPH 2013
Two-Layer Sparse Compression, Le & Deng 2013
● Use dictionary learning to compute a two-level
compression using bones
● Work with the weights of the bind-pose directly
Why Dictionary for LBS?
● Why dictionary learning?
● Limitations of orthonormal bases, e.g. eigen-bases/PCA:
● Not adaptive
● Not purely additive – i.e. negative weights (relies on cancellation)
● No intuitive meaning – the extracted bones cannot be used to tweak the model
Dense-Weight Compression
(figure: the input is a dense weight matrix of vertices × bones, blended with the bone transformations)
Sparse Matrix Factorization – dictionary learning
(figure: the dense weight matrix is factored into a dictionary D of virtual bones and a sparse coefficient matrix A, with $c = \max\{\mathrm{card}(w_i)\} + 1$, $n$ very large, and the sparsity objective $\mathrm{card}(A) = 2n \to \min$)
Algorithm – Block Coordinate Descent
● Alternately update D and A (block coordinate descent): update D, then update A
Update Coefficients A
● Linear least squares with 2 unknowns
● Use the mesh smoothness assumption to quickly find the non-zero candidates (virtual bones) for each $\alpha_i$
Movies
Analysis of Two-Layer Scheme
● Using 100’s of virtual bones means we are not limited to a sparse approximation of the original animation
● Virtual bones act as a ‘common subexpression’
● e.g. think of a compute shader that writes to LDS
● Sparsity is still enforced on the virtual bones to control runtime cost and LDS usage – but k can be in the 100’s
● Per-vertex weights are
● very sparse (2 per vertex) and the same count for all vertices
● good for the GPU (a runtime sketch follows below)
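A hypothetical per-vertex sketch of the two-layer runtime blend, to show the 'common subexpression' idea (names and shapes are my assumptions; in a real compute shader the master-bone transforms and virtual-bone blends would be computed once and shared, e.g. through LDS):

```python
import numpy as np

def two_layer_skin_vertex(p, vb_idx, vb_w, D, R, T):
    """p: (3,) rest position, vb_idx/vb_w: (2,) virtual-bone indices/weights,
    D: (n_virtual, n_master) virtual-bone weights, R/T: master-bone transforms."""
    # master-bone transforms applied to this vertex's rest position
    master = np.einsum('kab,b->ka', R, p) + T        # (n_master, 3)
    v = np.zeros(3)
    for j, w in zip(vb_idx, vb_w):
        v += w * (D[j] @ master)   # each virtual bone is a dense blend of master bones
    return v
```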
Learning Virtual Bones
● Virtual bones are learned from the dense vertex weights
by block-coordinate-descent (BCD):
● Sparse coding: search for a few good atoms among the input columns, then use them to project all the remaining inputs.
● Atom update: given the sparse weights from above, adjust the atoms to better fit the inputs that use them – a series of small LS problems.
● Similar to EM/Lloyd-Max
Sparse Coding
Sparse coding:
● insert the vertex with the largest L2 norm
● add a few more vertices that have the smallest dot product with the 1st atom
● solve the basis pursuit with OMP (see K-SVD) or LARS
● solve a 2x2 least-squares problem for $w_{ij}$ to blend master bones
(the seeding step is sketched below)
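A small sketch of the atom-seeding step described above (my own reading of it; W's layout and the helper name are assumptions): start from the weight column with the largest L2 norm, then add the columns least correlated with that first atom.

```python
import numpy as np

def seed_virtual_bones(W, n_atoms):
    """W: (n_bones, n_vertices); each column is one vertex's dense weight vector."""
    norms = np.linalg.norm(W, axis=0)
    first = int(np.argmax(norms))                    # largest L2 norm
    d0 = W[:, first] / max(norms[first], 1e-12)
    corr = np.abs(d0 @ W)                            # |dot| with the first atom
    order = [int(i) for i in np.argsort(corr) if i != first]
    picks = [first] + order[:n_atoms - 1]
    D = W[:, picks]
    return D / np.maximum(np.linalg.norm(D, axis=0), 1e-12)
```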
Weight Map – matrix A
● Weights and indices for each vertex to blend virtual
bones
● Solve a small 2x2 linear system to minimize the MSE: $\arg\min_x \|Dx - w_i\|^2$ (see the sketch below)
● Runtime per-vertex cost is just 2 dot products
● No bone hierarchy to worry about
● No warp divergence, even for high-valence vertices
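The per-vertex 2x2 solve written out as a sketch (D2 holds the vertex's two selected virtual-bone columns; the function name is mine):

```python
import numpy as np

def vertex_blend_weights(D2, w_i):
    """Solve argmin_x ||D2 x - w_i||^2 via the 2x2 normal equations."""
    G = D2.T @ D2          # 2x2 Gram matrix
    b = D2.T @ w_i         # right-hand side
    return np.linalg.solve(G, b)
```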
Atom Updates
Atom update:
● for each atom, minimize the error over the set of vertices that reference it (this is like K-SVD)
● Mairal’s Online Dictionary Learning [Mairal09]
Atom Updates
● Precompute A and B
● $A = \sum_{i=1}^{t} \alpha_i \alpha_i^T$
● $B = \sum_{i=1}^{t} x_i \alpha_i^T$
● For all atoms $j$:
● $u_j \leftarrow \frac{1}{A_{j,j}} (b_j - D a_j) + d_j$  (eq. 5)
● $d_j \leftarrow \frac{1}{\max(\|u_j\|_2, 1)}\, u_j$  (eq. 6)
● $u_j$ is thresholded to make sure the number of non-zeros stays below the number of master bones (sketched below)
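A NumPy sketch of the update above (my own transcription of eqs. (5)-(6) plus the thresholding bullet; the variable layout is an assumption):

```python
import numpy as np

def update_atoms(D, A, B, n_master_bones):
    """D: (n_bones, K) atoms, A = sum(alpha_i alpha_i^T), B = sum(x_i alpha_i^T)."""
    for j in range(D.shape[1]):
        if A[j, j] <= 1e-12:
            continue                                     # unused atom, skip
        u_j = (B[:, j] - D @ A[:, j]) / A[j, j] + D[:, j]        # eq. (5)
        keep = np.argsort(np.abs(u_j))[-n_master_bones:]         # threshold u_j
        mask = np.zeros_like(u_j)
        mask[keep] = 1.0
        u_j *= mask
        D[:, j] = u_j / max(np.linalg.norm(u_j), 1.0)            # eq. (6)
    return D
```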
Live Demo
● YouTube video
Compression with Example Poses
● Without example poses – minimize the weight difference
● With example poses – minimize the reconstruction error
Using Exemplar Poses
Virtual Bones Distribution
Recap
● The two-level scheme can work with dense (hand painted)
weights or example poses (blend shape?)
● Only the vertex positions are needed
● a fixed memory footprint and uniform per-vertex cost - GPU
friendly
● Combines the quality of dense skinning and the efficiencies of
sparse-LBS. Animators can use blend-shapes or FFD more.
Recap 2
● Besides, it uses dictionary learning and modern sparsity methods – how cool is that? 
● Last year we showed how well dictionary learning works for compressing 2D images and 3D volumes
● Now we see what it can do for animation.
● Thank you!
Recap 3
● Non-negative LS and Active-set Method (ASM)
● Block-coordinate descent
● Sparsity constraints
● L1 relaxation and L0-norm constraints
● Direct solving
● These are all very useful tools.
Acknowledgements
● Binh Huy Le & Zhigang Deng kindly provided the demo and their Siggraph
materials.
● Robin Green for being my collaborator for many years.
● Igor Carron, who inspired me to learn sparsity methods and matrix factorization, and for his spirit of broad exploration and sharing.
● Julien Mairal for the online learning math.
● Peter-Pike, who inspired me to apply modern math to graphics and games.
● Carlos Gonzalez Ochoa for sharing his insights in animation.
Activision R&D is Hiring
● Our group is hiring 
References
● Alexa 2000. “As-rigid-as-possible shape interpolation”, SIGGRAPH 2000.
● Hasler 2010. “Learning skeletons for shape and pose”, I3D 2010.
● Kavan, Sloan and O'Sullivan 2010. “Fast and Efficient Skinning of Animated Meshes”, Comput. Graph. Forum.
● Ko and Green 2013. “Orthogonal Matching Pursuit and K-SVD for Sparse Encoding”, GDC Math for Games 2013. gdc2013-ompandksvd
● Landreneau & Schaefer 2010. “Poisson-Based Weight Reduction of Animated Meshes”, Computer Graphics Forum.
● Le & Deng 2012. “Smooth skinning decomposition with rigid bones”, ACM TOG, Vol. 31, No. 6.
● Le & Deng 2013. “Two-Layer Sparse Compression of Dense-Weight Blend Skinning”, SIGGRAPH 2013.
● Mairal 2009. “Online dictionary learning for sparse coding”, Int. Conf. on Machine Learning.
Appendix
● Kabsch/Procrustes method – use SVD to compute the MSE-minimizing rotation of one point set onto another.
● Kabsch_algorithm
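A brief sketch of that SVD-based rotation fit (standard Kabsch; the code is my illustration, not from the slides):

```python
import numpy as np

def kabsch(P, Q):
    """Rotation R minimizing sum ||R p_i - q_i||^2 for point sets P, Q of shape (n, 3)."""
    P0 = P - P.mean(axis=0)
    Q0 = Q - Q.mean(axis=0)
    H = P0.T @ Q0                                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
```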