Dictionary Learning for Games
Manny Ko
Principal Engineer, Activision R&D
Graphics Research and Development
Outline
● K-SVD and dictionary learning
● Linear Blend Skinning
● Brief survey on automatic skinning and compression
● Dictionary learning for LBS
● Two-layer sparse compression of Le & Deng.
● This talk is about compressing skinned animations.
Frames, Sparsity and Global Illumination:
New Math for Games
GDC 2012
Robin Green – Microsoft Corp
Manny Ko – PDI/Dreamworks
Orthogonal Matching Pursuit
and K-SVD for Sparse Encoding
Manny Ko
Senior Software Engineer, Imagination Technologies
Robin Green
SSDE, Microsoft Xbox ATG
Representing Signals
● We represent signals as linear combinations of things we already know –
the ‘basis’
$x = \alpha_0 b_0 + \alpha_1 b_1 + \alpha_2 b_2 + \alpha_3 b_3 + \cdots$ (each known basis element scaled by a coefficient)
Orthonormal Bases (ONBs)
● The simplest way to represent signals is using a set of orthonormal bases
$\int_{-\infty}^{+\infty} b_i(t)\, b_j(t)\, dt = \begin{cases} 0 & i \neq j \\ 1 & i = j \end{cases}$
Example ONBs
● Fourier Basis: $b_k(t) = e^{i 2\pi k t}$
● Wavelets: $b_{m,n}(t) = a^{-m/2}\, x(a^{-m} t - bn)$
● Gabor Functions: $b_{k,n}(t) = \omega(t - bn)\, e^{i 2\pi k t}$
● Contourlets: $b_{j,k,\mathbf{n}}(t) = \lambda_{j,k}(t - 2^{j-1} \mathbf{S}_k \mathbf{n})$
Benefits of ONB
● Analytic formulations
● Well understood mathematical properties
● Fast and simple algorithms for projection
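To make the "fast and simple projection" point concrete, here is a minimal numpy sketch (not from the original slides) that builds a small orthonormal DCT basis, projects a signal with a single matrix-vector product, and reconstructs it exactly; the basis choice, size and signal are arbitrary placeholders.

```python
import numpy as np

# Build a small orthonormal DCT-II basis (columns are basis vectors).
def dct_basis(n):
    k = np.arange(n)
    B = np.cos(np.pi * (k[:, None] + 0.5) * k[None, :] / n)
    B[:, 0] *= 1.0 / np.sqrt(2.0)
    return B * np.sqrt(2.0 / n)

n = 8
B = dct_basis(n)                    # B.T @ B is (numerically) the identity
x = np.random.randn(n)              # any signal

coeffs = B.T @ x                    # projection: one matrix-vector product
x_rec = B @ coeffs                  # exact reconstruction from an ONB

assert np.allclose(B.T @ B, np.eye(n), atol=1e-12)
assert np.allclose(x_rec, x)
```

With an ONB the analysis and synthesis operators are just transposes of each other; that convenience is exactly what overcomplete dictionaries give up.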
Problems with ONB
● One-size-fits-all – not data adaptive
● Global support cannot adapt to data locally
● Fourier support is infinite, SH support spans the sphere
● Try using Fourier to represent a step-function
● Not sparse – very few zero coefficients
● Not additive - relies on destructive cancellation.
Gibbs Ringing – Fourier and SH
What is an Overcomplete Dictionary?
● Overcomplete means the dictionary has more atoms
(columns) than the minimum required for the
dimension of the signal
● In 3D, an ONB only needs 3 basis vectors
● A 3D dictionary can have dozens or hundreds of atoms
The Sparse Signal Model
● $x = \mathbf{D}\alpha$
● $\mathbf{D}$: a fixed dictionary, $N \times K$
● $\alpha$: sparse vector of coefficients, length $K$
● $x$: resulting signal, length $N$
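As a quick illustration of the model above (my own sketch, with arbitrary sizes), the signal is generated by a dictionary with far more atoms than dimensions but only a handful of active coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 16, 64                       # signal dimension N, dictionary size K (overcomplete: K > N)
D = rng.standard_normal((N, K))
D /= np.linalg.norm(D, axis=0)      # atoms (columns) normalized to unit length

alpha = np.zeros(K)
support = rng.choice(K, size=3, replace=False)
alpha[support] = rng.standard_normal(3)   # only 3 non-zero coefficients

x = D @ alpha                       # the sparse signal model: x = D * alpha
print("non-zeros in alpha:", np.count_nonzero(alpha), "signal length:", x.size)
```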
Why so many atoms?
● More atoms give our algorithm a better chance to
find a small subset that matches a given signal
● Let’s look at some patches from Barbara
Patches from Barbara
Domain Specific Compression
● Just 550 bytes per image
1. Original
2. JPEG
3. JPEG2000
4. PCA
5. K-SVD per block
Project onto Dictionaries
● Overcomplete and non-orthogonal
● interactions among atoms cannot be ignored
● How do we project?
● Sparse Coding problem
Matching Pursuit
1. Set the residual $r = x$
2. Find the unselected atom that best matches the residual, i.e. minimizes $\|\mathbf{D}\alpha - r\|$
3. Re-calculate the residual from the matched atoms: $r = x - \mathbf{D}\alpha$
4. Repeat until $\|r\| \leq \epsilon$
Greedy Methods
● Greedily build a sparse $\alpha$ so that $\mathbf{D}\alpha \approx x$
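A minimal sketch of textbook matching pursuit following the four steps above. Note that this plain version may re-select an atom it has already used, and it assumes unit-norm atoms; both are simplifications of mine, not details from the slides.

```python
import numpy as np

def matching_pursuit(D, x, eps=1e-6, max_iters=100):
    """Greedy matching pursuit: repeatedly pick the atom most correlated with the residual."""
    alpha = np.zeros(D.shape[1])
    r = x.copy()                                 # 1. residual starts as the signal
    for _ in range(max_iters):
        if np.linalg.norm(r) <= eps:             # 4. stop when the residual is small enough
            break
        corr = D.T @ r
        k = int(np.argmax(np.abs(corr)))         # 2. best-matching atom
        alpha[k] += corr[k]                      #    (assumes unit-norm atoms)
        r = x - D @ alpha                        # 3. recompute the residual
    return alpha
```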
Orthogonal Matching Pursuit (OMP)
● Add an orthogonal projection to the residual calculation
1. set $I := \emptyset$, $r := x$, $\gamma := 0$
2. while (stopping test false) do
3.   $k := \arg\max_k d_k^T r$
4.   $I := I \cup \{k\}$
5.   $\gamma_I := \mathbf{D}_I^{+} x$
6.   $r := x - \mathbf{D}_I \gamma_I$
7. end while
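A compact numpy sketch of the pseudocode above, assuming unit-norm atoms; `np.linalg.lstsq` plays the role of the pseudo-inverse $\mathbf{D}_I^{+}$:

```python
import numpy as np

def omp(D, x, eps=1e-6, max_atoms=None):
    """Orthogonal Matching Pursuit, following the pseudocode above (unit-norm atoms)."""
    I = []                                        # selected atom indices
    r = x.copy()
    gamma = np.zeros(D.shape[1])
    gamma_I = np.zeros(0)
    max_atoms = max_atoms or D.shape[0]
    while np.linalg.norm(r) > eps and len(I) < max_atoms:
        k = int(np.argmax(np.abs(D.T @ r)))       # atom most correlated with the residual
        if k in I:                                # no further progress possible
            break
        I.append(k)
        # gamma_I := D_I^+ x  (orthogonal projection onto the selected atoms)
        gamma_I, *_ = np.linalg.lstsq(D[:, I], x, rcond=None)
        r = x - D[:, I] @ gamma_I                 # residual, orthogonal to the chosen atoms
    gamma[I] = gamma_I
    return gamma
```

The orthogonal projection in step 5 is what distinguishes OMP from plain MP: the coefficients are re-fit against all selected atoms, so earlier choices get corrected as new atoms come in.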
What is Dictionary Learning?
● select a few atoms for each signal – e.g. OMP
● Adjust the atoms to better fit those signals
● Repeat
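The select/adjust/repeat loop in a deliberately simplified form (my sketch, not K-SVD itself): each signal is coded with a single atom, and each atom is then re-fit to the signals that picked it. Real pipelines use OMP (sketched above) for the selection step and K-SVD or online updates for the adjustment.

```python
import numpy as np

def learn_dictionary(X, n_atoms, n_iters=20, seed=0):
    """Toy select/adjust/repeat loop: 1 atom per signal stands in for OMP."""
    rng = np.random.default_rng(seed)
    n_dim, n_signals = X.shape
    D = rng.standard_normal((n_dim, n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iters):
        # Select: the best single atom (and coefficient) for every signal.
        corr = D.T @ X                                   # (n_atoms, n_signals)
        assign = np.argmax(np.abs(corr), axis=0)
        coeff = corr[assign, np.arange(n_signals)]
        # Adjust: refit each atom to the signals that selected it.
        for j in range(n_atoms):
            mask = assign == j
            if not np.any(mask):
                continue
            d = X[:, mask] @ coeff[mask]                 # least-squares direction
            norm = np.linalg.norm(d)
            if norm > 1e-12:
                D[:, j] = d / norm
    return D
```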
K-SVD
● One of the best-known dictionary learning methods
● Check out our GDC 2013 talk
● our GDC13 slides "OMP and K-SVD for Sparse Coding"
● See Jim’s talk just before this session
● Mairal’s Online Dictionary Learning is the other.
Overcomplete Dictionary Recap
● Importance of overcomplete dictionaries
● OMP for efficient projection onto dictionaries
● K-SVD for learning a better dictionary using samples
from the real data
Part 2: Skinning
Linear Blend Skinning
● $v_i = \sum_{j=1}^{|B|} w_{ij} (R_j p_i + T_j)$
● $p_i$ is the position of the $i$th vertex in the rest pose
● $w_{ij} \geq 0$ and $\sum_j w_{ij} = 1$ (affinity). The non-negativity constraint makes the blend additive. The affinity constraint prevents over-fitting and artifacts.
● $R_j$ is usually orthogonal to avoid shearing or scaling
● $|B|$ is the number of bones; each vertex usually has $\leq 6$ non-zero weights
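A dense reference implementation of the formula above (a sketch of mine; production code evaluates only the few non-zero $w_{ij}$ per vertex in a shader):

```python
import numpy as np

def linear_blend_skinning(rest_pos, weights, rotations, translations):
    """v_i = sum_j w_ij * (R_j @ p_i + T_j).

    rest_pos:     (n_verts, 3) rest-pose positions p_i
    weights:      (n_verts, n_bones) blend weights w_ij (rows sum to 1, w_ij >= 0)
    rotations:    (n_bones, 3, 3) bone rotations R_j
    translations: (n_bones, 3) bone translations T_j
    """
    # Transform every vertex by every bone: shape (n_bones, n_verts, 3).
    transformed = np.einsum('bij,vj->bvi', rotations, rest_pos) + translations[:, None, :]
    # Blend the per-bone results with the (sparse in practice) weights.
    return np.einsum('vb,bvi->vi', weights, transformed)
```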
Blending Weights
Blending Weights
Blend Skinning on GPU
(figure: skinning work spread across GPU cores)
LBS on GPUs
● $w_{ij}$ is typically very sparse – 4-6 weights or fewer per vertex
● Ideally a group of vertices all have the same weights to avoid thread divergence or splitting drawcalls
● These are fairly serious constraints
● Some vertices might need more weights – e.g. very smooth meshes or complex topology (hands)
Weights Reduction
Poisson-based Weight Reduction of Animated Meshes [Landreneau and Schaefer 2010]
● Discrete optimization:
  – Impossible to find the optimum solution
  – Very high cost for a non-optimum solution
    • Fracture
    • Significant increase of computing cost: nK non-zeros → n(K+1) non-zeros
K-Largest - fracturing
K-Largest - normals
Vertex Normal in Shader
Magic 4
● Why is 4 weights too few to generate smooth weights?
● 4 points specify an affine transform exactly.
● A simplex in 3D has 4 vertices – the minimum needed for barycentric coordinates.
Two-Layer Sparse Compression of Dense-Weight Blend Skinning
Binh Le and Zhigang Deng
SIGGRAPH 2013
Two-Layer Sparse Compression, Le & Deng 2013
● Use dictionary learning to compute a two-level
compression using bones
● Work with the weights of the bind-pose directly
Why Dictionary for LBS?
● Why dictionary learning?
● Limitations of orthonormal bases, e.g. eigen-decomposition/PCA:
● Not adaptive
● Not purely additive – i.e. negative weights (relies on cancellation)
● No intuitive meaning – the extracted bones cannot be used to tweak the model
Dense-Weight Compression
● Input: a dense weight matrix (vertices × bones) blending the bone transformations into vertex positions
Sparse Matrix Factorization – dictionary learning
● Factor the dense weight matrix into a dictionary of virtual bones D and a sparse coefficient matrix A
● Sparsity of each dictionary column: $c = \max\{\mathrm{card}(w_i)\} + 1$
● Sparsity of the coefficients: $\mathrm{card}(A) = 2n \rightarrow \min$, and $n$ (the number of vertices) is very large
Algorithm – Block Coordinate Descent
● Alternately update D and A (block coordinate descent):
● Update D, then update A, and repeat
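A toy end-to-end sketch of the alternation, under assumptions of mine: the per-vertex step keeps exactly 2 non-zero coefficients and picks the atom pair by brute force over a small correlation shortlist (the paper uses a mesh-smoothness heuristic instead), and the atom update is an unconstrained least-squares fit (the paper also enforces sparsity, non-negativity and affinity on the virtual bones).

```python
import numpy as np
from itertools import combinations

def fit_two_layer(W, n_virtual, n_iters=10, n_cands=6, seed=0):
    """Block coordinate descent for W ~= D @ A with 2 non-zeros per column of A.

    W: (n_master, n_verts) dense skinning weights, one column per vertex.
    Returns D (n_master, n_virtual) virtual bones and A (n_virtual, n_verts) coefficients.
    """
    rng = np.random.default_rng(seed)
    n_master, n_verts = W.shape
    # Initialize the dictionary from input columns (see the sparse-coding slide later).
    D = W[:, rng.choice(n_verts, n_virtual, replace=False)].copy()
    A = np.zeros((n_virtual, n_verts))

    for _ in range(n_iters):
        # --- Update A: for every vertex pick the best pair of virtual bones
        #     from a correlation shortlist and solve a 2-unknown least squares.
        corr = np.abs(D.T @ W)
        for i in range(n_verts):
            cands = np.argsort(corr[:, i])[-n_cands:]
            best_err, best_pair, best_x = np.inf, None, None
            for j, k in combinations(cands, 2):
                Dp = D[:, [j, k]]
                x, *_ = np.linalg.lstsq(Dp, W[:, i], rcond=None)
                err = np.linalg.norm(Dp @ x - W[:, i])
                if err < best_err:
                    best_err, best_pair, best_x = err, (j, k), x
            A[:, i] = 0.0
            A[list(best_pair), i] = best_x
        # --- Update D: refit each virtual bone to the vertices that reference it.
        for j in range(n_virtual):
            used = np.nonzero(A[j])[0]
            if used.size == 0:
                continue
            # Residual with atom j's own contribution added back in.
            R = W[:, used] - D @ A[:, used] + np.outer(D[:, j], A[j, used])
            denom = A[j, used] @ A[j, used]
            if denom > 1e-12:
                D[:, j] = R @ A[j, used] / denom
    return D, A
```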
Update Coefficients A
● A linear least-squares problem with 2 unknowns per vertex (the coefficients $\alpha_i$)
● Use a mesh-smoothness assumption to quickly find the non-zero candidates (virtual bones)
Movies
Analysis of Two-Layer Scheme
● Using 100s of virtual bones means we are not limited to a sparse approximation of the original animation.
● Virtual bones act as a ‘common subexpression’
● e.g. think of a compute shader that writes to LDS.
● Still enforce sparsity on the virtual bones to control runtime cost and LDS usage – but k can be in the 100s.
● Per-vertex weights are
● very sparse (2 per vertex) and the same count for every vertex
● good for the GPU.
Learning Virtual Bones
● Virtual bones are learned from the dense vertex weights by block coordinate descent (BCD):
● Sparse coding: search for a few good atoms among the input columns, then use them to project all the remaining inputs.
● Atom update: given the sparse weights from above, adjust the atoms to better fit the inputs that reference them – a series of small least-squares problems.
● Similar to EM/Lloyd-Max
Sparse Coding
Sparse coding:
● insert the column (vertex) with the largest L2 norm as the first atom
● add a few more columns that have the smallest dot product with the 1st atom
● solve the basis-pursuit problem with OMP (see K-SVD) or LARS
● solve a 2x2 least-squares problem for the $w_{ij}$ that blend the master bones
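A small sketch of the dictionary seeding described above; the normalization by column norms and the `n_extra` count are my choices, not details from the paper:

```python
import numpy as np

def seed_atoms_from_columns(W, n_extra=4):
    """Seed the dictionary with input columns, following the selection rule above.

    W: (n_master, n_verts) dense weights; columns are per-vertex weight vectors.
    """
    norms = np.linalg.norm(W, axis=0)
    first = int(np.argmax(norms))                        # column with the largest L2 norm
    d0 = W[:, first] / norms[first]
    cosines = np.abs(d0 @ W) / np.maximum(norms, 1e-12)  # |correlation| with the 1st atom
    cosines[first] = np.inf                              # don't re-pick the first column
    extra = np.argsort(cosines)[:n_extra]                # smallest dot products
    atoms = W[:, [first, *extra]].astype(float)
    return atoms / np.maximum(np.linalg.norm(atoms, axis=0), 1e-12)
```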
Weight Map – matrix A
● Weights and indices for each vertex to blend virtual bones
● found by solving a small 2x2 linear system to minimize the MSE:
● $\arg\min_x \|Dx - w_i\|^2$
● runtime per-vertex cost is just 2 dot products
● no bone hierarchy to worry about
● no warp divergence, even for high-valence vertices
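A sketch of the per-vertex fit and the runtime blend, assuming the two virtual-bone indices have already been chosen; the 2x2 Gram-matrix solve is the closed form of $\arg\min_x \|Dx - w_i\|^2$ restricted to two columns, and the matrix layout in `skin_vertex` is a placeholder of mine:

```python
import numpy as np

def fit_vertex_weights(D, w_i, j, k):
    """Solve arg min_x ||D[:, [j, k]] @ x - w_i||^2 via the 2x2 normal equations."""
    dj, dk = D[:, j], D[:, k]
    G = np.array([[dj @ dj, dj @ dk],
                  [dj @ dk, dk @ dk]])        # 2x2 Gram matrix
    b = np.array([dj @ w_i, dk @ w_i])
    return np.linalg.solve(G, b)              # the two per-vertex blend weights

def skin_vertex(vb_matrices, p_h, x, j, k):
    """Runtime per-vertex work: blend two precomputed virtual-bone transforms.

    vb_matrices: (n_virtual, 3, 4) virtual-bone matrices (shared, e.g. built in LDS).
    p_h: rest-pose position as a homogeneous 4-vector.
    """
    M = x[0] * vb_matrices[j] + x[1] * vb_matrices[k]
    return M @ p_h
```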
Atom Updates
Atom update:
● for each vertex (processed online)
● update each atom to minimize the error over the set of vertices that reference it (this is like K-SVD)
● Mairal’s Online Dictionary Learning [Mairal 2009]
Atom Updates
● Precompute A and B:
● $A = \sum_{i=1}^{t} \alpha_i \alpha_i^T$
● $B = \sum_{i=1}^{t} x_i \alpha_i^T$
● For all atoms:
● $u_j \leftarrow \frac{1}{A_{j,j}} (b_j - D a_j) + d_j$ (eq. 5)
● $d_j \leftarrow \frac{1}{\max(\|u_j\|_2, 1)}\, u_j$ (eq. 6)
● $u_j$ is thresholded to make sure the number of non-zeros stays below the number of master bones
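A numpy sketch of eqs. (5) and (6) above, assuming the statistics $A$ and $B$ have been accumulated as shown; the thresholding rule (keep only the largest entries of $u_j$) is my reading of the last bullet, and the `max_nonzeros` bound is a placeholder:

```python
import numpy as np

def update_atoms(D, A_stat, B_stat, max_nonzeros):
    """One pass of the dictionary update in eqs. (5)-(6), plus the thresholding step.

    D:      (n_master, n_virtual) current dictionary (virtual bones)
    A_stat: (n_virtual, n_virtual) accumulated  sum_i alpha_i alpha_i^T
    B_stat: (n_master, n_virtual)  accumulated  sum_i x_i alpha_i^T
    """
    for j in range(D.shape[1]):
        if A_stat[j, j] < 1e-12:
            continue                                     # atom unused so far
        u_j = (B_stat[:, j] - D @ A_stat[:, j]) / A_stat[j, j] + D[:, j]   # eq. (5)
        if 0 < max_nonzeros < u_j.size:
            # Keep only the largest entries so each virtual bone blends few master bones.
            drop = np.argsort(np.abs(u_j))[:-max_nonzeros]
            u_j[drop] = 0.0
        D[:, j] = u_j / max(np.linalg.norm(u_j), 1.0)                      # eq. (6)
    return D
```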
Live Demo
● youtube
Compression with Example Poses
● Without example poses: minimize the weight difference
● With example poses: minimize the reconstruction error
Using Exemplar Poses
Virtual Bones Distribution
Recap
● The two-level scheme can work with dense (hand-painted) weights or example poses (blend shapes?)
● Only the vertex positions are needed
● a fixed memory footprint and uniform per-vertex cost - GPU
friendly
● Combines the quality of dense skinning and the efficiencies of
sparse-LBS. Animators can use blend-shapes or FFD more.
Recap 2
● Besides, it uses dictionary learning and modern sparsity methods – how cool is that? :)
● Last year we showed how good dictionary learning is for compressing 2D images and 3D volumes
● Now we see what it can do for animation.
● Thank you!
Recap 3
● Non-negative LS and Active-set Method (ASM)
● Block-coordinate descent
● Sparsity constraints
● L1 relaxation and L0-norm constraints
● Direct solving
● These are all very useful tools.
Acknowledgements
● Binh Huy Le & Zhigang Deng kindly provided the demo and their Siggraph
materials.
● Robin Green for being my collaborator for many years.
● Igor Carron inspired me to learn sparsity methods and matrix factorization
and for his spirit of broad exploration and sharing.
● Julien Mairal for the online learning math
● Peter-Pike who inspired me to apply modern math to graphics and games.
● Carlos Gonzalez Ochoa for sharing his insight in animation.
Activision R&D is Hiring
● Our group is hiring :)
References
● Alexa 2000. “As-rigid-as-possible shape interpolation”, SIGGRAPH 2000.
● Hasler 2010. “Learning skeletons for shape and pose”, I3D 2010.
● Kavan, Sloan and O'Sullivan 2010. “Fast and Efficient Skinning of Animated Meshes”, Comput. Graph. Forum.
● Ko and Green 2013. “Orthogonal Matching Pursuit and K-SVD for Sparse Encoding”, GDC Math for Games 2013, gdc2013-ompandksvd.
● Landreneau & Schaefer. “Poisson-Based Weight Reduction of Animated Meshes”, CGF 28(2), 2012.
● Le & Deng 2012. “Smooth skinning decomposition with rigid bones”, ACM TOG, Vol. 31, No. 6.
● Le & Deng 2013. “Two-Layer Sparse Compression of Dense-Weight Blend Skinning”, SIGGRAPH 2013. Paper page
● Mairal 2009. “Online dictionary learning for sparse coding”, Int. Conf. on Machine Learning.
Appendix
● Kabsch/Procrustes method – uses SVD to compute the MSE-minimizing rotation of one point set onto another.
● Kabsch_algorithm
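A minimal numpy sketch of the Kabsch step, assuming the two point sets are already centered on their centroids (subtract the means first); the determinant check avoids returning a reflection:

```python
import numpy as np

def kabsch(P, Q):
    """Rotation R minimizing sum_i ||R @ P[i] - Q[i]||^2 (Kabsch/Procrustes).

    P, Q: (n_points, 3) corresponding point sets, already centered on their centroids.
    """
    H = P.T @ Q                                   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against a reflection
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
```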