Network Intelligence and Analysis Lab

Markov Chain Basic
2014.07.11
Sanghyuk Chun
Previous Chapters
• Exact Counting
• #P Complete
• Sampling and Counting
Remaining Chapters
• Markov Chain Basic (today!)
• Ergodic MCs have a unique stationary distribution
• Some basic concepts (coupling, mixing time)
• Coupling from the past
• Coupling in detail
• Ising Model
• Bounding mixing time via coupling
• Random spanning trees
• Path coupling framework
• MC for k-colorings of a graph
In this chapter…
• Introduce Markov chains
• Show a potential algorithmic use of Markov chains: sampling from complex distributions
• Prove that an ergodic Markov chain always converges to a unique stationary distribution
• Introduce coupling techniques and mixing time
Markov Chain
• For a finite state space Ω, we say a sequence of random variables (X_t) on Ω is a Markov chain if the sequence is Markovian in the following sense:
• For all t and all x_0, …, x_t, y ∈ Ω, we require
  Pr(X_{t+1} = y | X_0 = x_0, …, X_t = x_t) = Pr(X_{t+1} = y | X_t = x_t)
• The Markov property: "memoryless", i.e., the next state depends only on the current state
Example of a Markov Chain (Card Shuffling)
• For a finite state space Ω, a sequence of random variables (X_t) on Ω is a Markov chain if the sequence is Markovian
• Let Ω denote the set of deck orderings (e.g., X_1 = (1, 2, 3, …, 52))
• The next shuffling state depends only on the previous one: X_{t+1} depends only on X_t
• Question 1: How can we shuffle the cards uniformly?
• Question 2: Can we get a fast uniform shuffling algorithm?
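To make the example concrete, here is a small sketch (not from the slides) of the classic top-to-random shuffle, a Markov chain on deck orderings in which each new ordering depends only on the current one:

```python
import random

def top_to_random_step(deck):
    # One step of the top-to-random shuffle chain: take the top card and
    # insert it at a uniformly random position. The next ordering depends
    # only on the current one (the Markov property).
    top, rest = deck[0], deck[1:]
    i = random.randrange(len(deck))  # uniform insertion position
    return rest[:i] + [top] + rest[i:]

random.seed(0)
deck = list(range(52))
for _ in range(300):  # run the chain for a while
    deck = top_to_random_step(deck)
```

Whether such a chain converges to the uniform distribution, and how quickly, is exactly what the two questions above ask.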
Transition Matrix
• P(x, y) = Pr(X_{t+1} = y | X_t = x)
• Transitions are independent of the time (time-homogeneous)
• The t-step distribution is defined in the natural way:
  P^t(x, y) = P(x, y) if t = 1, and P^t(x, y) = Σ_{z∈Ω} P(x, z) P^{t−1}(z, y) if t > 1
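The recursion translates directly into code; a sketch using a dict-of-dicts transition matrix (the two-state chain below is a made-up example, not from the slides):

```python
def t_step(P, t):
    # P^t(x, y) = P(x, y) for t = 1, else sum_z P(x, z) * P^{t-1}(z, y)
    states = list(P)
    if t == 1:
        return {x: dict(P[x]) for x in states}
    prev = t_step(P, t - 1)
    return {x: {y: sum(P[x][z] * prev[z][y] for z in states)
                for y in states}
            for x in states}

# Hypothetical two-state chain for illustration
P = {'a': {'a': 0.9, 'b': 0.1},
     'b': {'a': 0.5, 'b': 0.5}}
P2 = t_step(P, 2)  # two-step transition probabilities
```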
Stationary Distribution
• A distribution π is a stationary distribution if it is invariant with respect to the transition matrix:
  for all y ∈ Ω, π(y) = Σ_{x∈Ω} π(x) P(x, y)
• Theorem 1: For a finite ergodic Markov chain, there exists a unique stationary distribution π
• Proof?
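The invariance condition is easy to check numerically; a minimal sketch (the chain and its stationary distribution below are illustrative, not from the slides):

```python
def is_stationary(pi, P, tol=1e-9):
    # Checks pi(y) = sum_x pi(x) * P(x, y) for every state y.
    states = list(P)
    return all(abs(pi[y] - sum(pi[x] * P[x][y] for x in states)) < tol
               for y in states)

P = {'a': {'a': 0.9, 'b': 0.1},
     'b': {'a': 0.5, 'b': 0.5}}
pi = {'a': 5 / 6, 'b': 1 / 6}  # solves pi = pi * P for this chain
```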
Ergodic Markov Chain
• A Markov chain is ergodic if there exists t such that for all x, y ∈ Ω, P^t(x, y) > 0
• It is possible to go from every state to every state (not necessarily in one move)
• For a finite MC, ergodicity is equivalent to the combination of the following two conditions:
• Irreducible: for all x, y ∈ Ω, there exists t = t(x, y) s.t. P^t(x, y) > 0
• Aperiodic: for all x ∈ Ω, gcd{t : P^t(x, x) > 0} = 1
• Since ergodic MCs eventually reach a unique stationary distribution regardless of their initial state, they are useful algorithmic tools
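For a finite chain, the definition can be checked directly by powering the matrix until some P^t is entrywise positive; a sketch (with an arbitrary cutoff t_max, and two made-up example chains):

```python
def is_ergodic(P, t_max=100):
    # Ergodic iff some power P^t has all entries > 0 (checked up to t_max).
    states = list(P)
    Pt = {x: dict(P[x]) for x in states}
    for _ in range(t_max):
        if all(Pt[x][y] > 0 for x in states for y in states):
            return True
        Pt = {x: {y: sum(Pt[x][z] * P[z][y] for z in states)
                  for y in states}
              for x in states}
    return False

lazy = {'a': {'a': 0.9, 'b': 0.1}, 'b': {'a': 0.5, 'b': 0.5}}
flip = {'a': {'a': 0.0, 'b': 1.0}, 'b': {'a': 1.0, 'b': 0.0}}  # period 2
```

The `flip` chain is irreducible but has period 2, so no power of its transition matrix is entrywise positive and it is not ergodic.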
Algorithmic Usage of Ergodic Markov Chains
• Goal: we have a probability distribution we would like to generate random samples from
• Solution via MC: if we can design an ergodic MC whose unique stationary distribution is the desired distribution, we can run the chain to sample from that distribution
• Example: sampling matchings
Sampling Matchings
• For a graph G = (V, E), let Ω denote the set of matchings of G
• We define a MC on Ω whose transitions are as follows:
• Choose an edge e uniformly at random from E
• Let X′ = X_t ∪ {e} if e ∉ X_t, and X′ = X_t \ {e} if e ∈ X_t
• If X′ ∈ Ω, then set X_{t+1} = X′ with probability ½; otherwise set X_{t+1} = X_t
• The MC is aperiodic (P(M, M) ≥ 1/2 for all M ∈ Ω)
• The MC is irreducible (every matching connects to the empty set) with symmetric transition probabilities
• A symmetric transition matrix has the uniform stationary distribution
• Thus, the unique stationary distribution is uniform over all matchings of G
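The matching chain can be sketched directly; the graph, seed, and step count below are arbitrary choices for illustration:

```python
import random

def matching_step(edges, X):
    # One transition of the matchings chain: pick e uniformly; with
    # probability 1/2 stay put (the lazy move that gives aperiodicity),
    # otherwise toggle e: remove it if present, add it if the result is
    # still a matching.
    e = random.choice(edges)
    if random.random() < 0.5:
        return X
    if e in X:
        return X - {e}
    u, v = e
    matched = {w for (a, b) in X for w in (a, b)}
    if u not in matched and v not in matched:
        return X | {e}
    return X  # adding e would break the matching

random.seed(1)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # a 4-cycle
X = frozenset()
for _ in range(1000):
    X = matching_step(edges, X)
```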
Proof of Theorem (Introduction)
• We will prove the theorem using the coupling technique and the Coupling Lemma
Coupling Technique
• For distributions μ, ν on a finite set Ω, a distribution ω on Ω × Ω is a coupling if
  for all x ∈ Ω, Σ_{y∈Ω} ω(x, y) = μ(x), and for all y ∈ Ω, Σ_{x∈Ω} ω(x, y) = ν(y)
• In other words, ω is a joint distribution whose marginal distributions are the appropriate distributions
• The variation distance between μ and ν is defined as
  d_TV(μ, ν) = ½ Σ_{x∈Ω} |μ(x) − ν(x)|
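As defined, the variation distance is half the L1 distance between the probability vectors; a minimal sketch (the two distributions are illustrative):

```python
def tv_distance(mu, nu):
    # d_TV(mu, nu) = (1/2) * sum_x |mu(x) - nu(x)| over a common support
    support = set(mu) | set(nu)
    return 0.5 * sum(abs(mu.get(x, 0.0) - nu.get(x, 0.0)) for x in support)

mu = {'a': 0.5, 'b': 0.5}
nu = {'a': 0.8, 'b': 0.2}
```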
Coupling Lemma
• For distributions μ, ν on a finite set Ω and (X, Y) drawn from a coupling ω of μ and ν:
  (a) For any coupling ω, Pr(X ≠ Y) ≥ d_TV(μ, ν)
  (b) There exists a coupling ω with Pr(X ≠ Y) = d_TV(μ, ν)
Proof of Lemma (a)
• For any coupling ω and any z ∈ Ω, ω(z, z) ≤ min{μ(z), ν(z)}
• Hence Pr(X = Y) = Σ_{z∈Ω} ω(z, z) ≤ Σ_{z∈Ω} min{μ(z), ν(z)} = 1 − d_TV(μ, ν)
• Therefore Pr(X ≠ Y) ≥ d_TV(μ, ν)
Proof of Lemma (b)
• For all z ∈ Ω, let ω(z, z) = min{μ(z), ν(z)}
• This forces Pr(X ≠ Y) = 1 − Σ_z min{μ(z), ν(z)} = d_TV(μ, ν)
• We need to complete the construction of ω in a valid way
• For y, z ∈ Ω, y ≠ z, let
  ω(y, z) = (μ(y) − ω(y, y)) (ν(z) − ω(z, z)) / d_TV(μ, ν)
• It is straightforward to verify that ω is a valid coupling
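The construction can be implemented directly; the sketch below puts min{μ(z), ν(z)} on the diagonal and spreads the leftover mass off-diagonal as in the formula (the two distributions are illustrative):

```python
def optimal_coupling(mu, nu):
    # Builds the coupling from the proof of Lemma (b).
    support = sorted(set(mu) | set(nu))
    d = 0.5 * sum(abs(mu.get(x, 0.0) - nu.get(x, 0.0)) for x in support)
    # diagonal: omega(z, z) = min(mu(z), nu(z))
    omega = {(z, z): min(mu.get(z, 0.0), nu.get(z, 0.0)) for z in support}
    if d > 0:
        for y in support:
            for z in support:
                if y != z:
                    # off-diagonal mass, proportional to the unmatched parts
                    omega[(y, z)] = ((mu.get(y, 0.0) - omega[(y, y)]) *
                                     (nu.get(z, 0.0) - omega[(z, z)])) / d
    return omega

mu = {'a': 0.5, 'b': 0.5}
nu = {'a': 0.8, 'b': 0.2}
omega = optimal_coupling(mu, nu)
```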
Couplings for Markov Chains
• Consider a pair of Markov chains X_t, Y_t on Ω with transition matrices P_X, P_Y respectively
• Typically, the MCs are identical in applications (P_X = P_Y)
• The Markov chain (X′_t, Y′_t) on Ω × Ω is a Markovian coupling if
  Pr(X′_{t+1} = x′ | X′_t = x, Y′_t = y) = P_X(x, x′) and Pr(Y′_{t+1} = y′ | X′_t = x, Y′_t = y) = P_Y(y, y′)
• For such a Markovian coupling, the variation distance satisfies
  d_TV(P_X^t(x, ·), P_Y^t(y, ·)) ≤ Pr(X′_t ≠ Y′_t | X′_0 = x, Y′_0 = y)
• If we choose Y_0 from the stationary distribution π, then we have
  d_TV(P^t(x, ·), π) ≤ Pr(X_t ≠ Y_t | X_0 = x)
• This shows how we can use a coupling to bound the distance from stationarity
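This bound can be seen empirically; the sketch below runs a "move together after meeting" Markovian coupling on a made-up two-state chain and estimates Pr(X_t ≠ Y_t) from worst-case starts:

```python
import random

def draw(row):
    # Sample a next state from one row of the transition matrix.
    u, acc = random.random(), 0.0
    for s, p in row.items():
        acc += p
        if u < acc:
            return s
    return s  # guard against floating-point round-off

def coupled_step(P, x, y):
    # Markovian coupling: X moves by P; Y copies X once they have met,
    # otherwise moves independently by P.
    x2 = draw(P[x])
    y2 = x2 if x == y else draw(P[y])
    return x2, y2

P = {'a': {'a': 0.9, 'b': 0.1}, 'b': {'a': 0.5, 'b': 0.5}}
random.seed(2)
disagree = 0
for _ in range(2000):
    x, y = 'a', 'b'  # distinct starting states
    for _ in range(20):
        x, y = coupled_step(P, x, y)
    disagree += (x != y)
```

The fraction `disagree / 2000` estimates Pr(X_20 ≠ Y_20), which upper-bounds d_TV(P^20(a, ·), P^20(b, ·)).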
Proof of Theorem (1/4)
• Create MCs X_t, Y_t, where the initial states X_0, Y_0 are arbitrary states in Ω
• Create a coupling for these chains in the following way:
• From (X_t, Y_t), choose X_{t+1} according to the transition matrix P
• If Y_t = X_t, set Y_{t+1} = X_{t+1}; otherwise choose Y_{t+1} according to P, independently of the choice for X_{t+1}
• By ergodicity, there exists t* s.t. for all x, y ∈ Ω, P^{t*}(x, y) ≥ ε > 0
• Therefore, for all X_0, Y_0 ∈ Ω, Pr(X_{t*} ≠ Y_{t*}) ≤ 1 − ε
• The same argument applies on each subsequent block of steps, t* → 2t*, and so on
Proof of Theorem (2/4)
• Recall the coupling: from (X_t, Y_t), choose X_{t+1} according to P; if Y_t = X_t, set Y_{t+1} = X_{t+1}; otherwise choose Y_{t+1} according to P, independently
• Once X_s = Y_s, we have X_{s′} = Y_{s′} for all s′ ≥ s
• From the earlier observation, each block of t* steps makes the chains meet with probability at least ε
Proof of Theorem (3/4)
• For integer k > 0 and all x, y ∈ Ω,
  Pr(X_{kt*} ≠ Y_{kt*}) ≤ (1 − ε)^k
• Therefore, Pr(X_t ≠ Y_t) → 0 as t → ∞
• Since X_t = Y_t implies X_{t′} = Y_{t′} for all t′ ≥ t, the bound is monotone in t
• Note that the coupling of MCs we defined induces a coupling of (X_t, Y_t); hence by the Coupling Lemma,
  d_TV(P^t(x, ·), P^t(y, ·)) ≤ Pr(X_t ≠ Y_t) → 0
• This proves that from any initial points we reach the same limiting distribution
Proof of Theorem (4/4)
• The previous result proves there is a limiting distribution σ
• Question: is σ a stationary distribution? That is, does it satisfy, for all y ∈ Ω, σ(y) = Σ_{x∈Ω} σ(x) P(x, y)?
Markov Chains for Algorithmic Purposes: Mixing Time
• Convergence itself is guaranteed if the MC is ergodic
• However, this gives no indication of the convergence rate
• We define the mixing time τ_mix(ε) as the time until the chain is within variation distance ε of π from the worst initial state:
  τ_mix(ε) = max_{X_0∈Ω} min{t : d_TV(P^t(X_0, ·), π) ≤ ε}
• To get efficient sampling algorithms (e.g., the matching chain), we hope the mixing time is polynomial in the input size
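For a small chain, the definition can be evaluated exactly by powering P; a sketch (the two-state chain, its stationary distribution, and ε = 0.01 are illustrative choices):

```python
def mixing_time(P, pi, eps=0.01, t_max=1000):
    # Smallest t with max_x d_TV(P^t(x, .), pi) <= eps,
    # or None if not reached within t_max steps.
    states = list(P)
    Pt = {x: dict(P[x]) for x in states}
    for t in range(1, t_max + 1):
        worst = max(0.5 * sum(abs(Pt[x][y] - pi[y]) for y in states)
                    for x in states)
        if worst <= eps:
            return t
        Pt = {x: {y: sum(Pt[x][z] * P[z][y] for z in states)
                  for y in states}
              for x in states}
    return None

P = {'a': {'a': 0.9, 'b': 0.1}, 'b': {'a': 0.5, 'b': 0.5}}
pi = {'a': 5 / 6, 'b': 1 / 6}
tau = mixing_time(P, pi)
```

For this chain the distance from the worst start (state b) decays by the second eigenvalue 0.4 each step, so τ_mix(0.01) is a small constant; for the matching chain, the interesting question is how τ_mix grows with |V| and |E|.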

More Related Content

What's hot

Continuous control with deep reinforcement learning (DDPG)
Continuous control with deep reinforcement learning (DDPG)Continuous control with deep reinforcement learning (DDPG)
Continuous control with deep reinforcement learning (DDPG)
Taehoon Kim
 
Explicit Density Models
Explicit Density ModelsExplicit Density Models
Explicit Density Models
Sangwoo Mo
 
DDPG algortihm for angry birds
DDPG algortihm for angry birdsDDPG algortihm for angry birds
DDPG algortihm for angry birds
Wangyu Han
 
Meta-Learning with Implicit Gradients
Meta-Learning with Implicit GradientsMeta-Learning with Implicit Gradients
Meta-Learning with Implicit Gradients
Sangwoo Mo
 
SPICE-MATEX @ DAC15
SPICE-MATEX @ DAC15SPICE-MATEX @ DAC15
SPICE-MATEX @ DAC15
Hao Zhuang
 
Bayesian Model-Agnostic Meta-Learning
Bayesian Model-Agnostic Meta-LearningBayesian Model-Agnostic Meta-Learning
Bayesian Model-Agnostic Meta-Learning
Sangwoo Mo
 
Distributed Deep Q-Learning
Distributed Deep Q-LearningDistributed Deep Q-Learning
Distributed Deep Q-Learning
Lyft
 
Introduction to Diffusion Models
Introduction to Diffusion ModelsIntroduction to Diffusion Models
Introduction to Diffusion Models
Sangwoo Mo
 
Linear regression, costs & gradient descent
Linear regression, costs & gradient descentLinear regression, costs & gradient descent
Linear regression, costs & gradient descent
Revanth Kumar
 
Lp and ip programming cp 9
Lp and ip programming cp 9Lp and ip programming cp 9
Lp and ip programming cp 9
M S Prasad
 
is anyone_interest_in_auto-encoding_variational-bayes
is anyone_interest_in_auto-encoding_variational-bayesis anyone_interest_in_auto-encoding_variational-bayes
is anyone_interest_in_auto-encoding_variational-bayes
NAVER Engineering
 
Overview on Optimization algorithms in Deep Learning
Overview on Optimization algorithms in Deep LearningOverview on Optimization algorithms in Deep Learning
Overview on Optimization algorithms in Deep Learning
Khang Pham
 
Dueling network architectures for deep reinforcement learning
Dueling network architectures for deep reinforcement learningDueling network architectures for deep reinforcement learning
Dueling network architectures for deep reinforcement learning
Taehoon Kim
 
Undecidable Problems - COPING WITH THE LIMITATIONS OF ALGORITHM POWER
Undecidable Problems - COPING WITH THE LIMITATIONS OF ALGORITHM POWERUndecidable Problems - COPING WITH THE LIMITATIONS OF ALGORITHM POWER
Undecidable Problems - COPING WITH THE LIMITATIONS OF ALGORITHM POWER
muthukrishnavinayaga
 
unit-4-dynamic programming
unit-4-dynamic programmingunit-4-dynamic programming
unit-4-dynamic programming
hodcsencet
 
An overview of gradient descent optimization algorithms
An overview of gradient descent optimization algorithms An overview of gradient descent optimization algorithms
An overview of gradient descent optimization algorithms
Hakky St
 
Branch and bound
Branch and boundBranch and bound
Branch and bound
Nv Thejaswini
 
Lecture 8 dynamic programming
Lecture 8 dynamic programmingLecture 8 dynamic programming
Lecture 8 dynamic programming
Oye Tu
 
Dynamic programming
Dynamic programmingDynamic programming
Dynamic programming
Gopi Saiteja
 
DS ppt
DS pptDS ppt

What's hot (20)

Continuous control with deep reinforcement learning (DDPG)
Continuous control with deep reinforcement learning (DDPG)Continuous control with deep reinforcement learning (DDPG)
Continuous control with deep reinforcement learning (DDPG)
 
Explicit Density Models
Explicit Density ModelsExplicit Density Models
Explicit Density Models
 
DDPG algortihm for angry birds
DDPG algortihm for angry birdsDDPG algortihm for angry birds
DDPG algortihm for angry birds
 
Meta-Learning with Implicit Gradients
Meta-Learning with Implicit GradientsMeta-Learning with Implicit Gradients
Meta-Learning with Implicit Gradients
 
SPICE-MATEX @ DAC15
SPICE-MATEX @ DAC15SPICE-MATEX @ DAC15
SPICE-MATEX @ DAC15
 
Bayesian Model-Agnostic Meta-Learning
Bayesian Model-Agnostic Meta-LearningBayesian Model-Agnostic Meta-Learning
Bayesian Model-Agnostic Meta-Learning
 
Distributed Deep Q-Learning
Distributed Deep Q-LearningDistributed Deep Q-Learning
Distributed Deep Q-Learning
 
Introduction to Diffusion Models
Introduction to Diffusion ModelsIntroduction to Diffusion Models
Introduction to Diffusion Models
 
Linear regression, costs & gradient descent
Linear regression, costs & gradient descentLinear regression, costs & gradient descent
Linear regression, costs & gradient descent
 
Lp and ip programming cp 9
Lp and ip programming cp 9Lp and ip programming cp 9
Lp and ip programming cp 9
 
is anyone_interest_in_auto-encoding_variational-bayes
is anyone_interest_in_auto-encoding_variational-bayesis anyone_interest_in_auto-encoding_variational-bayes
is anyone_interest_in_auto-encoding_variational-bayes
 
Overview on Optimization algorithms in Deep Learning
Overview on Optimization algorithms in Deep LearningOverview on Optimization algorithms in Deep Learning
Overview on Optimization algorithms in Deep Learning
 
Dueling network architectures for deep reinforcement learning
Dueling network architectures for deep reinforcement learningDueling network architectures for deep reinforcement learning
Dueling network architectures for deep reinforcement learning
 
Undecidable Problems - COPING WITH THE LIMITATIONS OF ALGORITHM POWER
Undecidable Problems - COPING WITH THE LIMITATIONS OF ALGORITHM POWERUndecidable Problems - COPING WITH THE LIMITATIONS OF ALGORITHM POWER
Undecidable Problems - COPING WITH THE LIMITATIONS OF ALGORITHM POWER
 
unit-4-dynamic programming
unit-4-dynamic programmingunit-4-dynamic programming
unit-4-dynamic programming
 
An overview of gradient descent optimization algorithms
An overview of gradient descent optimization algorithms An overview of gradient descent optimization algorithms
An overview of gradient descent optimization algorithms
 
Branch and bound
Branch and boundBranch and bound
Branch and bound
 
Lecture 8 dynamic programming
Lecture 8 dynamic programmingLecture 8 dynamic programming
Lecture 8 dynamic programming
 
Dynamic programming
Dynamic programmingDynamic programming
Dynamic programming
 
DS ppt
DS pptDS ppt
DS ppt
 

Viewers also liked

Introduction to E-book
Introduction to E-bookIntroduction to E-book
Introduction to E-book
Sanghyuk Chun
 
K-means and GMM
K-means and GMMK-means and GMM
K-means and GMM
Sanghyuk Chun
 
면발쫄깃123(기업용제안서)
면발쫄깃123(기업용제안서)면발쫄깃123(기업용제안서)
면발쫄깃123(기업용제안서)
Minho Lee
 
Probabilistic Models of Time Series and Sequences
Probabilistic Models of Time Series and SequencesProbabilistic Models of Time Series and Sequences
Probabilistic Models of Time Series and Sequences
Zitao Liu
 
구조 설정
구조 설정구조 설정
구조 설정
승연 신
 
최종Ppt 디자인입힌거 최종
최종Ppt 디자인입힌거 최종최종Ppt 디자인입힌거 최종
최종Ppt 디자인입힌거 최종종성 박
 
비즈니스모델 젠 (Business Model Zen) 소개
비즈니스모델 젠 (Business Model Zen) 소개비즈니스모델 젠 (Business Model Zen) 소개
비즈니스모델 젠 (Business Model Zen) 소개
The Innovation Lab
 
Distance Metric Learning
Distance Metric LearningDistance Metric Learning
Distance Metric Learning
Sanghyuk Chun
 
M A R K O V C H A I N
M A R K O V  C H A I NM A R K O V  C H A I N
M A R K O V C H A I N
ashishtqm
 
Markov chain intro
Markov chain introMarkov chain intro
Markov chain intro
2vikasdubey
 
[토크아이티] 프런트엔드 개발 시작하기 저자 특강
[토크아이티] 프런트엔드 개발 시작하기 저자 특강 [토크아이티] 프런트엔드 개발 시작하기 저자 특강
[토크아이티] 프런트엔드 개발 시작하기 저자 특강
우영 주
 
Python Sympy 모듈 이해하기
Python Sympy 모듈 이해하기Python Sympy 모듈 이해하기
Python Sympy 모듈 이해하기
Yong Joon Moon
 
Lesson 11: Markov Chains
Lesson 11: Markov ChainsLesson 11: Markov Chains
Lesson 11: Markov Chains
Matthew Leingang
 
패션, 뷰티, 라이프스타일 부문 글로벌 디지털 콘텐츠 허브 : 패션인코리아 소개
패션, 뷰티, 라이프스타일 부문 글로벌 디지털 콘텐츠 허브 : 패션인코리아 소개패션, 뷰티, 라이프스타일 부문 글로벌 디지털 콘텐츠 허브 : 패션인코리아 소개
패션, 뷰티, 라이프스타일 부문 글로벌 디지털 콘텐츠 허브 : 패션인코리아 소개
Evan Ryu
 
Markov Models
Markov ModelsMarkov Models
Markov Models
Vu Pham
 
Markov chain
Markov chainMarkov chain
Markov chain
Yogesh Khandelwal
 
212140045 박채영 인데코
212140045 박채영 인데코212140045 박채영 인데코
212140045 박채영 인데코채영 박
 
Markov Chains
Markov ChainsMarkov Chains
Markov Chains
guest8901f4
 
Markov analysis
Markov analysisMarkov analysis
Markov analysis
ganith2k13
 
쿠키런 1년, 서버개발 분투기
쿠키런 1년, 서버개발 분투기쿠키런 1년, 서버개발 분투기
쿠키런 1년, 서버개발 분투기
Brian Hong
 

Viewers also liked (20)

Introduction to E-book
Introduction to E-bookIntroduction to E-book
Introduction to E-book
 
K-means and GMM
K-means and GMMK-means and GMM
K-means and GMM
 
면발쫄깃123(기업용제안서)
면발쫄깃123(기업용제안서)면발쫄깃123(기업용제안서)
면발쫄깃123(기업용제안서)
 
Probabilistic Models of Time Series and Sequences
Probabilistic Models of Time Series and SequencesProbabilistic Models of Time Series and Sequences
Probabilistic Models of Time Series and Sequences
 
구조 설정
구조 설정구조 설정
구조 설정
 
최종Ppt 디자인입힌거 최종
최종Ppt 디자인입힌거 최종최종Ppt 디자인입힌거 최종
최종Ppt 디자인입힌거 최종
 
비즈니스모델 젠 (Business Model Zen) 소개
비즈니스모델 젠 (Business Model Zen) 소개비즈니스모델 젠 (Business Model Zen) 소개
비즈니스모델 젠 (Business Model Zen) 소개
 
Distance Metric Learning
Distance Metric LearningDistance Metric Learning
Distance Metric Learning
 
M A R K O V C H A I N
M A R K O V  C H A I NM A R K O V  C H A I N
M A R K O V C H A I N
 
Markov chain intro
Markov chain introMarkov chain intro
Markov chain intro
 
[토크아이티] 프런트엔드 개발 시작하기 저자 특강
[토크아이티] 프런트엔드 개발 시작하기 저자 특강 [토크아이티] 프런트엔드 개발 시작하기 저자 특강
[토크아이티] 프런트엔드 개발 시작하기 저자 특강
 
Python Sympy 모듈 이해하기
Python Sympy 모듈 이해하기Python Sympy 모듈 이해하기
Python Sympy 모듈 이해하기
 
Lesson 11: Markov Chains
Lesson 11: Markov ChainsLesson 11: Markov Chains
Lesson 11: Markov Chains
 
패션, 뷰티, 라이프스타일 부문 글로벌 디지털 콘텐츠 허브 : 패션인코리아 소개
패션, 뷰티, 라이프스타일 부문 글로벌 디지털 콘텐츠 허브 : 패션인코리아 소개패션, 뷰티, 라이프스타일 부문 글로벌 디지털 콘텐츠 허브 : 패션인코리아 소개
패션, 뷰티, 라이프스타일 부문 글로벌 디지털 콘텐츠 허브 : 패션인코리아 소개
 
Markov Models
Markov ModelsMarkov Models
Markov Models
 
Markov chain
Markov chainMarkov chain
Markov chain
 
212140045 박채영 인데코
212140045 박채영 인데코212140045 박채영 인데코
212140045 박채영 인데코
 
Markov Chains
Markov ChainsMarkov Chains
Markov Chains
 
Markov analysis
Markov analysisMarkov analysis
Markov analysis
 
쿠키런 1년, 서버개발 분투기
쿠키런 1년, 서버개발 분투기쿠키런 1년, 서버개발 분투기
쿠키런 1년, 서버개발 분투기
 

Similar to Markov Chain Basic

Fa18_P1.pptx
Fa18_P1.pptxFa18_P1.pptx
Fa18_P1.pptx
Md Abul Hayat
 
Paper study: Learning to solve circuit sat
Paper study: Learning to solve circuit satPaper study: Learning to solve circuit sat
Paper study: Learning to solve circuit sat
ChenYiHuang5
 
nural network ER. Abhishek k. upadhyay
nural network ER. Abhishek  k. upadhyaynural network ER. Abhishek  k. upadhyay
nural network ER. Abhishek k. upadhyay
abhishek upadhyay
 
Competitive Learning [Deep Learning And Nueral Networks].pptx
Competitive Learning [Deep Learning And Nueral Networks].pptxCompetitive Learning [Deep Learning And Nueral Networks].pptx
Competitive Learning [Deep Learning And Nueral Networks].pptx
raghavaram5555
 
Neural Networks
Neural NetworksNeural Networks
Av 738- Adaptive Filtering - Background Material
Av 738- Adaptive Filtering - Background MaterialAv 738- Adaptive Filtering - Background Material
Av 738- Adaptive Filtering - Background Material
Dr. Bilal Siddiqui, C.Eng., MIMechE, FRAeS
 
Efficient anomaly detection via matrix sketching
Efficient anomaly detection via matrix sketchingEfficient anomaly detection via matrix sketching
Efficient anomaly detection via matrix sketching
Hsing-chuan Hsieh
 
test generation
test generationtest generation
test generation
dennis gookyi
 
Trajectory Transformer.pptx
Trajectory Transformer.pptxTrajectory Transformer.pptx
Trajectory Transformer.pptx
Seungeon Baek
 
Financial Networks III. Centrality and Systemic Importance
Financial Networks III. Centrality and Systemic ImportanceFinancial Networks III. Centrality and Systemic Importance
Financial Networks III. Centrality and Systemic Importance
Kimmo Soramaki
 
Artificial Neural Networks presentations
Artificial Neural Networks presentationsArtificial Neural Networks presentations
Artificial Neural Networks presentations
migob991
 
Quarks zk study-club
Quarks zk study-clubQuarks zk study-club
Quarks zk study-club
Alex Pruden
 
Paper study: Attention, learn to solve routing problems!
Paper study: Attention, learn to solve routing problems!Paper study: Attention, learn to solve routing problems!
Paper study: Attention, learn to solve routing problems!
ChenYiHuang5
 
Deep Learning Theory Seminar (Chap 3, part 2)
Deep Learning Theory Seminar (Chap 3, part 2)Deep Learning Theory Seminar (Chap 3, part 2)
Deep Learning Theory Seminar (Chap 3, part 2)
Sangwoo Mo
 
Class13_Quicksort_Algorithm.pdf
Class13_Quicksort_Algorithm.pdfClass13_Quicksort_Algorithm.pdf
Class13_Quicksort_Algorithm.pdf
AkashSingh625550
 
Stat 2153 Stochastic Process and Markov chain
Stat 2153 Stochastic Process and Markov chainStat 2153 Stochastic Process and Markov chain
Stat 2153 Stochastic Process and Markov chain
Khulna University
 
Monte Carlo Berkeley.pptx
Monte Carlo Berkeley.pptxMonte Carlo Berkeley.pptx
Monte Carlo Berkeley.pptx
HaibinSu2
 
Artificial neural networks
Artificial neural networksArtificial neural networks
Artificial neural networks
arjitkantgupta
 
DAC15 Hao Zhuang poster
DAC15 Hao Zhuang poster DAC15 Hao Zhuang poster
DAC15 Hao Zhuang poster
Hao Zhuang
 
Intelligent Handwriting Recognition_MIL_presentation_v3_final
Intelligent Handwriting Recognition_MIL_presentation_v3_finalIntelligent Handwriting Recognition_MIL_presentation_v3_final
Intelligent Handwriting Recognition_MIL_presentation_v3_final
Suhas Pillai
 

Similar to Markov Chain Basic (20)

Fa18_P1.pptx
Fa18_P1.pptxFa18_P1.pptx
Fa18_P1.pptx
 
Paper study: Learning to solve circuit sat
Paper study: Learning to solve circuit satPaper study: Learning to solve circuit sat
Paper study: Learning to solve circuit sat
 
nural network ER. Abhishek k. upadhyay
nural network ER. Abhishek  k. upadhyaynural network ER. Abhishek  k. upadhyay
nural network ER. Abhishek k. upadhyay
 
Competitive Learning [Deep Learning And Nueral Networks].pptx
Competitive Learning [Deep Learning And Nueral Networks].pptxCompetitive Learning [Deep Learning And Nueral Networks].pptx
Competitive Learning [Deep Learning And Nueral Networks].pptx
 
Neural Networks
Neural NetworksNeural Networks
Neural Networks
 
Av 738- Adaptive Filtering - Background Material
Av 738- Adaptive Filtering - Background MaterialAv 738- Adaptive Filtering - Background Material
Av 738- Adaptive Filtering - Background Material
 
Efficient anomaly detection via matrix sketching
Efficient anomaly detection via matrix sketchingEfficient anomaly detection via matrix sketching
Efficient anomaly detection via matrix sketching
 
test generation
test generationtest generation
test generation
 
Trajectory Transformer.pptx
Trajectory Transformer.pptxTrajectory Transformer.pptx
Trajectory Transformer.pptx
 
Financial Networks III. Centrality and Systemic Importance
Financial Networks III. Centrality and Systemic ImportanceFinancial Networks III. Centrality and Systemic Importance
Financial Networks III. Centrality and Systemic Importance
 
Artificial Neural Networks presentations
Artificial Neural Networks presentationsArtificial Neural Networks presentations
Artificial Neural Networks presentations
 
Quarks zk study-club
Quarks zk study-clubQuarks zk study-club
Quarks zk study-club
 
Paper study: Attention, learn to solve routing problems!
Paper study: Attention, learn to solve routing problems!Paper study: Attention, learn to solve routing problems!
Paper study: Attention, learn to solve routing problems!
 
Deep Learning Theory Seminar (Chap 3, part 2)
Deep Learning Theory Seminar (Chap 3, part 2)Deep Learning Theory Seminar (Chap 3, part 2)
Deep Learning Theory Seminar (Chap 3, part 2)
 
Class13_Quicksort_Algorithm.pdf
Class13_Quicksort_Algorithm.pdfClass13_Quicksort_Algorithm.pdf
Class13_Quicksort_Algorithm.pdf
 
Stat 2153 Stochastic Process and Markov chain
Stat 2153 Stochastic Process and Markov chainStat 2153 Stochastic Process and Markov chain
Stat 2153 Stochastic Process and Markov chain
 
Monte Carlo Berkeley.pptx
Monte Carlo Berkeley.pptxMonte Carlo Berkeley.pptx
Monte Carlo Berkeley.pptx
 
Artificial neural networks
Artificial neural networksArtificial neural networks
Artificial neural networks
 
DAC15 Hao Zhuang poster
DAC15 Hao Zhuang poster DAC15 Hao Zhuang poster
DAC15 Hao Zhuang poster
 
Intelligent Handwriting Recognition_MIL_presentation_v3_final
Intelligent Handwriting Recognition_MIL_presentation_v3_finalIntelligent Handwriting Recognition_MIL_presentation_v3_final
Intelligent Handwriting Recognition_MIL_presentation_v3_final
 

Recently uploaded

20240607 QFM018 Elixir Reading List May 2024
20240607 QFM018 Elixir Reading List May 202420240607 QFM018 Elixir Reading List May 2024
20240607 QFM018 Elixir Reading List May 2024
Matthew Sinclair
 
AI 101: An Introduction to the Basics and Impact of Artificial Intelligence
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceAI 101: An Introduction to the Basics and Impact of Artificial Intelligence
AI 101: An Introduction to the Basics and Impact of Artificial Intelligence
IndexBug
 
Removing Uninteresting Bytes in Software Fuzzing
Removing Uninteresting Bytes in Software FuzzingRemoving Uninteresting Bytes in Software Fuzzing
Removing Uninteresting Bytes in Software Fuzzing
Aftab Hussain
 
Mariano G Tinti - Decoding SpaceX
Mariano G Tinti - Decoding SpaceXMariano G Tinti - Decoding SpaceX
Mariano G Tinti - Decoding SpaceX
Mariano Tinti
 
GraphRAG for Life Science to increase LLM accuracy
GraphRAG for Life Science to increase LLM accuracyGraphRAG for Life Science to increase LLM accuracy
GraphRAG for Life Science to increase LLM accuracy
Tomaz Bratanic
 
Microsoft - Power Platform_G.Aspiotis.pdf
Microsoft - Power Platform_G.Aspiotis.pdfMicrosoft - Power Platform_G.Aspiotis.pdf
Microsoft - Power Platform_G.Aspiotis.pdf
Uni Systems S.M.S.A.
 
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...
SOFTTECHHUB
 
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with SlackLet's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack
shyamraj55
 
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
名前 です男
 
Presentation of the OECD Artificial Intelligence Review of Germany
Presentation of the OECD Artificial Intelligence Review of GermanyPresentation of the OECD Artificial Intelligence Review of Germany
Presentation of the OECD Artificial Intelligence Review of Germany
innovationoecd
 
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024
GraphSummit Singapore | The Art of the  Possible with Graph - Q2 2024GraphSummit Singapore | The Art of the  Possible with Graph - Q2 2024
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024
Neo4j
 
Best 20 SEO Techniques To Improve Website Visibility In SERP
Best 20 SEO Techniques To Improve Website Visibility In SERPBest 20 SEO Techniques To Improve Website Visibility In SERP
Best 20 SEO Techniques To Improve Website Visibility In SERP
Pixlogix Infotech
 
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
Neo4j
 
“I’m still / I’m still / Chaining from the Block”
“I’m still / I’m still / Chaining from the Block”“I’m still / I’m still / Chaining from the Block”
“I’m still / I’m still / Chaining from the Block”
Claudio Di Ciccio
 
20240605 QFM017 Machine Intelligence Reading List May 2024
20240605 QFM017 Machine Intelligence Reading List May 202420240605 QFM017 Machine Intelligence Reading List May 2024
20240605 QFM017 Machine Intelligence Reading List May 2024
Matthew Sinclair
 
Mind map of terminologies used in context of Generative AI
Mind map of terminologies used in context of Generative AIMind map of terminologies used in context of Generative AI
Mind map of terminologies used in context of Generative AI
Kumud Singh
 
Pushing the limits of ePRTC: 100ns holdover for 100 days
Pushing the limits of ePRTC: 100ns holdover for 100 daysPushing the limits of ePRTC: 100ns holdover for 100 days
Pushing the limits of ePRTC: 100ns holdover for 100 days
Adtran
 
Driving Business Innovation: Latest Generative AI Advancements & Success Story
Driving Business Innovation: Latest Generative AI Advancements & Success StoryDriving Business Innovation: Latest Generative AI Advancements & Success Story
Driving Business Innovation: Latest Generative AI Advancements & Success Story
Safe Software
 
GraphSummit Singapore | Neo4j Product Vision & Roadmap - Q2 2024
GraphSummit Singapore | Neo4j Product Vision & Roadmap - Q2 2024GraphSummit Singapore | Neo4j Product Vision & Roadmap - Q2 2024
GraphSummit Singapore | Neo4j Product Vision & Roadmap - Q2 2024
Neo4j
 
Building Production Ready Search Pipelines with Spark and Milvus
Building Production Ready Search Pipelines with Spark and MilvusBuilding Production Ready Search Pipelines with Spark and Milvus
Building Production Ready Search Pipelines with Spark and Milvus
Zilliz
 

Recently uploaded (20)

20240607 QFM018 Elixir Reading List May 2024
20240607 QFM018 Elixir Reading List May 202420240607 QFM018 Elixir Reading List May 2024
20240607 QFM018 Elixir Reading List May 2024
 
AI 101: An Introduction to the Basics and Impact of Artificial Intelligence
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceAI 101: An Introduction to the Basics and Impact of Artificial Intelligence
AI 101: An Introduction to the Basics and Impact of Artificial Intelligence
 
Removing Uninteresting Bytes in Software Fuzzing
Removing Uninteresting Bytes in Software FuzzingRemoving Uninteresting Bytes in Software Fuzzing
Removing Uninteresting Bytes in Software Fuzzing
 
Mariano G Tinti - Decoding SpaceX
Mariano G Tinti - Decoding SpaceXMariano G Tinti - Decoding SpaceX
Mariano G Tinti - Decoding SpaceX
 
GraphRAG for Life Science to increase LLM accuracy
GraphRAG for Life Science to increase LLM accuracyGraphRAG for Life Science to increase LLM accuracy
GraphRAG for Life Science to increase LLM accuracy
 
Microsoft - Power Platform_G.Aspiotis.pdf
Microsoft - Power Platform_G.Aspiotis.pdfMicrosoft - Power Platform_G.Aspiotis.pdf
Microsoft - Power Platform_G.Aspiotis.pdf
 

Markov Chain Basic

  • 1. Network Intelligence and Analysis Lab | Markov Chain Basic | 2014.07.11 | Sanghyuk Chun
  • 2. Previous Chapters: • Exact Counting • #P Complete • Sampling and Counting
  • 3. Remaining Chapters: • Markov Chain Basic (Today!) • Ergodic MC has a unique stationary distribution • Some basic concepts (Coupling, Mixing time) • Coupling from the past • Coupling in detail • Ising Model • Bounding Mixing time via Coupling • Random spanning tree • Path coupling framework • MC for k-coloring a graph
  • 4. In this chapter… • Introduce Markov Chains • Show a potential algorithmic use of Markov Chains for sampling from complex distributions • Prove that an ergodic Markov Chain always converges to a unique stationary distribution • Introduce coupling techniques and Mixing time
  • 5. Markov Chain • For a finite state space Ω, we say a sequence of random variables (X_t) on Ω is a Markov chain if the sequence is Markovian in the following sense • For all t, all x_0, …, x_t, y ∈ Ω, we require Pr(X_{t+1} = y | X_0 = x_0, …, X_t = x_t) = Pr(X_{t+1} = y | X_t = x_t) • The Markov property: "Memoryless"
  • 6. Example of a Markov Chain (Card Shuffling) • For a finite state space Ω, a sequence of random variables (X_t) on Ω is a Markov chain if the sequence is Markovian • Let Ω denote the set of shufflings (e.g., X_1 = (1, 2, 3, …, 52)) • The next shuffling state depends only on the previous shuffling state; that is, X_{t+1} depends only on X_t • Question 1: How can we shuffle the cards uniformly? • Question 2: Can we get a fast uniform shuffling algorithm?
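The shuffling question can be made concrete with a simple chain on permutations. As an illustrative sketch (not from the slides), the random-transposition chain swaps two uniformly chosen positions per step; since the two positions may coincide, the chain can stay put, and it converges to the uniform distribution over all 52! orderings:

```python
import random

def transposition_step(deck):
    """One step of the random-transposition chain: swap two positions
    chosen uniformly at random (possibly the same position, which makes
    the chain aperiodic because it can stay put)."""
    i = random.randrange(len(deck))
    j = random.randrange(len(deck))
    deck[i], deck[j] = deck[j], deck[i]
    return deck

deck = list(range(1, 53))          # X_1 = (1, 2, ..., 52)
for _ in range(1000):              # run the chain for a while
    deck = transposition_step(deck)
```

Each step touches only the current state, so the sequence of decks is Markovian; how many steps are needed for near-uniformity is exactly the mixing-time question raised on the slide.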
  • 7. Transition Matrix • P(x, y) = Pr(X_{t+1} = y | X_t = x) • Transitions are independent of the time (time-homogeneous)
  • 8. Transition Matrix • P(x, y) = Pr(X_{t+1} = y | X_t = x) • Transitions are independent of the time (time-homogeneous) • The t-step distribution is defined in the natural way: P^t(x, y) = P(x, y) if t = 1, and P^t(x, y) = Σ_{z∈Ω} P(x, z) P^{t−1}(z, y) if t > 1
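The recursive definition of P^t is exactly repeated matrix multiplication; a small numpy sketch (the 2-state matrix is an illustrative example, not from the slides):

```python
import numpy as np

def t_step(P, t):
    """P^t(x, y): P itself for t = 1, otherwise
    sum_z P(x, z) * P^{t-1}(z, y), i.e. repeated matrix products."""
    Pt = P.copy()
    for _ in range(t - 1):
        Pt = Pt @ P
    return Pt

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
P3 = t_step(P, 3)                  # 3-step transition probabilities
```

Each row of P^t is the distribution of X_t started from that state, so rows always sum to 1.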
  • 9. Stationary Distribution • A distribution π is a stationary distribution if it is invariant with respect to the transition matrix: for all y ∈ Ω, π(y) = Σ_{x∈Ω} π(x) P(x, y) • Theorem 1: For a finite ergodic Markov Chain, there exists a unique stationary distribution π • Proof?
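The invariance condition says π is a left eigenvector of P with eigenvalue 1, which gives a numerical way to find it. A sketch with an illustrative 2-state chain (for which π = (2/7, 5/7) can be checked by hand):

```python
import numpy as np

P = np.array([[0.5, 0.5],
              [0.2, 0.8]])

# pi is a left eigenvector of P with eigenvalue 1: pi P = pi,
# i.e. a right eigenvector of P transposed
eigvals, eigvecs = np.linalg.eig(P.T)
idx = np.argmin(np.abs(eigvals - 1.0))
pi = np.real(eigvecs[:, idx])
pi = pi / pi.sum()                 # normalize to a probability vector
```

The invariance check pi @ P == pi is exactly the slide's condition π(y) = Σ_x π(x) P(x, y) written in matrix form.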
  • 10. Ergodic Markov Chain • A Markov Chain is ergodic if there exists t such that for all x, y ∈ Ω, P^t(x, y) > 0 • It is possible to go from every state to every state (not necessarily in one move) • For a finite MC, the following conditions together are equivalent to ergodicity • Irreducible: for all x, y ∈ Ω, there exists t = t(x, y) s.t. P^t(x, y) > 0 • Aperiodic: for all x ∈ Ω, gcd{t : P^t(x, x) > 0} = 1 • Since ergodic MCs eventually reach a unique stationary distribution regardless of their initial state, they are useful algorithmic tools
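The definition suggests a direct (if brute-force) test of ergodicity: look for a t with P^t strictly positive. For an n-state chain, a classical bound of Wielandt for primitive matrices says that if the chain is ergodic then t = (n−1)² + 1 already works, so a single power suffices. A sketch:

```python
import numpy as np

def is_ergodic(P):
    """A finite chain is ergodic iff some power P^t is strictly
    positive; by Wielandt's bound it suffices to check
    t = (n - 1)^2 + 1 for an n-state chain."""
    n = P.shape[0]
    t = (n - 1) ** 2 + 1
    return bool(np.all(np.linalg.matrix_power(P, t) > 0))

lazy = np.array([[0.5, 0.5],
                 [0.5, 0.5]])      # irreducible and aperiodic
flip = np.array([[0.0, 1.0],
                 [1.0, 0.0]])      # irreducible but periodic (period 2)
```

The deterministic flip chain shows why aperiodicity is needed: every power of it is either the identity or the flip, so no P^t is strictly positive.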
  • 11. Algorithmic usage of Ergodic Markov Chains • Goal: we have a probability distribution we'd like to generate random samples from • Solution via MC: if we can design an ergodic MC whose unique stationary distribution is the desired distribution, we can run the chain to obtain samples from that distribution • Example: sampling matchings
  • 12. Sampling Matchings • For a graph G = (V, E), let Ω denote the set of matchings of G • We define a MC on Ω whose transitions are as follows • Choose an edge e uniformly at random from E • Let X′ = X_t ∪ {e} if e ∉ X_t, and X′ = X_t \ {e} if e ∈ X_t • If X′ ∈ Ω, then set X_{t+1} = X′ with probability 1/2; otherwise set X_{t+1} = X_t • The MC is aperiodic (P(M, M) ≥ 1/2 for all M ∈ Ω) • The MC is irreducible (via the empty matching) with symmetric transition probabilities • A symmetric transition matrix has a uniform stationary distribution • Thus, the unique stationary distribution is uniform over all matchings of G
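The transition rule above translates almost line for line into code; in this sketch edges are frozensets of endpoints, the matching test checks that no vertex is covered twice, and the triangle graph is an illustrative input (not from the slides):

```python
import random

def matching_step(E, X):
    """One transition of the matching chain: pick e uniformly from E,
    toggle it in/out of X, and accept the move with probability 1/2
    if the result is still a matching; otherwise stay at X."""
    e = random.choice(E)
    Xp = X - {e} if e in X else X | {e}
    covered = [v for edge in Xp for v in edge]
    is_matching = len(covered) == len(set(covered))
    if is_matching and random.random() < 0.5:
        return Xp
    return X

# triangle graph: the matchings are the empty set and the single edges
E = [frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})]
X = frozenset()                    # start from the empty matching
for _ in range(100):
    X = matching_step(E, X)
```

Because every move toggles a single edge and invalid moves are rejected, the state is a matching after every step, and the lazy 1/2 acceptance gives the P(M, M) ≥ 1/2 self-loop used for aperiodicity.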
  • 13. Proof of Theorem (introduction) • We will prove the theorem using the coupling technique and the Coupling Lemma
  • 14. Coupling Technique • For distributions μ, ν on a finite set Ω, a distribution ω on Ω × Ω is a coupling if its marginals are μ and ν: Σ_{y∈Ω} ω(x, y) = μ(x) for all x ∈ Ω, and Σ_{x∈Ω} ω(x, y) = ν(y) for all y ∈ Ω • In other words, ω is a joint distribution whose marginal distributions are the appropriate distributions • The variation distance between μ and ν is defined as d_TV(μ, ν) = (1/2) Σ_{x∈Ω} |μ(x) − ν(x)|
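The variation-distance formula translates directly to code; a small sketch with distributions represented as dicts over a finite Ω (states absent from a dict have probability 0):

```python
def total_variation(mu, nu):
    """d_TV(mu, nu) = (1/2) * sum over x of |mu(x) - nu(x)|."""
    states = set(mu) | set(nu)
    return 0.5 * sum(abs(mu.get(x, 0.0) - nu.get(x, 0.0)) for x in states)

# disjoint supports give the maximum possible distance, 1
d = total_variation({"a": 1.0}, {"b": 1.0})
```

The 1/2 factor ensures d_TV ranges over [0, 1], with 0 for identical distributions and 1 for distributions with disjoint supports.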
  • 15. Coupling Lemma • Let ω be any coupling of μ, ν and let (X, Y) ~ ω • (a) d_TV(μ, ν) ≤ Pr(X ≠ Y) • (b) There exists a coupling for which equality holds: d_TV(μ, ν) = Pr(X ≠ Y)
  • 16. Proof of Lemma (a) • For any A ⊆ Ω, μ(A) − ν(A) = Pr(X ∈ A) − Pr(Y ∈ A) ≤ Pr(X ∈ A, Y ∉ A) ≤ Pr(X ≠ Y) • Taking the maximum over all A ⊆ Ω gives d_TV(μ, ν) ≤ Pr(X ≠ Y)
  • 17. Proof of Lemma (b) • For all z ∈ Ω, let ω(z, z) = min{μ(z), ν(z)} • Then Pr(X ≠ Y) = 1 − Σ_z min{μ(z), ν(z)} = d_TV(μ, ν) • We need to complete the construction of ω in a valid way • For y, z ∈ Ω, y ≠ z, let ω(y, z) = (μ(y) − ω(y, y))(ν(z) − ω(z, z)) / d_TV(μ, ν) • It is straightforward to verify that ω is a valid coupling
  • 18. Couplings for Markov Chains • Consider a pair of Markov chains (X_t), (Y_t) on Ω with transition matrices P_X, P_Y respectively • Typically, the MCs are identical in applications (P_X = P_Y) • The Markov chain (X_t′, Y_t′) on Ω × Ω is a Markovian coupling if, viewed on its own, (X_t′) moves according to P_X and (Y_t′) moves according to P_Y • For such a Markovian coupling we can bound the variation distance: d_TV(P_X^t(x, ·), P_Y^t(y, ·)) ≤ Pr(X_t ≠ Y_t | X_0 = x, Y_0 = y) • If we choose Y_0 from the stationary distribution π, then d_TV(P^t(x, ·), π) ≤ Pr(X_t ≠ Y_t) • This shows how we can use a coupling to bound the distance from stationarity
  • 19. Proof of Theorem (1/4) • Create MCs (X_t), (Y_t), where the initial states X_0, Y_0 are arbitrary states in Ω • Create a coupling for these chains in the following way • From (X_t, Y_t), choose X_{t+1} according to the transition matrix P • If Y_t = X_t, set Y_{t+1} = X_{t+1}; otherwise choose Y_{t+1} according to P, independent of the choice for X_{t+1} • By ergodicity, there exists t* s.t. for all x, y ∈ Ω, P^{t*}(x, y) ≥ ε > 0 • Therefore, for all X_0, Y_0 ∈ Ω, Pr(X_{t*} ≠ Y_{t*}) ≤ 1 − ε • We can argue similarly for the step t* → 2t*
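This coupling is easy to simulate: X always moves by P; Y copies X's move once the chains have met, and otherwise moves independently. A sketch (the 2-state matrix is illustrative, not from the slides):

```python
import random

def coupled_step(P, x, y):
    """One step of the proof's coupling: X_{t+1} is drawn from P(x, .);
    if the chains have already met (y == x) they move together,
    otherwise Y_{t+1} is drawn from P(y, .) independently."""
    n = len(P)
    xn = random.choices(range(n), weights=P[x])[0]
    if y == x:
        return xn, xn
    yn = random.choices(range(n), weights=P[y])[0]
    return xn, yn

P = [[0.5, 0.5],
     [0.5, 0.5]]
x, y = 0, 1                        # start from different states
for _ in range(50):
    x, y = coupled_step(P, x, y)
```

For this chain the two walks meet with probability 1/2 at each step while apart, so after 50 steps the probability they have not coalesced is about 2^-50; once they meet they agree forever, mirroring the argument on the next slides.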
  • 20. Proof of Theorem (2/4) • Create a coupling for these chains in the following way • From (X_t, Y_t), choose X_{t+1} according to the transition matrix P • If Y_t = X_t, set Y_{t+1} = X_{t+1}; otherwise choose Y_{t+1} according to P, independent of the choice for X_{t+1} • Once X_s = Y_s, we have X_{s′} = Y_{s′} for all s′ ≥ s • From the earlier observation, Pr(X_{2t*} ≠ Y_{2t*}) ≤ (1 − ε)^2
  • 21. Proof of Theorem (3/4) • For all x, y ∈ Ω and every integer k > 0, Pr(X_{kt*} ≠ Y_{kt*}) ≤ (1 − ε)^k • Therefore, Pr(X_t ≠ Y_t) → 0 as t → ∞ • Since X_t = Y_t implies X_{t′} = Y_{t′} for all t′ ≥ t, the chains agree forever once they meet • Note that the coupling of MCs we defined induces a coupling of (X_t, Y_t); hence by the Coupling Lemma, d_TV(P^t(x, ·), P^t(y, ·)) ≤ Pr(X_t ≠ Y_t) → 0 • This proves that from any initial points we reach the same distribution
  • 22. Proof of Theorem (4/4) • From the previous result, we proved there is a limiting distribution σ • Question: is σ a stationary distribution? That is, does it satisfy: for all y ∈ Ω, σ(y) = Σ_{x∈Ω} σ(x) P(x, y)?
  • 23. Markov Chains for Algorithmic Purposes: Mixing Time • Convergence itself is guaranteed if the MC is ergodic • However, this gives no indication of the convergence rate • We define the mixing time τ_mix(ε) as the time until the chain is within variation distance ε from the worst initial state: τ_mix(ε) = max_{X_0∈Ω} min{t : d_TV(P^t(X_0, ·), π) ≤ ε} • To get efficient sampling algorithms (e.g., the matching chain), we hope that the mixing time is polynomial in the input size
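For small chains the definition can be evaluated exactly: power P step by step and track the worst-start variation distance. A sketch (the symmetric 2-state chain is an illustrative example, not from the slides):

```python
import numpy as np

def mixing_time(P, pi, eps=0.25, t_max=10_000):
    """tau_mix(eps): smallest t with
    max over starts X_0 of d_TV(P^t(X_0, .), pi) <= eps."""
    n = P.shape[0]
    Pt = np.eye(n)
    for t in range(1, t_max + 1):
        Pt = Pt @ P
        worst = max(0.5 * np.abs(Pt[x] - pi).sum() for x in range(n))
        if worst <= eps:
            return t
    return None                    # did not mix within t_max steps

P = np.array([[0.9, 0.1],
              [0.1, 0.9]])
pi = np.array([0.5, 0.5])          # stationary by symmetry
tau = mixing_time(P, pi)
```

For this chain d_TV(P^t(x, ·), π) = 0.5 · 0.8^t, so the distance shrinks geometrically and the threshold ε = 1/4 is first met at t = 4; for chains with huge Ω (such as the matching chain) the matrix cannot be built, which is why coupling arguments are used to bound τ_mix instead.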