Random Walk
Markov Property
• dependency structure of X_0, X_1, X_2, ...
• Markov property (memoryless): X_{t+1} depends only on X_t:
  Pr[X_{t+1} = y | X_0 = x_0, ..., X_{t-1} = x_{t-1}, X_t = x] = Pr[X_{t+1} = y | X_t = x]
  for all t = 0, 1, 2, ... and all x_0, x_1, ..., x_{t-1}, x, y ∈ Ω
• Markov chain: a discrete-time, discrete-space stochastic process with the Markov property
Transition Matrix
• Markov chain: X_0, X_1, X_2, ...
  Pr[X_{t+1} = y | X_0 = x_0, ..., X_{t-1} = x_{t-1}, X_t = x] = Pr[X_{t+1} = y | X_t = x] = P^(t)_{xy} = P_{xy}   (time-homogeneous)
• transition matrix P: rows indexed by the current state x ∈ Ω, columns by the next state y ∈ Ω, entries P_{xy}
• P is a stochastic matrix: P1 = 1, i.e. every row of P sums to 1
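As an illustrative sketch (not from the slides), a time-homogeneous chain can be represented in NumPy by its stochastic matrix, sampling one step at a time; the helper name step is hypothetical, and the 3-state matrix is the example used later in the slides:

import numpy as np

# rows are indexed by the current state x, columns by the next state y
P = np.array([[0.0, 1.0, 0.0],
              [1/3, 0.0, 2/3],
              [1/3, 1/3, 1/3]])

assert np.allclose(P.sum(axis=1), 1.0)   # stochastic: P1 = 1

rng = np.random.default_rng(0)

def step(x, P, rng):
    # sample X_{t+1} given X_t = x: its distribution is row x of P
    return rng.choice(len(P), p=P[x])

x = 0                        # X_0
for t in range(5):
    x = step(x, P, rng)      # the next state depends only on the current one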
• chain: X_0, X_1, X_2, ...
• distributions: π^(0), π^(1), π^(2), ... ∈ [0, 1]^Ω with Σ_{x ∈ Ω} π_x = 1, where π^(t)_x = Pr[X_t = x]
• one step of the chain: π^(t+1) = π^(t) P, since
  π^(t+1)_y = Pr[X_{t+1} = y]
            = Σ_{x ∈ Ω} Pr[X_t = x] · Pr[X_{t+1} = y | X_t = x]
            = Σ_{x ∈ Ω} π^(t)_x P_{xy}
            = (π^(t) P)_y
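The same evolution in NumPy (a minimal sketch, assuming the 3-state example matrix that appears below): representing π^(t) as a row vector, one step of the chain is the vector-matrix product π^(t) P:

import numpy as np

P = np.array([[0.0, 1.0, 0.0],
              [1/3, 0.0, 2/3],
              [1/3, 1/3, 1/3]])

pi = np.array([1/4, 1/2, 1/4])        # pi^(0), some starting distribution
for t in range(3):
    pi = pi @ P                        # pi^(t+1) = pi^(t) P
    assert np.isclose(pi.sum(), 1.0)   # stays a probability distribution
print(pi)                              # pi^(3)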
[Figure: a 3-state Markov chain on states 1, 2, 3 whose edge labels are the transition probabilities in P]

P = [  0    1    0
      1/3   0   2/3
      1/3  1/3  1/3 ]

π^(0) = (1/4, 1/2, 1/4)
π^(1) = π^(0) P = 1/4 · (0, 1, 0) + 1/2 · (1/3, 0, 2/3) + 1/4 · (1/3, 1/3, 1/3)
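A short numerical check of this computation (a sketch, assuming NumPy):

import numpy as np

P = np.array([[0.0, 1.0, 0.0],
              [1/3, 0.0, 2/3],
              [1/3, 1/3, 1/3]])

pi0 = np.array([1/4, 1/2, 1/4])
pi1 = pi0 @ P
print(pi1)                             # [0.25  0.3333...  0.4166...] = (1/4, 1/3, 5/12)

# the same vector, written as a convex combination of the rows of P
mix = 1/4 * P[0] + 1/2 * P[1] + 1/4 * P[2]
assert np.allclose(pi1, mix)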
Random Walks
• fair ±1 random walk: flipping a fair coin, the state is the difference between heads and tails (see the sketch after this list);
• random walk on a graph;
• card shuffling: random walk on a state space of permutations;
• random walk on the q-colorings of a graph;
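A minimal sketch of the first example, the fair ±1 walk, assuming NumPy: flip a fair coin repeatedly and keep the running difference between heads and tails:

import numpy as np

rng = np.random.default_rng(0)

# fair ±1 random walk: X_t = (#heads - #tails) after t fair coin flips
flips = rng.choice([+1, -1], size=1000)          # +1 for heads, -1 for tails
walk = np.concatenate(([0], np.cumsum(flips)))   # X_0, X_1, ..., X_1000
print(walk[:10], walk[-1])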
Convergence

[Figure: the same 3-state chain with transition matrix P]

P = [  0    1    0
      1/3   0   2/3
      1/3  1/3  1/3 ]

P^20 ≈ [ 0.2500  0.3750  0.3750
         0.2500  0.3750  0.3750
         0.2500  0.3750  0.3750 ]

so for every distribution π, πP^20 ≈ (1/4, 3/8, 3/8)
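The matrix power on this slide can be reproduced directly (a sketch, assuming NumPy):

import numpy as np

P = np.array([[0.0, 1.0, 0.0],
              [1/3, 0.0, 2/3],
              [1/3, 1/3, 1/3]])

P20 = np.linalg.matrix_power(P, 20)
print(P20)                               # every row ≈ (0.2500, 0.3750, 0.3750)

pi = np.array([0.9, 0.05, 0.05])         # an arbitrary starting distribution
print(pi @ P20)                          # ≈ (1/4, 3/8, 3/8)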
Stationary Distribution

Markov chain M = (Ω, P)
• stationary distribution π: πP = π   (fixed point)
• Perron-Frobenius Theorem:
  • stochastic matrix P: P1 = 1
  • 1 is also a left eigenvalue of P (an eigenvalue of P^T)
  • the left eigenvector with πP = π is nonnegative
  • a stationary distribution always exists
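A sketch of how a stationary distribution could be computed numerically, using the facts above: take a left eigenvector of P for eigenvalue 1 (an eigenvector of P^T) and normalize it into a distribution (assuming NumPy):

import numpy as np

P = np.array([[0.0, 1.0, 0.0],
              [1/3, 0.0, 2/3],
              [1/3, 1/3, 1/3]])

# left eigenvectors of P are (right) eigenvectors of P^T
vals, vecs = np.linalg.eig(P.T)
k = np.argmin(np.abs(vals - 1.0))        # locate the eigenvalue 1
pi = np.real(vecs[:, k])
pi = pi / pi.sum()                       # normalize to a probability distribution
print(pi)                                # ≈ (0.25, 0.375, 0.375)
assert np.allclose(pi @ P, pi)           # pi P = pi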
Perron-Frobenius

Perron-Frobenius Theorem:
• A: a nonnegative n×n matrix with spectral radius ρ(A)
  • ρ(A) ≥ 0 is an eigenvalue of A;
  • there is a nonnegative (left and right) eigenvector associated with ρ(A);
• if further A is irreducible, then:
  • there is a positive (left and right) eigenvector associated with ρ(A), which is an eigenvalue of multiplicity 1;
• for a stochastic matrix A, the spectral radius is ρ(A) = 1.
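A quick numerical check of the last claim (a sketch, assuming NumPy): for a stochastic matrix the spectral radius is 1, with the all-ones vector as a right eigenvector.

import numpy as np

P = np.array([[0.0, 1.0, 0.0],
              [1/3, 0.0, 2/3],
              [1/3, 1/3, 1/3]])

rho = max(abs(np.linalg.eigvals(P)))     # spectral radius of P
assert np.isclose(rho, 1.0)              # = 1 for a stochastic matrix

# the all-ones vector is a right eigenvector for eigenvalue 1: P1 = 1
assert np.allclose(P @ np.ones(3), np.ones(3))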