Pretty: Tail bound: Pr[X > t] < ε.
Good: Thresholding: the running time of a Las Vegas algorithm.
Ugly:
• Some cost (e.g. max load).
• The probability of an extreme case.
Tail bound: X follows some distribution;

  Pr[X > t] < f(t, I),

where t is the threshold and I is the information we have about the distribution of X.
Markov's Inequality

Markov's Inequality: For nonnegative X, for any t > 0,

  Pr[X ≥ t] ≤ E[X] / t.

Proof: Let Y = 1 if X ≥ t, and Y = 0 otherwise. Then Y ≤ X/t, so

  Pr[X ≥ t] = E[Y] ≤ E[X/t] = E[X] / t.

Tight if we only know the expectation of X.
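A quick numerical sanity check of Markov's inequality (a sketch; the choice of distribution, the sample size, and the thresholds are illustrative, not from the slides):

  import random

  # X = number of heads in 10 fair coin flips, so X >= 0 and E[X] = 5.
  n_samples = 100_000
  samples = [sum(random.randint(0, 1) for _ in range(10)) for _ in range(n_samples)]
  mean = sum(samples) / n_samples  # empirical estimate of E[X]

  for t in (6, 8, 10):
      tail = sum(1 for x in samples if x >= t) / n_samples  # Pr[X >= t]
      print(f"t={t}: Pr[X>=t] ~ {tail:.4f} <= E[X]/t ~ {mean / t:.4f}")

The bound is loose here (Markov uses only E[X]), which is exactly why the sharper Chebyshev and Chernoff bounds below are worth having.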
Las Vegas to Monte Carlo

• Las Vegas: running time is random, always correct.
• Monte Carlo: running time is fixed, correctness is random.
• A: Las Vegas algorithm with worst-case expected running time T(n).
• B: Monte Carlo algorithm (a runnable sketch follows below):

  B(x):
    run A(x) for 2T(n) steps;
    if A(x) returned, return A(x);
    else return "yes";

One-sided error! By Markov's inequality applied to the running time,

  Pr[error] ≤ Pr[T(A(x)) > 2T(n)] ≤ E[T(A(x))] / (2T(n)) ≤ 1/2.
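A minimal Python sketch of this truncation (the Las Vegas algorithm here is a toy stand-in that finishes each step with probability 0.3; all names and constants are illustrative, not from the slides):

  import random

  def run_A_one_step(rng):
      """One step of a toy Las Vegas algorithm: it terminates on this
      step with probability 0.3, so its expected running time is
      T = 1/0.3 (about 3.33) steps."""
      return rng.random() < 0.3

  def B(x, step_budget, rng):
      """Monte Carlo version: run A for step_budget (playing 2T(n)) steps;
      if A returned, forward its always-correct answer ("is x even?"),
      else answer "yes". B can only err by saying "yes"."""
      for _ in range(step_budget):
          if run_A_one_step(rng):
              return "yes" if x % 2 == 0 else "no"  # A's correct answer
      return "yes"  # A did not finish within the budget

  rng = random.Random(0)
  # On a "no" instance (odd x), B errs exactly when A times out:
  err = sum(B(3, 7, rng) == "yes" for _ in range(100_000)) / 100_000
  print(f"empirical one-sided error: {err:.4f}")  # <= 1/2, by Markov

In this toy model the timeout probability is 0.7^7, about 0.08, well under the 1/2 that Markov guarantees.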
Markov's Inequality

Markov's Inequality: For nonnegative X, for any t > 0,

  Pr[X ≥ t] ≤ E[X] / t.
A Generalization of Markov's Inequality

Theorem: For any X, for h : X → R⁺, for any t > 0,

  Pr[h(X) ≥ t] ≤ E[h(X)] / t.

Proof: h(X) is a nonnegative random variable; apply Markov's inequality to it.

Special cases: Chebyshev, Chernoff (a worked instance of each follows).
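As a worked instance (not spelled out on the slide), taking h(x) = e^{λx} for a parameter λ > 0 yields the moment bound behind the Chernoff technique:

  \Pr[X \ge t]
    = \Pr\left[ e^{\lambda X} \ge e^{\lambda t} \right]
    \le \frac{\mathbf{E}\left[ e^{\lambda X} \right]}{e^{\lambda t}},
    \qquad \text{for any } \lambda > 0,

after which one optimizes over λ.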
Chebyshev's Inequality

Chebyshev's Inequality: For any t > 0,

  Pr[ |X − E[X]| ≥ t ] ≤ Var[X] / t².
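Similarly, Chebyshev's inequality is the instance h(x) = (x − E[X])² of the generalized Markov inequality (a one-line derivation the slides leave implicit):

  \Pr\big[\, |X - \mathbf{E}[X]| \ge t \,\big]
    = \Pr\big[ (X - \mathbf{E}[X])^2 \ge t^2 \big]
    \le \frac{\mathbf{E}\big[ (X - \mathbf{E}[X])^2 \big]}{t^2}
    = \frac{\mathbf{Var}[X]}{t^2}.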
Variance

Definition (variance): The variance of a random variable X is

  Var[X] = E[(X − E[X])²] = E[X²] − (E[X])².

The standard deviation of a random variable X is

  σ[X] = √Var[X].
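The two expressions for Var[X] agree, which is easy to check numerically (a sketch; the toy sample below is an arbitrary choice):

  import random

  rng = random.Random(1)
  xs = [rng.uniform(0, 10) for _ in range(100_000)]
  n = len(xs)
  mean = sum(xs) / n

  var_centered = sum((x - mean) ** 2 for x in xs) / n    # E[(X - E[X])^2]
  var_moments = sum(x * x for x in xs) / n - mean ** 2   # E[X^2] - (E[X])^2

  print(var_centered, var_moments)  # equal up to floating-point rounding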
Covariance

Definition (covariance): The covariance of X and Y is

  Cov(X, Y) = E[(X − E[X])(Y − E[Y])].

Theorem:

  Var[X + Y] = Var[X] + Var[Y] + 2 Cov(X, Y);
  Var[∑_{i=1}^n X_i] = ∑_{i=1}^n Var[X_i] + ∑_{i≠j} Cov(X_i, X_j).
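The two-variable identity follows by expanding the square (a step worth seeing once; the slides state it without proof):

  \mathbf{Var}[X + Y]
    = \mathbf{E}\big[ \big( (X - \mathbf{E}[X]) + (Y - \mathbf{E}[Y]) \big)^2 \big]
    = \mathbf{Var}[X] + \mathbf{Var}[Y] + 2\,\mathbf{Cov}(X, Y),

and the n-variable version is the same expansion with all cross terms collected.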
Covariance

Theorem: For independent X and Y, E[X · Y] = E[X] · E[Y].

Theorem: For independent X and Y, Cov(X, Y) = 0.

Proof: Since X and Y are independent, so are X − E[X] and Y − E[Y]; hence by the first theorem,

  Cov(X, Y) = E[(X − E[X])(Y − E[Y])]
            = E[X − E[X]] · E[Y − E[Y]]
            = 0.
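An empirical illustration (a sketch with arbitrary toy distributions): the sample covariance of independent draws is near zero, while for fully dependent variables (Y = X) it approaches Var[X].

  import random

  rng = random.Random(2)
  n = 100_000
  xs = [rng.gauss(0, 1) for _ in range(n)]
  ys_indep = [rng.gauss(0, 1) for _ in range(n)]  # independent of xs

  def cov(us, vs):
      mu, mv = sum(us) / len(us), sum(vs) / len(vs)
      return sum((u - mu) * (v - mv) for u, v in zip(us, vs)) / len(us)

  print(f"independent: {cov(xs, ys_indep):+.4f}")  # ~ 0
  print(f"dependent:   {cov(xs, xs):+.4f}")        # ~ Var[X] = 1

Note that the converse of the theorem fails in general: zero covariance does not imply independence.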