
Nanjing University: Randomized Algorithms (随机算法), course materials (lecture slides): Moments

Resource type: document library; format: PDF; 51 pages; 8.87 MB.

Tail bound: Pr[X > t] < ε.

Thresholding (good / pretty / ugly), where X may be:
• the running time of a Las Vegas algorithm;
• some cost (e.g. the max load);
• the probability of an extreme case.

Tail bound: X follows some distribution; we want a bound of the form Pr[X > t] < f(t, I).

Markov's Inequality

Markov's Inequality: For nonnegative X, for any t > 0,
    Pr[X ≥ t] ≤ E[X] / t.

Proof: Let Y = 1 if X ≥ t, and Y = 0 otherwise. Then Y ≤ X/t, so
    Pr[X ≥ t] = E[Y] ≤ E[X/t] = E[X] / t.

The bound is tight if we only know the expectation of X.
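A quick simulation makes the bound concrete. The sketch below is not part of the slides; the exponential distribution and the name empirical_markov_check are illustrative choices. It compares the empirical tail probability of a nonnegative variable with the Markov bound E[X]/t.

    # Sanity check of Markov's inequality on a nonnegative variable
    # (illustrative sketch, not part of the original slides).
    import random

    def empirical_markov_check(samples=100_000, t=5.0, mean=1.0):
        # X ~ Exponential with E[X] = mean, so X is nonnegative.
        xs = [random.expovariate(1.0 / mean) for _ in range(samples)]
        tail = sum(x >= t for x in xs) / samples        # Pr[X >= t]
        bound = (sum(xs) / samples) / t                 # E[X] / t
        return tail, bound

    tail, bound = empirical_markov_check()
    print(f"Pr[X >= t] ~ {tail:.4f}  <=  E[X]/t ~ {bound:.4f}")

With mean 1 and t = 5, the true tail is e⁻⁵ ≈ 0.0067, comfortably below the Markov bound of 0.2, which illustrates that Markov's inequality can be very loose when only the expectation is used.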

Las Vegas to Monte Carlo

• Las Vegas: the running time is random, but the answer is always correct.
  A: a Las Vegas algorithm with worst-case expected running time T(n).
• Monte Carlo: the running time is fixed, but the answer is correct only with some probability.
  B: a Monte Carlo algorithm built from A:

B(x):
    run A(x) for 2T(n) steps;
    if A(x) returned, return A(x);
    else return "yes";        (one-sided error!)

By Markov's inequality,
    Pr[error] ≤ Pr[T(A(x)) > 2T(n)] ≤ E[T(A(x))] / (2T(n)) ≤ 1/2.
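A minimal sketch of this conversion, assuming a toy Las Vegas procedure that can be cut off after a step budget (the names las_vegas, monte_carlo, and the search problem itself are illustrative, not from the slides):

    # Toy illustration of the Las-Vegas-to-Monte-Carlo conversion above.
    import random

    def las_vegas(x, max_steps):
        # Toy Las Vegas algorithm: repeatedly sample a random position until
        # the unique marked position is found. When it returns, the answer is
        # always correct; the expected number of iterations is len(x).
        for _ in range(max_steps):
            i = random.randrange(len(x))
            if x[i]:
                return i
        return None                      # step budget exhausted

    def monte_carlo(x, expected_time):
        # Run the Las Vegas algorithm for 2*T(n) steps; on a timeout answer
        # "yes" (one-sided error, as in the analysis above).
        ans = las_vegas(x, max_steps=2 * expected_time)
        return ans if ans is not None else "yes"

    x = [False] * 99 + [True]
    random.shuffle(x)
    print(monte_carlo(x, expected_time=len(x)))

Here the toy Las Vegas search has worst-case expected running time n, so cutting it off after 2n iterations times out with probability at most 1/2 by exactly the Markov argument above.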

Markov's Inequality

Markov's Inequality: For nonnegative X, for any t > 0,
    Pr[X ≥ t] ≤ E[X] / t.

A Generalization of Markov's Inequality

Theorem: For any X, for any function h mapping the values of X to nonnegative reals, and for any t > 0,
    Pr[h(X) ≥ t] ≤ E[h(X)] / t.

Suitable choices of h yield Chebyshev's inequality and the Chernoff bound.
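For instance (this intermediate step is implicit in the slides), take h(x) = (x − E[X])², which is nonnegative, and apply the theorem with threshold t²:

    Pr[ |X − E[X]| ≥ t ] = Pr[ (X − E[X])² ≥ t² ] ≤ E[(X − E[X])²] / t² = Var[X] / t²,

which is exactly Chebyshev's inequality on the next slide.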

Chebyshev's Inequality

Chebyshev's Inequality: For any t > 0,
    Pr[ |X − E[X]| ≥ t ] ≤ Var[X] / t².
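A numerical sanity check, analogous to the Markov check above (illustrative, not from the slides), using X = number of heads in 100 fair coin flips, so that Var[X] = 25:

    # Empirical check of Chebyshev's inequality for X ~ Binomial(100, 1/2)
    # (illustrative sketch, not part of the original slides).
    import random

    n, t, samples = 100, 10.0, 20_000
    xs = [sum(random.random() < 0.5 for _ in range(n)) for _ in range(samples)]
    mean = sum(xs) / samples
    var = sum((x - mean) ** 2 for x in xs) / samples
    dev = sum(abs(x - mean) >= t for x in xs) / samples   # Pr[|X - E[X]| >= t]
    print(f"Pr[|X - E[X]| >= {t}] ~ {dev:.4f}  <=  Var[X]/t^2 ~ {var / t**2:.4f}")

The true deviation probability here is about 0.06 while the Chebyshev bound is 0.25, so the bound holds but is not tight for this distribution.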

Variance

Definition (variance): The variance of a random variable X is
    Var[X] = E[(X − E[X])²] = E[X²] − (E[X])².

The standard deviation of a random variable X is
    σ[X] = √Var[X].
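As a worked example (the die is not in the slides, only an illustration): let X be the outcome of a fair six-sided die. Then

    E[X] = (1 + 2 + 3 + 4 + 5 + 6)/6 = 7/2,
    E[X²] = (1 + 4 + 9 + 16 + 25 + 36)/6 = 91/6,
    Var[X] = 91/6 − (7/2)² = 35/12 ≈ 2.92,    σ[X] = √(35/12) ≈ 1.71,

and computing E[(X − 7/2)²] directly gives the same 35/12, matching the two forms of the definition.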

Covariance

Definition (covariance): The covariance of X and Y is
    Cov(X, Y) = E[ (X − E[X]) (Y − E[Y]) ].

Theorem:
    Var[X + Y] = Var[X] + Var[Y] + 2 Cov(X, Y);
    Var[X₁ + ⋯ + Xₙ] = ∑ᵢ₌₁ⁿ Var[Xᵢ] + ∑_{i≠j} Cov(Xᵢ, Xⱼ).
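An empirical check of the first identity, using a deliberately correlated pair (the construction X = Z + noise, Y = Z + noise is illustrative, not from the slides):

    # Check Var[X+Y] = Var[X] + Var[Y] + 2 Cov(X,Y) on correlated samples
    # (illustrative sketch, not part of the original slides).
    import random

    samples = 100_000
    zs = [random.gauss(0, 1) for _ in range(samples)]
    xs = [z + random.gauss(0, 1) for z in zs]     # X and Y share the component Z,
    ys = [z + random.gauss(0, 1) for z in zs]     # so Cov(X, Y) is about 1

    def mean(v):
        return sum(v) / len(v)

    def cov(u, v):
        mu, mv = mean(u), mean(v)
        return mean([(a - mu) * (b - mv) for a, b in zip(u, v)])

    s = [x + y for x, y in zip(xs, ys)]
    lhs = cov(s, s)                                       # Var[X + Y]
    rhs = cov(xs, xs) + cov(ys, ys) + 2 * cov(xs, ys)     # right-hand side
    print(f"Var[X+Y] ~ {lhs:.3f}   Var[X]+Var[Y]+2Cov(X,Y) ~ {rhs:.3f}")

Because both sides are computed from the same mean-centred sums, the two printed values agree up to floating-point error even for finitely many samples.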

Covariance

Theorem: For independent X and Y, E[X · Y] = E[X] · E[Y].

Theorem: For independent X and Y, Cov(X, Y) = 0.

Proof: Cov(X, Y) = E[ (X − E[X]) (Y − E[Y]) ]
                 = E[X − E[X]] · E[Y − E[Y]]      (X − E[X] and Y − E[Y] are independent)
                 = 0.


