Substituting back we get
\[
\bar P_{e,1} \le (M-1)^{\rho} \sum_y \Big[\sum_{c_1} Q(c_1) P(y|c_1)^{1-s\rho}\Big] \Big[\sum_c Q(c) P(y|c)^{s}\Big]^{\rho}.
\]
Choosing now $s = 1/(1+\rho)$ (this choice in fact minimizes the bound) and observing that for this choice $1 - s\rho = s$, and that
\[
\Big[\sum_{c_1} Q(c_1) P(y|c_1)^{1-s\rho}\Big] = \Big[\sum_c Q(c) P(y|c)^{s}\Big]
\]
since the two summations differ only by the summation index, we get
\[
\bar P_{e,1} \le (M-1)^{\rho} \sum_y \Big[\sum_c Q(c) P(y|c)^{1/(1+\rho)}\Big]^{1+\rho}.
\]
If we now specialize this theorem to discrete memoryless channels and if we choose $Q(c) = \prod_{i=1}^{n} Q(c_i)$, we get that for every $\rho \in [0,1]$,
\[
\bar P_{e,1} \le (M-1)^{\rho} \bigg( \sum_y \Big[\sum_x Q(x) P(y|x)^{1/(1+\rho)}\Big]^{1+\rho} \bigg)^{n}.
\]
If $M = \lceil e^{nR} \rceil$, then $(M-1) \le e^{nR}$, and we can summarize the above as

Theorem 1. Given a discrete memoryless channel described by $P(y|x)$, for any blocklength $n$ and any $R \ge 0$, consider constructing a random block code with $M = \lceil e^{nR} \rceil$ codewords by choosing each letter of each codeword independently according to a distribution $Q$ on $\mathcal{X}$. Then, the expected average error probability of this random code satisfies
\[
\bar P_{e,\mathrm{ave}} \le \exp\Big\{ -n \max_{\rho \in [0,1]} \big[ E_0(\rho, Q) - \rho R \big] \Big\}
\]
where
\[
E_0(\rho, Q) = -\ln \sum_y \Big[\sum_x Q(x) P(y|x)^{1/(1+\rho)}\Big]^{1+\rho}.
\]

Since the expected error probability cannot be better than the error probability of the best code, we also get

Corollary 1. Given a discrete memoryless channel described by $P(y|x)$, for any distribution $Q$ on $\mathcal{X}$, any blocklength $n$ and any $R > 0$, there exists a code of blocklength $n$ and rate at least $R$ with
\[
P_{e,\mathrm{ave}} \le \exp\{ -n E_r(R, Q) \}
\]
where $E_r(R, Q) = \max_{\rho \in [0,1]} [E_0(\rho, Q) - \rho R]$.

Note that the above corollary establishes the existence of codes of a certain rate with a guarantee on their $P_{e,\mathrm{ave}}$, but suggests no mechanism to find them. However, if one does carry out the experiment of constructing a code by randomly choosing its codewords, the probability that the code obtained will be much worse than the average is small; in particular, the Markov inequality tells us that the probability that a code constructed as such has $P_{e,\mathrm{ave}}$ larger than $\alpha \bar P_{e,\mathrm{ave}}$ is small:
\[
\Pr[P_{e,\mathrm{ave}} \ge \alpha \bar P_{e,\mathrm{ave}}] \le 1/\alpha \quad \text{for } \alpha > 1.
\]
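To make $E_0(\rho, Q)$ and $E_r(R, Q)$ concrete, here is a minimal numerical sketch. The binary symmetric channel with crossover probability $0.1$ and the uniform input distribution are illustrative assumptions, not part of the notes; the formulas themselves are the ones from Theorem 1 and Corollary 1, with the maximization over $\rho$ done by a simple grid search.

```python
import numpy as np

def E0(rho, Q, P):
    """E0(rho, Q) = -ln sum_y [ sum_x Q(x) P(y|x)^(1/(1+rho)) ]^(1+rho).
    Q: input distribution, shape (|X|,); P: channel matrix with P[x, y] = P(y|x)."""
    inner = (Q[:, None] * P ** (1.0 / (1.0 + rho))).sum(axis=0)  # sum over x, one value per y
    return -np.log((inner ** (1.0 + rho)).sum())

def Er(R, Q, P, grid=np.linspace(0.0, 1.0, 1001)):
    """Er(R, Q) = max over rho in [0, 1] of E0(rho, Q) - rho * R (grid search)."""
    return max(E0(rho, Q, P) - rho * R for rho in grid)

delta = 0.1                                    # illustrative BSC crossover probability
P = np.array([[1 - delta, delta],
              [delta, 1 - delta]])             # P[x, y] = P(y|x)
Q = np.array([0.5, 0.5])                       # uniform input distribution

# Sanity checks: E0(0, Q) = 0, and since the slope of E0 at rho = 0 is the
# mutual information I(Q), Er(R, Q) is strictly positive for R below capacity
# (here C(BSC(0.1)) is about 0.368 nats/symbol) and zero for R above it.
print(Er(0.2, Q, P))   # positive exponent at a rate below capacity
print(Er(0.7, Q, P))   # zero exponent at a rate above capacity
```

The exponent shrinks as $R$ approaches capacity, which is the usual picture: the guaranteed decay rate $e^{-n E_r(R,Q)}$ weakens near capacity and vanishes above it.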
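The closing remark about randomly constructed codes rarely being much worse than the average can be checked empirically. The sketch below draws many i.i.d. nonnegative values standing in for the $P_{e,\mathrm{ave}}$ of independently constructed random codes (an exponential distribution is an arbitrary stand-in, not a channel model) and verifies the Markov bound $\Pr[P_{e,\mathrm{ave}} \ge \alpha \bar P_{e,\mathrm{ave}}] \le 1/\alpha$.

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.exponential(scale=1e-3, size=200_000)  # stand-in draws of P_e,ave
mean = samples.mean()                                # empirical analogue of the expectation

for alpha in (2.0, 5.0, 10.0):
    frac = (samples >= alpha * mean).mean()          # empirical Pr[P >= alpha * mean]
    assert frac <= 1.0 / alpha                       # Markov: at most a 1/alpha fraction
```

Markov's inequality only needs nonnegativity, so the check passes for any choice of stand-in distribution; the exponential here just makes the tail visible at small $\alpha$.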