P_B(e \mid m) \le \sum_{\mathbf{y}} P_N(\mathbf{y} \mid \mathbf{x}_m)^{1/(1+\rho)} \Big[ \sum_{m' \ne m} P_N(\mathbf{y} \mid \mathbf{x}_{m'})^{1/(1+\rho)} \Big]^{\rho},   m = 1, ..., M and any \rho \ge 0    (3.23)

If the channel is memoryless, it becomes

P_B(e \mid m) \le \prod_{n=1}^{N} \sum_{y} P(y \mid x_{mn})^{1/(1+\rho)} \Big[ \sum_{m'=1, m' \ne m}^{M} P(y \mid x_{m'n})^{1/(1+\rho)} \Big]^{\rho}    (3.24)

We can see that, when optimized by choice of \rho, the Gallager bound is strictly tighter than the Bhattacharyya bound except when \rho = 1. (It is clear that the union bound (3.18) is a special case of the Gallager bound obtained by setting \rho = 1.) Notice that unless the code and channel possess a high degree of simplifying symmetry, both bounds are too complicated to calculate in most practical cases.

The Gallager bound is sometimes expressed as

P_B(e \mid m) \le \sum_{\mathbf{y}} P_N(\mathbf{y} \mid \mathbf{x}_m) \Big[ \sum_{m' \ne m} \Big( \frac{P_N(\mathbf{y} \mid \mathbf{x}_{m'})}{P_N(\mathbf{y} \mid \mathbf{x}_m)} \Big)^{s} \Big]^{\rho},   s > 0, \rho \ge 0    (3.25)

3.5 Ensemble Average Performance of Codes with Two Codewords

In this section, we consider random coding and evaluate the average P_B(e|m) for codes with two codewords. Suppose that, for a given channel and given N, one has calculated P_B(e|m) with ML decoding for every length-N block code with two codewords. Note that the error probabilities P_B(e|m) are now (random) variables, dependent on the specific code used. To make the dependence on the code explicit, we write P_{e|m}(\mathbf{x}_1, \mathbf{x}_2) to denote P_B(e|m) for some ML decoder for the particular code C = \{\mathbf{x}_1, \mathbf{x}_2\}.

Let Q_N(\mathbf{x}) be an arbitrary probability assignment on the set of channel input sequences of length N. Now consider the ensemble of codes where the two codewords are selected independently using the probability assignment Q_N(\mathbf{x}). The expected value of P_B(e|m) over the ensemble is then given by

\overline{P_B(e \mid m)} = \sum_{\mathbf{x}_1} \sum_{\mathbf{x}_2} P_{e|m}(\mathbf{x}_1, \mathbf{x}_2) \, Q_N(\mathbf{x}_1) Q_N(\mathbf{x}_2),   m = 1, 2

By symmetry, \overline{P_B(e \mid 1)} = \overline{P_B(e \mid 2)}.
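To make the Gallager bound concrete, the following minimal sketch (not part of the original notes; the binary symmetric channel, the four length-5 codewords, and the grid search over \rho are illustrative assumptions) evaluates the right-hand side of (3.23) by brute force and compares the value minimized over \rho with the \rho = 1 (union-Bhattacharyya) value.

```python
import itertools
import numpy as np

def p_seq(y, x, p):
    """Memoryless BSC likelihood P_N(y|x) = prod_n P(y_n|x_n)."""
    d = sum(yi != xi for yi, xi in zip(y, x))      # Hamming distance
    return (p ** d) * ((1 - p) ** (len(y) - d))

def gallager_bound(code, p, m, rho):
    """Brute-force evaluation of the right-hand side of (3.23)."""
    N = len(code[0])
    total = 0.0
    for y in itertools.product((0, 1), repeat=N):
        a = p_seq(y, code[m], p) ** (1.0 / (1.0 + rho))
        b = sum(p_seq(y, code[mp], p) ** (1.0 / (1.0 + rho))
                for mp in range(len(code)) if mp != m)
        total += a * (b ** rho)
    return total

# Illustrative code (M = 4 codewords, N = 5) and BSC crossover probability.
code = [(0, 0, 0, 0, 0), (1, 1, 1, 0, 0), (0, 0, 1, 1, 1), (1, 1, 0, 1, 1)]
p = 0.05

rhos = np.linspace(0.05, 2.0, 40)                  # grid includes rho = 1
bounds = [gallager_bound(code, p, m=0, rho=r) for r in rhos]
print("rho = 1 (union-Bhattacharyya) bound:", gallager_bound(code, p, 0, 1.0))
print("bound minimized over the rho grid  :", min(bounds))
```

Because the outer sum runs over all 2^N output sequences, this direct evaluation is only feasible for very short block lengths, which echoes the remark above that the bounds are rarely computable exactly in practice.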
From (3.12), \overline{P_B(e \mid m)} is upper-bounded by

\overline{P_B(e \mid m)} \le \sum_{\mathbf{x}_1} \sum_{\mathbf{x}_2} \sum_{\mathbf{y}} \sqrt{P_N(\mathbf{y} \mid \mathbf{x}_1) P_N(\mathbf{y} \mid \mathbf{x}_2)} \, Q_N(\mathbf{x}_1) Q_N(\mathbf{x}_2)
  = \sum_{\mathbf{y}} \Big[ \sum_{\mathbf{x}_1} \sqrt{P_N(\mathbf{y} \mid \mathbf{x}_1)} \, Q_N(\mathbf{x}_1) \Big] \Big[ \sum_{\mathbf{x}_2} \sqrt{P_N(\mathbf{y} \mid \mathbf{x}_2)} \, Q_N(\mathbf{x}_2) \Big],   m = 1, 2    (3.26)

Note that \mathbf{x}_1 and \mathbf{x}_2 in (3.26) are simply dummy variables of summation, so (3.26) may be reduced to

\overline{P_B(e \mid m)} \le \sum_{\mathbf{y}} \Big[ \sum_{\mathbf{x}} Q_N(\mathbf{x}) \sqrt{P_N(\mathbf{y} \mid \mathbf{x})} \Big]^2,   m = 1, 2
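As a numerical check of the dummy-variable argument, the sketch below (again an illustration, assuming a binary symmetric channel and an i.i.d. input assignment Q_N with per-letter probability Q(1) = q) evaluates the triple sum in (3.26) and the reduced single-sum form and confirms that they coincide.

```python
import itertools
from math import sqrt

def p_seq(y, x, p):
    """Memoryless BSC likelihood P_N(y|x)."""
    d = sum(yi != xi for yi, xi in zip(y, x))
    return (p ** d) * ((1 - p) ** (len(y) - d))

def q_seq(x, q):
    """i.i.d. input assignment Q_N(x) = prod_n Q(x_n), with Q(1) = q."""
    w = sum(x)
    return (q ** w) * ((1 - q) ** (len(x) - w))

N, p, q = 4, 0.1, 0.5                       # illustrative parameter choices
seqs = list(itertools.product((0, 1), repeat=N))

# Triple sum of (3.26) over x1, x2 and y.
triple = sum(q_seq(x1, q) * q_seq(x2, q) * sqrt(p_seq(y, x1, p) * p_seq(y, x2, p))
             for x1 in seqs for x2 in seqs for y in seqs)

# Reduced form: sum over y of [ sum over x of Q_N(x) * sqrt(P_N(y|x)) ]^2.
reduced = sum(sum(q_seq(x, q) * sqrt(p_seq(y, x, p)) for x in seqs) ** 2
              for y in seqs)

print(triple, reduced)                      # the two evaluations agree
```

For a memoryless channel and an i.i.d. assignment Q_N, the inner bracket in the reduced form factors over the N letters, which is what makes the ensemble-average bound tractable even when the bound for a particular code is not.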