$$\sum_{\mathbf{y}\in D_{m'}} P_N(\mathbf{y}\,|\,\mathbf{x}_m) \;\le\; \sum_{\mathbf{y}\in D_{m'}} \sqrt{P_N(\mathbf{y}\,|\,\mathbf{x}_m)\,P_N(\mathbf{y}\,|\,\mathbf{x}_{m'})} \;\le\; \sum_{\mathbf{y}} \sqrt{P_N(\mathbf{y}\,|\,\mathbf{x}_m)\,P_N(\mathbf{y}\,|\,\mathbf{x}_{m'})} \tag{3.17}$$

Substituting (3.17) into (3.15), we obtain the so-called union Bhattacharyya bound:

$$P_B(e\,|\,m) \;\le\; \sum_{\substack{m'=1 \\ m'\ne m}}^{M} \sum_{\mathbf{y}} \sqrt{P_N(\mathbf{y}\,|\,\mathbf{x}_m)\,P_N(\mathbf{y}\,|\,\mathbf{x}_{m'})}, \qquad m = 1, 2, \ldots, M. \tag{3.18}$$

In particular, for memoryless channels,

$$P_B(e\,|\,m) \;\le\; \sum_{\substack{m'=1 \\ m'\ne m}}^{M} \prod_{n=1}^{N} \sum_{y} \sqrt{P(y\,|\,x_{m,n})\,P(y\,|\,x_{m',n})}, \qquad m = 1, 2, \ldots, M. \tag{3.19}$$

3.4.2 Gallager Bound

For an ML decoder, we know that $\mathbf{y}\notin D_m \Rightarrow P_N(\mathbf{y}\,|\,\mathbf{x}_{m'}) \ge P_N(\mathbf{y}\,|\,\mathbf{x}_m)$ for some $m' \ne m$. Then we have

$$\sum_{\substack{m'=1 \\ m'\ne m}}^{M} \left[\frac{P_N(\mathbf{y}\,|\,\mathbf{x}_{m'})}{P_N(\mathbf{y}\,|\,\mathbf{x}_m)}\right]^{s} \;\ge\; 1, \qquad \text{for any } s > 0,$$

since at least one term in the summation is itself at least one, and hence the sum is at least one. We can raise both sides of this inequality to the power of a non-negative parameter $\rho \ge 0$ and preserve the inequality, i.e.,

$$\mathbf{y}\notin D_m \;\Rightarrow\; \left\{ \sum_{\substack{m'=1 \\ m'\ne m}}^{M} \left[\frac{P_N(\mathbf{y}\,|\,\mathbf{x}_{m'})}{P_N(\mathbf{y}\,|\,\mathbf{x}_m)}\right]^{s} \right\}^{\rho} \ge 1, \qquad \text{for any } s>0 \text{ and any } \rho \ge 0. \tag{3.20}$$

Note that (3.20) can be written as

$$P_N(\mathbf{y}\,|\,\mathbf{x}_m)^{-s\rho} \left[ \sum_{\substack{m'=1 \\ m'\ne m}}^{M} P_N(\mathbf{y}\,|\,\mathbf{x}_{m'})^{s} \right]^{\rho} \ge 1, \qquad \text{any } s>0 \text{ and any } \rho \ge 0. \tag{3.21}$$

By multiplying each of the terms in the summation in (3.5) by the L.H.S. of (3.21), we see that the error probability is upper-bounded by

$$P_B(e\,|\,m) \;\le\; \sum_{\mathbf{y}\notin D_m} P_N(\mathbf{y}\,|\,\mathbf{x}_m)^{1-s\rho} \left[ \sum_{\substack{m'=1 \\ m'\ne m}}^{M} P_N(\mathbf{y}\,|\,\mathbf{x}_{m'})^{s} \right]^{\rho}. \tag{3.22}$$

Letting $s = 1/(1+\rho)$ (so that $1-s\rho = 1/(1+\rho)$) and extending the summation in (3.22) to all $\mathbf{y}$, we have the so-called Gallager bound.

Gallager Bound: When the code $C = \{\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_M\}$ of length $N$ is decoded by an ML decoder, then

$$P_B(e\,|\,m) \;\le\; \sum_{\mathbf{y}} P_N(\mathbf{y}\,|\,\mathbf{x}_m)^{\frac{1}{1+\rho}} \left[ \sum_{\substack{m'=1 \\ m'\ne m}}^{M} P_N(\mathbf{y}\,|\,\mathbf{x}_{m'})^{\frac{1}{1+\rho}} \right]^{\rho}, \qquad \text{for any } \rho \ge 0.$$
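The two bounds above can be checked numerically. The sketch below (an illustrative example, not from the text) evaluates the union Bhattacharyya bound (3.18)/(3.19) and the Gallager bound for a binary symmetric channel with an assumed crossover probability and a small two-codeword code, brute-forcing the sum over all received vectors $\mathbf{y}$. At $\rho = 1$ (hence $s = 1/2$) the Gallager bound reduces term-by-term to the union Bhattacharyya bound, which the example confirms.

```python
# Numerical check of the union Bhattacharyya bound (3.18)/(3.19) and the
# Gallager bound on a BSC. The code, crossover probability p, and block
# length N below are illustrative choices, not values from the text.
from itertools import product
from math import prod, sqrt

def bsc_likelihood(y, x, p):
    """P_N(y|x) for a memoryless BSC with crossover probability p."""
    return prod(p if yi != xi else 1 - p for yi, xi in zip(y, x))

def union_bhattacharyya(code, m, p):
    """Union Bhattacharyya bound (3.18) on P_B(e|m). For a BSC the
    per-symbol factor in (3.19) is 2*sqrt(p(1-p)) when the symbols of
    x_m and x_m' differ, and 1 when they agree."""
    bound = 0.0
    for mp, xmp in enumerate(code):
        if mp == m:
            continue
        d = sum(a != b for a, b in zip(code[m], xmp))  # Hamming distance
        bound += (2 * sqrt(p * (1 - p))) ** d
    return bound

def gallager(code, m, p, rho):
    """Gallager bound: sum over all y of
    P(y|x_m)^(1/(1+rho)) * [sum_{m'!=m} P(y|x_m')^(1/(1+rho))]^rho."""
    N = len(code[0])
    s = 1.0 / (1.0 + rho)
    total = 0.0
    for y in product((0, 1), repeat=N):  # brute force over all 2^N vectors
        inner = sum(bsc_likelihood(y, xmp, p) ** s
                    for mp, xmp in enumerate(code) if mp != m)
        total += bsc_likelihood(y, code[m], p) ** s * inner ** rho
    return total

# Example: M = 2 codewords of length N = 5 (a repetition code), p = 0.1.
code = [(0, 0, 0, 0, 0), (1, 1, 1, 1, 1)]
ub = union_bhattacharyya(code, 0, 0.1)
g1 = gallager(code, 0, 0.1, 1.0)   # rho = 1 recovers the Bhattacharyya bound
print(ub, g1)
```

For this code the Bhattacharyya bound evaluates to $(2\sqrt{0.1\cdot 0.9})^5 = 0.6^5$, and `g1` matches it, as expected from setting $\rho = 1$ in the Gallager bound; smaller $\rho$ can then be swept to tighten the bound.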