In the case of a DMC used without feedback, this becomes

$$\hat{m} = f(\mathbf{y}) = \arg\max_m \prod_{n=1}^{N} P(y_n \mid x_{mn})$$

3.3 Codes with Two Codewords – the Bhattacharyya Bound

We will denote the set of received sequences decoded into message $m$ as $D_m$; i.e.,

$$D_m = \{\mathbf{y} \in A_Y^N \mid f(\mathbf{y}) = m\},$$

which is called the decision region for message $m$. Since the output sequence $\mathbf{y}$ is decoded (mapped) to exactly one message, the $D_m$'s form a collection of disjoint sets whose union is $A_Y^N$; i.e.,

$$D_i \cap D_j = \varnothing \ \text{ for all } i \neq j, \qquad \bigcup_i D_i = A_Y^N.$$

Thus, the probability of decoding error, when message $m$ is sent, is defined as

$$P_B(e \mid m) \equiv \Pr(\hat{Z} \neq Z \mid Z = m) = \Pr(\mathbf{y} \notin D_m \mid \mathbf{x}_m) = 1 - \Pr(\mathbf{y} \in D_m \mid \mathbf{x}_m) = \sum_{\mathbf{y} \notin D_m} P_N(\mathbf{y} \mid \mathbf{x}_m) \tag{3.5}$$

The overall probability of decoding error, if the messages have a priori distribution $\Pr(m)$, is then given by

$$P_B(e) = \sum_{m=1}^{M} \Pr(m) \, P_B(e \mid m) \tag{3.6}$$

Equations (3.5) and (3.6) apply to any block code and any channel. In particular, the case is simple when $M = 2$. In this case, the error probability, when message 2 is transmitted, is

$$P_B(e \mid 2) = \sum_{\mathbf{y} \in D_1} P_N(\mathbf{y} \mid \mathbf{x}_2) \tag{3.7}$$

We observe that, for $\mathbf{y} \in D_1$, $P_N(\mathbf{y} \mid \mathbf{x}_1) \geq P_N(\mathbf{y} \mid \mathbf{x}_2)$ for ML decoding. It also implies that $P_N(\mathbf{y} \mid \mathbf{x}_1)^s \geq P_N(\mathbf{y} \mid \mathbf{x}_2)^s$, $0 < s < 1$, and hence that

$$P_N(\mathbf{y} \mid \mathbf{x}_2) \leq P_N(\mathbf{y} \mid \mathbf{x}_2)^{1-s} \, P_N(\mathbf{y} \mid \mathbf{x}_1)^{s} \tag{3.8}$$

Substituting (3.8) into (3.7) and letting $s = 1/2$, we have

$$P_B(e \mid 2) \leq \sum_{\mathbf{y} \in D_1} \sqrt{P_N(\mathbf{y} \mid \mathbf{x}_1) \, P_N(\mathbf{y} \mid \mathbf{x}_2)} \tag{3.9}$$

Similarly,

$$P_B(e \mid 1) \leq \sum_{\mathbf{y} \in D_2} \sqrt{P_N(\mathbf{y} \mid \mathbf{x}_1) \, P_N(\mathbf{y} \mid \mathbf{x}_2)} \tag{3.10}$$