where we use the memoryless and without-feedback conditions at the fourth equality. From now on, we will restrict our attention to channels used without feedback.
With some abuse of notation we will let $P(y|x)$ denote the probability of receiving the sequence $y = y_1^n$ at the output of the channel when the channel input is the sequence $x = x_1^n$. If the channel is memoryless, we see from above that
$$P(y|x) = \prod_{k=1}^{n} P(y_k|x_k).$$

2. Block Codes

A block code with $M$ messages and block length $n$ is a mapping from a set of $M$ messages $\{1, \ldots, M\}$ to channel input sequences of length $n$. Thus, a block code is specified when we specify the $M$ channel input sequences
$$c_1 = (c_{1,1}, \ldots, c_{1,n}), \quad \ldots, \quad c_M = (c_{M,1}, \ldots, c_{M,n})$$
the messages are mapped into. We will call $c_m$ the codeword for message $m$. To send message $m$ with such a block code we simply give the sequence $c_m$ to the channel as input.

A decoder for such a block code is a mapping from channel output sequences $\mathcal{Y}^n$ to the set of $M$ messages $\{1, \ldots, M\}$. For a given decoder, let $D_m \subset \mathcal{Y}^n$ denote the set of channel outputs which are mapped to message $m$. Since an output sequence $y$ is mapped to exactly one message, the $D_m$'s form a collection of disjoint sets whose union is $\mathcal{Y}^n$.

We define the rate of a block code with $M$ messages and block length $n$ as $\frac{\ln M}{n}$, and given such a code and a decoder we define
$$P_{e,m} = \sum_{y \notin D_m} P(y|c_m),$$
the probability of a decoding error when message $m$ is sent. Further define
$$P_{e,\mathrm{ave}} = \frac{1}{M} \sum_{m=1}^{M} P_{e,m} \quad \text{and} \quad P_{e,\max} = \max_{1 \le m \le M} P_{e,m}$$
as the average and maximal (both over the possible messages) error probability of such a code and decoder.

Among many possible decoding methods, the rule that minimizes $P_{e,\mathrm{ave}}$ is the maximum likelihood rule. Given a channel output sequence $y$, the maximum likelihood rule decodes a message $m$ for which
$$P(y|c_m) \ge P(y|c_{m'}) \quad \text{for every } m' \ne m,$$
and if there is more than one such $m$, chooses one of them arbitrarily. We will restrict ourselves in the following to the maximum likelihood rule.

3. Error probability for two codewords

Consider now the case when $M = 2$, so the block code consists of two codewords, $c_1$ and $c_2$.
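As a concrete check of the definitions above, the following sketch brute-forces the maximum likelihood decoding regions $D_m$ and the resulting error probabilities for a two-codeword code. The channel (a binary symmetric channel), its crossover probability $p = 0.1$, and the codewords are illustrative choices of mine, not from the text:

```python
import itertools

# Binary symmetric channel with crossover probability p (illustrative choice).
p = 0.1

def channel_prob(y, x, p):
    """P(y|x) for a memoryless BSC: product of per-symbol probabilities."""
    prob = 1.0
    for yk, xk in zip(y, x):
        prob *= (1 - p) if yk == xk else p
    return prob

# Two hypothetical codewords of block length n = 3.
codewords = [(0, 0, 0), (1, 1, 1)]
n = 3
outputs = list(itertools.product((0, 1), repeat=n))

# Maximum likelihood decoder: map output y to a message m maximizing
# P(y|c_m); ties (none occur for this code) broken by lowest index.
def ml_decode(y):
    return max(range(len(codewords)),
               key=lambda m: channel_prob(y, codewords[m], p))

# P_{e,m}: total probability of outputs NOT decoded to m, given c_m sent.
P_e = [sum(channel_prob(y, c, p) for y in outputs if ml_decode(y) != m)
       for m, c in enumerate(codewords)]

P_ave = sum(P_e) / len(P_e)   # average error probability
P_max = max(P_e)              # maximal error probability
print(P_e, P_ave, P_max)      # both P_{e,m} equal 3p^2(1-p) + p^3 = 0.028
```

For this symmetric pair of codewords the ML rule reduces to majority voting, so $P_{e,1} = P_{e,2} = 3p^2(1-p) + p^3$, and average and maximal error probabilities coincide.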
We will find a bound on $P_{e,m}$ for the maximum likelihood decoding rule.
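A classical bound in this two-codeword setting is the Bhattacharyya bound, $P_{e,m} \le \sum_y \sqrt{P(y|c_1)P(y|c_2)}$; whether this is the bound derived in what follows is my assumption. A quick numerical sanity check over a binary symmetric channel, with illustrative parameters and codewords of my own choosing:

```python
import itertools
import math

p = 0.1                               # BSC crossover probability (illustrative)
c1, c2 = (0, 0, 1, 0), (1, 1, 1, 1)   # hypothetical codewords, n = 4
outputs = list(itertools.product((0, 1), repeat=len(c1)))

def channel_prob(y, x):
    # Memoryless channel: P(y|x) = prod_k P(y_k|x_k)
    prob = 1.0
    for yk, xk in zip(y, x):
        prob *= (1 - p) if yk == xk else p
    return prob

# Exact ML error probability for message 1 (ties are decoded to message 1
# here, so only outputs strictly favoring c2 count as errors).
P_e1 = sum(channel_prob(y, c1) for y in outputs
           if channel_prob(y, c2) > channel_prob(y, c1))

# Bhattacharyya bound: sum over all outputs of sqrt(P(y|c1) * P(y|c2)).
bhattacharyya = sum(math.sqrt(channel_prob(y, c1) * channel_prob(y, c2))
                    for y in outputs)

print(P_e1, bhattacharyya)   # the bound dominates the exact value
```

For codewords at Hamming distance $d$ over a BSC the bound factors into $(2\sqrt{p(1-p)})^d$; here $d = 3$ gives $0.6^3 = 0.216$, comfortably above the exact error probability $0.028$.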