16.36: Communication Systems Engineering
Lectures 12/13: Channel Capacity and Coding
Eytan Modiano
Channel Coding

• When transmitting over a noisy channel, some of the bits are received with errors
  Example: Binary Symmetric Channel (BSC)

• Q: How can these errors be removed?
• A: Coding: the addition of redundant bits that help us determine what was sent with greater accuracy

[Figure: BSC diagram — input 0 or 1 is received correctly with probability 1-Pe and flipped with probability Pe, where Pe = probability of error]
Example (Repetition code)

• Repeat each bit n times (n odd)

  Input    Code
  0        000……0
  1        111……1

• Decoder: if the received sequence contains n/2 or more 1's, decode as a 1, and as a 0 otherwise
  – Maximum likelihood decoding

P(error | 1 sent) = P(error | 0 sent) = P[more than n/2 bit errors occur]

  = \sum_{i=\lceil n/2 \rceil}^{n} \binom{n}{i} P_e^i (1 - P_e)^{n-i}
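A small sketch (my own illustration, not from the lecture; the function names are assumptions) of the repetition code: encoding, majority-vote decoding, and the block error probability given by the sum above.

```python
import math

def repetition_encode(bits, n):
    """Repeat each data bit n times (n odd)."""
    return [b for b in bits for _ in range(n)]

def repetition_decode(channel_bits, n):
    """Majority vote: decode 1 if a block of n bits contains n/2 or more 1's."""
    out = []
    for i in range(0, len(channel_bits), n):
        block = channel_bits[i:i + n]
        out.append(1 if sum(block) >= n / 2 else 0)
    return out

def repetition_error_prob(n, pe):
    """P(error) = sum over i > n/2 of C(n,i) * Pe^i * (1-Pe)^(n-i)."""
    return sum(math.comb(n, i) * pe**i * (1 - pe)**(n - i)
               for i in range((n + 1) // 2, n + 1))

# Example: Pe = 0.1, n = 5  ->  error probability per data bit ≈ 0.0086
print(repetition_error_prob(5, 0.1))
```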
Repetition code, cont.

• For Pe < 1/2, P(error) is decreasing in n
  – ⇒ for any ε, ∃ n large enough so that P(error) < ε

• Code rate: ratio of data bits to transmitted bits
  – For the repetition code, R = 1/n
  – To send one data bit, must transmit n channel bits ("bandwidth expansion")

• In general, an (n,k) code uses n channel bits to transmit k data bits
  – Code rate R = k/n

• Goal: for a desired error probability ε, find the highest rate code that can achieve P(error) < ε
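As a concrete instance of the goal above, restricted to repetition codes: the sketch below (my own, not from the lecture) finds the smallest odd n, and hence the highest repetition-code rate R = 1/n, that meets a target error probability ε on a BSC.

```python
import math

def repetition_error_prob(n, pe):
    """P(error) for an n-fold repetition code on a BSC with crossover pe."""
    return sum(math.comb(n, i) * pe**i * (1 - pe)**(n - i)
               for i in range((n + 1) // 2, n + 1))

def smallest_repetition_length(pe, eps, n_max=1001):
    """Smallest odd n with P(error) < eps; the corresponding rate is R = 1/n."""
    for n in range(1, n_max + 1, 2):
        if repetition_error_prob(n, pe) < eps:
            return n
    return None

# Example: Pe = 0.1, eps = 1e-3  ->  n = 9, i.e. rate R = 1/9
print(smallest_repetition_length(0.1, 1e-3))
```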
Channel Capacity

• The capacity of a discrete memoryless channel is given by

  C = max_{p(x)} I(X;Y)

– Example: Binary Symmetric Channel (BSC)

[Figure: X → Channel → Y; BSC with input probabilities P0 and P1 = 1 - P0 and crossover probability Pe]

I(X;Y) = H(Y) - H(Y|X) = H(X) - H(X|Y)
H(X|Y) = H(X|Y=0) P(Y=0) + H(X|Y=1) P(Y=1)
H(X|Y=0) = H(X|Y=1) = Pe log(1/Pe) + (1-Pe) log(1/(1-Pe)) = Hb(Pe)
H(X|Y) = Hb(Pe)  =>  I(X;Y) = H(X) - Hb(Pe)
H(X) = P0 log(1/P0) + (1-P0) log(1/(1-P0)) = Hb(P0)
=> I(X;Y) = Hb(P0) - Hb(Pe)
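A small numeric sketch (my own, with assumed function names) of the first decomposition on the slide, I(X;Y) = H(Y) - H(Y|X), for a BSC with input distribution (P0, 1-P0); for the BSC, H(Y|X=x) = Hb(Pe) for either input.

```python
import math

def hb(p):
    """Binary entropy Hb(p) = p log2(1/p) + (1-p) log2(1/(1-p)), in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_mutual_information(p0, pe):
    """I(X;Y) = H(Y) - H(Y|X) for a BSC with P(X=0) = p0 and crossover Pe."""
    p_y0 = p0 * (1 - pe) + (1 - p0) * pe   # P(Y=0): 0 received correctly, or 1 flipped
    return hb(p_y0) - hb(pe)               # H(Y) - H(Y|X), since H(Y|X=x) = Hb(Pe)

# With a uniform input (P0 = 1/2) and Pe = 0.1:  I(X;Y) = 1 - Hb(0.1) ≈ 0.531
print(bsc_mutual_information(0.5, 0.1))
```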
Capacity of BSC

I(X;Y) = Hb(P0) - Hb(Pe)

• Hb(P) = P log(1/P) + (1-P) log(1/(1-P))
  – Hb(P) <= 1, with equality if P = 1/2

C = max_{P0} { I(X;Y) = Hb(P0) - Hb(Pe) } = 1 - Hb(Pe)

C = 0 when Pe = 1/2, and C = 1 when Pe = 0 or Pe = 1

[Figure: left, the binary entropy Hb(P) vs. P, peaking at 1 for P = 1/2; right, the capacity C = 1 - Hb(Pe) vs. Pe, equal to 1 at Pe = 0 and Pe = 1 and to 0 at Pe = 1/2]
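A short numeric check (my own sketch) of C = 1 - Hb(Pe): it prints the capacity at a few crossover probabilities, showing C = 1 at Pe = 0 or 1 and C = 0 at Pe = 1/2, as in the figure.

```python
import math

def hb(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(pe):
    """Capacity of a BSC: C = 1 - Hb(Pe), achieved by a uniform input (P0 = 1/2)."""
    return 1.0 - hb(pe)

for pe in (0.0, 0.1, 0.25, 0.5, 0.9, 1.0):
    print(pe, round(bsc_capacity(pe), 3))
# Prints C = 1.0, 0.531, 0.189, 0.0, 0.531, 1.0
```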
Channel Coding Theorem (Claude Shannon)

Theorem: For all R < C and ε > 0, there exists a code of rate R whose probability of error is less than ε.
For R > C, ∃ ε0 > 0 such that the probability of error is always greater than ε0.

⇒ For code rates greater than capacity, the probability of error is bounded away from 0

Example: for a BSC with Pe = 0.1, C = 1 - Hb(0.1) ≈ 0.53, so arbitrarily reliable communication is possible at any rate below 0.53 bits per channel use, but not above it.
Channel Coding

• Block diagram:

  Source → Source encoder → Channel encoder → Modulator → Channel → Demod → Channel decoder → Source decoder → Sink
Approaches to coding

• Block Codes
  – Data is broken up into blocks of equal length
  – Each block is "mapped" onto a larger block

  Example: (6,3) code, n = 6, k = 3, R = 1/2

    000 → 000000    100 → 100101
    001 → 001011    101 → 101110
    010 → 010111    110 → 110010
    011 → 011100    111 → 111001

• An (n,k) binary block code is a collection of 2^k binary n-tuples (n > k)
  – n = block length
  – k = number of data bits
  – n-k = number of check bits
  – R = k/n = code rate
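A sketch (my own, not from the lecture) of the (6,3) example code above: encoding uses the table exactly as listed, and decoding picks the codeword closest in Hamming distance to the received block, which is maximum likelihood decoding on a BSC with Pe < 1/2.

```python
# Codeword table of the (6,3) example code, copied from the slide.
CODEBOOK = {
    "000": "000000", "100": "100101",
    "001": "001011", "101": "101110",
    "010": "010111", "110": "110010",
    "011": "011100", "111": "111001",
}

def encode(data3):
    """Map a 3-bit data block to its 6-bit codeword."""
    return CODEBOOK[data3]

def hamming(a, b):
    """Number of positions in which two equal-length bit strings differ."""
    return sum(x != y for x, y in zip(a, b))

def decode(received6):
    """Return the data block whose codeword is closest to the received block."""
    return min(CODEBOOK, key=lambda d: hamming(CODEBOOK[d], received6))

# Example: a single bit error in the codeword for 101 is corrected.
print(decode("100110"))  # -> '101' (codeword 101110 with its third bit flipped)
```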
Approaches to coding

• Convolutional Codes
  – The output is produced by looking at a sliding window of the input

[Figure: shift-register encoder — input U_K feeds two delay elements; mod-2 adders form the outputs C_i and C_i+1]

C(2K) = U(2K) ⊕ U(2K-2),   C(2K+1) = U(2K+1) ⊕ U(2K) ⊕ U(2K-1)

⊕ = mod-2 addition (1+1 = 0)
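To make the sliding-window idea concrete, here is a minimal sketch (my own) of a rate-1/2 convolutional encoder with two delay elements, as in the figure. It assumes a common textbook tap choice in which each input bit u[k] produces the two outputs u[k] ⊕ u[k-2] and u[k] ⊕ u[k-1] ⊕ u[k-2]; the exact tap connections in the lecture's figure may differ.

```python
def conv_encode(bits):
    """Rate-1/2 convolutional encoder with two delay elements (memory 2).

    Assumed taps (a common choice, not necessarily the lecture's exact figure):
    for each input bit u[k], emit
        c1 = u[k] ^ u[k-2]
        c2 = u[k] ^ u[k-1] ^ u[k-2]
    where ^ is mod-2 addition and the delay elements start at 0.
    """
    d1 = d2 = 0            # contents of the delay elements: u[k-1], u[k-2]
    out = []
    for u in bits:
        out.append(u ^ d2)        # c1 = u[k] + u[k-2]          (mod 2)
        out.append(u ^ d1 ^ d2)   # c2 = u[k] + u[k-1] + u[k-2] (mod 2)
        d1, d2 = u, d1            # slide the window by one input bit
    return out

# Each input bit produces two output bits, so the code rate is 1/2.
print(conv_encode([1, 0, 1, 1]))  # -> [1, 1, 0, 1, 0, 0, 1, 0]
```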