0122229 Modern Digital Communication and Coding Theory          November 10, 2011
XDU, Fall 2011                                                   Lecture Notes

4.8 Low-Density Parity-Check Codes

Low-density parity-check (LDPC) codes are a class of linear block codes with near-capacity performance. They were invented by Gallager in the early 1960s in his Ph.D. thesis and rediscovered in the late 1990s. They are constructed using sparse bipartite graphs (hence the name low-density) and can be decoded iteratively with the sum-product algorithm. LDPC codes have proved capable of approaching the Shannon limit more closely than any other class of codes. Ever since their rediscovery, the design, construction, decoding, efficient encoding, performance analysis, and applications of LDPC codes in digital communication and storage systems have been focal points of research.

Note: Some of the material presented in this section is based on Prof. Shu Lin's lecture notes and S. J. Johnson's notes.

4.8.1 Definitions and Basic Concepts

Although LDPC codes can be generalized to nonbinary alphabets, we will consider only binary codes in this section for the sake of simplicity.

A binary LDPC code is a special class of (n, k) linear block codes whose parity-check matrix H = [h_ij] contains mostly 0's and relatively few 1's; i.e., H is sparse. This sparsity enables low-complexity decoding and leads to simple implementations. In particular, an (n, dv, dc) regular LDPC code is a binary linear block code of length n whose parity-check matrix H contains exactly dv 1's in each column and dc 1's in each row. This means that every coded bit participates in exactly dv parity-check equations and that every such check equation involves exactly dc coded bits. A formal definition is as follows.

Definition 4.8.1: A binary regular LDPC code is defined as the null space of a sparse parity-check matrix H over GF(2) with the following structural properties:
1) Each row has constant weight dc;
2) Each column has constant weight dv;
3) No two rows (or two columns) have more than one 1-component in common; and
4) Both dc and dv are small compared to the code length and the number of rows in H.

An H with these properties is said to be (dv, dc)-regular, and the code given by the null space of such an H is called a (dv, dc)-regular LDPC code.

Property 3) is referred to as the row and column (RC) constraint. The RC-constraint on the rows and columns of H ensures that no dv or fewer columns of H can be added to zero. As a result, the minimum distance of the LDPC code given by the null space of H is at least dv + 1.

Because both dc and dv are small compared to the code length and the number of rows in the matrix, H has a low density of 1's. In the literature, the density r of the parity-check
matrix H is defined as the ratio of the total number of 1's in H to the total number of entries in H. Let m be the number of rows in H (i.e., suppose that H is an m×n matrix). We readily see that

    r = dc/n = dv/m

Since dv and dc are related by dc·m = dv·n, the nominal code rate Rc can be computed by

    Rc = (n − m)/n = 1 − dv/dc        (4.1)

The actual rate of the code specified by H may be higher, since these m rows of H (m check equations) might not all be independent.

Definition 4.8.2: If H is sparse, but the number of 1's in each column or in each row is not constant, the resulting code is called an irregular LDPC code.

In general, irregular LDPC codes have a better performance than regular LDPC codes.

Recall that a square matrix is called a circulant matrix (or simply a circulant) if each row is a cyclic shift (one place to the right or the left) of the row above it and the first row is a cyclic shift of the last row.

Definition 4.8.3: An LDPC code is said to be cyclic if its parity-check matrix consists of a single column of circulants. An LDPC code is said to be quasi-cyclic (QC) if its parity-check matrix consists of an array of circulants.

The encoding of a cyclic or a QC-LDPC code can be implemented with simple shift register(s).

Example 4.8.1: Consider the matrix given below:

    H = [1 0 1 1 0 0 0]
        [0 1 0 1 1 0 0]
        [0 0 1 0 1 1 0]
        [0 0 0 1 0 1 1]        (4.2)
        [1 0 0 0 1 0 1]
        [1 1 0 0 0 1 0]
        [0 1 1 0 0 0 1]

This matrix has both column and row weights equal to 3. We can easily check that no two rows (or two columns) have more than one 1-component in common; therefore the matrix satisfies the RC-constraint. It is a (3, 3)-regular matrix. It is also seen that H is a circulant. The null space of H gives a (7, 3) cyclic LDPC code with minimum distance dmin = 4.
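To make these definitions concrete, the short Python sketch below (an illustrative aid, not part of the original notes; the helper gf2_rank and all variable names are my own) checks the row and column weights of the matrix H of Example 4.8.1 and compares the nominal rate of (4.1) with the actual rate obtained from the GF(2) rank of H.

```python
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2), by Gaussian elimination."""
    A = M.copy() % 2
    rank = 0
    for col in range(A.shape[1]):
        pivot = next((r for r in range(rank, A.shape[0]) if A[r, col]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]      # move the pivot row up
        for r in range(A.shape[0]):
            if r != rank and A[r, col]:
                A[r] ^= A[rank]                  # clear the rest of the column
        rank += 1
    return rank

# The circulant parity-check matrix H of Example 4.8.1.
H = np.array([[1,0,1,1,0,0,0],
              [0,1,0,1,1,0,0],
              [0,0,1,0,1,1,0],
              [0,0,0,1,0,1,1],
              [1,0,0,0,1,0,1],
              [1,1,0,0,0,1,0],
              [0,1,1,0,0,0,1]])
m, n = H.shape

print("row weights   :", H.sum(axis=1))       # all equal to dc = 3
print("column weights:", H.sum(axis=0))       # all equal to dv = 3

# Density and rates: nominal rate from Eq. (4.1), actual rate from the GF(2) rank.
print("density r =", H.sum() / (m * n))                        # dc/n = dv/m = 3/7
print("nominal rate 1 - dv/dc =", 1 - 3/3)                     # 0
print("actual rate (n - rank)/n =", (n - gf2_rank(H)) / n)     # 3/7 for this H
```

For this particular H the nominal rate 1 − dv/dc is 0, while the actual rate is 3/7, because the seven rows of H have GF(2) rank 4; this is exactly the situation noted below (4.1), where the rows of H are not all independent.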
4.8.1.2 Graphical Representation of LDPC Codes

We have seen in Section 4.6 that a linear block code can also be represented by a Tanner graph (or factor graph, normal graph), which provides a graphical representation of the parity-check matrix H. The n variable nodes represent the columns of H, and the m constraint (check) nodes represent the rows of H. There exists an edge between check node i and variable node j if and only if the entry hij = 1. Thus, for an (n, dv, dc) regular LDPC code, there are n·dv = m·dc edges in the graph, with dv edges incident to each variable node and dc edges incident to each check node. As an example, Fig. 4.8.1 shows the Tanner graph for a (10, 5) linear block code with dv = 3, dc = 6, and the parity-check matrix

    H = [1 1 1 1 0 1 1 0 0 0]
        [0 0 1 1 1 1 1 1 0 0]
        [0 1 0 1 0 1 0 1 1 1]        (4.3)
        [1 0 1 0 1 0 0 1 1 1]
        [1 1 0 0 1 0 1 0 1 1]

Let x denote a codeword. From xH^T = 0, we can see that for each check node, the sum of all adjacent variable nodes is equal to zero. That is, check nodes are zero-sum constraints.

Figure 4.8.1 Tanner graph for a (10, 5) linear block code. Note that each variable node is of degree 3 and each check node is of degree 6.

The parity-check matrix H of a code is actually the incidence matrix of its Tanner graph.
Therefore, we also refer to the Tanner graph of a code as the Tanner graph of its parity-check matrix H.

Definition 4.8.4 [Cycle and Girth]: A cycle of length l in a Tanner graph is a path comprising l edges which closes back on itself. The minimum cycle length of the graph is called the girth of the graph.

Clearly, the shortest possible cycle in a bipartite graph is a length-4 cycle. Length-4 cycles manifest themselves in the H matrix as four 1's that lie at the corners of a submatrix formed by two rows and two columns of H, as illustrated in Fig. 4.8.2:

    H = [ .  .  .  .  . ]
        [ .  1  .  1  . ]
        [ .  .  .  .  . ]
        [ .  1  .  1  . ]
        [ .  .  .  .  . ]

Figure 4.8.2 Four 1's forming a length-4 cycle.

As stated in Section 4.7, cycles, particularly short cycles, degrade the performance of the iterative decoding algorithm used for LDPC codes. The cycles that affect the decoding performance the most are the cycles of length 4. Therefore, in constructing LDPC codes, cycles of length 4 must be avoided.

If the parity-check matrix H of a binary linear block code satisfies the RC-constraint, then no two coded bits can be checked simultaneously by two parity-check sums. As a result, the Tanner graph of the code is free of cycles of length 4 and has a girth of at least 6. Since we require that the matrix H of an LDPC code, regular or irregular, satisfies the RC-constraint, the Tanner graph of an LDPC code has a girth of at least 6.

Example 4.8.2: The Tanner graph of the (7, 3, 3) LDPC code given in Example 4.8.1 is shown in Fig. 4.8.3.
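As a small illustrative aid (not part of the original notes; all variable names are my own), the Python sketch below builds the Tanner-graph edge list of the matrix H in (4.2) and verifies that no two variable nodes share more than one check node, i.e. that the graph contains no length-4 cycles, consistent with the RC-constraint.

```python
import numpy as np
from itertools import combinations

# Circulant parity-check matrix (4.2) of the (7, 3) LDPC code.
H = np.array([[1,0,1,1,0,0,0],
              [0,1,0,1,1,0,0],
              [0,0,1,0,1,1,0],
              [0,0,0,1,0,1,1],
              [1,0,0,0,1,0,1],
              [1,1,0,0,0,1,0],
              [0,1,1,0,0,0,1]])
m, n = H.shape

# Tanner graph: an edge (check node i, variable node j) for every h_ij = 1.
edges = [(i, j) for i in range(m) for j in range(n) if H[i, j] == 1]
print("edges:", edges)                   # n*dv = m*dc = 21 edges for this (3,3)-regular H

# A length-4 cycle exists exactly when two columns share 1's in two or more
# common rows (the four 1's at the corners of a submatrix, cf. Fig. 4.8.2).
four_cycles = [(j1, j2) for j1, j2 in combinations(range(n), 2)
               if np.dot(H[:, j1], H[:, j2]) >= 2]
print("length-4 cycles:", four_cycles)   # empty list, so the girth is at least 6
```

Checking row pairs in the same way would verify the full RC-constraint of Definition 4.8.1.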
Figure 4.8.3 Tanner graph of a (7, 3) linear block code.

LDPC codes can also be represented by normal graphs; see Fig. 4.8.3-2. Figure 4.8.4 depicts a generic normal graph for an LDPC code.

Figure 4.8.3-2 Normal graph representation of an LDPC code.
Figure 4.8.4 Normal graph of a regular LDPC code with dv = 3 and dc = 6.

4.8.2 Encoding of LDPC Codes

As mentioned earlier, an LDPC code is a linear block code, so it can be encoded according to the general encoding principle for linear block codes. That is, the given parity-check matrix H is first transformed by Gauss-Jordan elimination into the form

    H = [P  I_(n-k)]

where P is an (n−k)×k binary matrix and I_(n-k) is the identity matrix of size n−k. The generator matrix in systematic form is then obtained as

    G = [I_k  P^T]

Thus, the encoding of the code defined by H can be performed via

    c = uG = [u  uP^T]        (4.4)

Example 4.8.3: Consider the encoding of the length-10 rate-1/2 LDPC code [Johnson, pp. 15]

    H = [1 1 0 1 1 0 0 1 0 0]
        [0 1 1 0 1 1 1 0 0 0]
        [0 0 0 1 0 0 0 1 1 1]
        [1 1 0 0 0 1 1 0 1 0]
        [0 0 1 0 0 1 0 1 0 1]

First, we put H into row-echelon form (i.e., so that in any two successive rows that do not consist entirely of zeros, the leading 1 in the lower row occurs further to the right than the leading 1 in the higher row).

The matrix H is put into this form by applying elementary row operations in GF(2),
which are: interchanging two rows, or adding one row to another modulo 2. From linear algebra we know that by using only elementary row operations the modified parity-check matrix will have the same codeword set as the original.

The 1-st and 2-nd columns of H already have ones on the diagonal, and the entries in these columns below the diagonal are removed by replacing the 4-th row with the modulo-2 sum of the 1-st and 4-th rows. The 3-rd column of H does not have a one on the diagonal, but this can be obtained by swapping the 3-rd and 5-th rows. Finally, replacing the 5-th row with the modulo-2 sum of the 5-th and 4-th rows gives Hr in row-echelon form:

    Hr = [1 1 0 1 1 0 0 1 0 0]
         [0 1 1 0 1 1 1 0 0 0]
         [0 0 1 0 0 1 0 1 0 1]
         [0 0 0 1 1 1 1 1 1 0]
         [0 0 0 0 1 1 1 0 0 1]

Next the parity-check matrix is put into reduced row-echelon form (i.e., so that any column that contains a leading one has zeros everywhere else). The 1-st column is already correct, and the entry in the 2-nd column above the diagonal is removed by replacing the 1-st row with the modulo-2 sum of the 1-st and 2-nd rows. Similarly, the entry in the 3-rd column above the diagonal is removed by replacing the 2-nd row with the modulo-2 sum of the 2-nd and 3-rd rows. To clear the 4-th column, the 1-st row is replaced with the modulo-2 sum of the 1-st and 4-th rows. Finally, clearing the 5-th column involves adding the 5-th row to the 1-st, 2-nd and 4-th rows, which gives Hr in reduced row-echelon form:

    Hr = [1 0 0 0 0 0 1 1 1 0]
         [0 1 0 0 0 1 0 1 0 0]
         [0 0 1 0 0 1 0 1 0 1]
         [0 0 0 1 0 0 0 1 1 1]
         [0 0 0 0 1 1 1 0 0 1]

Lastly, using column permutations we put the parity-check matrix into standard form:

    Hstd = [0 1 1 1 0 1 0 0 0 0]
           [1 0 1 0 0 0 1 0 0 0]
           [1 0 1 0 1 0 0 1 0 0]        (4.5)
           [0 0 1 1 1 0 0 0 1 0]
           [1 1 0 0 1 0 0 0 0 1]

from which a generator matrix G for the code with parity-check matrix Hstd can be obtained by the relation stated above.

In this final step column permutations have been used, and so the codewords of Hstd will be permuted versions of the codewords corresponding to H. A solution is to use a permutation vector to keep track of the column permutation used to create Hstd, which in this case is

    π = [6 7 8 9 10 1 2 3 4 5]

and to apply the inverse permutation to each Hstd codeword before it is transmitted. In this way, H can still be used for decoding at the receiver.
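The steps of Example 4.8.3 (GF(2) Gauss-Jordan reduction, the column permutation into standard form, and systematic encoding with G = [I_k P^T]) can be written compactly in code. The Python sketch below is only an illustrative reconstruction of these steps; the function names and the choice of message vector are mine, not part of the notes.

```python
import numpy as np

def gf2_rref(H):
    """Reduced row-echelon form of a binary matrix over GF(2)."""
    A = H.copy() % 2
    m, n = A.shape
    row = 0
    for col in range(n):
        if row == m:
            break
        pivot = next((r for r in range(row, m) if A[r, col]), None)
        if pivot is None:
            continue
        A[[row, pivot]] = A[[pivot, row]]        # bring the pivot row up
        for r in range(m):
            if r != row and A[r, col]:
                A[r] ^= A[row]                   # clear the rest of the column
        row += 1
    return A

# Parity-check matrix of Example 4.8.3.
H = np.array([[1,1,0,1,1,0,0,1,0,0],
              [0,1,1,0,1,1,1,0,0,0],
              [0,0,0,1,0,0,0,1,1,1],
              [1,1,0,0,0,1,1,0,1,0],
              [0,0,1,0,0,1,0,1,0,1]])
m, n = H.shape
k = n - m

Hrr = gf2_rref(H)                        # here Hrr = [I_m | A]
# Column permutation pi = [6 7 8 9 10 1 2 3 4 5] (1-based) moves the
# identity block to the right, giving the standard form Hstd = [P | I_m].
pi = np.array([6, 7, 8, 9, 10, 1, 2, 3, 4, 5]) - 1
Hstd = Hrr[:, pi]
P = Hstd[:, :k]                          # (n-k) x k block
G = np.hstack([np.eye(k, dtype=int), P.T])       # G = [I_k  P^T]

u = np.array([1, 0, 1, 1, 0])            # an arbitrary 5-bit message (my choice)
c = u @ G % 2                            # systematic codeword [u | u P^T], Eq. (4.4)
print("codeword c       :", c)
print("check c * Hstd^T :", c @ Hstd.T % 2)      # all-zero vector
```

The printed parity-check product should be the all-zero vector, confirming that c is a codeword of Hstd; applying the inverse of π to c would give the corresponding codeword of the original H.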
Alternatively, if the channel is memoryless, so that the order of codeword bits is unimportant, a far easier option is to apply π to the original H to give a parity-check matrix

    H' = [0 0 1 0 0 1 1 0 1 1]
         [1 1 0 0 0 0 1 1 0 1]
         [0 0 1 1 1 0 0 0 1 0]
         [1 1 0 1 0 1 1 0 0 0]
         [1 0 1 0 1 0 0 1 0 0]

with the same properties as H but which shares the same codeword bit ordering as Hstd. At the receiver, H' is then used to decode the codewords produced by G.

All of this processing can be done off-line, and only the matrices G and H' need to be provided to the encoder and decoder, respectively.

The drawback of this approach is that, unlike H, the matrix G will most likely not be sparse, so the matrix multiplication in (4.4) has complexity on the order of n² operations. As the code length n increases, this encoding complexity becomes very high. Richardson and Urbanke proposed in 2001 an encoding method that works directly on the H matrix [Richarson2001, IT]; it has approximately linear encoding complexity. Another way to reduce the encoding complexity is to construct LDPC codes by algebraic or geometric methods; such structured parity-check matrices usually allow the encoder to be implemented with simple shift registers. In the following we first introduce the R-U method; the encoding of QC-LDPC codes, whose H matrices have a quasi-cyclic structure, is described in [19, Li-Lin2006].

4.8.2.1 (Almost) Linear-Time Encoding of LDPC Codes

Rather than finding a generator matrix for H, an LDPC code can be encoded using the parity-check matrix directly, by performing some preprocessing prior to encoding (transforming H into an approximate lower triangular form and using back-substitution).

Firstly, we perform the following preprocessing steps. By row and column permutations, we bring H into the approximate lower triangular form indicated in Fig. 4.8.5, where the upper right corner of H is a lower triangular matrix. Because this form is obtained only by permutations, the H matrix is still sparse. We write the decomposition as

    H = [A  B  T]
        [C  D  E]

where T is an (m−g)×(m−g) lower triangular matrix with ones along the diagonal and hence is invertible. If H is full rank, the matrix B is of size (m−g)×g and A is of size (m−g)×k. The g rows of H left in C, D, and E are called the gap of this approximate representation. The smaller the gap g, the lower the encoding complexity for the LDPC code.
Figure 4.8.5 The parity-check matrix in approximate lower triangular form (blocks A, B, T, C, D, E; T is lower triangular with ones on the diagonal).

Then Gauss-Jordan elimination is applied to clear E, which is equivalent to multiplying H on the left by the matrix

    M = [ I_(m-g)   0  ]
        [ -ET⁻¹    I_g ]

This gives

    H̃ = [ I_(m-g)   0  ] H = [ A             B             T ]
         [ -ET⁻¹    I_g ]     [ -ET⁻¹A + C    -ET⁻¹B + D    0 ]

Note that H̃ is the parity-check matrix of an equivalent code.

To encode a message vector u of length k using H̃, we write the codeword c as

    c = [u  p1  p2]

where p1 and p2 contain the first g parity bits and the remaining parity bits, respectively. The parity-check equation cH̃^T = 0 gives rise to two equations,

    Au + Bp1 + Tp2 = 0        (4.6)

and

    (-ET⁻¹A + C)u + (-ET⁻¹B + D)p1 = 0        (4.7)

Let C̃ = -ET⁻¹A + C and D̃ = -ET⁻¹B + D. If D̃ is nonsingular (invertible), we have from (4.7)

    p1 = -D̃⁻¹C̃u        (4.8)

If D̃ is not invertible, the columns of H̃ can be permuted to obtain a nonsingular D̃. The matrix -D̃⁻¹C̃ can be pre-computed and saved, so that p1 can be computed with a complexity of O(g(n−m)).
Once p1 is known, p2 can be found from (4.6) by

    p2 = -T⁻¹(Au + Bp1)        (4.9)

Since T is lower triangular, p2 can be found by back-substitution, without explicitly computing T⁻¹.

Note that column permutations were used to obtain the approximate lower triangular H from the original parity-check matrix H', so either H, or H' with the same column permutation applied, will be used at the decoder.

The process of computing p1 and p2 constitutes the encoding process. The steps for the computation, as well as their computational complexity, are outlined here (assuming that the preprocessing steps have already been accomplished). For the sake of clarity, intermediate variables are used to show the steps; they may not be necessary in a final implementation.

Steps to compute p1 = -D̃⁻¹(-ET⁻¹A + C)u:

    Operation        Comment                                       Complexity
    x1 = Au          Multiplication by a sparse matrix             O(n)
    x2 = T⁻¹x1       Solve Tx2 = x1 by back-substitution           O(n)
    x3 = -Ex2        Multiplication by a sparse matrix             O(n)
    x4 = Cu          Multiplication by a sparse matrix             O(n)
    x5 = x3 + x4     Addition                                      O(n)
    p1 = -D̃⁻¹x5      Multiplication by a dense g×g matrix          O(g²)

Steps to compute p2 = -T⁻¹(Au + Bp1):

    Operation        Comment                                            Complexity
    x1 = Au          Multiplication by a sparse matrix (already done)   0
    x6 = Bp1         Multiplication by a sparse matrix                  O(n)
    x7 = x1 + x6     Addition                                           O(n)
    p2 = -T⁻¹x7      Solve Tp2 = x7 by back-substitution                O(n)

The overall algorithm is O(n + g²). Clearly, the smaller g can be made, the lower the encoding complexity.
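A direct transcription of these two step tables into code looks as follows. This is only a sketch under the assumption that H has already been brought into the form [A B T; C D E] and that D̃ = ET⁻¹B + D is invertible over GF(2) (where the minus signs vanish); the GF(2) helper routines and all names are illustrative choices of mine rather than part of the notes.

```python
import numpy as np

def solve_lower_gf2(T, b):
    """Solve T x = b over GF(2) by back-substitution.
    T is lower triangular with ones on the diagonal."""
    x = np.zeros(len(b), dtype=int)
    for i in range(len(b)):
        x[i] = (b[i] + T[i, :i] @ x[:i]) % 2
    return x

def inv_gf2(M):
    """Invert a small dense binary matrix over GF(2) by Gauss-Jordan elimination."""
    g = M.shape[0]
    A = np.hstack([M % 2, np.eye(g, dtype=int)])
    for col in range(g):
        pivot = next(r for r in range(col, g) if A[r, col])   # assumes M is invertible
        A[[col, pivot]] = A[[pivot, col]]
        for r in range(g):
            if r != col and A[r, col]:
                A[r] ^= A[col]
    return A[:, g:]

def ru_precompute(B, T, D, E):
    """Pre-compute the dense g x g matrix D_tilde^(-1), with D_tilde = E T^(-1) B + D."""
    TinvB = np.column_stack([solve_lower_gf2(T, B[:, j]) for j in range(B.shape[1])])
    return inv_gf2((E @ TinvB + D) % 2)

def ru_encode(u, A, B, T, C, E, D_tilde_inv):
    """Compute c = [u p1 p2] following the step tables above (all arithmetic over GF(2))."""
    x1 = A @ u % 2                       # x1 = A u            (sparse multiply)
    x2 = solve_lower_gf2(T, x1)          # x2 = T^(-1) x1      (back-substitution)
    x3 = E @ x2 % 2                      # x3 = E x2
    x4 = C @ u % 2                       # x4 = C u
    x5 = (x3 + x4) % 2
    p1 = D_tilde_inv @ x5 % 2            # dense g x g multiply, O(g^2)
    x6 = B @ p1 % 2                      # x6 = B p1
    x7 = (x1 + x6) % 2
    p2 = solve_lower_gf2(T, x7)          # solve T p2 = x7 by back-substitution
    return np.concatenate([u, p1, p2])   # codeword c = [u p1 p2]
```

In an actual implementation A, B, C, E and T would be stored in sparse form, so that each multiplication and each back-substitution costs O(n), matching the complexity counts in the tables above.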