where we have used the fact that det(AB) = det(A)det(B) and det(A^{-1}) = [det(A)]^{-1}. And the MIMO capacity is given by maximizing the mutual information (8.4) over all input covariance matrices K_x satisfying the power constraint:

C = max_{K_x: Tr(K_x) ≤ P} log_2 det( I_M + (1/N_0) H K_x H^H )   bits per channel use   (8.5)
  = max_{K_x: Tr(K_x) ≤ P} log_2 det( I_N + (1/N_0) K_x H^H H )

where the last equality follows from the fact that det(I_m + AB) = det(I_n + BA) for matrices A (m×n) and B (n×m).
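As a numerical sanity check (a sketch assuming numpy; the antenna sizes, noise level N_0, and the uniform power allocation K_x = (P/N) I are illustrative choices, not from the text), the two equivalent determinant forms of the mutual information in (8.5) can be evaluated directly:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, N0 = 2, 3, 1.0  # illustrative: 2 receive antennas, 3 transmit antennas
H = rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))

# A valid input covariance: Hermitian positive semi-definite with Tr(Kx) = P.
# Uniform power allocation across transmit antennas (one admissible choice).
P = 3.0
Kx = (P / N) * np.eye(N)

# Mutual information in bits per channel use, both forms of (8.5)
I_M = np.log2(np.linalg.det(np.eye(M) + (1 / N0) * H @ Kx @ H.conj().T).real)
I_N = np.log2(np.linalg.det(np.eye(N) + (1 / N0) * Kx @ H.conj().T @ H).real)

# det(I_m + AB) = det(I_n + BA) makes the two expressions agree
assert np.isclose(I_M, I_N)
```

The identity holds because H K_x H^H (M×M) and K_x H^H H (N×N) share the same non-zero eigenvalues, so both determinants multiply the same factors (1 + λ_i / N_0).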
Clearly, the optimization relative to K_x will depend on whether or not H is known at the transmitter. We now discuss this maximization under different assumptions about transmitter CSI by decomposing the vector channel into a set of parallel, independent scalar Gaussian sub-channels.

8.3.1 Parallel Decomposition of the MIMO Channel

By the singular value decomposition (SVD) theorem, any M×N matrix H ∈ C^{M×N} can be written as

H = U Λ V^H   (8.6)

where Λ is an M×N non-negative real diagonal matrix, and U and V are M×M and N×N unitary matrices, respectively. That is, U U^H = I_M and V V^H = I_N, where the superscript "H" stands for the Hermitian transpose (or complex conjugate transpose). In fact, the diagonal entries of Λ are the non-negative square roots of the eigenvalues of the matrix H H^H, the columns of U are the eigenvectors of H H^H, and the columns of V are the eigenvectors of H^H H.

Denote by λ the eigenvalues of H H^H, which are defined by

H H^H z = λ z,   z ≠ 0   (8.7)

where z is an M×1 eigenvector corresponding to λ. The number of non-zero eigenvalues of the matrix H H^H is equal to the rank of H. Let r be the rank of H. Since the rank of H cannot exceed the number of its columns or rows, r ≤ m = min(M, N). If H is full rank, which is sometimes referred to as a rich scattering environment, then r = m. Equation (8.7) can be rewritten as

(λ I_m − W) z = 0,   z ≠ 0   (8.8)

where W is the Wishart matrix defined to be
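The SVD relations above can be checked numerically. The following sketch (assuming numpy; the 2×3 channel is an arbitrary illustrative example) verifies the decomposition (8.6), the link between singular values and the eigenvalues in (8.7), and the rank bound r ≤ min(M, N):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 2, 3  # illustrative sizes
H = rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))

# Full SVD: H = U @ Lam @ V^H with U (MxM), V (NxN) unitary, Lam (MxN) diagonal
U, s, Vh = np.linalg.svd(H)
Lam = np.zeros((M, N))
np.fill_diagonal(Lam, s)
assert np.allclose(H, U @ Lam @ Vh)                      # (8.6)
assert np.allclose(U @ U.conj().T, np.eye(M))            # U unitary
assert np.allclose(Vh.conj().T @ Vh, np.eye(N))          # V unitary

# Diagonal entries of Lam are the non-negative square roots of eig(H H^H)
eigvals = np.linalg.eigvalsh(H @ H.conj().T)[::-1]       # sort descending
assert np.allclose(s, np.sqrt(eigvals.clip(min=0)))

# r <= m = min(M, N); a random Gaussian H is full rank with probability 1
assert np.linalg.matrix_rank(H) == min(M, N)
```

A random Gaussian channel matrix models the rich scattering case, which is why the rank check expects r = m here; a rank-deficient H (e.g. a keyhole channel) would give r < m.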