The mutual information then reduces to

    I(x; y | H) = H(y | H) - H(n)
                = \log_2 \det\left( I_M + \tfrac{1}{N_0} H K_x H^H \right)    (0.2)

where we have used the facts that \det(AB) = \det(A)\det(B) and \det(A^{-1}) = [\det(A)]^{-1}.

The MIMO capacity is then given by maximizing the mutual information (0.2) over all input covariance matrices K_x satisfying the power constraint:

    C(H) = \max_{K_x : \mathrm{Tr}(K_x) = P} \log_2 \det\left( I_M + \tfrac{1}{N_0} H K_x H^H \right)    bits per channel use    (0.3)
         = \max_{K_x : \mathrm{Tr}(K_x) = P} \log_2 \det\left( I_N + \tfrac{1}{N_0} K_x H^H H \right)

where the last equality follows from the identity \det(I_m + AB) = \det(I_n + BA) for matrices A (m × n) and B (n × m). Clearly, the optimization over K_x depends on whether or not H is known at the transmitter. We now discuss this maximization under different assumptions about transmitter CSI by decomposing the vector channel into a set of parallel, independent scalar Gaussian sub-channels.

2.3 Channel Unknown to the Transmitter

If the channel is known to the receiver but not to the transmitter, the transmitter cannot optimize its power allocation or input covariance structure across the antennas. This implies that if the distribution of H follows the zero-mean spatially white (ZMSW) channel gain model, the signals transmitted from the N antennas should be independent and the power should be divided equally among them, resulting in the input covariance matrix K_x = (P/N) I_N. It is shown in [Telatar99] that this K_x indeed maximizes the mutual information. Thus, the capacity in this case is

    C = \log_2 \det\left( I_M + \tfrac{\mathrm{SNR}}{N} H H^H \right),    if M < N
        \log_2 \det\left( I_N + \tfrac{\mathrm{SNR}}{N} H^H H \right),    if M ≥ N
    bits per channel use    (0.4)

where \mathrm{SNR} = P / N_0.

2.1 Parallel Decomposition of the MIMO Channel
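The equal-power capacity expression and the det(I + AB) = det(I + BA) identity can be checked numerically. The following is a minimal NumPy sketch, not part of the original text: the function name, antenna counts, SNR value, and the Rayleigh-style random channel are illustrative assumptions. It evaluates the determinant via the eigenvalues of HH^H, which also previews the parallel sub-channel decomposition discussed next.

```python
import numpy as np

def mimo_capacity_equal_power(H, snr):
    """Equal-power MIMO capacity: log2 det(I_M + (snr/N) H H^H).

    H   : M x N complex channel matrix (M receive, N transmit antennas)
    snr : P / N0 on a linear scale
    """
    M, N = H.shape
    # Eigenvalues of the Hermitian matrix H H^H (squared singular values of H).
    lam = np.linalg.eigvalsh(H @ H.conj().T)
    # det(I + c*HH^H) = prod(1 + c*lam_i); clip guards tiny negative round-off.
    return float(np.sum(np.log2(1.0 + (snr / N) * np.clip(lam, 0.0, None))))

rng = np.random.default_rng(0)
M, N = 2, 3                      # illustrative: 2 receive, 3 transmit antennas
H = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
snr = 10.0                       # illustrative P / N0

C = mimo_capacity_equal_power(H, snr)

# Check det(I_m + AB) = det(I_n + BA): the N x N form built from H^H H gives the
# same capacity, since the nonzero eigenvalues of H H^H and H^H H coincide.
lam_alt = np.linalg.eigvalsh(H.conj().T @ H)
C_alt = float(np.sum(np.log2(1.0 + (snr / N) * np.clip(lam_alt, 0.0, None))))
assert abs(C - C_alt) < 1e-9
```

The eigenvalue form is preferred over calling `det` directly because each term log2(1 + (SNR/N) λ_i) is exactly the capacity of one of the parallel scalar sub-channels obtained from the decomposition of H.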