With the normalization constraint, the total received signal power per antenna is equal to the total transmitted power, and the average SNR at any receive antenna is $\gamma = P/N_0$. In all cases, we will assume that the channel matrix is known to the receiver (i.e., perfect CSIR); equivalently, the channel output consists of the pair $(\mathbf{y}, \mathbf{H})$, and the distribution of $\mathbf{H}$ is known at the transmitter. In most situations, the realization of $\mathbf{H}$ (CSI) is assumed not to be known at the transmitter.

8.3 Fundamental Capacity Limits of MIMO Channels

Consider the case of deterministic $\mathbf{H}$. The channel matrix $\mathbf{H}$ is assumed to be constant at all times and known to the receiver. The relation of (8.1) describes a vector Gaussian channel. The Shannon capacity is defined as the maximum data rate that can be transmitted over the channel with arbitrarily small error probability. It is given in terms of the mutual information between the vectors $\mathbf{x}$ and $\mathbf{y}$ as

$$C(\mathbf{H}) = \max_{p(\mathbf{x}):\, \mathbb{E}[\|\mathbf{x}\|^2] \le P} I(\mathbf{x}; (\mathbf{y}, \mathbf{H})) = \max_{p(\mathbf{x})} \left[ I(\mathbf{x}; \mathbf{H}) + I(\mathbf{x}; \mathbf{y} \,|\, \mathbf{H}) \right] = \max_{p(\mathbf{x})} I(\mathbf{x}; \mathbf{y} \,|\, \mathbf{H}) = \max_{p(\mathbf{x})} \left[ \mathcal{H}(\mathbf{y} \,|\, \mathbf{H}) - \mathcal{H}(\mathbf{y} \,|\, \mathbf{x}, \mathbf{H}) \right] \tag{8.3}$$

where $p(\mathbf{x})$ is the probability distribution of the vector $\mathbf{x}$, and $\mathcal{H}(\mathbf{y}\,|\,\mathbf{H})$ and $\mathcal{H}(\mathbf{y}\,|\,\mathbf{x},\mathbf{H})$ are the differential entropy and the conditional differential entropy of the vector $\mathbf{y}$, respectively. (The third equality holds because $\mathbf{x}$ and $\mathbf{H}$ are independent, so $I(\mathbf{x}; \mathbf{H}) = 0$.) Since the vectors $\mathbf{x}$ and $\mathbf{n}$ are independent, we have

$$\mathcal{H}(\mathbf{y}\,|\,\mathbf{x},\mathbf{H}) = \mathcal{H}(\mathbf{n}) = \log_2 \det\!\left(\pi e N_0 \mathbf{I}_{M_R}\right)$$

which has a fixed value and is independent of the channel input. Thus, maximizing the mutual information $I(\mathbf{x};\mathbf{y}\,|\,\mathbf{H})$ is equivalent to maximizing $\mathcal{H}(\mathbf{y}\,|\,\mathbf{H})$.
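As a quick sanity check on the noise-entropy term, the following short expansion (our own, using the standard entropy formula $\mathcal{H}(\mathbf{n}) = \log_2\det(\pi e \mathbf{K}_n)$ for a circularly symmetric complex Gaussian vector with covariance $\mathbf{K}_n = N_0\mathbf{I}_{M_R}$) shows that the noise entropy is simply $M_R$ copies of the per-antenna entropy:

```latex
% K_n = N_0 I_{M_R}: the determinant factorizes over the
% M_R i.i.d. receive-antenna noise components.
\begin{align*}
  \mathcal{H}(\mathbf{n})
    &= \log_2 \det\!\left(\pi e\, N_0 \mathbf{I}_{M_R}\right) \\
    &= \log_2 \left(\pi e\, N_0\right)^{M_R}
     = M_R \log_2 \left(\pi e\, N_0\right).
\end{align*}
```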
From (8.1), the covariance matrix of $\mathbf{y}$ is

$$\mathbf{K}_y = \mathbb{E}\!\left[\mathbf{y}\mathbf{y}^H\right] = \mathbf{H}\mathbf{K}_x\mathbf{H}^H + N_0 \mathbf{I}_{M_R}$$

Among all vectors $\mathbf{y}$ with a given covariance matrix $\mathbf{K}_y$, the differential entropy $\mathcal{H}(\mathbf{y})$ is maximized when $\mathbf{y}$ is a zero-mean circularly symmetric complex Gaussian (ZMCSCG) random vector [Telatar99]. This implies that the input $\mathbf{x}$ must also be ZMCSCG, and therefore this is the optimal distribution on $\mathbf{x}$. This yields the entropy $\mathcal{H}(\mathbf{y}\,|\,\mathbf{H})$ given by

$$\mathcal{H}(\mathbf{y}\,|\,\mathbf{H}) = \log_2 \det\!\left(\pi e \mathbf{K}_y\right)$$

The mutual information then reduces to

$$I(\mathbf{x};\mathbf{y}\,|\,\mathbf{H}) = \mathcal{H}(\mathbf{y}\,|\,\mathbf{H}) - \mathcal{H}(\mathbf{n}) = \log_2 \det\!\left( \mathbf{I}_{M_R} + \frac{1}{N_0} \mathbf{H}\mathbf{K}_x\mathbf{H}^H \right) \tag{8.4}$$
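Equation (8.4) is easy to evaluate numerically. The sketch below (our own illustration, not from the text) computes it for one random i.i.d. Rayleigh channel realization under equal power allocation $\mathbf{K}_x = (P/M_T)\mathbf{I}_{M_T}$, a common baseline when $\mathbf{H}$ is unknown at the transmitter; all parameter values and the function name `mimo_mutual_info` are illustrative choices:

```python
import numpy as np

def mimo_mutual_info(H, Kx, N0):
    """Evaluate (8.4): log2 det(I + (1/N0) H Kx H^H), in bits/s/Hz."""
    Mr = H.shape[0]
    A = np.eye(Mr) + (H @ Kx @ H.conj().T) / N0
    # slogdet is numerically safer than log2(det(A)); A is Hermitian
    # positive definite, so its determinant is real and positive.
    sign, logdet = np.linalg.slogdet(A)
    return logdet / np.log(2.0)

# Illustrative parameters (our own choices):
Mt, Mr = 4, 4      # transmit / receive antennas
P, N0 = 1.0, 0.1   # total transmit power, noise power -> SNR = 10 dB

rng = np.random.default_rng(0)
# One realization of an i.i.d. Rayleigh channel with E[|h_ij|^2] = 1
H = (rng.standard_normal((Mr, Mt))
     + 1j * rng.standard_normal((Mr, Mt))) / np.sqrt(2)

# Equal power allocation across transmit antennas
Kx = (P / Mt) * np.eye(Mt)
print(f"I(x;y|H) = {mimo_mutual_info(H, Kx, N0):.2f} bits/s/Hz")
```

As a consistency check, for $M_T = M_R = 1$ with $\mathbf{K}_x = P$, (8.4) collapses to $\log_2\!\left(1 + P|h|^2/N_0\right)$, the classical Shannon capacity of a scalar Gaussian channel.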