$$\sum_{n=1}^{N} |h_{mn}|^2 = N, \qquad m = 1, 2, \ldots, M$$

When H is random, we will assume that its entries are i.i.d. zero-mean complex Gaussian variables, each with variance 1/2 per dimension. This case is usually referred to as a rich scattering environment. The normalization constraint for the elements of H is then given by

$$\sum_{n=1}^{N} \mathbb{E}\!\left[\, |h_{mn}|^2 \,\right] = N, \qquad m = 1, 2, \ldots, M$$

With the normalization constraint, the total received signal power per antenna is equal to the total transmitted power, and the average SNR at any receive antenna is $\mathrm{SNR} = P/N_0$.
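The normalization and the per-antenna SNR can be checked numerically. Below is a minimal Monte Carlo sketch (Python/NumPy; the notes themselves contain no code), in which the antenna counts M, N and the values of P and N0 are illustrative assumptions, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 4            # receive / transmit antenna counts (illustrative)
P, N0 = 1.0, 0.1       # transmit power and noise power (assumed values)
T = 100_000            # Monte Carlo trials

# i.i.d. ZMCSCG entries, variance 1/2 per real dimension => E[|h_mn|^2] = 1
H = (rng.standard_normal((T, M, N))
     + 1j * rng.standard_normal((T, M, N))) / np.sqrt(2)

# Normalization check: sum_n E[|h_mn|^2] should equal N for every row m
print(np.mean(np.sum(np.abs(H) ** 2, axis=2), axis=0))   # ~ [4. 4. 4. 4.]

# Independent transmit symbols with total power P (P/N per antenna)
x = np.sqrt(P / (2 * N)) * (rng.standard_normal((T, N))
                            + 1j * rng.standard_normal((T, N)))
y_sig = np.einsum('tmn,tn->tm', H, x)   # noiseless received signal

# Received signal power per antenna ~ P, hence average SNR = P/N0
print(np.mean(np.abs(y_sig) ** 2, axis=0), "vs P =", P, "; SNR =", P / N0)
```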
2. Fundamental Capacity Limits of MIMO Channels

Consider the case of deterministic H. The channel matrix H is assumed to be constant at all times and known to the receiver. The relation (1) describes a vector Gaussian channel. The Shannon capacity is defined as the maximum data rate that can be transmitted over the channel with arbitrarily small error probability. It is given in terms of the mutual information between the vectors x and y as

$$C(\mathbf{H}) = \max_{p(\mathbf{x}):\, \mathbb{E}[\|\mathbf{x}\|^2] \le P} I(\mathbf{x};\, \mathbf{y}, \mathbf{H}) = \max_{p(\mathbf{x})} \left[ I(\mathbf{x}; \mathbf{H}) + I(\mathbf{x}; \mathbf{y} \mid \mathbf{H}) \right] = \max_{p(\mathbf{x})} \left[ H(\mathbf{y} \mid \mathbf{H}) - H(\mathbf{y} \mid \mathbf{x}, \mathbf{H}) \right] \tag{0.1}$$

where p(x) is the probability distribution of the vector x subject to the power constraint $\mathbb{E}[\|\mathbf{x}\|^2] \le P$, and $H(\mathbf{y} \mid \mathbf{H})$ and $H(\mathbf{y} \mid \mathbf{x}, \mathbf{H})$ are the differential entropy and the conditional differential entropy of the vector y, respectively. The term $I(\mathbf{x}; \mathbf{H})$ vanishes because the input x is independent of the channel H. Since the vectors x and n are independent, we have

$$H(\mathbf{y} \mid \mathbf{x}, \mathbf{H}) = H(\mathbf{n}) = \log_2 \det\!\left( \pi e N_0 \mathbf{I}_M \right)$$

which has a fixed value and is independent of the channel input. Thus, maximizing the mutual information $I(\mathbf{x}; \mathbf{y} \mid \mathbf{H})$ is equivalent to maximizing $H(\mathbf{y} \mid \mathbf{H})$. From (1), the covariance matrix of y is

$$\mathbf{K}_y = \mathbb{E}\!\left[ \mathbf{y}\mathbf{y}^H \right] = \mathbf{H} \mathbf{K}_x \mathbf{H}^H + N_0 \mathbf{I}_M$$

Among all vectors y with a given covariance matrix $\mathbf{K}_y$, the differential entropy $H(\mathbf{y})$ is maximized when y is a zero-mean circularly symmetric complex Gaussian (ZMCSCG) random vector [Telatar99]. This implies that the input x must also be ZMCSCG, and therefore this is the optimal distribution on x. This yields the entropy $H(\mathbf{y} \mid \mathbf{H})$ given by

$$H(\mathbf{y} \mid \mathbf{H}) = \log_2 \det\!\left( \pi e \mathbf{K}_y \right)$$
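Subtracting the two entropy expressions gives $C(\mathbf{H}) = \log_2 \det(\pi e \mathbf{K}_y) - \log_2 \det(\pi e N_0 \mathbf{I}_M) = \log_2 \det\!\left( \mathbf{I}_M + \mathbf{H} \mathbf{K}_x \mathbf{H}^H / N_0 \right)$. A minimal sketch of this computation follows; the equal power allocation $\mathbf{K}_x = (P/N)\,\mathbf{I}_N$ is an assumption made here for illustration (the optimal $\mathbf{K}_x$ has not been derived at this point in the notes), and the function name and numeric values are likewise illustrative:

```python
import numpy as np

def mimo_capacity(H, Kx, N0):
    """C(H) = log2 det(I_M + H Kx H^H / N0), i.e. H(y|H) - H(y|x,H)."""
    M = H.shape[0]
    A = np.eye(M) + H @ Kx @ H.conj().T / N0
    sign, logdet = np.linalg.slogdet(A)   # numerically stable log-determinant
    return logdet / np.log(2)             # natural log -> base-2 log (bits)

rng = np.random.default_rng(1)
M, N = 4, 4
P, N0 = 1.0, 0.1                          # assumed power / noise values
H = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
Kx = (P / N) * np.eye(N)                  # equal power allocation (assumption)
print(mimo_capacity(H, Kx, N0), "bits/s/Hz")
```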