the resulting Stieltjes polynomial in (36). For example, when p = 0.6, λ1 = 1 and λ2 = 3, (36) simplifies to

\[
3c^2 m^3 z^2 + \left(4cz^2 - 6cz + 6c^2 z\right)m^2 + \left(3 - 4z - 6c + \tfrac{31}{5}cz + 3c^2 + z^2\right)m + \tfrac{11}{5}c + z - \tfrac{11}{5} = 0. \tag{37}
\]

For c = 0.1, (37) becomes

\[
\tfrac{3}{100}\, m^3 z^2 + \left(\tfrac{2}{5}z^2 - \tfrac{27}{50}z\right)m^2 + \left(\tfrac{243}{100} - \tfrac{169}{50}z + z^2\right)m + z - \tfrac{99}{50} = 0. \tag{38}
\]

To determine the density from (38) we need to determine the roots of the polynomial in m and use the inversion formula in (5). Since we do not know the region of support for this density, we conjecture such a region and solve the polynomial above for every value of z in it. Using numerical tools such as the roots command in MATLAB, this is not very difficult.
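To make the procedure concrete, here is a minimal MATLAB sketch of this computation. It assumes the sign convention m(z) = ∫ dF(x)/(x − z), so that the admissible root of (38) is the one with positive imaginary part, and it simply conjectures that the support lies inside [0, 6]; the grid, the small imaginary offset, and the variable names are illustrative choices rather than anything prescribed in these notes.

```matlab
% Minimal sketch (not from the original notes): recover the limiting density of Cn
% for c = 0.1 by numerically solving the cubic (38) in m at each point of a
% conjectured support region, then applying the inversion formula (5).
x = linspace(0.05, 6, 600);          % conjectured region of support (an assumption)
f = zeros(size(x));
for k = 1:length(x)
    z = x(k) + 1e-6i;                % approach the real axis from above
    % coefficients of (38) as a polynomial in m, highest degree first
    p = [(3/100)*z^2, (2/5)*z^2 - (27/50)*z, z^2 - (169/50)*z + 243/100, z - 99/50];
    m = roots(p);                    % the three candidate values of m(z)
    f(k) = max(max(imag(m)), 0)/pi;  % root in the upper half plane; density = Im m / pi
end
plot(x, f), xlabel('x'), ylabel('dF^C/dx')
```

Plotting f against x for this grid should reproduce the solid curve of Figure 5 up to the accuracy of the imaginary offset.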
Figure 5 shows the excellent agreement between the theoretical density (solid line) obtained from numerically solving (38) and the histogram of the eigenvalues of 1000 realizations of the matrix Cn with n = 100 and N = n/c = 1000. Figure 6 shows the behavior of the density for a range of values of c. This figure captures our intuition that as c → 0, the eigenvalues of the sample covariance matrix Cn will be increasingly localized about λ1 = 1 and λ2 = 3. By contrast, capturing this very same analytic behavior using finite RMT is not as straightforward.

Figure 6: The density of Cn with dH(τ) = 0.6 δ(τ − 1) + 0.4 δ(τ − 3) for different values of c (curves shown for c = 0.2, 0.05, and 0.01; horizontal axis x, vertical axis dF^C/dx).

Unlike the Wishart matrix, the distribution function (or level density) of finite dimensional covariance matrices, such as the Cn we considered in this example, can only be expressed in terms of zonal or other multivariate orthogonal polynomials that appear frequently in texts such as [19]. Though these polynomials have been studied extensively by multivariate statisticians [24, 25, 26, 27, 28, 29, 30, 31] and more recently by combinatorialists [32, 33, 34, 35, 36, 37], the prevailing consensus is that they are unwieldy and not particularly intuitive to work with. This is partly because of their definition as infinite series expansions, which makes their numerical evaluation a non-trivial task when dealing with matrices of moderate dimensions. More importantly, from an engineering point of view, the Stieltjes transform based approach allows us to generate plots of the form in Figure 6 under the implicit assumption that the matrix in question is infinite and yet predict the behavior of the eigenvalues for the practical finite matrix counterpart with remarkable accuracy, as Figure 5 corroborates. This is the primary motivation for this course's focus on developing infinite random matrix theory.

In the lectures that follow we will discuss other techniques that allow us to characterize a very broad class of infinite random matrices that cannot be characterized using finite RMT. We will often be intrigued by and speculate on the link between these infinite matrix ensembles and their finite matrix counterparts. We encourage you to ask us questions on this or to explore them further.