18.338J/16.394J: The Mathematics of Infinite Random Matrices
Project Ideas
Alan Edelman
Handout #8, Thursday, September 30, 2004

A requirement of this course is that the students experiment with a random matrix problem that is of interest to them. Often these explorations take the shape of a little bit of theory and a little bit of computational experimentation. Some of you might already have some ideas that are relevant to your current research. Regardless, we thought we'd put together some ideas for projects. Feel free to adapt them based on your interests; if you want more information about a particular idea, please feel free to contact us. Mid-term project presentations are tentatively scheduled for October 28th and November 2nd, as indicated on the course calendar. We will provide more details about this in a subsequent email to the class. The basic purpose of this mid-term project is to get your feet wet thinking about a problem that is of interest to you, using the tools you've learned about so far or learning new tools for that purpose.

1 Covariance Matrices in Signal Processing

Sample covariance matrices come up in many signal processing applications such as adaptive filtering. Let G be the n × N "Gaussian" random matrix that we've encountered in class. In signal processing language, the Wishart matrix

    W = \frac{1}{N} G G^H    (1)

is a rectangularly windowed estimator. In adaptive filtering applications, the covariance matrix that comes up can be expressed in terms of the Wishart matrix as A = W^{-1}. We've already seen how the Marčenko-Pastur theorem can be used to characterize the density of the eigenvalues of the Wishart matrix. The density of A can hence be computed using standard Jacobian tricks.
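As a computational warm-up, the Marčenko-Pastur prediction is easy to check numerically. The following sketch is not part of the original handout; the sizes n = 200, N = 400, the complex Gaussian entries, the seed, and the number of trials are all illustrative assumptions. It histograms the eigenvalues of sampled Wishart matrices against the limiting density:

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    n, N = 200, 400                  # illustrative sizes; aspect ratio c = n/N
    c = n / N
    trials = 50

    eigs = []
    for _ in range(trials):
        # complex Gaussian matrix with unit-variance entries (an assumption)
        G = (rng.standard_normal((n, N)) + 1j * rng.standard_normal((n, N))) / np.sqrt(2)
        W = G @ G.conj().T / N       # Wishart matrix, as in (1)
        eigs.append(np.linalg.eigvalsh(W))
    eigs = np.concatenate(eigs)

    # Marcenko-Pastur density, supported on [(1 - sqrt(c))^2, (1 + sqrt(c))^2] for c <= 1
    a, b = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2
    x = np.linspace(a, b, 400)
    f = np.sqrt((b - x) * (x - a)) / (2 * np.pi * c * x)

    plt.hist(eigs, bins=60, density=True, alpha=0.5, label="eigenvalues of W")
    plt.plot(x, f, "r", label="Marcenko-Pastur density")
    plt.legend()
    plt.show()

As n and N grow with c = n/N fixed, the histogram should fill out the Marčenko-Pastur curve; the eigenvalues of A = W^{-1} are simply the reciprocals of those of W, which gives a quick sanity check on the Jacobian computation mentioned above.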
A more general covariance matrix estimator is referred to as being exponentially windowed. To see where the "windowing" comes in, let us first rewrite the Wishart matrix as

    W = \frac{1}{N} \sum_{i=1}^{N} g_i g_i^H,    (2)

where g_i is the i-th column of G. The rectangular windowing comes in from the fact that each of the N vectors g_i is weighted equally by 1/N in forming W. Hence, the exponentially weighted estimator is simply

    W_\lambda = \frac{1}{N} \sum_{i=1}^{N} \lambda^{N-i} g_i g_i^H,    (3)

where λ < 1 is the weighting factor. In many signal processing publications, the consensus is that, loosely speaking,

    E[W_\lambda^{-1}] = (1 - \lambda) E[W^{-1}].    (4)

The proofs for this involve perturbation theory arguments. We feel that we can get some insight on this problem using random matrix techniques such as the ones you've learned in class.

Project Idea: Use random matrix arguments to rederive or verify this result. Indicate for what values of λ it is valid. Discuss any convergence issues, i.e., for what values of n and N it is approximately valid.
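A natural first step on this project is a Monte Carlo comparison of the two sides of (4). The sketch below is not from the handout, and the choices n = 10, N = 1000, λ = 0.98, real Gaussian entries, and 200 trials are arbitrary illustrative assumptions; it estimates E[W^{-1}] and E[W_λ^{-1}] by averaging over independent draws and prints the traces of both sides:

    import numpy as np

    rng = np.random.default_rng(0)
    n, N = 10, 1000          # illustrative sizes (assumption)
    lam = 0.98               # weighting factor lambda < 1 (assumption)
    trials = 200

    w = lam ** (N - np.arange(1, N + 1))   # weights lambda^{N-i} for i = 1, ..., N
    EinvW = np.zeros((n, n))
    EinvWlam = np.zeros((n, n))
    for _ in range(trials):
        G = rng.standard_normal((n, N))    # real Gaussian entries (an assumption)
        W = G @ G.T / N                    # rectangularly windowed estimator, as in (1)
        Wlam = (G * w) @ G.T / N           # exponentially windowed estimator, as in (3)
        EinvW += np.linalg.inv(W)
        EinvWlam += np.linalg.inv(Wlam)
    EinvW /= trials
    EinvWlam /= trials

    print("trace of E[W_lambda^{-1}]       :", np.trace(EinvWlam))
    print("trace of (1 - lambda) E[W^{-1}] :", (1 - lam) * np.trace(EinvW))

Sweeping λ and (n, N) in such an experiment is one way to map out where (4) is a good approximation. Note also that any overall normalization of (3) rescales the left-hand side of (4) by its reciprocal, so the comparison is sensitive to whether the 1/N factor in (3) is included.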