18.338J/16.394J: The Mathematics of Infinite Random Matrices
Essentials of Finite Random Matrix Theory
Alan Edelman
Handout #6, Tuesday, September 28, 2004

This handout provides the essential elements needed to understand finite random matrix theory. A cursory observation should reveal that the tools for infinite random matrix theory are quite different from the tools for finite random matrix theory. Nonetheless, there are significantly more published applications that use finite random matrix theory as opposed to infinite random matrix theory. Our belief is that many of the results that have been historically derived using finite random matrix theory can be reformulated and answered using infinite random matrix theory. In this sense, it is worth recognizing that in many applications it is an integral of a function of the eigenvalues that is more important than the mere distribution of the eigenvalues.
For finite random matrix theory, the tools that often come into play when setting up such integrals are the Matrix Jacobians, the Joint Eigenvalue Densities and the Cauchy-Binet theorem. We describe these in subsequent sections.

1  Matrix and Vector Differentiation

In this section, we concern ourselves with the differentiation of matrices. Differentiating matrix and vector functions is not significantly harder than differentiating scalar functions, except that we need notation to keep track of the variables. We titled this section “matrix and vector” differentiation, but of course it is the function that we differentiate. The matrix or vector is just a notational package for the scalar functions involved. In the end, a derivative is nothing more than the “linearization” of all the involved functions.

We find it useful to think of this linearization both symbolically (for manipulative purposes) as well as numerically (in the sense of small numerical perturbations). The differential notation idea captures these viewpoints very well.

We begin with the familiar product rule for scalars,

    d(uv) = u(dv) + v(du),

from which we can derive that d(x^3) = 3x^2 dx. We refer to dx as a differential. We all unconsciously interpret the “dx” symbolically as well as numerically. Sometimes it is nice to confirm on a computer that

    ((x + ε)^3 − x^3) / ε  ≈  3x^2.    (1)

I do this by taking x to be 1 or 2 or randn(1) and ε to be .001 or .0001.

The product rule holds for matrices as well:

    d(UV) = U(dV) + (dU)V.

In the examples we will see some symbolic and numerical interpretations.

Example 1: Y = X^3

We use the product rule to differentiate Y(X) = X^3 to obtain that

    d(X^3) = X^2(dX) + X(dX)X + (dX)X^2.
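The scalar approximation (1) can be confirmed exactly as the text suggests. Below is a minimal MATLAB/Octave-style sketch; the variable names and the particular value of ε are my own choices, following the randn(1) and .0001 suggestions above:

    % Check that ((x+e)^3 - x^3)/e is close to 3*x^2 for a small e
    x = randn(1);                     % or x = 1, x = 2, ...
    e = 1e-4;                         % the small perturbation epsilon
    lhs = ((x + e)^3 - x^3) / e;      % finite-difference quotient
    rhs = 3 * x^2;                    % the derivative predicted by d(x^3) = 3x^2 dx
    disp([lhs rhs])                   % the two numbers should agree to several digits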
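The matrix formula of Example 1 admits the same kind of numerical check. The sketch below (again MATLAB/Octave-style, with hypothetical variable names; the matrix size n and the scale of the perturbation are assumptions for illustration) perturbs X by a small random matrix E playing the role of dX, and compares (X + E)^3 − X^3 with X^2 E + X E X + E X^2; the two should agree up to terms of order ‖E‖^2:

    % Numerical check of d(X^3) = X^2(dX) + X(dX)X + (dX)X^2
    n = 4;
    X = randn(n);                     % random n-by-n matrix
    E = 1e-6 * randn(n);              % small perturbation in the role of dX
    lhs = (X + E)^3 - X^3;            % exact change in X^3
    rhs = X^2*E + X*E*X + E*X^2;      % the differential's prediction
    norm(lhs - rhs) / norm(lhs)       % relative error; should be on the order of norm(E)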