3  Y = BXA^T and the Kronecker Product

3.1  Jacobian of Y = BXA^T (Kronecker Product Approach)

There is a "nuts and bolts" approach to calculating some Jacobian determinants. A good example function is the matrix inverse Y = X^{-1}. We recall from Example 4 of Section 1 that dY = -X^{-1} dX X^{-1}. In words, the perturbation dX is multiplied on the left and on the right by a fixed matrix. When this happens we are in a "Kronecker product" situation, and can instantly write down the Jacobian.

We provide two definitions of the Kronecker product for square matrices A ∈ R^{n,n} and B ∈ R^{m,m}. See [1] for a nice discussion of Kronecker products.

Operator Definition. A ⊗ B is the operator from X ∈ R^{m,n} to Y ∈ R^{m,n} where Y = BXA^T. We write (A ⊗ B)X = BXA^T.

Matrix Definition (Tensor Product). A ⊗ B is the matrix

    A \otimes B = \begin{pmatrix} a_{11}B & \cdots & a_{1n}B \\ \vdots & & \vdots \\ a_{n1}B & \cdots & a_{nn}B \end{pmatrix}.    (4)

The following theorem is important for applications.

Theorem 1. det(A ⊗ B) = (det A)^m (det B)^n.

Application: If Y = X^{-1} then dY = -(X^{-T} ⊗ X^{-1}) dX, so that |det J| = |det X|^{-2n}.

Notational Note: The correspondence between the operator definition and the matrix definition is worth spelling out. It corresponds to the following identity in MATLAB:

    Y = B * X * A'
    Y(:) = kron(A,B) * X(:)   % the second line does not change Y

Here kron(A,B) is exactly the matrix in Equation (4), and X(:) is the column vector consisting of the columns of X stacked on top of each other. (In computer science this is known as storing an array in "column major" order.) Many authors write vec(X) where we use X(:). Concretely, we have that vec(BXA^T) = (A ⊗ B) vec(X), where A ⊗ B is as in (4).
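This correspondence can be checked numerically. Below is a minimal MATLAB/Octave sketch; the sizes m = 3, n = 4 and the random test matrices are illustrative choices, not part of the original notes.

    % Numerical sketch: vec(B X A^T) equals kron(A,B) * vec(X).
    m = 3; n = 4;
    A = randn(n, n);          % n-by-n, acts on the right as A^T
    B = randn(m, m);          % m-by-m, acts on the left
    X = randn(m, n);

    Y = B * X * A';           % operator definition of (A kron B) applied to X
    err = norm(Y(:) - kron(A, B) * X(:));
    disp(err)                 % ~ machine precision: the two definitions agree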
Proof of Theorem 1: Assume A and B are diagonalizable, with Au_i = λ_i u_i (i = 1, ..., n) and Bv_i = µ_i v_i (i = 1, ..., m). Let M_{ij} = v_i u_j^T. The mn matrices M_{ij} form a basis for R^{m,n}, and they are eigenmatrices of our map since B M_{ij} A^T = µ_i λ_j M_{ij}. The determinant is therefore the product of the eigenvalues,

    \prod_{1 \le i \le m,\ 1 \le j \le n} \mu_i \lambda_j = (\det A)^m (\det B)^n.    (5)

The assumption of diagonalizability is not important: diagonalizable matrices are dense, and both sides of the identity are continuous in A and B.
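Theorem 1 and the application to Y = X^{-1} can be spot-checked the same way. The following MATLAB/Octave sketch is only illustrative; the sizes and the random matrices are arbitrary choices.

    % Numerical sketch of Theorem 1: det(A kron B) = det(A)^m * det(B)^n.
    m = 3; n = 4;
    A = randn(n, n);  B = randn(m, m);
    lhs = det(kron(A, B));
    rhs = det(A)^m * det(B)^n;
    disp(abs(lhs - rhs) / abs(rhs))          % relative error, ~ machine precision

    % Application: for Y = inv(X) with X n-by-n, dY = -inv(X) dX inv(X), so the
    % Jacobian matrix is J = -kron(inv(X)', inv(X)) and |det J| = |det X|^(-2n).
    X = randn(n, n);
    J = -kron(inv(X)', inv(X));
    disp(abs(det(J)) / abs(det(X))^(-2*n) - 1)   % ~ machine precision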