3.6 Linear Independence and Rank

A set of vectors {x1, x2, . . . , xn} ⊂ R^m is said to be (linearly) independent if no vector can be represented as a linear combination of the remaining vectors. Conversely, if one vector belonging to the set can be represented as a linear combination of the remaining vectors, then the vectors are said to be (linearly) dependent. That is, if

    xn = Σ_{i=1}^{n−1} αi xi

for some scalar values α1, . . . , αn−1 ∈ R, then we say that the vectors x1, . . . , xn are linearly dependent; otherwise, the vectors are linearly independent. For example, the vectors

    x1 = [1, 2, 3]^T,   x2 = [4, 1, 5]^T,   x3 = [2, −3, −1]^T

are linearly dependent because x3 = −2x1 + x2.

The column rank of a matrix A ∈ R^{m×n} is the size of the largest subset of columns of A that constitutes a linearly independent set. With some abuse of terminology, this is often referred to simply as the number of linearly independent columns of A. In the same way, the row rank is the largest number of rows of A that constitute a linearly independent set.

For any matrix A ∈ R^{m×n}, it turns out that the column rank of A is equal to the row rank of A (though we will not prove this), and so both quantities are referred to collectively as the rank of A, denoted rank(A). The following are some basic properties of the rank:

• For A ∈ R^{m×n}, rank(A) ≤ min(m, n). If rank(A) = min(m, n), then A is said to be full rank.
• For A ∈ R^{m×n}, rank(A) = rank(A^T).
• For A ∈ R^{m×n}, B ∈ R^{n×p}, rank(AB) ≤ min(rank(A), rank(B)).
• For A, B ∈ R^{m×n}, rank(A + B) ≤ rank(A) + rank(B).

3.7 The Inverse

The inverse of a square matrix A ∈ R^{n×n} is denoted A^{−1}, and is the unique matrix such that

    A^{−1}A = I = AA^{−1}.

Note that not all matrices have inverses. Non-square matrices, for example, do not have inverses by definition. However, for some square matrices A, it may still be the case that
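The dependence relation x3 = −2x1 + x2 and the rank properties listed above can be checked numerically; the following is a small sketch using NumPy (the matrices A and B here are illustrative choices, not part of the original text):

```python
import numpy as np

# The three vectors from the example in Section 3.6.
x1 = np.array([1.0, 2.0, 3.0])
x2 = np.array([4.0, 1.0, 5.0])
x3 = np.array([2.0, -3.0, -1.0])

# x3 is a linear combination of x1 and x2, so the set is dependent.
assert np.allclose(x3, -2 * x1 + x2)

# Stack the vectors as columns of a 3x3 matrix; only two columns are
# independent, so the rank is 2 rather than 3 (A is not full rank).
A = np.column_stack([x1, x2, x3])
print(np.linalg.matrix_rank(A))    # 2

# Column rank equals row rank: A and its transpose have the same rank.
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T)

# rank(AB) <= min(rank(A), rank(B)) for an arbitrary compatible B.
B = np.random.rand(3, 5)
assert np.linalg.matrix_rank(A @ B) <= min(
    np.linalg.matrix_rank(A), np.linalg.matrix_rank(B))
```

Note that `np.linalg.matrix_rank` computes the rank via the singular value decomposition, counting singular values above a tolerance, which is more robust in floating point than counting pivots in Gaussian elimination.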
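The defining property A^{−1}A = I = AA^{−1} can likewise be verified numerically. A minimal sketch, with an arbitrarily chosen invertible 2×2 matrix (not from the text):

```python
import numpy as np

# A hypothetical invertible 2x2 matrix chosen for illustration.
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])

A_inv = np.linalg.inv(A)

# The defining property of the inverse: A^{-1} A = I = A A^{-1}.
I = np.eye(2)
assert np.allclose(A_inv @ A, I)
assert np.allclose(A @ A_inv, I)

# A square matrix without an inverse: the second row is twice the
# first, so S is singular and np.linalg.inv raises LinAlgError.
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])
try:
    np.linalg.inv(S)
except np.linalg.LinAlgError:
    print("S is singular; no inverse exists")
```

The singular matrix S previews the point the text is building toward: being square is necessary but not sufficient for a matrix to have an inverse.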