Digital Image Processing Second Edition Review Material Rafael C. Gonzalez Richard E. Woods Prentice Hall Upper Saddle River, NJ 07458 www.prenhall.com/gonzalezwoods or www.imageprocessingbook.com
Preface

The purpose of this brief review is to provide the reader with the necessary background to follow material in the book dealing with matrices and vectors, probability, and linear systems. The review is divided into three main sections, each dealing with one of the three preceding topics. The material is highly focused on the needs of someone who needs a refresher in one of these topics. Whenever possible, we have used the same notation employed in the text.
Contents

Preface iii
1 A Brief Review of Matrices and Vectors 1
1.1 Matrices 1
1.2 Vectors and Vector Spaces 4
1.3 Eigenvalues and Eigenvectors 9
2 A Brief Review of Probability and Random Variables 13
2.1 Sets and Set Operations 13
2.2 Relative Frequency and Probability 16
2.3 Random Variables 21
2.4 Expected Value and Moments 24
2.5 The Gaussian Probability Density Function 26
2.6 Several Random Variables 27
2.7 The Multivariate Gaussian Density 29
2.8 Linear Transformations of Random Vectors 31
3 A Brief Overview of Linear Systems 33
3.1 Introductory Definitions 33
3.2 Linear System Characterization-Convolution 35
1 A Brief Review of Matrices and Vectors

The purpose of this short document is to provide the reader with background sufficient to follow the discussions in Digital Image Processing, 2nd ed., by Gonzalez and Woods. The notation is the same as that used in the book.

1.1 Matrices

Introductory Definitions

We begin with the definition of a matrix. An m × n (read "m by n") matrix, denoted by A, is a rectangular array of entries or elements (numbers, or symbols representing numbers), typically enclosed by square brackets. In this notation, m is the number of horizontal rows and n is the number of vertical columns in the array. Sometimes m and n are referred to as the dimensions or order of the matrix, and we say that matrix A has dimensions m by n or is of order m by n. We use the following notation to represent an m × n matrix A:

A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}

where a_{ij} represents the (i, j)-th entry.

If m = n, then A is a square matrix. If A is square, a_{ij} = 0 for all i ≠ j, and not all a_{ii} are zero, the matrix is said to be diagonal. In other words, a diagonal matrix is a square matrix in which all elements not on the main diagonal are zero. A diagonal matrix in which all diagonal elements are equal to 1 is called the identity matrix, typically denoted by I. A matrix in which all elements are 0 is called the zero or null matrix, typically denoted by 0. The trace of a matrix A (not necessarily diagonal),
denoted tr(A), is the sum of the elements in the main diagonal of A. Two matrices A and B are equal if and only if they have the same number of rows and columns, and a_{ij} = b_{ij} for all i and j.

The transpose of an m × n matrix A, denoted A^T, is an n × m matrix obtained by interchanging the rows and columns of A. That is, the first row of A becomes the first column of A^T, the second row of A becomes the second column of A^T, and so on. A square matrix for which A^T = A is said to be symmetric.

Any matrix X for which XA = I and AX = I is called the inverse of A. Usually, the inverse of A is denoted A^{-1}. Although numerous procedures exist for computing the inverse of a matrix, the usual procedure is to use a computer program for this purpose, so we will not dwell on this topic here. The interested reader can consult any book on matrix theory for extensive theoretical and practical discussions dealing with matrix inverses. A matrix that possesses an inverse in the sense just defined is called a nonsingular matrix.

Associated with matrix inverses is the computation of the determinant of a matrix. Although the determinant is a scalar, its definition is a little more complicated than those discussed in the previous paragraphs. Let A be an m × m (square) matrix. The (i, j)-minor of A, denoted M_{ij}, is the determinant of the (m-1) × (m-1) matrix formed by deleting the ith row and the jth column of A. The (i, j)-cofactor of A, denoted C_{ij}, is (-1)^{i+j} M_{ij}. The determinant of a 1 × 1 matrix [\alpha], denoted det([\alpha]), is det([\alpha]) = \alpha. Finally, we define the determinant of an m × m matrix A as

det(A) = \sum_{j=1}^{m} a_{1j} C_{1j}.

In other words, the determinant of a (square) matrix is the sum of the products of the elements in the first row of the matrix and the cofactors of the first row. As is true of inverses, determinants usually are obtained using a computer.
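As a concrete illustration, the cofactor expansion just given can be sketched in a few lines of NumPy-based Python. This is our own sketch, not from the text: the function name `det_cofactor` and the test matrix are illustrative, and in practice one would simply call `numpy.linalg.det`.

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion along the first row:
    det(A) = sum_j a_1j * C_1j, with C_1j = (-1)^(1+j) * M_1j."""
    A = np.asarray(A, dtype=float)
    m = A.shape[0]
    if m == 1:
        return A[0, 0]  # det([a]) = a, the 1 x 1 base case
    total = 0.0
    for j in range(m):
        # Minor M_1j: determinant of A with the first row and jth column deleted.
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        # With 0-based j, (-1)**j equals the 1-based sign (-1)**(1 + (j+1)).
        total += (-1) ** j * A[0, j] * det_cofactor(minor)
    return total

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(det_cofactor(A))  # agrees with np.linalg.det(A)
print(np.trace(A))      # trace: sum of the main-diagonal elements
```

The recursion mirrors the definition exactly (expansion along the first row), which makes it exponential in m; library routines use factorization methods instead.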
Basic Matrix Operations

Let c be a real or complex number (often called a scalar). The scalar multiple of scalar c and matrix A, denoted cA, is obtained by multiplying every element of A by c. If c = -1, the scalar multiple is called the negative of A.

Assuming that they have the same number of rows and columns, the sum of two matrices A and B, denoted A + B, is the matrix obtained by adding the corresponding elements
of the two matrices. In other words, the sum is an m × n matrix whose (i, j)-th element is a_{ij} + b_{ij}. Similarly, the difference of two matrices, denoted A - B, has elements a_{ij} - b_{ij}.

The product, AB, of an m × n matrix A and a p × q matrix B is an m × q matrix C whose (i, j)-th element is formed by multiplying the entries across the ith row of A by the entries down the jth column of B. In other words,

c_{ij} = a_{i1} b_{1j} + a_{i2} b_{2j} + \cdots + a_{in} b_{pj}

for i = 1, 2, ..., m and j = 1, 2, ..., q. We see from the preceding equation that matrix multiplication is defined only if n and p are equal. Also, as will be shown shortly, the sum of products just described is equal to the so-called inner product of rows of A with columns of B. Finally, we note that division of one matrix by another is not defined.

Example 1.1 Suppose that

A = \begin{bmatrix} 1 & -2 \\ 3 & 2 \end{bmatrix} and B = \begin{bmatrix} 1 & 2 & 4 \\ 1 & 3 & 1 \end{bmatrix}.

Then,

C = AB = \begin{bmatrix} -1 & -4 & 2 \\ 5 & 12 & 14 \end{bmatrix}.

Later in this discussion, we will make use of matrix products in which matrix B has only one column. A simple example is

\begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} a x_1 + b x_2 \\ c x_1 + d x_2 \end{bmatrix}.

Also of special interest are products in which matrices consist of only one row or one column, appropriately called row and column matrices, respectively. In subsequent discussions we refer to these as row vectors or column vectors, respectively, and denote them by lowercase bold letters, with the understanding that they are row or column matrices. For example, consider two column vectors a and b, of dimension m × 1, as follows:

a = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_m \end{bmatrix} and a^T = [a_1, a_2, \cdots, a_m]
and

b = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}.

Keeping in mind the matrix dimensions required for the matrix products defined above, the product of a and b is a 1 × 1 matrix, given by

a^T b = b^T a = a_1 b_1 + a_2 b_2 + \cdots + a_m b_m = \sum_{i=1}^{m} a_i b_i.

This particular product is often called the dot or inner product of two vectors. We have much more to say about this in the following section.

1.2 Vectors and Vector Spaces

Vectors

As introduced in the previous section, we refer to an m × 1 column matrix as a column vector. Such a vector assumes geometric meaning when we associate geometrical properties with its elements. For example, consider the familiar two-dimensional (Euclidean) space in which a point is represented by its (x, y) coordinates. These coordinates can be expressed in terms of a column vector as follows:

u = \begin{bmatrix} x \\ y \end{bmatrix}.

Then, for example, point (1, 2) becomes the specific vector

u = \begin{bmatrix} 1 \\ 2 \end{bmatrix}.

Geometrically, we represent this vector as a directed line segment from the origin to point (1, 2). In three-dimensional space the vector would have components (x, y, z). In m-dimensional space we run out of letters and use the same symbol with subscripts to represent the elements of a vector. That is, an m-dimensional vector is represented as

x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{bmatrix}.
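The column-vector representation and the products defined above are easy to check numerically. The following Python/NumPy sketch is our own illustration: it recomputes the matrix product of Example 1.1, and the vectors u and v used for the inner product are arbitrary values, not from the text.

```python
import numpy as np

# Matrices from Example 1.1.
A = np.array([[1, -2],
              [3,  2]])
B = np.array([[1, 2, 4],
              [1, 3, 1]])

C = A @ B  # (2 x 2)(2 x 3) -> 2 x 3 product
print(C)   # [[-1 -4  2], [ 5 12 14]], as in Example 1.1

# Element-wise definition: c_ij is the inner product of row i of A
# with column j of B. Here, c_12 = 1*2 + (-2)*3 = -4.
c12 = sum(A[0, k] * B[k, 1] for k in range(2))

# Inner (dot) product a^T b of two m x 1 vectors (illustrative values).
u = np.array([1, 2, 3])
v = np.array([4, 5, 6])
inner = u @ v  # 1*4 + 2*5 + 3*6 = 32
```

Note that `@` refuses the product `B @ A` here, matching the text's requirement that the inner dimensions n and p agree.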
When expressed in the form of these column matrices, arithmetic operations between vectors follow the same rules as they do for matrices. The product of a vector by a scalar is obtained simply by multiplying every element of the vector by the scalar. The sum of two vectors x and y is formed by the addition of corresponding elements (x_1 + y_1, x_2 + y_2, and so on), and similarly for subtraction. Multiplication of two vectors is as defined in Example 1.1. Division of one vector by another is not defined.

Vector Spaces

The definition of a vector space is both intuitive and straightforward. A vector space is defined as a nonempty set V of entities called vectors and associated scalars that satisfy the conditions outlined in A through C below. A vector space is real if the scalars are real numbers; it is complex if the scalars are complex numbers.

Condition A: There is in V an operation called vector addition, denoted x + y, that satisfies:
1. x + y = y + x for all vectors x and y in the space.
2. x + (y + z) = (x + y) + z for all x, y, and z.
3. There exists in V a unique vector, called the zero vector, and denoted 0, such that x + 0 = x and 0 + x = x for all vectors x.
4. For each vector x in V, there is a unique vector in V, called the negation of x, and denoted -x, such that x + (-x) = 0 and (-x) + x = 0.

Condition B: There is in V an operation called multiplication by a scalar that associates with each scalar c and each vector x in V a unique vector called the product of c and x, denoted by cx and xc, and which satisfies:
1. c(dx) = (cd)x for all scalars c and d, and all vectors x.
2. (c + d)x = cx + dx for all scalars c and d, and all vectors x.
3. c(x + y) = cx + cy for all scalars c and all vectors x and y.

Condition C: 1x = x for all vectors x.

We are interested particularly in real vector spaces of real m × 1 column matrices, with vector addition and multiplication by scalars being as defined earlier for matrices. We shall denote such spaces by ℜ^m.
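Conditions A through C can be spot-checked numerically for ℜ^2. The sketch below is our own illustration: the vectors x, y, z and scalars c, d are arbitrary choices, with NumPy arrays standing in for column vectors.

```python
import numpy as np

# Arbitrary vectors in R^2 and arbitrary scalars.
x = np.array([2.0, 1.0])
y = np.array([1.0, 3.0])
z = np.array([-1.0, 4.0])
c, d = 2.0, -0.5

assert np.allclose(x + y, y + x)               # A1: commutativity
assert np.allclose(x + (y + z), (x + y) + z)   # A2: associativity
assert np.allclose(x + np.zeros(2), x)         # A3: zero vector
assert np.allclose(x + (-x), np.zeros(2))      # A4: negation
assert np.allclose(c * (d * x), (c * d) * x)   # B1
assert np.allclose((c + d) * x, c * x + d * x) # B2
assert np.allclose(c * (x + y), c * x + c * y) # B3
assert np.allclose(1 * x, x)                   # C
print("all vector-space checks passed")
```

Of course, passing for one choice of vectors does not prove the axioms; it only illustrates what each condition asserts.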
Using the notation introduced previously, vectors (column matrices) in ℜ^m are written as

x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{bmatrix}.

Example 1.2 The vector space with which we are most familiar is the two-dimensional real vector space ℜ^2, in which we make frequent use of graphical representations for operations such as vector addition, subtraction, and multiplication by a scalar. For instance, consider the two vectors

a = \begin{bmatrix} 2 \\ 1 \end{bmatrix} and b = \begin{bmatrix} 1 \\ 3 \end{bmatrix}.

Using the rules of matrix addition and subtraction, we have

a + b = \begin{bmatrix} 3 \\ 4 \end{bmatrix} and a - b = \begin{bmatrix} 1 \\ -2 \end{bmatrix}.

Figure 1.1 shows the familiar graphical representation of these operations, as well as multiplication of vector a by the scalar c = -0.5.

Consider two real vector spaces V_0 and V such that: (1) each element of V_0 is also an element of V (i.e., V_0 is a subset of V); (2) operations on elements of V_0 are the same as on elements of V. Under these conditions, V_0 is said to be a subspace of V.

A linear combination of vectors v_1, v_2, ..., v_n is an expression of the form

\alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_n v_n

where the \alpha's are scalars.

A vector v is said to be linearly dependent on a set, S, of vectors v_1, v_2, ..., v_n if and only if v can be written as a linear combination of these vectors. Otherwise, v is linearly independent of the set of vectors v_1, v_2, ..., v_n.
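The linear-dependence test just stated can be carried out with a rank comparison: v is a linear combination of v_1, ..., v_n exactly when appending v to the set does not increase the rank of the matrix whose columns are those vectors. The following Python/NumPy sketch is our own illustration; the function name and the example vectors are not from the text.

```python
import numpy as np

def is_linear_combination(v, vectors, tol=1e-10):
    """Return True if v is linearly dependent on the given vectors,
    i.e. v lies in their span (rank test on the column matrix)."""
    V = np.column_stack(vectors)              # columns are v_1, ..., v_n
    augmented = np.column_stack(vectors + [v])
    return np.linalg.matrix_rank(V, tol=tol) == np.linalg.matrix_rank(augmented, tol=tol)

v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])

print(is_linear_combination(np.array([2.0, 3.0, 5.0]), [v1, v2]))  # True: 2*v1 + 3*v2
print(is_linear_combination(np.array([0.0, 0.0, 1.0]), [v1, v2]))  # False: not in the span
```

The rank test sidesteps solving for the coefficients explicitly; if the coefficients themselves are needed, a least-squares solve (`np.linalg.lstsq`) with a residual check does both jobs at once.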