

Wavelets for Computer Graphics: A Primer, Part 1†

Eric J. Stollnitz    Tony D. DeRose    David H. Salesin
University of Washington

1 Introduction

Wavelets are a mathematical tool for hierarchically decomposing functions. They allow a function to be described in terms of a coarse overall shape, plus details that range from broad to narrow. Regardless of whether the function of interest is an image, a curve, or a surface, wavelets offer an elegant technique for representing the levels of detail present. This primer is intended to provide people working in computer graphics with some intuition for what wavelets are, as well as to present the mathematical foundations necessary for studying and using them. In Part 1, we discuss the simple case of Haar wavelets in one and two dimensions, and show how they can be used for image compression. In Part 2, we will present the mathematical theory of multiresolution analysis, then develop spline wavelets and describe their use in multiresolution curve and surface editing.

Although wavelets have their roots in approximation theory [5] and signal processing [13], they have recently been applied to many problems in computer graphics. These graphics applications include image editing [1], image compression [6], and image querying [10]; automatic level-of-detail control for editing and rendering curves and surfaces [7, 8, 12]; surface reconstruction from contours [14]; and fast methods for solving simulation problems in animation [11] and global illumination [3, 4, 9, 15]. For a discussion of wavelets that goes beyond the scope of this primer, we refer readers to our forthcoming monograph [16].

We set the stage here by first presenting the simplest form of wavelets, the Haar basis. We cover one-dimensional wavelet transforms and basis functions, and show how these tools can be used to compress the representation of a piecewise-constant function.
Then we discuss two-dimensional generalizations of the Haar basis, and demonstrate how to apply these wavelets to image compression. Because linear algebra is central to the mathematics of wavelets, we briefly review important concepts in Appendix A.

2 Wavelets in one dimension

The Haar basis is the simplest wavelet basis. We will first discuss how a one-dimensional function can be decomposed using Haar wavelets, and then describe the actual basis functions. Finally, we show how using the Haar wavelet decomposition leads to a straightforward technique for compressing a one-dimensional function.

2.1 One-dimensional Haar wavelet transform

To get a sense for how wavelets work, let's start with a simple example. Suppose we are given a one-dimensional "image" with a resolution of four pixels, having values

    [9 7 3 5]

† Eric J. Stollnitz, Tony D. DeRose, and David H. Salesin. Wavelets for computer graphics: A primer, part 1. IEEE Computer Graphics and Applications, 15(3):76–84, May 1995.

We can represent this image in the Haar basis by computing a wavelet transform. To do this, we first average the pixels together, pairwise, to get the new lower resolution image with pixel values

    [8 4]

Clearly, some information has been lost in this averaging process. To recover the original four pixel values from the two averaged values, we need to store some detail coefficients, which capture the missing information. In our example, we will choose 1 for the first detail coefficient, since the average we computed is 1 less than 9 and 1 more than 7. This single number allows us to recover the first two pixels of our original four-pixel image. Similarly, the second detail coefficient is −1, since 4 + (−1) = 3 and 4 − (−1) = 5.

Thus, we have decomposed the original image into a lower resolution (two-pixel) version and a pair of detail coefficients.
Repeating this process recursively on the averages gives the full decomposition:

    Resolution    Averages     Detail coefficients
    4             [9 7 3 5]
    2             [8 4]        [1 −1]
    1             [6]          [2]

Finally, we will define the wavelet transform (also called the wavelet decomposition) of the original four-pixel image to be the single coefficient representing the overall average of the original image, followed by the detail coefficients in order of increasing resolution. Thus, for the one-dimensional Haar basis, the wavelet transform of our original four-pixel image is given by

    [6 2 1 −1]

The way we computed the wavelet transform, by recursively averaging and differencing coefficients, is called a filter bank, a process we will generalize to other types of wavelets in Part 2 of our tutorial. Note that no information has been gained or lost by this process. The original image had four coefficients, and so does the transform. Also note that, given the transform, we can reconstruct the image to any resolution by recursively adding and subtracting the detail coefficients from the lower resolution versions.

Storing the image's wavelet transform, rather than the image itself, has a number of advantages. One advantage of the wavelet transform is that often a large number of the detail coefficients turn out to be very small in magnitude, as in the example of Figure 1. Truncating, or removing, these small coefficients from the representation introduces only small errors in the reconstructed image, giving a form of "lossy" image compression. We will discuss this particular application of wavelets in Section 2.3, after we present the one-dimensional Haar basis functions.
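The recursive averaging and differencing described above can be sketched in Python. This is a minimal sketch with our own helper names (not from the paper), implementing the unnormalized variant of the transform presented in this section:

```python
def decomposition_step(c):
    """One level: pairwise averages followed by detail coefficients."""
    half = len(c) // 2
    averages = [(c[2 * i] + c[2 * i + 1]) / 2 for i in range(half)]
    details = [(c[2 * i] - c[2 * i + 1]) / 2 for i in range(half)]
    return averages + details

def haar_transform(c):
    """Unnormalized 1D Haar wavelet transform (length must be a power of 2)."""
    c = list(c)
    h = len(c)
    while h > 1:
        c[:h] = decomposition_step(c[:h])
        h //= 2
    return c

def haar_reconstruct(c):
    """Invert the transform by recursively adding and subtracting details."""
    c = list(c)
    h = 1
    while h < len(c):
        front = []
        for avg, det in zip(c[:h], c[h:2 * h]):
            front += [avg + det, avg - det]
        c[:2 * h] = front
        h *= 2
    return c
```

On the running example, `haar_transform([9, 7, 3, 5])` yields `[6, 2, 1, -1]`, and `haar_reconstruct` recovers the original pixels, illustrating that no information is lost.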


[Figure panels: V^4 through V^0 approximations (left column), W^3 through W^0 detail coefficients (right column)]
Figure 1 A sequence of decreasing-resolution approximations to a function (left), along with the detail coefficients required to recapture the finest approximation (right). Note that in regions where the true function is close to being flat, a piecewise-constant approximation works well, so the corresponding detail coefficients are relatively small.

2.2 One-dimensional Haar wavelet basis functions

We have shown how one-dimensional images can be treated as sequences of coefficients. Alternatively, we can think of images as piecewise-constant functions on the half-open interval [0, 1). To do so, we will use the concept of a vector space from linear algebra. A one-pixel image is just a function that is constant over the entire interval [0, 1). We'll let V^0 be the vector space of all these functions. A two-pixel image has two constant pieces over the intervals [0, 1/2) and [1/2, 1). We'll call the space containing all these functions V^1. If we continue in this manner, the space V^j will include all piecewise-constant functions defined on the interval [0, 1) with constant pieces over each of 2^j equal subintervals.

We can now think of every one-dimensional image with 2^j pixels as an element, or vector, in V^j. Note that because these vectors are all functions defined on the unit interval, every vector in V^j is also contained in V^(j+1). For example, we can always describe a piecewise-constant function with two intervals as a piecewise-constant function with four intervals, with each interval in the first function corresponding to a pair of intervals in the second. Thus, the spaces V^j are nested; that is,

    V^0 ⊂ V^1 ⊂ V^2 ⊂ ⋯

The mathematical theory of multiresolution analysis requires this nested set of spaces V^j. We will consider this topic more thoroughly in Part 2.
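The nesting V^j ⊂ V^(j+1) can be checked concretely: splitting each constant piece into two equal pieces re-expresses a function in the next finer space without changing it. A small sketch (helper names are ours, not the paper's):

```python
def refine(pixels):
    """Re-express a piecewise-constant function in V^j as an element of
    V^(j+1) by splitting each constant piece into two equal pieces."""
    return [v for v in pixels for _ in range(2)]

def evaluate(pixels, x):
    """Value at x in [0, 1) of the piecewise-constant function that the
    pixel list represents."""
    return pixels[int(x * len(pixels))]
```

For example, the two-pixel image [8, 4] and its refinement [8, 8, 4, 4] agree at every point of [0, 1).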
Now we need to define a basis for each vector space V^j. The basis functions for the spaces V^j are called scaling functions, and are usually denoted by the symbol φ. A simple basis for V^j is given by the set of scaled and translated "box" functions:

    φ_i^j(x) := φ(2^j x − i),   i = 0, …, 2^j − 1,

where

    φ(x) := 1 for 0 ≤ x < 1, and 0 otherwise.

As an example, Figure 2 shows the four box functions forming a basis for V^2.

The next step is to choose an inner product defined on the vector spaces V^j. The "standard" inner product,

    ⟨f | g⟩ := ∫₀¹ f(x) g(x) dx,

for two elements f, g ∈ V^j will do quite well for our running example. We can now define a new vector space W^j as the orthogonal complement of V^j in V^(j+1). In other words, we will let W^j be the space of all functions in V^(j+1) that are orthogonal to all functions in V^j under the chosen inner product. Informally, we can think of the wavelets in W^j as a means for representing the parts of a function in V^(j+1) that cannot be represented in V^j.

A collection of linearly independent functions ψ_i^j(x) spanning W^j are called wavelets. These basis functions have two important properties:

1. The basis functions ψ_i^j of W^j, together with the basis functions φ_i^j of V^j, form a basis for V^(j+1).

2. Every basis function ψ_i^j of W^j is orthogonal to every basis function φ_i^j of V^j under the chosen inner product.¹

Thus, the "detail coefficients" of Section 2.1 are really coefficients of the wavelet basis functions. The wavelets corresponding to the box basis are known as the Haar wavelets, given by

    ψ_i^j(x) := ψ(2^j x − i),   i = 0, …, 2^j − 1,

where

    ψ(x) := 1 for 0 ≤ x < 1/2, −1 for 1/2 ≤ x < 1, and 0 otherwise.

Figure 3 shows the two Haar wavelets spanning W^1.

Before going on, let's run through our example from Section 2.1 again, but now applying these more sophisticated ideas. We begin by expressing our original image I(x) as a linear combination of the box basis functions in V^2:

    I(x) = c_0^2 φ_0^2(x) + c_1^2 φ_1^2(x) + c_2^2 φ_2^2(x) + c_3^2 φ_3^2(x).
¹Some authors refer to functions with these properties as pre-wavelets, reserving the term "wavelet" for functions ψ_i^j that are also orthogonal to each other.
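The box-basis expansion above can be spot-checked numerically. A short sketch (function names are ours) using the box function and Haar wavelet as just defined, with the coefficients equal to the pixel values [9 7 3 5], and the same function re-expressed via its wavelet transform [6 2 1 −1] from Section 2.1:

```python
def phi(x):
    """Box scaling function."""
    return 1.0 if 0 <= x < 1 else 0.0

def psi(x):
    """Haar mother wavelet."""
    if 0 <= x < 0.5:
        return 1.0
    if 0.5 <= x < 1:
        return -1.0
    return 0.0

def phi_ji(j, i, x):
    return phi(2 ** j * x - i)

def psi_ji(j, i, x):
    return psi(2 ** j * x - i)

def I(x):
    """I(x) expanded in the box basis for V^2 with coefficients [9, 7, 3, 5]."""
    c = [9, 7, 3, 5]
    return sum(c[i] * phi_ji(2, i, x) for i in range(4))

def I_wavelet(x):
    """The same function: overall average plus details at two scales."""
    return (6 * phi_ji(0, 0, x) + 2 * psi_ji(0, 0, x)
            + 1 * psi_ji(1, 0, x) - 1 * psi_ji(1, 1, x))
```

Sampling one point inside each quarter of [0, 1) recovers the pixel values 9, 7, 3, 5 from either expansion, confirming that the detail coefficients are indeed coefficients of wavelet basis functions.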


Figure 2 The box basis for V^2. [pictures of the four box functions φ_0^2, φ_1^2, φ_2^2, φ_3^2]
Figure 3 The Haar wavelets for W^1. [pictures of ψ_0^1 and ψ_1^1]

A more graphical representation is

    I(x) = 9 φ_0^2(x) + 7 φ_1^2(x) + 3 φ_2^2(x) + 5 φ_3^2(x).

Note that the coefficients c_0^2, …, c_3^2 are just the four original pixel values [9 7 3 5].

We can rewrite the expression for I(x) in terms of basis functions in V^1 and W^1, using pairwise averaging and differencing:

    I(x) = c_0^1 φ_0^1(x) + c_1^1 φ_1^1(x) + d_0^1 ψ_0^1(x) + d_1^1 ψ_1^1(x)
         = 8 φ_0^1(x) + 4 φ_1^1(x) + 1 ψ_0^1(x) − 1 ψ_1^1(x).

These four coefficients should look familiar as well.

Finally, we'll rewrite I(x) as a sum of basis functions in V^0, W^0, and W^1:

    I(x) = c_0^0 φ_0^0(x) + d_0^0 ψ_0^0(x) + d_0^1 ψ_0^1(x) + d_1^1 ψ_1^1(x)
         = 6 φ_0^0(x) + 2 ψ_0^0(x) + 1 ψ_0^1(x) − 1 ψ_1^1(x).

Once again, these four coefficients are the Haar wavelet transform of the original image. The four functions shown above constitute the Haar basis for V^2. Instead of using the usual four box functions, we can use φ_0^0, ψ_0^0, ψ_0^1, and ψ_1^1 to represent the overall average, the broad detail, and the two types of finer detail possible in a function in V^2. The Haar basis for V^j with j > 2 includes these functions as well as narrower translates of the wavelet ψ(x).

Orthogonality

The Haar basis possesses an important property known as orthogonality, which is not always shared by other wavelet bases. An orthogonal basis is one in which all of the basis functions, in this case φ_0^0, ψ_0^0, ψ_0^1, ψ_1^1, …, are orthogonal to one another. Note that orthogonality is stronger than the minimum requirement for wavelets that ψ_i^j be orthogonal to all scaling functions at the same resolution level j.

Normalization

Another property that is sometimes desirable is normalization. A basis function u(x) is normalized if ⟨u | u⟩ = 1. We can normalize the Haar basis by replacing our earlier definitions with

    φ_i^j(x) := 2^(j/2) φ(2^j x − i)
    ψ_i^j(x) := 2^(j/2) ψ(2^j x − i),

where the constant factor of 2^(j/2) is chosen to satisfy ⟨u | u⟩ = 1 for the standard inner product.
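Orthonormality of the normalized basis can be verified numerically. Since every function involved is constant on dyadic subintervals, sampling at the midpoints of a fine uniform grid computes the inner products exactly. A sketch (helper names are ours):

```python
def phi(x):
    return 1.0 if 0 <= x < 1 else 0.0

def psi(x):
    if 0 <= x < 0.5:
        return 1.0
    if 0.5 <= x < 1:
        return -1.0
    return 0.0

def normalized(f, j, i):
    """Normalized scaling function or wavelet: 2^(j/2) f(2^j x - i)."""
    return lambda x: 2 ** (j / 2) * f(2 ** j * x - i)

def inner(u, v, n=256):
    """<u | v> on [0, 1); exact for functions constant on the n subintervals."""
    return sum(u((k + 0.5) / n) * v((k + 0.5) / n) for k in range(n)) / n
```

For the Haar basis of V^2, every basis function then has unit norm and distinct basis functions have zero inner product.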
With these modified definitions, the new normalized coefficients are obtained by multiplying each old coefficient with superscript j by 2^(−j/2). Thus, in the example from the previous section, the unnormalized coefficients [6 2 1 −1] become the normalized coefficients

    [6  2  1/√2  −1/√2]

As an alternative to first computing the unnormalized coefficients and then normalizing them, we can include normalization in the decomposition algorithm. The following two pseudocode procedures accomplish this normalized decomposition:

    procedure DecompositionStep(C: array [1..h] of reals)
        for i ← 1 to h/2 do
            C′[i] ← (C[2i − 1] + C[2i]) / √2
            C′[h/2 + i] ← (C[2i − 1] − C[2i]) / √2
        end for
        C ← C′
    end procedure

    procedure Decomposition(C: array [1..h] of reals)
        C ← C / √h    (normalize input coefficients)
        while h > 1 do
            DecompositionStep(C[1..h])
            h ← h/2
        end while
    end procedure

Now we can work with an orthonormal basis, meaning one that is both orthogonal and normalized. Using an orthonormal basis turns


out to be handy when compressing a function or an image, which we describe next.

2.3 Application I: Compression

The goal of compression is to express an initial set of data using some smaller set of data, either with or without loss of information. For instance, suppose we are given a function f(x) expressed as a weighted sum of basis functions u_1(x), …, u_m(x):

    f(x) = Σ_{i=1}^{m} c_i u_i(x).

The data set in this case consists of the coefficients c_1, …, c_m. We would like to find a function approximating f(x) but requiring fewer coefficients, perhaps by using a different basis. That is, given a user-specified error tolerance ε (for lossless compression, ε = 0), we are looking for

    f̃(x) = Σ_{i=1}^{m̃} c̃_i ũ_i(x)

such that m̃ < m and ‖f(x) − f̃(x)‖ ≤ ε for some norm. In general, you could attempt to construct a set of basis functions ũ_1, …, ũ_m̃ that would provide a good approximation with few coefficients. We will focus instead on the simpler problem of finding a good approximation in a fixed basis.

One form of the compression problem is to order the coefficients c_1, …, c_m so that for every m̃ < m, the first m̃ elements of the sequence give the best approximation f̃(x) to f(x) as measured in the L² norm. As we show here, the solution to this problem is straightforward if the basis is orthonormal, as is the case with the normalized Haar basis.

Let σ be a permutation of 1, …, m, and let f̃(x) be a function that uses the coefficients corresponding to the first m̃ numbers of the permutation σ:

    f̃(x) = Σ_{i=1}^{m̃} c_σ(i) u_σ(i)(x).

The square of the L² error in this approximation is

    ‖f(x) − f̃(x)‖²₂ = ⟨f(x) − f̃(x) | f(x) − f̃(x)⟩
                     = ⟨ Σ_{i=m̃+1}^{m} c_σ(i) u_σ(i) | Σ_{j=m̃+1}^{m} c_σ(j) u_σ(j) ⟩
                     = Σ_{i=m̃+1}^{m} Σ_{j=m̃+1}^{m} c_σ(i) c_σ(j) ⟨u_σ(i) | u_σ(j)⟩
                     = Σ_{i=m̃+1}^{m} (c_σ(i))².

The last step follows from the assumption that the basis is orthonormal, so ⟨u_i | u_j⟩ = δ_ij.
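This identity is easy to confirm numerically for the normalized Haar basis of V^2 and our running example: the squared L² error of a truncated expansion equals the sum of squares of the discarded coefficients. A sketch, with our own helper names:

```python
from math import sqrt

def phi(x):
    return 1.0 if 0 <= x < 1 else 0.0

def psi(x):
    if 0 <= x < 0.5:
        return 1.0
    if 0.5 <= x < 1:
        return -1.0
    return 0.0

# Normalized Haar basis for V^2 and the coefficients of the running example.
basis = [
    lambda x: phi(x),                    # phi_0^0
    lambda x: psi(x),                    # psi_0^0
    lambda x: sqrt(2) * psi(2 * x),      # psi_0^1
    lambda x: sqrt(2) * psi(2 * x - 1),  # psi_1^1
]
coeffs = [6.0, 2.0, 1 / sqrt(2), -1 / sqrt(2)]

def expand(c, x):
    return sum(ci * u(x) for ci, u in zip(c, basis))

def sq_error(kept, n=256):
    """Squared L^2 distance between the full and truncated expansions,
    computed exactly by midpoint sampling (all pieces are dyadic)."""
    truncated = coeffs[:kept] + [0.0] * (len(coeffs) - kept)
    return sum((expand(coeffs, (k + 0.5) / n) -
                expand(truncated, (k + 0.5) / n)) ** 2 for k in range(n)) / n
```

Keeping only the two largest-magnitude coefficients discards ±1/√2, so the squared error should be (1/√2)² + (1/√2)² = 1, and keeping all four should give zero error.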
We conclude that to minimize this error for any given m̃, the best choice for σ is the permutation that sorts the coefficients in order of decreasing magnitude; that is, σ satisfies |c_σ(1)| ≥ ⋯ ≥ |c_σ(m)|.

Figure 1 demonstrated how a one-dimensional function could be transformed into coefficients representing the function's overall average and various resolutions of detail. Now we repeat the process, this time using normalized Haar basis functions. We can apply L² compression to the resulting coefficients simply by removing or ignoring the coefficients with smallest magnitude. By varying the amount of compression, we obtain a sequence of approximations to the original function, as shown in Figure 4.

Figure 4 Coarse approximations to a function obtained using L² compression: detail coefficients are removed in order of increasing magnitude. [panels show reconstructions from 16, 14, 12, 10, 8, 6, 4, and 2 of the 16 coefficients]

3 Wavelets in two dimensions

In preparation for image compression, we need to generalize Haar wavelets to two dimensions. First, we will consider how to perform a wavelet decomposition of the pixel values in a two-dimensional image. We then describe the scaling functions and wavelets that form a two-dimensional wavelet basis.

3.1 Two-dimensional Haar wavelet transforms

There are two ways we can use wavelets to transform the pixel values within an image. Each is a generalization to two dimensions of the one-dimensional wavelet transform described in Section 2.1.

To obtain the standard decomposition [2] of an image, we first apply the one-dimensional wavelet transform to each row of pixel values. This operation gives us an average value along with detail coefficients for each row. Next, we treat these transformed rows as if they were themselves an image and apply the one-dimensional transform to each column.
The resulting values are all detail coefficients except for a single overall average coefficient. The algorithm below computes the standard decomposition. Figure 5 illustrates each step of its operation.

    procedure StandardDecomposition(C: array [1..h, 1..w] of reals)
        for row ← 1 to h do
            Decomposition(C[row, 1..w])
        end for
        for col ← 1 to w do
            Decomposition(C[1..h, col])
        end for
    end procedure

The second type of two-dimensional wavelet transform, called the nonstandard decomposition, alternates between operations on rows


First, we perform one step of horizontal pairwise averaging and differencing on the pixel values in each row of the image. Next, we apply vertical pairwise averaging and differencing to each column of the result. To complete the transformation, we repeat this process recursively only on the quadrant containing averages in both directions. Figure 6 shows all the steps involved in the nonstandard decomposition procedure below.

Figure 5 Standard decomposition of an image.

procedure NonstandardDecomposition(C: array [1..h, 1..h] of reals)
  C ← C/h   (normalize input coefficients)
  while h > 1 do
    for row ← 1 to h do
      DecompositionStep(C[row, 1..h])
    end for
    for col ← 1 to h do
      DecompositionStep(C[1..h, col])
    end for
    h ← h/2
  end while
end procedure

3.2 Two-dimensional Haar basis functions

The two methods of decomposing a two-dimensional image yield coefficients that correspond to two different sets of basis functions. The standard decomposition of an image gives coefficients for a basis formed by the standard construction [2] of a two-dimensional basis. Similarly, the nonstandard decomposition gives coefficients for the nonstandard construction of basis functions.

The standard construction of a two-dimensional wavelet basis consists of all possible tensor products of one-dimensional basis functions. For example, when we start with the one-dimensional Haar basis for V², we get the two-dimensional basis for V² shown in Figure 7. Note that if we apply the standard construction to an orthonormal basis in one dimension, we get an orthonormal basis in two dimensions.

Figure 6 Nonstandard decomposition of an image.

The nonstandard construction of a two-dimensional basis proceeds by first defining a two-dimensional scaling function,

  φφ(x, y) := φ(x) φ(y),

and three wavelet functions,

  φψ(x, y) := φ(x) ψ(y)
  ψφ(x, y) := ψ(x) φ(y)
  ψψ(x, y) := ψ(x) ψ(y).
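These tensor-product definitions are easy to check numerically. The sketch below is our illustration (the grid size, midpoint-rule quadrature, and helper names are assumptions, not the paper's code); it verifies that φφ and ψψ are orthonormal under the standard inner product on [0,1]².

```python
def phi(t):
    """One-dimensional Haar scaling function on [0, 1)."""
    return 1.0 if 0.0 <= t < 1.0 else 0.0

def psi(t):
    """One-dimensional Haar wavelet: +1 on [0, 1/2), -1 on [1/2, 1)."""
    if 0.0 <= t < 0.5:
        return 1.0
    if 0.5 <= t < 1.0:
        return -1.0
    return 0.0

def tensor(f, g):
    """Two-dimensional tensor product, e.g. tensor(phi, psi)(x, y) = phi(x)*psi(y)."""
    return lambda x, y: f(x) * g(y)

def inner2d(f, g, n=128):
    """Midpoint-rule approximation of the inner product over [0,1]^2."""
    dx = 1.0 / n
    pts = [(i + 0.5) * dx for i in range(n)]
    return sum(f(x, y) * g(x, y) for x in pts for y in pts) * dx * dx
```

For piecewise-constant Haar functions the midpoint rule is exact here, so the unit norms and zero cross products come out to machine precision.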
We now denote levels of scaling with a superscript j (as we did in the one-dimensional case) and horizontal and vertical translations with a pair of subscripts k and ℓ. The nonstandard basis consists of a single coarse scaling function φφ^0_{0,0}(x, y) := φφ(x, y) along with scales and translates of the three wavelet functions φψ, ψφ, and ψψ:

  φψ^j_{kℓ}(x, y) := 2^j φψ(2^j x − k, 2^j y − ℓ)
  ψφ^j_{kℓ}(x, y) := 2^j ψφ(2^j x − k, 2^j y − ℓ)
  ψψ^j_{kℓ}(x, y) := 2^j ψψ(2^j x − k, 2^j y − ℓ).

The constant 2^j normalizes the wavelets to give an orthonormal basis. The nonstandard construction results in the basis for V² shown in Figure 8.

We have presented both the standard and nonstandard approaches to wavelet transforms and basis functions because both have advantages. The standard decomposition of an image is appealing because it simply requires performing one-dimensional transforms on all rows and then on all columns. On the other hand, it is slightly more efficient to compute the nonstandard decomposition. For an m × m image, the standard decomposition requires 4(m² − m) assignment operations, while the nonstandard decomposition requires only (8/3)(m² − 1) assignment operations.

Another consideration is the support of each basis function, meaning the portion of each function's domain where that function is nonzero. All nonstandard Haar basis functions have square supports,


while some standard basis functions have nonsquare supports. Depending upon the application, one of these choices may be preferable to the other.

Figure 7 Standard construction of a two-dimensional Haar wavelet basis for V². In the unnormalized case, functions are +1 where plus signs appear, −1 where minus signs appear, and 0 in gray regions.

3.3 Application II: Image compression

We defined compression in Section 2.3 as the representation of a function using fewer basis function coefficients than were originally given. The method we discussed for one-dimensional functions applies equally well to images, which we treat as the coefficients corresponding to a two-dimensional piecewise-constant basis. The approach presented here is only introductory; for a more complete treatment of wavelet image compression, see the article by DeVore et al. [6].

We can summarize wavelet image compression using the L² norm in three steps:

1. Compute coefficients c₁, …, cₘ representing an image in a normalized two-dimensional Haar basis.

2. Sort the coefficients in order of decreasing magnitude to produce the sequence c_{σ(1)}, …, c_{σ(m)}.

3. Starting with m̃ = m, find the smallest m̃ for which Σ_{i = m̃+1}^{m} (c_{σ(i)})² ≤ ε², where ε is the allowable L² error.

The first step is accomplished by applying either of the two-dimensional Haar wavelet transforms described in Section 3.1, being sure to use normalized basis functions. Any standard sorting technique will work for the second step. However, for large images sorting becomes exceedingly slow. The pseudocode below outlines a more efficient method that uses a binary search strategy to find a threshold below which coefficient sizes are deemed negligible.
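The three-step sorting scheme itself can be sketched in a few lines of Python (our sketch, assuming a flat array of normalized-basis coefficients; the binary-search pseudocode that follows replaces the sort entirely):

```python
def compress_l2(coeffs, eps):
    """Discard as many coefficients as possible while the sum of squares of
    the discarded ones stays within eps**2 -- equivalent to keeping the
    m-tilde largest-magnitude coefficients of step 3."""
    order = sorted(range(len(coeffs)), key=lambda i: abs(coeffs[i]))
    budget = eps ** 2
    out = list(coeffs)
    for i in order:                    # smallest magnitudes first
        if coeffs[i] ** 2 <= budget:
            budget -= coeffs[i] ** 2
            out[i] = 0.0               # discard this coefficient
        else:
            break                      # larger ones cannot fit either
    return out
```

Because the coefficients are visited in increasing magnitude, the first one that exceeds the remaining error budget ends the loop; everything after it is kept.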
The procedure takes as input a one-dimensional array of coefficients C (with each coefficient corresponding to a two-dimensional basis function) and an error tolerance ε. For each guess at a threshold τ, the algorithm computes the square of the L² error that would result from discarding coefficients smaller in magnitude than τ. This squared error s is compared to ε² at each iteration to decide whether the binary search should continue in the upper or lower half of the current interval. The algorithm halts when the current interval is so narrow that the number of coefficients to be discarded no longer changes.

Figure 8 Nonstandard construction of a two-dimensional Haar wavelet basis for V².

procedure Compress(C: array [1..m] of reals; ε: real)
  τmin ← min{ |C[i]| }
  τmax ← max{ |C[i]| }
  do
    τ ← (τmin + τmax)/2
    s ← 0
    for i ← 1 to m do
      if |C[i]| < τ then s ← s + (C[i])²
    end for
    if s < ε² then τmin ← τ else τmax ← τ
  until τmin ≈ τmax
  for i ← 1 to m do
    if |C[i]| < τ then C[i] ← 0
  end for
end procedure

This binary search algorithm was used to produce the images in Figure 9. These images demonstrate the high compression ratios wavelets offer, as well as some of the artifacts they introduce.

DeVore et al. [6] suggest that the L¹ norm is best suited to the task of image compression. Here is a pseudocode fragment for a "greedy" L¹ compression scheme:

for each pixel (x, y) do
  δ[x, y] ← 0
end for
for i ← 1 to m do
  δ′ ← δ + error from discarding C[i]
  if Σ_{x,y} |δ′[x, y]| < ε then
    discard coefficient C[i]
    δ ← δ′
  end if
end for

Note that this algorithm's results depend on the order in which coefficients are visited. Different images (and degrees of compression) may be obtained from varying this order—for example, by starting with the finest scale coefficients, rather than the smallest coefficients.
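The greedy L¹ scheme can be made concrete in one dimension. The sketch below is our construction, not the paper's: it assumes the unnormalized Haar transform, and the hypothetical helpers `reconstruct` and the per-coefficient "pixel footprints" (reconstructions of unit coefficient vectors) stand in for the "error from discarding C[i]" step.

```python
def reconstruct(coeffs):
    """Inverse of the unnormalized 1D Haar decomposition: sums and
    differences recover each pair from its average and detail."""
    c = list(coeffs)
    n = 1
    while n < len(c):
        out = c[:2 * n]
        for i in range(n):
            out[2 * i] = c[i] + c[n + i]       # average + detail
            out[2 * i + 1] = c[i] - c[n + i]   # average - detail
        c[:2 * n] = out
        n *= 2
    return c

def greedy_l1_compress(coeffs, eps):
    """Greedily zero coefficients while the accumulated per-pixel
    error keeps an L1 norm below eps."""
    m = len(coeffs)
    # Pixel footprint of basis function i = reconstruction of the i-th unit vector.
    cols = [reconstruct([1.0 if j == i else 0.0 for j in range(m)])
            for i in range(m)]
    delta = [0.0] * m                  # current per-pixel error
    kept = list(coeffs)
    for i in range(m):
        trial = [d + coeffs[i] * cols[i][p] for p, d in enumerate(delta)]
        if sum(abs(t) for t in trial) < eps:
            kept[i] = 0.0              # discard coefficient i
            delta = trial
    return kept
```

For the running example, `greedy_l1_compress([6, 2, 1, -1], 2.5)` discards only the third coefficient; reconstructing the kept coefficients gives [8, 8, 3, 5], whose L¹ distance from the original [9, 7, 3, 5] is 2, within the bound.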
You could also run a more sophisticated constrained optimization procedure to select the minimum number of coefficients subject to the error bound.
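For reference, the binary-search Compress procedure given earlier transcribes almost directly into Python. This sketch departs from the pseudocode in one assumed detail: it stops when the search interval is narrower than a fixed tolerance, rather than re-counting the discarded coefficients.

```python
def compress(c, eps, tol=1e-12):
    """Zero all coefficients below a threshold tau, chosen by binary search
    so the squared L2 error of the discarded set stays under eps**2."""
    tau_min = min(abs(v) for v in c)
    tau_max = max(abs(v) for v in c)
    while tau_max - tau_min > tol:
        tau = (tau_min + tau_max) / 2
        s = sum(v * v for v in c if abs(v) < tau)   # squared error at this tau
        if s < eps ** 2:
            tau_min = tau                           # can afford a higher threshold
        else:
            tau_max = tau
    return [0.0 if abs(v) < tau_min else v for v in c]
```

Each iteration halves the interval, so the loop runs roughly log₂((τmax − τmin)/tol) times regardless of the number of coefficients.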


Figure 9 L² wavelet image compression: The original image (a) can be represented using (b) 19% of its wavelet coefficients, with 5% relative L² error; (c) 3% of its coefficients, with 10% relative L² error; and (d) 1% of its coefficients, with 15% relative L² error.

4 Conclusion

We have described Haar wavelets in one and two dimensions as well as how to use them for compressing functions and images. Part 2 of this primer will continue this exposition by presenting the mathematical framework of multiresolution analysis. We will also develop a class of wavelets based on endpoint-interpolating B-splines, and describe how to use them for multiresolution curve and surface editing.

Acknowledgments

We wish to thank Ronen Barzel, Steven Gortler, Michael Shantzis, and the anonymous reviewers for many helpful comments. This work was supported by NSF Presidential and National Young Investigator awards (CCR-8957323 and CCR-9357790), by an NSF Graduate Research Fellowship, by the University of Washington Royalty Research Fund (65-9731), and by industrial gifts from Adobe, Aldus, Microsoft, and Xerox.

References

[1] Deborah Berman, Jason Bartell, and David Salesin. Multiresolution painting and compositing. In Proceedings of SIGGRAPH 94, pages 85–90. ACM, New York, 1994.

[2] G. Beylkin, R. Coifman, and V. Rokhlin. Fast wavelet transforms and numerical algorithms I. Communications on Pure and Applied Mathematics, 44(2):141–183, March 1991.

[3] Per H. Christensen, Dani Lischinski, Eric J. Stollnitz, and David H. Salesin. Clustering for glossy global illumination. ACM Transactions on Graphics, 1996 (to appear).

[4] Per H. Christensen, Eric J. Stollnitz, David H. Salesin, and Tony D. DeRose. Wavelet radiance. In G. Sakas, P. Shirley, and S. Müller, editors, Photorealistic Rendering Techniques, pages 295–309. Springer-Verlag, Berlin, 1995.

[5] Ingrid Daubechies. Orthonormal bases of compactly supported wavelets.
Communications on Pure and Applied Mathematics, 41(7):909–996, October 1988.

[6] R. DeVore, B. Jawerth, and B. Lucier. Image compression through wavelet transform coding. IEEE Transactions on Information Theory, 38(2):719–746, March 1992.

[7] Adam Finkelstein and David H. Salesin. Multiresolution curves. In Proceedings of SIGGRAPH 94, pages 261–268. ACM, New York, 1994.

[8] Steven J. Gortler and Michael F. Cohen. Hierarchical and variational geometric modeling with wavelets. In Proceedings of the 1995 Symposium on Interactive 3D Graphics, pages 35–42. ACM, New York, 1995.

[9] Steven J. Gortler, Peter Schröder, Michael F. Cohen, and Pat Hanrahan. Wavelet radiosity. In Proceedings of SIGGRAPH 93, pages 221–230. ACM, New York, 1993.

[10] Charles E. Jacobs, Adam Finkelstein, and David H. Salesin. Fast multiresolution image querying. In Proceedings of SIGGRAPH 95, pages 277–286. ACM, New York, 1995.

[11] Zicheng Liu, Steven J. Gortler, and Michael F. Cohen. Hierarchical spacetime control. In Proceedings of SIGGRAPH 94, pages 35–42. ACM, New York, 1994.

[12] Michael Lounsbery, Tony DeRose, and Joe Warren. Multiresolution surfaces of arbitrary topological type. ACM Transactions on Graphics, 1996 (to appear).

[13] Stephane Mallat. A theory for multiresolution signal decomposition: The wavelet representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(7):674–693, July 1989.

[14] David Meyers. Multiresolution tiling. Computer Graphics Forum, 13(5):325–340, December 1994.

[15] Peter Schröder, Steven J. Gortler, Michael F. Cohen, and Pat Hanrahan. Wavelet projections for radiosity. Computer Graphics Forum, 13(2):141–151, June 1994.

[16] Eric J. Stollnitz, Tony D. DeRose, and David H. Salesin. Wavelets for Computer Graphics: Theory and Applications. Morgan Kaufmann, San Francisco, 1996 (to appear).


A Linear algebra review

The mathematics of wavelets rely heavily on fundamental ideas from linear algebra. This appendix reviews a few important ideas.

A.1 Vector spaces

The starting point for linear algebra is the notion of a vector space. A vector space (over the reals) can be loosely defined as a collection V of elements where

1. For all a, b ∈ IR and for all u, v ∈ V, au + bv ∈ V.

2. There exists a unique element 0 ∈ V such that
   - for all u ∈ V, 0u = 0, and
   - for all u ∈ V, 0 + u = u.

3. Other axioms (omitted here) hold true, most of which are necessary to guarantee that multiplication and addition behave as expected.

The elements of a vector space V are called vectors, and the element 0 is called the zero vector. The vectors may be geometric vectors, or they may be functions, as is the case when discussing wavelets and multiresolution analysis.

A.2 Bases and dimension

A collection of vectors u₁, u₂, … in a vector space V are said to be linearly independent if

  c₁u₁ + c₂u₂ + ⋯ = 0 if and only if c₁ = c₂ = ⋯ = 0.

A collection u₁, u₂, … ∈ V of linearly independent vectors is a basis for V if every v ∈ V can be written as

  v = Σᵢ cᵢ uᵢ

for some real numbers c₁, c₂, … . The vectors in a basis for V are said to span V. Intuitively speaking, linear independence means that the vectors are not redundant, and a basis consists of a minimal complete set of vectors.

If a basis for V has a finite number of elements u₁, …, uₘ, then V is finite-dimensional and its dimension is m. Otherwise, V is said to be infinite-dimensional.

Example: IR³ is a three-dimensional space, and e₁ = (1, 0, 0), e₂ = (0, 1, 0), e₃ = (0, 0, 1) is a basis for it.

Example: The set of all functions continuous on [0, 1] is an infinite-dimensional vector space. We'll call this space C[0, 1].

A.3 Inner products and orthogonality

When dealing with geometric vectors from the vector space IR³, the "dot product" operation has a number of uses.
The generalization of the dot product to arbitrary vector spaces is called an inner product. Formally, an inner product ⟨· | ·⟩ on a vector space V is any map from V × V to IR that is

1. symmetric: ⟨u | v⟩ = ⟨v | u⟩,
2. bilinear: ⟨au + bv | w⟩ = a⟨u | w⟩ + b⟨v | w⟩, and
3. positive definite: ⟨u | u⟩ > 0 for all u ≠ 0.

A vector space together with an inner product is called, not surprisingly, an inner product space.

Example: It is straightforward to show that the dot product on IR³ defined by

  ⟨(a₁, a₂, a₃) | (b₁, b₂, b₃)⟩ := a₁b₁ + a₂b₂ + a₃b₃    (1)

satisfies the requirements of an inner product.

Example: The following "standard" inner product on C[0, 1] plays a central role in most formulations of multiresolution analysis:

  ⟨f | g⟩ := ∫₀¹ f(x) g(x) dx.

The standard inner product can also be generalized to include a positive weight function w(x):

  ⟨f | g⟩ := ∫₀¹ w(x) f(x) g(x) dx.

One of the most important uses of the inner product is to formalize the idea of orthogonality. Two vectors u, v in an inner product space are said to be orthogonal if ⟨u | v⟩ = 0. It is not difficult to show that a collection u₁, u₂, … of mutually orthogonal vectors must be linearly independent, suggesting that orthogonality is a strong form of linear independence. An orthogonal basis is one consisting of mutually orthogonal vectors.

A.4 Norms and normalization

A norm is a function that measures the length of vectors. In a finite-dimensional vector space, we typically use the norm ‖u‖ := ⟨u | u⟩^{1/2}. If we are working with a function space such as C[0, 1], we ordinarily use one of the Lᵖ norms, defined as

  ‖u‖_p := ( ∫₀¹ |u(x)|ᵖ dx )^{1/p}.

In the limit as p tends to infinity, we get what is known as the max-norm:

  ‖u‖_∞ := max_{x ∈ [0,1]} |u(x)|.

Even more frequently used is the L² norm, which can also be written as ‖u‖₂ = ⟨u | u⟩^{1/2} if we are using the standard inner product. A vector u with ‖u‖ = 1 is said to be normalized.
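As a quick numeric illustration of the standard inner product (our sketch; the midpoint-rule quadrature and grid size are assumptions): the Haar scaling function φ and wavelet ψ are orthogonal on [0, 1], and ψ has unit norm.

```python
def inner(f, g, n=1024):
    """Midpoint-rule approximation of <f | g> on [0, 1]."""
    dx = 1.0 / n
    return sum(f((i + 0.5) * dx) * g((i + 0.5) * dx) for i in range(n)) * dx

phi = lambda x: 1.0                        # Haar scaling function on [0, 1)
psi = lambda x: 1.0 if x < 0.5 else -1.0   # Haar wavelet
```

Because both functions are piecewise constant with a breakpoint at 1/2, the midpoint rule computes these inner products exactly.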
If we have an orthogonal basis composed of vectors that are normalized in the L² norm, the basis is called orthonormal. Stated concisely, a basis u₁, u₂, … is orthonormal if

  ⟨uᵢ | uⱼ⟩ = δᵢⱼ,

where δᵢⱼ is called the Kronecker delta and is defined to be 1 if i = j, and 0 otherwise.

Example: The vectors e₁ = (1, 0, 0), e₂ = (0, 1, 0), e₃ = (0, 0, 1) form an orthonormal basis for the inner product space IR³ endowed with the dot product of Equation (1).
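This example can be checked in a few lines (a sketch using plain Python lists; the Gram-matrix framing is ours): the pairwise dot products of e₁, e₂, e₃ reproduce the Kronecker delta.

```python
def dot(u, v):
    """The dot product of Equation (1), written for any length."""
    return sum(a * b for a, b in zip(u, v))

basis = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
# Gram matrix of the basis: entry (i, j) is <e_i | e_j>, the Kronecker delta.
gram = [[dot(u, v) for v in basis] for u in basis]
```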
