Chapter 4. Curve Fitting
Curve Fitting

◼ Applications of numerical techniques in science and engineering often involve curve fitting of experimental data.
◼ In science and engineering it is often the case that an experiment produces a set of data points $(x_1, y_1), \ldots, (x_N, y_N)$, where the abscissas $\{x_k\}$ are distinct. If all the numerical values $\{x_k\}$, $\{y_k\}$ are known to several significant digits of accuracy, then polynomial interpolation can be used successfully; otherwise, it cannot.
◼ However, many experiments are done with equipment that is reliable only to three or fewer digits of accuracy. Often, there is experimental error in the measurements.
◼ How do we find the best approximation that goes near (not always through) the points?
Measures for Errors (Deviations or Residuals)

◼ Denote $e_k = f(x_k) - y_k$ for $1 \le k \le N$.
◼ Maximum error: $E_\infty(f) = \max_{1 \le k \le N}\{\,|f(x_k) - y_k|\,\}$
◼ Average error: $E_1(f) = \dfrac{1}{N}\displaystyle\sum_{k=1}^{N} |f(x_k) - y_k|$
◼ Root-mean-square error: $E_2(f) = \left(\dfrac{1}{N}\displaystyle\sum_{k=1}^{N} |f(x_k) - y_k|^2\right)^{1/2}$
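A minimal Python/NumPy sketch of these three error measures; the function name error_measures and the sample data are illustrative, not taken from the text.

```python
import numpy as np

def error_measures(f, x, y):
    """Return (E_inf, E_1, E_2) for the fit f over the data points (x_k, y_k)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    e = f(x) - y                    # deviations e_k = f(x_k) - y_k
    E_inf = np.max(np.abs(e))       # maximum error
    E_1 = np.mean(np.abs(e))        # average error
    E_2 = np.sqrt(np.mean(e**2))    # root-mean-square error
    return E_inf, E_1, E_2

# Example: how well does y = 2x + 1 describe some noisy measurements?
x = [0.0, 1.0, 2.0, 3.0]
y = [1.1, 2.9, 5.2, 6.8]
print(error_measures(lambda t: 2 * t + 1, x, y))
```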
Finding the Least-Squares Curve

◼ Let $\{(x_k, y_k)\}_{k=1}^{N}$ be a set of N points, where the abscissas $\{x_k\}$ are distinct. The least-squares curve $y = f(x)$ is the best one in some function class that minimizes the root-mean-square error $E_2(f)$.
◼ The simplest formula is the line $y = f(x) = Ax + B$.
◼ The quantity $E_2(f)$ will be a minimum if and only if the quantity
$$N\,(E_2(f))^2 = \sum_{k=1}^{N} (Ax_k + B - y_k)^2$$
is a minimum.
◼ The latter is visualized geometrically by minimizing the sum of the squares of the vertical distances from the points to the line.
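To make the minimization step explicit (a standard calculus step not spelled out on the slide), set the partial derivatives of the sum of squares with respect to A and B to zero; simplifying gives exactly the normal equations of Theorem 4.1 below.

```latex
\begin{align*}
E(A,B) &= \sum_{k=1}^{N} (Ax_k + B - y_k)^2,\\
\frac{\partial E}{\partial A} &= 2\sum_{k=1}^{N} (Ax_k + B - y_k)\,x_k = 0
  \;\Longrightarrow\;
  \Bigl(\sum_{k=1}^{N} x_k^2\Bigr)A + \Bigl(\sum_{k=1}^{N} x_k\Bigr)B = \sum_{k=1}^{N} x_k y_k,\\
\frac{\partial E}{\partial B} &= 2\sum_{k=1}^{N} (Ax_k + B - y_k) = 0
  \;\Longrightarrow\;
  \Bigl(\sum_{k=1}^{N} x_k\Bigr)A + NB = \sum_{k=1}^{N} y_k.
\end{align*}
```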
The Least-Squares Line

◼ Thm. 4.1 (Least-Squares Line). Suppose that $\{(x_k, y_k)\}_{k=1}^{N}$ are N points, where the abscissas $\{x_k\}_{k=1}^{N}$ are distinct. The coefficients of the least-squares line $y = Ax + B$ are the solution to the following linear system, known as the normal equations:
$$\Bigl(\sum_{k=1}^{N} x_k^2\Bigr)A + \Bigl(\sum_{k=1}^{N} x_k\Bigr)B = \sum_{k=1}^{N} x_k y_k,$$
$$\Bigl(\sum_{k=1}^{N} x_k\Bigr)A + NB = \sum_{k=1}^{N} y_k.$$
◼ The line $y = f(x) = Ax + B$ is the line that minimizes the root-mean-square error $E_2(f)$.
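A direct sketch of Theorem 4.1 in Python/NumPy: build the 2×2 normal-equation system and solve it. The function name least_squares_line and the sample data are illustrative.

```python
import numpy as np

def least_squares_line(x, y):
    """Fit y = A*x + B by solving the 2x2 normal equations of Theorem 4.1."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    N = len(x)
    M = np.array([[np.sum(x**2), np.sum(x)],     # coefficient matrix
                  [np.sum(x),    N        ]])
    rhs = np.array([np.sum(x * y), np.sum(y)])   # right-hand side
    A, B = np.linalg.solve(M, rhs)
    return A, B

x = [1.0, 2.0, 3.0, 4.0]
y = [2.1, 3.9, 6.2, 7.8]
print(least_squares_line(x, y))   # A ≈ 1.94, B ≈ 0.15 for this data
```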
Solving the Normal Equations

◼ The normal equations may be an ill-conditioned linear system.
◼ The coefficients A and B for the least-squares line can be computed as follows. First compute the means $\bar{x}$ and $\bar{y}$, and then perform the calculations:
$$C = \sum_{k=1}^{N} (x_k - \bar{x})^2, \qquad A = \frac{1}{C}\sum_{k=1}^{N} (x_k - \bar{x})(y_k - \bar{y}), \qquad B = \bar{y} - A\bar{x}.$$
◼ The algorithm above is computationally stable. It gives reliable results in cases when the normal equations are ill-conditioned.
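A sketch of this mean-centered computation, with a small example of the kind of data (abscissas clustered far from the origin) for which the raw normal equations become ill-conditioned. The function name is illustrative.

```python
import numpy as np

def least_squares_line_stable(x, y):
    """Fit y = A*x + B with the mean-centered, computationally stable formulas."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xbar, ybar = x.mean(), y.mean()
    C = np.sum((x - xbar)**2)                  # C = sum of (x_k - xbar)^2
    A = np.sum((x - xbar) * (y - ybar)) / C    # slope
    B = ybar - A * xbar                        # intercept
    return A, B

# Abscissas clustered far from the origin make the raw 2x2 normal-equation
# matrix nearly singular; the centered formulas remain well behaved.
x = [1.0e6 + k for k in range(5)]
y = [3.0 * xk + 2.0 for xk in x]
print(least_squares_line_stable(x, y))   # close to (3.0, 2.0)
```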
Power Fit $y = Ax^M$

◼ Some situations involve $y = Ax^M$, where M is a known constant. In these cases there is only one parameter A to be determined.
◼ Thm. 4.2 (Power Fit). Suppose that $\{(x_k, y_k)\}_{k=1}^{N}$ are N points, where the abscissas are distinct. The coefficient A of the least-squares power curve $y = Ax^M$ is given by the following normal equation:
$$A\sum_{k=1}^{N} x_k^{2M} = \sum_{k=1}^{N} x_k^M y_k, \quad\text{i.e.,}\quad A = \frac{\sum_{k=1}^{N} x_k^M y_k}{\sum_{k=1}^{N} x_k^{2M}}.$$
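A one-line realization of Theorem 4.2, wrapped in an illustrative helper power_fit; the sample data are made up to follow roughly y = 0.5·x².

```python
import numpy as np

def power_fit(x, y, M):
    """Least-squares coefficient A for y = A * x**M, with the exponent M known."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return np.sum(x**M * y) / np.sum(x**(2 * M))

# Data roughly following y = 0.5 * x^2
x = [1.0, 2.0, 3.0, 4.0]
y = [0.6, 1.9, 4.6, 7.9]
print(power_fit(x, y, M=2))   # approximately 0.497
```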
Methods of Curve Fitting

◼ Suppose that we are given the points $(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)$ and want to fit an exponential curve of the form $y = Ce^{Ax}$.
◼ The coefficients A and C must be determined.
◼ The nonlinear least-squares procedure requires that we find a minimum of $E(A, C) = \sum_{k=1}^{N} (Ce^{Ax_k} - y_k)^2$. Setting the partial derivatives of $E(A, C)$ to zero and simplifying, the resulting normal equations are
$$C\sum_{k=1}^{N} x_k e^{2Ax_k} - \sum_{k=1}^{N} x_k y_k e^{Ax_k} = 0,$$
$$C\sum_{k=1}^{N} e^{2Ax_k} - \sum_{k=1}^{N} y_k e^{Ax_k} = 0.$$
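These normal equations are nonlinear in A, so they cannot be solved by elementary algebra alone. One common route (a sketch, not the text's prescribed method) is to minimize E(A, C) directly with a library optimizer; here scipy.optimize.curve_fit is assumed to be available, and the sample data and starting guess p0 are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, C, A):
    """Exponential model y = C * exp(A * x)."""
    return C * np.exp(A * x)

# Data roughly following y = 1.5 * exp(0.5 * x)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.5, 2.5, 4.1, 6.7, 11.1])

# curve_fit iteratively minimizes the sum of squared residuals E(A, C);
# p0 is the initial guess for the parameters (C, A).
(C, A), _ = curve_fit(model, x, y, p0=(1.0, 1.0))
print(C, A)   # close to 1.5 and 0.5
```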
Data Linearization Method for $y = Ce^{Ax}$

◼ Take the logarithm of both sides: $\ln(y) = Ax + \ln(C)$.
◼ Introduce the change of variables: $Y = \ln(y)$, $X = x$, and $B = \ln(C)$.
◼ This gives a linear relation between the new variables X and Y: $Y = AX + B$. The original points $(x_k, y_k)$ in the xy-plane are transformed into the points $(X_k, Y_k) = (x_k, \ln(y_k))$ in the XY-plane. This process is called data linearization. Then the least-squares line is fit to the points $\{(X_k, Y_k)\}$. The normal equations for finding A and B are
$$\Bigl(\sum_{k=1}^{N} X_k^2\Bigr)A + \Bigl(\sum_{k=1}^{N} X_k\Bigr)B = \sum_{k=1}^{N} X_k Y_k,$$
$$\Bigl(\sum_{k=1}^{N} X_k\Bigr)A + NB = \sum_{k=1}^{N} Y_k.$$
Once A and B have been found, the coefficient of the original curve is recovered as $C = e^B$.
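A sketch of the data-linearization fit (illustrative helper name, same made-up data as the nonlinear example above). Note that this method minimizes the squared error in ln(y) rather than in y, so its answer can differ slightly from the nonlinear least-squares fit.

```python
import numpy as np

def exp_fit_linearized(x, y):
    """Fit y = C * exp(A * x) by data linearization:
    fit the least-squares line Y = A*X + B to (x_k, ln(y_k)), then C = exp(B)."""
    X = np.asarray(x, dtype=float)
    Y = np.log(np.asarray(y, dtype=float))       # requires y_k > 0
    N = len(X)
    M = np.array([[np.sum(X**2), np.sum(X)],
                  [np.sum(X),    N        ]])
    rhs = np.array([np.sum(X * Y), np.sum(Y)])
    A, B = np.linalg.solve(M, rhs)
    return np.exp(B), A                          # (C, A)

x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [1.5, 2.5, 4.1, 6.7, 11.1]
print(exp_fit_linearized(x, y))   # close to (1.5, 0.5)
```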
Transformations for Data Linearization

◼ The technique of data linearization has been used to fit curves.
◼ Once the curve has been chosen, a suitable transformation of the variables must be found so that a linear relation is obtained.

| Function, $y = f(x)$ | Linearized form, $Y = AX + B$ | Change of variable(s) and constants |
|---|---|---|
| $y = \dfrac{A}{x} + B$ | $y = A\left(\dfrac{1}{x}\right) + B$ | $X = \dfrac{1}{x}$, $Y = y$ |
| $y = \dfrac{D}{x + C}$ | $y = \dfrac{-1}{C}\,xy + \dfrac{D}{C}$ | $X = xy$, $Y = y$; $C = \dfrac{-1}{A}$, $D = \dfrac{-B}{A}$ |
| $y = \dfrac{1}{Ax + B}$ | $\dfrac{1}{y} = Ax + B$ | $X = x$, $Y = \dfrac{1}{y}$ |
| $y = \dfrac{x}{A + Bx}$ | $\dfrac{1}{y} = A\left(\dfrac{1}{x}\right) + B$ | $X = \dfrac{1}{x}$, $Y = \dfrac{1}{y}$ |
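As an illustration of using one row of the table, the sketch below fits y = 1/(Ax + B) by applying the change of variables X = x, Y = 1/y and then solving the ordinary least-squares line. The helper name and data are made up; as with all data-linearization fits, the squared error is minimized in the transformed variable, not in y itself.

```python
import numpy as np

def fit_reciprocal_line(x, y):
    """Fit y = 1/(A*x + B) via the table row X = x, Y = 1/y,
    then solve the ordinary least-squares line Y = A*X + B."""
    X = np.asarray(x, dtype=float)
    Y = 1.0 / np.asarray(y, dtype=float)         # change of variables
    N = len(X)
    A, B = np.linalg.solve(
        np.array([[np.sum(X**2), np.sum(X)], [np.sum(X), N]]),
        np.array([np.sum(X * Y), np.sum(Y)]),
    )
    return A, B

# Data roughly following y = 1/(2x + 1)
x = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 0.33, 0.21, 0.14]
print(fit_reciprocal_line(x, y))   # close to (2, 1)
```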