
Digital Signal Processing teaching reference (Numerical Recipes in C, The Art of Scientific Computing, Second Edition), Chapter 14.5



14.5 Linear Correlation

We next turn to measures of association between variables that are ordinal or continuous, rather than nominal. Most widely used is the linear correlation coefficient. For pairs of quantities (x_i, y_i), i = 1, ..., N, the linear correlation coefficient r (also called the product-moment correlation coefficient, or Pearson's r) is given by the formula

$$ r = \frac{\sum_i (x_i - \bar x)(y_i - \bar y)}{\sqrt{\sum_i (x_i - \bar x)^2}\,\sqrt{\sum_i (y_i - \bar y)^2}} \tag{14.5.1} $$

where, as usual, \bar x is the mean of the x_i's and \bar y is the mean of the y_i's.

The value of r lies between -1 and 1, inclusive. It takes on a value of 1, termed "complete positive correlation," when the data points lie on a perfect straight line with positive slope, with x and y increasing together. The value 1 holds independent of the magnitude of the slope. If the data points lie on a perfect straight line with negative slope, y decreasing as x increases, then r has the value -1; this is called "complete negative correlation." A value of r near zero indicates that the variables x and y are uncorrelated.

When a correlation is known to be significant, r is one conventional way of summarizing its strength. In fact, the value of r can be translated into a statement about what residuals (root mean square deviations) are to be expected if the data are fitted to a straight line by the least-squares method (see §15.2, especially equations 15.2.13–15.2.14). Unfortunately, r is a rather poor statistic for deciding whether an observed correlation is statistically significant, and/or whether one observed correlation is significantly stronger than another. The reason is that r is ignorant of the individual distributions of x and y, so there is no universal way to compute its distribution in the case of the null hypothesis.

About the only general statement that can be made is this: If the null hypothesis is that x and y are uncorrelated, and if the distributions for x and y each have enough convergent moments ("tails" die off sufficiently rapidly), and if N is large (typically > 500), then r is distributed approximately normally, with a mean of zero and a standard deviation of 1/\sqrt{N}. In that case, the (double-sided) significance of the correlation, that is, the probability that |r| should be larger than its observed value in the null hypothesis, is

$$ \operatorname{erfc}\!\left( \frac{|r|\sqrt{N}}{\sqrt{2}} \right) \tag{14.5.2} $$

where erfc(x) is the complementary error function, equation (6.2.8), computed by the routines erffc or erfcc of §6.2. A small value of (14.5.2) indicates that the two distributions are significantly correlated. (See expression 14.5.9 below for a more accurate test.)
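For concreteness (the numbers here are our own illustration, not the book's): with N = 1000 data points and an observed r = 0.10, expression (14.5.2) gives

$$ \operatorname{erfc}\!\left( \frac{0.10\sqrt{1000}}{\sqrt{2}} \right) = \operatorname{erfc}(2.236) \approx 1.6 \times 10^{-3} $$

so even this weak correlation is highly significant at large N; significance and strength are different questions.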


Most statistics books try to go beyond (14.5.2) and give additional statistical tests that can be made using r. In almost all cases, however, these tests are valid only for a very special class of hypotheses, namely that the distributions of x and y jointly form a binormal or two-dimensional Gaussian distribution around their mean values, with joint probability density

$$ p(x,y)\,dx\,dy = \text{const.} \times \exp\!\left[ -\tfrac{1}{2}\left( a_{11}x^2 - 2a_{12}xy + a_{22}y^2 \right) \right] dx\,dy \tag{14.5.3} $$

where a_11, a_12, and a_22 are arbitrary constants. For this distribution r has the value

$$ r = \frac{a_{12}}{\sqrt{a_{11}a_{22}}} \tag{14.5.4} $$

There are occasions when (14.5.3) may be known to be a good model of the data. There may be other occasions when we are willing to take (14.5.3) as at least a rough and ready guess, since many two-dimensional distributions do resemble a binormal distribution, at least not too far out on their tails. In either situation, we can use (14.5.3) to go beyond (14.5.2) in any of several directions:

First, we can allow for the possibility that the number N of data points is not large. Here, it turns out that the statistic

$$ t = r\sqrt{\frac{N-2}{1-r^2}} \tag{14.5.5} $$

is distributed in the null case (of no correlation) like Student's t-distribution with \nu = N - 2 degrees of freedom, whose two-sided significance level is given by 1 - A(t|\nu) (equation 6.4.7). As N becomes large, this significance and (14.5.2) become asymptotically the same, so that one never does worse by using (14.5.5), even if the binormal assumption is not well substantiated.

Second, when N is only moderately large (\geq 10), we can compare whether the difference of two significantly nonzero r's, e.g., from different experiments, is itself significant. In other words, we can quantify whether a change in some control variable significantly alters an existing correlation between two other variables. This is done by using Fisher's z-transformation to associate each measured r with a corresponding z,

$$ z = \frac{1}{2}\ln\!\left( \frac{1+r}{1-r} \right) \tag{14.5.6} $$

Then, each z is approximately normally distributed with a mean value

$$ \bar z = \frac{1}{2}\left[ \ln\!\left( \frac{1+r_{\rm true}}{1-r_{\rm true}} \right) + \frac{r_{\rm true}}{N-1} \right] \tag{14.5.7} $$

where r_true is the actual or population value of the correlation coefficient, and with a standard deviation

$$ \sigma(z) \approx \frac{1}{\sqrt{N-3}} \tag{14.5.8} $$
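One standard use of (14.5.6)–(14.5.8), not spelled out in the text above, is an approximate confidence interval for r: transform the measured r to z, attach the normal error bar of (14.5.8), and map the endpoints back through the inverse transform r = tanh z. A minimal C sketch, with illustrative values of our own choosing:

#include <math.h>
#include <stdio.h>

/* Sketch: approximate 68% confidence interval for r via Fisher's z.
   Assumes roughly binormal data and moderately large N, as in the text. */
int main(void)
{
    double r = 0.30;                          /* measured correlation (example value) */
    unsigned long n = 50;                     /* number of data points (example value) */
    double z  = 0.5*log((1.0+r)/(1.0-r));     /* equation (14.5.6) */
    double sz = 1.0/sqrt(n-3.0);              /* equation (14.5.8) */
    /* map z +/- sigma(z) back to r through the inverse transform r = tanh(z) */
    printf("r = %.2f, 68%% interval approx [%.3f, %.3f]\n",
           r, tanh(z-sz), tanh(z+sz));
    return 0;
}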


Equations (14.5.7) and (14.5.8), when they are valid, give several useful statistical tests. For example, the significance level at which a measured value of r differs from some hypothesized value r_true is given by

$$ \operatorname{erfc}\!\left( \frac{|z - \bar z|\sqrt{N-3}}{\sqrt{2}} \right) \tag{14.5.9} $$

where z and \bar z are given by (14.5.6) and (14.5.7), with small values of (14.5.9) indicating a significant difference. (Setting \bar z = 0 makes expression 14.5.9 a more accurate replacement for expression 14.5.2 above.) Similarly, the significance of a difference between two measured correlation coefficients r_1 and r_2 is

$$ \operatorname{erfc}\!\left( \frac{|z_1 - z_2|}{\sqrt{2}\,\sqrt{\frac{1}{N_1-3} + \frac{1}{N_2-3}}} \right) \tag{14.5.10} $$

where z_1 and z_2 are obtained from r_1 and r_2 using (14.5.6), and where N_1 and N_2 are, respectively, the number of data points in the measurement of r_1 and r_2.

All of the significances above are two-sided. If you wish to disprove the null hypothesis in favor of a one-sided hypothesis, such as that r_1 > r_2 (where the sense of the inequality was decided a priori), then (i) if your measured r_1 and r_2 have the wrong sense, you have failed to demonstrate your one-sided hypothesis, but (ii) if they have the right ordering, you can multiply the significances given above by 0.5, which makes them more significant.

But keep in mind: These interpretations of the r statistic can be completely meaningless if the joint probability distribution of your variables x and y is too different from a binormal distribution.

#include <math.h>
#define TINY 1.0e-20    /* Will regularize the unusual case of complete correlation. */

void pearsn(float x[], float y[], unsigned long n, float *r, float *prob,
    float *z)
/* Given two arrays x[1..n] and y[1..n], this routine computes their correlation coefficient
r (returned as r), the significance level at which the null hypothesis of zero correlation is
disproved (prob, whose small value indicates a significant correlation), and Fisher's z (returned
as z), whose value can be used in further statistical tests as described above. */
{
    float betai(float a, float b, float x);
    float erfcc(float x);
    unsigned long j;
    float yt,xt,t,df;
    float syy=0.0,sxy=0.0,sxx=0.0,ay=0.0,ax=0.0;

    for (j=1;j<=n;j++) {    /* Find the means. */
        ax += x[j];
        ay += y[j];
    }
    ax /= n;
    ay /= n;
    for (j=1;j<=n;j++) {    /* Compute the correlation coefficient. */
        xt=x[j]-ax;
        yt=y[j]-ay;
        sxx += xt*xt;
        syy += yt*yt;
        sxy += xt*yt;
    }
    *r=sxy/(sqrt(sxx*syy)+TINY);
    *z=0.5*log((1.0+(*r)+TINY)/(1.0-(*r)+TINY));            /* Fisher's z transformation. */
    df=n-2;
    t=(*r)*sqrt(df/((1.0-(*r)+TINY)*(1.0+(*r)+TINY)));      /* Equation (14.5.5). */
    *prob=betai(0.5*df,0.5,df/(df+t*t));                    /* Student's t probability. */
    /* For large n, this easier computation of prob, using the short routine erfcc,
       would give approximately the same value:
       *prob=erfcc(fabs((*z)*sqrt(n-1.0))/1.4142136); */
}
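The routine returns r, prob, and z, but does not itself compare two correlations. A minimal sketch of that test, equations (14.5.6) and (14.5.10), follows; the function name rdiff_prob is ours, not the book's, and we use the standard C99 erfc from math.h in place of NR's erfcc:

#include <math.h>

/* Sketch: two-sided significance of the difference between two measured
   correlations r1 (from n1 points) and r2 (from n2 points). */
double rdiff_prob(double r1, unsigned long n1, double r2, unsigned long n2)
{
    double z1 = 0.5*log((1.0+r1)/(1.0-r1));    /* equation (14.5.6) */
    double z2 = 0.5*log((1.0+r2)/(1.0-r2));
    return erfc(fabs(z1-z2) /
                (sqrt(2.0)*sqrt(1.0/(n1-3.0) + 1.0/(n2-3.0))));   /* (14.5.10) */
}

A call such as rdiff_prob(0.60, 100, 0.35, 80) then plays the same role for a pair of experiments that prob plays for a single one: a small returned value indicates that the two correlations differ significantly.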


CITED REFERENCES AND FURTHER READING:

Dunn, O.J., and Clark, V.A. 1974, Applied Statistics: Analysis of Variance and Regression (New York: Wiley).

Hoel, P.G. 1971, Introduction to Mathematical Statistics, 4th ed. (New York: Wiley), Chapter 7.

von Mises, R. 1964, Mathematical Theory of Probability and Statistics (New York: Academic Press), Chapters IX(A) and IX(B).

Korn, G.A., and Korn, T.M. 1968, Mathematical Handbook for Scientists and Engineers, 2nd ed. (New York: McGraw-Hill), §19.7.

Norusis, M.J. 1982, SPSS Introductory Guide: Basic Statistics and Operations; and 1985, SPSS-X Advanced Statistics Guide (New York: McGraw-Hill).

14.6 Nonparametric or Rank Correlation

It is precisely the uncertainty in interpreting the significance of the linear correlation coefficient r that leads us to the important concepts of nonparametric or rank correlation. As before, we are given N pairs of measurements (x_i, y_i). Before, difficulties arose because we did not necessarily know the probability distribution function from which the x_i's or y_i's were drawn.

The key concept of nonparametric correlation is this: If we replace the value of each x_i by the value of its rank among all the other x_i's in the sample, that is, 1, 2, 3, ..., N, then the resulting list of numbers will be drawn from a perfectly known distribution function, namely uniformly from the integers between 1 and N, inclusive. Better than uniformly, in fact, since if the x_i's are all distinct, then each integer will occur precisely once. If some of the x_i's have identical values, it is conventional to assign to all these "ties" the mean of the ranks that they would have had if their values had been slightly different. This midrank will sometimes be an integer, sometimes a half-integer. In all cases the sum of all assigned ranks will be the same as the sum of the integers from 1 to N, namely N(N+1)/2. (A short sketch of this midrank assignment appears at the end of this excerpt.)

Of course we do exactly the same procedure for the y_i's, replacing each value by its rank among the other y_i's in the sample.

Now we are free to invent statistics for detecting correlation between uniform sets of integers between 1 and N, keeping in mind the possibility of ties in the ranks. There is, of course, some loss of information in replacing the original numbers by ranks. We could construct some rather artificial examples where a correlation could be detected parametrically (e.g., in the linear correlation coefficient r), but could not
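To make the midrank procedure concrete, here is a minimal C sketch of our own (a simplification; the book's full ranking routine, used by its rank-correlation programs, also accumulates a correction term for ties). It assumes the NR-style array w[1..n] has already been sorted into ascending order, and replaces each value by its rank or midrank in place:

/* Sketch: replace sorted w[1..n] by ranks, assigning to runs of tied
   values the mean of the ranks they span (the "midrank" of the text). */
void midrank(float w[], unsigned long n)
{
    unsigned long j=1, ji, jt;
    float rank;
    while (j < n) {
        if (w[j+1] != w[j]) {           /* not tied with its successor: rank = position */
            w[j] = j;
            ++j;
        } else {                        /* a tie: find how far the run of equal values extends */
            for (jt=j+1; jt<=n && w[jt]==w[j]; jt++) ;
            rank = 0.5*(j+jt-1);        /* mean of ranks j..jt-1, possibly a half-integer */
            for (ji=j; ji<=(jt-1); ji++) w[ji] = rank;
            j = jt;
        }
    }
    if (j == n) w[n] = n;               /* last element was not part of a tie */
}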

