638 Chapter 14. Statistical Description of Data

Sample page from NUMERICAL RECIPES IN C: THE ART OF SCIENTIFIC COMPUTING (ISBN 0-521-43108-5). Copyright (C) 1988-1992 by Cambridge University Press. Programs Copyright (C) 1988-1992 by Numerical Recipes Software. Permission is granted for internet users to make one paper copy for their own personal use. Further reproduction, or any copying of machine-readable files (including this one) to any server computer, is strictly prohibited. To order Numerical Recipes books or CDROMs, visit website http://www.nr.com or call 1-800-872-7423 (North America only), or send email to directcustserv@cambridge.org (outside North America).

Equations (14.5.7) and (14.5.8), when they are valid, give several useful statistical tests. For example, the significance level at which a measured value of r differs from some hypothesized value r_true is given by

    \operatorname{erfc}\!\left(\frac{|z-\bar z|\,\sqrt{N-3}}{\sqrt{2}}\right)        (14.5.9)

where z and \bar z are given by (14.5.6) and (14.5.7), with small values of (14.5.9) indicating a significant difference. (Setting \bar z = 0 makes expression 14.5.9 a more accurate replacement for expression 14.5.2 above.) Similarly, the significance of a difference between two measured correlation coefficients r_1 and r_2 is

    \operatorname{erfc}\!\left(\frac{|z_1-z_2|}{\sqrt{2}\,\sqrt{\frac{1}{N_1-3}+\frac{1}{N_2-3}}}\right)        (14.5.10)

where z_1 and z_2 are obtained from r_1 and r_2 using (14.5.6), and where N_1 and N_2 are, respectively, the number of data points in the measurement of r_1 and r_2.

All of the significances above are two-sided.
If you wish to disprove the null hypothesis in favor of a one-sided hypothesis, such as that r_1 > r_2 (where the sense of the inequality was decided a priori), then (i) if your measured r_1 and r_2 have the wrong sense, you have failed to demonstrate your one-sided hypothesis, but (ii) if they have the right ordering, you can multiply the significances given above by 0.5, which makes them more significant.

But keep in mind: These interpretations of the r statistic can be completely meaningless if the joint probability distribution of your variables x and y is too different from a binormal distribution.

#include <math.h>
#define TINY 1.0e-20        Will regularize the unusual case of complete correlation.

void pearsn(float x[], float y[], unsigned long n, float *r, float *prob,
    float *z)
Given two arrays x[1..n] and y[1..n], this routine computes their correlation coefficient
r (returned as r), the significance level at which the null hypothesis of zero correlation is
disproved (prob whose small value indicates a significant correlation), and Fisher's z (returned
as z), whose value can be used in further statistical tests as described above.
{
    float betai(float a, float b, float x);
    float erfcc(float x);
    unsigned long j;
    float yt,xt,t,df;
    float syy=0.0,sxy=0.0,sxx=0.0,ay=0.0,ax=0.0;

    for (j=1;j<=n;j++) {                    Find the means.
        ax += x[j];
        ay += y[j];
    }
    ax /= n;
    ay /= n;
    for (j=1;j<=n;j++) {                    Compute the correlation coefficient.
        xt=x[j]-ax;
        yt=y[j]-ay;
        sxx += xt*xt;
        syy += yt*yt;
        sxy += xt*yt;
    }
    *r=sxy/(sqrt(sxx*syy)+TINY);
    *z=0.5*log((1.0+(*r)+TINY)/(1.0-(*r)+TINY));        Fisher's z transformation.
    df=n-2;
    t=(*r)*sqrt(df/((1.0-(*r)+TINY)*(1.0+(*r)+TINY)));  Equation (14.5.5).
    *prob=betai(0.5*df,0.5,df/(df+t*t));                Student's t probability.
    /* *prob=erfcc(fabs((*z)*sqrt(n-1.0))/1.4142136) */ For large n, this easier computation of
}                                                       prob, using the short routine erfcc, would
                                                        give approximately the same value.