
Digital Signal Processing teaching reference: Numerical Recipes in C, The Art of Scientific Computing, Second Edition, Chapter 14.1


In the other category, model-dependent statistics, we lump the whole subject of fitting data to a theory, parameter estimation, least-squares fits, and so on. Those subjects are introduced in Chapter 15.

Section 14.1 deals with so-called measures of central tendency: the moments of a distribution, the median, and the mode. In §14.2 we learn to test whether different data sets are drawn from distributions with different values of these measures of central tendency. This leads naturally, in §14.3, to the more general question of whether two distributions can be shown to be (significantly) different.

In §14.4–§14.7, we deal with measures of association for two distributions. We want to determine whether two variables are "correlated" or "dependent" on one another. If they are, we want to characterize the degree of correlation in some simple ways. The distinction between parametric and nonparametric (rank) methods is emphasized.

Section 14.8 introduces the concept of data smoothing, and discusses the particular case of Savitzky-Golay smoothing filters.

This chapter draws mathematically on the material on special functions that was presented in Chapter 6, especially §6.1–§6.4. You may wish, at this point, to review those sections.

CITED REFERENCES AND FURTHER READING:

Bevington, P.R. 1969, Data Reduction and Error Analysis for the Physical Sciences (New York: McGraw-Hill).

Stuart, A., and Ord, J.K. 1987, Kendall's Advanced Theory of Statistics, 5th ed. (London: Griffin and Co.) [previous eds. published as Kendall, M., and Stuart, A., The Advanced Theory of Statistics].

Norusis, M.J. 1982, SPSS Introductory Guide: Basic Statistics and Operations; and 1985, SPSS-X Advanced Statistics Guide (New York: McGraw-Hill).

Dunn, O.J., and Clark, V.A. 1974, Applied Statistics: Analysis of Variance and Regression (New York: Wiley).

14.1 Moments of a Distribution: Mean, Variance, Skewness, and So Forth

When a set of values has a sufficiently strong central tendency, that is, a tendency to cluster around some particular value, then it may be useful to characterize the set by a few numbers that are related to its moments, the sums of integer powers of the values. Best known is the mean of the values x_1, ..., x_N,

\bar{x} = \frac{1}{N} \sum_{j=1}^{N} x_j        (14.1.1)

which estimates the value around which central clustering occurs. Note the use of an overbar to denote the mean; angle brackets are an equally common notation, e.g., \langle x \rangle. You should be aware that the mean is not the only available estimator of this quantity, nor is it necessarily the best one. For values drawn from a probability distribution with very broad "tails," the mean may converge poorly, or not at all, as the number of sampled points is increased. Alternative estimators, the median and the mode, are mentioned at the end of this section.
Having characterized a distribution's central value, one conventionally next characterizes its "width" or "variability" around that value. Here again, more than one measure is available. Most common is the variance,

\mathrm{Var}(x_1 \ldots x_N) = \frac{1}{N-1} \sum_{j=1}^{N} (x_j - \bar{x})^2        (14.1.2)

or its square root, the standard deviation,

\sigma(x_1 \ldots x_N) = \sqrt{\mathrm{Var}(x_1 \ldots x_N)}        (14.1.3)

Equation (14.1.2) estimates the mean squared deviation of x from its mean value. There is a long story about why the denominator of (14.1.2) is N − 1 instead of N. If you have never heard that story, you may consult any good statistics text. Here we will be content to note that the N − 1 should be changed to N if you are ever in the situation of measuring the variance of a distribution whose mean \bar{x} is known a priori rather than being estimated from the data. (We might also comment that if the difference between N and N − 1 ever matters to you, then you are probably up to no good anyway, e.g., trying to substantiate a questionable hypothesis with marginal data.)

As the mean depends on the first moment of the data, so do the variance and standard deviation depend on the second moment. It is not uncommon, in real life, to be dealing with a distribution whose second moment does not exist (i.e., is infinite). In this case, the variance or standard deviation is useless as a measure of the data's width around its central value: The values obtained from equations (14.1.2) or (14.1.3) will not converge with increased numbers of points, nor show any consistency from data set to data set drawn from the same distribution. This can occur even when the width of the peak looks, by eye, perfectly finite. A more robust estimator of the width is the average deviation or mean absolute deviation, defined by

\mathrm{ADev}(x_1 \ldots x_N) = \frac{1}{N} \sum_{j=1}^{N} |x_j - \bar{x}|        (14.1.4)

One often substitutes the sample median x_med for \bar{x} in equation (14.1.4). For any fixed sample, the median in fact minimizes the mean absolute deviation.

Statisticians have historically sniffed at the use of (14.1.4) instead of (14.1.2), since the absolute value brackets in (14.1.4) are "nonanalytic" and make theorem-proving difficult. In recent years, however, the fashion has changed, and the subject of robust estimation (meaning, estimation for broad distributions with significant numbers of "outlier" points) has become a popular and important one.
Higher moments, or statistics involving higher powers of the input data, are almost always less robust than lower moments or statistics that involve only linear sums or (the lowest moment of all) counting.
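As a concrete illustration (a minimal sketch of ours, not one of the book's routines; the book's full moment routine appears later in this section), the width estimators (14.1.2) and (14.1.4) transcribe directly into a two-pass loop over an ordinary 0-indexed array:

#include <math.h>

/* Naive two-pass variance (14.1.2) and mean absolute deviation (14.1.4),
   both measured about the sample mean (14.1.1). Assumes n >= 2. */
void width_estimators(const double x[], int n, double *var, double *adev)
{
    double mean = 0.0, ss = 0.0, sa = 0.0;
    int j;
    for (j = 0; j < n; j++) mean += x[j];    /* first pass: the mean */
    mean /= n;
    for (j = 0; j < n; j++) {                /* second pass: deviations */
        double d = x[j] - mean;
        ss += d * d;
        sa += fabs(d);
    }
    *var = ss / (n - 1);   /* N-1 denominator: mean estimated from the data */
    *adev = sa / n;
}

The standard deviation (14.1.3) is then simply sqrt(*var).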


[Figure 14.1.1. Distributions whose third and fourth moments are significantly different from a normal (Gaussian) distribution. (a) Skewness or third moment. (b) Kurtosis or fourth moment.]

That being the case, the skewness or third moment, and the kurtosis or fourth moment should be used with caution or, better yet, not at all.

The skewness characterizes the degree of asymmetry of a distribution around its mean. While the mean, standard deviation, and average deviation are dimensional quantities, that is, have the same units as the measured quantities x_j, the skewness is conventionally defined in such a way as to make it nondimensional. It is a pure number that characterizes only the shape of the distribution. The usual definition is

\mathrm{Skew}(x_1 \ldots x_N) = \frac{1}{N} \sum_{j=1}^{N} \left[ \frac{x_j - \bar{x}}{\sigma} \right]^3        (14.1.5)

where \sigma = \sigma(x_1 \ldots x_N) is the distribution's standard deviation (14.1.3). A positive value of skewness signifies a distribution with an asymmetric tail extending out towards more positive x; a negative value signifies a distribution whose tail extends out towards more negative x (see Figure 14.1.1).

Of course, any set of N measured values is likely to give a nonzero value for (14.1.5), even if the underlying distribution is in fact symmetrical (has zero skewness). For (14.1.5) to be meaningful, we need to have some idea of its standard deviation as an estimator of the skewness of the underlying distribution. Unfortunately, that depends on the shape of the underlying distribution, and rather critically on its tails! For the idealized case of a normal (Gaussian) distribution, the standard deviation of (14.1.5) is approximately \sqrt{15/N} when \bar{x} is the true mean, and \sqrt{6/N} when it is estimated by the sample mean, (14.1.1). In real life it is good practice to believe in skewnesses only when they are several or many times as large as this.

The kurtosis is also a nondimensional quantity. It measures the relative peakedness or flatness of a distribution. Relative to what? A normal distribution, what else! A distribution with positive kurtosis is termed leptokurtic; the outline of the Matterhorn is an example. A distribution with negative kurtosis is termed platykurtic; the outline of a loaf of bread is an example. (See Figure 14.1.1.) And, as you no doubt expect, an in-between distribution is termed mesokurtic.

The conventional definition of the kurtosis is

\mathrm{Kurt}(x_1 \ldots x_N) = \left\{ \frac{1}{N} \sum_{j=1}^{N} \left[ \frac{x_j - \bar{x}}{\sigma} \right]^4 \right\} - 3        (14.1.6)

where the −3 term makes the value zero for a normal distribution.
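For reference, the definitions (14.1.5) and (14.1.6) also transcribe directly into code. The fragment below is an illustrative sketch of ours, not the book's routine; it assumes the mean and a nonzero standard deviation have already been computed, and it uses a 0-indexed array:

/* Skewness per (14.1.5) and kurtosis per (14.1.6), computed naively
   from a precomputed mean and standard deviation (sigma > 0 assumed). */
void skew_kurt(const double x[], int n, double mean, double sigma,
               double *skew, double *kurt)
{
    double s3 = 0.0, s4 = 0.0;
    int j;
    for (j = 0; j < n; j++) {
        double t = (x[j] - mean) / sigma;   /* standardized deviation */
        double t3 = t * t * t;
        s3 += t3;          /* accumulates [(x_j - xbar)/sigma]^3 */
        s4 += t3 * t;      /* accumulates [(x_j - xbar)/sigma]^4 */
    }
    *skew = s3 / n;        /* equation (14.1.5) */
    *kurt = s4 / n - 3.0;  /* equation (14.1.6): the -3 makes a Gaussian zero */
}

By the rule of thumb above, the resulting skewness deserves belief only when it is several times \sqrt{6/N}.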


The standard deviation of (14.1.6) as an estimator of the kurtosis of an underlying normal distribution is \sqrt{96/N} when \sigma is the true standard deviation, and \sqrt{24/N} when it is the sample estimate (14.1.3). However, the kurtosis depends on such a high moment that there are many real-life distributions for which the standard deviation of (14.1.6) as an estimator is effectively infinite.

Calculation of the quantities defined in this section is perfectly straightforward. Many textbooks use the binomial theorem to expand out the definitions into sums of various powers of the data, e.g., the familiar

\mathrm{Var}(x_1 \ldots x_N) = \frac{1}{N-1} \left[ \sum_{j=1}^{N} x_j^2 - N \bar{x}^2 \right] \approx \overline{x^2} - \bar{x}^2        (14.1.7)

but this can magnify the roundoff error by a large factor and is generally unjustifiable in terms of computing speed. A clever way to minimize roundoff error, especially for large samples, is to use the corrected two-pass algorithm [1]: First calculate \bar{x}, then calculate \mathrm{Var}(x_1 \ldots x_N) by

\mathrm{Var}(x_1 \ldots x_N) = \frac{1}{N-1} \left\{ \sum_{j=1}^{N} (x_j - \bar{x})^2 - \frac{1}{N} \left[ \sum_{j=1}^{N} (x_j - \bar{x}) \right]^2 \right\}        (14.1.8)

The second sum would be zero if \bar{x} were exact, but otherwise it does a good job of correcting the roundoff error in the first term.

#include <math.h>

void moment(float data[], int n, float *ave, float *adev, float *sdev,
    float *var, float *skew, float *curt)
/* Given an array of data[1..n], this routine returns its mean ave, average
   deviation adev, standard deviation sdev, variance var, skewness skew,
   and kurtosis curt. */
{
    void nrerror(char error_text[]);
    int j;
    float ep=0.0,s,p;

    if (n <= 1) nrerror("n must be at least 2 in moment");
    s=0.0;                               /* First pass to get the mean. */
    for (j=1;j<=n;j++) s += data[j];
    *ave=s/n;
    *adev=(*var)=(*skew)=(*curt)=0.0;
    for (j=1;j<=n;j++) {                 /* Second pass to get the first (absolute),
                                            second, third, and fourth moments of
                                            the deviation from the mean. */
        *adev += fabs(s=data[j]-(*ave));
        ep += s;
        *var += (p=s*s);
        *skew += (p *= s);
        *curt += (p *= s);
    }
    *adev /= n;
    *var=(*var-ep*ep/n)/(n-1);           /* Corrected two-pass formula. */
    *sdev=sqrt(*var);
    if (*var) {                          /* Put the pieces together according to
                                            the conventional definitions. */
        *skew /= (n*(*var)*(*sdev));
        *curt=(*curt)/(n*(*var)*(*var))-3.0;
    } else nrerror("No skew/kurtosis when variance = 0 (in moment)");
}
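A hedged usage sketch (ours, not from the book): Numerical Recipes routines take unit-offset arrays, data[1..n], so the simplest self-contained driver reserves element 0. Compiling this assumes you link against the book's moment.c and nrutil.c (which supplies nrerror):

#include <stdio.h>

/* Prototype repeated from the listing above so the driver is self-contained. */
void moment(float data[], int n, float *ave, float *adev, float *sdev,
    float *var, float *skew, float *curt);

int main(void)
{
    /* Unit-offset array: element 0 is unused, data[1..5] hold the sample. */
    float data[6] = {0.0f, 2.0f, 4.0f, 4.0f, 4.0f, 5.0f};
    float ave, adev, sdev, var, skew, curt;

    moment(data, 5, &ave, &adev, &sdev, &var, &skew, &curt);
    printf("mean %g  adev %g  sdev %g  var %g  skew %g  kurt %g\n",
           ave, adev, sdev, var, skew, curt);
    return 0;
}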


Semi-Invariants

The mean and variance of independent random variables are additive: If x and y are drawn independently from two, possibly different, probability distributions, then

\overline{(x+y)} = \bar{x} + \bar{y} \qquad \mathrm{Var}(x+y) = \mathrm{Var}(x) + \mathrm{Var}(y)        (14.1.9)

Higher moments are not, in general, additive. However, certain combinations of them, called semi-invariants, are in fact additive. If the centered moments of a distribution are denoted M_k,

M_k \equiv \left\langle (x_i - \bar{x})^k \right\rangle        (14.1.10)

so that, e.g., M_2 = \mathrm{Var}(x), then the first few semi-invariants, denoted I_k, are given by

I_2 = M_2 \qquad I_3 = M_3 \qquad I_4 = M_4 - 3M_2^2
I_5 = M_5 - 10 M_2 M_3 \qquad I_6 = M_6 - 15 M_2 M_4 - 10 M_3^2 + 30 M_2^3        (14.1.11)

Notice that the skewness and kurtosis, equations (14.1.5) and (14.1.6), are simple powers of the semi-invariants,

\mathrm{Skew}(x) = I_3 / I_2^{3/2} \qquad \mathrm{Kurt}(x) = I_4 / I_2^2        (14.1.12)

A Gaussian distribution has all its semi-invariants higher than I_2 equal to zero. A Poisson distribution has all of its semi-invariants equal to its mean. For more details, see [2].

Median and Mode

The median of a probability distribution function p(x) is the value x_med for which larger and smaller values of x are equally probable:

\int_{-\infty}^{x_{\mathrm{med}}} p(x)\,dx = \frac{1}{2} = \int_{x_{\mathrm{med}}}^{\infty} p(x)\,dx        (14.1.13)

The median of a distribution is estimated from a sample of values x_1, ..., x_N by finding that value x_i which has equal numbers of values above it and below it. Of course, this is not possible when N is even. In that case it is conventional to estimate the median as the mean of the unique two central values. If the values x_j, j = 1, ..., N, are sorted into ascending (or, for that matter, descending) order, then the formula for the median is

x_{\mathrm{med}} = \begin{cases} x_{(N+1)/2}, & N \text{ odd} \\ \tfrac{1}{2}\left(x_{N/2} + x_{(N/2)+1}\right), & N \text{ even} \end{cases}        (14.1.14)

If a distribution has a strong central tendency, so that most of its area is under a single peak, then the median is an estimator of the central value. It is a more robust estimator than the mean is: The median fails as an estimator only if the area in the tails is large, while the mean fails if the first moment of the tails is large; it is easy to construct examples where the first moment of the tails is large even though their area is negligible.

To find the median of a set of values, one can proceed by sorting the set and then applying (14.1.14). This is a process of order N log N. You might rightly think that this is wasteful, since it yields much more information than just the median (e.g., the upper and lower quartile points, the deciles, etc.). In fact, we saw in §8.5 that the element x_{(N+1)/2} can be located in of order N operations. Consult that section for routines.
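As a minimal companion sketch (ours; the text instead recommends the selection routines of §8.5 for an order-N method), equation (14.1.14) can be implemented with the C library's qsort, which is exactly the order N log N sort-then-pick approach just described:

#include <stdlib.h>

/* Comparator for qsort: ascending order of doubles. */
static int cmp_double(const void *a, const void *b)
{
    double da = *(const double *)a, db = *(const double *)b;
    return (da > db) - (da < db);
}

/* Sample median by sorting, per equation (14.1.14).
   Sorts in place; pass a copy if the original order matters. */
double median(double x[], int n)
{
    qsort(x, n, sizeof(double), cmp_double);
    if (n % 2)                      /* N odd: the middle element */
        return x[n / 2];
    else                            /* N even: mean of the two central values */
        return 0.5 * (x[n / 2 - 1] + x[n / 2]);
}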


The mode of a probability distribution function p(x) is the value of x where it takes on a maximum value. The mode is useful primarily when there is a single, sharp maximum, in which case it estimates the central value. Occasionally, a distribution will be bimodal, with two relative maxima; then one may wish to know the two modes individually. Note that, in such cases, the mean and median are not very useful, since they will give only a "compromise" value between the two peaks.

CITED REFERENCES AND FURTHER READING:

Bevington, P.R. 1969, Data Reduction and Error Analysis for the Physical Sciences (New York: McGraw-Hill), Chapter 2.

Stuart, A., and Ord, J.K. 1987, Kendall's Advanced Theory of Statistics, 5th ed. (London: Griffin and Co.) [previous eds. published as Kendall, M., and Stuart, A., The Advanced Theory of Statistics], vol. 1, §10.15.

Norusis, M.J. 1982, SPSS Introductory Guide: Basic Statistics and Operations; and 1985, SPSS-X Advanced Statistics Guide (New York: McGraw-Hill).

Chan, T.F., Golub, G.H., and LeVeque, R.J. 1983, American Statistician, vol. 37, pp. 242–247. [1]

Cramér, H. 1946, Mathematical Methods of Statistics (Princeton: Princeton University Press), §15.10. [2]

14.2 Do Two Distributions Have the Same Means or Variances?

Not uncommonly we want to know whether two distributions have the same mean. For example, a first set of measured values may have been gathered before some event, a second set after it. We want to know whether the event, a "treatment" or a "change in a control parameter," made a difference.

Our first thought is to ask "how many standard deviations" one sample mean is from the other. That number may in fact be a useful thing to know. It does relate to the strength or "importance" of a difference of means if that difference is genuine. However, by itself, it says nothing about whether the difference is genuine, that is, statistically significant. A difference of means can be very small compared to the standard deviation, and yet very significant, if the number of data points is large. Conversely, a difference may be moderately large but not significant, if the data are sparse. We will be meeting these distinct concepts of strength and significance several times in the next few sections.

A quantity that measures the significance of a difference of means is not the number of standard deviations that they are apart, but the number of so-called standard errors that they are apart. The standard error of a set of values measures the accuracy with which the sample mean estimates the population (or "true") mean. Typically the standard error is equal to the sample's standard deviation divided by the square root of the number of points in the sample.
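To make that rule concrete, here is a minimal sketch of ours (not a Numerical Recipes routine): the standard error of the mean, computed as the sample standard deviation of (14.1.2)-(14.1.3) divided by \sqrt{N}:

#include <math.h>

/* Standard error of the sample mean: sigma / sqrt(n), where sigma is
   the sample standard deviation of (14.1.2)-(14.1.3). Assumes n >= 2. */
double standard_error(const double x[], int n)
{
    double mean = 0.0, ss = 0.0;
    int j;
    for (j = 0; j < n; j++) mean += x[j];
    mean /= n;
    for (j = 0; j < n; j++) {
        double d = x[j] - mean;
        ss += d * d;                 /* sum of squared deviations */
    }
    return sqrt(ss / (n - 1)) / sqrt((double)n);
}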
