
Course reference for "Textile Composite Materials" (Composite Materials Handbook, Volume 1): CHAPTER 8 STATISTICAL METHODS



MIL-HDBK-17-1F Volume 1, Chapter 8: Statistical Methods

CHAPTER 8  STATISTICAL METHODS

8.1 INTRODUCTION

Variability in composite material property data may result from a number of sources, including run-to-run variability in fabrication, batch-to-batch variability of raw materials, testing variability, and variability intrinsic to the material. It is important to acknowledge this variability when designing with composites and to incorporate it in design values of material properties. Procedures for calculating statistically-based material properties are provided in this chapter. With a properly designed test program (Chapter 2), these statistical procedures can account for some, but not all, of these sources of variability. A fundamental assumption is that one is measuring the desired properties. If this is not the case, then no statistical procedure is sufficient to account for other technical inadequacies.

Section 8.2 provides introductory material and guidance for the methods used in the remainder of the chapter. Readers unfamiliar with the statistical methods in the chapter should read Section 8.2 before the remainder of the chapter; more experienced readers may find it useful as a reference. Section 8.3 provides methods for evaluating data and calculating statistically-based properties. Section 8.4 contains other statistical methods, including methods for confidence intervals for a coefficient of variation, stress-strain curves, quality control, and alternate material evaluation. Section 8.5 contains statistical tables and approximate formulas.

8.1.1 Overview of methods for calculating statistically-based properties

Section 8.3 describes computational methods for obtaining A- and B-basis values from composite material data. Different approaches are used depending on whether the data can be grouped in a natural way (for example, because of batches or differences in environmental conditions). Data sets which either cannot be grouped, or for which there are negligible differences among such groups, are called unstructured. Otherwise, the data are said to be structured. The statistical methods in Section 8.3.2, which examine whether the differences among groups of data are negligible, are useful for determining whether the data should be treated as structured or unstructured. Unstructured data are modeled using a Weibull, normal, or lognormal distribution, using the methods in Section 8.3.4. If none of these are acceptable, nonparametric basis values are determined. Structured data are modeled using linear statistical models, including regression and the analysis of variance (ANOVA), using the methods in Section 8.3.5.

8.1.2 Computer software

Non-proprietary computer software useful for analyzing material property data is available. STAT17, available from the MIL-HDBK-17 Secretariat upon request (see page ii), performs the calculations in the flowchart in Figure 8.3.1 with the exception of linear regression. RECIPE (REgression Confidence Intervals on PErcentiles), available from the National Institute of Standards and Technology, performs calculations that find material basis values from linear models, including regression and analysis of variance. RECIPE can be obtained by anonymous ftp from 'ftp.nist.gov', directory 'recipe'. A non-proprietary general statistical analysis and graphics package, DATAPLOT, is also available from NIST by anonymous ftp from 'scf.nist.gov', directory 'pubs/dataplot' [1].
8.1.3 Symbols

The symbols that are used in Chapter 8 and not commonly used throughout the remainder of this handbook are listed below, each with its definition and the section in which it is first used.

[1] Contact Stefan Leigh, Statistical Engineering Division, NIST, Gaithersburg, MD, 20899-0001, email: stefan.leigh@nist.gov.


SYMBOL   DEFINITION                                                         SECTION

A        A-basis value                                                      -
a        distribution limit                                                 8.1.4
ADC      critical value of ADK                                              8.3.2.2
ADK      k-sample Anderson-Darling statistic                                8.3.2.2
B        B-basis value                                                      8.2.5.1
b        distribution limit                                                 8.1.4
C        critical value                                                     8.3.3.1
CV       coefficient of variation                                           8.2.5.2
e        error, residual                                                    8.3.5.1
F        F-statistic                                                        8.3.5.2.2
F(x)     cumulative distribution function                                   8.1.4
f(x)     probability density function                                       8.1.4
F0       standard normal distribution function                              8.3.4.3.2
IQ       informative quantile function                                      8.3.6.2
J        number of specimens per batch                                      8.2.5.3
k        number of batches                                                  8.2.3
kA       (1) one-sided tolerance limit factor, A-basis                      8.3.4.3.3
         (2) Hanson-Koopmans coefficient, A-basis                           8.3.4.5.2
kB       (1) one-sided tolerance limit factor, B-basis                      8.3.4.3.3
         (2) Hanson-Koopmans coefficient, B-basis                           8.3.4.5.2
MNR      maximum normed residual test statistic                             8.3.3.1
MSB      between-batch/group mean square                                    8.3.5.2.5
MSE      within-batch/group mean square                                     8.3.5.2.5
n        number of observations in a data set                               8.1.4
n'       effective sample size                                              8.3.5.2.6
ñ        number of specimens required for comparable reproducibility        8.2.5.3
n*       see Equation 8.3.5.2.6(b)                                          8.3.5.2.6
ni       number of observations in batch/group i                            8.3.2.1
OSL      observed significance level                                        8.3.1
p(s)     fixed condition                                                    8.3.5.1
Q        quantile function                                                  8.3.6.1
Q̂        quantile function estimate                                         8.3.6.1
r        rank of observation                                                8.3.4.5.1
RME      relative magnitude of error                                        8.5
s        sample standard deviation                                          8.1.4
s^2      sample variance                                                    8.1.4
sL       standard deviation of log values                                   8.3.4.4
sy       estimated standard deviation of errors from the regression line    8.3.5.3
SSB      between-batch/group sum of squares                                 8.3.5.2.3
SSE      within-batch/group sum of squares                                  8.3.5.2.3
SST      total sum of squares                                               8.3.5.2.3
T        tolerance limit factor                                             8.3.5.2.7
t        quantile of the t-distribution                                     8.3.3.1
Ti       temperature at condition i                                         8.3.5.1


SYMBOL       DEFINITION                                                     SECTION

tγ,0.95(δ)   0.95 quantile of the non-central t-distribution with
             non-centrality parameter δ and degrees of freedom γ            8.3.5.3
TIQ          truncated informative quantile function                        8.3.6.2
u            (1) ratio of mean squares                                      8.3.5.2.7
             (2) batch                                                      8.3.5.1
VA           one-sided tolerance limit factor for the Weibull
             distribution, A-basis                                          8.3.4.2.3
VB           one-sided tolerance limit factor for the Weibull
             distribution, B-basis                                          8.3.4.2.3
wij          transformed data                                               8.3.5.2.1
x̄            sample mean, overall mean                                      8.1.4
xi           observation i in a sample                                      8.1.4
x̃i           median of x values                                             8.3.5.2.1
xij          jth observation in batch/group i                               8.3.2.1
xijk         kth observation in batch j at condition i                      8.2.3
xL           mean of log values                                             8.3.4.4
x(r)         rth observation, sorted in ascending order; observation
             of rank r                                                      8.3.4.5.1
z0.10        tenth percentile of the underlying population distribution     8.2.2
z(i)         ranked independent values                                      8.3.2.1
zp(s),u      regression constants                                           8.3.5.1
α            (1) significance level                                         8.3.3.1
             (2) scale parameter of Weibull distribution                    8.1.4
α̂            estimate of α                                                  8.3.4.2.1
β            shape parameter of Weibull distribution                        8.1.4
β̂            estimate of β                                                  8.3.4.2.1
βi           regression parameters                                          8.3.5.3
β̂i           least squares estimate of βi                                   8.3.5.3
γ            degrees of freedom                                             8.3.5.3
δ            noncentrality parameter                                        8.3.5.3
θi           regression parameters                                          8.3.5.1
µ            population mean                                                8.1.4
µi           mean at condition i                                            8.2.3
ρ            correlation between any two measurements in the same batch     8.2.5.3
σ            population standard deviation                                  8.1.4
σ^2          population variance                                            8.1.4
σb^2         population between-batch variance                              8.2.3
σe^2         population within-batch variance                               8.2.3

8.1.4 Statistical terms

Definitions of the most often used statistical terms in this handbook are provided in this section. This list is certainly not complete; the user of this document with little or no background in statistical methods should also consult an elementary text on statistical methods such as Reference 8.1.4. Definitions for additional statistical terms are included in Section 1.7.


Population -- The set of measurements about which inferences are to be made or the totality of possible measurements which might be obtained in a given testing situation. For example, "all possible ultimate tensile strength measurements for Composite Material A, conditioned at 95% relative humidity and room temperature". In order to make inferences about a population, it is often necessary to make assumptions about its distributional form. The assumed distributional form may also be referred to as the population.

Sample -- The collection of measurements (sometimes referred to as observations) taken from a specified population.

Sample size -- The number of measurements in a sample.

A-basis Value -- A statistically-based material property; a 95% lower confidence bound on the first percentile of a specified population of measurements. Also a 95% lower tolerance bound for the upper 99% of a specified population.

B-basis Value -- A statistically-based material property; a 95% lower confidence bound on the tenth percentile of a specified population of measurements. Also a 95% lower tolerance bound for the upper 90% of a specified population.

Compatible -- Descriptive term referring to different groups or subpopulations which may be treated as coming from the same population.

Structured data -- Data for which natural groupings exist, or for which responses of interest could vary systematically with respect to known factors. For example, measurements made from each of several batches could reasonably be grouped according to batch, and measurements made at various known temperatures could be modeled using linear regression (Section 8.3.5.2); hence both can be regarded as structured data.

Unstructured data -- Data for which all relevant information is contained in the response measurements themselves. This could be because these measurements are all that is known, or else because one is able to ignore potential structure in the data. For example, data measurements that have been grouped by batch and demonstrated to have negligible batch-to-batch variability (using the subsample compatibility methods of Section 8.3.2) may be considered unstructured.

Location parameters and statistics:

Population mean -- The average of all potential measurements in a given population weighted by their relative frequencies in the population. The population mean is the limit of the sample mean as the sample size increases.

Sample mean -- The average of all observations in a sample and an estimate of the population mean. If the notation x_1, x_2, ..., x_n is used to denote the n observations in a sample, then the sample mean is defined by:

    \bar{x} = \frac{x_1 + x_2 + \cdots + x_n}{n}                            8.1.4(a)

or

    \bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i                                8.1.4(b)

Sample median -- After ordering the observations in a sample from least to greatest, the sample median is the value of the middle-most observation if the sample size is odd and the average of the two middle-most observations if the sample size is even. If the population is symmetric about its mean, the sample median is also a satisfactory estimator of the population mean.
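As a concrete illustration of Equations 8.1.4(a) and (b) and of the sample median, the short Python sketch below (not part of the handbook; NumPy is assumed to be available and the strength values are hypothetical) computes both statistics for a small sample.

    import numpy as np

    # Hypothetical tensile strength observations (MPa), for illustration only.
    x = np.array([1012.0, 985.0, 1003.0, 968.0, 1021.0, 990.0, 977.0])
    n = len(x)

    sample_mean = x.sum() / n            # Equations 8.1.4(a) and (b)

    # Sample median: middle value for odd n, average of the two middle values for even n.
    x_sorted = np.sort(x)
    if n % 2 == 1:
        sample_median = x_sorted[n // 2]
    else:
        sample_median = 0.5 * (x_sorted[n // 2 - 1] + x_sorted[n // 2])

    print(sample_mean, sample_median)    # np.mean(x) and np.median(x) give the same results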


Dispersion statistics:

Sample variance -- The sum of the squared deviations from the sample mean, divided by n-1, where n denotes the sample size. The sample variance is defined by:

    s^2 = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2                    8.1.4(c)

or

    s^2 = \frac{1}{n-1} \sum_{i=1}^{n} x_i^2 - \frac{n}{n-1} \bar{x}^2      8.1.4(d)

Sample standard deviation -- The square root of the sample variance. The sample standard deviation is denoted by s.

Probability distribution terms:

Probability distribution -- A formula which gives the probability that a value will fall within prescribed limits. When the word distribution is used in this chapter, it should be interpreted to mean probability distribution.

Normal Distribution -- A two-parameter (\mu, \sigma) family of probability distributions for which the probability that an observation will fall between a and b is given by the area under the curve

    f(x) = \frac{1}{\sigma \sqrt{2\pi}} e^{-(x-\mu)^2 / 2\sigma^2}          8.1.4(e)

between a and b. A normal distribution with parameters (\mu, \sigma) has population mean \mu and variance \sigma^2.

Lognormal Distribution -- A probability distribution for which the probability that an observation selected at random from this population falls between a and b (0 < a < b < \infty) is given by the area under the normal distribution between ln(a) and ln(b).

Two-Parameter Weibull Distribution -- A probability distribution for which the probability that a randomly selected observation from this population lies between a and b (0 < a < b < \infty) is given by

    e^{-(a/\alpha)^\beta} - e^{-(b/\alpha)^\beta}                           8.1.4(f)

where \alpha is called the scale parameter and \beta is called the shape parameter.

Probability function terms:

Cumulative Distribution Function -- A function, usually denoted by F(x), which gives the probability that a random variable lies between any prescribed pair of numbers, that is

    Pr(a < x \le b) = F(b) - F(a)                                           8.1.4(g)

Such functions are non-decreasing and satisfy

    \lim_{x \to +\infty} F(x) = 1                                           8.1.4(h)
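The following Python sketch (again not part of the handbook; NumPy and SciPy are assumed, and all numbers are hypothetical) evaluates the two algebraically equivalent forms of the sample variance, Equations 8.1.4(c) and (d), and uses Equation 8.1.4(f) together with the cumulative distribution function idea of Equation 8.1.4(g) to find the probability that a Weibull observation falls between two limits.

    import numpy as np
    from scipy import stats

    x = np.array([1012.0, 985.0, 1003.0, 968.0, 1021.0, 990.0, 977.0])  # hypothetical MPa values
    n, xbar = len(x), x.mean()

    s2_c = ((x - xbar) ** 2).sum() / (n - 1)                   # Equation 8.1.4(c)
    s2_d = (x ** 2).sum() / (n - 1) - n / (n - 1) * xbar ** 2  # Equation 8.1.4(d), same value
    s = np.sqrt(s2_c)                                          # sample standard deviation

    # Probability that a Weibull(alpha = 1000, beta = 20) observation lies between a and b,
    # from Equation 8.1.4(f), checked against SciPy's cumulative distribution function.
    alpha, beta, a, b = 1000.0, 20.0, 900.0, 1100.0
    p_direct = np.exp(-(a / alpha) ** beta) - np.exp(-(b / alpha) ** beta)
    p_cdf = stats.weibull_min.cdf(b, beta, scale=alpha) - stats.weibull_min.cdf(a, beta, scale=alpha)
    print(s2_c, s2_d, s, p_direct, p_cdf)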


The cumulative distribution function, F, is related to the probability density function, f, by

    f(x) = \frac{d}{dx} F(x)                                                8.1.4(i)

provided that F(x) is differentiable.

F-distribution -- A probability distribution which is employed in the analysis of variance, regression analysis, and tests for equality of variance. Tables of this distribution are readily available.

Probability Density Function -- A function f(x) \ge 0 for all x with

    \int_{-\infty}^{\infty} f(x)\, dx = 1                                   8.1.4(j)

The probability density function determines the cumulative distribution function F(x) by

    F(x) = \int_{-\infty}^{x} f(t)\, dt                                     8.1.4(k)

Note that the limits (-\infty, \infty) may be conventional; for example, the exponential distribution satisfies the definition by defining its probability density function as

    f(x) = \begin{cases} 0 & \text{for } x \le 0 \\ e^{-x} & \text{for } x > 0 \end{cases}    8.1.4(l)

The probability density function is used to calculate probabilities as follows:

    Pr(a < x \le b) = \int_{a}^{b} f(x)\, dx                                8.1.4(m)

Error and Variability:

Fixed Effect -- A systematic shift in a measured quantity due to a particular level change of a treatment or condition. The change in level for a treatment or condition is often under the control of the experimenter. A measured quantity could be compressive strength or tensile modulus. A treatment or condition could be test temperature, fabricator, and so on. For a fixed effect, the shift in the measured quantity is to be interpreted as a consistent change not only in the context of the observed data but also with respect to future data under the same treatment or condition.

Random Effect -- A shift in a measured quantity due to a particular level change of an external, usually uncontrollable, factor. The level of this factor is regarded as a random draw from an infinite population. The specific level of a random effect is never under the control of the experimenter; however, it may remain fixed within a limited subgroup of observed data. A measured quantity could be compressive strength or tensile modulus. An external factor could be batch production leading to batch-to-batch differences. Fabricator-to-fabricator differences may be considered a random effect if the fabricators involved are considered to be a small sample of all present and future fabricators. For a random effect, the shift in the measured quantities is viewed as a random variable having mean zero and a non-zero variance. Within a subgroup experiencing a fixed level of an external factor, the measured quantities are correlated (shifting as a cluster around a population average with the magnitude of the shift depending on the level of the factor). Therefore, to obtain the most independent information concerning the population of response values, it is better to have more subgroups than to have more measurements per subgroup. A simulation illustrating this point is sketched below.
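To make the last point concrete, the Python sketch below (not from the handbook; NumPy assumed, variance components hypothetical) simulates the grand mean of a property measured on k batches with J specimens per batch, holding the total number of specimens fixed. Spreading the same number of specimens over more batches reduces the scatter of the grand mean, because its variance is \sigma_b^2/k + \sigma_w^2/(kJ).

    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma_b, sigma_w = 1000.0, 40.0, 25.0   # hypothetical mean and variance components (MPa)

    def grand_mean(k, J):
        """Simulate one experiment with k batches and J specimens per batch."""
        batch_effects = rng.normal(0.0, sigma_b, size=k)   # random batch shifts, shared within a batch
        data = mu + np.repeat(batch_effects, J) + rng.normal(0.0, sigma_w, size=k * J)
        return data.mean()

    # The same total of 30 specimens, allocated two different ways.
    few_batches = [grand_mean(k=3, J=10) for _ in range(5000)]
    many_batches = [grand_mean(k=10, J=3) for _ in range(5000)]
    print(np.std(few_batches), np.std(many_batches))   # the second is noticeably smaller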


Random Error -- That part of the data variation that is due to unknown or uncontrolled external factors and that affects each observation independently and unpredictably. It is the residual error in a model under analysis, the variability remaining after the variability due to fixed and random effects has been removed. Random error is a special case of a random effect. In both cases, the level of the random effect or error is uncontrollable, but random errors vary independently from measurement to measurement (i.e., there are no random error shifts shared in common by several measurements). An important example of random error is the specimen-to-specimen variability occurring within a subgroup experiencing constant levels of treatment, condition, batch, and other external factors (fixed and random effects).

Material Variability -- A source of variability due to the spatial and consistency variations of the material itself and due to variations in its processing (e.g., the inherent microstructure, defect population, cross-link density, etc.). Components of material variability can be any combination of fixed effects, random effects, and random error.

8.2 BACKGROUND

This section provides introductory material and guidance for the methods used in the remainder of the chapter. Readers unfamiliar with the statistical methods in the chapter should read this section before the remainder of the chapter. For more experienced readers, this section may be a useful reference for the approach and use of terminology.

8.2.1 Statistically-based design values

A design value for a material is the minimum value of a material property expected to be used in the fabrication of the structure. The value can be deterministic or statistically based. S-basis value is the usual designation of a deterministic value; this implies that any material, when test-sampled, is rejected if any of its properties fall below the established S-value. Statistically-based design values acknowledge the stochastic nature of the material properties and, in general, will reduce the amount of incoming material testing. Deterministic and statistically based material design values are used in the same way in the deterministic design of the structure. For structural integrity, actual stresses or strains in the structure (including appropriate safety factors) cannot exceed the material design values. If the structure is designed using probabilistic methods (by making reliability estimates), only statistically-based design values can be used.

To understand the definitions of 'statistically-based' design values, it is necessary to regard the material property of interest not as a constant, but as a random variable, a quantity that varies from specimen to specimen according to some probability distribution. A reasonable first attempt at definitions of B-basis and A-basis material properties are the 10th and 1st percentiles, respectively, of a material property distribution. One expects the property to usually be above these values, so these definitions are reasonable statistically-based counterparts to the traditional deterministic notion of a design value. Of course, there is an obvious problem in practice: one doesn't know the probability distribution of a material property.
So far only simple ideas of probability theory have been used in these definitions; it is in addressing uncertainty in these percentiles that statistical inference plays an essential role.

8.2.2 Basis values for unstructured data

Before breaking n specimens, imagine them each to have a strength value which can be represented as belonging to a common probability distribution. After breaking the specimens, one observes n numbers, and if n is large enough, a histogram of these numbers will approximate the unknown distribution. This probability distribution is referred to as a population, and the n numbers are a realization of a random sample of this population. Conceptually, one can do this thought-experiment many times, obtaining different sets of n numbers.


A statistically-based B-basis material property is a statistic, calculated from a random sample of n specimens, such that if one were to repeatedly obtain random samples of n specimens and calculate many of these basis values, 95% of the time the calculated values would fall below the (unknown) 10th percentile. An A-basis value is defined similarly, replacing the 10th percentile with the 1st. In statistical parlance, basis values are 95% lower confidence limits on prescribed percentiles, which are also sometimes referred to as tolerance limits.

Note that the definitions of statistically-based material properties have been developed in two steps. First, a deterministic property was modeled with a probability distribution in order to take into account observed scatter in the property, and tentative definitions of basis values in terms of percentiles of this distribution were made. This takes into account uncertainty that remains however much data on the property one obtains. But there is additional uncertainty, since instead of unlimited data, one has only n specimens. So the percentiles of the tentative definitions are replaced with conservative 'under-estimates' of these percentiles, thereby taking into account the additional uncertainty in a random material property due to limited data.

An example will help fix ideas. Let the tensile strength of a material have a normal distribution with a mean of 1000 MPa and a standard deviation of 125 MPa. The 10th percentile of this population is

    z_{0.10} = 1000 - (1.282)(125) \approx 840 \text{ MPa}

This would be the B-basis value if one had unlimited data, and hence knew the population. Assume instead that only n = 10 specimens are available. A B-basis value can be calculated for these n specimens (see Section 8.3.4.3), and if one were to obtain many such sets of 10 specimens from the same population, this basis value would be less than 840 MPa for 95% of these repeated samples. Substantial scatter is characteristic of basis values determined from small data sets, due primarily to uncertainty in the population variance (see Section 8.2.5). A sketch of such a calculation is given below.
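The Python sketch below (not from the handbook; SciPy is assumed to be available) carries out this example for a single simulated sample of n = 10. It obtains the one-sided tolerance limit factor kB from the non-central t-distribution, which is one standard way to compute a normal-distribution B-basis value; the handbook's own procedures and approximate formulas are given in Section 8.3.4.3 and may differ slightly.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    mu, sigma, n = 1000.0, 125.0, 10           # normal population from the example above
    x = rng.normal(mu, sigma, size=n)          # one random sample of n specimens

    # One-sided tolerance factor for a 95% lower confidence bound on the 10th percentile
    # of a normal population, computed from the non-central t-distribution.
    z10 = stats.norm.ppf(0.90)                 # 1.282
    kB = stats.nct.ppf(0.95, df=n - 1, nc=z10 * np.sqrt(n)) / np.sqrt(n)   # about 2.35 for n = 10

    B = x.mean() - kB * x.std(ddof=1)          # B-basis value for this sample
    print(kB, B)   # over many repeated samples, B falls below 840 MPa about 95% of the time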
The present discussion provides a fairly complete description of material basis values, if one is willing to make two simplifying assumptions: first, that between-batch material property variability is negligible, and second, that all of the data are obtained from tests at identical conditions. In Section 8.3.2, such data are defined to be unstructured. However, composite material properties often do vary substantially from batch to batch, and data on properties are usually obtained not for a single set of fixed conditions but over a test matrix of some combination of temperatures, humidities, and stacking sequences. Data that exhibit these additional complexities are called structured (see Section 8.3.2) and are analyzed using regression and analysis of variance. Regression analysis in general is discussed in Section 8.3.5.

8.2.3 Basis values in the presence of batch-to-batch variability

Composite materials typically exhibit considerable variability in many properties from batch to batch. Because of this variability, one should not indiscriminately pool data over batches and apply the unstructured data procedures discussed above and in Section 8.3.4. Basis values should incorporate the variability to be expected between batches or panels of a material, particularly when one has data on only a few batches or panels, or when one has a particular reason for suspecting that this variability could be non-negligible. Pooling batches involves the implicit assumption that this source of variability is negligible, and in the event that this is not the case, the values which result from pooling can be too optimistic. Before pooling data, the subsample compatibility methods of Section 8.3.2 should be applied. The interpretation of material basis values in the presence of between-batch (or panel, and so on) variability is discussed below for the simplest case of a one-way ANOVA model (Section 8.3.5.2).

The data for the present discussion consist of n measurements, all of the same property, of the same material, and tested under the same conditions. The only structure apparent in the data under this hypothetical scenario is that each specimen has been fabricated from one of k batches of raw material. (Equivalently, one might imagine material made from the same batch, but for which several autoclave runs had been required, resulting in non-negligible variability in properties between panels of specimens.) Each data value can be regarded as a sum of three parts.


The first part is the unknown mean, the second part is a shift in the mean due to the batch from which the specimen was obtained, and the third part is a random perturbation due to the scatter in measurements made on different specimens from the same batch.

The unknown constant mean corresponds to a set of fixed conditions (for example, 8-ply unidirectional tensile strength for a specific material, tested according to a well-defined test method, and at prescribed test conditions). If one were to produce batches endlessly, preparing specimens from each batch according to these fixed conditions, breaking specimens from each batch, and obtaining measurements of the property of interest, then the average of all of these measurements would approach this unknown constant in the limit of infinitely many batches. This unknown mean can be parameterized as a function of the conditions under which the specimens were prepared and tested, where the form of this function is known except for some constants; this is related to the notion of a regression model, which will be discussed in some detail in Section 8.3.5.1.

Imagine, however, that one were to test many specimens from a single batch. The average strength approaches a constant in this situation as well, but this constant will not be the same as in the case where each specimen comes from a different batch. In the situation discussed in the previous paragraph, the average converges to an overall population mean (a 'grand mean'), while in the present case the average converges to the population mean for that particular batch. The difference between the overall population mean and the population mean for a particular batch is the second component of a material property measurement. This difference is a random quantity; it will vary from batch to batch in an unsystematic way. This random 'batch effect' is assumed to follow a normal probability distribution with a mean of zero and some unknown variance called the between-batch component of variance, denoted by \sigma_b^2.

Even when specimens are made from the same batch and tested under identical conditions, one will not get the same value every time. In addition to the population mean and the random 'batch effect', there is a third component to any measurement, which is also random, but which differs from specimen to specimen within a batch. This random quantity is called the within-batch variability, and it is modeled as a normally distributed random variable with a mean of zero and a variance \sigma_w^2, referred to as the within-batch component of variance.

To summarize, a measurement made on a particular specimen from a specific batch is modeled as a sum of three parts:

    x_{ijk} = \mu_i + b_j + e_{ijk}                                         8.2.3

where x_{ijk} is the kth measurement from batch j at a set of fixed conditions labeled by i. The random variables b_j and e_{ijk} have normal distributions with mean zero and variances \sigma_b^2 and \sigma_w^2, respectively. For the present discussion, there is only one set of fixed conditions, hence the subscript 'i' can be omitted. For the general regression and analysis of variance models discussed in Sections 8.3.5.1 and 8.3.5.2, there can be many combinations of fixed factors; there the 'i' subscript in Equation 8.2.3 must be retained.
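As an illustration of Equation 8.2.3, the Python sketch below (not part of the handbook; NumPy assumed, all values hypothetical) simulates k batches of J specimens each and recovers the two variance components with the standard one-way ANOVA method-of-moments estimators built from the between-batch and within-batch mean squares (MSB and MSE, defined in Section 8.3.5.2.5). The handbook's basis-value procedure for structured data adds tolerance-factor steps that are not shown here.

    import numpy as np

    rng = np.random.default_rng(2)
    mu, sigma_b, sigma_w = 1000.0, 40.0, 25.0   # hypothetical population values (MPa)
    k, J = 8, 6                                 # k batches, J specimens per batch

    # Simulate data from x_jk = mu + b_j + e_jk (a single fixed condition, so 'i' is dropped).
    b = rng.normal(0.0, sigma_b, size=k)
    x = mu + b[:, None] + rng.normal(0.0, sigma_w, size=(k, J))

    batch_means = x.mean(axis=1)
    grand_mean = x.mean()

    MSB = J * ((batch_means - grand_mean) ** 2).sum() / (k - 1)      # between-batch mean square
    MSE = ((x - batch_means[:, None]) ** 2).sum() / (k * (J - 1))    # within-batch mean square

    sigma_w2_hat = MSE                          # estimates sigma_w^2
    sigma_b2_hat = max((MSB - MSE) / J, 0.0)    # method-of-moments estimate of sigma_b^2
    print(sigma_b2_hat, sigma_w2_hat)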
If data from more than one batch are available, then RECIPE (Section 8.1.2) will use the data to determine basis values which, with 95% confidence, are less than the appropriate percentile of a randomly chosen observation from a randomly chosen future batch, for a particular set of fixed conditions. Such values protect against the possibility of batch-to-batch variability resulting in future batches which have lower mean properties than those batches for which data are available.

8.2.4 Batches, panels, and confounding

The model described in Equation 8.2.3 and Section 8.3.5 is based on the assumption of at most two sources of variability; these are referred to as 'between-batch variability' and 'within-batch variability'.

MIL-HDBK-17-1F Volume 1,Chapter 8 Statistical Methods the manufacturing of composites,however,there are typically at least three sources of variability.For composites made from prepreg,the additional source is due to the fact that several specimens are typi- cally manufactured together as a 'panel',consequently a third source can be referred to as 'between- panel'variability. When one has data on a material from several batches,but at only one set of fixed conditions,one cannot estimate batch and panel variabilities separately.Whenever data are obtained from a new batch, that data also comes from a different panel.(In statistical terminology,the batch and panel variances are confounded.)So what we call 'between-batch variability'in such cases is actually the sum of the be- tween-batch and between-panel variances.Unless the between-panel variability is negligible,the be- tween-batch variance will be over-estimated in such cases.This can result in material basis properties that are lower than they should be. Next consider the situation where data are available from several batches at more than one set of fixed conditions(see Section 8.3.7.8).If one assumes also that data at different conditions from the same batch are from different panels,then one is able,in principle,to estimate the between-batch and between- panel variances separately.However,the regression models in this chapter and the RECIPE software include only one source of such variability.Consequently,the between-panel variance is confounded,not with the between-batch variance as above.but with the within-batch variance.This can result in material basis values that are somewhat higher than they should be.This is likely to be a less serious problem than the case where panel and batch variances are confounded for several reasons.Perhaps the most important of these is that of the sources of variability,that due to batches is the primary concern,and is being treated appropriately.Another reason is that there is typically considerable variability within panels and if the between-panel variance is small with respect to the within-panel variability,then the material basis properties will not be substantially higher than they should be. 8.2.5 Sample size guidelines for determining basis values. Material basis values are often regarded as material properties,that is,these values are interpreted as constants which can be used to help characterize the material and processing.Since basis values will a/ways vary from one set of data to the next,even if the material,conditioning,and test remain un- changed,treating them as material constants is always an approximation. However,if the calculations are based on 'enough'data,the basis values should be reproducible,to within engineering accuracy,across comparable data sets.The objective of this section is to illustrate the small-sample reproducibility problem and to provide guidance on how many data are necessary in basis value calculations in order for these values to be approximately reproducible. How many data are 'enough'depends on many factors,including 1.The statistical model which is used to approximate the population from which the data is sampled, 2. 
8.2.5 Sample size guidelines for determining basis values

Material basis values are often regarded as material properties; that is, these values are interpreted as constants which can be used to help characterize the material and processing. Since basis values will always vary from one set of data to the next, even if the material, conditioning, and test remain unchanged, treating them as material constants is always an approximation.

However, if the calculations are based on 'enough' data, the basis values should be reproducible, to within engineering accuracy, across comparable data sets. The objective of this section is to illustrate the small-sample reproducibility problem and to provide guidance on how many data are necessary in basis value calculations in order for these values to be approximately reproducible.

How many data are 'enough' depends on many factors, including

1. The statistical model which is used to approximate the population from which the data are sampled,
2. The degree of reproducibility which is desired,
3. The variability in the property being measured, and
4. The variability in measurements of the property due to the test method.

Because of this, it is impossible to give firm recommendations. The discussion in this section has another purpose: it is intended to provide background information and guidelines to assist the user of this handbook in making a sample size decision. We emphasize that this section deals only with the stability of basis values with respect to sample size. Another important issue relevant to the choice of a sample size, which deserves separate consideration, is the effect on basis values of statistical model assumptions, since there is considerable uncertainty in model selection from small samples. Additional discussion of the effect of sample size selection is found in Section 2.2.5.
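To illustrate the reproducibility question numerically, the following sketch simulates repeated B-basis calculations as the sample size grows. It is an illustration only, not a handbook procedure: it assumes unstructured data adequately described by a normal distribution (the simplest case in Section 8.3.4) and a single hypothetical population with mean 100 and standard deviation 8. The tolerance factor k_B is computed from the noncentral t distribution and should be checked against the tables in Section 8.5. The scatter of the simulated basis values, not their particular magnitudes, is the point of the exercise.

    import numpy as np
    from scipy import stats

    def k_factor_b(n):
        # One-sided tolerance factor for a 95% lower confidence bound on the
        # 10th percentile of a normal population (the B-basis definition).
        z90 = stats.norm.ppf(0.90)
        return stats.nct.ppf(0.95, df=n - 1, nc=z90 * np.sqrt(n)) / np.sqrt(n)

    def b_basis(sample):
        # Normal-model B-basis value: sample mean minus k_B times the
        # sample standard deviation.
        n = len(sample)
        return sample.mean() - k_factor_b(n) * sample.std(ddof=1)

    rng = np.random.default_rng(1)
    mu, sigma = 100.0, 8.0   # hypothetical strength population
    for n in (6, 10, 18, 30, 60):
        values = [b_basis(rng.normal(mu, sigma, n)) for _ in range(2000)]
        print(f"n = {n:3d}: mean B-basis = {np.mean(values):6.1f}, "
              f"scatter (std dev) = {np.std(values):4.1f}")

For small samples the simulated basis values differ noticeably from one comparable data set to the next; the scatter shrinks steadily as n grows, which is the sense in which larger samples make basis values reproducible to within engineering accuracy.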

