Analysis of variance

In statistics, analysis of variance (ANOVA) is a collection of statistical models, and their associated procedures, in which the observed variance in a particular variable is partitioned into components attributable to different sources of variation. In its simplest form, ANOVA provides a statistical test of whether or not the means of several groups are all equal, and therefore generalizes the t-test to more than two groups. Doing multiple two-sample t-tests would result in an increased chance of committing a type I error. For this reason, ANOVAs are useful in comparing two, three, or more means.
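The inflation of the type I error rate from running many separate tests can be illustrated with a short calculation (a sketch; the 5% level and three comparisons are arbitrary example choices):

```python
# Why ANOVA instead of many two-sample t-tests: with m independent
# comparisons each run at significance level alpha, the chance of at
# least one false positive (the family-wise type I error) grows quickly.
def familywise_error_rate(alpha: float, m: int) -> float:
    """Probability of at least one type I error in m independent tests."""
    return 1.0 - (1.0 - alpha) ** m

# Comparing 3 group means pairwise takes m = 3 t-tests at alpha = 0.05:
rate = familywise_error_rate(0.05, 3)
print(round(rate, 4))  # -> 0.1426
```

So three pairwise t-tests at the 5% level already carry roughly a 14% chance of a spurious finding; a single ANOVA F-test avoids this inflation.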
Models
There are three classes of models used in the analysis of variance, and these are outlined here.
Fixed-effects models (Model 1)
Main article: Fixed effects model
The fixed-effects model of analysis of variance applies to situations in which the experimenter applies one or more treatments to the subjects of the experiment to see if the response variable values change. This allows the experimenter to estimate the ranges of response variable values that the treatment would generate in the population as a whole.
Random-effects models (Model 2)
Main article: Random effects model
Random effects models are used when the treatments are not fixed. This occurs when the various factor levels are sampled from a larger population. Because the levels themselves are random variables, some assumptions and the method of contrasting the treatments differ from those of ANOVA model 1.
Mixed-effects models (Model 3)
Main article: Mixed model
A mixed-effects model contains experimental factors of both fixed- and random-effects types, with appropriately different interpretations and analysis for the two types.
Assumptions of ANOVA
The analysis of variance has been studied from several approaches, the most common of which use a linear model that relates the response to the treatments and blocks. Even when the statistical model is nonlinear, it can be approximated by a linear model for which an analysis of variance may be appropriate.
A model often presented in textbooks
Many textbooks present the analysis of variance in terms of a linear model, which makes the following assumptions about the probability distribution of the responses:
 Independence of cases – this is an assumption of the model that simplifies the statistical analysis.
 Normality – the distributions of the residuals are normal.
 Equality (or "homogeneity") of variances, called homoscedasticity — the variance of data in groups should be the same. Model-based approaches usually assume that the variance is constant. The constant-variance property also appears in the randomization (design-based) analysis of randomized experiments, where it is a necessary consequence of the randomized design and the assumption of unit-treatment additivity.^{[1]} If the responses of a randomized balanced experiment fail to have constant variance, then the assumption of unit-treatment additivity is necessarily violated.
To test the hypothesis that all treatments have exactly the same effect, the F-test's p-values closely approximate the permutation test's p-values: the approximation is particularly close when the design is balanced.^{[2]} Such permutation tests characterize tests with maximum power against all alternative hypotheses, as observed by Rosenbaum.^{[nb 1]} The ANOVA F-test (of the null hypothesis that all treatments have exactly the same effect) is recommended as a practical test, because of its robustness against many alternative distributions.^{[3]}^{[nb 2]} The Kruskal–Wallis test is a nonparametric alternative that does not rely on an assumption of normality, and the Friedman test is the nonparametric alternative for a one-way repeated-measures ANOVA.
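A permutation test of the null hypothesis that all treatments have the same effect can be sketched by recomputing the usual F statistic under random reassignments of the observations to groups (a sketch; the group data below are made up):

```python
import random
from statistics import mean

def f_statistic(groups):
    """One-way ANOVA F statistic: MS_treatments / MS_error."""
    data = [x for g in groups for x in g]
    grand, n_T, I = mean(data), len(data), len(groups)
    ss_treat = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_error = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_treat / (I - 1)) / (ss_error / (n_T - I))

def permutation_p_value(groups, n_perm=2000, seed=0):
    """Fraction of random label permutations whose F statistic is at
    least as large as the observed one (approximate permutation p-value)."""
    rng = random.Random(seed)
    sizes = [len(g) for g in groups]
    pooled = [x for g in groups for x in g]
    f_obs = f_statistic(groups)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        regrouped, start = [], 0
        for size in sizes:
            regrouped.append(pooled[start:start + size])
            start += size
        if f_statistic(regrouped) >= f_obs:
            hits += 1
    return hits / n_perm

# Three hypothetical treatment groups with an apparent shift in the second:
p = permutation_p_value([[6.1, 5.8, 6.4], [7.9, 8.2, 7.5], [6.0, 6.3, 5.7]])
```

For balanced designs like this one, the p-value obtained here is typically close to the p-value of the classical F-test, which is the approximation discussed above.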
The separate assumptions of the textbook model imply that the errors are independently, identically, and normally distributed for fixed-effects models, that is, that the errors (ε's) are independent and ε ∼ N(0, σ²).
Randomizationbased analysis
See also: Random assignment and Randomization test
In a randomized controlled experiment, the treatments are randomly assigned to experimental units, following the experimental protocol. This randomization is objective and declared before the experiment is carried out. The objective random assignment is used to test the significance of the null hypothesis, following the ideas of C. S. Peirce and Ronald A. Fisher. This design-based analysis was discussed and developed by Francis J. Anscombe at Rothamsted Experimental Station and by Oscar Kempthorne at Iowa State University.^{[4]} Kempthorne and his students make an assumption of unit-treatment additivity, which is discussed in the books of Kempthorne and David R. Cox.^{[citation needed]}
Unit-treatment additivity
In its simplest form, the assumption of unit-treatment additivity states that the observed response y_{i,j} from experimental unit i when receiving treatment j can be written as the sum of the unit's response y_{i} and the treatment effect t_{j}, that is,^{[5]}^{[6]}
 y_{i,j} = y_{i} + t_{j}.
The assumption of unit-treatment additivity implies that, for every treatment j, the jth treatment has exactly the same effect t_{j} on every experimental unit.
The assumption of unit-treatment additivity usually cannot be directly falsified, according to Cox and Kempthorne. However, many consequences of unit-treatment additivity can be falsified. For a randomized experiment, the assumption of unit-treatment additivity implies that the variance is constant for all treatments. Therefore, by contraposition, a necessary condition for unit-treatment additivity is that the variance is constant.
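The constant-variance consequence can be checked numerically: under additivity, each treatment shifts every unit's response by the same constant, so the spread of responses is identical across treatments (the unit responses and treatment effects below are made-up illustrative values):

```python
from statistics import pvariance

# Under unit-treatment additivity, the response of unit i under treatment j
# is y_i + t_j: a constant shift per treatment, which leaves the variance
# of the responses unchanged across treatments.
unit_responses = [5.0, 7.0, 9.0, 12.0]               # hypothetical y_i
treatment_effects = {"control": 0.0, "dose": 3.5}    # hypothetical t_j

observed = {j: [y + t for y in unit_responses]
            for j, t in treatment_effects.items()}

# The spread is the same in both treatment groups:
assert pvariance(observed["control"]) == pvariance(observed["dose"])
```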
The property of unit-treatment additivity is not invariant under a "change of scale", so statisticians often use transformations to achieve unit-treatment additivity. If the response variable is expected to follow a parametric family of probability distributions, then the statistician may specify (in the protocol for the experiment or observational study) that the responses be transformed to stabilize the variance.^{[7]} Also, a statistician may specify that logarithmic transforms be applied to the responses, which are believed to follow a multiplicative model.^{[8]}^{[9]} According to Cauchy's functional equation theorem, the logarithm is the only continuous transformation that transforms real multiplication to addition.
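For a multiplicative model, the log transform restores additivity, which a short check makes concrete (the values below are illustrative):

```python
import math

# A multiplicative model y_ij = y_i * t_j becomes additive on the log scale:
# log(y_ij) = log(y_i) + log(t_j), restoring unit-treatment additivity.
y = [2.0, 4.0, 8.0]    # hypothetical unit responses
t = 1.5                # hypothetical multiplicative treatment effect

treated = [yi * t for yi in y]
shifts = [math.log(yt) - math.log(yi) for yi, yt in zip(y, treated)]

# On the log scale, every unit is shifted by the same constant log(t):
assert all(abs(s - math.log(t)) < 1e-12 for s in shifts)
```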
The assumption of unit-treatment additivity was enunciated in experimental design by Kempthorne and Cox. Kempthorne's use of unit-treatment additivity and randomization is similar to the design-based inference that is standard in finite-population survey sampling.
Derived linear model
Kempthorne uses the randomization distribution and the assumption of unit-treatment additivity to produce a derived linear model, very similar to the textbook model discussed previously.
The test statistics of this derived linear model are closely approximated by the test statistics of an appropriate normal linear model, according to approximation theorems and simulation studies by Kempthorne and his students (Hinkelmann and Kempthorne 2008). However, there are differences. For example, the randomization-based analysis results in a small but (strictly) negative correlation between the observations.^{[10]}^{[11]} In the randomization-based analysis, there is no assumption of a normal distribution and certainly no assumption of independence; on the contrary, the observations are dependent.
The randomization-based analysis has the disadvantage that its exposition involves tedious algebra and takes extensive time. Since the randomization-based analysis is complicated and is closely approximated by the approach using a normal linear model, most teachers emphasize the normal linear model approach. Few statisticians object to model-based analysis of balanced randomized experiments.
Statistical models for observational data
However, when applied to data from non-randomized experiments or observational studies, model-based analysis lacks the warrant of randomization. For observational data, the derivation of confidence intervals must use subjective models, as emphasized by Ronald A. Fisher and his followers. In practice, the estimates of treatment effects from observational studies are often inconsistent. In practice, "statistical models" and observational data are useful for suggesting hypotheses that should be treated very cautiously by the public.^{[12]}
Logic of ANOVA
Partitioning of the sum of squares
The fundamental technique is a partitioning of the total sum of squares SS into components related to the effects used in the model. For example, for a simplified ANOVA with one type of treatment at different levels, the total sum of squares is partitioned as

 SS_{Total} = SS_{Error} + SS_{Treatments}.

The number of degrees of freedom f can be partitioned in a similar way and specifies the chi-squared distribution which describes the associated sums of squares:

 f_{Total} = f_{Error} + f_{Treatments}.
See also Lack-of-fit sum of squares.
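The partitioning of the total sum of squares into treatment and error components can be verified on toy data (a sketch; the group values below are made up):

```python
from statistics import mean

def sum_of_squares_partition(groups):
    """Return (SS_total, SS_treatments, SS_error) for a one-way layout."""
    data = [x for g in groups for x in g]
    grand = mean(data)
    ss_total = sum((x - grand) ** 2 for x in data)
    ss_treat = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_error = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return ss_total, ss_treat, ss_error

groups = [[3.0, 4.0, 5.0], [6.0, 7.0, 8.0], [2.0, 3.0, 4.0]]
ss_total, ss_treat, ss_error = sum_of_squares_partition(groups)

# The partition holds exactly: SS_Total = SS_Treatments + SS_Error
assert abs(ss_total - (ss_treat + ss_error)) < 1e-9
```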
The F-test
Main article: F-test
The F-test is used for comparisons of the components of the total deviation. For example, in one-way, or single-factor, ANOVA, statistical significance is tested for by comparing the F test statistic

 F = (variance between treatments) / (variance within treatments) = MS_{Treatments} / MS_{Error} = (SS_{Treatments} / (I − 1)) / (SS_{Error} / (n_{T} − I)),

where
 I = number of treatments
and
 n_{T} = total number of cases,
to the F-distribution with I − 1, n_{T} − I degrees of freedom. Using the F-distribution is a natural candidate because the test statistic is the ratio of two scaled sums of squares, each of which follows a scaled chi-squared distribution.
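The F statistic above can be computed directly from the sums of squares; a minimal sketch on made-up data (in practice the result is compared with the appropriate F-distribution quantile):

```python
from statistics import mean

def one_way_f(groups):
    """One-way ANOVA F statistic: MS_treatments / MS_error."""
    data = [x for g in groups for x in g]
    grand, n_T, I = mean(data), len(data), len(groups)
    ss_treat = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_error = sum((x - mean(g)) ** 2 for g in groups for x in g)
    ms_treat = ss_treat / (I - 1)      # degrees of freedom: I - 1
    ms_error = ss_error / (n_T - I)    # degrees of freedom: n_T - I
    return ms_treat / ms_error

groups = [[3.0, 4.0, 5.0], [6.0, 7.0, 8.0], [2.0, 3.0, 4.0]]
f = one_way_f(groups)  # -> 13.0; compare to the F(2, 6) distribution
```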
Power analysis
Power analysis is often applied in the context of ANOVA in order to assess the probability of successfully rejecting the null hypothesis if we assume a certain ANOVA design, effect size in the population, sample size and alpha level. Power analysis can assist in study design by determining what sample size would be required in order to have a reasonable chance of rejecting the null hypothesis when the alternative hypothesis is true.
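When no analytic power routine is at hand, power can be approximated by simulation (a sketch; the group means, standard deviation, sample size, alpha level, and the critical value F(0.95; 2, 27) ≈ 3.354 are all assumptions of this example):

```python
import random
from statistics import mean

# Assumed design: three normal groups of n = 10, true means (0, 0, 1),
# sd = 1, alpha = 0.05. F_CRIT approximates the 0.95 quantile of F(2, 27).
F_CRIT = 3.354

def one_way_f(groups):
    """One-way ANOVA F statistic: MS_treatments / MS_error."""
    data = [x for g in groups for x in g]
    grand, n_T, I = mean(data), len(data), len(groups)
    ss_treat = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_error = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_treat / (I - 1)) / (ss_error / (n_T - I))

def simulated_power(true_means, n, n_sim=1000, seed=1):
    """Fraction of simulated experiments in which the null is rejected."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_sim):
        groups = [[rng.gauss(mu, 1.0) for _ in range(n)] for mu in true_means]
        if one_way_f(groups) > F_CRIT:
            rejections += 1
    return rejections / n_sim

power = simulated_power((0.0, 0.0, 1.0), n=10)
```

Raising n in the call above and re-running shows how sample size drives power, which is exactly the design question power analysis answers.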
Effect size
Main article: Effect size
Several standardized measures of effect gauge the strength of the association between a predictor (or set of predictors) and the dependent variable. Effect-size estimates facilitate the comparison of findings within studies and across disciplines. Common effect-size estimates reported in univariate-response ANOVA and multivariate-response MANOVA include the following: eta-squared, partial eta-squared, omega-squared, and intercorrelation.
η^{2} (eta-squared): Eta-squared describes the ratio of variance explained in the dependent variable by a predictor while controlling for other predictors. Eta-squared is a biased estimator of the variance explained by the model in the population (it estimates only the effect size in the sample): on average it overestimates the variance explained in the population. As the sample size gets larger the amount of bias gets smaller.
Partial η^{2} (partial eta-squared): Partial eta-squared describes the "proportion of total variation attributable to the factor, partialling out (excluding) other factors from the total nonerror variation".^{[13]} Partial eta-squared is often higher than eta-squared.
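Both measures are simple ratios of sums of squares; a sketch with made-up values for a design with one factor of interest plus other factors:

```python
# Hypothetical sums of squares from a multi-factor ANOVA table:
ss_factor = 40.0   # SS for the factor of interest
ss_other = 25.0    # SS for all other (non-error) factors
ss_error = 135.0   # error SS
ss_total = ss_factor + ss_other + ss_error

# Eta-squared: share of the TOTAL variation explained by the factor.
eta_sq = ss_factor / ss_total

# Partial eta-squared: other factors are partialled out of the denominator,
# which is why it is often larger than eta-squared.
partial_eta_sq = ss_factor / (ss_factor + ss_error)
```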
Cohen (1992) suggests effect sizes for various indexes, including ƒ (where 0.1 is a small effect, 0.25 is a medium effect and 0.4 is a large effect). He also offers a conversion table (see Cohen, 1988, p. 283) for eta-squared (η^{2}), where 0.0099 constitutes a small effect, 0.0588 a medium effect and 0.1379 a large effect. However, considering that η^{2} is comparable to r^{2} when the df of the numerator equals 1 (both measure the proportion of variance accounted for), these guidelines may overestimate the size of the effect. Going by the r guidelines (0.1 is a small effect, 0.3 a medium effect and 0.5 a large effect), the equivalent guidelines for eta-squared would be the squares of these, i.e. 0.01 is a small effect, 0.09 a medium effect and 0.25 a large effect. When the df of the numerator exceeds 1, eta-squared is comparable to R-squared.^{[14]}
Omega^{2} (omega-squared): A less biased estimator of the variance explained in the population is omega-squared:^{[15]}^{[16]}^{[17]}

 ω^{2} = (SS_{treatment} − df_{treatment} · MS_{error}) / (SS_{total} + MS_{error}).
While this form of the formula is limited to between-subjects analysis with equal sample sizes in all cells,^{[17]} a generalized form of the estimator has been published for between-subjects and within-subjects analysis, repeated measures, mixed designs, and randomized block design experiments.^{[18]} In addition, methods to calculate partial Omega^{2} for individual factors and combined factors in designs with up to three independent variables have been published.^{[18]}
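For the restricted between-subjects, equal-cell-size form, omega-squared can be computed directly from ANOVA-table quantities (a sketch; the sums of squares below are made up):

```python
def omega_squared(ss_treat, ss_error, I, n_T):
    """omega^2 = (SS_treat - (I - 1) * MS_error) / (SS_total + MS_error)
    for a one-way between-subjects design with equal cell sizes."""
    ms_error = ss_error / (n_T - I)
    ss_total = ss_treat + ss_error
    return (ss_treat - (I - 1) * ms_error) / (ss_total + ms_error)

# Hypothetical table: SS_treat = 100, SS_error = 200, 3 groups, 30 cases.
w2 = omega_squared(ss_treat=100.0, ss_error=200.0, I=3, n_T=30)
```

Note that w2 here is smaller than the corresponding eta-squared (100/300 ≈ 0.33), reflecting the bias correction.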
Cohen's ƒ^{2}: This measure of effect size is the ratio of variance explained to variance not explained; Cohen's ƒ is its square root.

SMCV or standardized mean of a contrast variable: This effect size is the ratio of mean to standard deviation of a contrast variable for contrast analysis in ANOVA. It may provide a probabilistic interpretation to various effect sizes in contrast analysis.^{[19]}
Follow-up tests
A statistically significant effect in ANOVA is often followed up with one or more different follow-up tests. This can be done in order to assess which groups are different from which other groups or to test various other focused hypotheses. Follow-up tests are often distinguished in terms of whether they are planned (a priori) or post hoc. Planned tests are determined before looking at the data and post hoc tests are performed after looking at the data. Post hoc tests such as Tukey's range test most commonly compare every group mean with every other group mean and typically incorporate some method of controlling for Type I errors. Comparisons, which are most commonly planned, can be either simple or compound. Simple comparisons compare one group mean with one other group mean. Compound comparisons typically compare two sets of group means where one set has two or more groups (e.g., compare the average of the group means of groups A, B, and C with the group mean of group D). Comparisons can also look at tests of trend, such as linear and quadratic relationships, when the independent variable involves ordered levels.
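A minimal sketch of simple pairwise comparisons with a Bonferroni adjustment for the family-wise Type I error (the group data and the 5% level are made up; Tukey's range test would instead use the studentized range distribution):

```python
from itertools import combinations
from statistics import mean

# Hypothetical group data following a significant omnibus ANOVA:
groups = {"A": [3.0, 4.0, 5.0], "B": [6.0, 7.0, 8.0], "C": [2.0, 3.0, 4.0]}

# Simple comparisons: every group mean against every other group mean.
pairs = list(combinations(sorted(groups), 2))

# Bonferroni: divide the family-wise alpha by the number of comparisons.
alpha_adjusted = 0.05 / len(pairs)

# Pairwise mean differences, each to be tested at alpha_adjusted:
diffs = {(a, b): mean(groups[a]) - mean(groups[b]) for a, b in pairs}
```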
Study designs and ANOVAs
There are several types of ANOVA. Many statisticians base ANOVA on the design of the experiment,^{[citation needed]} especially on the protocol that specifies the random assignment of treatments to subjects; the protocol's description of the assignment mechanism should include a specification of the structure of the treatments and of any blocking. It is also common to apply ANOVA to observational data using an appropriate statistical model.^{[citation needed]}
Some popular designs use the following types of ANOVA:
 One-way ANOVA is used to test for differences among two or more independent groups (means), e.g. different levels of urea application in a crop. Typically, however, the one-way ANOVA is used to test for differences among at least three groups, since the two-group case can be covered by a t-test.^{[20]} When there are only two means to compare, the t-test and the ANOVA F-test are equivalent; the relation between ANOVA and t is given by F = t^{2}.
 Factorial ANOVA is used when the experimenter wants to study the interaction effects among the treatments.
 Repeated measures ANOVA is used when the same subjects are used for each treatment (e.g., in a longitudinal study).
 Multivariate analysis of variance (MANOVA) is used when there is more than one response variable.
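The equivalence F = t² noted for the two-group case in the first bullet can be checked directly (a sketch on made-up data):

```python
import math
from statistics import mean

# Two toy groups: the pooled two-sample t-test and the one-way ANOVA
# F-test are equivalent, with F = t^2.
g1, g2 = [3.0, 4.0, 5.0, 6.0], [6.0, 7.0, 8.0, 9.0]
n1, n2 = len(g1), len(g2)

# Pooled two-sample t statistic
sp2 = (sum((x - mean(g1)) ** 2 for x in g1) +
       sum((x - mean(g2)) ** 2 for x in g2)) / (n1 + n2 - 2)
t = (mean(g1) - mean(g2)) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# One-way ANOVA F statistic for the same two groups
data = g1 + g2
grand = mean(data)
ss_treat = n1 * (mean(g1) - grand) ** 2 + n2 * (mean(g2) - grand) ** 2
ss_error = (sum((x - mean(g1)) ** 2 for x in g1) +
            sum((x - mean(g2)) ** 2 for x in g2))
F = (ss_treat / 1) / (ss_error / (n1 + n2 - 2))

assert abs(F - t ** 2) < 1e-9  # F = t^2
```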
History
The analysis of variance was used informally by researchers in the 1800s using least squares.^{[citation needed]} In physics and psychology, researchers included a term for the operator effect, the influence of a particular person on measurements, according to Stephen Stigler's histories.^{[citation needed]}
Sir Ronald Fisher proposed a formal analysis of variance in a 1918 article The Correlation Between Relatives on the Supposition of Mendelian Inheritance.^{[21]} His first application of the analysis of variance was published in 1921.^{[22]} Analysis of variance became widely known after being included in Fisher's 1925 book Statistical Methods for Research Workers.
See also
 ANOVA on ranks
 ANOVA–simultaneous component analysis
 AMOVA
 ANCOVA
 ANORVA
 MANOVA
 Mixed-design analysis of variance
Footnotes
 ^ Rosenbaum (2002, page 40) cites Section 5.7, Theorem 2.3 of Lehmann's Testing Statistical Hypotheses (1959)^{[Full citation needed]}.
 ^ Non-statisticians may be confused because another F-test is non-robust: when used to test the equality of the variances of two populations, the F-test is unreliable if there are deviations from normality (Lindman, 1974^{[page needed]}).
Notes
 ^ Hinkelmann and Kempthorne (2008)
 ^ Hinkelmann and Kempthorne (2008)
 ^ Moore and McCabe^{[Full citation needed]}
 ^ Anscombe (1948)
 ^ Kempthorne and Cox, Chapter 2^{[Full citation needed]}
 ^ Hinkelmann and Kempthorne (2008, Chapters 56)
 ^ Hinkelmann and Kempthorne (2008, Chapter 7 or 8)
 ^ Cox, Chapter 2^{[Full citation needed]}
 ^ Bailey (2008)
 ^ Hinkelmann and Kempthorne (2008, volume one, chapter 7)
 ^ Bailey^{[Full citation needed]}, chapter 1.14
 ^ Freedman^{[Full citation needed]}
 ^ Pierce, Block & Aguinis (2004, p. 918)
 ^ Levine & Hullett (2002)
 ^ Bortz (1999)^{[Full citation needed]}, p. 269f.
 ^ Bühner & Ziegler^{[Full citation needed]} (2009, p. 413f)
 ^ ^{a} ^{b} Tabachnick & Fidell (2007, p. 55)
 ^ ^{a} ^{b} Olejnik, S. & Algina, J. (2003). "Generalized Eta and Omega Squared Statistics: Measures of Effect Size for Some Common Research Designs". Psychological Methods 8 (4): 434–447. http://cps.nova.edu/marker/olejnik2003.pdf
 ^ Zhang (2011)
 ^ Gosset (1908)^{[Full citation needed]}
 ^ The Correlation Between Relatives on the Supposition of Mendelian Inheritance. Ronald A. Fisher. Transactions of the Royal Society of Edinburgh. 1918. (volume 52, pages 399–433)
 ^ On the "Probable Error" of a Coefficient of Correlation Deduced from a Small Sample. Ronald A. Fisher. Metron, 1: 3–32 (1921)
References
 Anscombe, F. J. (1948). "The Validity of Comparative Experiments". Journal of the Royal Statistical Society. Series A (General) 111 (3): 181–211. doi:10.2307/2984159. JSTOR 2984159. MR30181.
 Bailey, R. A. (2008). Design of Comparative Experiments. Cambridge University Press. ISBN 9780521683579. http://www.maths.qmul.ac.uk/~rab/DOEbook. Prepublication chapters are available online.
 Caliński, Tadeusz & Kageyama, Sanpei (2000). Block designs: A Randomization approach, Volume I: Analysis. Lecture Notes in Statistics. 150. New York: SpringerVerlag. ISBN 0387985786.
 Christensen, Ronald (2002). Plane Answers to Complex Questions: The Theory of Linear Models (Third ed.). New York: Springer. ISBN 0387953612.
 Cohen, Jacob (1992). "A power primer". Psychological Bulletin 112 (1): 155–159. doi:10.1037/0033-2909.112.1.155. PMID 19565683.
 Cohen, Jacob (1988). Statistical Power Analysis for the Behavioral Sciences (2nd ed.).
 Cox, David R. (1958). Planning of experiments
 Cox, David R. & Reid, Nancy M. (2000). The theory of design of experiments. (Chapman & Hall/CRC).
 Fisher, Ronald (1921). "Studies in Crop Variation. I. An examination of the yield of dressed grain from Broadbalk". Journal of Agricultural Science 11: 107–135. http://www.library.adelaide.edu.au/digitised/fisher/15.pdf.
 Freedman, David A. et al. Statistics, 4th edition (W.W. Norton & Company, 2007)
 Freedman, David A. (2005). Statistical Models: Theory and Practice, Cambridge University Press. ISBN 9780521671057.
 Hettmansperger, T. P. & McKean, J. W. (1998). Robust Nonparametric Statistical Methods. Kendall's Library of Statistics. 5 (First ed.). London: Edward Arnold. pp. xiv+467. ISBN 0340549378, 0471194794. MR1604954.
 Hinkelmann, Klaus & Kempthorne, Oscar (2008). Design and Analysis of Experiments. I and II (Second ed.). Wiley. ISBN 9780470385517.
 Olejnik, Stephen & Algina, James (2003). "Generalized Eta and Omega Squared Statistics: Measures of Effect Size for Some Common Research Designs". Psychological Methods 8 (4): 434–447. doi:10.1037/1082989X.8.4.434. PMID 14664681. http://cps.nova.edu/marker/olejnik2003.pdf.
 Kempthorne, Oscar (1979). The Design and Analysis of Experiments (Corrected reprint of (1952) Wiley ed.). Robert E. Krieger. ISBN 0882751050.
 Lentner, Marvin & Bishop, Thomas (1993). Experimental Design and Analysis (Second ed.). Blacksburg, VA: Valley Book Company. ISBN 096162552X.
 Levine, T. R. & Hullett, C. R. (2002). "Etasquared, partial etasquared, and misreporting of effect size in communication research". Human Communication Research, 28, 612625.
 Lindman, H. R. (1974). Analysis of variance in complex experimental designs. San Francisco: W. H. Freeman & Co. Hillsdale, NJ USA: Erlbaum.
 Rosenbaum, Paul R. (2002). Observational Studies (2nd ed.). New York: SpringerVerlag.
 Tabachnick, Barbara G. & Fidell, Linda S. (2007). Using Multivariate Statistics (5th ed.). Boston: Pearson International Edition.
 Wichura, Michael J. (2006). The coordinatefree approach to linear models. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge: Cambridge University Press. pp. xiv+199. ISBN 9780521868426, ISBN 0521868424. MR2283455.
 Zhang XHD (2011). Optimal HighThroughput Screening: Practical Experimental Design and Data Analysis for Genomescale RNAi Research. Cambridge University Press. ISBN 9780521734448.
External links
 SOCR ANOVA Activity and interactive applet.
 OneWay and TwoWay ANOVA in QtiPlot
 Examples of all ANOVA and ANCOVA models with up to three treatment factors, including randomized block, split plot, repeated measures, and Latin squares
 NIST/SEMATECH eHandbook of Statistical Methods, section 7.4.3: "Are the means equal?"