37  Conditions on the Error Term of the Repeated Measures ANOVA Model

In the previous chapter we developed a general model for the data generating process of a quantitative response as a function of a categorical predictor in the presence of blocking:

\[(\text{Response})_i = \sum_{j=1}^{k} \mu_j (\text{Group } j)_{i} + \sum_{m=2}^{b} \beta_m (\text{Block } m)_i + \varepsilon_i,\]

where

\[ \begin{aligned} (\text{Group } j)_i &= \begin{cases} 1 & \text{if the i-th observation is from Group } j \\ 0 & \text{otherwise} \end{cases} \\ (\text{Block } m)_i &= \begin{cases} 1 & \text{if the i-th observation is from Block } m \\ 0 & \text{otherwise} \end{cases} \end{aligned} \]

are indicator variables to capture the level of the factor and the block to which each observation belongs.

We also discussed a common method for estimating the parameters of this model from a sample — the method of least squares. However, if we are to construct a model for the sampling distribution of these estimates we must add some structure to the stochastic component \(\varepsilon\) in the model. In this chapter, we focus on the most common conditions we might impose and how those conditions impact the model for the sampling and null distributions (and therefore the computation of a confidence interval or p-value).
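The method of least squares can be sketched directly in terms of the indicator variables above. The following is a minimal illustration on a made-up design with two groups and two blocks, not the text's implementation; block 1 serves as the reference level, matching the \(m = 2, \ldots, b\) sum in the model:

```python
import numpy as np

# Hypothetical tiny randomized complete block design:
# 2 groups, 2 blocks, one observation per group/block cell.
# Design matrix columns: (Group 1), (Group 2), (Block 2);
# block 1 is the reference level, so it has no column.
X = np.array([
    [1, 0, 0],  # group 1, block 1
    [0, 1, 0],  # group 2, block 1
    [1, 0, 1],  # group 1, block 2
    [0, 1, 1],  # group 2, block 2
], dtype=float)
y = np.array([5.0, 7.0, 6.0, 8.0])  # responses; block 2 "bumps" each group by +1

# Method of least squares: minimize ||y - X theta||^2.
theta, *_ = np.linalg.lstsq(X, y, rcond=None)
mu1, mu2, beta2 = theta  # estimates of mu_1, mu_2, beta_2
```

Because these toy responses contain no noise, the least squares estimates recover \(\mu_1 = 5\), \(\mu_2 = 7\), and \(\beta_2 = 1\) exactly.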

37.1 Conditions on the Error Distribution

In our model for the data generating process, we incorporated a component \(\varepsilon\) to capture the noise within each block. Since the error is a random variable (stochastic element), we know it has a distribution. We typically assume a certain structure to this distribution.

37.1.1 Correctly Specified Model

The first condition we consider is the most important. It states that, for every value of the predictor, the average error is 0. We have actually already encountered this condition in another form — it is the inherent assumption built into the structure of our model, which states that any differences in the average response across the levels of the factor are similar across all blocks. Saying the average error is 0 for every value of the predictor is therefore equivalent to saying that the deterministic portion of our model for the data generating process is correctly specified. This is the same condition we introduced in Chapter 18.

Mean-0 Condition

The mean-0 condition states that the treatment differences are similar across all blocks. Even though we state this as a condition on the error terms, it is equivalent to saying that “the block effect is similar for all observations across treatment groups.”

Note

The mean-0 condition can be relaxed through the inclusion of an interaction term (see Definition 21.5), but this is beyond the scope of the text.

37.1.2 Independent Errors

The second condition we consider is that the noise attributed to one observed response for an individual is independent of the noise attributed to the observed response for any other individual. That is, the amount of error in any one observation is unrelated to the error in any other observations. This is the same condition we encountered in Chapter 10, Chapter 18, and Chapter 28.

Independence Condition

The independence condition states that the error in one observation is independent (see Definition 10.3) of the error in all other observations.

With just these first two conditions, we can use a bootstrap algorithm in order to model the sampling distribution of the least squares estimates of our parameters (see Appendix A). However, additional conditions are often considered.
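Appendix A describes the bootstrap algorithm itself; as a rough sketch of the idea (hypothetical data, and not necessarily the exact algorithm from Appendix A), one common scheme resamples the residuals with replacement, rebuilds responses, and refits the model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical design matrix and responses for a small block design
# (columns: two group indicators followed by one block indicator).
X = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 1], [0, 1, 1]], dtype=float)
y = np.array([5.2, 7.1, 5.9, 8.3])

theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ theta_hat

# Residual bootstrap: each iteration resamples residuals with
# replacement, forms a new response vector, and refits, giving one
# draw from the modeled sampling distribution of the estimates.
B = 2000
boot = np.empty((B, X.shape[1]))
for i in range(B):
    e_star = rng.choice(residuals, size=len(y), replace=True)
    y_star = X @ theta_hat + e_star
    boot[i], *_ = np.linalg.lstsq(X, y_star, rcond=None)

# Percentile interval for mu_1 from the bootstrap draws.
lower, upper = np.percentile(boot[:, 0], [2.5, 97.5])
```

The percentile interval at the end is one way to turn the bootstrap draws into a confidence interval for a single parameter.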

The idea of assuming independence may seem counterintuitive; this entire unit exists because we felt there was a correlation among the responses. However, this condition is stating that once we account for the correlation induced by the blocks through the incorporation of the block terms in the model for the data generating process, the remaining noise is now independent. We essentially partitioned out the correlated component, and what remains is now just independent noise.

37.1.3 Same Degree of Precision

The third condition that is typically placed on the distribution of the errors is that the variability of the errors is the same for all combinations of the predictors. Again, we encountered this condition in Chapter 18 and Chapter 28. Since our model includes both the group comparisons of interest and the block terms, this condition is violated if the response is more precise for one group/block combination than for another.

Constant Variance

Also called homoskedasticity, the constant variance condition states that the variability of the errors within each group is the same across all groups.

With this additional condition imposed, we are able to modify our bootstrap algorithm when constructing a model for the sampling distribution of the least squares estimates.

37.1.4 Specific Form of the Error Distribution

The fourth condition that is typically placed on the distribution of the errors is that the errors follow a Normal distribution, as discussed in Chapter 18. Here, we are assuming a particular structure on the distribution of the error population.

Normality

The normality condition states that the distribution of the errors follows the functional form of a Normal distribution (Definition 18.2).

Let’s think about what this condition means for the responses. Given the shape of the Normal distribution, imposing this condition (in addition to the other conditions) implies that some errors are positive and some are negative. This in turn implies that, within a block, some responses will be above average for their group and some will be below average for their group. Moreover, because the distribution of the response within each block and group is just a shifted version of the distribution of the errors, the response variable itself follows a Normal distribution within a particular block and group. While this is similar to the argument made in Chapter 28 in the context of the ANOVA model, it is often difficult to visualize for a repeated measures ANOVA. Recall that blocking is typically used to increase the power of a study; as a result, it is common that only one observation exists within each group and block combination. This makes it impossible to use the responses directly to visualize the shape of the distribution. In this respect, the situation resembles the linear regression setting discussed in Chapter 18.

With this last condition imposed, we can construct an analytic model for the sampling distribution of the least squares estimates. As in regression modeling, we are not required to impose all four conditions in order to obtain a model for the sampling distribution of the estimates. Historically, however, all four conditions have been routinely imposed in the scientific and engineering literature.
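Under all four conditions, the usual repeated measures ANOVA F statistic has an analytic null distribution. The sketch below carries out the standard sums-of-squares decomposition by hand on a small hypothetical data set (two treatments, three blocks — not the yogurt data):

```python
import numpy as np
from scipy import stats

# Hypothetical responses: rows are blocks, columns are treatment groups.
y = np.array([[1.0, 3.0],
              [2.0, 5.0],
              [3.0, 4.0]])
b, k = y.shape  # b = 3 blocks, k = 2 groups

grand = y.mean()
ss_treat = b * np.sum((y.mean(axis=0) - grand) ** 2)  # between-group
ss_block = k * np.sum((y.mean(axis=1) - grand) ** 2)  # between-block
ss_total = np.sum((y - grand) ** 2)
ss_error = ss_total - ss_treat - ss_block             # residual

df_treat = k - 1
df_error = (k - 1) * (b - 1)
f_stat = (ss_treat / df_treat) / (ss_error / df_error)

# Under the null, f_stat follows an F distribution with
# (df_treat, df_error) degrees of freedom.
p_value = stats.f.sf(f_stat, df_treat, df_error)
```

For these numbers the decomposition gives SS-treatment = 6, SS-block = 3, and SS-error = 1, so the observed F statistic is 12 on (1, 2) degrees of freedom.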

37.2 Conditions on the Block Effects

The blocking terms, and the associated \(\beta\) parameters, in the model for the data generating process were used to partition the variability in the response to account for the correlation among responses from the same block. In the previous chapter, we highlighted this when we stated that the \(\beta\) parameters really capture the subject-to-subject variability. Further, remember that we stated blocks should be viewed as a sample from some larger population. As a result, the blocking parameters themselves represent a sample from a population; they therefore have a distribution, on which we place conditions.

Note

The presence of this distribution is why blocks are referred to as “random effects” in the statistical literature when discussing a more general approach to addressing correlated responses.

The easiest way to discuss additional conditions on the blocking parameters is to think about each \(\beta\) as a “bump” attributed to that block. Think about a participant who is not a fan of frozen yogurt; then, regardless of which vendor supplied the yogurt being tasted, that participant’s taste ratings will tend to be “bumped” down compared to those of other participants.

37.2.1 Independent Block Effects

The first condition we consider is that the “bump” for one participant is unrelated to the “bump” for any other participant. Practically, one person’s taste for frozen yogurt is unaffected by the taste for frozen yogurt of anyone else.

We also impose the condition that the “bump” for a participant is unrelated to the amount of error in the response for that participant. That is, the error term must be independent of the blocking term.

Independence Condition

The independence condition among the blocks states that the blocks are independent of one another and that the blocks are independent of the error term.

37.2.2 Specific Form of the Distribution

The last condition that is typically placed on the distribution of the “bumps” is that the magnitude of these “bumps” follows a Normal distribution, as discussed in Chapter 18. Here, we are assuming a particular structure on the distribution of the blocks.

Normality

The normality condition states that the distribution of the block effects follows the functional form of a Normal distribution (Definition 18.2).

Note

The conditions on the block effects are much more technical than those placed on the error term. The statistical theory for such models is beyond the scope of this text, but they impact the model for the sampling distribution of the estimates in a similar way as the conditions on the error term do.

37.3 Classical Repeated Measures ANOVA Model

We have discussed several conditions we could place on the stochastic portion of the data generating process. Placing all conditions on the error term and blocking effects is what we refer to as the “Classical Repeated Measures ANOVA Model.”

Definition 37.1 (Classical Repeated Measures ANOVA Model) For a quantitative response and single categorical predictor with \(k\) levels in the presence of \(b\) blocks, the classical repeated measures ANOVA model assumes the following data generating process:

\[(\text{Response})_i = \sum_{j=1}^{k} \mu_j (\text{Group } j)_i + \sum_{m=2}^{b} \beta_m (\text{Block } m)_i + \varepsilon_i\]

where

\[ \begin{aligned} (\text{Group } j)_{i} &= \begin{cases} 1 & \text{if the i-th observation belongs to Group } j \\ 0 & \text{otherwise} \end{cases} \\ (\text{Block } m)_{i} &= \begin{cases} 1 & \text{if the i-th observation belongs to Block } m \\ 0 & \text{otherwise} \end{cases} \end{aligned} \]

are indicator variables and where

  1. The error in the response for one subject is independent of the error in the response for all other subjects.
  2. The variability in the error of the response is the same across all predictors.
  3. The errors follow a Normal distribution.
  4. Any differences between the groups are similar across all blocks. This results from the deterministic portion of the model for the data generating process being correctly specified and is equivalent to saying the error in the response, on average, takes a value of 0 for all predictors.
  5. The effect of a block on the response is independent of the effect of any other block on the response.
  6. The effect of a block on the response is independent of the error in the response for all subjects.
  7. The block effects follow a Normal distribution.

This is the default “repeated measures ANOVA” analysis implemented in the majority of statistical packages.
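One way to internalize Definition 37.1 is to simulate from it. The parameter values below are made up for illustration, and all \(b\) block effects are drawn with mean 0 — a slight re-parameterization of the model above, whose block sum runs from \(m = 2\):

```python
import numpy as np

rng = np.random.default_rng(42)

k, b = 3, 9                      # groups and blocks (as in the yogurt study)
mu = np.array([7.0, 6.8, 6.2])   # hypothetical group means
sigma_block = 1.5                # hypothetical std. dev. of the block effects
sigma_error = 1.0                # hypothetical std. dev. of the errors

# Conditions 5-7: block effects are independent draws from a Normal
# distribution, generated separately from (independent of) the errors.
beta = rng.normal(0.0, sigma_block, size=b)

# Conditions 1-4: errors are independent, mean-0, constant-variance Normal.
eps = rng.normal(0.0, sigma_error, size=(b, k))

# Data generating process: one response per block/group combination.
response = mu[np.newaxis, :] + beta[:, np.newaxis] + eps
```

Each row of `response` is one simulated block; responses within a row share the same “bump” `beta`, which is what induces the within-block correlation.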

Warning

A “hidden” (typically unstated but should not be ignored) condition is that the sample is representative of the underlying population. In the one sample case (Chapter 10), we referred to this as the errors being “identically distributed.” We no longer use the “identically distributed” language for technical reasons; however, we still require that the sample be representative of the underlying population.

We note that a “repeated measures ANOVA” does not require all four conditions on the error distribution imposed in Definition 37.1. Placing all four conditions on the error term results in a specific analytic model for the sampling distribution of the least squares estimates; changing the conditions changes the way we model the sampling distribution.

Big Idea

The model for the sampling distribution of a statistic is determined by the conditions you place on the stochastic portion of the model for the data generating process.

37.4 Imposing the Conditions

Let’s return to our model for the yogurt taste ratings as a function of the vendor while accounting for the correlation induced due to the repeated measures across participants given in Equation 36.1:

\[ \begin{aligned} (\text{Taste Rating})_i &= \mu_1 (\text{East Side})_i + \mu_2 (\text{Name Brand})_i + \mu_3 (\text{South Side})_i \\ &\qquad + \beta_2 (\text{Participant 2})_i + \beta_3 (\text{Participant 3})_i + \beta_4 (\text{Participant 4})_i \\ &\qquad + \beta_5 (\text{Participant 5})_i + \beta_6 (\text{Participant 6})_i + \beta_7 (\text{Participant 7})_i \\ &\qquad + \beta_8 (\text{Participant 8})_i + \beta_9 (\text{Participant 9})_i + \varepsilon_i, \end{aligned} \]

where we use the same indicator variables defined in Chapter 36. We were interested in the following research question:

Does the average taste rating differ for at least one of the three yogurt vendors?

This was captured by the following hypotheses:

\(H_0: \mu_1 = \mu_2 = \mu_3\)
\(H_1: \text{at least one } \mu_j \text{ differs}.\)

Using the method of least squares, we constructed point estimates of the parameters in the model. If we are willing to assume the data is consistent with the conditions for the classical repeated measures ANOVA model, we are able to model the sampling distribution of these estimates and therefore construct confidence intervals. Table 37.1 summarizes the results of fitting the model described in Equation 36.1 using the data available from the Frozen Yogurt Case Study. In addition to the least squares estimates, it also contains the standard error (see Definition 6.4) of each statistic, quantifying the variability in the estimates. Finally, there is a 95% confidence interval for each parameter.

Table 37.1: Estimated parameters in a model for the taste ratings of yogurt from three vendors using data from a randomized complete block design with 9 blocks.
Term Estimate Standard Error Lower 95% CI Upper 95% CI
East Side Yogurt 7.000 1.358 4.121 9.879
Name Brand 6.778 1.358 3.899 9.657
South Side Yogurt 6.222 1.358 3.343 9.101
Participant 2 1.000 1.737 -2.683 4.683
Participant 3 1.000 1.737 -2.683 4.683
Participant 4 1.000 1.737 -2.683 4.683
Participant 5 0.333 1.737 -3.350 4.016
Participant 6 -2.000 1.737 -5.683 1.683
Participant 7 2.667 1.737 -1.016 6.350
Participant 8 0.000 1.737 -3.683 3.683
Participant 9 -2.000 1.737 -5.683 1.683
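The intervals in Table 37.1 appear consistent with the usual t-based interval, estimate \(\pm\ t^{*} \times\) (standard error), with residual degrees of freedom \(27 - 11 = 16\) (27 observations minus 11 estimated parameters). As a check on the first row:

```python
from scipy import stats

# Residual degrees of freedom: 27 observations minus 11 estimated
# parameters (3 group means + 8 block effects).
df = 27 - 11

estimate, std_error = 7.000, 1.358   # East Side Yogurt row of Table 37.1
t_star = stats.t.ppf(0.975, df)      # critical value for a 95% interval

lower = estimate - t_star * std_error
upper = estimate + t_star * std_error
print(round(lower, 3), round(upper, 3))  # → 4.121 9.879
```

This reproduces the 95% confidence interval reported for East Side Yogurt.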

We note that while these parameter estimates are somewhat interesting, none of them address our question directly, and none of them estimate the overall average taste rating for a particular vendor. We must remember that the first three estimates in Table 37.1 are really estimating the average rating for only the first participant.

Note

Often the parameter estimates in the repeated measures block design are not of interest.

37.5 Recap

We have covered a lot of ground in this chapter, and it is worth taking a moment to summarize the big ideas. In order to compare the mean response in each group in the presence of blocking, we took a step back and modeled the data generating process. Such a model consists of two components: a deterministic component explaining the response as a function of the predictor and the blocks, and a stochastic component capturing the noise in the system.

Certain conditions are placed on the distribution of the noise in our model as well as on the distribution of the block effects. With a full set of conditions (classical repeated measures ANOVA model), we are able to model the sampling distribution of the least squares estimates analytically. We can also construct an empirical model for the sampling distribution of the least squares estimates assuming the data is consistent with fewer conditions.

In general, the more conditions we are willing to impose on the data generating process, the more tractable the analysis; however, the most important aspect is that the data come from a process which is consistent with the conditions we impose, which is discussed in Chapter 39.