36  Modeling Correlated Responses

Our question of interest in this unit is the same as that in our previous unit:

\[H_0: \theta_1 = \theta_2 = \dotsb = \theta_k \qquad \text{vs.} \qquad H_1: \text{At least one } \theta_j \text{ differs}.\]

As this is the same question associated with an Analysis of Variance (ANOVA), it seems reasonable to begin with the model for describing a quantitative response as a function of a categorical predictor described in Chapter 27. In this chapter, we extend this model to account for the correlation between responses.

36.1 Statistical Model for Correlated Responses

For the Frozen Yogurt Case Study, we are comparing the average taste rating for different vendors. We might consider the following model for the data generating process, which follows the form introduced in the previous unit (Equation 27.2):

\[(\text{Taste Rating})_i = \mu_1 (\text{East Side})_i + \mu_2 (\text{Name Brand})_i + \mu_3 (\text{South Side})_i + \varepsilon_i\]

where

\[ \begin{aligned} (\text{East Side})_i &= \begin{cases} 1 & \text{if i-th rating associated with east side yogurt vendor} \\ 0 & \text{otherwise} \end{cases} \\ (\text{Name Brand})_i &= \begin{cases} 1 & \text{if i-th rating associated with name brand yogurt vendor} \\ 0 & \text{otherwise} \end{cases} \\ (\text{South Side})_i &= \begin{cases} 1 & \text{if i-th rating associated with south side yogurt vendor} \\ 0 & \text{otherwise} \end{cases} \end{aligned} \]

are indicator variables to capture the various factor levels. In order to use this model, the first condition we imposed on the error term was that the error in the rating for one observation is independent of the error in the rating for all other observations. In fact, this condition is required to implement any form of inference (bootstrapping or an analytical approach). However, for the Frozen Yogurt Case Study, we know this condition is violated. If the errors were independent of one another, it would imply the responses were independent of one another. But, since each participant rated each of the three vendors, the ratings from the same participant are related.

Consider a participant who loves frozen yogurt and tends to give higher ratings than other participants. This individual would tend to give a higher than average rating regardless of the vendor. That is, the error (which represents the difference between an observed rating and the average rating for the corresponding vendor) for this participant’s response for the Name Brand vendor would be a large positive value; similarly, the error for this participant’s response for the East Side vendor would also be a large positive value. Knowing the error for one of the participant’s responses would therefore help us predict the error for another of their responses. This indicates a dependency. However, knowing this individual’s error is large for one vendor tells us nothing about how the next participant’s error term will behave.
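To see this dependence numerically, the following is a minimal sketch in Python using entirely hypothetical numbers (not the case-study ratings): each simulated participant contributes a shared “bump” to every one of their ratings, so the errors from the same participant are correlated across vendors while errors from different participants are not.

```python
import numpy as np

rng = np.random.default_rng(20)

n_participants, n_vendors = 9, 3
vendor_means = np.array([3.0, 4.0, 3.5])   # hypothetical vendor averages

# Each participant has an inherent "bump" shared by all of their ratings.
bumps = rng.normal(0, 1.0, size=n_participants)

# Observed rating = vendor mean + participant bump + unexplained noise.
ratings = (vendor_means[None, :] + bumps[:, None]
           + rng.normal(0, 0.3, size=(n_participants, n_vendors)))

# Errors relative to the vendor means (the epsilon of the one-way model).
errors = ratings - vendor_means[None, :]

# Errors from the same participant are strongly correlated across vendors.
print(np.corrcoef(errors[:, 0], errors[:, 1])[0, 1])
```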

Note

Violations of the independence condition can occur in clusters, which is what happens when blocks are present. Specifically, while observations from the same block are dependent on one another (correlated), observations from different blocks can remain independent.

At this point in the text, it is hopefully not a surprise that the way to address the correlated error terms is to partition the variability in the response further. Essentially, the blocking in the study points to another reason for the variation in the observed taste ratings: observations from the same participant will be similar. We want to tease this participant-to-participant variation out of the unexplained variation in the ratings, and that is done by adding additional terms to the model for the data generating process.

For the Frozen Yogurt Case Study, consider the following model for the data generating process:

\[ \begin{aligned} (\text{Taste Rating})_i &= \mu_1 (\text{East Side})_i + \mu_2 (\text{Name Brand})_i + \mu_3 (\text{South Side})_i \\ &\qquad + \beta_2 (\text{Participant 2})_i + \beta_3 (\text{Participant 3})_i + \beta_4 (\text{Participant 4})_i \\ &\qquad + \beta_5 (\text{Participant 5})_i + \beta_6 (\text{Participant 6})_i + \beta_7 (\text{Participant 7})_i \\ &\qquad + \beta_8 (\text{Participant 8})_i + \beta_9 (\text{Participant 9})_i + \varepsilon_i \end{aligned} \tag{36.1}\]

where the indicators for the vendors were previously described and

\[(\text{Participant j})_i = \begin{cases} 1 & \text{i-th observation taken from Participant j} \\ 0 & \text{otherwise} \end{cases}\]

is an indicator of whether the observation comes from a particular participant. In this model, the \(\beta\) parameters capture the “bump” in each participant’s ratings that is due to the participant’s inherent feeling towards frozen yogurt. That is, every observation that is associated with the same participant will share this “bump,” capturing the similarity between observations from the same participant.

It may at first appear as if we forgot the indicator for Participant 1; however, it is not needed. Just as with any model, it is often easiest to see what is happening by thinking about the form of the model under specific cases. How do we describe observations (remember there is more than one) for Participant 2? The above model for the data generating process states that the average rating for the East Side vendor from Participant 2 is given by \(\mu_1 + \beta_2\). Similarly, the average rating for the East Side vendor from Participant 6 is given by \(\mu_1 + \beta_6\). What about Participant 1? Well, Participant 1 would have a 0 for every “participant indicator” variable in the model; therefore, the above model states that the average rating for the East Side vendor from Participant 1 is simply \(\mu_1\).

This reference coding affects how we interpret our parameters. In our model, \(\mu_1\) is no longer the average rating given to East Side Yogurt; it is the average rating given to East Side Yogurt by the first participant. This is the same concept as the “reference group” (see Definition 21.2) discussed in Chapter 21.
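As a concrete illustration of this coding, here is a brief sketch in Python; the data frame, column names, and ratings are hypothetical, not the case-study data. It builds an indicator column for every vendor but for Participants 2 and beyond only, leaving Participant 1 as the reference group.

```python
import pandas as pd

# Hypothetical long-format data: one row per (participant, vendor) rating.
df = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2],
    "vendor": ["East Side", "Name Brand", "South Side"] * 2,
    "rating": [4.1, 3.6, 3.9, 2.8, 2.2, 2.5],
})

# One indicator per vendor (all k levels, since the model has no intercept).
vendor_ind = pd.get_dummies(df["vendor"], prefix="vendor")

# Indicators for Participants 2 through b only; Participant 1 is the reference.
participant_ind = pd.get_dummies(df["participant"], prefix="participant",
                                 drop_first=True)

# Design matrix matching Equation 36.1 (restricted to these two participants).
X = pd.concat([vendor_ind, participant_ind], axis=1)
print(X)
```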

Note

If there are \(b\) blocks, we need only include \(b-1\) indicator variables and corresponding parameters in the model for the data generating process in order to capture all the blocks. The remaining block acts as the “reference group,” and its mean responses are captured directly by the parameters associated with the factor levels under study.

This may seem to affect our question of interest. After all, the hypothesis

\[H_0: \mu_1 = \mu_2 = \mu_3\]

says that the “average taste rating for Participant 1 is the same for all vendors” instead of the “average taste rating across all individuals is the same for all vendors.” The latter is the hypothesis we want to test, but we have the parameters specified in terms of the first participant only. This “problem” resolves once we recognize an inherent assumption of our model. Notice the difference between the average ratings for the East Side vendor and the Name Brand vendor for Participant 1 is

\[\mu_1 - \mu_2.\]

And, notice the difference between the average ratings for the East Side vendor and the Name Brand vendor for Participant 2 is

\[\left(\mu_1 + \beta_2\right) - \left(\mu_2 + \beta_2\right) = \mu_1 - \mu_2.\]

In fact, since the “bump” for every observation from Participant \(j\) is always \(\beta_j\), these “bumps” cancel out when comparing averages across vendors. Therefore, if the mean response is the same for all vendors for one participant, it must be the same for all vendors for every participant, and hence the same for all vendors when averaged across all individuals (see Appendix B)! In context, this means that all individuals must share the same preferences for frozen yogurt vendors. This is a feature of the model, and we will discuss it further in the next chapter.

Big Idea

The model we introduce for blocking assumes that any difference between the levels of a factor is similar across all blocks.

The model for the data generating process we have been discussing essentially says there are three reasons that the taste ratings differ from one observation to another:

  1. Ratings applied to different vendors may differ,
  2. Ratings from different individuals for the same vendor may differ, and
  3. Even within the same individual, ratings for cups of yogurt from the same vendor may differ due to unexplained variability.

In general, this type of model, often described as a “Repeated Measures ANOVA” model, partitions the variability in the response into three general categories: differences between groups, differences between blocks, and differences within blocks.

Repeated Measures ANOVA Model

For a quantitative response and a single categorical predictor (also known as a factor) with \(k\) levels in the presence of \(b\) blocks, the repeated measures ANOVA model is

\[(\text{Response})_i = \sum_{j = 1}^{k} \mu_j (\text{Group } j)_i + \sum_{m = 2}^{b} \beta_m (\text{Block } m)_i + \varepsilon_i \tag{36.2}\]

where

\[ \begin{aligned} (\text{Group } j)_i &= \begin{cases} 1 & \text{i-th unit belongs to group } j \\ 0 & \text{otherwise} \end{cases} \\ (\text{Block } m)_i &= \begin{cases} 1 & \text{i-th unit belongs to block } m \\ 0 & \text{otherwise} \end{cases} \end{aligned} \]

are indicator variables capturing whether a unit belongs to the \(j\)-th group and \(m\)-th block, respectively; and, \(\mu_1, \mu_2, \dotsc, \mu_k\) and \(\beta_2, \beta_3, \dotsc, \beta_b\) are the parameters governing the model for the data generating process.

This model assumes any differences between groups are similar across all blocks.
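As one possible way to estimate such a model in practice, the following sketch assumes a long-format data frame with hypothetical columns named rating, vendor, and participant and fits Equation 36.2 by ordinary least squares via statsmodels. Removing the intercept yields one \(\mu_j\) per group, while the block indicators use the first participant as the reference.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per (participant, vendor) rating.
df = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "vendor": ["East Side", "Name Brand", "South Side"] * 3,
    "rating": [4.1, 3.6, 3.9, 2.8, 2.2, 2.5, 3.4, 3.0, 3.3],
})

# The factor (vendor) and the block (participant) both enter as categorical
# predictors; "- 1" removes the intercept so that one mean is estimated per
# vendor, mirroring the mu_j parameters in Equation 36.2.
fit = smf.ols("rating ~ C(vendor) + C(participant) - 1", data=df).fit()
print(fit.params)
```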

In the past, the stochastic portion of the model, \(\varepsilon\), captured the subject-to-subject variability. It no longer plays that role here. It now captures the variability in observations within the same block. That is, it captures the fact that if we repeatedly taste the same yogurt, we might rate it differently each time because of our mood or some other external factor that we have not captured. The subject-to-subject variability is instead captured by the \(\beta\) parameters in the model.

Big Idea

In a model without repeated measures (blocks), the error term captures the subject-to-subject variability. In a model with repeated measures, the error term captures the variability between observations within the same block.

There is something else that is unique about the repeated measures ANOVA model. We do not really care about all the parameters in the model. Our question of interest is based on the parameters \(\mu_1, \mu_2, \mu_3\). We would never be interested in testing something of the form

\[H_0: \beta_2 = \beta_3 \qquad \text{vs.} \qquad H_1: \beta_2 \neq \beta_3\]

as this would be comparing Participant 2 to Participant 3. Such a comparison (do Participant 2’s yogurt ratings differ from Participant 3’s?) is not useful. Said another way, we did not put the parameters \(\beta_2, \dotsc, \beta_9\) into the model because they help us address a particular research objective; instead, we put them in the model because they capture the relationship among responses from the same participant. This is the difference between factors and blocks.

Note

The statistical theory underlying models that generalize the repeated measures ANOVA model makes use of the terms “fixed effect” and “random effect” instead of factor and block. These more technical terms allow the model to generalize to a host of situations not covered by the repeated measures ANOVA model. For our purposes, however, it is sufficient to differentiate between factors and blocks.

Consider applying the questions listed at the end of Chapter 34 for distinguishing between a factor and a block. Notice that if we were to repeat the study, we would use the same three vendors, since they are a fundamental part of the question. However, we would not need to use the same participants in the sample; we would be satisfied with any random sample from the population. So, the values “East Side Yogurt,” “South Side Yogurt,” and “Name Brand” (at least, the three vendors these represent) are of specific interest. However, we do not care about “Participant 2” and “Participant 3.” These can be any two individuals from the population. Therefore, for the Frozen Yogurt Case Study, the vendor is the factor of interest, while the participant is the block term.

Big Idea

The parameters and corresponding indicator variables capturing the blocking are placed in the model for the data generating process to account for the correlation between responses.

(Optional) Comparison of Repeated Measures ANOVA to General Linear Regression Model

Notice that Equation 36.2 has a very similar form to Equation 21.3. The primary difference is the presence of an intercept term in Equation 21.3. Each indicator acts as a predictor in the model for the data generating process. If we were to add an intercept to Equation 36.2 and then remove one of the indicator variables used to distinguish the factor of interest, we would fall completely under the general linear regression model framework. This means that, as we move forward, we can adopt results established for the general linear regression model.

The parameters in Equation 36.2 can be estimated using the method of least squares (Definition 17.2). We must keep in mind that these parameters do not correspond directly to the average response observed in each group. As a result, the sample mean from each group is often reported as well.
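To make that distinction concrete, here is a rough sketch (again with hypothetical ratings) that computes the least squares estimates for Equation 36.2 directly from the design matrix and contrasts them with the raw sample mean for each vendor.

```python
import numpy as np
import pandas as pd

# Hypothetical ratings: two participants, three vendors (not the case-study data).
df = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2],
    "vendor": ["East Side", "Name Brand", "South Side"] * 2,
    "rating": [4.1, 3.6, 3.9, 2.8, 2.2, 2.5],
})

# Design matrix for Equation 36.2: all vendor indicators, Participant 1 as reference.
X = pd.concat([pd.get_dummies(df["vendor"], prefix="vendor"),
               pd.get_dummies(df["participant"], prefix="participant",
                              drop_first=True)], axis=1)

# Least squares estimates of mu_1, mu_2, mu_3 and beta_2.
beta_hat, *_ = np.linalg.lstsq(X.to_numpy(dtype=float),
                               df["rating"].to_numpy(), rcond=None)
print(dict(zip(X.columns, beta_hat.round(2))))

# The mu_j parameters describe Participant 1's average rating for each vendor,
# not the overall vendor averages, so the sample mean for each group is
# reported alongside the fitted model.
print(df.groupby("vendor")["rating"].mean())
```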

Just as before, while point estimates are helpful, inference requires that we quantify the variability in our estimates. And, just as before, we need to distinguish among the model for the data generating process, the model for the sampling distribution of the parameter estimates, and the model for the null distribution of a standardized statistic. And, just as before, to move from the model for the data generating process to a model for the sampling distribution of the parameter estimates, we impose conditions on the stochastic component.