Appendix C — Glossary
C.1 Five Fundamental Ideas of Inference
This text revolves around five fundamental ideas of inference. They are collected here for quick reference, along with a link to where each was first introduced in the text.
A research question can often be framed in terms of a parameter that characterizes the population. Framing the question should then guide our analysis.
If data is to be useful for drawing conclusions about the population, a process referred to as drawing inference, proper data collection is crucial. Randomization can play an important role in ensuring that a sample is representative and that inferential conclusions are appropriate.
The use of data for decision making requires that the data be summarized and presented in ways that address the question of interest and represent the variability present.
Variability is inherent in any process, and as a result, our estimates are subject to sampling variability. However, these estimates often vary across samples in a predictable way; that is, they have a distribution that can be modeled.
With a model for the distribution of a statistic under a proposed model, we can quantify the likelihood of an observed sample under that proposed model. This allows us to draw conclusions about the corresponding parameter, and therefore the population, of interest.
C.2 Distributional Quartet
This text refers to what we call the “distributional quartet” — the four key distributions that are central to nearly any analysis. These are introduced early in the text and are:
- The distribution of the population; this characterizes the pattern of variability of a variable across individual units in the population. While this is not directly observed, we sometimes posit a model for this distribution.
- The distribution of the sample; this characterizes the pattern of variability of a variable across individual units in the sample. This is what we summarize (graphically/numerically) using the available data.
- The sampling distribution of the statistic; this characterizes the pattern of variability of a statistic across repeated samples. While this is not directly observed, we model it by applying conditions on the stochastic portion of the model for the data generating process.
- The null distribution of an (often standardized) statistic; this is the sampling distribution of a statistic when a specified null hypothesis is enforced.
C.3 Models for the Data Generating Process
This text takes a modeling approach to inference. The following models are introduced in the text; each model is presented with a link to where the model was fully defined in the text.
In general, given a quantitative response variable and no predictors, our model for the data generating process is
\[(\text{Response})_i = \mu + \varepsilon_i\]
where \(\mu\) represents the average response in the population, the parameter of interest.
For a quantitative response and a quantitative predictor, the general form of the simple linear regression model is
\[(\text{Response})_i = \beta_0 + \beta_1 (\text{Predictor})_i + \varepsilon_i\]
where \(\beta_0\) and \(\beta_1\) are parameters governing the model for the data generating process.
For a quantitative response and one or more predictors, the general form of the linear regression model is
\[ \begin{aligned} (\text{Response})_i &= \beta_0 + \beta_1 (\text{Predictor 1})_i + \beta_2 (\text{Predictor 2})_i + \dotsb + \beta_p (\text{Predictor } p)_i + \varepsilon_i \\ &= \beta_0 + \sum_{j=1}^{p} \beta_j (\text{Predictor } j)_i + \varepsilon_i \end{aligned} \]
where \(\beta_j\) for \(j = 0, 1, 2, \dotsc, p\) are the \(p + 1\) parameters governing the model for the data generating process.
For a quantitative response and a single categorical predictor (also known as a factor) with \(k\) levels, the ANOVA model is
\[(\text{Response})_i = \sum_{j = 1}^{k} \mu_j (\text{Group } j)_i + \varepsilon_i\]
where
\[(\text{Group } j)_i = \begin{cases} 1 & \text{i-th unit belongs to group } j \\ 0 & \text{otherwise} \end{cases}\]
is an indicator variable capturing whether a unit belongs to the \(j\)-th group and \(\mu_1, \mu_2, \dotsc, \mu_k\) are the parameters governing the model for the data generating process.
For a quantitative response and a single categorical predictor (also known as a factor) with \(k\) levels in the presence of \(b\) blocks, the repeated measures ANOVA model is
\[(\text{Response})_i = \sum_{j = 1}^{k} \mu_j (\text{Group } j)_i + \sum_{m = 2}^{b} \beta_m (\text{Block } m)_i + \varepsilon_i\]
where
\[ \begin{aligned} (\text{Group } j)_i &= \begin{cases} 1 & \text{i-th unit belongs to group } j \\ 0 & \text{otherwise} \end{cases} \\ (\text{Block } m)_i &= \begin{cases} 1 & \text{i-th unit belongs to block } m \\ 0 & \text{otherwise} \end{cases} \end{aligned} \]
are indicator variables capturing whether a unit belongs to the \(j\)-th group and \(m\)-th block, respectively; and, \(\mu_1, \mu_2, \dotsc, \mu_k\) and \(\beta_2, \beta_3, \dotsc, \beta_b\) are the parameters governing the model for the data generating process.
This model assumes any differences between groups are similar across all blocks.
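To make the role of the indicator variables concrete, the following sketch builds the design matrix for the ANOVA model above and obtains the least squares estimates in Python; the response values, group labels, and use of `numpy.linalg.lstsq` are illustrative assumptions rather than anything prescribed by the text.

```python
import numpy as np

# Hypothetical data: a quantitative response measured on 9 units in k = 3 groups.
response = np.array([4.1, 3.8, 5.2, 6.0, 5.7, 6.3, 7.9, 8.2, 7.5])
group = np.array([1, 1, 1, 2, 2, 2, 3, 3, 3])

# Indicator variables (Group j)_i: column j is 1 when the i-th unit
# belongs to group j and 0 otherwise.
X = np.column_stack([(group == j).astype(float) for j in (1, 2, 3)])

# Least squares estimates of mu_1, mu_2, mu_3; with this coding they are
# simply the sample mean response within each group.
mu_hat, *_ = np.linalg.lstsq(X, response, rcond=None)
print(mu_hat)
print([response[group == j].mean() for j in (1, 2, 3)])  # same values
```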
C.4 Glossary
The following key terms were defined in the text; each term is presented with a link to where the term was first encountered in the text.
- Alternative Hypothesis (Definition 3.10)
- The statement (or theory) about the parameter capturing what we would like to provide evidence for; this is the opposite of the null hypothesis. This is denoted \(H_1\) or \(H_a\), read “H-one” and “H-A” respectively.
- Average (Definition 5.2)
- Also known as the “mean,” this measure of location represents the balance point for the distribution. If \(x_i\) represents the \(i\)-th value of the variable \(x\) in the sample, the sample mean is typically denoted by \(\bar{x}\).
For a sample of size \(n\), it is computed by \[\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i.\]
When referencing the average for a population, the mean is also called the “Expected Value,” and is often denoted by \(\mu\).
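As a quick illustration of the formula above, here is a minimal Python sketch (the data values are made up):

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

# Sample mean by the formula: (1/n) times the sum of the x_i.
xbar_formula = x.sum() / len(x)
print(xbar_formula, x.mean())  # both 5.0
```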
- Between Group Variability (Definition 29.1)
- When comparing a quantitative response across groups, the between group variability is the variability in the average response from one group to another.
- Bias (Definition 4.1)
- A set of measurements is said to be biased if they are consistently too high (or too low). Similarly, an estimate of a parameter is said to be biased if it is consistently too high (or too low).
- Blocking (Definition 25.5)
- Blocking is a way of minimizing the variability contributed by an inherent characteristic that results in dependent observations. In some cases, the blocks are the units of observation, which are sampled from a larger population, and multiple observations are taken on each unit. In other cases, the blocks are formed by grouping the units of observation according to an inherent characteristic; in these cases that shared characteristic can be thought of as having a value that was sampled from a larger population.
In both cases, the observed blocks can be thought of as a random sample; within each block, we have multiple observations, and the observations from the same block are more similar than observations from different blocks.
- Bootstrapping (Definition 6.3)
- A method of modeling the sampling distribution by repeatedly resampling from the original data.
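A minimal sketch of the idea in Python, with made-up data, a fixed seed, and 5000 resamples chosen purely for illustration; the percentile interval at the end is one way to turn the bootstrap distribution into a confidence interval (defined later in this glossary).

```python
import numpy as np

rng = np.random.default_rng(1)
sample = rng.normal(loc=10, scale=3, size=20)   # hypothetical observed sample

# Resample the original data with replacement many times, recomputing the
# statistic (here, the sample mean) each time to model its sampling distribution.
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(5000)
])

# A 95% bootstrap percentile interval for the population mean.
print(np.percentile(boot_means, [2.5, 97.5]))
```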
- Categorical Variable (Definition 1.5)
- Also called a “qualitative variable,” a measurement on a subject which denotes a grouping or categorization.
- Classical ANOVA Model (Definition 28.1)
- For a quantitative response and single categorical predictor with \(k\) levels, the classical ANOVA model assumes the following data generating process:
\[(\text{Response})_i = \sum_{j=1}^{k} \mu_j (\text{Group } j)_i + \varepsilon_i\]
where
\[ (\text{Group } j)_{i} = \begin{cases} 1 & \text{if i-th observation belongs to group } j \\ 0 & \text{otherwise} \end{cases} \]
are indicator variables and where
- The error in the response for one subject is independent of the error in the response for all other subjects.
- The variability in the error of the response within each group is the same across all groups.
- The errors follow a Normal Distribution.
This is the default “ANOVA” analysis implemented in the majority of statistical packages.
- Classical Regression Model (Definition 18.3)
- For a quantitative response and single predictor, the classical regression model assumes the following data generating process:
\[(\text{Response})_i = \beta_0 + \beta_1 (\text{Predictor})_{i} + \varepsilon_i\]
where
- The error in the response has a mean of 0 for all values of the predictor.
- The error in the response for one subject is independent of the error in the response for all other subjects.
- The variability in the error of the response is the same for all values of the predictor.
- The errors follow a Normal Distribution.
This is the default “regression” analysis implemented in the majority of statistical packages.
- Classical Repeated Measures ANOVA Model (Definition 37.1)
- For a quantitative response and single categorical predictor with \(k\) levels in the presence of \(b\) blocks, the classical repeated measures ANOVA model assumes the following data generating process:
\[(\text{Response})_i = \sum_{j=1}^{k} \mu_j (\text{Group } j)_i + \sum_{m=2}^{b} \beta_m (\text{Block } m)_i + \varepsilon_i\]
where
\[ \begin{aligned} (\text{Group } j)_{i} &= \begin{cases} 1 & \text{if i-th observation belongs to group } j \\ 0 & \text{otherwise} \end{cases} \\ (\text{Block } m)_{i} &= \begin{cases} 1 & \text{if i-th observation belongs to block } m \\ 0 & \text{otherwise} \end{cases} \end{aligned} \]
are indicator variables and where
- The error in the response for one subject is independent of the error in the response for all other subjects.
- The variability in the error of the response is the same for all values of the predictors.
- The errors follow a Normal distribution.
- Any differences between the groups are similar across all blocks. This results from the deterministic portion of the model for the data generating process being correctly specified and is equivalent to saying the error in the response, on average, takes a value of 0 for all values of the predictors.
- The effect of a block on the response is independent of the effect of any other block on the response.
- The effect of a block on the response is independent of the error in the response for all subjects.
- The block effects follow a Normal distribution.
This is the default “repeated measures ANOVA” analysis implemented in the majority of statistical packages.
- Codebook (Definition 1.7)
- Also called a “data dictionary,” these provide complete information regarding the variables contained within a dataset.
- Confidence Interval (Definition 6.5)
- An interval (range of values) estimate of a parameter that incorporates the variability in the statistic. The process of constructing a \(k\)% confidence interval results in these intervals containing the parameter of interest in \(k\)% of repeated studies. The value of \(k\) is called the confidence level.
- Confounding (Definition 4.6)
- When the effect of a variable on the response is misrepresented due to the presence of a third, potentially unobserved, variable known as a confounder.
- Controlled Experiment (Definition 4.5)
- A study in which each subject is randomly assigned to one of the groups being compared in the study.
- Correlation Coefficient (Definition 16.1)
- A numerical measure of the strength and direction of the linear relationship between two quantitative variables.
The classical Pearson Correlation Coefficient \(r\) is given by the following formula:
\[r = \frac{\sum_{i=1}^{n} \left(x_i - \bar{x}\right)\left(y_i - \bar{y}\right)}{\sqrt{\sum_{i=1}^n \left(x_i - \bar{x}\right)^2 \sum_{i=1}^n \left(y_i - \bar{y}\right)^2}}\]
where \(\bar{x}\) and \(\bar{y}\) represent the sample means of the predictor and response, respectively.
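For illustration, the following Python sketch evaluates the formula above on made-up paired data and checks it against `numpy.corrcoef`.

```python
import numpy as np

# Hypothetical paired measurements (predictor x, response y).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Pearson correlation coefficient from the formula above.
r_formula = np.sum((x - x.mean()) * (y - y.mean())) / np.sqrt(
    np.sum((x - x.mean()) ** 2) * np.sum((y - y.mean()) ** 2)
)

print(r_formula, np.corrcoef(x, y)[0, 1])  # the two values agree
```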
- Degrees of Freedom (Definition 19.5)
- A measure of the flexibility in a sum of squares term; when a sum of squares is divided by the corresponding degrees of freedom, the result is a variance term.
- Deterministic Process (Definition 10.1)
- A process for which the output is completely determined by the input(s). That is, the output can be determined with certainty.
- Distribution (Definition 3.3)
- The pattern of variability corresponding to a set of values.
- Distribution of the Population (Definition 5.9)
- The pattern of variability in values of a variable at the population level. Generally, this is impossible to know, but we might model it.
- Distribution of the Sample (Definition 5.6)
- The pattern of variability in the observed values of a variable.
- Error Sum of Squares (Definition 19.3)
- The Error Sum of Squares, abbreviated SSE and sometimes referred to as the Residual Sum of Squares, is given by
\[SSE = \sum_{i=1}^{n} \left[(\text{Response})_i - (\text{Predicted Mean Response})_i\right]^2\]
where the predicted mean response is computed using the least squares estimates.
- Estimation (Definition 3.7)
- Using the sample to approximate the value of a parameter from the underlying population.
- Extrapolation (Definition 18.1)
- Using a model to predict outside of the region for which data is available.
- Factor (Definition 24.1)
- Also referred to as the “treatment” in some settings, a factor is a categorical predictor. The categories represented by this categorical variable are called “levels.”
- Frequency (Definition 3.4)
- The number of observations in a sample falling into a particular group (level) defined by a categorical variable.
- Hypothesis Testing (Definition 3.8)
- Using a sample to determine if the data is consistent with a working theory or if there is evidence to suggest the data is not consistent with the theory.
- Identically Distributed (Definition 10.4)
- A set of random variables is said to be identically distributed if they are from the same population.
Similarly, a set of observations is said to be identically distributed if they share the same data generating process.
- Independence (Definition 10.3)
- Two random variables are said to be independent when the likelihood that one random variable takes on a particular value does not depend on the value of the other random variable.
Similarly, two observations are said to be independent when the likelihood that one observation takes on a particular value does not depend on the value of the other observation.
- Indicator Variable (Definition 21.1)
- An indicator variable is a binary (takes the value 0 or 1) variable used to represent whether an observation belongs to a specific group defined by a categorical variable.
- Interaction Term (Definition 21.5)
- A variable resulting from taking the product of two predictors in a regression model. The product allows the effect of one predictor to depend on another predictor, essentially modifying the effect.
- Interquartile Range (Definition 5.5)
- Often abbreviated as IQR, this is the distance between the first and third quartiles. This measure of spread indicates the range over which the middle 50% of the data is spread.
- Law of Large Numbers (Definition 6.1)
- For our purposes, the Law of Large Numbers essentially says that as the sample size grows infinitely large, a statistic becomes arbitrarily close to (an extremely good approximation of) the parameter it estimates.
- Least Squares Estimates (Definition 17.2)
- Often called the “best fit line,” these are the estimates of the parameters in a regression model chosen to minimize the sum of squared errors. Formally, for Equation 17.3, they are the values of \(\beta_0\) and \(\beta_1\) which minimize the quantity
\[\sum_{i=1}^n \left[(\text{Response})_i - \beta_0 - \beta_1(\text{Predictor})_{i}\right]^2.\]
The resulting estimates are often denoted by \(\widehat{\beta}_0\) and \(\widehat{\beta}_1\).
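As a sketch, the standard closed-form solutions to this minimization (stated here as an aside, since the text does not reproduce them at this point) can be computed directly and compared against `numpy.polyfit`, which minimizes the same sum of squared errors; the data are made up.

```python
import numpy as np

# Hypothetical predictor/response pairs.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.3, 4.1, 5.8, 8.2, 9.9])

# Closed-form least squares estimates for the simple linear model.
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()

print(b0, b1)
print(np.polyfit(x, y, 1))  # returns [slope, intercept]; same estimates
```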
- Least Squares Estimates for General Linear Model (Definition 21.3)
- The least squares estimates for a general linear model (Equation 21.3) are the values of \(\beta_0, \beta_1, \beta_2, \dotsc, \beta_p\) which minimize the quantity
\[\sum_{i=1}^n \left[(\text{Response})_i - \beta_0 - \sum_{j=1}^{p} \beta_j(\text{Predictor } j)_{i}\right]^2.\]
- Mean Square (Definition 19.6)
- A mean square is the ratio of a sum of squares and its corresponding degrees of freedom. For a model of the form in Equation 17.3, we have
- Mean Square Total (MST): estimated variance of the responses; this is the same as the sample variance of the response.
- Mean Square for Regression (MSR): estimated variance of the predicted responses.
- Mean Square Error (MSE): estimated variance of the error terms; this is equivalent to the estimated variance of the response for a given value of the predictor (the variance of the response about the regression line).
In each case, the mean square is an estimated variance.
- Mean Square (in ANOVA) (Definition 29.3)
- A mean square is the ratio of a sum of squares and its corresponding degrees of freedom. For a model of the form in Equation 27.2, we have
- Mean Square Total (MST): estimated variance of the responses; this is the same as the sample variance of the response.
- Mean Square for Regression (MSR): estimated variance of the sample mean responses from each group; this is also called the Mean Square for Treatment (MSTrt) in ANOVA.
- Mean Square Error (MSE): estimated variance of the error terms; this is equivalent to the estimated variance of the response within a group.
In each case, the mean square is an estimated variance. These are equivalent to the MST, MSR, and MSE in the regression model (Definition 19.6).
- Multivariable (Definition 15.1)
- This term refers to questions of interest which involve more than a single variable. Often, these questions involve many variables. A multivariable model typically refers to a model with two or more predictors.
- Normal Distribution (Definition 18.2)
- Also called the Gaussian Distribution, this probability model is popular for modeling noise within a data generating process. It has the following characteristics:
- It is bell-shaped.
- It is symmetric, meaning the mean is directly at its center, and the lower half of the distribution looks like a mirror image of the upper half of the distribution.
- It is often useful for modeling noise due to natural phenomena or sums of measurements.
The functional form of the Normal distribution is
\[f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{1}{2\sigma^2}(x - \mu)^2}\]
where \(\mu\) is the mean of the distribution and \(\sigma^2\) is the variance of the distribution.
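For illustration, the functional form above can be evaluated directly and checked against `scipy.stats.norm.pdf`; the grid of points and the choice of a standard Normal (\(\mu = 0\), \(\sigma^2 = 1\)) are arbitrary.

```python
import numpy as np
from scipy import stats

mu, sigma = 0.0, 1.0
x = np.linspace(-3, 3, 7)

# Density from the functional form given above...
f = np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)

# ...compared with scipy's implementation.
print(np.allclose(f, stats.norm.pdf(x, loc=mu, scale=sigma)))  # True
```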
- Null Distribution (Definition 7.1)
- The sampling distribution of a statistic when the null hypothesis is true.
- Null Hypothesis (Definition 3.9)
- The statement (or theory) about the parameter that we would like to disprove. This is denoted \(H_0\), read “H-naught” or “H-zero”.
- Null Value (Definition 3.11)
- The value associated with the equality component of the null hypothesis; it forms the threshold or boundary between the hypotheses. Note: not all questions of interest require a null value be specified.
- Numeric Variable (Definition 1.6)
- Also called a “quantitative variable,” a measurement on a subject which takes on a numeric value and for which ordinary arithmetic makes sense.
- Observational Study (Definition 4.4)
- A study in which each subject “self-selects” into one of the groups being compared in the study. The phrase “self-selects” is used very loosely here and can include studies for which the groups are defined by an inherent characteristic or are chosen haphazardly.
- Outlier (Definition 5.7)
- An individual observation which is so extreme, relative to the rest of the observations in the sample, that it does not appear to conform to the same distribution.
- P-Value (Definition 7.2)
- The probability, assuming the null hypothesis is true, that we would observe a statistic, from sampling variability alone, as extreme or more so as that observed in our sample. The p-value quantifies the strength of evidence against the null hypothesis, with smaller values indicating stronger evidence.
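As a minimal sketch of the computation, suppose an analysis has already produced a collection of statistics simulated under the null hypothesis; the values below (and the observed statistic) are made-up stand-ins.

```python
import numpy as np

rng = np.random.default_rng(3)
null_stats = rng.normal(size=10000)   # stand-in for a simulated null distribution
observed = 2.1                        # stand-in for the observed statistic

# Two-sided p-value: proportion of null statistics as extreme or more so
# than the one observed in the sample.
print(np.mean(np.abs(null_stats) >= np.abs(observed)))
```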
- Parameter (Definition 3.6)
- Numeric quantity which summarizes the distribution of a variable within the population of interest. Generally denoted by Greek letters in statistical formulas.
- Percentile (Definition 5.1)
- The \(k\)-th percentile is the value \(q\) such that \(k\)% of the values in the distribution are less than or equal to \(q\). For example,
- 25% of values in a distribution are less than or equal to the 25-th percentile (known as the “first quartile” and denoted \(Q_1\)).
- 50% of values in a distribution are less than or equal to the 50-th percentile (known as the “median”).
- 75% of values in a distribution are less than or equal to the 75-th percentile (known as the “third quartile” and denoted \(Q_3\)).
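For illustration, `numpy.percentile` computes these quantities directly (the data are made up, and different software may use slightly different interpolation conventions); the last line is the interquartile range defined earlier in this glossary.

```python
import numpy as np

x = np.array([3, 5, 7, 8, 9, 11, 13, 14, 15, 20], dtype=float)

q1, median, q3 = np.percentile(x, [25, 50, 75])
print(q1, median, q3)  # first quartile, median, third quartile
print(q3 - q1)         # interquartile range (IQR)
```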
- Population (Definition 1.1)
- The collection of subjects we would like to say something about.
- Power (Definition 25.3)
- In statistics, power refers to the probability that a study will discern a signal when one really exists in the data generating process. More technically, it is the probability a study will provide evidence against the null hypothesis when the null hypothesis is false.
- Probability Plot (Definition 20.3)
- Also called a “Quantile-Quantile Plot”, a probability plot is a graphic for comparing the distribution of an observed sample with a theoretical probability model for the distribution of the underlying population. The quantiles observed in the sample are plotted against those expected under the theoretical model.
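A minimal sketch using `scipy.stats.probplot`, which returns the paired theoretical and sample quantiles that would be plotted; the simulated sample and seed are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
sample = rng.normal(loc=5, scale=2, size=30)   # hypothetical sample

# Theoretical (Normal) quantiles paired with the ordered sample values;
# plotting osr against osm gives the probability (Q-Q) plot.
(osm, osr), (slope, intercept, r) = stats.probplot(sample, dist="norm")
print(r)   # near 1 when the Normal model describes the sample well
```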
- R-Squared (Definition 19.4)
- Sometimes reported as a percentage, the R-Squared value measures the proportion of the variability in the response explained by a model. It is given by
\[\text{R-squared} = \frac{SSR}{SST}.\]
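The following Python sketch (with made-up data and a simple linear fit via `numpy.polyfit`) computes the Error Sum of Squares defined earlier together with the Regression and Total Sums of Squares defined later in this glossary, verifies the decomposition SST = SSR + SSE, and forms R-squared.

```python
import numpy as np

# Hypothetical data and a least squares fit of a simple linear model.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.0, 4.2, 5.9, 8.4, 9.6, 12.1])
slope, intercept = np.polyfit(x, y, 1)
fitted = intercept + slope * x           # predicted mean responses

sst = np.sum((y - y.mean()) ** 2)        # Total Sum of Squares
ssr = np.sum((fitted - y.mean()) ** 2)   # Regression Sum of Squares
sse = np.sum((y - fitted) ** 2)          # Error Sum of Squares

print(np.isclose(sst, ssr + sse))        # SST = SSR + SSE
print(ssr / sst)                         # R-squared
```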
- Randomization (Definition 25.2)
- Randomization can refer to random selection or random allocation. Random selection refers to the use of a random mechanism (e.g., a simple random sample, Definition 4.2, or a stratified random sample, Definition 4.3) to select units from the population. Random selection minimizes bias.
Random allocation refers to the use of a random mechanism when assigning units to a specific treatment group in a controlled experiment (Definition 4.5). Random allocation eliminates confounding and permits causal interpretations.
- Randomized Complete Block Design (Definition 34.1)
- A randomized complete block design is an example of a controlled experiment utilizing blocking. Each treatment is randomized to observations within blocks such that within each block every treatment is present and the same number of observations are assigned to each treatment.
- Reduction of Noise (Definition 25.4)
- Reducing extraneous sources of variability can be accomplished by fixing extraneous variables or blocking (Definition 25.5). These actions reduce the number of differences between the units under study.
- Reference Group (Definition 21.2)
- The group defined by setting all indicator variables in a model for the data generating process equal to 0.
- Regression (Definition 17.1)
- Used broadly, this refers to the process of fitting a statistical model for the data generating process to observed data. More specifically, it is a process of estimating the parameters in a data generating process using observed data.
- Regression Sum of Squares (Definition 19.2)
- The Regression Sum of Squares, abbreviated SSR, is given by
\[SSR = \sum_{i=1}^{n} \left[(\text{Predicted Mean Response})_i - (\text{Overall Mean Response})\right]^2\]
where the predicted mean response is computed using the least squares estimates and the overall mean response is the sample mean.
- Relative Frequency (Definition 3.5)
- Also called the “proportion,” the fraction of observations falling into a particular group (level) of a categorical variable.
- Replication (Definition 25.1)
- Replication results from taking measurements on different units (or subjects), for which you expect the results to be similar. That is, any variability across the units is due to natural variability within the population.
- Residual (Definition 20.1)
- The difference between the observed response and the predicted response (estimated deterministic portion of the model). Specifically, the residual for the \(i\)-th observation is given by
\[(\text{Residual})_i = (\text{Response})_i - (\text{Predicted Mean Response})_i\]
where the “predicted mean response” is often called the predicted, or fitted, value.
Residuals mimic the noise in the data generating process.
- Response (Definition 3.2)
- The primary variable of interest within a study. This is the variable you would either like to explain or estimate.
- Sample (Definition 1.2)
- The collection of subjects for which we actually obtain measurements (data).
- Sampling Distribution (Definition 6.2)
- The distribution of a statistic across repeated samples (of the same size) from the population.
- Simple Random Sample (Definition 4.2)
- Often abbreviated SRS, this is a sample of size \(n\) such that every collection of size \(n\) is equally likely to be the resulting sample. This is equivalent to a lottery.
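For illustration only, a simple random sample can be drawn by sampling without replacement from a list of population units; the population size and sample size below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
population = np.arange(1, 501)   # hypothetical population of 500 labeled units

# Sampling without replacement makes every collection of 20 units equally likely.
srs = rng.choice(population, size=20, replace=False)
print(np.sort(srs))
```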
- Standard Deviation (Definition 5.4)
- A measure of spread, this is the square root of the variance.
- Standard Error (Definition 6.4)
- The standard error is the estimated standard deviation of a statistic; that is, it is the standard deviation from a model for the sampling distribution of a statistic. It quantifies the variability in the statistic across repeated samples.
- Standardized (Test) Statistic (Definition 12.1)
- Also known as a test statistic, a standardized statistic is a ratio of the signal in the sample to the noise in the sample. The larger the standardized statistic, the stronger the evidence of a signal; said another way, the larger the standardized statistic, the stronger the evidence against the null hypothesis.
- Standardized Statistic for ANOVA (Definition 29.4)
- Consider testing a set of hypotheses for a model of the data generating process of the form (Equation 27.2):
\[(\text{Response})_i = \sum_{j=1}^{k} \mu_j(\text{Group } j)_i + \varepsilon_i,\]
where
\[(\text{Group } j)_i = \begin{cases} 1 & \text{i-th unit belongs to group } j \\ 0 & \text{otherwise} \end{cases}\]
is an indicator variable. Denote this model as Model 1, and denote the model that results from applying the parameter constraints defined under the null hypothesis as Model 0. A standardized statistic, sometimes called the “standardized F statistic,” for testing the hypotheses is given by
\[T^* = \frac{\left(SSE_0 - SSE_1\right) / (k - r)}{SSE_1 / (n - k)},\]
where \(k\) is the number of parameters in the full unconstrained model and \(r\) is the number of parameters in the reduced model. Defining
\[MSA = \frac{SSE_0 - SSE_1}{k - r}\]
to be the “mean square for additional terms,” which captures the shift in the error sum of squares from the reduced model to the full unconstrained model, we can write the standardized statistic as
\[T^* = \frac{MSA}{MSE}\]
where the mean square error in the denominator comes from the full unconstrained model. Just as before, the MSE represents the residual variance, that is, the variance in the response for a particular set of predictor values.
- Standardized Statistic for General Linear Model (Definition 21.4)
- Consider testing a set of hypotheses for a model of the data generating process of the form (Equation 21.3):
\[(\text{Response})_i = \beta_0 + \sum_{j=1}^{p} \beta_j(\text{Predictor } j)_i + \varepsilon_i.\]
Denote this model as Model 1, and denote the model that results from applying the parameter constraints defined under the null hypothesis as Model 0. A standardized statistic, sometimes called the “nested F statistic,” for testing the hypotheses is given by
\[T^* = \frac{\left(SSE_0 - SSE_1\right) / (p + 1 - r)}{SSE_1 / (n - p - 1)},\]
where \(p + 1\) is the number of parameters in the full unconstrained model (including the intercept) and \(r\) is the number of parameters in the reduced model. Defining
\[MSA = \frac{SSE_0 - SSE_1}{p + 1 - r}\]
to be the “mean square for additional terms,” which captures the shift in the error sum of squares from the reduced model to the full unconstrained model, we can write the standardized statistic as
\[T^* = \frac{MSA}{MSE}\]
where the mean square error in the denominator comes from the full unconstrained model. Just as before, the MSE represents the residual variance, that is, the variance in the response for a particular set of predictor values.
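A minimal sketch of this computation in Python; the data, the choice of two predictors, and the hypothesis that both slopes are zero are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 8
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 + rng.normal(scale=0.5, size=n)   # hypothetical response

def sse(X, y):
    """Error sum of squares from a least squares fit of y on the columns of X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

# Model 1 (full): intercept + x1 + x2, so p + 1 = 3 parameters.
X1 = np.column_stack([np.ones(n), x1, x2])
# Model 0 (reduced, under H0: beta_1 = beta_2 = 0): intercept only, r = 1 parameter.
X0 = np.ones((n, 1))

p_plus_1, r = 3, 1
msa = (sse(X0, y) - sse(X1, y)) / (p_plus_1 - r)   # mean square for additional terms
mse = sse(X1, y) / (n - p_plus_1)                  # mean square error, full model
print(msa / mse)                                   # the standardized (nested F) statistic
```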
- Standardized Statistic for Repeated Measures ANOVA (Definition 38.1)
- Consider testing a set of hypotheses for a model of the data generating process of the form (Equation 36.2):
\[(\text{Response})_i = \sum_{j=1}^{k} \mu_j (\text{Group } j)_i + \sum_{m=2}^{b} \beta_m (\text{Block } m)_i + \varepsilon_i\]
where
\[ \begin{aligned} (\text{Group } j)_i &= \begin{cases} 1 & \text{if i-th observation corresponds to group } j \\ 0 & \text{otherwise} \end{cases} \\ (\text{Block } m)_i &= \begin{cases} 1 & \text{if i-th observation corresponds to block } m \\ 0 & \text{otherwise} \end{cases} \end{aligned} \]
are indicator variables. Denote this model as Model 1, and denote the model that results from applying the parameter constraints defined under the null hypothesis as Model 0. A standardized statistic, sometimes called the “standardized F statistic,” for testing the hypotheses is given by
\[T^* = \frac{\left(SSE_0 - SSE_1\right) / (k + b - 1 - r)}{SSE_1 / (n - k - b + 1)},\]
where \(k + b - 1\) is the number of parameters in the full unconstrained model and \(r\) is the number of parameters in the reduced model. Defining
\[MSA = \frac{SSE_0 - SSE_1}{k + b - 1 - r}\]
to be the “mean square for additional terms,” which captures the shift in the error sum of squares from the reduced model to the full unconstrained model, we can write the standardized statistic as
\[T^* = \frac{MSA}{MSE}\]
where the mean square error in the denominator comes from the full unconstrained model. Just as before, the MSE represents the residual variance, that is, the variance in the response for a particular set of predictor values.
- Standardized Statistic for Simple Linear Regression (Definition 19.7)
- Consider testing a set of hypotheses for a model of the data generating process of the form (Equation 17.3):
\[(\text{Response})_i = \beta_0 + \beta_1(\text{Predictor})_i + \varepsilon_i.\]
Denote this model as Model 1, and denote the model that results from applying the parameter constraints defined under the null hypothesis as Model 0. A standardized statistic, sometimes called the “standardized F statistic,” for testing the hypotheses is given by
\[T^* = \frac{\left(SSE_0 - SSE_1\right) / (2 - r)}{SSE_1 / (n - 2)},\]
where \(r\) is the number of parameters in the reduced model. Defining
\[MSA = \frac{SSE_0 - SSE_1}{2 - r}\]
to be the “mean square for additional terms,” which captures the shift in the error sum of squares from the reduced model to the full unconstrained model, we can write the standardized statistic as
\[T^* = \frac{MSA}{MSE}\]
where the mean square error in the denominator comes from the full unconstrained model.
- Statistic (Definition 5.8)
- Numeric quantity which summarizes the distribution of a variable within a sample.
- Statistical Inference (Definition 1.3)
- The process of using a sample to characterize some aspect of the underlying population.
- Stochastic Process (Definition 10.2)
- A process for which the output cannot be predicted with certainty.
- Stratified Random Sample (Definition 4.3)
- A sample in which the population is first divided into groups, or strata, based on a characteristic of interest; a simple random sample is then taken within each group.
- Time-Series Plot (Definition 20.2)
- A time-series plot of a variable is a line plot with the variable on the y-axis and time on the x-axis.
- Total Sum of Squares (Definition 19.1)
- The Total Sum of Squares, abbreviated SST, is given by
\[SST = \sum_{i=1}^{n} \left[(\text{Response})_i - (\text{Overall Mean Response})\right]^2\]
where the overall mean response is the sample mean.
- Variability (Definition 3.1)
- The notion that measurements differ from one observation to another.
- Variable (Definition 1.4)
- A measurement, or category, describing some aspect of the subject.
- Variance (Definition 5.3)
- A measure of spread, this roughly captures the average distance values in the distribution are from the mean.
For a sample of size \(n\), it is computed by \[s^2 = \frac{1}{n-1}\sum_{i=1}^{n} \left(x_i - \bar{x}\right)^2\]
where \(\bar{x}\) is the sample mean and \(x_i\) is the \(i\)-th value in the sample. The division by \(n-1\) instead of \(n\) removes bias in the statistic.
The symbol \(\sigma^2\) is often used to denote the variance in the population.
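As a quick illustration of the formula (with made-up data), note that `numpy.var` divides by \(n\) unless `ddof=1` is supplied:

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

# Sample variance by the formula, dividing by n - 1...
s2_formula = np.sum((x - x.mean()) ** 2) / (len(x) - 1)

# ...which matches numpy with ddof=1 (the default ddof=0 divides by n).
print(s2_formula, np.var(x, ddof=1))
```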
- Within Group Variability (Definition 29.2)
- When comparing a quantitative response across groups, the within group variability is the variability in the response within each group.