Start here

First, use the Chapter 12 Excel ICE for Multiple Regression to practice getting the output and testing hypotheses in MLR.

🔗 Open Excel ICE for MLR

This page is the theory / formula version of that ICE. It explains the F tests, t tests, and sums of squares that Excel is computing for you.

📘 Section 12.2 – Hypothesis Tests in Multiple Linear Regression

🎯 What Are We Testing?

In multiple linear regression, hypothesis tests answer questions like:

- Is the regression useful overall, or would an intercept-only model do just as well?
- Does an individual predictor xⱼ contribute, given that the other predictors are already in the model?
- Does a group of predictors (for example, a block of interaction terms) contribute jointly?

We assume the usual regression model:

Y = β₀ + β₁x₁ + β₂x₂ + … + βkxk + ε,   ε ~ N(0, σ²), independent

Here, k is the number of predictors, so there are p = k + 1 parameters (including the intercept), and n observations.

🧪 Global F-Test – “Is the Regression Useful?”

The test for significance of regression checks whether the model with predictors does better than a model with only an intercept.

Hypotheses

H₀: β₁ = β₂ = … = βk = 0
H₁: βj ≠ 0 for at least one j

We partition the total variability:

SST = SSR + SSE

F Test Statistic

F₀ = (SSR / k) ÷ (SSE / (n − p)) = MSR / MSE

Under H₀, F₀ follows an F distribution with k numerator and n − p denominator degrees of freedom; reject H₀ when F₀ > f₍α, k, n−p₎.

ANOVA Table Structure

| Source     | SS  | df    | MS                  | F              |
|------------|-----|-------|---------------------|----------------|
| Regression | SSR | k     | MSR = SSR / k       | F₀ = MSR / MSE |
| Error      | SSE | n − p | MSE = SSE / (n − p) |                |
| Total      | SST | n − 1 |                     |                |

Interpretation: If H₀ is rejected, the model with predictors explains a significant portion of variability in Y (at least one slope is nonzero).
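The ANOVA decomposition above can be computed by hand. Below is a minimal sketch using a small hypothetical dataset (the numbers are made up for illustration only); it reproduces the SST = SSR + SSE partition and the global F statistic that Excel reports:

```python
import numpy as np
from scipy import stats

# Hypothetical data: n = 8 observations, k = 2 predictors
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0],
              [5.0, 6.0], [6.0, 5.0], [7.0, 8.0], [8.0, 7.0]])
y = np.array([3.1, 3.9, 7.2, 7.8, 11.1, 11.9, 15.2, 15.8])

n, k = X.shape
p = k + 1                                  # parameters, including the intercept
Xd = np.column_stack([np.ones(n), X])      # design matrix with intercept column

beta_hat, *_ = np.linalg.lstsq(Xd, y, rcond=None)  # least-squares fit
y_hat = Xd @ beta_hat

SST = np.sum((y - y.mean()) ** 2)          # total sum of squares
SSE = np.sum((y - y_hat) ** 2)             # error (residual) sum of squares
SSR = SST - SSE                            # regression sum of squares

MSR = SSR / k
MSE = SSE / (n - p)
F0 = MSR / MSE
p_value = stats.f.sf(F0, k, n - p)         # P(F > F0) under H0
```

A small p-value leads to rejecting H₀, i.e., at least one slope is nonzero.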

📊 R² and Adjusted R²

The global F-test is often paired with R² and adjusted R² to summarize overall fit.

R² = SSR / SST = 1 − SSE / SST

measures the proportion of total variability in Y explained by the regression. It never decreases when you add more predictors, even useless ones.

To “penalize” unnecessary predictors, we use adjusted R²:

R²adj = 1 − [SSE / (n − p)] ÷ [SST / (n − 1)]
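Both quantities follow directly from the ANOVA sums of squares. A quick sketch, using hypothetical summary numbers you might read off an ANOVA table:

```python
# Hypothetical ANOVA summary values (for illustration only)
SST, SSE = 100.0, 20.0
n, k = 25, 3               # observations and predictors
p = k + 1                  # parameters including the intercept

SSR = SST - SSE
R2 = SSR / SST                                    # = 1 - SSE/SST
R2_adj = 1 - (SSE / (n - p)) / (SST / (n - 1))    # penalizes extra predictors
```

Here R² = 0.8, while adjusted R² is smaller because it charges the model for its 3 predictors.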

🧍 t-Tests for Individual Coefficients

Once the overall model is significant, we often ask whether individual predictors are helpful given that the others are already in the model.

General Hypotheses

H₀: βj = βj0
H₁: βj ≠ βj0

Test Statistic

t₀ = (β̂j − βj0) / se(β̂j)

Under H₀, t₀ follows a t distribution with n − p degrees of freedom.

Important Special Case

To test whether predictor xj contributes to the model:

H₀: βj = 0     vs.     H₁: βj ≠ 0

If we fail to reject H₀, xj does not provide a significant partial contribution, given that the other predictors remain in the model.
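The standard errors se(β̂ⱼ) come from the diagonal of MSE·(XᵀX)⁻¹. Here is a minimal sketch on simulated (hypothetical) data in which x₂ is built to have a true coefficient of zero:

```python
import numpy as np
from scipy import stats

# Simulated data: beta1 = 2 is real signal, beta2 = 0 (x2 is a useless predictor)
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 2))
y = 1.0 + 2.0 * X[:, 0] + rng.normal(scale=0.5, size=10)

n, k = X.shape
p = k + 1
Xd = np.column_stack([np.ones(n), X])

XtX_inv = np.linalg.inv(Xd.T @ Xd)
beta_hat = XtX_inv @ Xd.T @ y              # OLS estimates (b0, b1, b2)
resid = y - Xd @ beta_hat
MSE = resid @ resid / (n - p)

se = np.sqrt(MSE * np.diag(XtX_inv))       # standard errors of each estimate
t0 = beta_hat / se                         # t statistics for H0: beta_j = 0
p_values = 2 * stats.t.sf(np.abs(t0), n - p)
```

These are the same t statistics and two-sided p-values that appear in Excel's coefficient table.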

🧩 Partial F-Tests for Groups of Predictors

Sometimes we want to test a subset of predictors at once (for example, a block of interaction or quadratic terms).

Idea: Compare Full vs. Reduced Models

Extra sum of squares due to the subset:

SSR(extra) = SSRfull − SSRreduced

Partial F Statistic

F₀ = [SSR(extra) / q] ÷ MSEfull

where q is the number of predictors being tested (the number of coefficients set to zero under H₀). Under H₀, F₀ follows an F distribution with q and n − p degrees of freedom.

Special case: When q = 1 (testing a single coefficient), the partial F-test is exactly equivalent to the t-test:

F₀ = t₀²
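The full-versus-reduced comparison and the q = 1 equivalence can be verified numerically. A sketch on simulated (hypothetical) data, testing whether x₂ adds to a model that already contains x₁:

```python
import numpy as np

# Simulated data (hypothetical): does x2 help once x1 is in the model?
rng = np.random.default_rng(1)
x1 = rng.normal(size=12)
x2 = rng.normal(size=12)
y = 2.0 + 1.5 * x1 + 0.8 * x2 + rng.normal(scale=0.7, size=12)

def ols_sse(Xd, y):
    """Return (residual sum of squares, coefficient estimates) for an OLS fit."""
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    r = y - Xd @ beta
    return r @ r, beta

n = len(y)
X_full = np.column_stack([np.ones(n), x1, x2])   # full model
X_red = np.column_stack([np.ones(n), x1])        # reduced model (drop x2)

SSE_full, beta_full = ols_sse(X_full, y)
SSE_red, _ = ols_sse(X_red, y)

# SSR(extra) = SSR_full - SSR_reduced = SSE_reduced - SSE_full
q = 1
p_full = X_full.shape[1]
MSE_full = SSE_full / (n - p_full)
F0 = ((SSE_red - SSE_full) / q) / MSE_full       # partial F for adding x2

# t statistic for beta_2 in the full model, for comparison
XtX_inv = np.linalg.inv(X_full.T @ X_full)
se2 = np.sqrt(MSE_full * XtX_inv[2, 2])
t0 = beta_full[2] / se2
```

With q = 1 the computed F₀ matches t₀² to machine precision, confirming the identity above.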

🧠 Summary