📘 Section 11.3 – Properties of the Least Squares Estimators

📌 Model Assumptions

We assume the simple linear regression model:

Y = β₀ + β₁x + ε

where E(ε) = 0, Var(ε) = σ², and the errors ε are independent and identically distributed.

Then the least squares estimators are:

β̂₁ = Sxy / Sxx
β̂₀ = ȳ − β̂₁x̄

where Sxy = Σ(xᵢ − x̄)(yᵢ − ȳ) and Sxx = Σ(xᵢ − x̄)².
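The formulas above translate directly into code. The sketch below computes β̂₁ and β̂₀ with NumPy on a small hypothetical salary data set (the numbers are made up for illustration):

```python
import numpy as np

# Hypothetical data: x = years of experience, y = salary in $1000s
x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
y = np.array([30, 35, 37, 42, 45, 50, 52, 58], dtype=float)

x_bar, y_bar = x.mean(), y.mean()
Sxx = np.sum((x - x_bar) ** 2)                  # Σ(xᵢ − x̄)²
Sxy = np.sum((x - x_bar) * (y - y_bar))         # Σ(xᵢ − x̄)(yᵢ − ȳ)

beta1_hat = Sxy / Sxx                           # slope estimate β̂₁
beta0_hat = y_bar - beta1_hat * x_bar           # intercept estimate β̂₀
```

The same estimates fall out of any standard least squares routine (e.g. `np.polyfit(x, y, 1)`), which is a quick way to sanity-check the hand-computed versions.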

✅ Unbiasedness of Estimators

E(β̂₁) = β₁
E(β̂₀) = β₀

Therefore, both β̂₀ and β̂₁ are unbiased estimators of the true coefficients.

📐 Variance of Estimators

Var(β̂₁) = σ² / Sxx
Var(β̂₀) = σ² × (1/n + x̄² / Sxx)

📊 Covariance Between Estimators

Cov(β̂₀, β̂₁) = −σ² × x̄ / Sxx
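The variance and covariance formulas agree with the matrix form of least squares, where the covariance matrix of (β̂₀, β̂₁) is σ²(XᵀX)⁻¹ for the design matrix X = [1, x]. The sketch below evaluates both versions on an arbitrary design (x = 1, …, 10 and σ² = 4 are illustrative choices) so they can be compared:

```python
import numpy as np

# Illustrative design: n = 10 evenly spaced x values, known sigma^2
x = np.arange(1, 11, dtype=float)
sigma2 = 4.0
n, x_bar = x.size, x.mean()
Sxx = np.sum((x - x_bar) ** 2)

# Scalar formulas from the text
var_b1 = sigma2 / Sxx
var_b0 = sigma2 * (1 / n + x_bar**2 / Sxx)
cov_b0_b1 = -sigma2 * x_bar / Sxx
```

Note that Cov(β̂₀, β̂₁) is negative whenever x̄ > 0: overestimating the slope forces the fitted line to pivot, pulling the intercept estimate down.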

🧮 Estimated Standard Errors

If σ² is unknown, we estimate it with the unbiased estimator:

σ̂² = SSE / (n − 2)

where SSE = Σ(yᵢ − ŷᵢ)² is the sum of squared residuals. Dividing by n − 2 rather than n accounts for the two estimated parameters β̂₀ and β̂₁.

Then the estimated standard errors are:

se(β̂₁) = √(σ̂² / Sxx)
se(β̂₀) = √[σ̂² × (1/n + x̄² / Sxx)]
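Putting the pieces together, the sketch below fits the line, forms the residuals, estimates σ² by SSE / (n − 2), and plugs it into the standard error formulas (same hypothetical salary data as before):

```python
import numpy as np

# Hypothetical data: x = years of experience, y = salary in $1000s
x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
y = np.array([30, 35, 37, 42, 45, 50, 52, 58], dtype=float)
n = x.size

x_bar, y_bar = x.mean(), y.mean()
Sxx = np.sum((x - x_bar) ** 2)
b1 = np.sum((x - x_bar) * (y - y_bar)) / Sxx
b0 = y_bar - b1 * x_bar

resid = y - (b0 + b1 * x)          # residuals yᵢ − ŷᵢ
SSE = np.sum(resid ** 2)
sigma2_hat = SSE / (n - 2)         # unbiased estimate of σ²

se_b1 = np.sqrt(sigma2_hat / Sxx)
se_b0 = np.sqrt(sigma2_hat * (1 / n + x_bar**2 / Sxx))
```

These are the standard errors that appear in the usual t-tests and confidence intervals for the slope and intercept.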


🧪 Quick Practice

Question: If the slope β̂₁ = 3, what does it mean in the context of predicting salary from years of experience?

Answer: Each additional year of experience is associated with a predicted increase of 3 units in salary (e.g., $3,000 if salary is measured in thousands of dollars).


