The law of total variance extends the law of total expectation, E[X] = E[E[X∣Y]], to second moments. It is the probabilistic analogue of the ANOVA decomposition: the total sum of squares equals within-group plus between-group sums of squares. It also connects to Jensen's inequality, φ(E[X]) ≤ E[φ(X)] for convex φ: the between-group variance is non-negative by convexity of squaring, which explains why Var(X) ≥ E[Var(X∣G)].
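As a concrete sanity check (a toy example constructed here, not taken from the source): let Y be a fair coin, with X uniform on {0, 2} when Y = 0 and uniform on {4, 6} when Y = 1. Then the within-group and between-group variances add up to the total:

$$
\begin{aligned}
\mathbb{E}[\operatorname{Var}(X \mid Y)] &= \tfrac12 \cdot 1 + \tfrac12 \cdot 1 = 1,\\
\operatorname{Var}(\mathbb{E}[X \mid Y]) &= \tfrac12 (1-3)^2 + \tfrac12 (5-3)^2 = 4,\\
\operatorname{Var}(X) &= \tfrac{0+4+16+36}{4} - 3^2 = 14 - 9 = 5 = 1 + 4.
\end{aligned}
$$

Here the conditional means are 1 and 5 (overall mean 3), and each conditional distribution has variance 1, matching the decomposition exactly.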
Lean4 Proof
Mathlib provides integral_condVar_add_variance_condExp in Mathlib.Probability.CondVar, which is exactly the law of total variance.
The lemma is proved by expanding both sides via E[X²∣m] − (E[X∣m])² and applying integral_condExp to collapse the tower.
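In classical notation, the expansion behind that proof reads:

$$
\begin{aligned}
\mathbb{E}[\operatorname{Var}(X \mid m)] &= \mathbb{E}\big[\mathbb{E}[X^2 \mid m] - (\mathbb{E}[X \mid m])^2\big] = \mathbb{E}[X^2] - \mathbb{E}\big[(\mathbb{E}[X \mid m])^2\big],\\
\operatorname{Var}(\mathbb{E}[X \mid m]) &= \mathbb{E}\big[(\mathbb{E}[X \mid m])^2\big] - \big(\mathbb{E}[\mathbb{E}[X \mid m]]\big)^2 = \mathbb{E}\big[(\mathbb{E}[X \mid m])^2\big] - (\mathbb{E}[X])^2,
\end{aligned}
$$

where both simplifications use the tower property $\mathbb{E}[\mathbb{E}[\,\cdot \mid m]] = \mathbb{E}[\,\cdot\,]$ (the role played by `integral_condExp`). Adding the two lines, the $\mathbb{E}[(\mathbb{E}[X \mid m])^2]$ terms cancel, leaving $\mathbb{E}[X^2] - (\mathbb{E}[X])^2 = \operatorname{Var}(X)$.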
```lean
import Mathlib.Probability.CondVar

namespace MoonMath

open MeasureTheory ProbabilityTheory

/-- **Law of Total Variance**.
For a square-integrable `X` in a probability space,
`E[Var(X|m)] + Var(E[X|m]) = Var(X)`. -/
theorem total_variance {Ω : Type*} {m₀ m : MeasurableSpace Ω}
    {μ : Measure Ω} [IsProbabilityMeasure μ]
    (hm : m ≤ m₀) {X : Ω → ℝ} (hX : MemLp X 2 μ) :
    μ[Var[X; μ | m]] + Var[μ[X | m]; μ] = Var[X; μ] :=
  integral_condVar_add_variance_condExp hm hX

end MoonMath
```