Proof that Autocovariance Depends Only on Lag for Weakly Stationary Processes

The Formal Theorem

Let \( \{X_t : t \in \mathbb{Z}\} \) be a stochastic process. The autocovariance function between \( X_t \) and \( X_s \) is defined in general as \( \gamma_X(t, s) = \operatorname{Cov}(X_t, X_s) = E[(X_t - E[X_t])(X_s - E[X_s])] \).

A process \( \{X_t\} \) is defined to be *weakly stationary* if it satisfies the following two conditions:

1. **Constant Mean:** \( E[X_t] = \mu \) for all \( t \in \mathbb{Z} \), where \( \mu \) is a finite constant.
2. **Time-Invariant Autocovariance:** For any integers \( t, s, k \), the autocovariance between \( X_t \) and \( X_s \) is invariant under time shifts: \( \operatorname{Cov}(X_t, X_s) = \operatorname{Cov}(X_{t+k}, X_{s+k}) \).

**Theorem Statement:** For a weakly stationary process \( \{X_t\} \), the autocovariance function \( \gamma_X(t, s) \) depends solely on the time lag \( h = t - s \), and not on the individual time points \( t \) or \( s \). Specifically, there exists a function \( \gamma_X: \mathbb{Z} \to \mathbb{R} \) such that

\[
\gamma_X(t, s) = \gamma_X(t - s).
\]

**Proof:** Let \( \{X_t\} \) be a weakly stationary process. By definition, its mean is constant, \( E[X_t] = \mu \), and its autocovariance is time-invariant, meaning that for any integers \( t, s, k \):

\[
\operatorname{Cov}(X_t, X_s) = \operatorname{Cov}(X_{t+k}, X_{s+k}).
\]

Writing \( \gamma_X(t, s) \) for \( \operatorname{Cov}(X_t, X_s) \), this reads \( \gamma_X(t, s) = \gamma_X(t+k, s+k) \). To show that this depends only on the lag \( h = t - s \), choose the specific shift \( k = -s \), which moves the second time index to 0. Substituting \( k = -s \) into the time-invariance property:

\[
\gamma_X(t, s) = \gamma_X(t + (-s), s + (-s)) = \gamma_X(t - s, 0).
\]

The right-hand side depends only on the difference \( t - s \) (the second argument is fixed at 0), so \( \gamma_X(t, s) \) is effectively a function of the lag \( h = t - s \). We conventionally denote this one-argument function by \( \gamma_X(h) \). Thus, for a weakly stationary process,

\[
\gamma_X(t, s) = \gamma_X(t - s). \qquad \blacksquare
\]

Furthermore, setting \( s = t \) yields \( \operatorname{Var}(X_t) = \operatorname{Cov}(X_t, X_t) = \gamma_X(t - t) = \gamma_X(0) \). Since \( \gamma_X(0) \) is a constant (not depending on \( t \)), the variance of a weakly stationary process is also constant.
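
As a concrete illustration of the theorem (a standard textbook example, not part of the proof above), consider the moving-average process of order one, \( X_t = Z_t + \theta Z_{t-1} \), where \( \{Z_t\} \) is white noise with mean 0 and variance \( \sigma^2 \). The mean is constant, \( E[X_t] = 0 \), and expanding the covariance gives

\[
\gamma_X(t, t+h) = \operatorname{Cov}(Z_t + \theta Z_{t-1},\, Z_{t+h} + \theta Z_{t+h-1}) =
\begin{cases}
\sigma^2 (1 + \theta^2), & h = 0, \\
\sigma^2 \theta, & |h| = 1, \\
0, & |h| \ge 2.
\end{cases}
\]

The result depends only on the lag \( h \), never on the absolute time \( t \), exactly as the theorem asserts.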

Analytical Intuition.

Imagine a grand, cosmic clockwork. If this clockwork is *weakly stationary*, the *average state* of the universe (the mean) never changes, and the *relationships* between cosmic events (the covariances) are consistent. It doesn't matter if you observe event A and event B today, or event A' and event B' a million years from now, as long as the *time difference* between them is the same. The cosmic interaction between \( X_t \) and \( X_s \) is not tied to the absolute time coordinates \( t \) or \( s \), but rather to their *separation* \( h = t - s \). The universe is not getting 'older' in a way that alters these fundamental statistical patterns; it is a timeless, statistically self-similar mechanism. If we know how much \( X_t \) influences \( X_{t+5} \), we automatically know how much \( X_{t+100} \) influences \( X_{t+105} \): it is the same influence, because the lag is still 5 units.
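
This intuition is easy to check numerically. Below is a minimal sketch (the AR(1) model, seed, and window sizes are illustrative choices, not from the original text): we simulate a weakly stationary AR(1) process and estimate the covariance at lag 5 from two windows far apart in time; both estimates agree with the same theoretical value.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a weakly stationary AR(1) process X_t = phi * X_{t-1} + Z_t
# with |phi| < 1 and unit-variance Gaussian noise.
phi, n = 0.7, 200_000
z = rng.standard_normal(n)
x = np.empty(n)
x[0] = z[0] / np.sqrt(1 - phi**2)  # start in the stationary distribution
for t in range(1, n):
    x[t] = phi * x[t - 1] + z[t]

def lag_cov(segment, h):
    """Sample covariance of the pairs (X_t, X_{t+h}) within one segment."""
    a, b = segment[:-h], segment[h:]
    return np.mean((a - a.mean()) * (b - b.mean()))

# The lag-5 covariance estimated from two windows far apart in time
# agrees (up to sampling noise) with the theoretical gamma(5).
print(lag_cov(x[:50_000], 5))    # early window
print(lag_cov(x[-50_000:], 5))   # late window
print(phi**5 / (1 - phi**2))     # theory: gamma(5) = phi^5 / (1 - phi^2)
```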

Institutional Warning.

Students often confuse weak stationarity with strict stationarity, or assume that constant variance is a *separate* condition rather than a special case of the lag-dependent autocovariance (the case \( h = 0 \)). They may also struggle to see why "time-invariant autocovariance" implies dependence on the lag alone, which is exactly what the proof above makes precise.

Academic Inquiries.

01

What is the practical significance of autocovariance depending only on lag?

It simplifies modeling considerably. Instead of estimating a full covariance matrix \( \gamma_X(t, s) \) (which grows as \( T^2 \) for \( T \) observations), we only need to estimate \( \gamma_X(h) \) for a few relevant lags \( h \). This makes models like ARMA much more tractable and allows prediction based on past patterns; see the sketch below.
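
As a concrete sketch of this reduction (the function name `sample_acvf` is illustrative, not from any particular library): under weak stationarity, the entire second-order structure of a series collapses to a short vector of lagged estimates.

```python
import numpy as np

def sample_acvf(x, max_lag):
    """Sample autocovariance gamma_hat(h) for h = 0, ..., max_lag.

    Uses the conventional divisor n (rather than n - h), which keeps the
    estimated autocovariance sequence positive semidefinite.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    xc = x - x.mean()
    return np.array([xc[: n - h] @ xc[h:] for h in range(max_lag + 1)]) / n

# For T = 10_000 observations the full covariance matrix gamma(t, s)
# would have 10^8 entries; under weak stationarity a few lags suffice.
rng = np.random.default_rng(1)
x = rng.standard_normal(10_000)  # white noise: gamma(0) ~ 1, other lags ~ 0
print(sample_acvf(x, max_lag=5).round(3))
```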

02

Is constant variance explicitly stated in the definition of weak stationarity, or is it implied?

It is implied. If \( \gamma_X(t, s) = \gamma_X(t - s) \), then setting \( s = t \) gives \( \operatorname{Var}(X_t) = \operatorname{Cov}(X_t, X_t) = \gamma_X(0) \). Since \( \gamma_X(0) \) is a constant (not depending on \( t \)), the variance is constant. Some definitions state it explicitly for clarity, but it is derivable; a numerical check follows.
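
A quick numerical illustration of this identity (a sketch with arbitrary simulated data; note that `np.var` divides by \( n \) by default, matching the lag-0 autocovariance estimator):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(5_000)

# gamma_hat(0): the lag-0 sample autocovariance...
xc = x - x.mean()
gamma0 = (xc @ xc) / x.size

# ...coincides with the sample variance (np.var also divides by n by default).
print(np.isclose(gamma0, np.var(x)))  # True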

03

Does strict stationarity imply weak stationarity?

Yes, provided the first and second moments exist. A strictly stationary process has time-invariant *joint distributions* for any finite set of observations \( (X_{t_1}, \ldots, X_{t_n}) \). This stronger condition implies time-invariant means, variances, and autocovariances, since these are derived from the joint moments. The moment condition matters: an i.i.d. Cauchy sequence is strictly stationary yet not weakly stationary, because its mean and variance do not exist.

