LinkedIn Series

All posts in one place. Read in order or jump to what you need.

Overview & How to Use

This page collects my LinkedIn series in a playlist-style format. Each series has a short description and a numbered list of posts with direct links to LinkedIn. Start at Week 1 to follow the full narrative, or skim the summaries and jump into a topic that matches your needs.

Follow me on LinkedIn for new posts.

State Space Models

Playlist
Week 1 · What is a State Space Model?
August 13, 2025 | Read on LinkedIn

Introduces state space models as a way to separate signal from noise, combining interpretable components (trend, seasonalities, cycles, regressors) with Kalman filtering for estimation.
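
For a taste of the notation (the standard textbook form, not quoted from the post): the simplest SSM is the local level model, which splits each observation into an unobserved level, the signal, plus noise.

```latex
\begin{aligned}
y_t       &= \mu_t + \varepsilon_t, & \varepsilon_t &\sim N(0,\sigma^2_\varepsilon) && \text{(observation = signal + noise)}\\
\mu_{t+1} &= \mu_t + \eta_t,        & \eta_t        &\sim N(0,\sigma^2_\eta)        && \text{(the signal drifts over time)}
\end{aligned}
```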

Week 2 · SSMs vs. ARIMA
Read on LinkedIn

Compares SSMs with ARIMA, highlighting SSM strengths on trends, breaks, and missing values, and noting that ARIMA itself can be written in state space form.
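
As a concrete instance of that last point, here is the textbook rewrite of an AR(1), $y_t = \phi y_{t-1} + \varepsilon_t$, as a one-dimensional state space model:

```latex
\begin{aligned}
y_t          &= \alpha_t                           && \text{(observation equation, no measurement noise)}\\
\alpha_{t+1} &= \phi\,\alpha_t + \varepsilon_{t+1} && \text{(state equation carrying the AR(1) recursion)}
\end{aligned}
```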

Week 3 · Building Blocks
Read on LinkedIn

Shows how SSMs are built like Lego: combine trend, seasonalities, cycles, and regressors to decompose and explain complex time series.
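
The posts don't prescribe a library, but as one possible rendering, here is a minimal sketch of the Lego idea using statsmodels' UnobservedComponents on synthetic data (all names and numbers below are illustrative):

```python
import numpy as np
import statsmodels.api as sm

# Synthetic monthly series: slow trend + yearly seasonal + noise.
rng = np.random.default_rng(0)
t = np.arange(120)
y = 0.05 * t + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(scale=0.5, size=120)
X = rng.normal(size=(120, 1))  # a hypothetical regressor

# Snap the blocks together: trend + seasonal + cycle + regressor.
mod = sm.tsa.UnobservedComponents(
    y,
    level="local linear trend",
    seasonal=12,
    cycle=True,
    stochastic_cycle=True,
    exog=X,
)
res = mod.fit(disp=False)
print(res.summary())  # one variance per component, plus the regression beta
```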

Week 4 · The Kalman Filter
Read on LinkedIn

Explains the Kalman filter’s prediction–update logic and why it yields optimal state estimates given uncertainty in data and model.
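
As an illustration (my own bare-bones NumPy version, not code from the post), the full recursion for the local level model fits in a short loop:

```python
import numpy as np

def kalman_filter(y, s2_eps, s2_eta, a0=0.0, p0=1e7):
    """Kalman filter for the local level model y_t = mu_t + eps_t."""
    n = len(y)
    a_pred, p_pred = np.empty(n + 1), np.empty(n + 1)  # predicted state mean/var
    a_filt, p_filt = np.empty(n), np.empty(n)          # filtered state mean/var
    a_pred[0], p_pred[0] = a0, p0                      # vague prior on the level
    for t in range(n):
        # Update: weigh the new observation against the model's prediction.
        f = p_pred[t] + s2_eps     # variance of the one-step prediction error
        k = p_pred[t] / f          # Kalman gain: how much to trust the data
        v = y[t] - a_pred[t]       # prediction error (innovation)
        a_filt[t] = a_pred[t] + k * v
        p_filt[t] = p_pred[t] * (1 - k)
        # Predict: propagate the filtered state one step ahead.
        a_pred[t + 1] = a_filt[t]           # random-walk level carries over
        p_pred[t + 1] = p_filt[t] + s2_eta  # plus fresh state noise
    return a_pred, p_pred, a_filt, p_filt
```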

Week 5 · The Kalman Smoother
Read on LinkedIn

Shows how the smoother uses the full dataset to refine past state estimates, revealing clearer trends and breaks.
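
Continuing the Week 4 sketch, the fixed-interval (Rauch–Tung–Striebel) smoother is a single backward pass over the filter's output, so every state estimate borrows from the whole sample:

```python
def rts_smoother(a_pred, p_pred, a_filt, p_filt):
    """Backward smoothing pass for the local level model, run after the filter."""
    n = len(a_filt)
    a_s, p_s = a_filt.copy(), p_filt.copy()  # at t = n-1, smoothed = filtered
    for t in range(n - 2, -1, -1):           # walk backwards through time
        g = p_filt[t] / p_pred[t + 1]        # smoother gain (transition is 1 here)
        a_s[t] = a_filt[t] + g * (a_s[t + 1] - a_pred[t + 1])
        p_s[t] = p_filt[t] + g**2 * (p_s[t + 1] - p_pred[t + 1])
    return a_s, p_s                          # smoothed means and variances
```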

Week 6 · Handling Missing Data
Read on LinkedIn

Demonstrates how the filter naturally handles gaps by propagating the model forward, with uncertainty widening until data resumes.
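
In the Week 4 sketch, the gap handling is a single branch: a missing y[t] skips the update, so uncertainty keeps compounding through the prediction step until data resumes. Folding this into the filter gives a NaN-aware variant (call it kalman_filter_nan; it reappears in the Week 7 entry below):

```python
# Inside the filter loop: a missing observation contributes no update.
if np.isnan(y[t]):
    a_filt[t] = a_pred[t]   # no data: the prediction stands
    p_filt[t] = p_pred[t]   # ...and none of its uncertainty is resolved
else:
    f = p_pred[t] + s2_eps
    k = p_pred[t] / f
    a_filt[t] = a_pred[t] + k * (y[t] - a_pred[t])
    p_filt[t] = p_pred[t] * (1 - k)
a_pred[t + 1] = a_filt[t]
p_pred[t + 1] = p_filt[t] + s2_eta  # variance widens every step through the gap
```
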
Week 7 · Forecasting with State Space Models
September 24, 2025 | Read on LinkedIn

Covers multi-step forecasting by treating future points as missing, then inspecting component-wise forecasts and their uncertainty.
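
With the hypothetical kalman_filter_nan from the Week 6 entry, forecasting is literally filtering past the end of the sample; the horizon and variances below are illustrative:

```python
import numpy as np

h = 12                                           # forecast horizon (illustrative)
y_ext = np.concatenate([y, np.full(h, np.nan)])  # the future is just missing data
a_pred, p_pred, _, _ = kalman_filter_nan(y_ext, s2_eps, s2_eta)

n = len(y)
point_forecast = a_pred[n : n + h]                 # predicted states = forecasts
band = 1.96 * np.sqrt(p_pred[n : n + h] + s2_eps)  # widening 95% interval for y
```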

Week 8 · Adding Regressors
Read on LinkedIn

Explains how to include covariates as deterministic regressors (state variance set to zero), so the betas stay interpretable and their estimates and standard errors match those of recursive least squares.
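
In equations (standard form, not quoted from the post), the trick is to promote each beta to a state with zero variance; with one covariate x_t added to a local level:

```latex
\begin{aligned}
y_t         &= \mu_t + \beta_t x_t + \varepsilon_t\\
\mu_{t+1}   &= \mu_t + \eta_t\\
\beta_{t+1} &= \beta_t && \text{(zero state variance: } \beta \text{ is a fixed, interpretable coefficient)}
\end{aligned}
```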

Week 9 · Residual Diagnostics
Read on LinkedIn

Defines the three residual types and explains why one-step prediction errors are the right diagnostic for model adequacy (flat ACF, zero mean, homoskedasticity).
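
A quick sketch of those three checks, reusing the filter output from the Week 4 snippet (the Ljung–Box helper is from statsmodels):

```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

# One-step prediction errors and their variances, from the filter output above.
v = y - a_pred[:-1]       # innovations
f = p_pred[:-1] + s2_eps  # innovation variances
e = v / np.sqrt(f)        # standardized residuals: should look like white noise

print("zero mean?      ", e.mean())                  # want roughly 0
print(acorr_ljungbox(e, lags=[10], return_df=True))  # flat ACF: want large p-value
half = len(e) // 2                                   # crude homoskedasticity check
print("stable variance?", e[:half].var(), "vs", e[half:].var())
```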

Week 10 · Maximum Likelihood Estimation
Read on LinkedIn

Connects prediction errors and their variance to the likelihood; shows how maximizing the likelihood tunes parameters (e.g., the signal-to-noise ratio) for the best fit.
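
The standard prediction error decomposition makes this link explicit: the filter's innovations v_t and their variances f_t are all the likelihood needs,

```latex
\log L = -\frac{n}{2}\log 2\pi - \frac{1}{2}\sum_{t=1}^{n}\left(\log f_t + \frac{v_t^2}{f_t}\right)
```

so fitting an SSM reduces to a numerical optimization over the unknown variances (for the local level model, effectively the signal-to-noise ratio $\sigma^2_\eta / \sigma^2_\varepsilon$).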

Missing a post? Let me know and I’ll add it.