### The art of matrix multiplication

Before you start reading, we recommend first reading about the State Space Model and the Kalman filter.

Matrix multiplication is at the core of the Kalman filter, for any given State Space Model. The state vector at time *t* contains the current information needed for the different dynamic processes described in the State Space Model. The transition of the state vector from time *t* to time *t+1* is carried out by matrix multiplications involving the system matrices of the State Space Model. The number of calculations remains relatively small for many State Space Models, even when the time series is long. However, the number of calculations grows rapidly as the state vector dimension increases (and hence the dimensions of the system matrices increase). We show next how many calculations are needed to multiply two matrices.

**Matrix multiplication**

Let's say we have two matrices $A$ and $B$ with dimensions $(n \times p)$ and $(p \times m)$. To obtain the matrix product $C = AB$ we need $n \times m \times (2p - 1)$ calculations: each of the $n \times m$ entries of $C$ requires $p$ multiplications and $p - 1$ additions. Now let's assign some numbers to $n, m, p$ to get an idea of how rapidly the number of calculations increases.

$n$ | $p$ | $m$ | $n \times m \times (2p - 1)$ |
---|---|---|---|
1 | 1 | 1 | 1 |
2 | 2 | 2 | 12 |
8 | 8 | 8 | 960 |
32 | 32 | 32 | 64,512 |
128 | 128 | 128 | 4,177,920 |
512 | 512 | 512 | 268,173,312 |

### Our engine

Time series can contain many different dynamic features, which require a more extensive model for an adequate analysis. The various dynamic features lead to a higher-dimensional state vector in a State Space Model and hence higher-dimensional system matrices. For their treatment we need algorithms that are efficiently implemented in a fast computer language. We use C. Although it is well established that C is a highly effective computer language in terms of speed, the efficient implementation of the algorithms is equally important. The development of such an implementation is far from trivial.

**Speed comparison**

We use a time series of length T=1000 and calculate the log likelihood M=100 times, which is typical in an optimization routine. In fact, in many cases optimization algorithms need far more than M=100 calls to the log likelihood function. We perform the analysis for a range of model specifications. Time is given in seconds. "X var" stands for explanatory variables.

Model description | Regular C (s) | Efficient C (s) | $\times$ faster |
---|---|---|---|
Local level model | 0.017 | 0.008 | 2.27 |
Local linear trend model | 0.028 | 0.010 | 2.78 |
Trend + monthly seasonal | 0.308 | 0.138 | 2.23 |
Trend + monthly seasonal + cycle | 0.602 | 0.284 | 2.12 |
Trend + monthly seasonal + 5 X var | 0.793 | 0.254 | 3.12 |
Trend + monthly seasonal + 50 X var | 9.726 | 1.102 | 8.83 |
Trend + monthly seasonal + 250 X var | 720.908 | 6.144 | 117.34 |
Trend + three seasonals + 150 X var | 7747.600 | 18.465 | 419.58 |

We learn from the table above that once the model becomes more involved, the implementation of efficient algorithms can truly make a difference. Some of our clients need to analyze thousands of time series. Without our advanced algorithms, for certain model configurations it simply becomes **impossible** to get the computations done in a reasonable amount of time!