# Recursive Least Squares, Part 1

Developing Models for Kalman Filters

## Toward better evaluation efficiency

We have seen how the Normal Equations and Linear Least Squares method for determining best fit model parameter values can be constructed sequentially, by an updating process. However, it was noted that when it comes time to evaluate parameter values, solving the Normal Equations can require some relatively intense calculations. If you need to do this often — for example, in an adaptive real-time scenario where best parameter estimates must be pushed into service as quickly as possible — intensive calculations for the matrix inverse can consume an immense amount of processing. For such a scenario, is there a more efficient strategy?

The truth is, you don't need any of this for most model building purposes. Unfortunately, it is what you are going to see in virtually all the "scholarly literature" elsewhere. So the main point of this diversion is to show how all of those additional and probably unnecessary complications are really the same Linear Least Squares in disguise.

There are two ideas to explore.

- Is there any computational advantage to calculating only *incremental adjustments to the parameter values* at each step, rather than fully recomputing new parameter values?
- Is there a way to maintain the Normal Equations matrix in an already-inverted form, so that the numerically expensive inversion operations can be avoided?

## Incremental parameter adjustments

Start from the unweighted update formula for the Normal Equations that we developed previously.

$$\left( m_{n+1}^{T} m_{n+1} + M_{n}^{T} M_{n} \right) \cdot \left( p_{n} + \Delta p \right) = \left( m_{n+1}^{T} \cdot y_{n+1} + M_{n}^{T} \cdot Y_{n} \right)$$

If we now do a partial algebraic expansion of this, we can get the following equivalent formula in which the effects of the new and the previous terms are separated.

$$m_{n+1}^{T} m_{n+1} \cdot p_{n} + M_{n}^{T} M_{n} \cdot p_{n} + M_{n+1}^{T} M_{n+1} \cdot \Delta p = m_{n+1}^{T} \cdot y_{n+1} + M_{n}^{T} \cdot Y_{n}$$

We can observe that the $M_{n}^{T} M_{n} \cdot p_{n}$ term on the left and the $M_{n}^{T} \cdot Y_{n}$ term on the right are known and equal: this is just the Normal Equation condition as established during the previous update. We can cancel them from each side of the equation.

$$m_{n+1}^{T} m_{n+1} \cdot p_{n} + M_{n+1}^{T} M_{n+1} \cdot \Delta p = m_{n+1}^{T} \cdot y_{n+1}$$

Collect the two terms with the common $m_{n+1}^{T}$ factor on the right side of the equation.

$$M_{n+1}^{T} M_{n+1} \cdot \Delta p = m_{n+1}^{T} \cdot \left( y_{n+1} - m_{n+1} \cdot p_{n} \right)$$

- The quantity in parentheses on the right reduces to a scalar value for the case when a single new constraint is added to the Normal Equations system.
- You will find expressions of this form in the scholarly papers and textbooks. If you have ever wondered where that "y minus something" term comes from in the various updating formulas, now you know.
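The incremental update can be checked numerically. Below is a minimal sketch using NumPy; the variable names (`p_n`, `m_new`, `dp`) and the simulated data are illustrative assumptions, not from the text.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated data: rows of M are constraint (regressor) rows, Y the observations.
M = rng.normal(size=(30, 3))
p_true = np.array([1.5, -2.0, 0.5])
Y = M @ p_true + 0.01 * rng.normal(size=30)

# Parameters fitted from the first n constraints -- the "previous" solution p_n.
n = 20
p_n, *_ = np.linalg.lstsq(M[:n], Y[:n], rcond=None)

# One new constraint row arrives.
m_new = M[n:n+1]          # shape (1, 3)
y_new = Y[n:n+1]          # shape (1,)

# Updated normal-equations matrix M_{n+1}^T M_{n+1}.
A = M[:n+1].T @ M[:n+1]

# Incremental form: solve only for the adjustment Delta p.
# The right side is m_{n+1}^T (y_{n+1} - m_{n+1} p_n), the "y minus something" term.
residual = y_new - m_new @ p_n
dp = np.linalg.solve(A, m_new.T @ residual)

p_updated = p_n + dp

# Compare against fully recomputing the least-squares fit on all n+1 rows.
p_full, *_ = np.linalg.lstsq(M[:n+1], Y[:n+1], rcond=None)
print(np.allclose(p_updated, p_full))
```

Because the previous solution already satisfies its own Normal Equations, the adjustment recovers the full recomputed fit up to floating-point roundoff.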

## The Matrix Inversion Lemma

Before we can study the idea of maintaining the Normal Equation matrix
in inverse form, we will need to know about an obscure matrix formula, a
special case of the Sherman-Morrison-Woodbury matrix inversion formula,
often referred to as "*the Matrix Inversion Lemma.*" This particular
variant happens to be useful when working with variance and linear least squares problems.[1]

The lemma is as follows:

*For compatible matrices, with all matrices of full rank and matrix F square and nonsingular, the following relationship holds.*
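One common way to write this special case, using $F$ for the square nonsingular matrix and $G$, $H$ as assumed names for the compatible factors, is:

$$\left( F + G H \right)^{-1} = F^{-1} - F^{-1} G \left( I + H F^{-1} G \right)^{-1} H F^{-1}$$

In the least squares application, $G$ and $H$ reduce to a single column and a single row, so the inner inverse becomes an ordinary scalar division.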

It is not necessary to read through the following proof, but if you do care to trace through the steps, the main thing to notice is that it is just some simple (but careful) algebra.

If one of the expressions above really is the inverse of the other, when they are multiplied together in either order, the product must yield an identity matrix. We will form these product expressions, then follow up with algebraic manipulations to verify that the necessary reduction occurs.

### Proof: the left-right product case

### Proof: the right-left product case
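Both reductions can also be spot-checked numerically. The following sketch assumes the special-case form $(F + GH)^{-1} = F^{-1} - F^{-1}G(I + HF^{-1}G)^{-1}HF^{-1}$, with $F$, $G$, $H$ as assumed names; it multiplies the candidate inverse against $F + GH$ in both orders and confirms that each product is the identity.

```python
import numpy as np

rng = np.random.default_rng(42)

k, r = 4, 2
F = rng.normal(size=(k, k)) + 5.0 * np.eye(k)   # square and comfortably nonsingular
G = 0.25 * rng.normal(size=(k, r))
H = rng.normal(size=(r, k))

Finv = np.linalg.inv(F)

# Candidate inverse from the lemma:
# (F + G H)^{-1} = F^{-1} - F^{-1} G (I + H F^{-1} G)^{-1} H F^{-1}
middle = np.linalg.inv(np.eye(r) + H @ Finv @ G)
lemma_inv = Finv - Finv @ G @ middle @ H @ Finv

A = F + G @ H

# Left-right and right-left products must both reduce to the identity.
print(np.allclose(A @ lemma_inv, np.eye(k)))
print(np.allclose(lemma_inv @ A, np.eye(k)))
```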

Promise: next time we will find some relevance in this unlikely-seeming theorem and apply it to the Least Squares updating problem. Then we will do some testing to verify that it works.

Footnotes:

[1] See the Wikipedia article for more information and alternate forms of the Sherman-Morrison-Woodbury matrix inversion formula.