# Variance Propagation

Developing Models for Kalman Filters

Last installment, we observed that the variance in the state values changes over time, starting with a large covariance that reflects a lack of knowledge about the initial system state. As system activity continues, more and more information from observation of system outputs is incorporated. In time, the variations in the state variables come to depend on the driving internal noise rather than on the particular representation of initial uncertainty.

In this installment, we will examine this in more detail, watching the changes in the state covariance analytically.

## Variance Propagation in an Observer System

Suppose the system starts in a completely
unknown initial state. We can represent this by picking an arbitrary
state vector of initial values randomly from within the normal
operating range, obtaining initial state `x^{0}`.
For purposes of implementation, a zero state is as representative
of the input range as any. So a plausible choice for initializing
in the presence of no initial state information is simply to start
with a zero state vector. For normalized variables,
the covariance matrix `P^{0}`
assumed for this state vector has values `1/3`
on the main diagonal by
construction.
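As a quick sanity check on that `1/3`, here is a small sketch (the two-state dimension is an assumption for illustration): a variable drawn uniformly from the normalized range `[-1, 1]` has variance `(1 - (-1))^2 / 12 = 1/3`, so the constructed initial covariance is simply `(1/3) I`.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-state system with states normalized to [-1, 1].
# Drawing the unknown initial state uniformly from that range gives
# each state variable a variance of (1 - (-1))**2 / 12 = 1/3.
n = 2
samples = rng.uniform(-1.0, 1.0, size=(100_000, n))
print(np.var(samples, axis=0))   # each entry is close to 1/3

# So the constructed initial covariance is simply (1/3) * I.
P0 = np.eye(n) / 3.0
print(P0)
```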

The state transition equation for the first update has terms for the state transition response, coupled inputs, and random state noise.

x^{1} = A x^{0} + B u^{0} + w^{0}

However, we know from the separability property of linear
systems that we can consider the effects of the input terms `B u`
separately from the effects of the noise. Consequently, we can
simplify this by considering only the disturbance offset terms
`e`.

e^{1} = A e^{0} + w^{0}
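Before adding the observer, this simple propagation already determines how the covariance evolves: `P^{1} = A P^{0} A^{T} + Q`. A sketch with hypothetical matrices (illustrative values, not from the text) confirms this against a Monte Carlo run:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical state transition matrix A and noise covariance Q.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
Q = np.diag([0.02, 0.05])
P0 = np.eye(2) / 3.0             # initial uncertainty, as constructed above

# Analytic covariance propagation for e1 = A e0 + w0.
P1 = A @ P0 @ A.T + Q

# Monte Carlo check: push many sampled error vectors through one update.
N = 200_000
e0 = rng.multivariate_normal(np.zeros(2), P0, size=N)
w0 = rng.multivariate_normal(np.zeros(2), Q, size=N)
e1 = e0 @ A.T + w0
print(P1)
print(np.cov(e1, rowvar=False))  # agrees with P1 to sampling error
```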

When we add in some additional observer terms, the feedback
term `K` will couple some of the original noise back
from output to input, and in addition to that, couple some amount
of the output observation noise. This has the effect of modifying
the noise propagation terms and adding a new one.

e^{1} = (I - K C) A e^{0} + (I - K C) w^{0} - K v^{0}

We have previously defined notations for covariances of the
noise terms `e`, `w`, and `v`.

- New internal random noise with covariance `Q`. This comes from unknown sources and directly disturbs the variables in the next state.
- Observation random noise with covariance `V`. This is coupled into the state error through the observer action.
- State uncertainty propagating in from the initial state vector `x^{0}`. This is represented by the constructed `P^{0}` covariance matrix.

We know that variances are additive when random variables are
added. We also know that for a transformed variable `M x`,
where `x` has covariance `P`, the
covariance of the result is going to be `M P M^{T}`.
Consequently, we can determine that the new state covariance
subsequent to the update will be as follows.

P^{1} = (I - K C) A P^{0} A^{T} (I - K C)^{T} + (I - K C) Q (I - K C)^{T} + K V K^{T}
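A minimal sketch of this update, with illustrative values for `A`, `C`, `K`, `Q`, and `V` (all of them assumptions chosen for demonstration, not values from the text):

```python
import numpy as np

# Hypothetical single-output observer for a 2-state system.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
C = np.array([[1.0, 0.0]])      # output map
K = np.array([[0.3],
              [0.1]])           # observer feedback gain
Q = np.diag([0.02, 0.05])       # internal noise covariance
V = np.array([[0.1]])           # observation noise covariance
P0 = np.eye(2) / 3.0            # constructed initial covariance

IKC = np.eye(2) - K @ C

# Covariance update matching the equation above, term by term:
# noise propagated through the modified transition, the modified
# direct-noise term, and the new observation-noise term.
P1 = IKC @ A @ P0 @ A.T @ IKC.T + IKC @ Q @ IKC.T + K @ V @ K.T
print(P1)
```

Because each term has the form `M P M^{T}` with a positive semidefinite `P`, the result stays symmetric and positive definite, as a covariance must.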

The output observation equations apply transformation `C`
to the vector of state variables, so this will also show variance effects
that can be calculated. The two contributing terms
are the state vector and a vector of random observation noise
described by covariance `V`.

y^{1} = C x^{1} + v^{0}

Y^{1} = C P^{1} C^{T} + V
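The output variance calculation is a one-liner; here is a sketch with a hypothetical post-update covariance `P1` (the numbers are illustrative assumptions):

```python
import numpy as np

C = np.array([[1.0, 0.0]])       # output map, as in the sketch above
V = np.array([[0.1]])            # observation noise covariance
P1 = np.array([[0.25, 0.03],
               [0.03, 0.20]])    # a hypothetical post-update state covariance

# Y1 = C P1 C^T + V: the observed output's variance combines the
# projected state uncertainty with the observation noise.
Y1 = C @ P1 @ C.T + V
print(Y1)                        # close to 0.35 for these numbers
```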

Given the `P^{0}` variance of the state, we can repeat
this processing to determine the state variance at each instant;
the resulting recursion is a discrete form of the *Riccati
equations*. [1]
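To see the recursion settle, a short sketch (reusing the same illustrative matrices as above, which are assumptions rather than values from the text) can iterate the covariance update from two very different initial guesses:

```python
import numpy as np

A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
K = np.array([[0.3],
              [0.1]])
Q = np.diag([0.02, 0.05])
V = np.array([[0.1]])

def step(P):
    """One application of the covariance update recursion."""
    IKC = np.eye(2) - K @ C
    return IKC @ A @ P @ A.T @ IKC.T + IKC @ Q @ IKC.T + K @ V @ K.T

# Two very different initial covariances converge to the same answer,
# matching the earlier observation that the initial uncertainty is
# eventually forgotten in favor of the driving noise.
Pa, Pb = np.eye(2) / 3.0, 100.0 * np.eye(2)
for _ in range(200):
    Pa, Pb = step(Pa), step(Pb)
print(Pa)
print(np.allclose(Pa, Pb))       # True: the steady state forgets P0
```

Convergence here depends on the closed-loop matrix `(I - K C) A` being stable, which these illustrative values satisfy.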

As always, we are not going to fully believe any of this until we see it put into practice. But before we can generate a suitable simulation, we need to know how to produce random vectors having a specified covariance; that's the topic for next time.

[1] The term *Riccati Equations*
originally came from the study of a family of differential
equations by Jacopo Riccati (1676–1754), see the Encyclopedia
of Mathematics, Springer, ISBN 978-1-55608-010-4 at a university
technical library near you. Since those original studies, the
meaning has been expanded to include "algebraic"
quadratic equations with a very similar form.