# Restructuring State Models, part 3

Developing Models for Kalman Filters

There is one more important kind of transition matrix structure that has both practical and theoretical importance, if not the best numerical properties.

For example, suppose that you have a model that does close to what you need, but not quite close enough. Since linear systems are in general additive, it would be helpful if you could devise a new model, starting with the model you currently have, that "adds on" some new internal state variables to provide the additional behaviors. But to do this, you need to have a structure with states that act with a high degree of independence.

## Eigenvalue Transformation

Here is a quick review of *non-symmetric matrix eigenvalue
theory*, emphasizing the case of a square, real-valued, non-symmetric
matrix. Feel free to skip this entire installment if you are completely
up-to-speed on this topic.

For a given real-valued matrix `A`, an *eigenvector* or *characteristic vector* `e` of that matrix is a special vector such that the result of multiplying the matrix times this vector is the original `e` vector scaled by some constant `λ`, called an *eigenvalue* or *characteristic value*.

$$A e = e \lambda$$

All matrices have eigenvalues and eigenvectors (complex-valued matrices too). For our case of real-valued square matrices, some of the eigenvalues can be complex numbers, and the corresponding eigenvectors are also complex. It is easy to verify that if a complex number is an eigenvalue, then its complex conjugate is also an eigenvalue, with the complex conjugate eigenvector associated with it.
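For a real-valued 2×2 matrix, the eigenvalues come straight from the quadratic characteristic polynomial, which makes the conjugate-pair property easy to check numerically. Here is a minimal pure-Python sketch; the matrix values are arbitrary illustrations, not from any particular model.

```python
import cmath

# Arbitrary real-valued 2x2 matrix (illustrative values only)
a, b = 0.92, -0.09
c, d = 0.20, 0.88

# Characteristic polynomial: lam^2 - (a+d)*lam + (a*d - b*c) = 0
tr = a + d             # trace
det = a * d - b * c    # determinant
disc = cmath.sqrt(tr * tr - 4.0 * det)
lam1 = (tr + disc) / 2.0
lam2 = (tr - disc) / 2.0

print(lam1)                          # a complex eigenvalue
print(lam2)                          # its complex conjugate
print(abs(lam1 - lam2.conjugate()))  # ~0.0: they form a conjugate pair
```

Any real 2×2 matrix whose discriminant `tr*tr - 4*det` is negative will show the same pairing.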

The real and imaginary parts of a complex-valued eigenvector can be separated and managed in a real-valued form. When either of these special vectors is multiplied by the `A` matrix, the result is a mix of the two real-valued parts rather than a scaled version of the original vector.

$$A \left[ e_r \;\; e_i \right] = \left[ e_r \;\; e_i \right] \left[ \begin{array}{rr} \alpha & \beta \\ -\beta & \alpha \end{array} \right]$$

where `α` and `β` are the real and imaginary parts of the eigenvalue, and `e_r` and `e_i` are the real and imaginary parts of the eigenvector.
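This block relation is easy to verify numerically. The sketch below (NumPy, with an arbitrary illustrative matrix) extracts a complex conjugate eigenpair, splits it into real and imaginary parts, and checks that multiplying by the matrix mixes the two parts exactly as the 2×2 block predicts.

```python
import numpy as np

# Arbitrary real matrix with a complex conjugate eigenvalue pair (illustrative)
A = np.array([[0.92, -0.09],
              [0.20,  0.88]])

lam, vecs = np.linalg.eig(A)

# Take the eigenvalue with positive imaginary part and its eigenvector
k = int(np.argmax(lam.imag))
alpha, beta = lam[k].real, lam[k].imag
e_r, e_i = vecs[:, k].real, vecs[:, k].imag

# Check: A [e_r e_i] = [e_r e_i] [[alpha, beta], [-beta, alpha]]
E2 = np.column_stack([e_r, e_i])
block = np.array([[alpha,  beta],
                  [-beta, alpha]])
print(np.allclose(A @ E2, E2 @ block))  # True
```

Expanding the block product column by column recovers the two scalar identities `A e_r = α e_r − β e_i` and `A e_i = β e_r + α e_i`, which is just the eigenvalue equation with real and imaginary parts separated.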

To make the eigenvectors unique, the convention is to scale them so that they have unit magnitude — the sum of squared terms in the normalized vector is 1.0. (For a complex eigenvector, the sum of products of the conjugate terms is 1.0 — real and imaginary parts cannot be scaled separately.)
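In code, normalizing a complex eigenvector means dividing by the square root of the sum of conjugate products; the real and imaginary parts are not normalized separately. A small pure-Python sketch with illustrative vector values:

```python
import math

# An (illustrative) unnormalized complex eigenvector
e = [1.0 + 2.0j, 0.5 - 1.0j, -0.25 + 0.0j]

# Magnitude: sqrt of the sum of conjugate products conj(e_k) * e_k
mag = math.sqrt(sum((ek.conjugate() * ek).real for ek in e))
e_unit = [ek / mag for ek in e]

# The normalized vector has unit magnitude
check = sum((ek.conjugate() * ek).real for ek in e_unit)
print(check)  # 1.0 (to rounding)
```

Note that even after normalization the eigenvector is only unique up to a complex phase factor of magnitude 1; libraries differ in which phase they return.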

If a vector is an eigenvector of a matrix, and a scalar scaling factor is applied to every term of this matrix, the vector remains an eigenvector, but the eigenvalue changes by the same scaling factor applied to the matrix.
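To see the scaling property concretely: take a matrix with a known eigenpair, scale every term, and check that the same vector is still an eigenvector with a correspondingly scaled eigenvalue. Pure-Python sketch with illustrative values:

```python
# Illustrative matrix with known eigenvector [1, 1] and eigenvalue 3
A = [[2.0, 1.0],
     [1.0, 2.0]]
v = [1.0, 1.0]

def matvec(M, x):
    # Plain matrix-vector product
    return [sum(M[r][c] * x[c] for c in range(len(x))) for r in range(len(M))]

s = 5.0  # scalar applied to every term of the matrix
As = [[s * t for t in row] for row in A]

Av = matvec(As, v)
# v is still an eigenvector; the eigenvalue scaled from 3 to s * 3 = 15
print(Av)  # [15.0, 15.0]
```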

The number of linearly independent eigenvectors you can find for any matrix equals the rank of the matrix. Degenerate eigenvectors with associated eigenvalues 0.0 can be artificially constructed for the case of matrices that are not full rank. This is not a concern for our models, which will be constructed to be full rank.

The eigenvectors (or special real-valued pairs) can be collected as columns of a matrix `E`. Eigenvalues (or, for the complex pair cases, eigenvalue parts) are collected to form a matrix `Λ` with the eigenvalues along the main diagonal in the same order as the eigenvectors. Once organized in this fashion, the relationship between the matrix `Λ` and the matrix `E` can then be summarized by the following.

$$A E = E \Lambda$$

The `E` matrix, being full rank, has an inverse. The following two relationships can then be shown to apply.

$$E^{-1} A E = \Lambda \qquad \qquad E \Lambda E^{-1} = A$$

If the `E` matrix is selected as a transformation mapping from one set of state variables into an equivalent alternative set,^{[1]} the relationships above tell us that the resulting state transition equations can be put into an "almost diagonal" form in which the states have minimal interaction with each other, acting like a set of low-order systems running in parallel.

## A little more detail

Let's follow through by applying this. Start with the original state transition model equations.

$$x^{i+1} = A x^i + B u^i$$

$$y^i = C x^i$$

Perform an eigenvalue analysis and from this construct the `E` and `Λ` matrices. The critical calculations are provided by the Octave `eig` function, but an alternative is the `realeig`^{[2]} function, which uses the `eig` function and then constructs the real-valued form as previously discussed. This gives you the `E` matrix, the `Λ` matrix, and the `E^{-1}` matrix.

Substitute the decomposed matrix form for the `A` matrix in the original state transition equations, and then premultiply on both sides by the `E^{-1}` matrix.

$$E^{-1} x^{i+1} = \Lambda E^{-1} x^i + E^{-1} B u^i$$

Now introduce a notation for the new variables generated by the eigenvector mapping. We can now see that the transformed state variable system has the transition matrix in almost-diagonal form.

$$z = E^{-1} x \qquad \qquad x = E z$$

$$z^{i+1} = \Lambda z^i + E^{-1} B u^i$$

$$y^{i+1} = C E z^{i+1}$$
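As a cross-check on these equations, the two realizations can be simulated side by side: the output sequence must be identical whether propagated through (A, B, C) or through (Λ, E⁻¹B, CE). The NumPy sketch below uses the article's test-case model. Note that `numpy.linalg.eig` returns a complex-valued `E` and `Λ` rather than the real block form discussed above, but the outputs reconstructed through them are still real to rounding.

```python
import numpy as np

# State transition model from the article's test case
A = np.array([[ 1.03, 0.14,  0.00,  0.00],
              [-0.10, 0.92, -0.09,  0.00],
              [ 0.00, 0.20,  0.88, -0.40],
              [ 0.00, 0.00,  0.00,  0.78]])
B = np.array([0.0, 0.0, 0.0, 1.0])
C = np.array([0.86, 0.42, -0.48, -0.60])

lam, E = np.linalg.eig(A)    # complex-valued E and diagonal of Lambda
Einv = np.linalg.inv(E)
Bd = Einv @ B                # E^-1 B
Cd = C @ E                   # C E

# Simulate both realizations with the same input sequence
u = np.ones(20)
x = np.zeros(4)
z = np.zeros(4, dtype=complex)
y_x, y_z = [], []
for ui in u:
    y_x.append(C @ x)
    y_z.append((Cd @ z).real)
    x = A @ x + B * ui
    z = lam * z + Bd * ui    # Lambda is diagonal: elementwise multiply

print(np.allclose(y_x, y_z))  # True: the two realizations agree
```

The diagonal update `lam * z` is the payoff of the transformation: each transformed state evolves independently of the others.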

## Let's test it...

We fire up Octave and enter the data for an almost arbitrarily selected state transition model.

```
Amat = [ ...
    1.03,  0.14,  0.00,  0.00; ...
   -0.10,  0.92, -0.09,  0.00; ...
    0.00,  0.20,  0.88, -0.40; ...
    0.00,  0.00,  0.00,  0.78 ];
Cmat = [ 0.86, 0.42, -0.48, -0.60 ];
Bmat = [ 0, 0, 0, 1.00 ]';
```

Now apply the `realeig` function to perform a normal complex-valued eigenvector-eigenvalue decomposition of the matrix, and then reorganize it in real-valued form as discussed. Apply this to the `Amat` state transition matrix.

```
% Eigenvector decomposition
[reig, lam, reiginv] = realeig(Amat)
```

We get the following results for the `E`, `Λ`, and `E^{-1}` matrices.

```
reig =

   0.72095   0.28328  -0.31933  -0.20947
  -0.29264   0.16670   0.56193   0.37405
  -0.62816   0.68864   0.00000   0.81460
   0.00000   0.00000   0.00000   0.39068

lam =

   0.97317   0.00000   0.00000   0.00000
   0.00000   0.92841   0.16320   0.00000
   0.00000  -0.16320   0.92841   0.00000
   0.00000   0.00000   0.00000   0.78000

reiginv =

   1.11177   0.63179  -0.61027   1.26368
   1.01414   0.57630   0.89546  -1.87516
   0.27814   1.93763  -0.58345  -0.48949
   0.00000   0.00000   0.00000   2.55966
```

The `Λ` matrix has the expected block-diagonal form, indicating two real-valued eigenvalues and one pair of complex conjugate eigenvalues. Let's apply the decomposition formula to try restoring the original matrix.

```
Amat
testA = reig * lam * reig^-1
```

```
Amat =

   1.03000   0.14000  -0.00000   0.00000
  -0.10000   0.92000  -0.09000  -0.00000
  -0.00000   0.20000   0.88000  -0.40000
   0.00000   0.00000   0.00000   0.78000

testA =

   1.03000   0.14000  -0.00000   0.00000
  -0.10000   0.92000  -0.09000  -0.00000
  -0.00000   0.20000   0.88000  -0.40000
   0.00000   0.00000   0.00000   0.78000
```

These match: the original matrix is successfully reconstructed from the near-diagonal form, verifying the transformation. So let us now use it to construct the transformed state transition equations.

```
% Diagonalized state equations
Ad = lam
Bd = reiginv * Bmat
Cd = Cmat * reig
```

```
Ad =

   0.97317   0.00000   0.00000   0.00000
   0.00000   0.92841   0.16320   0.00000
   0.00000  -0.16320   0.92841   0.00000
   0.00000   0.00000   0.00000   0.78000

Bd =

   1.26368
  -1.87516
  -0.48949
   2.55966

Cd =

   0.798629  -0.016916  -0.038611  -0.648458
```

Footnotes:

[1] For review, see the Transformations article in this series.

[2] This is a simple, special purpose function, not part of the regular Octave function libraries. You can download a copy of realeig.m to study or use it.