Adjusting Time Step for Discrete Models

Developing Models for Kalman Filters

In this installment we consider one final model transformation, one that modifies the time step that your model uses.

When developing a new model, you need to pick the time step for your measurement experiments carefully, so that the relevant system behaviors can be observed effectively and captured in a robust numerical representation.

  • You want the time step short. Abundant samples mean better resolution, redundancy that supports better noise filtering, reduced aliasing hazards, and easier conversions between digital and analog signals.
  • You want the time step long. Longer steps make response characteristics easier to identify, yielding better numerical precision and better model accuracy.

The "rule of thumb" of at least 10 samples per cycle for the highest meaningful frequency in your system response suggests a reasonable compromise. However, that choice is likely to be overruled later by practical restrictions. For example, after you develop your model using a sample rate of 200 samples per second, you might be forced to apply the model using a 60 updates per second digital timing clock compatible with existing PID control equipment, for better or worse. (And you will want to study how much worse...)


Motivation: sampling exponential physical responses

We have previously discussed how a stable linear system responds to disturbances: with an exponentially decaying response. States influence each other as this progresses, so all of the states show a combination of the characteristic exponential responses. These responses are always in proportion to the values of the state variables. The exponential characteristic remains the same, regardless of what happened in past history to put the system into its current state.

single state case

Suppose that the system is stable and has a single state variable. Its impulse response decreases from an initial value of 1, until eventually settling to zero. The rate of decay in the state variable value can be characterized by a rate constant λ such that

    e^(-λ·T) = D

where D is the state value after the time interval T has passed, and scalar parameter λ indicates the rate of decay. We can use a logarithm function to solve for the rate constant parameter.

    log( e^(-λ·T) ) = log(D)   →   λ = -log(D) / T
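
As a quick numeric sanity check (in Python rather than the Octave used later in this installment, and with made-up values for D and T rather than numbers from any model in this series):

```python
import math

# Hypothetical measurement: the state decays to D = 0.8 of its
# initial value over a time interval of T = 0.005 seconds.
D = 0.8
T = 0.005

# Solve e^(-lambda*T) = D for the rate constant lambda.
lam = -math.log(D) / T

# Sanity check: applying the decay over the same interval restores D.
recovered = math.exp(-lam * T)
```

A stable system gives 0 < D < 1, so the recovered rate constant λ comes out positive, as a decay rate should.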

multiple state case

For a two-variable pair associated with a complex eigenvalue of the state transition matrix, the associated settling for both variables is bounded by the same exponential decay rate.

All other multiple-state cases are just different combinations of the previous two cases, always with exponential settling in all of the terms involved in the mix, just with different proportions and rates.
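
A small sketch of the two-variable case, with made-up numbers: a decaying rotation, where both state variables oscillate but their joint magnitude is bounded by a single exponential decay envelope.

```python
import math

# Hypothetical decay rate, rotation angle per step, and time step
lam, theta, T = 2.0, 0.3, 0.005
d = math.exp(-lam * T)     # scalar decay factor per step

# 2x2 state transition: a rotation scaled by the common decay factor
A = [[ d * math.cos(theta), d * math.sin(theta)],
     [-d * math.sin(theta), d * math.cos(theta)]]

def step(x):
    return [A[0][0]*x[0] + A[0][1]*x[1],
            A[1][0]*x[0] + A[1][1]*x[1]]

# Both state variables oscillate, but the joint magnitude follows
# the scalar exponential decay exactly.
x = [1.0, 0.0]
for n in range(100):
    x = step(x)
mag = math.hypot(x[0], x[1])   # equals d**100 up to roundoff
```

Because a rotation preserves vector length, each step multiplies the magnitude by exactly the scalar factor d, which is the bounding exponential decay.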

There are some obvious special cases.

  1. Zero time step case. States do not have any opportunity to change during a degenerate "instantaneous" time step. The state transition matrix to represent this is the Identity Matrix.
  2. Infinite time step case. Given enough time, all states settle fully to zero. The state transition matrix to represent this limiting case is the Zero Matrix.
  3. Existing model case. A known model provides a state transition matrix for one particular chosen time step.
  4. Multiple-step special case. The state transition matrix for a time interval twice as long can be obtained by applying the original state transition operation twice. This idea extends to any integer number of time steps.
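
Special case 4 is easy to check numerically. Here is a minimal sketch with hypothetical decay rates, using a diagonal transition matrix (stored as just its diagonal) so the arithmetic stays transparent: squaring the one-step transition reproduces the transition for a doubled time step.

```python
import math

T = 0.005
rates = [2.0, 5.0]   # hypothetical decay rates for two uncoupled states

def transition(t):
    # diagonal state transition matrix for interval t, stored as its diagonal
    return [math.exp(-lam * t) for lam in rates]

A_one = transition(T)       # one-step transition
A_two = transition(2 * T)   # transition for a doubled time step

# applying the one-step transition twice equals the two-step transition
squared = [a * a for a in A_one]
```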


The matrix logarithm

The time step used in the model doesn't make any difference to the actual system. The system will show its exponentially decaying impulse responses, and your choice of time steps only determines which positions along those decay curves you wish to represent.

Starting from the identity matrix of a zero-time transition, as the step-time interval is allowed to increase, more exponential decay occurs, and more state interaction occurs, eventually reaching the known model transition matrix at time interval T. Is there a way to interpolate along this trajectory to evaluate the state transitions that occur at arbitrary time intervals shorter than T?

As it turns out, there is a matrix-valued exponential decay function that does this, with a matrix-valued decay rate Λ.

    e^(-Λ·T) = A

At least for non-pathological state transition matrices, a "matrix logarithm" function exists that can calculate the Λ parameter matrix.[1] Knowing Λ, you can then substitute any value you want for T to obtain the state transition matrix for that time interval. General conditions for the existence and uniqueness of the matrix logarithm are complicated, and you can research these if you wish. The following series approximation for the "principal matrix logarithm" is typically easier to evaluate than closed-form formulas.

    X = I - A

    log(A) = - Σ_{k=1..∞}  X^k / k

This can be reversed. Given a matrix logarithm, the value of the matrix can be reconstructed using a series expansion for a matrix exponential function.[2]

    exp(Λ) = Σ_{k=0..∞}  Λ^k / k!
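
The two series can be sketched in a few dozen lines of plain Python; this is a minimal illustration with small dense matrices, not production numerical code, and the logarithm series converges only when the eigenvalues of I - A are small enough in magnitude. The example matrix A at the bottom is made up for demonstration.

```python
def identity(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def mat_mul(P, Q):
    n = len(P)
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_axpy(P, Q, s):
    # elementwise P + s*Q
    return [[p + s * q for p, q in zip(pr, qr)] for pr, qr in zip(P, Q)]

def mat_log(A, terms=30):
    # principal matrix logarithm: log(A) = -sum_{k>=1} X^k / k,  X = I - A
    n = len(A)
    X = mat_axpy(identity(n), A, -1.0)
    Xpow = identity(n)
    L = [[0.0] * n for _ in range(n)]
    for k in range(1, terms + 1):
        Xpow = mat_mul(Xpow, X)
        L = mat_axpy(L, Xpow, -1.0 / k)
    return L

def mat_exp(L, terms=30):
    # matrix exponential: exp(L) = sum_{k>=0} L^k / k!
    n = len(L)
    term = identity(n)
    E = [[0.0] * n for _ in range(n)]
    for k in range(1, terms + 1):
        E = mat_axpy(E, term, 1.0)
        term = [[v / k for v in row] for row in mat_mul(term, L)]
    return E

# round trip on a small hypothetical stable transition matrix
A = [[0.9, 0.05],
     [0.0, 0.8]]
A_back = mat_exp(mat_log(A))   # should reproduce A closely
```

The round trip exp(log(A)) recovering A is exactly the check performed with the Octave session in the test section below.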


The test

As usual, we don't fully believe it until we see it...

Suppose that a state transition dynamic model has been developed to work with a 1/200 second sample rate. This is the state transition matrix as it exists for the initial 0.005 second time step.

 A0p005 =
   0.98644   0.01000  -0.04000
   0.00000   0.94544  -0.16244
   0.00000   0.16244   0.80144 

We can apply the series approximation formula to obtain a good estimate for the principal matrix logarithm.

log0p005 = zeros(3,3);
X = eye(3,3) - A0p005;
Xpow = eye(3,3);

for term = 1:6
  Xpow = Xpow*X;
  log0p005 = log0p005 + Xpow*(1/term);
end
log0p005 = -log0p005
log0p005 =
  -0.01365   0.01411  -0.04343
  -0.00000  -0.03993  -0.18428
  -0.00000   0.18428  -0.20329

I found that six terms was sufficient for adequate convergence of the power series formula. Did this really produce the matrix logarithm as intended? Check by calculating the power series expression for a matrix exponential function.

exp0p005 = zeros(3,3);
X = eye(3,3);
for term = 1:6
  exp0p005 = exp0p005 + X;
  X = (X * log0p005) / term;
end
A0p005 =
   0.98644   0.01000  -0.04000
   0.00000   0.94544  -0.16244
   0.00000   0.16244   0.80144

exp0p005 =
   0.98644   0.01000  -0.04000
   0.00000   0.94544  -0.16244
   0.00000   0.16244   0.80144

So far so good! The matrix exponential successfully "reversed" the matrix logarithm to restore the original matrix.

Recall that the updating interval is 0.005 seconds at the initial sampling rate. The system must operate (for better or worse) at 60 samples per second, with a new sampling interval of 0.01666667 seconds. We can get this with three steps at the original sample interval of 0.005 seconds, followed by a fractional step of 0.00166667 seconds. That final fraction is 1/3 of one sample interval at the original rate. The matrix logarithm for that fractional step is obtained simply by multiplying the full-step matrix logarithm by 1/3.

log0p00166 =
  -0.004551   0.004705  -0.014478
  -0.000000  -0.013309  -0.061428
  -0.000000   0.061428  -0.067763

The corresponding fractional state transition matrix is then calculated by evaluating the matrix exponential.

A0p00166 =
   0.99546   0.00423  -0.01410
   0.00000   0.98495  -0.05896
   0.00000   0.05896   0.93269

So now build the new transition matrix from a cascade of three of the initial transition steps followed by the fractional step.

A0p0166 = A0p005 * A0p005 * A0p005 * A0p00166 =
   0.95551   0.00801  -0.10704
   0.00000   0.74068  -0.38917
   0.00000   0.38917   0.39569

Just for comparison, let's approach this in a different way. As it happens, the new 0.01666667 second step could also be obtained by applying the 1/3-step transition 10 times, each fractional transition spanning 0.00166667 seconds. Let's try doing it that way.

A0p0166 = eye(3,3);
for term = 1:10
  A0p0166 = A0p0166 * A0p00166;
end
A0p0166 =
   0.95551   0.00801  -0.10704
   0.00000   0.74069  -0.38918
   0.00000   0.38918   0.39569

The two different calculations produce equivalent results.
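
The fractional-step idea is easiest to sanity-check in the scalar (single-state) case, where the matrix logarithm reduces to an ordinary logarithm. A sketch with made-up numbers:

```python
import math

A_full = 0.95                 # hypothetical one-state transition over one full step
L = math.log(A_full)          # the scalar "matrix logarithm"

A_third = math.exp(L / 3.0)   # transition for 1/3 of a step

# three fractional steps reproduce one full step
full_again = A_third ** 3

# ten 1/3-steps equal three full steps followed by one extra 1/3-step,
# mirroring the two matrix calculations compared above
lhs = A_third ** 10
rhs = (A_full ** 3) * A_third
```

Both identities follow from exp(a)·exp(b) = exp(a+b); the matrix versions agree for the same reason, since Λ commutes with its own multiples.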


A loose end

Don't you hate loose ends? In this case, the inconvenient detail is that, so far, we have time-adjusted the state transition matrix, but not the other matrices of the dynamic model.

The C matrix bases its reports on the current values of the system state. It makes absolutely no difference what sort of state transition steps were used to reach any given system state. Ergo: no adjustment required!

But the B matrix? Not so lucky. Consider carefully what it means. Suppose that a "unit impulse" signal u is applied to the system in a zero state. The impulse does something to displace the system in a way that we cannot know, but results show up at the next sample instant as a new state vector equal to B.

That is how we model the system at least. For the real linear system, after the impulse takes effect, there is a short piece of exponential decay, just like any other passage of time, until the target sampling interval is reached. This suggests a close relationship between the input response and the state transition response.

There is another way that the system state could arrive at the value B other than being driven there by an impulse. The system could have started in some state such that over the one sample time interval the exponential state decay takes it naturally to equal the value B. In fact, once past the instantaneous initial impulse, there is no difference. This idea motivates the following scheme for adjusting the B matrix for a new time step.

  1. Let A_T1 be the state transition matrix for the model at the original time step T1.
  2. Let the corresponding input coupling matrix be B_T1.
  3. Let A_T2 be the state transition matrix for the new time step T2, as determined using the methods from the rest of this installment.

The equivalent input coupling matrix for the new time step T2 is then given by

     B_T2 = A_T2 · A_T1^(-1) · B_T1

If you are not fully convinced, you already know how to set up your system model and improve the modified input coupling matrix using the LMS method. Or, become a master at linear systems mathematics, and recalculate everything exactly according to theory.
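
The adjustment formula can also be sanity-checked in the scalar case, with made-up numbers. For one state, A_T = e^(-λT), and an impulse that instantaneously displaces the state by some amount b0 shows up at the next sample instant as B_T = e^(-λT)·b0, so the formula reduces to an identity:

```python
import math

lam = 4.0          # hypothetical decay rate for a single-state system
b0 = 0.25          # hypothetical instantaneous displacement from a unit impulse
T1 = 0.005         # original time step
T2 = 1.0 / 60.0    # new time step

A_T1 = math.exp(-lam * T1)
A_T2 = math.exp(-lam * T2)

# sampled impulse responses: the impulse displaces the state by b0, then
# ordinary exponential decay runs until the next sample instant
B_T1 = A_T1 * b0
B_T2 = A_T2 * b0

# the adjustment formula B_T2 = A_T2 * inv(A_T1) * B_T1 gives the same value
B_T2_adj = A_T2 * (1.0 / A_T1) * B_T1
```

The inverse of A_T1 "rewinds" the decay back to the instant just after the impulse, and A_T2 then runs the decay forward to the new sampling instant.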


A retrospective

Let's briefly review how far we have come. If you have followed this series from the beginning:

  1. You can apply Linear Least Squares or Total Least Squares methods to fit the parameters of a linear state transition model.
  2. You can compare how your model predictions perform compared to actual system outputs by simulation with actual input data.
  3. You can restructure models that require multiple steps of past history into models requiring only one step of past history and multiple state variables using shift register techniques.
  4. You can transform ARMA models developed using statistical methods into equivalent state transition models.
  5. You can improve your models numerically, using LMS gradient learning techniques.
  6. You can apply scaling, transposition, restructuring, and eigenvalue transformations to change the form of the state transition matrix without changing its performance.
  7. You can use correlation analysis to check whether your model captures all of the important behaviors of the system and to deduce an appropriate model order.
  8. You can adjust your model order by learning new response characteristics or removing insignificant ones.
  9. You can adjust your model so that it can be used at a different time step than the one you used to collect your I/O test data sets.

All of these techniques take an adaptive systems approach to modeling. That is: assume that your system model needs to be improved to make it match the real system more closely. That is surely the case when you start with no model at all!

When you reach the point that your state transition equations give rather credible but imperfect tracking of the real system behavior, it is time to change gears and consider the Kalman Filter approach. We know that the model results are going to be imperfect, because the real system has noise disturbances that the model does not and cannot know about. The random disturbances propagate in the state variables to send the system along a different trajectory than the model predicts.

We can't be completely sure at any individual point what the noise effects are, but over a longer period, we can start to see systematic trajectory changes. When we see those, we can be increasingly confident that they resulted from random effects, and we can take action to restore accurate tracking. We begin this new direction next time.




[1] You can find out more in the Wikipedia article on the Matrix Logarithm, retrieved 2013-03-21.

[2] The Wikipedia article on the Matrix Exponential.


Contents of the "Developing Models for Kalman Filters" section of the website, including this page, are licensed under a Creative Commons Attribution-ShareAlike 4.0 International License unless otherwise specifically noted. The license allows usage and derivative works for individual or commercial purposes under certain restrictions; for complete information, consult the terms of the Creative Commons Attribution-ShareAlike 4.0 International License. For permissions beyond the scope of this license, see the contact information page for this site.