
pypose.module.UKF

class pypose.module.UKF(model, Q=None, R=None, msqrt=None)[source]

Performs Batched Unscented Kalman Filter (UKF).

Parameters:
  • model (System) – The system model to be estimated, a subclass of pypose.module.NLS.

  • Q (Tensor, optional) – The covariance matrices of system transition noise. Ignored if Q is provided in each forward() call. Default: None

  • R (Tensor, optional) – The covariance matrices of system observation noise. Ignored if R is provided in each forward() call. Default: None

A non-linear system can be described as

\[\begin{aligned} \mathbf{x}_{k+1} &= \mathbf{f}(\mathbf{x}_k, \mathbf{u}_k, t_k) + \mathbf{w}_k, \quad \mathbf{w}_k \sim \mathcal{N}(\mathbf{0}, \mathbf{Q}) \\ \mathbf{y}_{k} &= \mathbf{g}(\mathbf{x}_k, \mathbf{u}_k, t_k) + \mathbf{v}_k, \quad \mathbf{v}_k \sim \mathcal{N}(\mathbf{0}, \mathbf{R}) \end{aligned} \]

UKF can be described by the following equations, where the subscript \(\cdot_{k}\) is omitted for simplicity; a code sketch of one full update follows the steps.

  1. Sigma Points.

\[\begin{aligned} & \mathbf{x}^{\left ( i \right ) } = \mathbf{x}^{+} + \mathbf{\check{x}}^{\left(i \right)}, & \quad i=0,...,2n\\ & \mathbf{\check{x}}^{\left (i \right)} = \mathbf{0}, & \quad i=0 \\ & \mathbf{\check{x}}^{\left (i \right)} = \left(\sqrt{(n+\kappa)P} \right)_{i}^{T}, & \quad i= 1, ..., n \\ & \mathbf{\check{x}}^{\left(i \right)} = -\left(\sqrt{(n+\kappa)P} \right)_{i}^{T}, & \quad i=n+1, ..., 2n \\ \end{aligned} \]

where \(\left(\sqrt{(n+\kappa)P}\right) _{i}\) is the \(i\)-th row of \(\left(\sqrt{(n+\kappa)P}\right)\) and \(n\) is the dimension of state \(\mathbf{x}\).

Their weighting coefficients are given as

\[\begin{aligned} &W^{(0)} = \frac{\kappa}{n+\kappa}, & \quad i = 0\\ &W^{(i)} = \frac{1}{2(n+\kappa)}, & \quad i = 1,...,2n\\ \end{aligned} \]
  2. Priori State Estimation.

\[\mathbf{x}^{-} = \sum_{i=0}^{2n} W^{(i)} f(\mathbf{x}^{(i)}, \mathbf{u}, t) \]
  3. Priori Covariance.

\[\mathbf{P}^{-} = \sum_{i=0}^{2n} W^{(i)} \left(\mathbf{x}^{(i)} - \mathbf{x}^{-} \right) \left(\mathbf{x}^{(i)} - \mathbf{x}^- \right)^T + \mathbf{Q} \]
  4. Observational Estimation.

\[\begin{aligned} & \mathbf{y}^{(i)} = g \left( \mathbf{x}^{(i)}, \mathbf{u}, t \right), & \quad i = 0, \cdots, 2n \\ & \bar{\mathbf{y}} = \sum_{i=0}^{2n} W^{(i)} \mathbf{y}^{(i)} \end{aligned} \]
  5. Observational Covariance.

\[\mathbf{P}_{y} = \sum_{i=0}^{2n} W^{(i)} \left(\mathbf{y}^{(i)} - \bar{\mathbf{y}} \right) \left( \mathbf{y}^{(i)} - \bar{\mathbf{y}} \right)^T + \mathbf{R} \]
  6. Priori and Observation Covariance.

\[\mathbf{P}_{xy} = \sum_{i=0}^{2n} W^{(i)} \left( \mathbf{x}^{(i)} - \mathbf{x}^- \right) \left( \mathbf{y}^{(i)} - \bar{\mathbf{y}} \right)^T \]
  7. Kalman Gain.

\[\mathbf{K} = \mathbf{P}_{xy}\mathbf{P}_{y}^{-1} \]
  8. Posteriori State Estimation.

\[\mathbf{x}^{+} = \mathbf{x}^{-} + \mathbf{K} (\mathbf{y}- \bar{\mathbf{y}}) \]
  9. Posteriori Covariance Estimation.

\[\mathbf{P} = \mathbf{P}^{-} - \mathbf{K}\mathbf{P}_{y}\mathbf{K}^T \]

where superscript \(\cdot^{-}\) and \(\cdot^{+}\) denote the priori and posteriori estimation, respectively.
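
To make the nine steps concrete, below is a minimal sketch of a single UKF update written in plain PyTorch. It is illustrative only: the actual pypose.module.UKF is batched and factors out the matrix square root (msqrt) and the covariance computation (compute_cov(), documented below). The function name ukf_step and the assumption that f and g accept a single unbatched state are ours, and the propagated sigma points are reused for the priori covariance and the observation steps.

>>> import torch
>>> def ukf_step(f, g, x, y, u, P, Q, R, kappa=None):
...     # One UKF update following steps 1-9 above (illustrative sketch only).
...     n = x.size(-1)
...     kappa = 3 - n if kappa is None else kappa
...     # 1. Sigma points and weights: columns of a matrix square root of (n+kappa)P
...     #    give the symmetric offsets around the previous posteriori estimate.
...     S = torch.linalg.cholesky((n + kappa) * P)
...     points = torch.cat([x[None], x + S.mT, x - S.mT])          # (2n+1, n)
...     W = torch.full((2 * n + 1,), 1.0 / (2 * (n + kappa)))
...     W[0] = kappa / (n + kappa)
...     # 2.-3. Priori state and covariance from the propagated sigma points.
...     fx = torch.stack([f(p, u) for p in points])
...     xm = (W[:, None] * fx).sum(0)
...     ex = fx - xm
...     Pm = (W[:, None, None] * ex[:, :, None] * ex[:, None, :]).sum(0) + Q
...     # 4.-5. Observation estimate and covariance.
...     gy = torch.stack([g(p, u) for p in fx])
...     ym = (W[:, None] * gy).sum(0)
...     ey = gy - ym
...     Py = (W[:, None, None] * ey[:, :, None] * ey[:, None, :]).sum(0) + R
...     # 6.-7. Cross covariance and Kalman gain.
...     Pxy = (W[:, None, None] * ex[:, :, None] * ey[:, None, :]).sum(0)
...     K = Pxy @ torch.linalg.inv(Py)
...     # 8.-9. Posteriori state and covariance.
...     return xm + K @ (y - ym), Pm - K @ Py @ K.mT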

Example

  1. Define a discrete-time non-linear system (NLS) model

>>> import torch, pypose as pp
>>> class NLS(pp.module.NLS):
...     def __init__(self):
...         super().__init__()
...
...     def state_transition(self, state, input, t=None):
...         return state.cos() + input
...
...     def observation(self, state, input, t):
...         return state.sin() + input
  2. Create a model and filter

>>> model = NLS()
>>> ukf = pp.module.UKF(model)
  3. Prepare data

>>> T, N = 5, 2 # steps, state dim
>>> states = torch.zeros(T, N)
>>> inputs = torch.randn(T, N)
>>> observ = torch.zeros(T, N)
>>> # std of transition, observation, and estimation
>>> q, r, p = 0.1, 0.1, 10
>>> Q = torch.eye(N) * q**2
>>> R = torch.eye(N) * r**2
>>> P = torch.eye(N).repeat(T, 1, 1) * p**2
>>> estim = torch.randn(T, N) * p
  4. Perform UKF prediction. Note that estimation error becomes smaller with more steps.

>>> for i in range(T - 1):
...     w = q * torch.randn(N) # transition noise
...     v = r * torch.randn(N) # observation noise
...     states[i+1], observ[i] = model(states[i] + w, inputs[i])
...     estim[i+1], P[i+1] = ukf(estim[i], observ[i] + v, inputs[i], P[i], Q, R)
>>> print('Est error:', (states - estim).norm(dim=-1))
Est error: tensor([8.8161, 9.0322, 5.4756, 2.2453, 0.9141])
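
Alternatively, the noise covariances can be fixed when the filter is constructed, in which case they can be omitted from each call (a per-call Q or R, when given, takes precedence). A hedged variant of the loop above, reusing model, Q, R, observ, inputs, estim, and P from the example:

>>> ukf2 = pp.module.UKF(model, Q=Q, R=R)   # covariances fixed at construction
>>> estim2, P2 = estim[0], P[0]
>>> for i in range(T - 1):
...     estim2, P2 = ukf2(estim2, observ[i] + r * torch.randn(N), inputs[i], P2)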

Note

Implementation is based on Section 14.3 of the book Optimal State Estimation: Kalman, H-infinity, and Nonlinear Approaches (Dan Simon, 2006).

compute_cov(a, b, w, Q=0)[source]

Compute the covariance of two sets of variables.
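
This method is used internally for the weighted (cross-)covariances of steps 3, 5, and 6. As a rough sketch of what such a weighted covariance computes (assuming a and b hold the deviations of the 2n+1 sigma points from their weighted means, stacked along the first dimension, and w holds the weights; the real method's batching is not shown):

>>> def weighted_cov(a, b, w, Q=0):
...     # sum_i w_i * a_i * b_i^T, plus an optional additive noise term Q
...     return (w[:, None, None] * a[:, :, None] * b[:, None, :]).sum(0) + Q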

forward(x, y, u, P, Q=None, R=None, t=None, k=None)[source]

Performs one step estimation.

Parameters:
  • x (Tensor) – estimated system state of previous step.

  • y (Tensor) – system observation at current step (measurement).

  • u (Tensor) – system input at current step.

  • P (Tensor) – state estimation covariance of previous step.

  • Q (Tensor, optional) – covariance of system transition model.

  • R (Tensor, optional) – covariance of system observation model.

  • t (Tensor, optional) – timestamp for the system to estimate. Default: None.

  • k (int, optional) – a parameter for weighting the sigma points. If None is given, then 3 - n is adopted (n is the state dimension), which works best for a Gaussian noise model. Default: None.

Returns:

posteriori state and covariance estimation

Return type:

list of Tensor
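
Continuing the example above, a single estimation step looks like this (the tensors are illustrative and reuse model, ukf, Q, R, and N from the example):

>>> x, P0 = torch.zeros(N), torch.eye(N) * 10**2   # previous estimate and its covariance
>>> u = torch.randn(N)                             # current input
>>> _, y = model(x, u)                             # observation at the current step
>>> x_new, P_new = ukf(x, y, u, P0, Q, R)          # posteriori estimate and covariance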

sigma_weight_points(x, P, k)[source]

Compute the sigma points and their weights.

Parameters:
  • x (Tensor) – estimated system state of previous step

  • P (Tensor) – state estimation covariance of previous step

  • k (int) – parameter for weighting sigma points.

Returns:

sigma points and their weights

Return type:

tuple of Tensor
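
For intuition, with n = 2 and the default k = 3 - n = 1 the weights are 1/3 for the center point and 1/6 for each of the other four, summing to one. Below is a sketch of the construction following the equations above (the method's exact return layout is not reproduced here):

>>> n, k = 2, 1
>>> x, P = torch.zeros(n), torch.eye(n)
>>> S = torch.linalg.cholesky((n + k) * P)             # a matrix square root of (n+k)P
>>> points = torch.cat([x[None], x + S.mT, x - S.mT])  # 2n+1 = 5 sigma points
>>> weights = torch.tensor([k / (n + k)] + [1 / (2 * (n + k))] * (2 * n))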
