pypose.bspline¶
- class pypose.bspline(data, interval=0.1, extrapolate=False)[source]¶
B-spline interpolation, which currently only supports SE3 LieTensor.
- Parameters:
  - data (LieTensor) – the input sparse poses with [batch_size, poses_num, dim] shape.
  - interval (float) – the unit interval between interpolated poses. The interval between adjacent input poses is assumed to be 1; therefore, if interval is set to 0.1, the interpolated poses between the poses at \(t\) and \(t+1\) will be at \([t, t+0.1, ..., t+0.9, t+1]\). Default: 0.1.
  - extrapolate (bool) – whether the interpolated poses extend past the start and end poses. Default: False.
- Returns:
  the interpolated SE3 LieTensor.
- Return type:
  LieTensor
A pose queried at \(t \in [t_i, t_{i+1})\) (i.e., within one segment of the spline) relies only on the poses at the four steps \(\{t_{i-1}, t_i, t_{i+1}, t_{i+2}\}\); in other words, interpolating between adjacent poses requires four consecutive input poses. In this function, the interpolated poses are evenly distributed between adjacent input poses according to the interval parameter.
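The four-pose window above can be sketched with a small helper (illustrative only; `control_indices` is not part of the pypose API, and adjacent input poses are assumed to be 1 apart):

```python
import math

def control_indices(t):
    """Return the four control-pose indices {i-1, i, i+1, i+2} that a
    query at time t in [t_i, t_{i+1}) depends on."""
    i = math.floor(t)  # segment index, since adjacent poses are 1 apart
    return [i - 1, i, i + 1, i + 2]
```

For example, any query time in \([2, 3)\) uses the input poses at steps 1 through 4, which is why the first and last input poses have no interpolated segment of their own unless extrapolate is enabled.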
The absolute pose of the spline \(T_{s}^w(t)\), where \(w\) denotes the world and \(s\) is the spline coordinate frame, can be calculated:
\[\begin{aligned} &T^w_s(t) = T^w_{i-1} \prod^{i+2}_{j=i}\delta T_j,\\ &\delta T_j = \exp(\lambda_{j-i}(t)\delta \hat{\xi}_j),\\ &\lambda(t) = MU(t), \end{aligned}\]
where \(M\) is a predefined matrix, shown as follows:
\[M = \frac{1}{6}\begin{bmatrix} 5 & 3 & -3 & 1 \\ 1 & 3 & 3 & -2 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix}\]
and \(U(t)\) is a vector:
\[U(t) = \begin{bmatrix} 1 \\ u(t) \\ u(t)^2 \\ u(t)^3 \\ \end{bmatrix}\]
where \(u(t) = (t-t_i)/\Delta t\) and \(\Delta t = t_{i+1} - t_i\). \(t_i\) is the time of the \(i\)-th given pose, and \(t \in [t_i, t_{i+1})\) is the interpolation time.
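The blending weights \(\lambda(t) = MU(t)\) can be computed directly from the matrix above. The following is a minimal sketch (not the pypose internals):

```python
import numpy as np

# Cumulative basis matrix M from the formula above; its three rows give
# the blending weights lambda_0, lambda_1, lambda_2.
M = np.array([[5.,  3., -3.,  1.],
              [1.,  3.,  3., -2.],
              [0.,  0.,  0.,  1.]]) / 6.0

def blending_weights(u):
    """lambda(t) = M U(t), with u = (t - t_i) / dt in [0, 1)."""
    U = np.array([1.0, u, u**2, u**3])
    return M @ U
```

At \(u=0\) the weights are \([5/6, 1/6, 0]\) and at \(u=1\) they are \([1, 5/6, 1/6]\), which makes adjacent segments join smoothly.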
\(\delta \hat{\xi}_j\) is the relative transformation between poses \(j-1\) and \(j\):
\[\begin{aligned} \delta \hat{\xi}_j :&= \log(\exp(\hat{\xi}_w^{j-1})\exp(\hat{\xi}_j^w))\\ &= \log(T_j^{j-1}) \in \mathfrak{se3} \end{aligned}\]
Note
The implementation is based on Eq. (3), (4), (5), and (6) of this paper: David Hug, et al., HyperSLAM: A Generic and Modular Approach to Sensor Fusion and Simultaneous Localization And Mapping in Continuous-Time, 2020 International Conference on 3D Vision (3DV), Fukuoka, Japan, 2020.
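For intuition, when the poses are pure translations the exponential and logarithm maps reduce to the identity, and the cumulative formula above becomes \(p(t) = p_{i-1} + \sum_k \lambda_k(t)\,(p_{i+k} - p_{i+k-1})\). A sketch under that simplification (uniform timing assumed; not the pypose implementation):

```python
import numpy as np

# Cumulative basis matrix from the formula above.
M = np.array([[5., 3., -3., 1.],
              [1., 3., 3., -2.],
              [0., 0., 0., 1.]]) / 6.0

def bspline_translation(p, i, u):
    """Cumulative-form cubic B-spline for translations:
    p(t) = p_{i-1} + sum_k lambda_k(u) * (p_{i+k} - p_{i+k-1}),
    where u = (t - t_i) / dt in [0, 1)."""
    lam = M @ np.array([1.0, u, u**2, u**3])
    val = p[i - 1]
    for k in range(3):  # accumulate the three weighted increments
        val = val + lam[k] * (p[i + k] - p[i + k - 1])
    return val
```

With uniformly spaced linear data such as `p = [0., 1., 2., 3.]`, the spline reproduces the data exactly (`bspline_translation(p, 1, 0.0)` is `1.0`), which is a useful sanity check when porting the full SE3 version.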
Examples
>>> import torch, pypose as pp
>>> a1 = pp.euler2SO3(torch.Tensor([0., 0., 0.]))
>>> a2 = pp.euler2SO3(torch.Tensor([torch.pi / 4., torch.pi / 3., torch.pi / 2.]))
>>> poses = pp.SE3([[[0., 4., 0., a1[0], a1[1], a1[2], a1[3]],
...                  [0., 3., 0., a1[0], a1[1], a1[2], a1[3]],
...                  [0., 2., 0., a1[0], a1[1], a1[2], a1[3]],
...                  [0., 1., 0., a1[0], a1[1], a1[2], a1[3]],
...                  [1., 0., 1., a2[0], a2[1], a2[2], a2[3]],
...                  [2., 0., 1., a2[0], a2[1], a2[2], a2[3]],
...                  [3., 0., 1., a2[0], a2[1], a2[2], a2[3]],
...                  [4., 0., 1., a2[0], a2[1], a2[2], a2[3]]]])
>>> wayposes = pp.bspline(poses, 0.1)
Fig. 1. Result of B Spline Interpolation in SE3.¶