graphslam.graph module

A Graph class that stores the edges and vertices required for Graph SLAM.

Given:

  • A set of \(M\) edges (i.e., constraints) \(\mathcal{E}\)

    • \(e_j \in \mathcal{E}\) is an edge

    • \(\mathbf{e}_j \in \mathbb{R}^\bullet\) is the error associated with that edge, where \(\bullet\) is a positive integer (the error dimension) that depends on the type of edge

    • \(\Omega_j\) is the \(\bullet \times \bullet\) information matrix associated with edge \(e_j\)

  • A set of \(N\) vertices \(\mathcal{V}\)

    • \(v_i \in \mathcal{V}\) is a vertex

    • \(\mathbf{x}_i \in \mathbb{R}^c\) is the compact pose associated with \(v_i\)

    • \(\boxplus\) is the pose composition operator that yields a (non-compact) pose that lies in (a subspace of) \(\mathbb{R}^d\)

We want to minimize

\[\chi^2 = \sum_{e_j \in \mathcal{E}} \mathbf{e}_j^T \Omega_j \mathbf{e}_j.\]
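As a concrete illustration, the \(\chi^2\) sum can be evaluated directly; this is a minimal sketch with made-up numbers, not the library's API:

```python
import numpy as np

# Toy data: two edges with 2-D errors and their information matrices.
# The numbers here are illustrative only.
errors = [np.array([0.1, -0.2]), np.array([0.3, 0.05])]
information = [np.eye(2), 2.0 * np.eye(2)]

# chi^2 = sum_j e_j^T Omega_j e_j
chi2 = sum(e @ omega @ e for e, omega in zip(errors, information))
print(chi2)  # 0.05 + 2 * (0.09 + 0.0025) = 0.235
```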

Let

\[\begin{split}\mathbf{x} := \begin{bmatrix} \mathbf{x}_1 \\ \mathbf{x}_2 \\ \vdots \\ \mathbf{x}_N \end{bmatrix} \in \mathbb{R}^{cN}.\end{split}\]

We will solve this optimization problem iteratively. Let

\[\mathbf{x}^{k+1} := \mathbf{x}^k \boxplus \Delta \mathbf{x}^k.\]

The \(\chi^2\) error at iteration \(k+1\) is

\[\chi_{k+1}^2 = \sum_{e_j \in \mathcal{E}} \underbrace{\left[ \mathbf{e}_j(\mathbf{x}^{k+1}) \right]^T}_{1 \times \bullet} \underbrace{\Omega_j}_{\bullet \times \bullet} \underbrace{\mathbf{e}_j(\mathbf{x}^{k+1})}_{\bullet \times 1}.\]

We will linearize the errors as:

\[\begin{split}\mathbf{e}_j(\mathbf{x}^{k+1}) &= \mathbf{e}_j(\mathbf{x}^k \boxplus \Delta \mathbf{x}^k) \\ &\approx \mathbf{e}_j(\mathbf{x}^{k}) + \frac{\partial}{\partial \Delta \mathbf{x}^k} \left[ \mathbf{e}_j(\mathbf{x}^k \boxplus \Delta \mathbf{x}^k) \right] \Delta \mathbf{x}^k \\ &= \mathbf{e}_j(\mathbf{x}^{k}) + \left( \left. \frac{\partial \mathbf{e}_j(\mathbf{x}^k \boxplus \Delta \mathbf{x}^k)}{\partial (\mathbf{x}^k \boxplus \Delta \mathbf{x}^k)} \right|_{\Delta \mathbf{x}^k = \mathbf{0}} \right) \frac{\partial (\mathbf{x}^k \boxplus \Delta \mathbf{x}^k)}{\partial \Delta \mathbf{x}^k} \Delta \mathbf{x}^k.\end{split}\]
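As a quick numeric sanity check of this first-order expansion (illustrative only), consider a simple nonlinear error in the special case where \(\boxplus\) is plain vector addition, so the composition Jacobian is the identity:

```python
import numpy as np

# e(x) = [sin(x0) - x1], so the error Jacobian is J = [cos(x0), -1].
def error(x):
    return np.array([np.sin(x[0]) - x[1]])

x = np.array([0.4, 0.2])
J = np.array([[np.cos(x[0]), -1.0]])
dx = np.array([1e-4, -2e-4])

# For a small step, e(x + dx) should match e(x) + J dx to first order.
exact = error(x + dx)
linearized = error(x) + J @ dx
assert np.allclose(exact, linearized, atol=1e-7)
```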

Plugging this into the formula for \(\chi^2\), we get:

\[\begin{split}\chi_{k+1}^2 &\approx \ \ \ \ \ \sum_{e_j \in \mathcal{E}} \underbrace{[ \mathbf{e}_j(\mathbf{x}^k)]^T}_{1 \times \bullet} \underbrace{\Omega_j}_{\bullet \times \bullet} \underbrace{\mathbf{e}_j(\mathbf{x}^k)}_{\bullet \times 1} \\ &\hphantom{\approx} \ \ \ + \sum_{e_j \in \mathcal{E}} \underbrace{[ \mathbf{e}_j(\mathbf{x^k}) ]^T }_{1 \times \bullet} \underbrace{\Omega_j}_{\bullet \times \bullet} \underbrace{\left( \left. \frac{\partial \mathbf{e}_j(\mathbf{x}^k \boxplus \Delta \mathbf{x}^k)}{\partial (\mathbf{x}^k \boxplus \Delta \mathbf{x}^k)} \right|_{\Delta \mathbf{x}^k = \mathbf{0}} \right)}_{\bullet \times dN} \underbrace{\frac{\partial (\mathbf{x}^k \boxplus \Delta \mathbf{x}^k)}{\partial \Delta \mathbf{x}^k}}_{dN \times cN} \underbrace{\Delta \mathbf{x}^k}_{cN \times 1} \\ &\hphantom{\approx} \ \ \ + \sum_{e_j \in \mathcal{E}} \underbrace{(\Delta \mathbf{x}^k)^T}_{1 \times cN} \underbrace{ \left( \frac{\partial (\mathbf{x}^k \boxplus \Delta \mathbf{x}^k)}{\partial \Delta \mathbf{x}^k} \right)^T}_{cN \times dN} \underbrace{\left( \left. \frac{\partial \mathbf{e}_j(\mathbf{x}^k \boxplus \Delta \mathbf{x}^k)}{\partial (\mathbf{x}^k \boxplus \Delta \mathbf{x}^k)} \right|_{\Delta \mathbf{x}^k = \mathbf{0}} \right)^T}_{dN \times \bullet} \underbrace{\Omega_j}_{\bullet \times \bullet} \underbrace{\left( \left. \frac{\partial \mathbf{e}_j(\mathbf{x}^k \boxplus \Delta \mathbf{x}^k)}{\partial (\mathbf{x}^k \boxplus \Delta \mathbf{x}^k)} \right|_{\Delta \mathbf{x}^k = \mathbf{0}} \right)}_{\bullet \times dN} \underbrace{\frac{\partial (\mathbf{x}^k \boxplus \Delta \mathbf{x}^k)}{\partial \Delta \mathbf{x}^k}}_{dN \times cN} \underbrace{\Delta \mathbf{x}^k}_{cN \times 1} \\ &= \chi_k^2 + 2 \mathbf{b}^T \Delta \mathbf{x}^k + (\Delta \mathbf{x}^k)^T H \Delta \mathbf{x}^k,\end{split}\]

where

\[\begin{split}\mathbf{b}^T &= \sum_{e_j \in \mathcal{E}} \underbrace{[ \mathbf{e}_j(\mathbf{x^k}) ]^T }_{1 \times \bullet} \underbrace{\Omega_j}_{\bullet \times \bullet} \underbrace{\left( \left. \frac{\partial \mathbf{e}_j(\mathbf{x}^k \boxplus \Delta \mathbf{x}^k)}{\partial (\mathbf{x}^k \boxplus \Delta \mathbf{x}^k)} \right|_{\Delta \mathbf{x}^k = \mathbf{0}} \right)}_{\bullet \times dN} \underbrace{\frac{\partial (\mathbf{x}^k \boxplus \Delta \mathbf{x}^k)}{\partial \Delta \mathbf{x}^k}}_{dN \times cN} \\ H &= \sum_{e_j \in \mathcal{E}} \underbrace{ \left( \frac{\partial (\mathbf{x}^k \boxplus \Delta \mathbf{x}^k)}{\partial \Delta \mathbf{x}^k} \right)^T}_{cN \times dN} \underbrace{\left( \left. \frac{\partial \mathbf{e}_j(\mathbf{x}^k \boxplus \Delta \mathbf{x}^k)}{\partial (\mathbf{x}^k \boxplus \Delta \mathbf{x}^k)} \right|_{\Delta \mathbf{x}^k = \mathbf{0}} \right)^T}_{dN \times \bullet} \underbrace{\Omega_j}_{\bullet \times \bullet} \underbrace{\left( \left. \frac{\partial \mathbf{e}_j(\mathbf{x}^k \boxplus \Delta \mathbf{x}^k)}{\partial (\mathbf{x}^k \boxplus \Delta \mathbf{x}^k)} \right|_{\Delta \mathbf{x}^k = \mathbf{0}} \right)}_{\bullet \times dN} \underbrace{\frac{\partial (\mathbf{x}^k \boxplus \Delta \mathbf{x}^k)}{\partial \Delta \mathbf{x}^k}}_{dN \times cN}.\end{split}\]
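For intuition, here is a hedged numpy sketch of how \(\mathbf{b}\) and \(H\) could be accumulated, in the simplified case where \(\boxplus\) is ordinary vector addition (so the composition Jacobian \(\partial(\mathbf{x}^k \boxplus \Delta \mathbf{x}^k) / \partial \Delta \mathbf{x}^k\) is the identity and \(J_j\) is just each edge's error Jacobian with respect to \(\mathbf{x}\)); the data and shapes below are invented for illustration:

```python
import numpy as np

# Simplified setting: x in R^3 (e.g., three 1-D poses), boxplus is addition.
# Each edge supplies an error e_j, an information matrix Omega_j, and a
# Jacobian J_j = d e_j / d x. The numbers are illustrative only.
edges = [
    # (error, information matrix, Jacobian)
    (np.array([0.5]), np.array([[1.0]]), np.array([[-1.0, 1.0, 0.0]])),
    (np.array([-0.3]), np.array([[2.0]]), np.array([[0.0, -1.0, 1.0]])),
]

n = 3
b = np.zeros(n)
H = np.zeros((n, n))
for e, omega, J in edges:
    b += J.T @ omega @ e  # gradient contribution: J^T Omega e
    H += J.T @ omega @ J  # Gauss-Newton Hessian contribution: J^T Omega J
```

In the library itself the gradient and Hessian are accumulated per-vertex-block (and the Hessian is stored sparsely), but the per-edge contributions have exactly this \(J^T \Omega \mathbf{e}\) and \(J^T \Omega J\) form.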

Minimizing this quadratic approximation with respect to \(\Delta \mathbf{x}^k\) (i.e., setting its gradient \(2\mathbf{b} + 2 H \Delta \mathbf{x}^k\) to zero), we obtain the optimal update

\[\Delta \mathbf{x}^k = -H^{-1} \mathbf{b}.\]

We apply this update to the poses and repeat until convergence.
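The whole iteration can be sketched on a tiny 1-D pose graph where \(\boxplus\) is plain addition; everything here (the prior edge used to anchor the first pose, the measurements, the initial guess) is an assumed toy setup, not the library's implementation:

```python
import numpy as np

# Poses x = [x0, x1, x2]; edges are a prior anchoring x0 at 0 and two
# relative measurements. Each edge is (error function, Jacobian de/dx).
measurements = [
    (lambda x: np.array([x[0] - 0.0]),        np.array([[1.0, 0.0, 0.0]])),   # prior: x0 = 0
    (lambda x: np.array([x[1] - x[0] - 1.0]), np.array([[-1.0, 1.0, 0.0]])),  # x1 - x0 = 1
    (lambda x: np.array([x[2] - x[1] - 1.0]), np.array([[0.0, -1.0, 1.0]])),  # x2 - x1 = 1
]
omega = np.array([[1.0]])  # unit information matrix for every edge

x = np.array([0.5, 0.3, 3.0])  # poor initial guess
for _ in range(20):
    b = np.zeros(3)
    H = np.zeros((3, 3))
    for err_fn, J in measurements:
        e = err_fn(x)
        b += J.T @ omega @ e
        H += J.T @ omega @ J
    dx = np.linalg.solve(H, -b)  # Delta x = -H^{-1} b
    x = x + dx                   # boxplus is plain addition here
    if np.linalg.norm(dx) < 1e-9:
        break
```

Because this toy problem is linear, a single Gauss-Newton step lands on the minimizer \(x = [0, 1, 2]\); for nonlinear errors (e.g., poses with orientations) the relinearize-solve-update loop genuinely iterates, which is what `Graph.optimize` below does.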

class graphslam.graph.Graph(edges, vertices)[source]

Bases: object

A graph that will be optimized via Graph SLAM.

Parameters:
  • edges (list[graphslam.edge.base_edge.BaseEdge]) – A list of the edges (i.e., constraints) in the graph

  • vertices (list[graphslam.vertex.Vertex]) – A list of the vertices in the graph

_chi2

The current \(\chi^2\) error, or None if it has not yet been computed

Type:

float, None

_edges

A list of the edges (i.e., constraints) in the graph

Type:

list[graphslam.edge.base_edge.BaseEdge]

_fixed_gradient_indices

The set of gradient indices (i.e., Vertex.gradient_index) for vertices that are fixed

Type:

set[int]

_g2o_params

A dictionary where the values are BaseG2OParameter objects

Type:

dict, None

_gradient

The gradient \(\mathbf{b}\) of the \(\chi^2\) error, or None if it has not yet been computed

Type:

numpy.ndarray, None

_hessian

The Hessian matrix \(H\), or None if it has not yet been computed

Type:

scipy.sparse.lil_matrix, None

_len_gradient

The length of the gradient vector (and the Hessian matrix)

Type:

int, None

_vertices

A list of the vertices in the graph

Type:

list[graphslam.vertex.Vertex]

_calc_chi2_gradient_hessian()[source]

Calculate the \(\chi^2\) error, the gradient \(\mathbf{b}\), and the Hessian \(H\).

_initialize()[source]

Fill in the vertices attribute for each of the graph’s edges, and perform other necessary preparations.

calc_chi2()[source]

Calculate the \(\chi^2\) error for the Graph.

Returns:

The \(\chi^2\) error

Return type:

float

equals(other, tol=1e-06)[source]

Check whether two graphs are equal.

Parameters:
  • other (Graph) – The graph to which we are comparing

  • tol (float) – The tolerance

Returns:

Whether the two graphs are equal

Return type:

bool

classmethod from_g2o(infile, custom_edge_types=None)[source]

Load a graph from a .g2o file.

Parameters:
  • infile (str) – The path to the .g2o file

  • custom_edge_types (list[type], None) – A list of custom edge types, which must be subclasses of BaseEdge

Returns:

The loaded graph

Return type:

Graph

optimize(tol=0.0001, max_iter=20, fix_first_pose=True, verbose=True)[source]

Optimize the \(\chi^2\) error for the Graph.

Parameters:
  • tol (float) – If the relative decrease in the \(\chi^2\) error between iterations is less than tol, we will stop

  • max_iter (int) – The maximum number of iterations

  • fix_first_pose (bool) – If True, we will fix the first pose

  • verbose (bool) – Whether to print information about the optimization

Returns:

ret – Information about this optimization

Return type:

OptimizationResult

plot(vertex_color='r', vertex_marker='o', vertex_markersize=3, edge_color='b', title=None)[source]

Plot the graph.

Parameters:
  • vertex_color (str) – The color that will be used to plot the vertices

  • vertex_marker (str) – The marker that will be used to plot the vertices

  • vertex_markersize (int) – The size of the plotted vertices

  • edge_color (str) – The color that will be used to plot the edges

  • title (str, None) – The title that will be used for the plot

to_g2o(outfile)[source]

Save the graph in .g2o format.

Parameters:

outfile (str) – The path where the graph will be saved

class graphslam.graph.OptimizationResult[source]

Bases: object

A class for storing information about a graph optimization; see Graph.optimize.

converged

Whether the optimization converged

Type:

bool

duration_s

The total time for the optimization (in seconds)

Type:

float, None

final_chi2

The final \(\chi^2\) error

Type:

float, None

initial_chi2

The initial \(\chi^2\) error

Type:

float, None

iteration_results

Information about each iteration

Type:

list[IterationResult]

num_iterations

The number of iterations that were performed

Type:

int, None

class IterationResult[source]

Bases: object

A class for storing information about a single graph optimization iteration; see Graph.optimize.

calc_chi2_gradient_hessian_duration_s

The time to compute \(\chi^2\), the gradient, and the Hessian (in seconds); see Graph._calc_chi2_gradient_hessian

Type:

float, None

chi2

The \(\chi^2\) of the graph after performing this iteration’s update

Type:

float, None

duration_s

The total time for this iteration (in seconds)

Type:

float, None

rel_diff

The relative difference in the \(\chi^2\) as a result of this iteration

Type:

float, None

solve_duration_s

The time to solve \(H \Delta \mathbf{x}^k = -\mathbf{b}\) (in seconds)

Type:

float, None

update_duration_s

The time to update the poses (in seconds)

Type:

float, None

is_complete_iteration()[source]

Whether this was a full iteration.

At iteration i, we compute the \(\chi^2\) error for iteration i-1 (see Graph.optimize). If this meets the convergence criteria, then we do not solve the linear system and update the poses, and so this is not a complete iteration.

Returns:

Whether this was a complete iteration (i.e., we solved the linear system and updated the poses)

Return type:

bool

class graphslam.graph._Chi2GradientHessian[source]

Bases: object

A class that is used to aggregate the \(\chi^2\) error, gradient, and Hessian.

chi2

The \(\chi^2\) error

Type:

float

gradient

The contributions to the gradient vector

Type:

defaultdict

hessian

The contributions to the Hessian matrix

Type:

defaultdict

class DefaultArray[source]

Bases: object

A class for use in a defaultdict.

static update(chi2_grad_hess, incoming)[source]

Update the \(\chi^2\) error and the gradient and Hessian dictionaries.

Parameters:
  • chi2_grad_hess (_Chi2GradientHessian) – The _Chi2GradientHessian that will be updated

  • incoming (tuple) – The return value from BaseEdge.calc_chi2_gradient_hessian