Large-scale, drift-free SLAM using highly robustified building model constraints

Abstract: Constrained keyframe-based local bundle adjustment is at the core of many recent systems that address large-scale, georeferenced SLAM with a monocular camera and data from inexpensive sensors and/or databases. The majority of these methods, however, impose constraints derived from proprioceptive sensors (e.g. IMU, GPS, odometry) while ignoring the possibility of explicitly constraining the structure (e.g. the point cloud) produced by the reconstruction process. Moreover, research on on-line interaction between SLAM and deep learning methods remains scarce, and as a result few SLAM systems take advantage of deep architectures. We explore both of these areas in this work: we use a fast deep neural network to infer semantic and structural information about the environment and, within a Bayesian framework, inject the results into a bundle adjustment process that constrains the 3D point cloud to texture-less 3D building models.
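To make the idea of a robustified building-model constraint concrete, a minimal sketch of what such a constrained bundle adjustment cost could look like is given below. The notation (camera poses C_i, 3D points X_j, facade planes Π, weight λ, robust kernel ρ) is illustrative and not taken from the paper itself:

E(\{C_i\},\{X_j\}) = \sum_{i,j} \rho\big( \| \pi(C_i, X_j) - x_{ij} \|^2 \big) \;+\; \lambda \sum_{j \in S} \rho\big( d(X_j, \Pi_{b(j)})^2 \big)

Here \pi(C_i, X_j) denotes the projection of point X_j into keyframe i, x_{ij} the observed feature, S the subset of points labelled as belonging to a building by the semantic network, \Pi_{b(j)} the building-facade plane associated with X_j, and d(\cdot,\cdot) the point-to-plane distance; the M-estimator \rho down-weights points that are wrongly associated with a facade, which is one plausible reading of "highly robustified" in the title.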
Document type: Conference papers

https://hal-cea.archives-ouvertes.fr/cea-01813715
Contributor: Léna Le Roy
Submitted on: Tuesday, June 12, 2018 - 3:29:59 PM
Last modification on: Wednesday, January 23, 2019 - 2:39:24 PM


Citation

A. Salehi, V. Gay-Bellile, S. Bourgeois, N. Allezard, F. Chausse. Large-scale, drift-free SLAM using highly robustified building model constraints. 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Sep 2017, Vancouver, Canada. pp.1586-1593, ⟨10.1109/IROS.2017.8205966⟩. ⟨cea-01813715⟩
