Parallelized SLAM: Enhancing Mapping and Localization through Concurrent Processing


Abstract

Simultaneous Localization and Mapping (SLAM) systems face high computational demands, hindering real-time implementation on low-end computers. One approach to address this challenge is offline processing: the map of the environment is created offline on a powerful computer and then transferred to the low-end computer, which uses it for navigation. However, even on a powerful computer, creating the map is slow because SLAM is designed as a sequential process.

This work proposes a parallel mapping method, pSLAM, to speed up the offline creation of maps. In pSLAM, a video sequence is partitioned into multiple subsequences, each processed independently to create an individual submap. These submaps are subsequently merged into a unified global map of the environment.
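The sketch below illustrates this split-map-merge idea in C++ under stated assumptions: it is not the pSLAM implementation, and the types and functions shown (Frame, Submap, buildSubmap, mergeInto) are hypothetical placeholders. Each subsequence is mapped in its own asynchronous task, and the resulting submaps are then merged sequentially.

```cpp
// Conceptual sketch of a split-map-merge pipeline (not the actual pSLAM code):
// the input sequence is split into contiguous subsequences, each subsequence
// is mapped independently in a separate task, and the resulting submaps are
// merged into a single global map.

#include <algorithm>
#include <cstddef>
#include <future>
#include <vector>

struct Frame {            // placeholder for a video frame / keyframe
    int index = 0;
};

struct Submap {           // placeholder for a locally built map
    std::vector<Frame> keyframes;
};

// Hypothetical per-subsequence mapping step (stands in for running SLAM
// on one chunk of the video).
Submap buildSubmap(const std::vector<Frame>& subsequence) {
    Submap m;
    m.keyframes = subsequence;   // real SLAM would track, triangulate, optimize
    return m;
}

// Hypothetical merge step: append one submap into the global map
// (the real system would align reference frames and fuse overlapping geometry).
void mergeInto(Submap& global, const Submap& part) {
    global.keyframes.insert(global.keyframes.end(),
                            part.keyframes.begin(), part.keyframes.end());
}

int main() {
    // Synthetic input sequence of 1000 frames split into 4 subsequences.
    std::vector<Frame> sequence(1000);
    for (int i = 0; i < 1000; ++i) sequence[i].index = i;

    const std::size_t nChunks = 4;
    const std::size_t chunkSize = (sequence.size() + nChunks - 1) / nChunks;

    // Launch one asynchronous mapping task per subsequence.
    std::vector<std::future<Submap>> tasks;
    for (std::size_t start = 0; start < sequence.size(); start += chunkSize) {
        const std::size_t end = std::min(start + chunkSize, sequence.size());
        std::vector<Frame> chunk(sequence.begin() + start, sequence.begin() + end);
        tasks.push_back(std::async(std::launch::async, buildSubmap, std::move(chunk)));
    }

    // Merge the submaps into a single global map once all tasks finish.
    Submap globalMap;
    for (auto& t : tasks) mergeInto(globalMap, t.get());

    return globalMap.keyframes.size() == sequence.size() ? 0 : 1;
}
```

In the paper's method, the merge step is the part that restores global consistency across submaps; the sketch only concatenates them to keep the control flow visible.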

Our experiments across a diverse range of scenarios demonstrate an increase in processing speed of up to 6 times compared to the sequential approach while maintaining the same level of robustness. Furthermore, comparative analyses against state-of-the-art SLAM methods, namely UcoSLAM, OpenVSLAM, and ORB-SLAM3, show that pSLAM outperforms them across all evaluated scenarios.

Cite us

Parallelized SLAM: Enhancing Mapping and Localization through Concurrent Processing. Romero-Ramirez, F. J.; Cazorla, M.; Marín-Jimenez, M. J.; Medina-Carnicer, R.; Muñoz-Salinas, R. [Under revision]

Code and datasets

From an engineering point of view, pSLAM is an evolution of UcoSLAM and ReSLAM. The source code used in the experiments can be downloaded from the following link:

The datasets used in the experimental section of this paper can be downloaded from the following link:

DATASETS