Multi-LiDAR Mapping for Scene Segmentation in Indoor Environments for Mobile Robots
Publisher:
MDPI AG
Publication date:
2022-05
Citation:
Gonzalez, P., Mora, A., Garrido, S., Barber, R., & Moreno, L. (2022). Multi-LiDAR Mapping for Scene Segmentation in Indoor Environments for Mobile Robots. Sensors, 22(10), 3690–3710.
ISSN:
1424-3210
Sponsors:
Comunidad de Madrid
Ministerio de Economía y Competitividad (Spain)
Ministerio de Ciencia e Innovación (Spain)
Universidad Carlos III de Madrid
Acknowledgements:
This work was supported by funding from HEROITEA: Heterogeneous Intelligent
Multi-Robot Team for Assistance of Elderly People (RTI2018-095599-B-C21), funded by the Spanish
Ministerio de Economía y Competitividad, and by RoboCity2030-DIH-CM, Madrid Robotics Digital Innovation
Hub (S2018/NMT-4331), funded by "Programas de Actividades I+D en la Comunidad de Madrid"
and co-funded by Structural Funds of the EU.
We also acknowledge the R&D&I project PLEC2021-007819, funded by MCIN/AEI/
10.13039/501100011033 and by the European Union NextGenerationEU/PRTR, and the Comunidad de
Madrid (Spain) under the multiannual agreement with Universidad Carlos III de Madrid ("Excelencia
para el Profesorado Universitario", EPUC3M18), part of the fifth regional research plan 2016–2020.
Projects:
Comunidad de Madrid. S2018/NMT-4331
Gobierno de España. RTI2018-095599-B-C21
Gobierno de España. PLEC2021-007819
Comunidad de Madrid. EPUC3M18
Keywords:
LiDAR odometry, scan matching, SLAM, scene segmentation, topological, harmony search
Rights:
© 2022 by the authors.
Licensee MDPI, Basel, Switzerland.
Attribution 3.0 Spain
Abstract:
Nowadays, most mobile robot applications use two-dimensional LiDAR for indoor mapping,
navigation, and low-level scene segmentation. However, single-data-type maps are not enough
in a six-degree-of-freedom world. Multi-LiDAR sensor fusion increases the capability of robots to
map the surrounding environment at different levels, exploiting the benefits of several data types
while counteracting the drawbacks of each sensor. This research introduces several techniques to achieve
mapping and navigation through indoor environments. First, a scan matching algorithm based on
ICP with a distance-threshold association counter is used as a multi-objective-like fitness function.
The results are then optimized with Harmony Search, without any initial guess or odometry. A
global map is then built during SLAM, reducing the accumulated error and demonstrating better
results than LiDAR odometry matching alone. As a novelty, both algorithms are implemented for
2D and 3D mapping, overlapping the resulting maps to fuse geometrical information at different
heights. Finally, a room segmentation procedure is proposed that analyzes this information, avoiding
the occlusions that appear in 2D maps; its benefits are demonstrated by implementing a door recognition
system. Experiments are conducted in both simulated and real scenarios, demonstrating the performance of
the proposed algorithms.
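To illustrate the idea summarized in the abstract, the sketch below shows how an ICP-style fitness function with a distance-threshold association counter could be evaluated for a candidate 2D rigid transform, which a Harmony Search loop would then minimize without an initial odometry guess. This is a minimal sketch, not the authors' implementation: the threshold value, the scalarization of the two objectives, and the function names (transform_scan, fitness, DIST_THRESHOLD) are assumptions for illustration only.

```python
# Illustrative sketch of an ICP-style, distance-threshold fitness function
# for 2D scan matching (NOT the paper's implementation; values are assumed).
import numpy as np
from scipy.spatial import cKDTree

DIST_THRESHOLD = 0.10  # metres; association gate (assumed value)

def transform_scan(scan_xy: np.ndarray, x: float, y: float, theta: float) -> np.ndarray:
    """Apply a 2D rigid transform (x, y, theta) to an (N, 2) point set."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return scan_xy @ R.T + np.array([x, y])

def fitness(candidate, source_scan, target_tree):
    """Lower is better: combines the mean distance of associated points with a
    penalty for points left unassociated by the distance-threshold counter."""
    moved = transform_scan(source_scan, *candidate)
    dists, _ = target_tree.query(moved)          # nearest neighbour in target scan
    associated = dists < DIST_THRESHOLD          # threshold association counter
    n_assoc = int(associated.sum())
    if n_assoc == 0:
        return np.inf                            # no overlap: reject candidate
    mean_err = dists[associated].mean()
    unmatched_ratio = 1.0 - n_assoc / len(source_scan)
    return mean_err + unmatched_ratio            # multi-objective-like scalarisation (assumed)

# Usage: a Harmony Search optimizer would sample candidate (x, y, theta) vectors
# from its harmony memory and keep those minimising this fitness, e.g.:
# target_tree = cKDTree(target_scan_xy)
# score = fitness((0.1, 0.0, 0.05), source_scan_xy, target_tree)
```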