Multi-LiDAR Mapping for Scene Segmentation in Indoor Environments for Mobile Robots

dc.affiliation.dpto: UC3M. Departamento de Ingeniería de Sistemas y Automática
dc.affiliation.grupoinv: UC3M. Grupo de Investigación: Laboratorio de Robótica (Robotics Lab)
dc.contributor.author: González Prieto, Pavel Enrique
dc.contributor.author: Mora Velasco, Alicia
dc.contributor.author: Garrido Bullón, Luis Santiago
dc.contributor.author: Barber Castaño, Ramón Ignacio
dc.contributor.author: Moreno Lorente, Luis Enrique
dc.contributor.funder: Comunidad de Madrid
dc.contributor.funder: Ministerio de Economía y Competitividad (España)
dc.contributor.funder: Ministerio de Ciencia e Innovación (España)
dc.contributor.funder: Universidad Carlos III de Madrid
dc.description.abstract: Nowadays, most mobile robot applications use two-dimensional LiDAR for indoor mapping, navigation, and low-level scene segmentation. However, single-data-type maps are not enough in a world with six degrees of freedom. Multi-LiDAR sensor fusion increases a robot's ability to map the surrounding environment at different levels, exploiting the benefits of several data types while offsetting the drawbacks of each sensor. This research introduces several techniques for mapping and navigating indoor environments. First, a scan matching algorithm based on ICP with a distance-threshold association counter is used as a multi-objective-like fitness function. Then, Harmony Search optimizes the results without any initial guess or odometry. A global map is built during SLAM, reducing the accumulated error and yielding better results than LiDAR odometry matching alone. As a novelty, both algorithms are implemented for 2D and 3D mapping, and the resulting maps are overlapped to fuse geometrical information at different heights. Finally, a room segmentation procedure is proposed that analyzes this information, avoids the occlusions that appear in 2D maps, and demonstrates its benefits through a door recognition system. Experiments in both simulated and real scenarios confirm the performance of the proposed algorithms.
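The abstract's scan-matching idea, a fitness function that combines a distance-threshold association counter with the matched distances, minimized by Harmony Search without any initial guess, can be sketched roughly as follows. This is an illustrative 2D toy, not the paper's implementation: the brute-force nearest-neighbour search, the penalty form of the fitness, and all parameter values (threshold, harmony memory size, HMCR, PAR) are assumptions.

```python
# Hedged sketch (not the authors' code): scan matching whose fitness mixes a
# distance-threshold association count with the mean matched distance, and a
# minimal Harmony Search over the pose (x, y, theta). Values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def transform(scan, x, y, theta):
    """Apply a 2D rigid transform (x, y, theta) to an (N, 2) scan."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return scan @ R.T + np.array([x, y])

def fitness(params, source, target, threshold=0.1):
    """Multi-objective-like cost: points associated within `threshold`
    contribute their mean distance; unassociated points add a penalty."""
    moved = transform(source, *params)
    # Brute-force nearest-neighbour distances (fine for small toy scans).
    d = np.linalg.norm(moved[:, None, :] - target[None, :, :], axis=2).min(axis=1)
    matched = d < threshold
    if not matched.any():
        return np.inf
    return d[matched].mean() + (len(source) - matched.sum()) * threshold

def harmony_search(source, target, bounds, hms=20, iters=300, hmcr=0.9, par=0.3):
    """Minimal Harmony Search over (x, y, theta); no odometry prior needed."""
    lo, hi = np.array(bounds).T
    memory = lo + rng.random((hms, 3)) * (hi - lo)      # random initial harmonies
    costs = np.array([fitness(h, source, target) for h in memory])
    for _ in range(iters):
        new = np.empty(3)
        for j in range(3):
            if rng.random() < hmcr:                     # reuse a stored value
                new[j] = memory[rng.integers(hms), j]
                if rng.random() < par:                  # small pitch adjustment
                    new[j] += rng.normal(0.0, 0.05 * (hi[j] - lo[j]))
            else:                                       # fresh random value
                new[j] = lo[j] + rng.random() * (hi[j] - lo[j])
        new = np.clip(new, lo, hi)
        c = fitness(new, source, target)
        worst = costs.argmax()
        if c < costs[worst]:                            # replace worst harmony
            memory[worst], costs[worst] = new, c
    best = costs.argmin()
    return memory[best], costs[best]

# Toy example: recover a known rigid transform between two synthetic scans.
source = rng.random((80, 2)) * 4.0
target = transform(source, 0.3, -0.2, 0.15)
bounds = [(-1.0, 1.0), (-1.0, 1.0), (-0.5, 0.5)]
pose, cost = harmony_search(source, target, bounds)
```

The counter term makes the cost favor poses that associate many points, while the mean-distance term tightens them, which is one way to read the paper's "multi-objective-like" fitness.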
dc.description.sponsorship: This work was supported by HEROITEA: Heterogeneous Intelligent Multi-Robot Team for Assistance of Elderly People (RTI2018-095599-B-C21), funded by the Spanish Ministerio de Economía y Competitividad, and by RoboCity2030-DIH-CM, Madrid Robotics Digital Innovation Hub (S2018/NMT-4331), funded by "Programas de Actividades I+D en la Comunidad de Madrid" and co-funded by Structural Funds of the EU. We also acknowledge the R&D&I project PLEC2021-007819, funded by MCIN/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR, and the Comunidad de Madrid (Spain) under the multiannual agreement with Universidad Carlos III de Madrid ("Excelencia para el Profesorado Universitario", EPUC3M18), part of the fifth regional research plan 2016-2020.
dc.identifier.bibliographicCitation: Gonzalez, P., Mora, A., Garrido, S., Barber, R., & Moreno, L. (2022). Multi-LiDAR Mapping for Scene Segmentation in Indoor Environments for Mobile Robots. Sensors, 22(10), 3690-3710.
dc.publisher: MDPI AG
dc.relation.projectID: Comunidad de Madrid. S2018/NMT-4331
dc.relation.projectID: Gobierno de España. RTI2018-095599-B-C21
dc.relation.projectID: Gobierno de España. PLEC2021-007819
dc.relation.projectID: Comunidad de Madrid. EPUC3M18
dc.rights: © 2022 by the authors. Licensee MDPI, Basel, Switzerland.
dc.rights: Atribución 3.0 España (Attribution 3.0 Spain)
dc.rights.accessRights: open access
dc.subject.eciencia: Robótica e Informática Industrial
dc.subject.other: LiDAR odometry
dc.subject.other: Scan matching
dc.subject.other: Scene segmentation
dc.subject.other: Harmony Search
dc.title: Multi-LiDAR Mapping for Scene Segmentation in Indoor Environments for Mobile Robots
dc.type: research article
Original bundle: 1 file, Adobe Portable Document Format (PDF), 7.7 MB