Publication:
Sparse LiDAR and Stereo Fusion (SLS-Fusion) for Depth Estimation and 3D Object Detection

dc.affiliation.dpto: UC3M. Departamento de Informática
dc.affiliation.grupoinv: UC3M. Grupo de Investigación: Inteligencia Artificial Aplicada (GIAA)
dc.contributor.author: Mai, Nguyen-Anh-Minh
dc.contributor.author: Duthon, Pierre
dc.contributor.author: Khoudour, Louahdi
dc.contributor.author: Crouzil, Alain
dc.contributor.author: Velastin Carroza, Sergio Alejandro
dc.date.accessioned: 2021-06-02T10:27:20Z
dc.date.available: 2021-06-02T10:27:20Z
dc.date.issued: 2021-03-17
dc.description: Proceedings of the 11th International Conference on Pattern Recognition Systems (ICPRS-21), conference paper, 17-19 March 2021, Universidad de Talca, Curicó, Chile.
dc.description.abstract: The ability to accurately detect and localize objects is recognized as the most important requirement for the perception of self-driving cars. Moving from 2D to 3D object detection, the most difficult task is to determine the distance from the ego-vehicle to objects. Expensive sensors like LiDAR can provide precise and accurate depth information, so most studies have tended to focus on this sensor, showing a performance gap between LiDAR-based and camera-based methods. Although many authors have investigated how to fuse LiDAR with RGB cameras, as far as we know there are no studies that fuse LiDAR and stereo in a deep neural network for the 3D object detection task. This paper presents SLS-Fusion, a new approach that fuses data from a 4-beam LiDAR and a stereo camera via a neural network for depth estimation, producing denser depth maps and thereby improving 3D object detection performance. Since a 4-beam LiDAR is cheaper than the well-known 64-beam LiDAR, this approach is also classified as a low-cost-sensor-based method. Evaluation on the KITTI benchmark shows that the proposed method significantly improves depth estimation performance compared to a baseline method. When applied to 3D object detection, a new state of the art among low-cost-sensor-based methods is achieved.
dc.identifier.bibliographicCitation: Mai, N-A-M., et al. (2021, March). Sparse LiDAR and Stereo Fusion (SLS-Fusion) for Depth Estimation and 3D Object Detection. In: 11th International Conference on Pattern Recognition Systems (ICPRS-21), conference paper, 17-19 March 2021, Universidad de Talca, Curicó, Chile.
dc.identifier.other: http://www.icprs.org/
dc.identifier.publicationtitle: Sparse LiDAR and Stereo Fusion (SLS-Fusion) for Depth Estimation and 3D Object Detection
dc.identifier.uri: https://hdl.handle.net/10016/32824
dc.identifier.uxxi: CC/0000032474
dc.language.iso: eng
dc.publisher: IET Digital Library
dc.relation.eventdate: 2021-03-17
dc.relation.eventplace: Universidad de Talca, Curicó, Chile (virtual conference)
dc.relation.eventtitle: 11th International Conference on Pattern Recognition Systems (ICPRS-21)
dc.rights: © Institution of Engineering and Technology, 2021.
dc.rights.accessRights: open access
dc.subject.eciencia: Informática
dc.subject.other: Autonomous vehicle
dc.subject.other: 3D object detection
dc.subject.other: Depth completion
dc.subject.other: Lidar stereo fusion
dc.subject.other: Pseudo lidar
dc.title: Sparse LiDAR and Stereo Fusion (SLS-Fusion) for Depth Estimation and 3D Object Detection
dc.type: conference proceedings
dc.type.hasVersion: AM
dspace.entity.type: Publication
Files
Original bundle
Name: Sparse_ICPRS_2021_ps.pdf
Size: 1.85 MB
Format: Adobe Portable Document Format
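
Note: the abstract and the "Pseudo lidar" keyword above describe a pseudo-LiDAR-style pipeline: a network takes a stereo pair plus a sparse 4-beam LiDAR depth map, predicts dense depth, and the dense depth is back-projected into a point cloud for a 3D detector. The following is a minimal, illustrative PyTorch sketch of that general idea, not the authors' SLS-Fusion architecture; the toy network, tensor shapes, and camera intrinsics are assumptions chosen for demonstration only.

import torch
import torch.nn as nn

class NaiveFusionNet(nn.Module):
    """Toy network: concatenate left/right RGB and a sparse depth map,
    predict a dense depth map. Stands in for a real fusion architecture."""
    def __init__(self):
        super().__init__()
        # 3 + 3 RGB channels + 1 sparse-depth channel = 7 input channels.
        self.net = nn.Sequential(
            nn.Conv2d(7, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
            nn.Softplus(),  # keep predicted depth positive
        )

    def forward(self, left, right, sparse_depth):
        x = torch.cat([left, right, sparse_depth], dim=1)
        return self.net(x)

def depth_to_pseudo_lidar(depth, fu, fv, cu, cv):
    """Back-project a dense depth map (H, W) into an (N, 3) point cloud
    using pinhole intrinsics, as in pseudo-LiDAR pipelines."""
    h, w = depth.shape
    v, u = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    z = depth
    x = (u - cu) * z / fu
    y = (v - cv) * z / fv
    return torch.stack([x, y, z], dim=-1).reshape(-1, 3)

# Random data standing in for downscaled KITTI-sized inputs.
left = torch.rand(1, 3, 96, 320)
right = torch.rand(1, 3, 96, 320)
sparse = torch.zeros(1, 1, 96, 320)   # mostly empty: 4-beam LiDAR is sparse
sparse[:, :, ::24, :] = torch.rand(1, 1, 4, 320) * 80.0  # 4 scan lines

dense = NaiveFusionNet()(left, right, sparse)
cloud = depth_to_pseudo_lidar(dense[0, 0].detach(),
                              fu=721.5, fv=721.5, cu=160.0, cv=48.0)
print(cloud.shape)  # (96*320, 3) pseudo-LiDAR points for a 3D detector

The resulting point cloud can then be fed to any LiDAR-based 3D object detector, which is what makes the sparse-sensor setup a drop-in, low-cost alternative to a 64-beam LiDAR.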