Publication:
Mono-DCNet: Monocular 3D Object Detection via Depth-based Centroid Refinement and Pose Estimation

Publication date
2022-07-19
Publisher
IEEE
Abstract
3D object detection is a well-known problem in autonomous systems. Most existing methods rely on sensor fusion techniques combining Radar, LiDAR, and cameras. One of the remaining challenges, however, is to estimate the 3D shape and location of surrounding vehicles from a single monocular image, without additional 3D sensors such as Radar or LiDAR. To compensate for the missing depth information, a novel method for 3D vehicle detection is presented. In this work, instead of using the whole depth map and the viewing angle (allocentric angle), only the depth mask of each object is used to refine the projected centroid and to estimate its egocentric angle directly. The performance of the proposed method is tested and validated on the KITTI dataset, obtaining results comparable to other state-of-the-art methods for monocular 3D object detection.
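To illustrate the two ideas named in the abstract, the sketch below shows (a) how a per-object depth mask can provide a robust depth for back-projecting a refined 2D centroid to 3D through the pinhole model, and (b) the standard KITTI relation between the allocentric observation angle and the egocentric rotation that the paper regresses directly. This is only a minimal, hedged illustration under assumed conventions; the function names and the median-depth choice are hypothetical and do not reproduce the authors' exact pipeline.

import numpy as np

def refine_centroid_3d(centroid_2d, object_depths, K):
    """Back-project a 2D object centroid to a 3D centroid using only the
    depth values inside the object's instance mask (not the full depth map).

    centroid_2d   : (u, v) pixel coordinates of the projected 3D centroid.
    object_depths : 1-D array of depth values for pixels inside the mask.
    K             : 3x3 camera intrinsic matrix.
    """
    # A robust per-object depth estimate: the median of the masked depths.
    z = float(np.median(object_depths))

    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    u, v = centroid_2d

    # Pinhole back-projection of the refined projected centroid.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

def egocentric_from_allocentric(alpha, x, z):
    """KITTI convention: rotation_y (egocentric) = alpha (allocentric
    observation angle) + arctan(x / z). Shown only to relate the two angle
    definitions; the proposed method estimates the egocentric angle directly."""
    return alpha + np.arctan2(x, z)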
Description
Proceedings of the 2022 IEEE Intelligent Vehicles Symposium (IV), 33rd IEEE IV, 4-9 June 2022, Aachen, Germany.
Keywords
Image segmentation, Three-dimensional displays, Laser radar, Power demand, Shape, Intelligent vehicles, Pose estimation
Bibliographic citation
2022 IEEE Intelligent Vehicles Symposium (IV), IEEE, pp. 664-669.