Title: Mono-DCNet: Monocular 3D Object Detection via Depth-based Centroid Refinement and Pose Estimation
Authors: Astudillo Olalla, Armando; Al Kaff, Abdulla Hussein Abdulrahman; García Fernández, Fernando
Date issued: 2022-07-19
Date available: 2023-03-02
Citation: 2022 IEEE Intelligent Vehicles Symposium (IV). IEEE. Pp. 664-669.
Event: Proceedings of the 2022 IEEE Intelligent Vehicles Symposium (IV), 33rd IEEE IV, 04-09 June 2022, Aachen, Germany.
ISBN: 978-1-6654-8821-1
DOI: https://doi.org/10.1109/IV51971.2022.9827373
URI: https://hdl.handle.net/10016/36723
Type: conference proceedings
Language: eng
Rights: ©2022 IEEE. Embargoed access.
Research group: Robótica e Informática Industrial
Keywords: Image segmentation; Three-dimensional displays; Laser radar; Power demand; Shape; Intelligent vehicles; Pose estimation
Record ID: CC/0000034081

Abstract: 3D object detection is a well-known problem for autonomous systems. Most existing methods use sensor-fusion techniques combining Radar, LiDAR, and cameras. One remaining challenge, however, is to estimate the 3D shape and location of surrounding vehicles from a single monocular image, without additional 3D sensors such as Radar or LiDAR. To compensate for the lack of depth information, a novel method for 3D vehicle detection is presented. In this work, instead of using the whole depth map and the viewing angle (allocentric angle), only the depth mask of each object is used to refine the projected centroid and estimate its egocentric angle directly. The performance of the proposed method is tested and validated on the KITTI dataset, obtaining results comparable to other state-of-the-art methods for Monocular 3D Object Detection.
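The geometry behind the abstract can be illustrated with a short sketch: back-projecting a 2D box centroid to 3D using the median depth of an object's depth mask and the camera intrinsics, and converting an allocentric observation angle to an egocentric yaw via the standard relation theta = alpha + arctan2(x, z). This is a minimal illustrative sketch of the underlying geometry, not the paper's exact Mono-DCNet refinement; the function names and the use of the median are assumptions.

```python
import numpy as np

def backproject_centroid(u, v, depth_mask, K):
    """Back-project a 2D centroid (u, v) to a 3D point.

    Uses the median of the object's (assumed metric) depth mask as the
    depth estimate and the pinhole intrinsics K. Illustrative only.
    """
    z = np.median(depth_mask[depth_mask > 0])   # robust per-object depth
    x = (u - K[0, 2]) * z / K[0, 0]             # (u - cx) * z / fx
    y = (v - K[1, 2]) * z / K[1, 1]             # (v - cy) * z / fy
    return np.array([x, y, z])

def egocentric_from_allocentric(alpha, x, z):
    """Standard conversion between the allocentric (observation) angle
    alpha and the egocentric (global) yaw: theta = alpha + arctan2(x, z)."""
    return alpha + np.arctan2(x, z)
```

For a pixel at the principal point with a constant 10 m depth mask, the back-projected centroid lands on the optical axis at (0, 0, 10); for an object straight ahead (x = 0), the egocentric yaw equals the allocentric angle.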