Modeling Perception in Autonomous Vehicles via 3D Convolutional Representations on LiDAR

This paper proposes an algorithm to model and process streams of LiDAR data within an autonomous vehicle framework. LiDAR is treated as an exteroceptive sensor that gives the vehicle dynamic 3D perception of its surroundings. We employ an encoder-decoder architecture based on 3D convolutional layers, called the 3D Convolution Encoder-Decoder (3D-CED), together with a transfer learning strategy, to extract a set of features from point clouds that are relevant in the context of autonomous driving. The resulting features enable inference of future point cloud data and detection of anomalies at multiple abstraction levels in controlled scenarios by means of a probabilistic switching dynamic model called the High-Dimensional Markov Jump Particle Filter (HD-MJPF). Moreover, a comparison is provided between piecewise linear, piecewise nonlinear, and nonlinear predictive models for anomaly detection at multiple abstraction levels. Our approach is evaluated with data collected from the LiDAR sensor of an autonomous vehicle performing certain tasks in a controlled environment.
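Applying 3D convolutional layers to raw LiDAR returns typically requires first discretizing the point cloud into a dense 3D grid. As an illustration only (the `voxelize` function, bounds, and grid resolution below are assumptions, not the authors' exact preprocessing pipeline), here is a minimal sketch of converting an (N, 3) point cloud into a binary occupancy volume that a 3D-convolutional encoder could consume:

```python
import numpy as np

def voxelize(points, bounds, grid_shape):
    """Convert an (N, 3) point cloud into a binary 3D occupancy grid.

    points:     array of xyz coordinates (meters).
    bounds:     ((xmin, xmax), (ymin, ymax), (zmin, zmax)) region of interest.
    grid_shape: (X, Y, Z) number of voxels per axis.
    """
    points = np.asarray(points, dtype=float)
    grid = np.zeros(grid_shape, dtype=np.uint8)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    # Keep only the points that fall inside the region of interest.
    mask = np.all((points >= lo) & (points < hi), axis=1)
    pts = points[mask]
    # Map each surviving point to its voxel index and mark that cell occupied.
    idx = ((pts - lo) / (hi - lo) * np.array(grid_shape)).astype(int)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid

# Toy cloud: two points inside a 10 m cube, one outside (it is discarded).
cloud = np.array([[0.5, 0.5, 0.5], [9.9, 9.9, 9.9], [-1.0, 0.0, 0.0]])
grid = voxelize(cloud, ((0, 10), (0, 10), (0, 10)), (10, 10, 10))
print(grid.shape, int(grid.sum()))  # (10, 10, 10) 2
```

The resulting `(X, Y, Z)` tensor (with a channel axis added) is the kind of input on which stacked 3D convolutions, and hence an encoder-decoder such as the 3D-CED, can operate.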
Keywords
3D convolutional encoder-decoder, Anomaly detection, Hierarchical generalized dynamic Bayesian network, High-dimensional Markov jump particle filter, LSTM, Transfer learning
Bibliographic citation
Iqbal, H., Campo, D., Marin-Plaza, P., Marcenaro, L., Gómez, D. M., & Regazzoni, C. S. (2022). Modeling perception in autonomous vehicles via 3D convolutional representations on LiDAR. IEEE Transactions on Intelligent Transportation Systems, 23(9), 14608–14619.