Publication:
Interpretable global-local dynamics for the prediction of eye fixations in autonomous driving scenarios

Publication date
2020-12-01
Publisher
IEEE
Abstract
Human eye movements while driving reveal that visual attention largely depends on the context in which it occurs. Furthermore, an autonomous vehicle that performs this attentional function would be more reliable if its outputs were understandable. Capsule Networks have been presented as a great opportunity for the Computer Vision field, owing to their capability to structure and relate latent information. In this article, we present a hierarchical approach for the prediction of eye fixations in autonomous driving scenarios. Context-driven visual attention can be modeled by considering different conditions which, in turn, are represented as combinations of several spatio-temporal features. With the aim of learning these conditions, we have built an encoder-decoder network which merges visual feature information using a global-local definition of capsules. Two types of capsules are distinguished: representational capsules for features and discriminative capsules for conditions. The latter, together with eye fixations recorded with wearable eye-tracking glasses, allow the model to learn both to predict contextual conditions and to estimate visual attention by means of a multi-task loss function. Experiments show how our approach is able to express either frame-level (global) or pixel-wise (local) relationships between features and contextual conditions, allowing for interpretability while matching or improving the performance of related black-box systems in the literature. Indeed, our proposal offers an improvement of 29% in terms of information gain with respect to the best performance reported in the literature.
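
Illustrative sketch (not the authors' code): the abstract describes a multi-task loss that couples contextual-condition prediction with eye-fixation estimation. The specific loss terms, weights, and function names below are assumptions chosen only to illustrate how such an objective could be combined in PyTorch; the article's actual formulation may differ.

import torch
import torch.nn.functional as F

def multitask_loss(pred_map, gt_map, cond_logits, cond_labels, alpha=1.0, beta=1.0):
    """Combine a fixation-map term with a condition-classification term.

    pred_map, gt_map : (B, 1, H, W) non-negative saliency maps
    cond_logits      : (B, C) scores over C contextual conditions
    cond_labels      : (B,) ground-truth condition indices
    alpha, beta      : task weights (illustrative values, not from the paper)
    """
    eps = 1e-8
    # Normalise both maps to spatial probability distributions and use a
    # KL-divergence term, a common choice in saliency prediction (assumed here).
    p = pred_map.flatten(1) / (pred_map.flatten(1).sum(dim=1, keepdim=True) + eps)
    q = gt_map.flatten(1) / (gt_map.flatten(1).sum(dim=1, keepdim=True) + eps)
    fixation_term = (q * ((q + eps) / (p + eps)).log()).sum(dim=1).mean()

    # Frame-level contextual-condition prediction as standard cross-entropy.
    condition_term = F.cross_entropy(cond_logits, cond_labels)

    return alpha * fixation_term + beta * condition_term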
Keywords
Top-down visual attention, Eye fixation prediction, Context-based learning, Interpretability, Capsule networks, Convolutional neural networks, Autonomous driving
Bibliographic citation
Martinez-Cebrian, J., Fernandez-Torres, M. A. & Diaz-De-Maria, F. (2020). Interpretable Global-Local Dynamics for the Prediction of Eye Fixations in Autonomous Driving Scenarios. IEEE Access, 8, 217068–217085.