RT Journal Article
T1 Hierarchical generator of tracking global hypotheses
A1 Gómez Silva, María José
A1 Escalera Hueso, Arturo de la
A1 Armingol Moreno, José María
AB The presence of crowds, crossing people, occlusions, and individuals entering and leaving the monitored scene turns the automation of Multi-Object Tracking into a demanding task. Because of the difficulty of handling these situations, the data association between incoming observations and their corresponding identities can produce split, merged, and even missed tracks. This article proposes a Hierarchical Generator of Tracking Global Hypotheses (HGTGH) to prevent these errors. In this method, the data association process is divided into hierarchical levels according to multiple factors, such as how long each individual has been tracked or the number of consecutive frames in which it has been missed. A dedicated formulation of the association cost at each level properly combines several affinity metrics. Instead of generating hypotheses for each individual and analyzing them over a batch of future frames, the proposed method immediately generates a global hypothesis that describes the assignment of the whole set of identities at every incoming frame. The generated hypothesis also accounts for new people entering the scene. Thanks to this advantage, the proposed method simultaneously addresses the reduction of identity switches and the problem of starting new tracks. This novel data association method constitutes the core of an online tracking algorithm, which has been evaluated on the MOT17 dataset to demonstrate its effectiveness.
PB Elsevier
SN 0957-4174
YR 2022
FD 2022-11-11
LK https://hdl.handle.net/10016/36121
UL https://hdl.handle.net/10016/36121
LA eng
NO This work was supported by the Spanish Government through the CICYT projects [grant numbers: TRA2016-78886-C3-1-R, RTI2018-096036-B-C21], by Universidad Carlos III of Madrid through (PEAVAUTO-CM-UC3M), by the Comunidad de Madrid through SEGVAUTO-4.0-CM [grant number: P2018/EMT-4362], and by the Ministerio de Educación, Cultura y Deporte para la Formación de Profesorado Universitario [grant number: FPU14/02143]. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the GPUs used for this research.
DS e-Archivo
RD 27 Jul. 2024