RT Journal Article
T1 Towards node liability in federated learning: Computational cost and network overhead
A1 Malandrino, Francesco
A1 Chiasserini, Carla Fabiana
A2 IEEE
AB Many machine learning (ML) techniques suffer from the drawback that their output (e.g., a classification decision) is not clearly and intuitively connected to their input (e.g., an image). To cope with this issue, several explainable ML techniques have been proposed to, e.g., identify which pixels of an input image had the strongest influence on its classification. However, in distributed scenarios, it is often more important to connect decisions with the information used for the model training and the nodes supplying such information. To this end, in this paper we focus on federated learning and present a new methodology, named node liability in federated learning (NL-FL), which makes it possible to identify the source of the training information that most contributed to a given decision. After discussing NL-FL’s cost in terms of extra computation, storage, and network latency, we demonstrate its usefulness in an edge-based scenario. We find that NL-FL is able to swiftly identify misbehaving nodes and to exclude them from the training process, thereby improving learning accuracy.
SN 0163-6804
YR 2021
FD 2021-09
LK https://hdl.handle.net/10016/34234
UL https://hdl.handle.net/10016/34234
LA eng
NO This work was supported through the EU 5Growth project (Grant No. 856709).
DS e-Archivo
RD 27 Jul. 2024