Publication:
Towards node liability in federated learning: Computational cost and network overhead

Publication date
2021-09
Abstract
Many machine learning (ML) techniques suffer from the drawback that their output (e.g., a classification decision) is not clearly and intuitively connected to their input (e.g., an image). To cope with this issue, several explainable ML techniques have been proposed to, e.g., identify which pixels of an input image had the strongest influence on its classification. However, in distributed scenarios, it is often more important to connect decisions with the information used for model training and with the nodes supplying such information. To this end, in this paper we focus on federated learning and present a new methodology, named node liability in federated learning (NL-FL), which makes it possible to identify the source of the training information that contributed most to a given decision. After discussing NL-FL’s cost in terms of extra computation, storage, and network latency, we demonstrate its usefulness in an edge-based scenario. We find that NL-FL is able to swiftly identify misbehaving nodes and to exclude them from the training process, thereby improving learning accuracy.
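
The abstract describes NL-FL only at a high level. As a purely illustrative aside (not the paper's NL-FL procedure), the short Python sketch below shows one generic way a federated-learning server could score per-node influence: it compares the validation accuracy of the averaged model with and without each node's update, so that a node whose exclusion improves accuracy stands out as a candidate for removal. All names and details here (make_data, local_update, the sign-flipped update of node 2, the server-side validation set) are assumptions introduced for illustration only.

# Illustrative sketch of leave-one-out node-influence scoring in federated
# averaging. This is NOT the NL-FL algorithm of the paper; it only conveys
# the kind of per-node accountability the abstract refers to.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, w_true):
    # Synthetic linearly separable data for a toy binary classifier.
    X = rng.normal(size=(n, w_true.size))
    y = (X @ w_true > 0).astype(float)
    return X, y

def local_update(w, X, y, lr=0.1, epochs=20):
    # Plain logistic-regression gradient descent on one node's local data.
    w = w.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float(np.mean(((X @ w) > 0).astype(float) == y))

w_true = rng.normal(size=10)
X_val, y_val = make_data(500, w_true)            # server-side validation set (assumption)
nodes = [make_data(200, w_true) for _ in range(5)]

w_global = np.zeros_like(w_true)
updates = [local_update(w_global, X, y) for X, y in nodes]
updates[2] = -updates[2]                          # node 2 misbehaves (sign-flipped update)

# Leave-one-out influence: how does validation accuracy change if node i is excluded?
full = accuracy(np.mean(updates, axis=0), X_val, y_val)
for i in range(len(updates)):
    rest = np.mean([u for j, u in enumerate(updates) if j != i], axis=0)
    delta = accuracy(rest, X_val, y_val) - full
    print(f"node {i}: accuracy gain when excluded = {delta:+.3f}")
# A large positive gain flags the node as a candidate for exclusion from training.

In this toy run, excluding the misbehaving node yields a clearly positive accuracy gain, while excluding any honest node does not; the actual NL-FL methodology and its computation, storage, and latency costs are detailed in the paper itself.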
Keywords
Training, Costs, Machine learning, Collaborative work, Computational efficiency, Servers
Bibliographic citation
Malandrino, Francesco; Chiasserini, Carla Fabiana. Towards node liability in federated learning: computational cost and network overhead. In: IEEE Communications Magazine, 59(9), Sep. 2021, pp. 72-77.