Federated learning (FedL) is a machine learning (ML) technique used to train deep neural networks (DeepNNs) in a distributed way, without the need to share data among the federated training clients. FedL was originally proposed for edge computing and Internet of Things (IoT) tasks in which a centralized server coordinates and governs the training process. To remove the design limitation imposed by this centralized entity, this work proposes two different solutions for decentralizing existing FedL algorithms, enabling the application of FedL to networks with arbitrary communication topologies and thus extending its domain of application to more complex scenarios and new tasks. One of the two proposed algorithms, called FedLCon, builds on results from discrete-time weighted average consensus theory and is able to match the performance of standard centralized FedL solutions, as shown by the reported validation tests.
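FedLCon itself is specified in the article; as a rough illustration of the underlying idea only, the sketch below shows a discrete-time weighted average consensus scheme in which each client exchanges parameters with its graph neighbors until every node holds the federation-wide, sample-size-weighted model average that a central FedAvg-style server would otherwise compute. The helper names (`metropolis_weights`, `weighted_consensus`) and the Metropolis-Hastings mixing matrix are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def metropolis_weights(adj):
    """Doubly-stochastic mixing matrix from a symmetric 0/1 adjacency
    matrix, via Metropolis-Hastings weights (hypothetical helper)."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and adj[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
    W[np.diag_indices(n)] = 1.0 - W.sum(axis=1)  # row sums = 1
    return W

def weighted_consensus(models, weights, adj, iters=100):
    """Each node i starts from its local model (a row of `models`) and a
    scalar weight (e.g., its local dataset size). Running consensus on
    the weighted models and on the weights in parallel, then taking the
    ratio, drives every node to sum_i w_i x_i / sum_i w_i."""
    W = metropolis_weights(adj)
    num = models * weights[:, None]     # per-node weighted parameters
    den = weights.astype(float)         # per-node weights
    for _ in range(iters):
        num = W @ num                   # neighbor-only exchanges
        den = W @ den
    return num / den[:, None]           # every row -> weighted average

# Toy check: 4 clients on a ring topology, 3-parameter models.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]])
models = np.random.randn(4, 3)          # stand-in for trained weights
sizes = np.array([100, 50, 200, 150])   # local dataset sizes
print(weighted_consensus(models, sizes, adj))
print(np.average(models, axis=0, weights=sizes))  # centralized reference
```

After enough iterations every row of the output coincides with the centralized weighted average, which is the sense in which a consensus-based scheme can reconstruct the behavior of a centralized aggregator without any node ever seeing all the models at once.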
Publication details
2022, Machine Intelligence Research, vol. 19, pp. 319-330
A Weighted Average Consensus Approach for Decentralized Federated Learning (journal article)
Giuseppi A., Manfredi S., Pietrabissa A.
Research group: Networked Systems