This paper presents a Federated Learning (FL) algorithm that decentralizes any FL solution employing a model-averaging procedure. The proposed algorithm attains faster convergence and no performance loss with respect to the original centralized FL implementation, while reducing the communication overhead compared to existing consensus-based and centralized solutions. To this end, a multi-hop consensus protocol, originally introduced in dynamical-systems consensus theory and analyzed via standard Lyapunov stability arguments, is proposed to guarantee that all federation clients share the same average model using only information obtained from their m-step neighbours. Experimental results on different communication topologies and on the MNIST and MedMNIST v2 datasets validate the properties of the algorithm, showing a performance drop of about 1% with respect to the centralized FL setting.
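The core idea of averaging client models through iterated exchanges with m-step neighbours can be illustrated with a minimal sketch. The code below is not the paper's protocol: it assumes a scalar model parameter per client, builds the m-hop neighbourhood from the adjacency matrix, and runs a standard discrete-time Laplacian consensus update; all function names, the step size, and the example topology are illustrative.

```python
import numpy as np

def m_hop_neighbors(A, m):
    """Clients reachable within m hops: (I + A)^m > 0, excluding self-loops."""
    n = A.shape[0]
    R = np.linalg.matrix_power(np.eye(n) + A, m) > 0
    np.fill_diagonal(R, False)
    return R.astype(float)

def consensus_average(models, A, m, eps, steps):
    """Discrete-time consensus on the m-hop graph:
    x_i <- x_i + eps * sum over j in N_m(i) of (x_j - x_i)."""
    x = np.asarray(models, dtype=float)
    Am = m_hop_neighbors(A, m)
    L = np.diag(Am.sum(axis=1)) - Am   # Laplacian of the m-hop graph
    for _ in range(steps):
        x = x - eps * (L @ x)          # each client uses only m-hop information
    return x

# Line topology with 4 clients: 1-2-3-4 (adjacency is illustrative)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
models = [1.0, 2.0, 3.0, 4.0]  # one scalar "model parameter" per client
out = consensus_average(models, A, m=2, eps=0.1, steps=200)
# on an undirected graph this update preserves the mean, so all
# clients converge toward the average of the initial models (2.5)
```

Because the m-hop graph is denser than the original topology, the consensus iteration mixes information faster, which is the intuition behind the reduced number of communication rounds claimed in the abstract. The step size eps must be small enough relative to the maximum m-hop degree for the iteration to be stable.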
Publication details
2023, IEEE ACCESS, Pages 80613-80623 (volume: 11)
A Discrete-Time Multi-Hop Consensus Protocol for Decentralized Federated Learning (01a Journal article)
Menegatti D., Giuseppi A., Manfredi S., Pietrabissa A.
Research group: Networked Systems