Federated Learning Stability Under Byzantine Attacks
Abstract
Federated Learning (FL) is a machine learning approach that enables private, decentralized model training. Although FL has proven useful in many applications, its privacy constraints limit the transparency of model updates, leaving it vulnerable to several types of attacks. In particular, we show through detailed convergence analyses that, under the traditional model-combining scheme, even a single Byzantine node that keeps sending random reports causes the entire FL model to diverge to non-useful solutions. We also propose a low-complexity model-combining approach that stabilizes the FL system and makes it converge to a suboptimal solution simply by controlling the model norm. The Physikalisch-Technische Bundesanstalt extra-large electrocardiogram (PTB-XL ECG) dataset is used to validate these findings and to demonstrate the efficiency of the proposed approach in identifying heart anomalies.
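The abstract does not reproduce the aggregation rules themselves, so the following is a minimal sketch of the failure mode and the remedy it describes, assuming plain unweighted averaging (FedAvg) as the "traditional" model-combining scheme and per-client norm clipping as one plausible way to "control the model norm". The function names and the `max_norm` parameter are illustrative, not the paper's actual method.

```python
import numpy as np

def fedavg(updates):
    """Traditional combining scheme (assumed): unweighted mean of client updates."""
    return np.mean(updates, axis=0)

def norm_bounded_aggregate(updates, max_norm=1.0):
    """Hypothetical norm-controlled aggregation: rescale each client update
    to a fixed norm budget before averaging, so a single Byzantine client
    sending arbitrarily large random reports cannot dominate the mean."""
    clipped = []
    for u in updates:
        n = np.linalg.norm(u)
        scale = min(1.0, max_norm / (n + 1e-12))  # shrink only oversized updates
        clipped.append(u * scale)
    return np.mean(clipped, axis=0)

# Demo: 9 honest clients near the true update, 1 Byzantine client.
rng = np.random.default_rng(0)
true_update = np.ones(4)
honest = [true_update + 0.1 * rng.standard_normal(4) for _ in range(9)]
byzantine = [1e6 * rng.standard_normal(4)]  # unbounded random report
updates = honest + byzantine

print("FedAvg:      ", fedavg(updates))  # dominated by the attacker
print("Norm-bounded:", norm_bounded_aggregate(updates, max_norm=np.linalg.norm(true_update)))
```

Under these assumptions, the plain average is pulled arbitrarily far from the honest consensus by one attacker, while the norm-bounded rule caps each client's influence at `max_norm / num_clients`, which is consistent with the abstract's claim of stabilizing the system at a suboptimal (clipping-biased) solution.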