Show simple item record

Author: Qayyum, Adnan
Author: Janjua, Muhammad Umar
Author: Qadir, Junaid
Available date: 2023-07-13T05:40:51Z
Publication Date: 2022
Publication Name: Computers and Security
Resource: Scopus
ISSN: 0167-4048
URI: http://dx.doi.org/10.1016/j.cose.2022.102827
URI: http://hdl.handle.net/10576/45571
Abstract: One of the key challenges in federated learning (FL) is the detection of malicious parameter updates. In a typical FL setup, the presence of malicious client(s) can potentially derail the overall training of the shared global model by influencing the aggregation process of the server. In this paper, we present a hybrid learning-based method for the detection of poisoned/malicious parameter updates from malicious clients. Furthermore, to highlight the effectiveness of the proposed method, we provide empirical evidence by evaluating it against a well-known label flipping attack on three different image classification tasks. The results suggest that our method can effectively detect and discard poisoned parameter updates without causing a significant drop in the overall learning performance of the FL paradigm. Our proposed method achieved an average malicious-parameter-update detection accuracy of 97.57%, 92.35%, and 89.42% on MNIST image classification, CIFAR image classification, and APTOS diabetic retinopathy (DR) detection, respectively. Our method provides a performance gain of approximately 2% over a recent, similar state-of-the-art method on MNIST classification and comparable performance on federated extended MNIST (FEMNIST). © 2022 The Authors
Sponsor: The first and last authors would like to acknowledge funding from NCCS Blockchain Lab and Qatar National Library (QNL), respectively. This research project was partially funded by the Blockchain Research Lab at Information Technology University (ITU), Lahore, Pakistan. The publication of this article was funded by the Qatar National Library (QNL).
Language: en
Publisher: Elsevier
Subject: Adversarial ML; Federated learning; Label flipping attack; Robust FL; Robust ML
Title: Making federated learning robust to adversarial attacks by learning data and model association
Type: Article
Volume Number: 121
dc.accessType: Abstract Only
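The abstract describes detecting and discarding poisoned parameter updates (e.g., from a label-flipping client) before server-side aggregation. As a minimal illustration of that general idea only — not the paper's hybrid learning-based method — the sketch below simulates four clients training a toy linear model, one of which flips its labels, and has the server reject updates whose cosine similarity to the mean update is negative. All names (`client_update`, `detect_and_aggregate`) and the similarity threshold are assumptions for this sketch.

```python
import numpy as np

def client_update(w, X, y, lr=0.1):
    # One gradient-descent step on a least-squares loss; returns the weight delta.
    grad = X.T @ (X @ w - y) / len(y)
    return -lr * grad

def detect_and_aggregate(updates, threshold=0.0):
    # Flag updates whose cosine similarity to the mean update falls below
    # `threshold` (label-flipped gradients tend to point the opposite way),
    # then average only the accepted updates.
    mean = np.mean(updates, axis=0)
    sims = [u @ mean / (np.linalg.norm(u) * np.linalg.norm(mean) + 1e-12)
            for u in updates]
    accepted = [u for u, s in zip(updates, sims) if s >= threshold]
    flagged = [i for i, s in enumerate(sims) if s < threshold]
    return np.mean(accepted, axis=0), flagged

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
w = np.zeros(2)
X = [rng.normal(size=(50, 2)) for _ in range(4)]
y = [Xi @ w_true for Xi in X]
y[3] = -y[3]  # client 3 performs a (toy) label-flipping attack

updates = np.stack([client_update(w, Xi, yi) for Xi, yi in zip(X, y)])
agg, flagged = detect_and_aggregate(updates)
print(flagged)  # the flipped client should be flagged
```

A simple similarity filter like this stands in for the paper's learned detector; the point is only the pipeline shape: score each client update, discard outliers, then aggregate the rest.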


Files in this item


There are no files associated with this item.

