Simple item record

Author: Qayyum, Adnan
Author: Janjua, Muhammad Umar
Author: Qadir, Junaid
Available date: 2023-07-13T05:40:51Z
Publication date: 2022
Publication name: Computers and Security
Source: Scopus
ISSN: 0167-4048
URI: http://dx.doi.org/10.1016/j.cose.2022.102827
URI: http://hdl.handle.net/10576/45571
Abstract: One of the key challenges in federated learning (FL) is the detection of malicious parameter updates. In a typical FL setup, the presence of malicious clients can potentially demolish the overall training of the shared global model by influencing the server's aggregation process. In this paper, we present a hybrid learning-based method for detecting poisoned/malicious parameter updates from malicious clients. Furthermore, to highlight the effectiveness of the proposed method, we provide empirical evidence by evaluating it against a well-known label-flipping attack on three different image classification tasks. The results suggest that our method can effectively detect and discard poisoned parameter updates without causing a significant drop in the overall performance of FL training. Our proposed method achieved average malicious-parameter-update detection accuracies of 97.57%, 92.35%, and 89.42% on MNIST classification, CIFAR classification, and APTOS diabetic retinopathy (DR) detection, respectively. Our method provides a performance gain of approximately 2% over a recent, similar state-of-the-art method on MNIST classification, and comparable performance on Federated Extended MNIST (FEMNIST). © 2022 The Authors
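The abstract describes detecting poisoned parameter updates caused by a label-flipping attack before server-side aggregation. The paper's own hybrid learning-based detector is not reproduced here; as a minimal illustrative sketch, the snippet below simulates one malicious client whose update opposes the honest clients' direction (the typical effect of flipped labels) and flags it with a simple median-distance outlier test. The data, threshold, and `filter_updates` helper are all hypothetical, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Honest clients send gradient-like updates clustered around a common
# direction; a label-flipping client's update points roughly the opposite way.
honest = [rng.normal(loc=1.0, scale=0.1, size=10) for _ in range(9)]
malicious = [-np.ones(10)]  # hypothetical flipped-label update
updates = honest + malicious


def filter_updates(updates, z_thresh=2.0):
    """Keep updates whose distance from the coordinate-wise median
    is not a z-score outlier; return a boolean keep-mask."""
    stacked = np.stack(updates)
    median = np.median(stacked, axis=0)          # robust central update
    dists = np.linalg.norm(stacked - median, axis=1)
    z = (dists - dists.mean()) / (dists.std() + 1e-12)
    return z < z_thresh


keep = filter_updates(updates)
# The server would then average only the kept updates:
aggregated = np.stack(updates)[keep].mean(axis=0)
```

In this toy setup the malicious update (the last entry) is flagged while all honest updates are kept, so aggregation proceeds over clean updates only, mirroring the detect-and-discard behavior the abstract describes.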
Sponsor: The first and last authors would like to acknowledge funding from NCCS Blockchain Lab and Qatar National Library (QNL), respectively. This research project was partially funded by the Blockchain Research Lab at Information Technology University (ITU), Lahore, Pakistan. The publication of this article was funded by the Qatar National Library (QNL).
Language: en
Publisher: Elsevier
Subject: Adversarial ML
Federated learning
Label flipping attack
Robust FL
Robust ML
Title: Making federated learning robust to adversarial attacks by learning data and model association
Type: Article
Volume Number: 121


Files in this item

Files | Size | Format | View

There are no files associated with this item.

This item appears in the following Collection(s)
