Simple item record

Author: Ali, Hassan
Author: Butt, Muhammad Atif
Author: Filali, Fethi
Author: Al-Fuqaha, Ala
Author: Qadir, Junaid
Date available: 2024-09-30T07:16:20Z
Publication date: 2024
Publication name: IEEE Transactions on Intelligent Transportation Systems
Identifier: http://dx.doi.org/10.1109/TITS.2023.3343971
Citation: H. Ali, M. A. Butt, F. Filali, A. Al-Fuqaha and J. Qadir, "Consistent Valid Physically-Realizable Adversarial Attack Against Crowd-Flow Prediction Models," in IEEE Transactions on Intelligent Transportation Systems, vol. 25, no. 6, pp. 5567-5582, June 2024, doi: 10.1109/TITS.2023.3343971.
ISSN: 1524-9050
URI: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85181563422&origin=inward
URI: http://hdl.handle.net/10576/59517
Abstract: Recent works have shown that deep learning (DL) models can effectively learn city-wide crowd-flow patterns, which can be used for more effective urban planning and smart city management. However, DL models are known to perform poorly under inconspicuous adversarial perturbations. Although many works have studied these adversarial perturbations in general, the adversarial vulnerabilities of deep crowd-flow prediction (CFP) models in particular have remained largely unexplored. In this paper, we perform a rigorous analysis of the adversarial vulnerabilities of DL-based CFP models under multiple threat settings, making three-fold contributions: 1) we propose CaV-detect by formally identifying two novel properties - Consistency and Validity - of the CFP inputs that enable the detection of standard adversarial inputs with 0% false acceptance rate (FAR); 2) we leverage universal adversarial perturbations and an adaptive adversarial loss to present adaptive adversarial attacks that evade the CaV-detect defense; 3) we propose CVP, a Consistent, Valid and Physically-realizable adversarial attack that explicitly incorporates the consistency and validity priors into the perturbation generation mechanism. We find that although crowd-flow models are vulnerable to adversarial perturbations, it is extremely challenging to simulate these perturbations in physical settings, notably when CaV-detect is in place. We also show that the CVP attack considerably outperforms the adaptively modified standard attacks in terms of the FAR and adversarial loss metrics. We conclude with useful insights emerging from our work and highlight promising future research directions.
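As informal context for the abstract above, the sketch below illustrates the kind of input screening CaV-detect describes: rejecting crowd-flow inputs that violate validity or consistency properties. The tensor layout, the non-negative-integer and range rule used for validity, and the inflow/outflow-agreement rule used for consistency are assumptions made for illustration only; the paper's formal definitions are not reproduced in this record.

import numpy as np

def cav_detect(x, max_count=1000, rel_tol=0.05):
    """Flag a crowd-flow input as adversarial when it breaks the (assumed)
    validity or consistency properties.

    x        : array of shape (T, 2, H, W) holding per-frame inflow and
               outflow maps over an H x W grid of city regions (assumed layout).
    max_count: assumed upper bound on the crowd count of a single region.
    rel_tol  : allowed relative mismatch between total inflow and outflow.
    Returns True when the input is flagged (rejected), False otherwise.
    """
    x = np.asarray(x, dtype=float)

    # Validity (assumed): crowd counts are non-negative integers within range.
    valid = bool(np.all(x >= 0) and np.all(x <= max_count)
                 and np.allclose(x, np.round(x)))

    # Consistency (assumed): per frame, city-wide inflow and outflow mass agree.
    inflow = x[:, 0].sum(axis=(1, 2))
    outflow = x[:, 1].sum(axis=(1, 2))
    consistent = bool(np.all(np.abs(inflow - outflow)
                             <= rel_tol * np.maximum(inflow, 1.0)))

    return not (valid and consistent)

# Example: a clean 4-frame input passes; adding a small real-valued
# perturbation (as a standard adversarial attack would) gets flagged.
clean = np.random.randint(0, 200, size=(4, 2, 32, 32)).astype(float)
clean[:, 1] = clean[:, 0]          # make inflow and outflow agree per frame
print(cav_detect(clean))           # False -> accepted
print(cav_detect(clean + 0.37))    # True  -> rejected (non-integer counts)

This toy check only mirrors the abstract's claim that standard perturbations are rejected while an attack such as CVP would have to satisfy these priors by construction; the actual thresholds and property definitions are those of the cited paper.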
Sponsor: This work was supported by the Qatar National Research Fund (a member of Qatar Foundation) through the National Priorities Research Program (NPRP) under Grant 13S-0206-200273. Open Access funding was provided by the Qatar National Library.
Language: en
Publisher: IEEE
Subject: Perturbation methods
Standards
Adaptation models
Computer architecture
Analytical models
History
Data models
Deep neural networks
CFP
adversarial ML
Title: Consistent Valid Physically-Realizable Adversarial Attack Against Crowd-Flow Prediction Models
Type: Article
Pages: 5567-5582
Issue: 6
Volume: 25
EISSN: 1558-0016
Access type: Open Access

