    Consistent Valid Physically-Realizable Adversarial Attack Against Crowd-Flow Prediction Models

    View/Open
    Consistent_Valid_Physically-Realizable_Adversarial_Attack_Against_Crowd-Flow_Prediction_Models.pdf (17.82 MB)
    Date
    2024
    Author
    Ali, Hassan
    Butt, Muhammad Atif
    Filali, Fethi
    Al-Fuqaha, Ala
    Qadir, Junaid
    Abstract
    Recent works have shown that deep learning (DL) models can effectively learn city-wide crowd-flow patterns, which can be used for more effective urban planning and smart-city management. However, DL models are known to perform poorly under inconspicuous adversarial perturbations. Although many works have studied such perturbations in general, the adversarial vulnerabilities of deep crowd-flow prediction (CFP) models in particular have remained largely unexplored. In this paper, we perform a rigorous analysis of the adversarial vulnerabilities of DL-based CFP models under multiple threat settings, making three-fold contributions: 1) we propose CaV-detect by formally identifying two novel properties, Consistency and Validity, of CFP inputs that enable the detection of standard adversarial inputs with a 0% false acceptance rate (FAR); 2) we leverage universal adversarial perturbations and an adaptive adversarial loss to present adaptive adversarial attacks that evade the CaV-detect defense; 3) we propose CVP, a Consistent, Valid and Physically-realizable adversarial attack that explicitly incorporates the consistency and validity priors into the perturbation-generation mechanism. We find that although crowd-flow models are vulnerable to adversarial perturbations, it is extremely challenging to simulate these perturbations in physical settings, notably when CaV-detect is in place. We also show that the CVP attack considerably outperforms adaptively modified standard attacks in terms of FAR and adversarial loss. We conclude with useful insights emerging from our work and highlight promising directions for future research.
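
    The abstract describes CaV-detect only at a high level; the precise definitions of Consistency and Validity are given in the paper itself, not on this page. Purely as an illustrative sketch, the Python snippet below shows how such input checks might look for grid-based crowd-flow tensors, assuming (hypothetically) that validity means flows are non-negative, integer-valued counts below a capacity bound, and that consistency means the city-wide inflow and outflow totals agree at each timestep. The tensor layout, the thresholds, and the function name cav_detect are all assumptions for illustration, not the authors' implementation.

    import numpy as np

    def cav_detect(flow, max_flow=1000, tol=1e-6):
        # flow: (T, 2, H, W) array; channel 0 = inflow map, channel 1 = outflow map
        # on an H x W city grid over T timesteps.
        inflow, outflow = flow[:, 0], flow[:, 1]

        # Validity (assumed): flows are crowd counts, so they must be
        # non-negative, integer-valued, and below a physical capacity bound.
        valid = (np.all(flow >= 0)
                 and np.all(flow <= max_flow)
                 and np.allclose(flow, np.round(flow), atol=tol))

        # Consistency (assumed): everyone who leaves some region enters
        # another, so total outflow should match total inflow at each
        # timestep (traffic across the grid boundary is ignored here).
        consistent = np.allclose(inflow.sum(axis=(1, 2)),
                                 outflow.sum(axis=(1, 2)), atol=tol)

        # Flag the input as adversarial if either prior is violated.
        return not (valid and consistent)

    Under these assumptions, a standard adversarial perturbation that adds fractional or unbalanced flow to the input would be flagged before reaching the prediction model, which is consistent with the abstract's claim that standard adversarial inputs are detectable from the two priors.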
    URI
    https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85181563422&origin=inward
    DOI/handle
    http://dx.doi.org/10.1109/TITS.2023.3343971
    http://hdl.handle.net/10576/59517
    Collections
    • QMIC Research [278 items]
