
    Defending Emotional Privacy with Adversarial Machine Learning for Social Good

    View/Open
    Defending_Emotional_Privacy_with_Adversarial_Machine_Learning_for_Social_Good.pdf (979.5 KB)
    Date
    2023
    Author
    Al-Maliki, Shawqi
    Abdallah, Mohamed
    Qadir, Junaid
    Al-Fuqaha, Ala
    Abstract
    Protecting the privacy of personal information, including emotions, is essential, and organizations must comply with relevant regulations to ensure privacy. Unfortunately, some organizations do not respect these regulations, or they lack transparency, leaving human privacy at risk. These privacy violations often occur when unauthorized organizations misuse machine learning (ML) technology, such as facial expression recognition (FER) systems. Therefore, researchers and practitioners must take action and use ML technology for social good to protect human privacy. One emerging research area that can help address privacy violations is the use of adversarial ML for social good. Evasion attacks, which are used to fool ML systems, can be repurposed to prevent misused ML technology, such as ML-based FER, from recognizing true emotions. By leveraging adversarial ML for social good, we can prevent organizations from violating human privacy by misusing ML technology, particularly FER systems, and protect individuals' personal and emotional privacy. In this work, we propose an approach called Chaining of Adversarial ML Attacks (CAA) to create a robust attack that fools misused technology and prevents it from detecting true emotions. To validate our proposed approach, we conduct extensive experiments using various evaluation metrics and baselines. Our results show that CAA significantly contributes to emotional privacy preservation, with the fool rate of emotions increasing proportionally to the chaining length. In our experiments, the fool rate increases by 48% in each subsequent chaining stage of the chaining targeted attacks (CTA) while keeping the perturbations imperceptible (ϵ=0.0001).
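
    The abstract only summarizes CAA at a high level; as a rough, hypothetical illustration of what chaining targeted evasion attacks can look like, the PyTorch sketch below applies one targeted FGSM step per chaining stage and feeds each stage's output into the next. The stand-in FER model, the target labels, and the use of single-step FGSM are assumptions for illustration, not the authors' implementation; only the perturbation budget ϵ=0.0001 is taken from the abstract.

```python
# Hypothetical sketch of chained targeted evasion attacks, assuming a
# PyTorch FER classifier. Model, targets, and attack choice are stand-ins;
# eps=1e-4 follows the abstract's imperceptibility budget.
import torch
import torch.nn as nn
import torch.nn.functional as F

def targeted_fgsm(model, x, target, eps):
    """One targeted FGSM step: nudge x so the model drifts toward `target`."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), target)
    loss.backward()
    # Move *against* the gradient to decrease the loss w.r.t. the target label.
    x_adv = x - eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

def chain_attacks(model, x, targets, eps=1e-4):
    """Chain targeted attacks: each stage perturbs the previous stage's output."""
    x_adv = x
    for t in targets:
        x_adv = targeted_fgsm(model, x_adv, t, eps)
    return x_adv

if __name__ == "__main__":
    # Placeholder 7-class FER model over 48x48 grayscale faces (illustrative).
    model = nn.Sequential(nn.Flatten(), nn.Linear(48 * 48, 7))
    model.eval()
    face = torch.rand(1, 1, 48, 48)                          # dummy input image
    wrong_emotions = [torch.tensor([3]), torch.tensor([5])]  # chaining length 2
    adv_face = chain_attacks(model, face, wrong_emotions, eps=1e-4)
    print("max perturbation:", (adv_face - face).abs().max().item())
```

    Each stage perturbs the previous stage's output toward a different wrong emotion label, so after k stages the per-pixel perturbation is at most k·ϵ, which remains imperceptible for the tiny ϵ used here.
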
    DOI/handle
    http://dx.doi.org/10.1109/IWCMC58020.2023.10182780
    http://hdl.handle.net/10576/66064
    Collections
    • Computer Science & Engineering [2482 items]
