
    Attention-based Network for Image/Video Salient Object Detection

    View/Open
    Attention-based_Network_for_Image_Video_Salient_Object_Detection.pdf (2.723Mb)
    Date
    2023-09
    Author
    Elharrouss, Omar
    Elkaitouni, Soukaina El Idrissi
    Akbari, Younes
    Al-Maadeed, Somaya
    Bouridane, Ahmed
    Abstract
    The goal of image or video salient object detection (SOD) is to identify the most important object in a scene, which is helpful in many computer-vision tasks. Because the human visual system can effortlessly perceive regions of interest in complex scenes, salient object detection mimics this capability. However, SOD in complex video scenes remains a challenging task. This paper focuses on learning channel and spatiotemporal representations for image/video salient object detection. The proposed method consists of three stages: the frontend, the attention models, and the backend. The frontend consists of a VGG backbone that learns representations of both common and discriminative features. Channel-wise and spatiotemporal attention models are then applied to highlight the significant object using a feature detector and to compute spatial attention. Finally, the output features are fused to obtain the saliency result. Experimental evaluations confirm the validity and effectiveness of the proposed model compared with state-of-the-art methods.
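    The channel-wise attention step described in the abstract can be illustrated with a minimal, framework-free sketch. This is not the paper's implementation: the gating here is a simplified squeeze-and-excitation-style stand-in, and the `gate_weights` are an illustrative assumption in place of the learned layers a real network would use.

```python
import math

def channel_attention(feature_maps, gate_weights):
    """Channel-wise attention sketch: reweight each channel of a
    feature map by a sigmoid gate computed from its global average.

    feature_maps: list of C channels, each an HxW list-of-lists.
    gate_weights: list of C per-channel gate weights (assumed learned;
                  hypothetical stand-in for the network's FC layers).
    Returns the reweighted feature maps.
    """
    # Squeeze: global average pooling gives one descriptor per channel.
    descriptors = [
        sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
        for ch in feature_maps
    ]
    # Excite: a per-channel sigmoid gate over the descriptor.
    gates = [1.0 / (1.0 + math.exp(-w * d))
             for w, d in zip(gate_weights, descriptors)]
    # Scale: channels with stronger responses are emphasized.
    return [
        [[g * v for v in row] for row in ch]
        for g, ch in zip(gates, feature_maps)
    ]

# Two 2x2 channels; the second has a uniformly stronger response,
# so its gate (and thus its output) is larger.
fmaps = [[[0.1, 0.2], [0.3, 0.4]],
         [[1.0, 1.0], [1.0, 1.0]]]
out = channel_attention(fmaps, gate_weights=[1.0, 1.0])
```

    In the full model, the spatiotemporal attention branch would play an analogous role across frames, and the fused output of both branches would form the final saliency map.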
    URI
    https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85179513733&origin=inward
    DOI/handle
    http://dx.doi.org/10.1109/EUVIP58404.2023.10323073
    http://hdl.handle.net/10576/55892
    Collections
    • Computer Science & Engineering [2429 items]

    Qatar University Digital Hub is a digital collection operated and maintained by the Qatar University Library and supported by the ITS department

    Contact Us | Send Feedback