    Optimized Resource and Deep Learning Model Allocation in O-RAN Architecture

    Date
    2023-01-01
    Author
    Makhlouf, Ahmed
    Abdellatif, Alaa Awad
    Badawy, Ahmed
    Mohamed, Amr
    Abstract
    In the era of 5G and beyond, telecommunication networks are moving the Radio Access Network (RAN) from a centralized architecture to a more distributed one for greater interoperability and flexibility. The Open RAN (O-RAN) architecture is a paradigm shift proposed to enable disaggregation, virtualization, and cloudification of RAN components, possibly offered by multiple vendors and connected through open interfaces. Leveraging the O-RAN architecture, Deep Learning (DL) models may run as a service close to the end users, rather than on the core network, to benefit from reduced latency and bandwidth consumption. When multiple DL models train at the virtual edge, however, they compete for the available communication and computation resources. In this paper, we introduce Optimized Resource and Model Allocation (ORMA), a framework that provides optimized resource allocation for multiple DL models training at the edge, aiming to maximize the aggregate accuracy while respecting the limited physical resources. Unlike related works, ORMA optimizes learning-related parameters, such as dataset size and number of epochs, as well as the amount of communication and computation resources allocated to each DL model, to maximize the aggregate accuracy. Our results show that ORMA consistently outperforms a baseline that adopts a fixed, fair resource allocation (FRA) among the different DL models, across different total bandwidth and CPU combinations.
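    The core idea in the abstract — splitting finite communication and computation resources among competing DL models to maximize aggregate accuracy, against a fixed fair-allocation (FRA) baseline — can be illustrated with a toy sketch. This record does not include the paper's actual formulation, so the saturating accuracy model, the parameter values, and the brute-force search below are all illustrative assumptions, not ORMA itself.

    ```python
    # Toy sketch only: the accuracy model and exhaustive search are assumptions,
    # not the ORMA formulation from the paper (which also optimizes dataset size
    # and number of epochs, omitted here for brevity).
    from itertools import product

    def accuracy(cpu, bw, k_cpu, k_bw):
        """Hypothetical diminishing-returns accuracy: more CPU/bandwidth units
        raise accuracy, but with saturation; k_* weight each model's sensitivity."""
        return 1.0 - 0.5 / (1.0 + k_cpu * cpu) - 0.5 / (1.0 + k_bw * bw)

    def orma_like(total_cpu, total_bw, models):
        """Brute-force the integer split of CPU and bandwidth units that
        maximizes aggregate accuracy across all models."""
        n = len(models)
        cpu_splits = [s for s in product(range(total_cpu + 1), repeat=n)
                      if sum(s) == total_cpu]
        bw_splits = [s for s in product(range(total_bw + 1), repeat=n)
                     if sum(s) == total_bw]
        best, best_alloc = -1.0, None
        for cs in cpu_splits:
            for bs in bw_splits:
                agg = sum(accuracy(c, b, kc, kb)
                          for c, b, (kc, kb) in zip(cs, bs, models))
                if agg > best:
                    best, best_alloc = agg, (cs, bs)
        return best, best_alloc

    def fra(total_cpu, total_bw, models):
        """Fixed fair resource allocation baseline: equal shares per model."""
        n = len(models)
        c, b = total_cpu / n, total_bw / n
        return sum(accuracy(c, b, kc, kb) for kc, kb in models)

    # Three hypothetical DL models with different CPU/bandwidth sensitivities.
    models = [(2.0, 0.5), (0.5, 2.0), (1.0, 1.0)]
    opt, alloc = orma_like(6, 6, models)
    fair = fra(6, 6, models)
    print(f"optimized aggregate accuracy: {opt:.3f}")
    print(f"fair-allocation baseline:     {fair:.3f}")
    ```

    Because the equal split is itself one candidate in the search space, the optimized allocation can never do worse than the fair baseline here — mirroring the paper's reported result that ORMA consistently outperforms FRA.
    
    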
    URI
    https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85167621483&origin=inward
    DOI/handle
    http://dx.doi.org/10.1109/WiMob58348.2023.10187766
    http://hdl.handle.net/10576/49122
    Collections
    • Computer Science & Engineering [2428 items]



    Qatar University Digital Hub is a digital collection operated and maintained by the Qatar University Library and supported by the ITS department
