Diminishing Returns and Deep Learning for Adaptive CPU Resource Allocation of Containers
Abstract
Containers provide a lightweight runtime environment for microservices applications while enabling better server utilization. Automatically allocating an optimal number of CPU pins to the containers serving specific workloads can help minimize job completion time. Most existing state-of-the-art work focuses on building new, efficient scheduling algorithms for placing containers on the infrastructure, while the resources themselves are allocated to the containers manually and statically. An automatic method to identify and allocate optimal CPU resources to containers can therefore improve the efficiency of these scheduling algorithms. In this article, we introduce a new deep learning-based approach that allocates optimal CPU resources to containers automatically. Our approach uses the law of diminishing marginal returns to determine the optimal number of CPU pins per container, gaining maximum performance while maximizing the number of concurrent jobs. The proposed method is evaluated using real workloads on a Docker-based containerized infrastructure. The results demonstrate the effectiveness of the proposed solution, reducing job completion time by 23% to 74% compared to commonly used static CPU allocation methods.
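To illustrate the diminishing-returns criterion the abstract refers to, the following minimal Python sketch picks the smallest pin count beyond which one more pinned CPU yields less than a chosen relative speedup. The `optimal_pins` helper, the 5% threshold, and the profiled completion times are all hypothetical illustrations; the paper's actual approach is deep learning-based, and this sketch only shows the underlying diminishing-returns rule applied to measured data.

```python
# Sketch: choosing a CPU-pin count via the law of diminishing marginal
# returns. All names, thresholds, and timings below are hypothetical,
# not taken from the paper.

EPSILON = 0.05  # assumed threshold: stop once relative speedup < 5%

def optimal_pins(completion_times: dict[int, float],
                 epsilon: float = EPSILON) -> int:
    """Return the smallest pin count beyond which the marginal speedup
    from one additional pinned CPU drops below `epsilon`."""
    pins = sorted(completion_times)
    best = pins[0]
    for prev, curr in zip(pins, pins[1:]):
        # Relative reduction in completion time gained by adding one pin.
        gain = (completion_times[prev] - completion_times[curr]) / completion_times[prev]
        if gain < epsilon:
            break  # marginal return has diminished below the threshold
        best = curr
    return best

# Hypothetical measured job completion times (seconds) per pin count.
times = {1: 120.0, 2: 64.0, 3: 45.0, 4: 44.0, 5: 43.8}
print(optimal_pins(times))  # -> 3: the 4th pin improves the job by < 5%
```

Once a pin count is chosen, Docker can enforce it at container start via its CPU-pinning flag, e.g. `docker run --cpuset-cpus="0-2" ...` to restrict a container to three specific CPUs.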