Proactive Workload Prediction and Resource Management in Hybrid Cloud using Machine Learning Techniques
Abstract
The Cloud Computing (CC) paradigm has transformed information and communication in
recent years and now forms the backbone of modern infrastructure. CC enhances the
services of organizations such as government, industry, and academia through a
pay-as-you-go model, and more than 60% of application workloads have been migrated
to the cloud. Applications hosted on CC consume resources heavily and generate
increased traffic, particularly during peak events, which makes resource management
one of the central issues in CC. To achieve better quality of service provisioning
and avoid Service Level Agreement (SLA) violations, elasticity of resources is a
major requirement in CC. The hybrid cloud model, which combines private and public
cloud services, is well suited to meeting the resource requirements of elastic
applications, and resource monitoring and prediction improve elasticity-aware
resource management policies. For elasticity, a traditional adaptive policy
implements threshold-based auto-scaling, which is simple to follow; however, such a
static threshold policy may not be effective under highly dynamic and unpredictable
workloads. An efficient auto-scaling technique that predicts the system load is
therefore essential, and balancing dynamic load through the best auto-scaling
policy remains a challenging issue. This research work addresses resource
prediction mechanisms that handle workload demands in CC through Machine Learning
(ML) techniques. It explores how these techniques can be adapted to resource
management problems to increase resource availability and reduce SLA violations in
cloud data centers while simultaneously satisfying application QoS requirements.
Data center parameters such as CPU utilization and user requests are analyzed, and
an algorithm based on Machine Learning and Queuing Theory concepts is suggested
that proactively indicates an appropriate number of future computing resources for
short-term resource demand. Experiments show that the suggested model enhances the
elasticity of resources with respect to the performance metrics. The suggested
approach i