
SMig-RL: An Evolutionary Migration Framework for Cloud Services Based on Deep Reinforcement Learning

Published: 06 October 2020

Abstract

Service migration is a widely used technique in cloud computing that reduces access cost by moving a service close to the majority of its users. Although effective to a degree, service migration in existing research still lacks the evolutionary capabilities of scalability, sensitivity, and adaptability needed to react effectively to dynamically changing environments. This article proposes an evolutionary framework based on deep reinforcement learning for virtual service migration in large-scale mobile cloud centers. To enhance the spatio-temporal sensitivity of the algorithm, we design a scalable reward function for virtual service migration, redefine the input state, and add a Recurrent Neural Network (RNN) to the learning framework. To enhance the adaptability of the algorithm, we also decompose the action space and exploit the network cost to adjust the number of virtual machines (VMs). The experimental results show that, compared with existing approaches, the migration strategy generated by the algorithm not only significantly reduces the total service cost while achieving load balancing, but also handles burst situations at low cost in dynamic environments.
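To make the abstract's two optimization targets concrete, the sketch below shows one way a migration reward could combine total service cost with a load-balancing term. The function name, the weights, and the use of the standard deviation of per-host load as the imbalance measure are all illustrative assumptions, not the paper's actual formulation.

```python
from statistics import pstdev

def migration_reward(access_cost, migration_cost, loads,
                     w_cost=1.0, w_balance=0.5):
    """Reward for one migration step (illustrative only).

    Negates the total service cost (access + migration) and adds a
    load-balancing penalty: the population standard deviation of
    per-host load. All names and weights are hypothetical.
    """
    total_cost = access_cost + migration_cost
    imbalance = pstdev(loads)  # 0.0 when all hosts are equally loaded
    return -(w_cost * total_cost + w_balance * imbalance)

# At the same service cost, a balanced placement earns a higher
# (less negative) reward than a skewed one.
balanced = migration_reward(10.0, 2.0, loads=[0.5, 0.5, 0.5])
skewed = migration_reward(10.0, 2.0, loads=[0.9, 0.5, 0.1])
```

Because the agent maximizes this reward, it is pushed simultaneously toward low total cost and even host utilization, mirroring the two objectives the abstract reports for the experiments.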




Published In

ACM Transactions on Internet Technology, Volume 20, Issue 4 (November 2020), 391 pages.
ISSN: 1533-5399; EISSN: 1557-6051
DOI: 10.1145/3427795
Editor: Ling Liu
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery, New York, NY, United States

      Publication History

      Published: 06 October 2020
      Accepted: 01 July 2020
      Revised: 01 April 2020
      Received: 01 July 2019
      Published in TOIT Volume 20, Issue 4


      Author Tags

      1. Cloud computing
      2. Q-learning
      3. RNN
      4. deep reinforcement learning
      5. dynamic service migration
      6. mobile access

      Qualifiers

      • Research-article
      • Research
      • Refereed

      Funding Sources

      • National Key R&D Program of China


      Article Metrics

      • Downloads (Last 12 months)20
      • Downloads (Last 6 weeks)3
      Reflects downloads up to 24 Dec 2024


      Cited By

• (2024) Work Scheduling in Cloud Network Based on Deep Q-LSTM Models for Efficient Resource Utilization. Journal of Grid Computing 22, 1. DOI: 10.1007/s10723-024-09746-6. Online publication date: 28-Feb-2024.
• (2024) Deep reinforcement learning-based methods for resource scheduling in cloud computing: a review and future directions. Artificial Intelligence Review 57, 5. DOI: 10.1007/s10462-024-10756-9. Online publication date: 23-Apr-2024.
• (2023) Machine Learning for Service Migration: A Survey. IEEE Communications Surveys & Tutorials 25, 3, 1991-2020. DOI: 10.1109/COMST.2023.3273121. Online publication date: 1-Jul-2023.
• (2023) COUNSEL: Cloud Resource Configuration Management using Deep Reinforcement Learning. 2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing (CCGrid), 286-298. DOI: 10.1109/CCGrid57682.2023.00035. Online publication date: May-2023.
• (2022) Energy Efficiency Strategy for Big Data in Cloud Environment Using Deep Reinforcement Learning. Mobile Information Systems 2022. DOI: 10.1155/2022/8716132. Online publication date: 1-Jan-2022.
• (2022) Scalable Virtual Machine Migration using Reinforcement Learning. Journal of Grid Computing 20, 2. DOI: 10.1007/s10723-022-09603-4. Online publication date: 1-Jun-2022.
• (2021) A Stackelberg Game Approach toward Migration of Enterprise Applications to the Cloud. Mathematics 9, 19, article 2348. DOI: 10.3390/math9192348. Online publication date: 22-Sep-2021.
