
Energy-aware parameter tuning for mixed workloads in cloud server

Published in: Cluster Computing

Abstract

Server energy consumption constitutes a significant portion of the overall energy usage in cloud data centers, so optimizing energy through tuning server parameter configurations is of paramount importance for energy conservation. However, because mixed workloads on cloud servers are diverse and dynamic, highly energy-efficient parameter configurations require continuous adjustment and optimization in response to workload fluctuations. To tackle this challenge, this paper introduces Energy-aware Parameter Tuning for Mixed Workloads (EPTMW), an energy-efficiency optimization method for cloud servers running mixed workloads. EPTMW dynamically performs joint tuning of CPU frequency and system kernel parameters based on the real-time operational status of the workloads. We evaluate EPTMW using benchmark workloads from BenchSEE and SERT. The experimental results demonstrate that EPTMW improves energy efficiency by 28.5% over the level achieved under the default server parameter configuration, all while maintaining server performance.
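To make the abstract's idea concrete, the sketch below illustrates the general shape of joint CPU-frequency and kernel-parameter tuning driven by real-time workload status. This is NOT the paper's EPTMW algorithm: the thresholds, the chosen kernel parameter (`vm.dirty_ratio`), and the frequency steps are all invented for illustration; a real tuner would write the frequency via the cpufreq sysfs interface and the kernel parameter via sysctl.

```python
# Minimal sketch of workload-driven joint tuning (hypothetical policy,
# not the EPTMW method from the paper). Thresholds and values are invented.

def choose_config(cpu_util: float, io_wait: float) -> dict:
    """Map real-time workload status to a candidate server configuration.

    cpu_util, io_wait: observed fractions in [0, 1].
    Returns a target CPU frequency in kHz (the unit cpufreq's
    scaling_setspeed expects) and a value for the vm.dirty_ratio sysctl.
    """
    if cpu_util > 0.8:
        freq_khz = 3_000_000   # heavy compute: run at full frequency
    elif cpu_util > 0.4:
        freq_khz = 2_200_000   # moderate load: mid frequency
    else:
        freq_khz = 1_200_000   # light load: lower frequency to save energy
    # I/O-heavy mixes tolerate a larger dirty-page buffer before writeback
    dirty_ratio = 40 if io_wait > 0.2 else 20
    return {"scaling_setspeed": freq_khz, "vm.dirty_ratio": dirty_ratio}
```

Applying the returned configuration would then amount to writing `scaling_setspeed` under `/sys/devices/system/cpu/cpu*/cpufreq/` and calling `sysctl -w vm.dirty_ratio=...`; EPTMW's contribution lies in learning such a mapping automatically rather than hand-coding thresholds like these.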


Figures 1–7 and Algorithms 1–2 appear in the full article.


Data availability

Data available on request from the authors.


Funding

This work is supported by the National Natural Science Foundation of China (62072187, 62002078), the Guangdong Major Project of Basic and Applied Basic Research (2019B030302002), the Guangdong Marine Economic Development Special Fund Project (GDNRC[2022]17), the Guangzhou Development Zone Science and Technology Project (2021GH10), and the Major Key Project of PCL (PCL2023A09).

Author information

Authors and Affiliations

Authors

Contributions

All authors contributed to the paper through code, experiments, or writing.

Corresponding authors

Correspondence to Weiwei Lin or Yangguang Xu.

Ethics declarations

Competing interests

The authors have no relevant financial or non-financial interests to disclose.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article


Cite this article

Liang, J., Lin, W., Xu, Y. et al. Energy-aware parameter tuning for mixed workloads in cloud server. Cluster Comput 27, 4805–4821 (2024). https://doi.org/10.1007/s10586-023-04212-6

