Abstract
Server energy consumption constitutes a significant portion of the overall energy usage in cloud data centers, so optimizing energy through tuning server parameter configurations is of paramount importance for energy conservation. However, because the mixed workloads on cloud servers are diverse and dynamic, achieving highly energy-efficient server parameter configurations requires continuous adjustment and re-optimization as workloads fluctuate. To tackle this challenge, this paper introduces an energy-efficiency optimization method for cloud servers running mixed workloads, called Energy-aware Parameter Tuning for Mixed Workloads (EPTMW). The method dynamically performs joint tuning of CPU frequency and system kernel parameters based on the real-time operational status of the workloads. We evaluate EPTMW using benchmark workloads from BenchSEE and SERT. The experimental results demonstrate that EPTMW improves energy efficiency by 28.5% over the default server parameter configuration while maintaining server performance.
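To make the tuning knobs concrete, the sketch below shows the kind of joint adjustment the abstract describes: selecting a CPU frequency governor and one kernel parameter (here `vm.dirty_ratio`) from observed utilization, then writing both through their sysfs/procfs interfaces. The thresholds, the parameter choice, and the policy itself are illustrative assumptions, not EPTMW's actual search procedure; the file roots are overridable so the snippet can be dry-run without root privileges.

```python
import os
import tempfile

def choose_config(cpu_util):
    """Map observed CPU utilization (0.0-1.0) to (governor, dirty_ratio).

    The thresholds are hypothetical placeholders for a learned policy.
    """
    if cpu_util > 0.7:
        return "performance", 10   # heavy load: favor throughput, flush dirty pages early
    if cpu_util > 0.3:
        return "schedutil", 20     # moderate load: let the scheduler drive frequency
    return "powersave", 40         # light load: favor energy, batch writeback

def apply_config(governor, dirty_ratio, sysfs_root="/sys", proc_root="/proc"):
    """Write the chosen settings; roots are overridable for dry runs."""
    gov_path = os.path.join(
        sysfs_root, "devices/system/cpu/cpu0/cpufreq/scaling_governor")
    ratio_path = os.path.join(proc_root, "sys/vm/dirty_ratio")
    for path, value in ((gov_path, governor), (ratio_path, str(dirty_ratio))):
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "w") as f:
            f.write(value)
    return gov_path, ratio_path

# Dry run against a scratch directory standing in for /sys and /proc.
scratch = tempfile.mkdtemp()
governor, ratio = choose_config(0.85)
gov_path, ratio_path = apply_config(
    governor, ratio,
    sysfs_root=os.path.join(scratch, "sys"),
    proc_root=os.path.join(scratch, "proc"))
```

In a real deployment the writes would target the live `/sys` and `/proc` trees (and typically every `cpuN` directory, not just `cpu0`), with the mapping from workload state to settings supplied by the tuning method rather than fixed thresholds.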
Data availability
Data available on request from the authors.
Funding
This work is supported by National Natural Science Foundation of China (62072187, 62002078), Guangdong Major Project of Basic and Applied Basic Research (2019B030302002), Guangdong Marine Economic Development Special Fund Project (GDNRC[2022]17), Guangzhou Development Zone Science and Technology Project (2021GH10) and the Major Key Project of PCL (PCL2023A09).
Author information
Contributions
All authors contributed to the paper through code, experiments, or writing.
Ethics declarations
Competing interests
The authors have no relevant financial or non-financial interests to disclose.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Liang, J., Lin, W., Xu, Y. et al. Energy-aware parameter tuning for mixed workloads in cloud server. Cluster Comput 27, 4805–4821 (2024). https://doi.org/10.1007/s10586-023-04212-6