Research Article
DOI: 10.1145/3299869.3314038

PS2: Parameter Server on Spark

Published: 25 June 2019

Abstract

Most of the data in the Tencent Machine Learning Platform is extracted and processed with Spark. However, few workloads use Spark MLlib, the official machine learning (ML) library on top of Spark, because of its inefficiency. Instead, systems such as parameter servers, XGBoost, and TensorFlow are used more often, which incurs the expensive cost of transferring data into and out of the Spark ecosystem. In this paper, we identify the causes of inefficiency in Spark MLlib and address them by building parameter servers on top of Spark. We propose PS2, a parameter server architecture that integrates with Spark without modifying Spark's core. With PS2, we leverage the power of Spark for data processing and ML training, and parameter servers for maintaining ML models. By carefully analyzing Tencent ML workloads, we identify a computation pattern that is common across ML models: element-wise operations among multiple high-dimensional vectors. Based on this observation, we propose a new data abstraction, called Dimension Co-located Vector (DCV), for efficient model management in PS2. A DCV is a distributed vector that considers locality in parameter servers and enables efficient computation with multiple co-located distributed vectors. For ease of use, we also design a wide variety of advanced operators for operating on DCVs. Finally, we carefully implement the PS2 system and evaluate it against existing systems on both public and Tencent workloads. Empirical results demonstrate that PS2 outperforms Spark MLlib by up to 55.6X and specialized ML systems such as Petuum by up to 3.7X.
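
The DCV abstraction rests on one idea: when several distributed vectors are partitioned by the same dimension ranges, element-wise operations between them can run entirely on the parameter-server node that holds each range, with no cross-server traffic. The following is a minimal, self-contained Scala sketch of that co-location idea under stated assumptions; the names (Dcv, DcvPartition, zipWith, partition) and the single-process layout are illustrative only and are not the actual PS2 API.

// A hypothetical sketch of the dimension co-location idea behind DCVs.
// All names are illustrative assumptions, not the PS2 interface.
object DcvSketch {

  // One slice of a model vector: the dimensions [start, start + values.length)
  // held by a single parameter-server node.
  final case class DcvPartition(start: Int, values: Array[Double])

  // A "Dimension Co-located Vector": the full vector, split by dimension range.
  // Two DCVs built with the same partitioning keep matching dimension ranges on
  // the same server, so element-wise work never crosses servers.
  final case class Dcv(partitions: Vector[DcvPartition]) {

    // Element-wise combine with another DCV sharing this partitioning.
    // In a real deployment each partition pair lives on one server, so this
    // map would execute locally there with no network transfer.
    def zipWith(other: Dcv)(op: (Double, Double) => Double): Dcv = {
      require(partitions.map(_.start) == other.partitions.map(_.start),
        "DCVs must share the same dimension partitioning")
      val merged = partitions.zip(other.partitions).map { case (a, b) =>
        DcvPartition(a.start, a.values.zip(b.values).map { case (x, y) => op(x, y) })
      }
      Dcv(merged)
    }

    // Inner product: element-wise multiply locally, then sum partial results.
    def dot(other: Dcv): Double =
      zipWith(other)(_ * _).partitions.map(_.values.sum).sum
  }

  // Split a dense vector into `numServers` contiguous dimension ranges.
  def partition(v: Array[Double], numServers: Int): Dcv = {
    val size = math.ceil(v.length.toDouble / numServers).toInt
    Dcv(v.grouped(size).zipWithIndex.map { case (chunk, i) =>
      DcvPartition(i * size, chunk)
    }.toVector)
  }

  def main(args: Array[String]): Unit = {
    val weights  = partition(Array.fill(8)(1.0), numServers = 4)
    val gradient = partition(Array.tabulate(8)(_.toDouble), numServers = 4)

    // SGD-style update w := w - 0.1 * g, performed partition-locally.
    val updated = weights.zipWith(gradient)((w, g) => w - 0.1 * g)
    println(updated.partitions.map(_.values.mkString(",")).mkString(" | "))
    println(s"dot(w, g) = ${weights.dot(gradient)}")
  }
}

The design point the sketch captures is that operators are only cheap when their operands are genuinely co-located; mismatched partitionings fail fast rather than silently triggering a shuffle.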




Published In

SIGMOD '19: Proceedings of the 2019 International Conference on Management of Data
June 2019
2106 pages
ISBN:9781450356435
DOI:10.1145/3299869
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]


Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 25 June 2019


Author Tags

  1. distributed machine learning
  2. parameter server
  3. spark

Qualifiers

  • Research-article

Funding Sources

  • National Key Research and Development Program of China
  • NSFC

Conference

SIGMOD/PODS '19
Sponsor:
SIGMOD/PODS '19: International Conference on Management of Data
June 30 - July 5, 2019
Amsterdam, Netherlands

Acceptance Rates

SIGMOD '19 Paper Acceptance Rate 88 of 430 submissions, 20%;
Overall Acceptance Rate 785 of 4,003 submissions, 20%


Cited By

  • (2025) Joint Dynamic Data and Model Parallelism for Distributed Training of DNNs Over Heterogeneous Infrastructure. IEEE Transactions on Parallel and Distributed Systems, 36(2):150-167. DOI: 10.1109/TPDS.2024.3506588. Published: Feb 2025.
  • (2025) DIMS: Distributed Index for Similarity Search in Metric Spaces. IEEE Transactions on Knowledge and Data Engineering, 37(1):210-225. DOI: 10.1109/TKDE.2024.3487759. Published: Jan 2025.
  • (2024) SparDL: Distributed Deep Learning Training with Efficient Sparse Communication. 2024 IEEE 40th International Conference on Data Engineering (ICDE), pages 1752-1764. DOI: 10.1109/ICDE60146.2024.00142. Published: 13 May 2024.
  • (2022) Bagua. Proceedings of the VLDB Endowment, 15(4):804-813. DOI: 10.14778/3503585.3503590. Published: 14 Apr 2022.
  • (2022) HET. Proceedings of the VLDB Endowment, 15(2):312-320. DOI: 10.14778/3489496.3489511. Published: 4 Feb 2022.
  • (2022) NuPS: A Parameter Server for Machine Learning with Non-Uniform Parameter Access. Proceedings of the 2022 International Conference on Management of Data, pages 481-495. DOI: 10.1145/3514221.3517860. Published: 10 Jun 2022.
  • (2022) CuWide: Towards Efficient Flow-Based Training for Sparse Wide Models on GPUs. IEEE Transactions on Knowledge and Data Engineering, 34(9):4119-4132. DOI: 10.1109/TKDE.2020.3038109. Published: 1 Sep 2022.
  • (2022) EC-Graph: A Distributed Graph Neural Network System with Error-Compensated Compression. 2022 IEEE 38th International Conference on Data Engineering (ICDE), pages 648-660. DOI: 10.1109/ICDE53745.2022.00053. Published: May 2022.
  • (2022) Hetu: a highly efficient automatic parallel distributed deep learning system. Science China Information Sciences, 66(1). DOI: 10.1007/s11432-022-3581-9. Published: 1 Dec 2022.
  • (2021) Just move it! Proceedings of the VLDB Endowment, 14(12):2707-2710. DOI: 10.14778/3476311.3476325. Published: 28 Oct 2021.
