
Breaking Down the Parallel Performance of GROMACS, a High-Performance Molecular Dynamics Software

  • Conference paper
Parallel Processing and Applied Mathematics (PPAM 2022)

Abstract

GROMACS is one of the most widely used HPC software packages for Molecular Dynamics (MD) simulation. In this work, we quantify GROMACS parallel performance using different configurations, HPC systems, and FFT libraries (FFTW, Intel MKL FFT, and FFTPACK). We break down the cost of each GROMACS computational phase and identify non-scalable stages, such as MPI communication during the 3D FFT computation when a large number of processes is used. We show that the Particle-Mesh Ewald (PME) phase and the 3D FFT calculation significantly impact GROMACS performance. Finally, we discuss performance opportunities, with a particular interest in developing the FFT calculations in GROMACS.
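
To make the 3D FFT cost concrete, the sketch below times a distributed complex-to-complex 3D FFT with FFTW's MPI interface, which is one way to isolate the all-to-all transpose communication that dominates the PME grid stage at scale. This is a minimal illustration under assumed parameters (a 128^3 grid and 50 repetitions), not the benchmark setup used in the paper.

/* Minimal sketch: time a slab-decomposed 3D FFT with FFTW's MPI interface.
 * Grid size and repetition count are illustrative assumptions, not values
 * taken from the paper. Build with: mpicc fft3d_timing.c -lfftw3_mpi -lfftw3 -lm */
#include <mpi.h>
#include <fftw3-mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    fftw_mpi_init();

    const ptrdiff_t N = 128;          /* grid points per dimension (assumed) */
    ptrdiff_t local_n0, local_0_start;

    /* FFTW decides how many x-slabs of the grid this rank owns. */
    ptrdiff_t alloc_local = fftw_mpi_local_size_3d(N, N, N, MPI_COMM_WORLD,
                                                   &local_n0, &local_0_start);
    fftw_complex *data = fftw_alloc_complex(alloc_local);

    fftw_plan plan = fftw_mpi_plan_dft_3d(N, N, N, data, data, MPI_COMM_WORLD,
                                          FFTW_FORWARD, FFTW_MEASURE);

    /* Fill the local slab after planning (FFTW_MEASURE overwrites the array). */
    for (ptrdiff_t i = 0; i < alloc_local; ++i) {
        data[i][0] = (double)i;
        data[i][1] = 0.0;
    }

    const int iters = 50;             /* illustrative repeat count */
    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int it = 0; it < iters; ++it)
        fftw_execute(plan);           /* includes the global transposes */
    MPI_Barrier(MPI_COMM_WORLD);
    double t1 = MPI_Wtime();

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
        printf("mean time per 3D FFT: %g s\n", (t1 - t0) / iters);

    fftw_destroy_plan(plan);
    fftw_free(data);
    fftw_mpi_cleanup();
    MPI_Finalize();
    return 0;
}

Increasing the rank count while keeping the grid fixed exposes the strong-scaling limit of this stage: past a point, the all-to-all transposes dominate and the per-transform time stops shrinking, which is the non-scalable behaviour the abstract attributes to the PME/3D FFT phase.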



Acknowledgments

Financial support was provided by the SeRC Exascale Simulation Software Initiative (SESSI) and the DEEP-SEA project. The DEEP-SEA project has received funding from the European Union’s Horizon 2020/EuroHPC research and innovation program under grant agreement no. 955606, matched by a national contribution from the Swedish Research Council (VR). The computations in this work were enabled by resources provided by the Swedish National Infrastructure for Computing (SNIC) at HPC2N, partially funded by the Swedish Research Council through grant agreement no. 2018-05973.

Author information


Corresponding author

Correspondence to Måns I. Andersson.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Andersson, M.I., Murugan, N.A., Podobas, A., Markidis, S. (2023). Breaking Down the Parallel Performance of GROMACS, a High-Performance Molecular Dynamics Software. In: Wyrzykowski, R., Dongarra, J., Deelman, E., Karczewski, K. (eds) Parallel Processing and Applied Mathematics. PPAM 2022. Lecture Notes in Computer Science, vol 13826. Springer, Cham. https://doi.org/10.1007/978-3-031-30442-2_25


  • DOI: https://doi.org/10.1007/978-3-031-30442-2_25

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-30441-5

  • Online ISBN: 978-3-031-30442-2

  • eBook Packages: Computer Science, Computer Science (R0)
