Abstract
GROMACS is one of the most widely used HPC software packages employing the Molecular Dynamics (MD) simulation technique. In this work, we quantify GROMACS parallel performance across different configurations, HPC systems, and FFT libraries (FFTW, Intel MKL FFT, and FFTPACK). We break down the cost of each GROMACS computational phase and identify non-scalable stages, such as the MPI communication during the 3D FFT computation when using a large number of processes. We show that the Particle-Mesh Ewald phase and the 3D FFT calculation significantly impact GROMACS performance. Finally, we discuss performance opportunities, with a particular focus on improving the FFT calculations in GROMACS.
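To illustrate why the all-to-all transposes of a distributed 3D FFT can become the non-scalable stage at large process counts, the following back-of-envelope Python sketch models per-rank compute versus communication time for a transpose-based pencil decomposition. It is not taken from the paper: the grid size, message latency, bandwidth, and per-rank flop rate are all assumed values chosen only for illustration.

```python
# Illustrative cost model (assumptions, not measurements from the paper):
# a complex 3D FFT of size N^3 on P MPI ranks with a pencil (2D)
# decomposition needs two global transposes, each an all-to-all.
import math

def fft_cost_model(N, P, bytes_per_elem=16, latency_us=1.0, bandwidth_GBs=10.0):
    """Rough per-rank compute and communication time for one 3D FFT."""
    local_elems = N**3 / P                      # grid points owned by one rank
    flops = 5 * N**3 * 3 * math.log2(N) / P     # ~5 N^3 log2(N^3) flops shared by P ranks
    msg_bytes = (local_elems / P) * bytes_per_elem  # size of one all-to-all message
    # Two transposes; each rank exchanges P-1 messages per transpose.
    comm_s = 2 * (P - 1) * (latency_us * 1e-6 + msg_bytes / (bandwidth_GBs * 1e9))
    compute_s = flops / 1e11                    # assume ~100 Gflop/s per rank
    return compute_s, comm_s

for P in (16, 128, 1024, 8192):
    comp, comm = fft_cost_model(N=256, P=P)
    print(f"P={P:5d}  compute={comp*1e3:8.3f} ms  all-to-all={comm*1e3:8.3f} ms")
```

Under these assumptions the per-rank compute time shrinks with P while the number of all-to-all messages grows and their size shrinks, so latency-bound communication eventually dominates, consistent with the non-scalable MPI communication stage described in the abstract.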
Acknowledgments
Financial support was provided by the SeRC Exascale Simulation Software Initiative (SESSI) and the DEEP-SEA project. The DEEP-SEA project has received funding from the European Union’s Horizon 2020/EuroHPC research and innovation program under grant agreement No 955606. A national contribution from the Swedish Research Council (VR) matches the EuroHPC funding. The computations of this work were enabled by resources provided by the Swedish National Infrastructure for Computing (SNIC) at HPC2N, partially funded by the Swedish Research Council through grant agreement no. 2018-05973.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Andersson, M.I., Murugan, N.A., Podobas, A., Markidis, S. (2023). Breaking Down the Parallel Performance of GROMACS, a High-Performance Molecular Dynamics Software. In: Wyrzykowski, R., Dongarra, J., Deelman, E., Karczewski, K. (eds) Parallel Processing and Applied Mathematics. PPAM 2022. Lecture Notes in Computer Science, vol 13826. Springer, Cham. https://doi.org/10.1007/978-3-031-30442-2_25
DOI: https://doi.org/10.1007/978-3-031-30442-2_25
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-30441-5
Online ISBN: 978-3-031-30442-2
eBook Packages: Computer Science, Computer Science (R0)