Abstract
One of the main hurdles to a broad adoption of PGAS approaches is the prevalence of MPI, which as a de facto standard appears in the code base of many applications. To take advantage of PGAS APIs such as GASPI without a major change to the code base, interoperability between MPI and PGAS approaches needs to be ensured. In this article, we address this challenge by presenting our study and preliminary performance results on interoperating GASPI and MPI in the performance-critical parts of the Ludwig and iPIC3D applications. In addition, we outline a strategy for a closer coupling of the two APIs.
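The pattern studied here keeps the existing MPI application intact and replaces only its performance-critical communication with GASPI one-sided primitives. A minimal sketch of this mixed mode is given below, assuming GPI-2 (the GASPI reference implementation) built with MPI interoperability so that gaspi_proc_init can be called after MPI_Init; the segment id, offsets, and HALO_BYTES size are illustrative, and error checking is omitted for brevity.

/* Sketch: port a halo exchange from MPI point-to-point to GASPI
 * one-sided communication while the rest of the application stays
 * on MPI. Assumes GPI-2 built with MPI interoperability; segment
 * layout and sizes are illustrative. Error checks omitted.       */
#include <mpi.h>
#include <GASPI.h>

#define HALO_BYTES 4096                    /* illustrative halo size */
static const gaspi_segment_id_t SEG = 0;

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);                /* existing MPI start-up   */
    gaspi_proc_init(GASPI_BLOCK);          /* GASPI on the same procs */

    gaspi_rank_t rank, nprocs;
    gaspi_proc_rank(&rank);
    gaspi_proc_num(&nprocs);

    /* One RDMA segment: [0, HALO_BYTES) holds the local halo,
     * [HALO_BYTES, 2*HALO_BYTES) receives the neighbour's halo. */
    gaspi_segment_create(SEG, 2 * HALO_BYTES, GASPI_GROUP_ALL,
                         GASPI_BLOCK, GASPI_MEM_INITIALIZED);

    gaspi_rank_t right = (gaspi_rank_t)((rank + 1) % nprocs);

    /* One call moves the halo and raises a notification on the
     * target, replacing an MPI_Isend/MPI_Irecv/MPI_Waitall trio. */
    gaspi_write_notify(SEG, 0,             /* local segment, offset   */
                       right,              /* target rank             */
                       SEG, HALO_BYTES,    /* remote segment, offset  */
                       HALO_BYTES,         /* size in bytes           */
                       0, 1,               /* notification id, value  */
                       0, GASPI_BLOCK);    /* queue, timeout          */

    /* Block until the halo from the left neighbour has landed.  */
    gaspi_notification_id_t first;
    gaspi_notification_t    val;
    gaspi_notify_waitsome(SEG, 0, 1, &first, GASPI_BLOCK);
    gaspi_notify_reset(SEG, first, &val);

    gaspi_wait(0, GASPI_BLOCK);            /* local completion        */

    /* Untouched parts of the code base keep using MPI as before. */
    double local = 1.0, global = 0.0;
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    gaspi_proc_term(GASPI_BLOCK);
    MPI_Finalize();
    return 0;
}

The one-sided gaspi_write_notify transfers the halo and raises a remote notification in a single call, so the receiver synchronizes on data arrival rather than on message matching, while the surrounding MPI collectives continue to work unchanged.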
Acknowledgement
This work was funded by the EU H2020 Research and Innovation programme through the INTERTWinE project (grant no. 671602). The simulations were performed on resources provided by SNIC at PDC-HPC, KTH.