Enhancing MPI+OpenMP Task Based Applications for Heterogeneous Architectures with GPU Support

  • Conference paper
  • First Online:
OpenMP in a Modern World: From Multi-device Support to Meta Programming (IWOMP 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13527)

Abstract

Heterogeneous supercomputers are widespread among HPC systems, and programming efficient applications on these architectures is a challenge. Task-based programming models are a promising way to tackle it. Since OpenMP 4.0 and 4.5, the target directives make it possible to offload pieces of code to GPUs and to express them as tasks with dependencies. Heterogeneous machines can therefore be programmed with MPI+OpenMP(task+target) to expose a very high level of concurrent asynchronous operations, in which data transfers, kernel executions, communications and CPU computations can be overlapped. Hence, it is possible to suspend tasks performing these asynchronous operations on the CPUs and to overlap their completion with the execution of other tasks. A suspended task can resume opportunistically, at any scheduling point, once the associated asynchronous event has completed. We have integrated this feature into the MPC framework, validated it on an AXPY microbenchmark and evaluated it on an MPI+OpenMP(tasks) implementation of the LULESH proxy application. The results show that we are able to improve asynchronism and overall performance, allowing applications to benefit from asynchronous execution on heterogeneous machines.
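To make the execution model concrete, the following sketch (in C; an illustrative example only, not the authors' MPC implementation) shows how an AXPY kernel can be offloaded as a deferred OpenMP target task using the nowait and depend clauses, so that data transfers and the kernel run asynchronously while host threads execute other tasks. Sizes and variable names are hypothetical.

/* Minimal sketch: asynchronous GPU offload of AXPY as an OpenMP target task,
 * overlapped with independent host tasks. Requires a compiler with
 * OpenMP 4.5 target offload support. */
#include <stdio.h>
#include <stdlib.h>

#define N (1 << 20)

int main(void) {
    double a = 2.0;
    double *x = malloc(N * sizeof(double));
    double *y = malloc(N * sizeof(double));
    for (int i = 0; i < N; ++i) { x[i] = 1.0; y[i] = 2.0; }

    #pragma omp parallel
    #pragma omp single
    {
        /* Deferred target task: host-to-device transfers, the kernel and the
         * device-to-host transfer all run asynchronously. The paper's
         * contribution is to suspend such a task instead of blocking a
         * thread until the offload completes. */
        #pragma omp target teams distribute parallel for \
                map(to: x[0:N]) map(tofrom: y[0:N]) \
                nowait depend(out: y[0:N])
        for (int i = 0; i < N; ++i)
            y[i] = a * x[i] + y[i];

        /* Independent CPU work overlapped with the offload. */
        #pragma omp task
        {
            /* ... unrelated host computation ... */
        }

        /* Consumer task: scheduled only after the target task completes. */
        #pragma omp task depend(in: y[0:N])
        printf("y[0] = %f\n", y[0]);
    }   /* implicit barrier: all tasks complete here */

    free(x);
    free(y);
    return 0;
}

As described in the abstract, the same pattern extends to MPI+OpenMP(task+target): tasks issuing asynchronous MPI communications can be suspended and resumed in the same opportunistic way, which is what the paper evaluates on LULESH.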

Notes

  1. https://gitlab.inria.fr/ropereir/iwomp2022.

  2. https://docs.nvidia.com/cuda/cuda-driver-api/api-sync-behavior.html.

  3. https://gitlab.inria.fr/ropereir/iwomp2022.

References

  1. Augonnet, C., Thibault, S., Namyst, R., Wacrenier, P.A.: StarPU: a unified platform for task scheduling on heterogeneous multicore architectures. Concurr. Comput. Pract. Exp. 23(2), 187–198 (2011). https://doi.org/10.1002/cpe.1631

  2. Ayguadé, E., Badia, R.M., Igual, F.D., Labarta, J., Mayo, R., Quintana-Ortí, E.S.: An extension of the StarSs programming model for platforms with multiple GPUs. In: Sips, H., Epema, D., Lin, H.-X. (eds.) Euro-Par 2009. LNCS, vol. 5704, pp. 851–862. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-03869-3_79

  3. Bauer, M., Treichler, S., Slaughter, E., Aiken, A.: Legion: expressing locality and independence with logical regions. In: SC 2012: Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis, pp. 1–11 (2012). https://doi.org/10.1109/SC.2012.71

  4. Beyer, J.C., Stotzer, E.J., Hart, A., de Supinski, B.R.: OpenMP for accelerators. In: Chapman, B.M., Gropp, W.D., Kumaran, K., Müller, M.S. (eds.) IWOMP 2011. LNCS, vol. 6665, pp. 108–121. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-21487-5_9

  5. Carribault, P., Pérache, M., Jourdren, H.: Enabling low-overhead hybrid MPI/OpenMP parallelism with MPC. In: Sato, M., Hanawa, T., Müller, M.S., Chapman, B.M., de Supinski, B.R. (eds.) IWOMP 2010. LNCS, vol. 6132, pp. 1–14. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-13217-9_1

  6. De Melo, A.C.: The new Linux ‘perf’ tools. In: Slides from Linux Kongress, vol. 18, pp. 1–42 (2010)

  7. Duran, A., Ferrer, R., Ayguadé, E., Badia, R.M., Labarta, J.: A proposal to extend the OpenMP tasking model with dependent tasks. Int. J. Parallel Program. 37(3), 292–305 (2009). https://doi.org/10.1007/s10766-009-0101-1

  8. Gautier, T., Lementec, F., Faucher, V., Raffin, B.: X-kaapi: a multi paradigm runtime for multicore architectures. In: Workshop P2S2 in Conjunction of ICPP, Lyon, France, p. 16, October 2013. https://hal.inria.fr/hal-00727827

  9. Jia, Z., Kwon, Y., Shipman, G., McCormick, P., Erez, M., Aiken, A.: A distributed multi-GPU system for fast graph processing. Proc. VLDB Endow. 11(3), 297–310 (2017). https://doi.org/10.14778/3157794.3157799

  10. Karlin, I., McGraw, J., Keasler, J., Still, B.: Tuning the LULESH Mini-App for Current and Future Hardware (2013). https://www.osti.gov/biblio/1070167

  11. Karlin, I.: LULESH Programming Model and Performance Ports Overview (2012)

  12. Karlin, I., Keasler, J., Neely, R.: LULESH 2.0 updates and changes. Technical report LLNL-TR-641973, August 2013

  13. Kowalke, O.: Distinguishing coroutines and fibers (2014)

  14. Lin, D.-L., Huang, T.-W.: Efficient GPU computation using task graph parallelism. In: Sousa, L., Roma, N., Tomás, P. (eds.) Euro-Par 2021. LNCS, vol. 12820, pp. 435–450. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-85665-6_27

  15. Lopez, V., Criado, J., Peñacoba, R., Ferrer, R., Teruel, X., Garcia-Gasulla, M.: An OpenMP free agent threads implementation. In: McIntosh-Smith, S., de Supinski, B.R., Klinkenberg, J. (eds.) IWOMP 2021. LNCS, vol. 12870, pp. 211–225. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-85262-7_15

  16. Lu, H., Seo, S., Balaji, P.: MPI+ULT: overlapping communication and computation with user-level threads, pp. 444–454, August 2015. https://doi.org/10.1109/HPCC-CSS-ICESS.2015.82

  17. Olivier, S.L., de Supinski, B.R., Schulz, M., Prins, J.F.: Characterizing and mitigating work time inflation in task parallel programs. In: Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis, SC 2012, Washington, DC, USA. IEEE Computer Society Press (2012)

  18. OpenMP Architecture Review Board: OpenMP Application Program Interface Version 4.0 (2013). http://www.openmp.org/mp-documents/OpenMP4.0.0.pdf

  19. OpenMP Architecture Review Board: OpenMP Application Program Interface Version 4.5 (2015). http://www.openmp.org/wp-content/uploads/openmp-4.5.pdf

  20. Pereira, R., Roussel, A., Carribault, P., Gautier, T.: Communication-aware task scheduling strategy in hybrid MPI+OpenMP applications. In: McIntosh-Smith, S., de Supinski, B.R., Klinkenberg, J. (eds.) IWOMP 2021. LNCS, vol. 12870, pp. 197–210. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-85262-7_14

  21. Protze, J., Hermanns, M.A., Demiralp, A., Müller, M.S., Kuhlen, T.: MPI detach - asynchronous local completion. In: 27th European MPI Users’ Group Meeting, EuroMPI/USA 2020, pp. 71–80. Association for Computing Machinery, New York (2020). https://doi.org/10.1145/3416315.3416323

  22. Sala, K., Teruel, X., Pérez, J., Peña, A., Beltran, V., Labarta, J.: Integrating blocking and non-blocking MPI primitives with task-based programming models. Parallel Comput. 85, 153–166 (2019). https://doi.org/10.1016/j.parco.2018.12.008

  23. Schuchart, J., Tsugane, K., Gracia, J., Sato, M.: The impact of Taskyield on the design of tasks communicating through MPI. In: de Supinski, B.R., Valero-Lara, P., Martorell, X., Mateo Bellido, S., Labarta, J. (eds.) IWOMP 2018. LNCS, vol. 11128, pp. 3–17. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-98521-3_1

  24. de Supinski, B.R., et al.: The ongoing evolution of OpenMP. Proc. IEEE 106(11), 2004–2019 (2018). https://doi.org/10.1109/JPROC.2018.2853600

  25. Tian, S., Doerfert, J., Chapman, B.: Concurrent execution of deferred OpenMP target tasks with hidden helper threads. In: Chapman, B., Moreira, J. (eds.) LCPC 2020. LNTCS, vol. 13149, pp. 41–56. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-95953-1_4

Acknowledgements

The present results have been obtained within the framework of DIGIT, a Contractual Research Laboratory between CEA, the CEA/DAM Île-de-France center and the University of Reims Champagne-Ardenne.

Author information

Corresponding author

Correspondence to Adrien Roussel.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Ferat, M., Pereira, R., Roussel, A., Carribault, P., Steffenel, LA., Gautier, T. (2022). Enhancing MPI+OpenMP Task Based Applications for Heterogeneous Architectures with GPU Support. In: Klemm, M., de Supinski, B.R., Klinkenberg, J., Neth, B. (eds) OpenMP in a Modern World: From Multi-device Support to Meta Programming. IWOMP 2022. Lecture Notes in Computer Science, vol 13527. Springer, Cham. https://doi.org/10.1007/978-3-031-15922-0_1

  • DOI: https://doi.org/10.1007/978-3-031-15922-0_1

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-15921-3

  • Online ISBN: 978-3-031-15922-0

  • eBook Packages: Computer Science, Computer Science (R0)
