- Research Article, November 2023
RMARaceBench: A Microbenchmark Suite to Evaluate Race Detection Tools for RMA Programs
SC-W '23: Proceedings of the SC '23 Workshops of The International Conference on High Performance Computing, Networking, Storage, and Analysis, Pages 205–214. https://doi.org/10.1145/3624062.3624087
Parallel programming models with Remote Memory Access (RMA), such as MPI RMA, OpenSHMEM, and GASPI, allow processes to modify the memory of other processes directly. Special care is required to avoid concurrent conflicting accesses that lead to data ...
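The hazard this benchmark suite targets — conflicting concurrent accesses to the same memory location — can be illustrated with a shared-memory analogy (a hypothetical sketch, not taken from the paper: Python threads stand in for RMA origin processes, a shared dict for the RMA window, and the lock for an RMA synchronization call such as those around `MPI_Put`/`MPI_Get` epochs):

```python
import threading

# Shared buffer stands in for a remotely accessible memory window; each
# worker thread models an origin process updating the same target location.
window = {"counter": 0}
lock = threading.Lock()

def accumulate(n):
    for _ in range(n):
        with lock:                  # models a synchronization call; without
            window["counter"] += 1  # it, concurrent updates can be lost

threads = [threading.Thread(target=accumulate, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(window["counter"])  # 40000: no increment is lost when access is synchronized
```

Removing the lock makes the increments a data race (read-modify-write is not atomic), which is exactly the class of bug RMA race detection tools look for in distributed one-sided communication.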
- Article, May 2022
Hybrid Parallel ILU Preconditioner in Linear Solver Library GaspiLS
Abstract: Krylov subspace solvers such as GMRES and preconditioners such as incomplete LU (ILU) are the most commonly used methods for efficiently solving general-purpose, large-scale linear systems in simulations. Parallel Krylov subspace solvers and ...
- Article, August 2021
Scalable Hybrid Parallel ILU Preconditioner to Solve Sparse Linear Systems
Euro-Par 2021: Parallel Processing Workshops, Pages 540–544. https://doi.org/10.1007/978-3-031-06156-1_46
Abstract: Incomplete LU (ILU) preconditioners are widely used to improve the convergence of general-purpose large sparse linear systems in computational simulations because of their robustness, accuracy, and usability as a black-box preconditioner. However, ...
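The role of an ILU preconditioner in a Krylov solver, as described in the two entries above, can be sketched with SciPy's serial `spilu` (a minimal, hypothetical illustration on a made-up 5-point-stencil test matrix; the hybrid parallel GaspiLS implementation in the papers is far more involved):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Hypothetical test system: a 2-D Poisson-like sparse matrix (5-point stencil).
n = 20
A = sp.diags([-1.0, -1.0, 4.0, -1.0, -1.0], [-n, -1, 0, 1, n],
             shape=(n * n, n * n), format="csc")
b = np.ones(n * n)

# Incomplete LU factorization of A, applied as a preconditioner for GMRES:
# ilu.solve(r) approximates A^{-1} r cheaply, accelerating convergence.
ilu = spla.spilu(A, drop_tol=1e-5, fill_factor=10)
M = spla.LinearOperator(A.shape, ilu.solve)

x, info = spla.gmres(A, b, M=M)
print(info)  # 0 indicates successful convergence
```

The black-box character mentioned in the abstract shows up here: the solver only needs the preconditioner's action `ilu.solve`, not its internal structure.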
A runtime system for finite element methods in a partitioned global address space
CF '20: Proceedings of the 17th ACM International Conference on Computing Frontiers, Pages 39–48. https://doi.org/10.1145/3387902.3392628
As exascale performance approaches, applications in the domain of high-performance computing (HPC) have to scale to an ever-increasing number of compute nodes. The Global Address Space Programming Interface (GASPI) communication API promises to ...
- Research Article, May 2019
Interoperability strategies for GASPI and MPI in large-scale scientific applications
- Christian Simmendinger,
- Roman Iakymchuk,
- Luis Cebamanos,
- Dana Akhmetova,
- Valeria Bartsch,
- Tiberiu Rotaru,
- Mirko Rahn,
- Erwin Laure,
- Stefano Markidis,
- Gabriele Mencagli,
- Felipe MG França,
- Cristiana Barbosa Bentes,
- Leandro Augusto Justen Marzulo,
- Mauricio Lima Pilla,
- Roman Wyrzykowski,
- Ewa Deelman
International Journal of High Performance Computing Applications (SAGE-HPCA), Volume 33, Issue 3, Pages 554–568. https://doi.org/10.1177/1094342018808359
One of the main hurdles for partitioned global address space (PGAS) approaches is the dominance of the Message Passing Interface (MPI), which as a de facto standard appears in the code base of many applications. To take advantage of PGAS APIs like global ...
- Research Article, September 2018
Building and utilizing fault tolerance support tools for the GASPI applications
International Journal of High Performance Computing Applications (SAGE-HPCA), Volume 32, Issue 5, Pages 613–626. https://doi.org/10.1177/1094342016677085
Today's high-performance computing systems are made possible by multiple increases in hardware parallelism. This results in a decrease of the mean time to failure of the systems with each newer generation, which is an alarming trend. Therefore, it is not ...
- Article, September 2015
Building a Fault Tolerant Application Using the GASPI Communication Layer
CLUSTER '15: Proceedings of the 2015 IEEE International Conference on Cluster Computing, Pages 580–587. https://doi.org/10.1109/CLUSTER.2015.106
It is commonly agreed that highly parallel software on exascale computers will suffer from many more runtime failures due to the decreasing trend in the mean time to failure (MTTF). Therefore, it is not surprising that a lot of research is going on in ...