13. PVM/MPI 2006: Bonn, Germany
- Bernd Mohr, Jesper Larsson Träff, Joachim Worringen, Jack J. Dongarra: Recent Advances in Parallel Virtual Machine and Message Passing Interface, 13th European PVM/MPI User's Group Meeting, Bonn, Germany, September 17-20, 2006, Proceedings. Lecture Notes in Computer Science 4192, Springer 2006, ISBN 3-540-39110-X
Invited Talks
- Al Geist: Too Big for MPI? 1
- Richard L. Graham: Approaches for Parallel Applications Fault Tolerance. 2
- William D. Gropp: Where Does MPI Need to Grow? 3
- Ryutaro Himeno: Peta-Scale Supercomputer Project in Japan and Challenges to Life and Human Simulation in Japan. 4
- Vaidy S. Sunderam: Resource and Application Adaptivity in Message Passing Systems. 5
- Katherine A. Yelick: Performance Advantages of Partitioned Global Address Space Languages. 6
Tutorials
- William D. Gropp, Ewing L. Lusk: Using MPI-2: A Problem-Based Approach. 7
- Bernd Mohr, Felix Wolf: Performance Tools for Parallel Programming. 8-9
- Robert B. Ross, Joachim Worringen: High-Performance Parallel I/O. 10
- Rolf Rabenseifner, Georg Hager, Gabriele Jost, Rainer Keller: Hybrid MPI and OpenMP Parallel Programming. 11
Outstanding Papers
- William D. Gropp, Rajeev Thakur: Issues in Developing a Thread-Safe MPI Implementation. 12-21
- Fabian Kulla, Peter Sanders: Scalable Parallel Suffix Array Construction. 22-29
- Salman Pervez, Ganesh Gopalakrishnan, Robert M. Kirby, Rajeev Thakur, William D. Gropp: Formal Verification of Programs That Use MPI One-Sided Communication. 30-39
Collective Communication
- Jelena Pjesivac-Grbovic, Graham E. Fagg, Thara Angskun, George Bosilca, Jack J. Dongarra: MPI Collective Algorithm Selection and Quadtree Encoding. 40-48
- Peter Sanders, Jesper Larsson Träff: Parallel Prefix (Scan) Algorithms for MPI. 49-57
- Jesper Larsson Träff: Efficient Allgather for Regular SMP-Clusters. 58-65
- Amith R. Mamidala, Abhinav Vishnu, Dhabaleswar K. Panda: Efficient Shared Memory and RDMA Based Design for MPI_Allgather over InfiniBand. 66-75
Communication Protocols
- Timothy S. Woodall, Galen M. Shipman, George Bosilca, Richard L. Graham, Arthur B. Maccabe: High Performance RDMA Protocols in HPC. 76-85
- Darius Buntinas, Guillaume Mercier, William D. Gropp: Implementation and Shared-Memory Evaluation of MPICH2 over the Nemesis Communication Subsystem. 86-95
- Manjunath Gorentla Venkata, Patrick G. Bridges: MPI/CTP: A Reconfigurable MPI for HPC Applications. 96-104
Debugging and Verification
- Bettina Krammer, Michael M. Resch: Correctness Checking of MPI One-Sided Communication Using Marmot. 105-114
- Christopher Gottbrath, Brian Barrett, William D. Gropp, Ewing L. Lusk, Jeffrey M. Squyres: An Interface to Support the Identification of Dynamic MPI 2 Processes for Scalable Parallel Debugging. 115-122
- Igor Grudenic, Nikola Bogunovic: Modeling and Verification of MPI Based Distributed Software. 123-132
Fault Tolerance
- David Dewolfs, Jan Broeckhove, Vaidy S. Sunderam, Graham E. Fagg: FT-MPI, Fault-Tolerant Metacomputing and Generic Name Services: A Case Study. 133-140
- Thara Angskun, Graham E. Fagg, George Bosilca, Jelena Pjesivac-Grbovic, Jack J. Dongarra: Scalable Fault Tolerant Protocol for Parallel Runtime Environments. 141-149
- Angelo Duarte, Dolores Rexachs, Emilio Luque: An Intelligent Management of Fault Tolerance in Cluster Using RADICMPI. 150-157
- Emilio Hernández, Yudith Cardinale, Wilmer Pereira: Extended mpiJava for Distributed Checkpointing and Recovery. 158-165
Metacomputing and Grid
- Franco Frattolillo: Running PVM Applications on Multidomain Clusters. 166-173
- Boris Bierbaum, Carsten Clauss, Thomas Eickermann, Lidia Kirtchakova, Arnold Krechel, Stephan Springstubbe, Oliver Wäldrich, Wolfgang Ziegler: Reliable Orchestration of Distributed MPI-Applications in a UNICORE-Based Grid with MetaMPICH and MetaScheduling. 174-183
- Boris Bierbaum, Carsten Clauss, Martin Pöppe, Stefan Lankes, Thomas Bemmerl: The New Multidevice Architecture of MetaMPICH in the Context of Other Approaches to Grid-Enabled MPI. 184-193
- Adam Kai Leung Wong, Andrzej M. Goscinski: Using an Enterprise Grid for Execution of MPI Parallel Applications - A Case Study. 194-201
Parallel I/O
- Joachim Worringen: Self-adaptive Hints for Collective I/O. 202-211
- Andrew B. Hastings, Alok N. Choudhary: Exploiting Shared Memory to Improve Parallel I/O Performance. 212-221
- Jan Seidel, Rudolf Berrendorf, Marcel Birkner, Marc-André Hermanns: High-Bandwidth Remote Parallel I/O with the Distributed Memory Filesystem MEMFS. 222-229
- Yuichi Tsujita: Effective Seamless Remote MPI-I/O Operations with Derived Data Types Using PVFS2. 230-237
Implementation Issues
- Surendra Byna, Xian-He Sun, Rajeev Thakur, William Gropp: Automatic Memory Optimizations for Improving MPI Derived Datatype Performance. 238-246
- Márcia C. Cera, Guilherme P. Pezzi, Elton N. Mathias, Nicolas Maillard, Philippe Olivier Alexandre Navaux: Improving the Dynamic Creation of Processes in MPI-2. 247-255
Object-Oriented Message Passing
- Guillermo L. Taboada, Juan Touriño, Ramon Doallo: Non-blocking Java Communications Support on Clusters. 256-265
- Prabhanjan Kambadur, Douglas P. Gregor, Andrew Lumsdaine, Amey Dharurkar: Modernizing the C++ Interface to MPI. 266-274
Limitations and Extensions
- Robert Latham, Robert B. Ross, Rajeev Thakur: Can MPI Be Used for Persistent Parallel Services? 275-284
- Claudia Leopold, Michael Süß: Observations on MPI-2 Support for Hybrid Master/Slave Applications in Dynamic and Heterogeneous Environments. 285-292
- Guntram Berti, Jesper Larsson Träff: What MPI Could (and Cannot) Do for Mesh-Partitioning on Non-homogeneous Networks. 293-302
Performance
- Markus Geimer, Felix Wolf, Brian J. N. Wylie, Bernd Mohr: Scalable Parallel Trace-Based Performance Analysis. 303-312
- Kevin A. Huck, Allen D. Malony, Sameer Shende, Alan Morris: TAUg: Runtime Global Performance Data Access Using MPI. 313-321
- Thomas Ludwig, Stephan Krempel, Julian M. Kunkel, Frank Panse, Dulip Withanage: Tracing the MPI-IO Calls' Disk Accesses. 322-330
- Douglas Doerfler, Ron Brightwell: Measuring MPI Send and Receive Overhead and Application Availability in High Performance Network Interfaces. 331-338
- Keith D. Underwood: Challenges and Issues in Benchmarking MPI. 339-346
- Rainer Keller, George Bosilca, Graham E. Fagg, Michael M. Resch, Jack J. Dongarra: Implementation and Usage of the PERUSE-Interface in Open MPI. 347-355
ParSim
- Carsten Trinitis, Martin Schulz: 5th International Special Session on Current Trends in Numerical Simulation for Parallel Engineering Environments. 356-357
- Mark Baker, Bryan Carpenter, Aamir Shafi: MPJ Express Meets Gadget: Towards a Java Code for Cosmological Simulations. 358-365
- Ulrich Küttler, Wolfgang A. Wall: An Approach for Parallel Fluid-Structure Interaction on Unstructured Meshes. 366-373
- Torsten Hoefler, Peter Gottschling, Wolfgang Rehm, Andrew Lumsdaine: Optimizing a Conjugate Gradient Solver with Non-Blocking Collective Operations. 374-382
- Andreas Pflug, Michael Siemers, Bernd Szyszka: Parallel DSMC Gasflow Simulation of an In-Line Coater for Reactive Sputtering. 383-390
- Jirí Starý, Radim Blaheta, Ondrej Jakl, Roman Kohut: Parallel Simulation of T-M Processes in Underground Repository of Spent Nuclear Fuel. 391-399
Poster Abstracts
- Dries Kimpe, Stefan Vandewalle, Stefaan Poedts: On the Usability of High-Level Parallel IO in Unstructured Grid Simulations. 400-401
- Joachim Worringen: Automated Performance Comparison. 402-403
- Carsten Kutzner, David van der Spoel, Martin Fechner, Erik Lindahl, Udo W. Schmitt, Bert L. de Groot, Helmut Grubmüller: Improved GROMACS Scaling on Ethernet Switched Clusters. 404-405
- Alexander V. Konovalov, Alexandr Kurylev, Anton Pegushin, Sergey Scharf: Asynchronity in Collective Operation Implementation. 406-407
- Alexey N. Salnikov: PARUS: A Parallel Programming Framework for Heterogeneous Multiprocessor Systems. 408-409
- Mitsuo Murata: Application of PVM to Protein Homology Search. 410-411