Irregular array accesses arise in many scientific applications, including sparse matrix solvers, unstructured-mesh partial differential equation (PDE) solvers, and particle methods. Traditional compilation techniques require that indices into data arrays be symbolically analyzable at compile time. A common characteristic of irregular applications, however, is the use of indirect indexing to represent relationships among array elements: data arrays are indexed through the values of other arrays, called indirection arrays. Figure 1 depicts a simple example of a loop with indirection arrays. Because indirection arrays hide the data access pattern from the compiler, that pattern cannot be characterized symbolically, which can prevent compilers from generating efficient code for irregular applications.
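Figure 1 itself is not reproduced in this extract. As a rough sketch of the kind of loop it depicts, consider the following C fragment; the array names (x, y, ia, ib) and the loop body are illustrative assumptions rather than the entry's own example.

/* A minimal sketch, assumed rather than taken from the entry, of a loop
   that accesses data arrays through indirection arrays. */
void irregular_loop(int n, double *y, const double *x,
                    const int *ia, const int *ib)
{
    for (int i = 0; i < n; i++) {
        /* ia and ib are the indirection arrays: which elements of x are
           read and which elements of y are written depends on their
           run-time contents, so the access pattern cannot be analyzed
           symbolically at compile time. */
        y[ib[i]] += x[ia[i]];
    }
}

Because the contents of ia and ib are unknown until the program runs, the compiler cannot determine whether two iterations touch the same element of y, and therefore cannot prove the loop safe to parallelize at compile time; run-time parallelization techniques address this by examining the index values during execution.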