
Run Time Parallelization

Reference work entry in the Encyclopedia of Parallel Computing

Synonyms

Parallelization

Irregular array accesses arise in many scientific applications, including sparse matrix solvers, unstructured-mesh partial differential equation (PDE) solvers, and particle methods. Traditional compilation techniques require that indices into data arrays be symbolically analyzable at compile time. A common characteristic of irregular applications is the use of indirect indexing to represent relationships among array elements: data arrays are indexed through the values of other arrays, called indirection arrays. Figure 1 depicts a simple example of a loop with indirection arrays. Because the contents of the indirection arrays are unknown until run time, compilers cannot characterize the array access patterns symbolically, which prevents them from generating efficient code for irregular applications.

Run Time Parallelization, Fig. 1: Example of a loop involving indirection arrays
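
A loop of this kind can be sketched as follows. This is a minimal illustration in C; the names x, y, ia, ib, and n are hypothetical and not taken from the figure.

    /* Sketch of a loop with indirection arrays: x and y are indexed
       through ia and ib, so the access pattern is only known once the
       indirection arrays are read at run time. */
    void irregular_loop(int n, double *x, const double *y,
                        const int *ia, const int *ib)
    {
        for (int i = 0; i < n; i++) {
            /* Whether two iterations touch the same element of x
               depends on the run-time contents of ia. */
            x[ia[i]] += y[ib[i]];
        }
    }

Because the entries of ia and ib are data, a compiler cannot prove at compile time that distinct iterations write distinct elements of x; this is precisely the situation that run-time parallelization techniques address.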



Copyright information

© 2011 Springer Science+Business Media, LLC

Cite this entry

Saltz, J.H., Das, R. (2011). Run Time Parallelization. In: Padua, D. (eds) Encyclopedia of Parallel Computing. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-09766-4_164
