Abstract
Over the last decade, performance prediction for industrial and scientific workloads on massively parallel high-performance computing systems has been, and remains, an active research area. Owing to the complexity of applications, deriving an analytical performance model from current workloads is increasingly challenging: automatically generated models often suffer from inaccurate performance prediction, while manually constructed analytical models predict better but are very labor-intensive. Our approach aims to close the gap between compiler-supported automatic model construction and manual analytical modeling of workloads. Commonly, performance-counter values are used to validate the model, so that prediction errors can be determined and quantified. Instead of manually instrumenting the executable to access performance counters, we modified the GCC compiler to insert calls to run-time system functions; added compiler options let the user control the instrumentation process, so that instrumentation focuses on frequently executed code parts. Similar to established frameworks, a run-time system tracks the application behavior: traces are generated at run time, enabling the construction of architecture-independent models (using quadratic programming) and, thus, the prediction of larger workloads. In this paper, we introduce our framework and demonstrate its applicability to benchmarks as well as real-world numerical workloads. The experiments reveal an average error rate of 9% for the prediction of larger workloads.
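To make the model-construction step concrete, the following is a minimal sketch of how trace measurements can be fitted to an architecture-independent model via a quadratic program. The basis functions (constant, linear, quadratic, and n log n terms of the problem size), the sample data, and the nonnegativity constraint are illustrative assumptions, not the paper's exact formulation; nonnegative least squares is used here because it is a standard quadratic-programming instance.

```python
# Hypothetical sketch: fit a performance model from instrumentation traces.
# Feature choice, sample data, and the x >= 0 constraint are assumptions
# for illustration, not the formulation used in the paper.
import numpy as np
from scipy.optimize import nnls

# Traces: (problem size, measured cycles) pairs from instrumented runs.
sizes = np.array([64, 128, 256, 512, 1024], dtype=float)
cycles = np.array([1.1e5, 4.3e5, 1.7e6, 6.9e6, 2.8e7])

# Candidate basis functions of the problem size.
A = np.column_stack([np.ones_like(sizes), sizes, sizes**2,
                     sizes * np.log2(sizes)])

# Nonnegative least squares is a quadratic program:
#   minimize ||A x - cycles||^2  subject to  x >= 0.
coeffs, residual = nnls(A, cycles)

def predict(n):
    """Extrapolate the fitted model to a larger, unmeasured workload."""
    feats = np.array([1.0, n, n**2, n * np.log2(n)])
    return feats @ coeffs

print(predict(2048))  # prediction for a workload twice the largest measured
```

Because the fitted coefficients are constrained to be nonnegative, each basis term can only add cost, which keeps the extrapolation to larger workload sizes monotone in the chosen features.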
Copyright information
© 2010 Springer-Verlag Berlin Heidelberg
Cite this paper
Schindewolf, M., Kramer, D., Cintra, M. (2010). Compiler-Directed Performance Model Construction for Parallel Programs. In: Müller-Schloer, C., Karl, W., Yehia, S. (eds) Architecture of Computing Systems - ARCS 2010. ARCS 2010. Lecture Notes in Computer Science, vol 5974. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-11950-7_17
Print ISBN: 978-3-642-11949-1
Online ISBN: 978-3-642-11950-7