DOI: 10.1145/1449764.1449794

Java performance evaluation through rigorous replay compilation

Published: 19 October 2008

Abstract

A managed runtime environment, such as the Java virtual machine, is non-trivial to benchmark. Java performance is affected in various complex ways by the application and its input, as well as by the virtual machine (JIT optimizer, garbage collector, thread scheduler, etc.). In addition, non-determinism due to timer-based sampling for JIT optimization, thread scheduling, and various system effects further complicates the Java performance benchmarking process.
Replay compilation is a recently introduced Java performance analysis methodology that aims at controlling non-determinism to improve experimental repeatability. The key idea of replay compilation is to control the compilation load during experimentation by inducing a pre-recorded compilation plan at replay time. Replay compilation also enables teasing apart performance effects of the application versus the virtual machine.
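The record/replay idea described above can be illustrated with a small, hypothetical simulation. This is only a sketch of the methodology, not the actual mechanism of any particular VM; the function names, the sampling model, and the optimization levels are invented for illustration:

```python
import random

def record_compilation_plan(methods, seed):
    """Record phase: a simulated timer-sampled profiler decides which
    methods get JIT-optimized and at which level. In a real VM this is
    non-deterministic; the seed stands in for that run-to-run variation."""
    rng = random.Random(seed)
    plan = {}
    for m in methods:
        if rng.random() < 0.5:               # method observed as "hot" in this run
            plan[m] = rng.choice([1, 2, 3])  # chosen optimization level
    return plan

def replay(plan):
    """Replay phase: the pre-recorded plan is induced verbatim, so the
    compilation load is identical across measurement runs."""
    return dict(plan)  # same decisions every time, no sampling involved

methods = ["foo", "bar", "baz", "quux"]
plans = [record_compilation_plan(methods, seed) for seed in range(3)]

# Different recordings may differ from one another, but replaying any one
# recorded plan is fully deterministic:
assert replay(plans[0]) == replay(plans[0])
```

Running the record phase several times (here, with different seeds) can yield different plans; that plan-to-plan variability is exactly what the paper argues a single-plan replay methodology fails to account for.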
This paper argues that, in contrast to current practice, which uses a single compilation plan at replay time, multiple compilation plans add statistical rigor to the replay compilation methodology. By doing so, replay compilation better accounts for the variability observed in compilation load across compilation plans. In addition, we propose matched-pair comparison for statistical data analysis. Matched-pair comparison considers the performance measurements per compilation plan before and after an innovation of interest as a pair, which limits the number of compilation plans needed for accurate performance analysis compared to statistical analysis that assumes unpaired measurements.
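As an illustration of the matched-pair idea, the following sketch analyzes hypothetical before/after execution times for five compilation plans. The numbers are invented, and the analysis is a standard paired t-style confidence interval, not necessarily the exact procedure used in the paper:

```python
import statistics as st

# Hypothetical execution times (seconds) for one benchmark under five
# recorded compilation plans, before and after an optimization of interest.
before = [4.10, 4.35, 3.98, 4.22, 4.50]
after  = [3.90, 4.20, 3.80, 4.01, 4.28]

# Matched-pair comparison: pair the before/after measurements per plan and
# analyze the differences, so plan-to-plan variability cancels out of the
# comparison instead of inflating the noise term.
diffs = [b - a for b, a in zip(before, after)]
k = len(diffs)
mean_d = st.mean(diffs)
se_d = st.stdev(diffs) / k ** 0.5

# 95% confidence interval for the mean difference; 2.776 is the two-sided
# t critical value for k - 1 = 4 degrees of freedom.
t_crit = 2.776
ci = (mean_d - t_crit * se_d, mean_d + t_crit * se_d)
print(f"mean diff = {mean_d:.3f} s, 95% CI = [{ci[0]:.3f}, {ci[1]:.3f}]")
# Here the interval excludes 0, so the speedup is significant even with
# only five plans; an unpaired analysis of the same data would leave the
# plan-to-plan variability in the noise and yield a wider interval.
```

This variance reduction is why pairing per compilation plan lets an experimenter get away with fewer plans than an unpaired analysis would require for the same confidence.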




Published In

OOPSLA '08: Proceedings of the 23rd ACM SIGPLAN conference on Object-oriented programming systems languages and applications
October 2008, 654 pages
ISBN: 9781605582153
DOI: 10.1145/1449764

Also published in ACM SIGPLAN Notices, Volume 43, Issue 10, September 2008, 613 pages. ISSN: 0362-1340, EISSN: 1558-1160, DOI: 10.1145/1449955.

Publisher

Association for Computing Machinery, New York, NY, United States



Author Tags

  1. benchmarking
  2. java
  3. matched-pair comparison
  4. performance evaluation
  5. replay compilation
  6. virtual machine

Qualifiers

  • Research-article

Conference

OOPSLA '08

Acceptance Rates

Overall Acceptance Rate 268 of 1,244 submissions, 22%



Cited By

  • (2023) Diagnosing Compiler Performance by Comparing Optimization Decisions. Proceedings of the 20th ACM SIGPLAN International Conference on Managed Programming Languages and Runtimes, pp. 47-61. DOI: 10.1145/3617651.3622994. Online publication date: 19-Oct-2023.
  • (2023) Don't Trust Your Profiler: An Empirical Study on the Precision and Accuracy of Java Profilers. Proceedings of the 20th ACM SIGPLAN International Conference on Managed Programming Languages and Runtimes, pp. 100-113. DOI: 10.1145/3617651.3622985. Online publication date: 19-Oct-2023.
  • (2023) Optimization-Aware Compiler-Level Event Profiling. ACM Transactions on Programming Languages and Systems, 45(2):1-50. DOI: 10.1145/3591473. Online publication date: 26-Jun-2023.
  • (2023) Automated Generation and Evaluation of JMH Microbenchmark Suites From Unit Tests. IEEE Transactions on Software Engineering, 49(4):1704-1725. DOI: 10.1109/TSE.2022.3188005. Online publication date: 1-Apr-2023.
  • (2023) Controlling Automatic Experiment-Driven Systems Using Statistics and Machine Learning. Software Architecture. ECSA 2022 Tracks and Workshops, pp. 105-119. DOI: 10.1007/978-3-031-36889-9_9. Online publication date: 16-Jul-2023.
  • (2022) Compressed Forwarding Tables Reconsidered. Proceedings of the 19th International Conference on Managed Programming Languages and Runtimes, pp. 45-63. DOI: 10.1145/3546918.3546928. Online publication date: 14-Sep-2022.
  • (2022) Reducing Experiment Costs in Automated Software Performance Regression Detection. 2022 48th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), pp. 56-59. DOI: 10.1109/SEAA56994.2022.00017. Online publication date: Aug-2022.
  • (2021) How Software Refactoring Impacts Execution Time. ACM Transactions on Software Engineering and Methodology, 31(2):1-23. DOI: 10.1145/3485136. Online publication date: 24-Dec-2021.
  • (2020) Towards rigorous validation of energy optimisation experiments. Proceedings of the 2020 Genetic and Evolutionary Computation Conference, pp. 1232-1240. DOI: 10.1145/3377930.3390245. Online publication date: 25-Jun-2020.
  • (2020) A Rigorous Benchmarking and Performance Analysis Methodology for Python Workloads. 2020 IEEE International Symposium on Workload Characterization (IISWC), pp. 83-93. DOI: 10.1109/IISWC50251.2020.00017. Online publication date: Oct-2020.
