Utilizing Performance Unit Tests To Increase Performance Awareness

Published: 31 January 2015

Abstract

Many decisions taken during software development impact the resulting application performance. Key decisions with a large potential impact are usually weighed carefully. In contrast, the same care is not applied to the many decisions whose individual impact is likely to be small, simply because the cost of weighing each one would outweigh the benefit. Developer opinion is the common deciding factor in these cases, and our goal is to provide the developer with information that helps form such an opinion, thus preventing the performance loss caused by the accumulated effect of many poor decisions.
Our method turns performance unit tests into recipes for generating performance documentation. When the developer selects an interface and a workload of interest, the relevant performance documentation is generated interactively. This increases performance awareness: with performance information available alongside standard interface documentation, developers should find it easier to make informed decisions even in situations where an expensive performance evaluation is not practical. We demonstrate the method on multiple examples that show how code is equipped with performance unit tests.
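
To make the idea concrete, the following sketch shows what a performance unit test behind such documentation could look like. It is not the paper's actual tooling: it is a minimal, hypothetical example written against JMH (the Java Microbenchmark Harness), with an assumed interface (java.util.List#contains) and assumed workload sizes. The point is that the measured method and its workload are declared explicitly, so a documentation generator can execute the test for the interface and workload the developer selects and attach the measured results to the standard interface documentation.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.TimeUnit;

    import org.openjdk.jmh.annotations.Benchmark;
    import org.openjdk.jmh.annotations.BenchmarkMode;
    import org.openjdk.jmh.annotations.Mode;
    import org.openjdk.jmh.annotations.OutputTimeUnit;
    import org.openjdk.jmh.annotations.Param;
    import org.openjdk.jmh.annotations.Scope;
    import org.openjdk.jmh.annotations.Setup;
    import org.openjdk.jmh.annotations.State;

    // Hypothetical performance unit test for java.util.List#contains.
    // The workload (collection size) is an explicit, selectable parameter.
    @State(Scope.Benchmark)
    public class ListContainsPerfTest {

        // Workload sizes are illustrative; a documentation generator could
        // offer these as the "workload of interest" the developer selects.
        @Param({"1000", "100000"})
        int size;

        List<Integer> list;

        @Setup
        public void prepare() {
            list = new ArrayList<>();
            for (int i = 0; i < size; i++) {
                list.add(i);
            }
        }

        @Benchmark
        @BenchmarkMode(Mode.AverageTime)
        @OutputTimeUnit(TimeUnit.MICROSECONDS)
        public boolean containsAbsentElement() {
            // Worst case for a linear scan: the element is not present.
            return list.contains(-1);
        }
    }

A generator in the spirit of the paper could then run such a test for the sizes the developer picks and render the measured average times next to the Javadoc entry for List.contains.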

References

[1]
K. Beck. Simple Smalltalk Testing. Cambridge University Press, 1997.
[2]
A. Buble, L. Bulej, and P. Tůma. CORBA benchmarking: a course with hidden obstacles. In Proc. IPDPS 2003 PMEOPDS, 2003.
[3]
L. Bulej, T. Bures, V. Horky, J. Keznikl, and P. Tuma. Performance awareness in component systems: Vision paper. In Proc. COMPSAC 2012 CORCS, 2012.
[4]
L. Bulej, T. Bures, J. Keznikl, A. Koubkova, A. Podzimek, and P. Tuma. Capturing Performance Assumptions using Stochastic Performance Logic. In Proc. ICPE 2012. ACM, 2012.
[5]
O. Burn et al. Checkstyle, 2014. http://checkstyle.sf.net.
[6]
M. Böhm and J.-J. Dubray. Dom4J performance versus Xerces / Xalan, 2008. http://dom4j.sf.net/dom4j-1.6.1/benchmarks/xpath.
[7]
Caliper: Microbenchmarking framework for Java, 2013. http://code.google.com/p/caliper.
[8]
Y. Chen, R. H. Katz, and J. D. Kubiatowicz. Dynamic replica placement for scalable content delivery. In Proc. IPTPS 2002. Springer, 2002.
[9]
C. Click. The Art of Java Benchmarking. http://www.azulsystems.com/presentations/art-of-java-benchmarking.
[10]
C. Tapus, I.-H. Chung, and J. K. Hollingsworth. Active Harmony: Towards automated performance tuning. In Proc. SC 2002. IEEE, 2002.
[11]
DocBook, 2014. http://www.docbook.org.
[12]
Document Object Model, 2005. http://w3.org/DOM.
[13]
A. D'Ambrogio. A WSDL extension for performance-enabled description of web services. In Proc. ISCIS 2005. Springer, 2005.
[14]
Eclipse, 2014. http://www.eclipse.org.
[15]
M. Ellims, J. Bridges, and D. Ince. The economics of unit testing. Empirical Software Engineering, 11(1), 2006.
[16]
T. Fahringer and C. S. Jünior. Modeling and detecting performance problems for distributed and parallel programs with JavaPSL. In Proc. SC 2001. ACM, 2001.
[17]
A. Georges, D. Buytaert, and L. Eeckhout. Statistically rigorous Java performance evaluation. In Proc. OOPSLA 2007. ACM, 2007.
[18]
GRAL, 2014. http://trac.erichseifert.de/gral.
[19]
Guava: Google Core Libraries for Java 1.6+, 2014. http://code.google.com/p/guava-libraries.
[20]
V. Horký, F. Haas, J. Kotr, M. Lčacina, and P. Tuma. Performance Regression Unit Testing: A Case Study. In Proc. EPEW 2013. Springer, 2013.
[21]
D. Hovemeyer and W. Pugh. Finding bugs is easy. SIGPLAN Not., 39(12), Dec. 2004. http://findbugs.sf.net.
[22]
IEEE standard for software unit testing. ANSI/IEEE Std 1008--1987, 1986.
[23]
Japex Micro-benchmark Framework, 2013. https://java.net/projects/japex.
[24]
Javadoc Tool, 2014. http://www.oracle.com/technetwork/java/javase/documentation/index-jsp-135444.html.
[25]
Jaxen, 2013. http://jaxen.codehaus.org.
[26]
JDOM, 2013. http://www.jdom.org.
[27]
JFreeChart, 2013. http://www.jfree.org/jfreechart.
[28]
JMH: Java Microbenchmark Harness, 2014. http://openjdk.java.net/projects/code-tools/jmh.
[29]
T. Kalibera, L. Bulej, and P. Tuma. Benchmark precision and random initial state. In Proc. SPECTS 2005, 2005.
[30]
L. Madeyski. Test-Driven Development: An Empirical Evaluation of Agile Practice. Springer, 2010.
[31]
L. Marek et al. DiSL: a domain-specific language for bytecode instrumentation. In Proc. AOSD 2012, 2012.
[32]
Metadata Encoding and Transmission Standard, 2014. http://www.loc.gov/standards/mets.
[33]
N. Mitchell and G. Sevitsky. The causes of bloat, the limits of health. In Proc. OOPSLA 2007. ACM, 2007.
[34]
NIST/SEMATECH e-Handbook of Statistical Methods, 2014. http://www.itl.nist.gov/div898/handbook.
[35]
OpenBenchmarking.org: An Open, Collaborative Testing Platform For Benchmarking & Performance Analysis, 2014. http://openbenchmarking.org.
[36]
Primitive Collections for Java, 2003. http://pcj.sf.net.
[37]
O. Shacham, M. Vechev, and E. Yahav. Chameleon: Adaptive selection of collections. In Proc. PLDI 2009. ACM, 2009.
[38]
ACE+TAO+CIAO+DAnCE Distributed Scoreboard, 2014. http://www.dre.vanderbilt.edu/scoreboard.
[39]
SPL Tools, 2013. http://d3s.mff.cuni.cz/software/spl.
[40]
Trove, 2012. http://trove.starlight-systems.com.
[41]
Xeiam XChart, 2014. http://xeiam.com/xchart.jsp.
[42]
XML Path Language (XPath) 2.0, 2010. http://w3.org/TR/xpath20.
[43]
G. Xu, M. Arnold, N. Mitchell, A. Rountev, and G. Sevitsky. Go with the flow: Profiling copies to find runtime bloat. In Proc. PLDI 2009. ACM, 2009.
[44]
G. Xu et al. Software bloat analysis: Finding, removing, and preventing performance problems in modern large-scale object-oriented applications. In Proc. FoSER 2010. ACM, 2010.
[45]
H. Yu, D. Zhang, and L. Rauchwerger. An adaptive algorithm selection framework. In Proc. PACT 2004. IEEE, 2004.


Information

Published In

ICPE '15: Proceedings of the 6th ACM/SPEC International Conference on Performance Engineering
January 2015
366 pages
ISBN: 9781450332484
DOI: 10.1145/2668930
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. java
  2. javadoc
  3. performance awareness
  4. performance documentation
  5. performance testing

Qualifiers

  • Research-article

Funding Sources

  • Charles University
  • FP 7 FET Proactive

Conference

ICPE'15: ACM/SPEC International Conference on Performance Engineering
January 28 - February 4, 2015
Austin, Texas, USA

Acceptance Rates

ICPE '15 Paper Acceptance Rate: 23 of 74 submissions, 31%
Overall Acceptance Rate: 252 of 851 submissions, 30%

Bibliometrics

Article Metrics

  • Downloads (Last 12 months): 106
  • Downloads (Last 6 weeks): 19
Reflects downloads up to 01 Jan 2025

Cited By

  • (2024) Overhead Comparison of Instrumentation Frameworks. Companion of the 15th ACM/SPEC International Conference on Performance Engineering, pp. 249-256. DOI: 10.1145/3629527.3652269. Online publication date: 7-May-2024.
  • (2022) Characterizing and Detecting Methods to be Benchmarked under Performance Unit Test. International Journal of Software Engineering and Knowledge Engineering, 32:09, pp. 1279-1305. DOI: 10.1142/S0218194022500486. Online publication date: 20-Aug-2022.
  • (2021) Using application benchmark call graphs to quantify and improve the practical relevance of microbenchmark suites. PeerJ Computer Science, 7, e548. DOI: 10.7717/peerj-cs.548. Online publication date: 28-May-2021.
  • (2021) Applying test case prioritization to software microbenchmarks. Empirical Software Engineering, 26:6. DOI: 10.1007/s10664-021-10037-x. Online publication date: 30-Sep-2021.
  • (2021) Predicting unstable software benchmarks using static source code features. Empirical Software Engineering, 26:6. DOI: 10.1007/s10664-021-09996-y. Online publication date: 18-Aug-2021.
  • (2020) Towards the use of the readily available tests from the release pipeline as performance tests. Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering, pp. 1435-1446. DOI: 10.1145/3377811.3380351. Online publication date: 27-Jun-2020.
  • (2020) Can a Chatbot Support Software Engineers with Load Testing? Approach and Experiences. Proceedings of the ACM/SPEC International Conference on Performance Engineering, pp. 120-129. DOI: 10.1145/3358960.3375792. Online publication date: 20-Apr-2020.
  • (2019) Microservice-Tailored Generation of Session-Based Workload Models for Representative Load Testing. 2019 IEEE 27th International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS), pp. 323-335. DOI: 10.1109/MASCOTS.2019.00043. Online publication date: Oct-2019.
  • (2019) Software microbenchmarking in the cloud. How bad is it really? Empirical Software Engineering, 24:4, pp. 2469-2508. DOI: 10.1007/s10664-019-09681-1. Online publication date: 1-Aug-2019.
  • (2018) An evaluation of open-source software microbenchmark suites for continuous performance assessment. Proceedings of the 15th International Conference on Mining Software Repositories, pp. 119-130. DOI: 10.1145/3196398.3196407. Online publication date: 28-May-2018.
