Research article · DOI: 10.5555/2818754.2818817

A flexible and non-intrusive approach for computing complex structural coverage metrics

Published: 16 May 2015

Abstract

Software analysis tools and techniques often leverage structural code coverage information to reason about the dynamic behavior of software. Existing techniques instrument the code with the required structural obligations and then monitor the execution of the compiled code to report coverage. Instrumentation-based approaches often incur considerable runtime overhead for complex structural coverage metrics such as Modified Condition/Decision Coverage (MC/DC). Code instrumentation, in general, must be approached with great care to ensure it does not alter the behavior of the original code. Furthermore, instrumented code cannot be used in conjunction with other analyses that reason about the structure and semantics of the code under test.
In this work, we introduce a non-intrusive preprocessing approach for computing structural coverage information. It uses a static partial evaluation of the decisions in the source code and a source-to-bytecode mapping to generate the information necessary to efficiently track structural coverage metrics during execution. Our technique is flexible; the results of the preprocessing can be used by a variety of coverage-driven software analysis tasks, including automated analyses that are not possible for instrumented code. Experimental results in the context of symbolic execution show the efficiency and flexibility of our non-intrusive approach for computing code coverage information.
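The MC/DC metric discussed above requires that each atomic condition in a decision be shown to independently affect the decision's outcome. As a rough illustration of what an analysis must track (this is not the paper's technique, and all names are illustrative), the following Python sketch uses the unique-cause definition of MC/DC: a condition is covered when two observed test vectors differ only in that condition and produce different decision outcomes.

```python
def mcdc_covered_conditions(decision, vectors):
    """Return the set of condition indices whose independent effect on
    `decision` is demonstrated by some pair of vectors in `vectors`.

    Unique-cause MC/DC: condition i is covered if two observed vectors
    differ only in position i and yield different decision outcomes."""
    covered = set()
    vs = list(vectors)
    for a in vs:
        for b in vs:
            # Positions where the two condition vectors disagree.
            diff = [i for i in range(len(a)) if a[i] != b[i]]
            if len(diff) == 1 and decision(*a) != decision(*b):
                covered.add(diff[0])
    return covered

# Illustrative decision over three conditions: (a and b) or c
decision = lambda a, b, c: (a and b) or c

# A small test suite of observed condition vectors.
suite = [(True, True, False), (False, True, False),
         (True, False, False), (True, True, True)]

print(sorted(mcdc_covered_conditions(decision, suite)))  # → [0, 1]
```

With this suite, conditions `a` and `b` are shown to independently affect the outcome, but `c` is not: no pair of vectors toggles only `c` with a change in the decision, so MC/DC is not yet satisfied.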

References

[1] Agitar Code Coverage. http://www.agitar.com/.
[2] Clover Java Code Coverage. http://www.cenqua.com/clover.
[3] IBM Rational PureCoverage. http://www.rational.com/products/purecoverage/.
[4] JCover Java code coverage analyzer. http://www.mmsindia.com/JCover.html.
[5] PurifyPlus run-time analysis tools for application reliability and performance. http://www-03.ibm.com/software/products/en/purifyplus.
[6] Z. Awedikian, K. Ayari, and G. Antoniol. MC/DC automatic test input data generation. In GECCO, pages 1657--1664, 2009.
[7] S. Bardin, N. Kosmatov, and F. Cheynier. Efficient leveraging of symbolic execution to advanced coverage criteria. In ICST, pages 173--182, 2014.
[8] S. Elbaum, A. G. Malishevsky, and G. Rothermel. Prioritizing test cases for regression testing. In ISSTA, pages 102--112, 2000.
[9] S. G. Elbaum and J. C. Munson. Evaluating regression test suites based on their fault exposure capability. Journal of Software Maintenance: Research and Practice, 12(3):171--184, 2000.
[10] K. Ghani and J. A. Clark. Automatic test data generation for multiple condition and MC/DC coverage. In ICSEA, pages 152--157, 2009.
[11] M. J. Harrold, R. Gupta, and M. L. Soffa. A methodology for controlling the size of a test suite. ACM Trans. Softw. Eng. Methodol., 2(3):270--285, July 1993.
[12] K. Hayhurst, D. Veerhusen, and L. Rierson. A practical tutorial on modified condition/decision coverage. Technical Report TM-2001-210876, NASA, 2001.
[13] L. Inozemtseva and R. Holmes. Coverage is not strongly correlated with test suite effectiveness. In ICSE, pages 435--445, 2014.
[14] J. A. Jones, M. J. Harrold, and J. Stasko. Visualization of test information to assist fault localization. In ICSE, pages 467--477, 2002.
[15] A. Joshi and M. P. E. Heimdahl. Model-based safety analysis of Simulink models using SCADE Design Verifier. In SAFECOMP, volume 3688 of LNCS, pages 122--135, September 2005.
[16] K. Lakhotia, P. McMinn, and M. Harman. Automated test data generation for coverage: Haven't we solved this problem yet? In TAIC PART, pages 95--104, 2009.
[17] J. J. Li, D. M. Weiss, and H. Yee. An automatically-generated run-time instrumenter to reduce coverage testing overhead. In AST, pages 49--56, 2008.
[18] C.-K. Luk, R. Cohn, R. Muth, H. Patil, A. Klauser, G. Lowney, S. Wallace, V. J. Reddi, and K. Hazelwood. Pin: building customized program analysis tools with dynamic instrumentation. In PLDI, pages 190--200, 2005.
[19] J. Misurda, J. A. Clause, J. L. Reed, B. R. Childers, and M. L. Soffa. Demand-driven structural testing with dynamic instrumentation. In ICSE, pages 156--165, 2005.
[20] C. Pacheco, S. K. Lahiri, M. D. Ernst, and T. Ball. Feedback-directed random test generation. In ICSE, pages 75--84, 2007.
[21] C. Păsăreanu and N. Rungta. Symbolic PathFinder: symbolic execution of Java bytecode. In ASE, pages 179--180, 2010.
[22] C. S. Păsăreanu, W. Visser, D. Bushnell, J. Geldenhuys, P. Mehlitz, and N. Rungta. Symbolic PathFinder: integrating symbolic execution with model checking for Java bytecode analysis. Automated Software Engineering, pages 1--35.
[23] S. Person, G. Yang, N. Rungta, and S. Khurshid. Directed incremental symbolic execution. In PLDI, pages 504--515, 2011.
[24] S. Rapps and E. J. Weyuker. Selecting software test data using data flow information. IEEE Trans. Software Engineering, SE-11(4):367--375, April 1985.
[25] G. Rothermel and M. J. Harrold. A safe, efficient regression test selection technique. ACM Trans. Softw. Eng. Methodol., 6(2):173--210, April 1997.
[26] G. Rothermel, M. J. Harrold, J. Ostrin, and C. Hong. An empirical study of the effects of minimization on the fault detection capabilities of test suites. In ICSM, pages 34--43, 1998.
[27] RTCA/DO-178C. Software considerations in airborne systems and equipment certification.
[28] N. Rungta, S. Person, and J. Branchaud. A change impact analysis to characterize evolving program behaviors. In ICSM, pages 109--118, 2012.
[29] SAE-ARP4761. Guidelines and Methods for Conducting the Safety Assessment Process on Civil Airborne Systems and Equipment. SAE International, December 1996.
[30] K. Sen and G. Agha. CUTE and jCUTE: concolic unit testing and explicit path model-checking tools. In CAV, pages 419--423, 2006. (Tool paper.)
[31] M. Staats. Towards a framework for generating tests to satisfy complex code coverage in Java PathFinder. In NFM, pages 116--120, 2009.
[32] M. Staats, G. Gay, M. W. Whalen, and M. P. Heimdahl. On the danger of coverage directed test case generation. In FASE, April 2012.
[33] J. Sztipanovits and G. Karsai. Generative programming for embedded systems. In GPCE, pages 32--49, 2002.
[34] M. M. Tikir and J. K. Hollingsworth. Efficient instrumentation for code coverage testing. SIGSOFT Softw. Eng. Notes, 27(4):86--96, July 2002.
[35] G.-R. Uh, R. Cohn, B. Yadavalli, R. Peri, and R. Ayyagari. Analyzing dynamic binary instrumentation overhead. In Workshop on Binary Instrumentation and Application, 2007.
[36] W. Visser, K. Havelund, G. Brat, and S. Park. Model checking programs. In ASE, 2000.
[37] W. Visser, C. S. Păsăreanu, and R. Pelánek. Test input generation for Java containers using state matching. In ISSTA, pages 37--48, 2006.
[38] M. W. Whalen, M. P. Heimdahl, and I. J. D. Silva. Efficient test coverage measurement for MC/DC. Technical Report 13-019, University of Minnesota Twin Cities, 2013.
[39] Q. Yang, J. J. Li, and D. Weiss. A survey of coverage based testing tools. In AST, pages 99--103, 2006.
[40] H. Zhu and P. Hall. Test data adequacy measurement. Software Engineering Journal, 8(1):21--29, 1993.

Cited By

  • (2019) Optimal MC/DC test case generation. In Proceedings of the 41st International Conference on Software Engineering: Companion Proceedings, pages 288--289. DOI: 10.1109/ICSE-Companion.2019.00118. Online publication date: 25-May-2019.
  • (2018) Bug localization with semantic and structural features using convolutional neural network and cascade forest. In Proceedings of the 22nd International Conference on Evaluation and Assessment in Software Engineering 2018, pages 101--111. DOI: 10.1145/3210459.3210469. Online publication date: 28-Jun-2018.
  • (2018) Statistical errors in software engineering experiments. In Proceedings of the 40th International Conference on Software Engineering, pages 1195--1206. DOI: 10.1145/3180155.3180161. Online publication date: 27-May-2018.



Published In

ICSE '15: Proceedings of the 37th International Conference on Software Engineering - Volume 1
May 2015
999 pages
ISBN:9781479919345

Publisher

IEEE Press

Acceptance Rates

Overall Acceptance Rate 276 of 1,856 submissions, 15%

