Research article · DOI: 10.1145/3439961.3439977 · SBQS conference proceedings

How far are we from testing a program in a completely automated way, considering the mutation testing criterion at unit level?

Published: 06 March 2021

Abstract

Testing is a mandatory activity to guarantee software quality. Generating high-quality test cases requires not only knowledge about the software under test but also knowledge about the business rules implemented in the software product, in order to cover more than 80% of its source code. In this study, we therefore investigate the adequacy, effectiveness, and cost of smart and random automatically generated test sets for Java programs. We observed that smart-generated test sets are, in general, more adequate and less expensive than randomly generated ones, but that, regarding effectiveness, randomly generated tests perform better. Moreover, we observed that smart automated test sets are complementary to one another, and we explored whether randomly generated test sets could complement smart automated test sets as well. When we combined smart-generated test sets, we observed an increase of more than 8% in statement coverage and more than 15% in mutation score compared to randomly generated test sets. However, when we added randomly generated test sets to the previous combination of smart-generated test sets, the results showed only a small additional increase in statement coverage and mutation score, while considerably increasing the test-set generation cost. Therefore, we advocate that random testing be integrated with smart-generated tests only together with a minimization strategy that avoids redundant test sets, keeping the cost reasonable.


Cited By

  • (2024) Automating the correctness assessment of AI-generated code for security contexts. Journal of Systems and Software 216, 112113. https://doi.org/10.1016/j.jss.2024.112113. Online publication date: Oct 2024.
  • (2023) An initial investigation of ChatGPT unit test generation capability. In Proceedings of the 8th Brazilian Symposium on Systematic and Automated Software Testing, 15–24. https://doi.org/10.1145/3624032.3624035. Online publication date: 25 Sep 2023.
  • (2023) An Experimental Study Evaluating Cost, Adequacy, and Effectiveness of Pynguin's Test Sets. In Proceedings of the 8th Brazilian Symposium on Systematic and Automated Software Testing, 5–14. https://doi.org/10.1145/3624032.3624034. Online publication date: 25 Sep 2023.

Published In

SBQS '20: Proceedings of the XIX Brazilian Symposium on Software Quality
December 2020
430 pages
ISBN:9781450389235
DOI:10.1145/3439961

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. Automated Test Data Generator
  2. Coverage Testing
  3. Mutation Testing
  4. Software Testing
  5. Test Set Combination

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • CAPES Brazilian Funding Agency

Conference

SBQS'20: 19th Brazilian Symposium on Software Quality
December 1 - 4, 2020
São Luís, Brazil

Acceptance Rates

Overall Acceptance Rate 35 of 99 submissions, 35%

