DOI: 10.1145/3511430.3511433
Research Article · Open Access

Human-based Test Design versus Automated Test Generation: A Literature Review and Meta-Analysis

Published: 24 February 2022

Abstract

Automated test generation has been proposed as a way to create test cases with less effort. While much progress has been made, it remains a challenge to automatically generate test suites that are both strong and small, and that engineers find relevant. Moreover, how these automated approaches compare with, or complement, manually written test cases is still an open research question. In light of the potential benefits of automated test generation in practice, its long history, and the apparent lack of summative evidence supporting its use, the present study systematically reviews the body of peer-reviewed publications comparing automated test generation with manual test design performed by humans. We conducted a literature review and meta-analysis, collecting data that compare manually written tests with automatically generated ones in terms of test efficiency and effectiveness. The overall results of the literature review suggest that automated test generation outperforms manual testing in testing time, the number of tests created, and the code coverage achieved. Nevertheless, most studies report that manually written tests detect more faults (both injected and naturally occurring), are more readable, and catch more specific bugs than automatically generated ones. Only a few studies report the statistics (e.g., effect sizes) required for a proper meta-analysis; the comparison between automated test generation and manual testing therefore remains inconclusive for lack of sufficient statistical data and power. Our meta-analysis does, however, suggest that both manual test design and automated test generation clearly outperform random testing on all metrics considered.
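
For readers unfamiliar with the "specific statistics" the abstract says most primary studies fail to report, the following Python sketch illustrates the standard machinery: a standardized mean difference (Hedges' g) computed per study from group means, standard deviations, and sample sizes, then pooled under a DerSimonian-Laird random-effects model. This is a minimal sketch of the textbook formulas, not code from the paper; the function names and the example numbers are purely illustrative.

    import math

    def hedges_g(m1, s1, n1, m2, s2, n2):
        """Standardized mean difference (Hedges' g) and its approximate variance."""
        s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
        j = 1 - 3 / (4 * (n1 + n2) - 9)               # small-sample correction factor
        g = j * (m1 - m2) / s_pooled
        var_g = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
        return g, var_g

    def pool_random_effects(effects, variances):
        """DerSimonian-Laird random-effects pooling of independent effect sizes."""
        w = [1.0 / v for v in variances]
        y_fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
        q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, effects))
        c = sum(w) - sum(wi**2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
        w_re = [1.0 / (v + tau2) for v in variances]
        pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
        se = math.sqrt(1.0 / sum(w_re))
        return pooled, se, tau2

    # Hypothetical coverage comparison across three studies (numbers are made up):
    # each pair is (mean, sd, n) for the automated group and the manual group.
    studies = [((78.0, 9.0, 20), (71.0, 10.0, 20)),
               ((64.0, 12.0, 15), (60.0, 11.0, 15)),
               ((82.0, 7.0, 30), (79.0, 8.0, 30))]
    es = [hedges_g(*auto, *manual) for auto, manual in studies]
    pooled, se, tau2 = pool_random_effects([g for g, _ in es], [v for _, v in es])
    print(f"pooled g = {pooled:.2f} (SE {se:.2f}), tau^2 = {tau2:.3f}")

With such per-study numbers reported, comparisons like those in the reviewed papers could be pooled formally; without them, only narrative synthesis is possible, which is the gap the abstract points to.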

        Information & Contributors

        Information

Published In

ISEC '22: Proceedings of the 15th Innovations in Software Engineering Conference, February 2022, 235 pages.
ISBN: 978-1-4503-9618-9
DOI: 10.1145/3511430

Publisher

Association for Computing Machinery, New York, NY, United States

Acceptance Rates

Overall acceptance rate: 76 of 315 submissions (24%)
