Abstract
Defect taxonomies collect and organize the domain knowledge and project experience of experts and are a valuable instrument of system testing for several reasons: they provide a systematic backing for the design of tests, support decisions on the allocation of testing resources, and form a suitable basis for measuring product and test quality. In this paper, we propose a method of system testing based on defect taxonomies and investigate how these can systematically improve the efficiency and effectiveness, i.e., the maturity, of requirements-based testing. The method is evaluated in an industrial case study of two projects at a public health insurance institution, comparing one project with defect taxonomy-supported testing against one without. The empirical data confirm that system testing supported by defect taxonomies (1) reduces the number of test cases and (2) increases the number of identified failures per test case.
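To make the reported effectiveness measure concrete, the following is a minimal sketch, not the authors' actual instrument, of how a defect taxonomy could be modeled and how the "identified failures per test case" figure from the abstract might be computed. All class, function, and test-case names here are hypothetical illustrations.

```python
from dataclasses import dataclass, field

@dataclass
class DefectCategory:
    """One node of a defect taxonomy, e.g. 'incorrect boundary handling'."""
    name: str
    test_cases: list = field(default_factory=list)  # tests designed to target this category
    failures_found: int = 0                         # failures those tests identified

def failures_per_test_case(categories):
    """Effectiveness measure from the abstract: identified failures per test case."""
    total_tests = sum(len(c.test_cases) for c in categories)
    total_failures = sum(c.failures_found for c in categories)
    return total_failures / total_tests if total_tests else 0.0

# Hypothetical figures only: a taxonomy-supported design concentrates
# fewer test cases on the defect categories judged most critical.
taxonomy_supported = [
    DefectCategory("boundary handling", ["TC-01", "TC-02"], 3),
    DefectCategory("missing input validation", ["TC-03"], 2),
]
print(f"{failures_per_test_case(taxonomy_supported):.2f} failures per test case")
```

Under these assumed numbers, taxonomy-supported testing yields about 1.67 failures per test case; comparing this figure across the two projects is the kind of measurement the case study reports.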
Copyright information
© 2013 Springer-Verlag Berlin Heidelberg
Cite this paper
Felderer, M., Beer, A. (2013). Using Defect Taxonomies to Improve the Maturity of the System Test Process: Results from an Industrial Case Study. In: Winkler, D., Biffl, S., Bergsmann, J. (eds) Software Quality. Increasing Value in Software and Systems Development. SWQD 2013. Lecture Notes in Business Information Processing, vol 133. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-35702-2_9
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-35701-5
Online ISBN: 978-3-642-35702-2