
Using error abstraction and classification to improve requirement quality: conclusions from a family of four empirical studies

Published in Empirical Software Engineering

Abstract

Achieving high software quality is a primary concern for software development organizations. Researchers have developed many quality improvement methods that help developers detect faults early in the lifecycle. To address some of the limitations of fault-based quality improvement approaches, this paper describes an approach based on errors (i.e., the sources of the faults). This research extends Lanubile et al.'s error abstraction process by providing a formal requirement error taxonomy to help developers identify both faults and errors. The taxonomy was derived from the software engineering and psychology literature. The error abstraction and classification process and the requirement error taxonomy are validated using a family of four empirical studies. The main conclusions derived from the four studies are: (1) the error abstraction and classification process is an effective approach for identifying faults; (2) the requirement error taxonomy is a useful addition to the error abstraction process; and (3) deriving requirement errors from cognitive psychology research is useful.
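The abstract summarizes the overall workflow: inspectors record concrete faults, abstract each fault back to the human error that caused it, and then classify that error against the requirement error taxonomy. As a purely illustrative aid (not part of the paper), the short Python sketch below models that workflow with hypothetical Fault and Error records and an assumed subset of error class names; the actual taxonomy and its fourteen detailed classes are defined in the article and in Walia and Carver (2009).

from dataclasses import dataclass, field
from typing import List

@dataclass
class Fault:
    """A concrete defect recorded during a requirements inspection."""
    description: str
    location: str

@dataclass
class Error:
    """The human error abstracted from one or more related faults."""
    description: str
    error_class: str            # name of a detailed error class in the taxonomy
    faults: List[Fault] = field(default_factory=list)

# Assumed subset of detailed error class names, for illustration only.
ERROR_CLASSES = {
    "Communication",
    "Domain Knowledge",
    "Specific Application Knowledge",
    "Process Execution",
    "Other Human Cognition",
    "Requirement Elicitation",
    "Requirement Traceability",
    "Requirement Organization",
}

def abstract_and_classify(fault: Fault, error_description: str, error_class: str) -> Error:
    """Abstract a fault to its source error, then classify the error."""
    if error_class not in ERROR_CLASSES:
        raise ValueError(f"unknown error class: {error_class}")
    return Error(description=error_description, error_class=error_class, faults=[fault])

# Example drawn from Appendix A: a missing-functionality fault abstracted to a
# Specific Application Knowledge error.
fault = Fault(
    description="Missing functionality for viewing reservations",
    location="Starkville Theatre System requirements (Study 1)",
)
error = abstract_and_classify(
    fault,
    error_description="Author lacked knowledge about the specific application",
    error_class="Specific Application Knowledge",
)
print(error.error_class, "->", [f.description for f in error.faults])

In the studies themselves, abstraction and classification were performed by human inspectors; the sketch only shows how a fault, its abstracted error, and the error's taxonomy class relate to one another.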



References

  • Basili VR, Green S, Laitenberger O, Lanubile F, Shull F, Sørumgård S, Zelkowitz MV (1996) The empirical investigation of perspective-based reading. Empir Software Eng: An International Journal 1(2):133–164


  • Basili VR, Shull F, Lanubile F (1999) Building knowledge through families of experiments. IEEE Trans Software Eng 25(4):456–473


  • Bland M (2000) An introduction to medical statistics, 3rd edn, Chapter 9. Oxford University Press, New York. ISBN 0192632698


  • Boehm B, Basili VR (2001) Software defect reduction top 10 list. Computer 34(1):135–137


  • Card DN (1998) Learning from our mistakes with defect causal analysis. IEEE Softw 15(1):56–63


  • Card SK, Moran TP, Newell A (1983) The psychology of human-computer interaction. Erlbaum, Hillsdale


  • Carver J (2003) The impact of background and experience on software inspections. PhD Thesis, Department of Computer Science, University of Maryland, College Park, MD


  • Chaar JK, Halliday MJ, Bhandari IS, Chillarege R (1993) In-process evaluation for software inspection and test. IEEE Trans Software Eng 19(11):1055–1070


  • Chillarege R, Bhandari IS, Chaar JK, Halliday MJ, Moebus DS, Ray BK, Wong MY (1992) Orthogonal defect classification-a concept for in-process measurements. IEEE Trans Software Eng 18(11):943–956


  • Endres A, Rombach D (2003) A handbook of software and systems engineering, 1st edn. Pearson Addison Wesley, Harlow


  • Field A (2007) Discovering statistics using SPSS, 2nd edn. SAGE Publications Ltd, London


  • Florac W (1992) Software quality measurement: a framework for counting problems and defects. Technical Report CMU/SEI-92-TR-22, Software Engineering Institute

  • Grady RB (1996) Software failure analysis for high-return process improvement. Hewlett-Packard J 47(4):15–24


  • IEEE Std 610.12-1990 (1990) IEEE standard glossary of software engineering terminology

  • Jacobs J, Moll JV, Krause P, Kusters R, Trienekens J, Brombacher A (2005) Exploring defect causes in products developed by virtual teams. J Inform Software Tech 47(6):399–410


  • Kan SH, Basili VR, Shapiro LN (1994) Software quality: an overview from the perspective of total quality management. IBM Syst J 33(1):4–19


  • Kitchenham B (2004) Procedures for performing systematic reviews. Technical Report TR/SE-0401, Department of Computer Science, Keele University and National ICT Australia Ltd. http://www.elsevier.com/framework_products/promis_misc/inf-systrev.pdf

  • Lanubile F, Shull F, Basili VR (1998) Experimenting with error abstraction in requirements documents. In: Proceedings of the Fifth International Software Metrics Symposium (METRICS 98), pp 114–121

  • Lawrence CP, Kosuke I (2004) Design error classification and knowledge. J Knowl Manag Pract (May)

  • Leszak M, Perry D, Stoll D (2000) A case study in root cause defect analysis. In: Proceedings of the 22nd International Conference on Software Engineering, Limerick, Ireland, pp 428–437

  • Masuck C (2005) Incorporating a fault categorization and analysis process in the software build cycle. J Comput Sci Colleges 20(5):239–248


  • Mays RG, Jones CL, Holloway GJ, Studinski DP (1990) Experiences with defect prevention. IBM Syst J 29(1):4–32


  • Nakashima T, Oyama M, Hisada H, Ishii N (1999) Analysis of software bug causes and its prevention. J Inform Software Tech 41(15):1059–1068


  • Norman DA (1981) Categorization of action slips. Psychol Rev 88:1–15


  • Pfleeger SL, Atlee JM (2006) Software engineering theory and practice, 3rd edn. Prentice Hall, Upper Saddle River


  • Rasmussen J (1982) Human errors: a taxonomy for describing human malfunction in industrial installations. J Occup Accid 4:311–335


  • Rasmussen J (1983) Skills, rules, knowledge: signals, signs and symbols and other distinctions in human performance models. IEEE Trans Syst Man Cybern SMC-13:257–267

  • Reason J (1990) Human error. Cambridge University Press, New York


  • Sakthivel S (1991) A survey of requirements verification techniques. J Inf Technol 6:68–79


  • Seaman CB (1999) Qualitative methods in empirical studies of software engineering. IEEE Trans Softw Eng 25(4):557–572


  • Seaman CB, Basili VR (1997) An empirical study of communication in code inspections. In: Proceedings of the International Conference on Software Engineering, Boston, MA, pp 96–106

  • Sommerville I (2007) Software engineering, 8th edn. Addison Wesley, Harlow


  • Walia GS (2006a) Empirical validation of requirement error abstraction and classification: a multidisciplinary approach. MS Thesis, Department of Computer Science and Engineering, Mississippi State University, Starkville, MS

  • Walia GS, Carver J (2009) A systematic literature review to identify and classify requirement errors. J Inform Software Tech 51(7):1087–1109


  • Walia GS, Carver J, Philip T (2006b) Requirement error abstraction and classification: an empirical study. In: Proceedings of the IEEE International Symposium on Empirical Software Engineering, Brazil. ACM Press, pp 336–345

  • Walia GS, Carver J, Philip T (2007) Requirement error abstraction and classification: a control group replicated study. In: 18th IEEE International Symposium on Software Reliability Engineering, Trollhättan, Sweden


Acknowledgements

We thank the study participants. We also thank Dr. Thomas Philip for providing access to his courses. We acknowledge the Empirical Software Engineering groups at MSU and NDSU for providing useful feedback on the study designs and data analysis. We thank Dr. Gary Bradshaw for his expertise on cognitive psychology. We thank Dr. Edward Allen and Dr. Guilherme Travassos for reviewing early drafts of this paper. We also thank the reviewers for their helpful comments.

Author information


Corresponding author

Correspondence to Jeffrey C. Carver.

Additional information

Editor: Murray Wood

Appendix A

This appendix describes the different errors in each of the fourteen detailed error classes (described in Table 1). A complete description of the requirement error taxonomy (along with examples of errors and faults) has been published in a systematic literature review (Walia and Carver 2009). Tables 14–27 show each error class along with the specific errors that make up that class.

In addition, the following are representative examples of the faults discovered in the studies that were attributed to different error types in the requirement error taxonomy (a short illustrative summary of these mappings follows the list):

  • Missing functionality for viewing reservations in the Starkville Theatre System project (Study 1). This fault was attributed to the Specific Application Knowledge error (Table 17).

  • The wrong placement of a precondition in the use case. This fault was attributed to the Process Execution error (Table 18).

  • Inconsistency between the interface description and other sections of the requirement document. This fault was attributed to the Communication error among team members (Table 14).

  • A vague main success scenario in a use case. This fault was attributed to the Domain Knowledge error (Table 16).

  • Inconsistent information about indexing particular user functionality. This fault was attributed to the Human Cognition error (Table 19).

    (Tables 19–23 list other human cognition errors, inadequate methods of achieving goals and objectives, management errors, requirement elicitation errors, and requirement analysis errors, respectively.)
  • Extraneous functional requirement in the Data Warehouse requirements document. This fault was attributed to the Traceability process error (Table 24).

  • Missing information regarding security requirements. This fault was attributed to the Requirement Elicitation process error (Table 22).

  • Functionality listed in the wrong section. This fault was attributed to the Requirement Organization error (Table 25).

    (Tables 25–27 list requirement organization errors, errors from not using a documentation standard, and specification errors, respectively.)
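As a small illustrative summary (again, not from the paper itself), the Python sketch below collects the representative fault-to-error mappings listed above into a dictionary and groups the faults by error class; the table numbers refer to the taxonomy tables in this appendix, and the key strings are paraphrased fault descriptions.

from collections import defaultdict

# Representative faults from Appendix A, mapped to (error class, taxonomy table).
FAULT_TO_ERROR = {
    "Missing functionality for viewing reservations": ("Specific Application Knowledge", 17),
    "Wrong placement of a precondition in a use case": ("Process Execution", 18),
    "Inconsistent interface description across document sections": ("Communication", 14),
    "Vague main success scenario in a use case": ("Domain Knowledge", 16),
    "Inconsistent information about indexing user functionality": ("Human Cognition", 19),
    "Extraneous functional requirement in the Data Warehouse document": ("Requirement Traceability", 24),
    "Missing information about security requirements": ("Requirement Elicitation", 22),
    "Functionality listed in the wrong section": ("Requirement Organization", 25),
}

# Group the faults by their abstracted error class to see which classes recur.
faults_by_class = defaultdict(list)
for fault, (error_class, table) in FAULT_TO_ERROR.items():
    faults_by_class[error_class].append((fault, table))

for error_class, entries in sorted(faults_by_class.items()):
    print(f"{error_class} ({len(entries)} fault(s)):")
    for fault, table in entries:
        print(f"  - {fault} [Table {table}]")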


About this article

Cite this article

Walia, G.S., Carver, J.C. Using error abstraction and classification to improve requirement quality: conclusions from a family of four empirical studies. Empir Software Eng 18, 625–658 (2013). https://doi.org/10.1007/s10664-012-9202-3
