Abstract
In the development of many safety-critical systems, test cases are still created on the basis of experience rather than systematic methods. As a consequence, many redundant test cases are created and many aspects remain untested. One of the most important questions in testing dependable systems is: which are the right test techniques to obtain a test set that will detect critical errors in a complex system? In this paper, we provide an overview of the state of the practice in designing test cases for dependable event-based systems regulated by the IEC 61508 and DO-178B standards. For example, the IEC 61508 standard stipulates model-based testing and systematic test-case design and generation techniques such as transition-based testing and equivalence-class partitioning for software verification. However, it often remains unclear in which situations these techniques should be applied and what information is needed to select the right technique to obtain the best set of test cases. We propose an approach that selects appropriate test techniques by considering issues such as specification techniques, failure taxonomies and quality risks. We illustrate our findings with a case study of an interlocking system for Siemens transportation systems.
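As a minimal, hypothetical sketch of one technique named above, the following shows equivalence-class partitioning with boundary values; the speed-limit check and its admissible range are illustrative assumptions and are not taken from the Siemens case study.

```python
# Hypothetical example: equivalence-class partitioning with boundary values
# for a simple admissibility check (range 0..160 km/h is an assumption).

def is_speed_admissible(speed_kmh: int) -> bool:
    """Admissible speeds are assumed to lie in the closed range 0..160 km/h."""
    return 0 <= speed_kmh <= 160

# One representative per equivalence class, plus the class boundaries:
#   invalid-low (< 0), valid (0..160), invalid-high (> 160)
test_cases = {
    -1: False,    # boundary of the invalid-low class
    0: True,      # lower boundary of the valid class
    80: True,     # representative of the valid class
    160: True,    # upper boundary of the valid class
    161: False,   # boundary of the invalid-high class
}

for speed, expected in test_cases.items():
    assert is_speed_admissible(speed) == expected, f"unexpected result for {speed}"
print("all equivalence-class/boundary tests passed")
```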
Acknowledgments
The research work reported here was partially conducted within the Softnet Austria competence network (www.soft-net.at) and was funded by the Austrian Federal Ministry of Economics (bm:wa), the province of Styria, the Steirische Wirtschaftsförderungsgesellschaft mbH (SFG) and the city of Vienna within the scope of the Centre for Innovation and Technology (ZIT).
Copyright information
© 2011 Springer-Verlag London Limited
Cite this paper
Beer, A., Peischl, B. (2011). Testing of Safety-Critical Systems – a Structural Approach to Test Case Design. In: Dale, C., Anderson, T. (eds) Advances in Systems Safety. Springer, London. https://doi.org/10.1007/978-0-85729-133-2_12
DOI: https://doi.org/10.1007/978-0-85729-133-2_12
Publisher Name: Springer, London
Print ISBN: 978-0-85729-132-5
Online ISBN: 978-0-85729-133-2