
A Controlled Experiment to Assess the Benefits of Procedure Argument Type Checking

Published: 01 April 1998

Abstract

Type checking is considered an important mechanism for detecting programming errors, especially interface errors. This report describes an experiment to assess the defect-detection capabilities of static, intermodule type checking. The experiment uses ANSI C and Kernighan & Ritchie (K&R) C. The relevant difference is that the ANSI C compiler checks module interfaces (i.e., the parameter lists of calls to external functions), whereas K&R C does not. The experiment employs a counterbalanced design in which each of the 40 subjects, most of them CS PhD students, writes two nontrivial programs that interface with a complex library (Motif). Each subject writes one program in ANSI C and one in K&R C. The input to each compiler run is saved and manually analyzed for defects. Results indicate that delivered ANSI C programs contain significantly fewer interface defects than delivered K&R C programs. Furthermore, after subjects have gained some familiarity with the interface they are using, ANSI C programmers remove defects faster and are more productive (measured in both delivery time and functionality implemented).

References

[1]
V.R. Basili and B.T. Perricone, "Software Errors and Complexity: An Empirical Investigation," Comm. ACM, vol. 27, no. 1, pp. 42-52, Jan. 1984.
[2]
B. Beizer, Software Testing Techniques. Van Nostrand Reinhold, 1990.
[3]
K. Bruce, "Typing in Object-Oriented Languages: Achieving Expressibility and Safety," ACM Computing Surveys, 1998, to appear. See http://www.cs.williams.edu/~kim/.
[4]
L.B. Christensen, Experimental Methodology. Allyn and Bacon, Needham Heights, Mass., sixth edition, 1994.
[5]
C.R. Cook, J.C. Scholtz, and J.C. Spohrer, eds., Empirical Studies of Programmers: Fifth Workshop. Palo Alto, Calif.: Ablex Publishing Corp., Dec. 1993.
[6]
A. Ebrahimi, "Novice Programmer Errors: Language Constructs and Plan Composition," Intl. J. Human-Computer Studies, vol. 41, pp. 457-480, 1994.
[7]
M. Eisenstadt, "Tales of Debugging from the Front Lines," [5], pp. 86-112, 1993.
[8]
P.G. Frankl and S.N. Weiss, "An Experimental Comparison of the Effectiveness of Branch Testing and Data Flow Testing," IEEE Trans. Software Eng., 1993.
[9]
J.D. Gannon, "An Experimental Evaluation of Data Type Conventions," Comm. ACM, 1977.
[10]
R.B. Grady, "Practical Results from Measuring Software Quality," Comm. ACM, vol. 36, no. 11, pp. 62-68, Nov. 1993.
[11]
P. Hudak and M.P. Jones, "Haskell vs. Ada vs. C++ vs. Awk vs. .. An Experiment in Software Prototyping Productivity," technical report, Dept. of Computer Science, Yale Univ., New Haven, Conn., July 1994.
[12]
W. Humphrey, A Discipline for Software Engineering. SEI Series in Software Engineering. Reading, Mass.: Addison-Wesley, 1995.
[13]
M. Nanja and C.R. Cook, "An Analysis of the On-Line Debugging Process," [14], pp. 172-184, 1987.
[14]
G.M. Olson, S. Sheppard, and E. Soloway, eds., "Empirical Studies of Programmers: Second Workshop," Washington, D.C., Ablex Publishing Corp., Dec. 1987.
[15]
L. Prechelt and W.F. Tichy, "A Controlled Experiment Measuring the Impact of Procedure Argument Type Checking on Programmer Productivity," Technical Report CMU/SEI-96-TR-014, Software Engineering Inst., Carnegie Mellon Univ., Pittsburgh, Penn., June 1996.
[16]
B.A. Sheil, "The Psychological Study of Programming," ACM Computing Surveys, 1981.
[17]
E. Soloway, and S. Iyengar, eds., Empirical Studies of Programmers. Norwood, N.J.: Ablex Publishing Corp., June 1986. (The papers of the First Workshop on Empirical Studies of Programmers, Washington D.C.).
[18]
J.G. Spohrer and E. Soloway, "Analyzing the High Frequency Bugs in Novice Programs," [17], pp. 230-251, 1986.
[19]
W. Stacy and J. MacMillian, "Cognitive Bias in Software Engineering," Comm. ACM, vol. 38, no. 6, pp. 57-63, June 1995.
[20]
B. Teasley, L.M. Leventhal, and D.S. Rohlman, "Positive Test Bias in Software Testing by Professionals: What's Right and What's Wrong," Empirical Studies of Programmers: Proc. Fifth Workshop, pp. 206-221. Palo Alto, Calif.: Ablex Publishing Corp., Dec. 1993.
[21]
N. Wirth, "Gedanken zur Software-Explosion," Informatik Spektrum, vol. 17, no. 1, pp. 5-20, Feb. 1994.
[22]
C. Wohlin and P. Runeson, "Certification of Software Components," IEEE Trans. Software Eng., vol. 20, no. 6, pp. 494-499, June 1994.



Reviews

Mihaela Carstea

Type checking is a valuable mechanism for detecting programming errors, especially interface errors. The purpose of this paper is to provide clear evidence about the effects of type checking. It describes a repeatable controlled experiment that confirms some positive effects. First, when applied to interfaces, type checking reduced the number of defects remaining in delivered programs. Second, when programmers used a familiar interface, type checking helped them remove defects more quickly and increased their productivity.

The paper is structured in five sections. The introduction briefly presents the experiment and explains its importance. Section 2 is a short description of two closely related studies; the authors conclude that the costs and benefits of interface type checking have not been studied systematically. Section 3 describes the tasks, the subjects, the experiment setup, and the observed variables. The authors then discuss the internal and external validity of the experiment. Each of the 40 subjects, most of them doctoral students in computer science, wrote two nontrivial programs that interfaced with a complex library. Section 4 is an analysis of the experiment, based on statistical data. The tables and graphs present productivity, defect lifetimes, and defects in delivered programs, along with some subjective information about the experimental subjects gathered through questionnaires. The last section includes conclusions and suggestions for further work. Two appendices present solutions for the two problems used in the experiment. The paper includes a lengthy bibliography.

This research will be helpful to software developers in at least three ways. First, we still lack a useful scientific model of the programming process; understanding the types, frequencies, and circumstances of programmer errors is an important ingredient of such a model. Second, a better understanding of the defect-detection capabilities of type checking may enable programmers to improve them. Finally, there are still many environments in which type checking is missing or incomplete, and confirmation of the positive effects of type checking may help close these gaps.



Published In

IEEE Transactions on Software Engineering  Volume 24, Issue 4
April 1998
80 pages
ISSN:0098-5589

Publisher

IEEE Press


Author Tags

  1. Type checking
  2. controlled experiment
  3. defects
  4. productivity
  5. quality

Qualifiers

  • Research-article


Cited By

  • (2024) "Typed and Confused: Studying the Unexpected Dangers of Gradual Typing," Proc. 39th IEEE/ACM Int'l Conf. Automated Software Engineering, pp. 1858-1870, Oct. 2024. DOI: 10.1145/3691620.3695549
  • (2021) "An Empirical Study on Type Annotations," ACM Trans. Software Engineering and Methodology, vol. 30, no. 2, pp. 1-29, Feb. 2021. DOI: 10.1145/3439775
  • (2018) "Interdisciplinary Programming Language Design," Proc. 2018 ACM SIGPLAN Int'l Symp. New Ideas, New Paradigms, and Reflections on Programming and Software, pp. 133-146, Oct. 2018. DOI: 10.1145/3276954.3276965
  • (2018) "Assessing the Type Annotation Burden," Proc. 33rd ACM/IEEE Int'l Conf. Automated Software Engineering, pp. 190-201, Sep. 2018. DOI: 10.1145/3238147.3238173
  • (2017) "To Type or Not to Type," Proc. 39th Int'l Conf. Software Engineering, pp. 758-769, May 2017. DOI: 10.1109/ICSE.2017.75
  • (2016) "Type Unsoundness in Practice: An Empirical Study of Dart," ACM SIGPLAN Notices, vol. 52, no. 2, pp. 13-24, Nov. 2016. DOI: 10.1145/3093334.2989227
  • (2016) "Inferring Types by Mining Class Usage Frequency from Inline Caches," Proc. 11th Int'l Workshop on Smalltalk Technologies, pp. 1-11, Aug. 2016. DOI: 10.1145/2991041.2991047
  • (2016) "Type Unsoundness in Practice: An Empirical Study of Dart," Proc. 12th Symp. Dynamic Languages, pp. 13-24, Nov. 2016. DOI: 10.1145/2989225.2989227
  • (2016) "Exploring Cheap Type Inference Heuristics in Dynamically Typed Languages," Proc. 2016 ACM Int'l Symp. New Ideas, New Paradigms, and Reflections on Programming and Software, pp. 43-56, Oct. 2016. DOI: 10.1145/2986012.2986017
  • (2015) "An Empirical Investigation of the Effects of Type Systems and Code Completion on API Usability Using TypeScript and JavaScript in MS Visual Studio," ACM SIGPLAN Notices, vol. 51, no. 2, pp. 154-167, Oct. 2015. DOI: 10.1145/2936313.2816720
