This study analyzed student responses to an examination, after the students had completed one semester of instruction in programming. The performance of students on code tracing tasks correlated with their performance on code writing tasks. A correlation was also found between performance on "explain in plain English" tasks and code writing. A stepwise regression, with performance on code writing as the dependent variable, was used to construct a path diagram. The diagram suggests the possibility of a hierarchy of programming-related tasks. Knowledge of programming constructs forms the bottom of the hierarchy, with "explain in English", Parson's puzzles, and the tracing of iterative code forming one or more intermediate levels in the hierarchy.
References
Adelson, B. When novices surpass experts: The difficulty of a task may increase with expertise. Journal of Experimental Psychology: Learning, Memory, and Cognition, 10, 3 (1984), 483--495.
Anderson, L. W., Krathwohl, D. R., Airasian, P. W., Cruikshank, K. A., Mayer, R. E., Pintrich, P. R., Raths, J., & Wittrock, M. C. (eds). (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom's taxonomy of educational objectives. New York: Addison Wesley Longman Inc.
Bera, A., Jarque, C. (1980), "Efficient tests for normality, homoscedasticity and serial independence of regression residuals". Economics Letters 6 (3): 255--259.
Biggs, J. B., & Collis, K. F. (1982). Evaluating the quality of learning: The SOLO taxonomy (Structure of the Observed Learning Outcome). New York: Academic Press.
Lister, R. On blooming first year programming and its blooming assessment. Proceedings of the Australasian Conference on Computer Science Education. ACM Press, 2000, 158--162.
Lister, R., Simon, B., Thompson, E., Whalley, J. and Prasad, C. Not seeing the forest for the trees: novice programmers and the SOLO taxonomy. ACM SIGCSE Bulletin, 38(3), 2006, 118--122.
Parsons, D. and Haden, P. (2006). Parson's Programming Puzzles: A Fun and Effective Learning Tool for First Programming Courses. In Proc. Eighth Australasian Computing Education Conference (ACE2006), Hobart, Australia. CRPIT, 52. Tolhurst, D. and Mann, S., Eds., ACS. 157--163.
Philpott, A., Robbins, P., and Whalley, J. Accessing The Steps on the Road to Relational Thinking. Proceedings of the 20th Annual NACCQ. Mann, S. and Bridgeman, N. (eds), Nelson, NZ, July 8-11, 2007, 286.
Rasch, G. (1960/1980). Probabilistic models for some intelligence and attainment tests. (Copenhagen, Danish Institute for Educational Research), expanded edition (1980) with foreword and afterword by B.D. Wright. Chicago: The University of Chicago Press.
Soloway, E., Ehrlich, K., Bonar, J., and Greenspan, J. (1983) What do novices know about programming? In B. Shneiderman and A. Badre, editors, Directions in Human-Computer Interactions, 27--53. Ablex, Inc., Norwood, NJ.
Thompson, E., Whalley, J., Lister, R., Simon, B. (2006) Code Classification as a Learning and Assessment Exercise for Novice Programmers. Proceedings of the 19th Annual Conference of the National Advisory Committee on Computing Qualifications, NACCQ, Wellington, New Zealand, July 7-10. pp. 291--298. http://bitweb.tekotago.ac.nz/staticdata/allpapers/2006/papers/291.pdf
Thompson, E., Luxton-Reilly, A., Whalley, J., Hu, M., & Robbins, P. (2008). Bloom's taxonomy for CS assessment. Proceedings of the Tenth Australasian Computing Education Conference (ACE2008), Wollongong, Australia, January 2008.
Whalley, J. L., Lister, R., Thompson, E., Clear, T., Robbins, P., Kumar, P. K. A., & Prasad, C. (2006). An Australasian Study of Reading and Comprehension Skills in Novice Programmers, using the Bloom and SOLO Taxonomies. Proceedings of the Eighth Australasian Computing Education Conference (ACE2006) (pp. 243--252). Hobart, Australia: CRPIT.
Whalley, J., Clear, T., and Lister, R. (2007), The Many Ways of the BRACElet Project. Bulletin of Applied Computing and Information Technology (BACIT) Vol. 5, Issue 1. ISSN 1176-4120. http://www.naccq.co.nz/bacit/0501/2007Whalley_BRACELET_Ways.htm
Whalley, J. and Robbins, P. (2007) Report on the Fourth BRACElet Workshop. Bulletin of Applied Computing and Information Technology (BACIT) Vol. 5, Issue 1. ISSN 1176-4120. http://www.naccq.ac.nz/bacit/0501/2007Whalley_BRACELET_Workshop.htm
Wiedenbeck, S., Fix, V. & Scholtz, J. Characteristics of the mental representations of novice and expert programmers: An empirical study. International Journal of Man-Machine Studies, 39 (1993) 793--812.
Do students who complete an introductory programming course really know how to program? Most computer science instructors have probably wondered about this at one time or another. The authors of this paper do not claim to answer the question, but they have taken one more step in discussing this issue, by continuing a series of investigations into a possible hierarchy of programming-related skills: filling in missing or misplaced statements in a program fragment, code tracing, describing in English what a piece of code does (either line-by-line or an overall description of the computation), and so on.
The three specific questions addressed in this study are: Is skill in tracing associated with the ability to write a program? Is skill in reading a program (and describing what it does) associated with the ability to write a program? Is student performance on the exam consistent with a hierarchy of programming-related skills?
Students were given an exam at the end of an introductory programming course in Java. About half the students (38 out of 78) agreed to participate in the experiment. (The authors note that this sample seemed to reflect the overall grade distribution of the larger group.) The questions were classified into various categories, for example, basic Java constructs, sequencing of instructions, tracing (both iterative and noniterative), explaining what code does, writing new code, and handling exceptions. The student grades on these questions were analyzed in a variety of ways, starting with descriptive statistics and then proceeding to "a more sophisticated statistical analysis, which examines the relationships between the exam questions, and which provides evidence (for or against) a hierarchy of programming-related skills."
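To give a concrete sense of what a forward stepwise regression does here, the sketch below is a generic illustration, not the authors' actual procedure or data: it greedily adds whichever predictor most improves the R² of an ordinary least-squares fit, run on synthetic scores whose variable names merely echo the study's task categories.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit of y on the columns of X (with intercept)."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1 - resid @ resid / ((y - y.mean()) ** 2).sum()

def forward_stepwise(X, y, names, min_gain=0.01):
    """Greedily add the predictor that most improves R^2,
    stopping when no candidate improves it by at least min_gain."""
    selected, remaining, best_r2 = [], list(range(X.shape[1])), 0.0
    while remaining:
        r2, j = max((r_squared(X[:, selected + [j]], y), j) for j in remaining)
        if r2 - best_r2 < min_gain:
            break
        selected.append(j)
        remaining.remove(j)
        best_r2 = r2
    return [names[j] for j in selected], best_r2

# Synthetic data: "writing" driven mostly by tracing and explaining;
# "exceptions" is unrelated, echoing the study's finding for that topic.
rng = np.random.default_rng(0)
n = 38  # same sample size as the study's participant group
tracing = rng.normal(size=n)
explaining = 0.5 * tracing + rng.normal(scale=0.8, size=n)
exceptions = rng.normal(size=n)
writing = 0.7 * tracing + 0.4 * explaining + rng.normal(scale=0.5, size=n)

X = np.column_stack([tracing, explaining, exceptions])
order, r2 = forward_stepwise(X, writing, ["tracing", "explaining", "exceptions"])
print(order, round(r2, 2))
```

Real stepwise procedures typically use F-tests or information criteria rather than a raw R² threshold, and the path diagram in the paper is built from the resulting regression relationships rather than from the selection order alone.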
A few words of disclaimer: I am writing this review as a long-time (over 35 years) instructor in a variety of languages, including PL/I, Pascal, C, and C++, not as a statistician. I know a reasonable amount about introductory programming courses, but relatively little about the statistical methods that are used in the paper. In particular, much of the detailed analysis of the data presented here is beyond me, for example: "A degree of multicollinearity is to be expected in any valid assessment instrument, but has the consequence that estimates of the relative contribution of each predictor to a criterion have an increased confidence interval, even though the overall regression remains stable."
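For readers in the same position, the multicollinearity point in that quotation can be unpacked numerically. The snippet below is generic numpy code, not anything from the paper: the variance inflation factor (VIF) of a predictor is 1/(1 - R²) from regressing that predictor on the other predictors, so correlated predictors get large VIFs, which is exactly the "increased confidence interval" on each predictor's estimated contribution.

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X:
    VIF_j = 1 / (1 - R^2_j), where R^2_j is from regressing
    column j on the remaining columns (plus an intercept)."""
    n, k = X.shape
    out = []
    for j in range(k):
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        resid = X[:, j] - others @ beta
        r2 = 1 - resid @ resid / ((X[:, j] - X[:, j].mean()) ** 2).sum()
        out.append(1 / (1 - r2))
    return out

rng = np.random.default_rng(1)
a = rng.normal(size=100)
b = a + 0.3 * rng.normal(size=100)  # strongly correlated with a
c = rng.normal(size=100)            # independent of both
print([round(v, 1) for v in vif(np.column_stack([a, b, c]))])
```

A common rule of thumb treats VIF above roughly 5 or 10 as a sign of problematic multicollinearity; here the two correlated columns show large VIFs while the independent one stays near 1.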
The authors found "strong support for an association between code tracing and code writing skills, particularly when the tracing involved loops." They found somewhat weaker support for an association between code reading and code writing skills. As for the hierarchy of skills, they identify elementary tracing skills and sequencing at the lowest level, followed by more complex (iterative) tracing and explaining, followed by writing. The topic of exceptions seemed to be unconnected to the other items. On the one hand, these conclusions seem obvious to anyone who has taught introductory programming courses. On the other hand, it is nice to get backing even for obvious points.
Of course, there are a number of questions that can be raised about this study. (It is important to note that the authors themselves point out a number of potential problems with their analysis.) In particular, the results are based on one exam, given to one group of students, and the grading of parts of the exam may be somewhat subjective. I have noted over the course of many years that there is a variation from class to class, from term to term, and from student to student. How much time was spent in class emphasizing tracing versus other topics? How much did the students practice before the exam? Did the students spend more time on some of the earlier exam questions rather than the later ones? Did the overall reading and writing ability of the students play a role in answering some of the questions?
There are many skills associated with learning how to program, and for most students there is a natural progression of these skills, from tracing to describing to coming up with a solution to a brand new problem. However, this hierarchy is not strict for all students; a small number of students can write a program, but are very poor at tracing or explaining what another program does.
To sum up, this paper performs a significant service by providing evidence for a number of important points about how students learn how to program, no matter how obvious some of these points may seem.
Online Computing Reviews Service
Cited By
Jin C, Rinard M, Salakhutdinov R, Kolter Z, Heller K, Weller A, Oliver N, Scarlett J, Berkenkamp F (2024). Emergent representations of program semantics in language models trained on programs. Proceedings of the 41st International Conference on Machine Learning, 22160--22184. DOI: 10.5555/3692070.3692961. Online publication date: 21-Jul-2024.
Poitras E, Charles Crane B, Dempsey D, Siegel A, Farion C (2024). Programming Language Learning in K-12 Education. Empowering STEM Educators With Digital Tools, ch. 10, 227--260. DOI: 10.4018/979-8-3693-9806-7.ch010. Online publication date: 1-Nov-2024.
Veldthuis M, Hermans F, Hermans F, Bohrer R (2024). A Word about Programming: Applying a Natural Language Vocabulary Acquisition Model to Programming Education. Proceedings of the 2024 ACM SIGPLAN International Symposium on SPLASH-E, 56--65. DOI: 10.1145/3689493.3689985. Online publication date: 17-Oct-2024.
Feldman M, Anderson C (2024). Non-Expert Programmers in the Generative AI Future. Proceedings of the 3rd Annual Meeting of the Symposium on Human-Computer Interaction for Work, 1--19. DOI: 10.1145/3663384.3663393. Online publication date: 25-Jun-2024.
Smith D, Denny P, Fowler M, Joyner D, Kim M, Wang X, Xia M (2024). Prompting for Comprehension: Exploring the Intersection of Explain in Plain English Questions and Prompt Writing. Proceedings of the Eleventh ACM Conference on Learning @ Scale, 39--50. DOI: 10.1145/3657604.3662039. Online publication date: 9-Jul-2024.
Hassan M, Zilles C, Dorodchi M, Zhange M, Cooper S (2024). Evaluating Algorithm Visualizations, Debuggers, and Execution Toward Helping Students Understand Code. Proceedings of the 2024 ACM Virtual Global Computing Education Conference V. 2, 320--321. DOI: 10.1145/3649409.3691095. Online publication date: 5-Dec-2024.
Smith D, Zilles C, Monga M, Lonati V, Barendsen E, Sheard J, Paterson J (2024). Code Generation Based Grading: Evaluating an Auto-grading Mechanism for "Explain-in-Plain-English" Questions. Proceedings of the 2024 Conference on Innovation and Technology in Computer Science Education V. 1, 171--177. DOI: 10.1145/3649217.3653582. Online publication date: 3-Jul-2024.
Graf O, Thorgeirsson S, Su Z, Monga M, Lonati V, Barendsen E, Sheard J, Paterson J (2024). Assessing Live Programming for Program Comprehension. Proceedings of the 2024 Conference on Innovation and Technology in Computer Science Education V. 1, 520--526. DOI: 10.1145/3649217.3653547. Online publication date: 3-Jul-2024.
de Oliveira Neto F, Dobslaw F, Roychoudhury A, Paiva A, Abreu R, Storey M, Gama K, Siegmund J (2024). Building Collaborative Learning: Exploring Social Annotation in Introductory Programming. Proceedings of the 46th International Conference on Software Engineering: Software Engineering Education and Training, 12--21. DOI: 10.1145/3639474.3640063. Online publication date: 14-Apr-2024.
Lehtinen T, Koutcheme C, Hellas A, Roychoudhury A, Paiva A, Abreu R, Storey M, Gama K, Siegmund J (2024). Let's Ask AI About Their Programs: Exploring ChatGPT's Answers To Program Comprehension Questions. Proceedings of the 46th International Conference on Software Engineering: Software Engineering Education and Training, 221--232. DOI: 10.1145/3639474.3640058. Online publication date: 14-Apr-2024.