DOI: 10.1145/2899415.2899422
Research article · Open access

Towards a Systematic Review of Automated Feedback Generation for Programming Exercises

Published: 11 July 2016

Abstract

Formative feedback, aimed at helping students to improve their work, is an important factor in learning. Many tools that offer programming exercises provide automated feedback on student solutions. We are performing a systematic literature review to find out what kind of feedback is provided, which techniques are used to generate the feedback, how adaptable the feedback is, and how these tools are evaluated. We have designed a labelling scheme to classify the tools, and use Narciss' feedback content categories to classify feedback messages. We report on the results of the first iteration of our search, in which we coded 69 tools. We have found that tools seldom give feedback on fixing problems and taking the next step, and that teachers cannot easily adapt tools to their own needs.
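The coding step described above can be illustrated with a small sketch. Narciss (2008) distinguishes simple feedback components (knowledge of result, knowledge of the correct response) from elaborated ones (knowledge about task constraints, concepts, mistakes, how to proceed, and meta-cognition). The category names below follow that scheme; the example messages and their labels are hypothetical illustrations, not data from the review itself:

```python
from enum import Enum

# Feedback content categories after Narciss (2008).
class FeedbackCategory(Enum):
    KR = "knowledge of result/performance"
    KCR = "knowledge of the correct response"
    KTC = "knowledge about task constraints"
    KC = "knowledge about concepts"
    KM = "knowledge about mistakes"
    KH = "knowledge about how to proceed"
    KMC = "knowledge about meta-cognition"

# Hypothetical messages a feedback tool might emit, each labelled with
# the category a coder would assign in a review like this one.
labelled_messages = [
    ("Test 3 of 5 failed.", FeedbackCategory.KR),
    ("The expected output was 'hello'.", FeedbackCategory.KCR),
    ("Your loop never terminates when n == 0.", FeedbackCategory.KM),
    ("Try adding a base case before the recursive call.", FeedbackCategory.KH),
]

def categories_covered(messages):
    """Return the set of feedback categories a tool's messages cover."""
    return {category for _, category in messages}
```

Coding each tool's messages this way makes gaps visible: a tool whose message set never covers KM or KH, as in the finding above, reports failures without helping students fix problems or take the next step.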

References

[1] K. M. Ala-Mutka. A survey of automated assessment approaches for programming assignments. Computer Science Education, 15(2):83--102, 2005.
[2] D. Boud and E. Molloy, editors. Feedback in higher and professional education: understanding it and doing it well. 2012.
[3] J. C. Caiza and J. M. Del Alamo. Programming assignments automatic grading: review of tools and implementations. In INTED, pages 5691--5700, 2013.
[4] F. P. Deek, K.-W. Ho, and H. Ramadhan. A critical analysis and evaluation of web-based environments for program development. The Internet and Higher Education, 3(4):223--269, 2000.
[5] F. P. Deek and J. A. McHugh. A survey and critical analysis of tools for learning programming. Computer Science Education, 8(2):130--178, 1998.
[6] C. Douce, D. Livingstone, and J. Orwell. Automatic test-based assessment of programming: A review. Journal on Educational Resources in Computing (JERIC), 5(3), 2005.
[7] M. Gómez-Albarrán. The teaching and learning of programming: A survey of supporting software tools. The Computer Journal, 48(2):130--144, 2005.
[8] P. Gross and K. Powers. Evaluating assessments of novice programming environments. In ICER, pages 99--110, 2005.
[9] M. Guzdial. Programming environments for novices. In Computer Science Education Research, pages 127--154. 2004.
[10] P. Ihantola, T. Ahoniemi, V. Karavirta, and O. Seppälä. Review of recent systems for automatic assessment of programming assignments. In Koli Calling, pages 86--93, 2010.
[11] C. Kelleher and R. Pausch. Lowering the barriers to programming: A taxonomy of programming environments and languages for novice programmers. ACM Computing Surveys, 37(2):83--137, 2005.
[12] H. Keuning, J. Jeuring, and B. Heeren. Towards a systematic review of automated feedback generation for programming exercises -- extended version. Technical Report UU-CS-2016-001, 2016.
[13] N.-T. Le and N. Pinkwart. Towards a classification for programming exercises. In Workshop on AI-supported Education for Computer Science, pages 51--60, 2014.
[14] N.-T. Le, S. Strickroth, S. Gross, and N. Pinkwart. A review of AI-supported tutoring approaches for learning programming. In Advanced Computational Methods for Knowledge Engineering, pages 267--279. 2013.
[15] D. C. Merrill, B. J. Reiser, M. Ranney, and J. G. Trafton. Effective tutoring techniques: A comparison of human tutors and intelligent tutoring systems. Journal of the Learning Sciences, 2(3):277--305, 1992.
[16] S. Narciss. Feedback strategies for interactive learning tasks. In Handbook of Research on Educational Communications and Technology, pages 125--144, 2008.
[17] J. C. Nesbit, L. Liu, Q. Liu, and O. O. Adesope. Work in progress: Intelligent tutoring systems in computer science and software engineering. In ASEE Annual Conference & Exposition, pages 1--12, 2015.
[18] A. Pears, S. Seidman, L. Malmi, L. Mannila, E. Adams, J. Bennedsen, M. Devlin, and J. Paterson. A survey of literature on the teaching of introductory programming. SIGCSE Bull., 39(4):204--223, 2007.
[19] N. Pillay. Developing intelligent programming tutors for novice programmers. SIGCSE Bull., 35(2):78--82, 2003.
[20] K. A. Rahman and M. J. Nordin. A review on the static analysis approach in the automated programming assessment systems. In National Conference on Programming, 2007.
[21] R. Romli, S. Sulaiman, and K. Z. Zamli. Automatic programming assessment and test data generation: a review on its approaches. In Int. Symp. in Information Technology, pages 1186--1192, 2010.
[22] V. J. Shute. Focus on formative feedback. Review of Educational Research, 78(1):153--189, 2008.
[23] M. Striewe and M. Goedicke. A review of static analysis approaches for programming exercises. In Computer Assisted Assessment. Research into E-Assessment, pages 100--113. 2014.
[24] M. Ulloa. Teaching and learning computer programming: a survey of student problems, teaching methods, and automated instructional tools. SIGCSE Bull., 12(2):48--64, 1980.


Published In

ITiCSE '16: Proceedings of the 2016 ACM Conference on Innovation and Technology in Computer Science Education
July 2016, 394 pages
ISBN: 9781450342315
DOI: 10.1145/2899415
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

1. automated feedback
2. learning programming
3. programming tools
4. systematic literature review

Conference

ITiCSE '16

Acceptance Rates

ITiCSE '16 Paper Acceptance Rate: 56 of 147 submissions, 38%
Overall Acceptance Rate: 552 of 1,613 submissions, 34%

Article Metrics

• Downloads (last 12 months): 454
• Downloads (last 6 weeks): 63

Reflects downloads up to 03 Jan 2025.

Cited By

• (2024) Assessment Automation of Complex Student Programming Assignments. Education Sciences, 14(1):54. DOI: 10.3390/educsci14010054. Online publication date: 1-Jan-2024.
• (2024) PyDex: Repairing Bugs in Introductory Python Assignments using LLMs. Proceedings of the ACM on Programming Languages, 8(OOPSLA1):1100--1124. DOI: 10.1145/3649850. Online publication date: 29-Apr-2024.
• (2024) Open Source Language Models Can Provide Feedback: Evaluating LLMs' Ability to Help Students Using GPT-4-As-A-Judge. Proceedings of the 2024 on Innovation and Technology in Computer Science Education V. 1, pages 52--58. DOI: 10.1145/3649217.3653612. Online publication date: 3-Jul-2024.
• (2024) AI-Tutoring in Software Engineering Education. Proceedings of the 46th International Conference on Software Engineering: Software Engineering Education and Training, pages 309--319. DOI: 10.1145/3639474.3640061. Online publication date: 14-Apr-2024.
• (2024) CAGE: A Tool for Code Assessment and Grading. 2024 9th International Conference on Computer Science and Engineering (UBMK), pages 1070--1075. DOI: 10.1109/UBMK63289.2024.10773535. Online publication date: 26-Oct-2024.
• (2024) Impact of an Interactive Learning Platform on Programming Student Outcomes: A Case Study of Teloprogramo. 2024 43rd International Conference of the Chilean Computer Science Society (SCCC), pages 1--7. DOI: 10.1109/SCCC63879.2024.10767622. Online publication date: 28-Oct-2024.
• (2024) Practicing Abstraction Skills Through Diagrammatic Reasoning Over CAFÉ 2.0. 2024 IEEE Global Engineering Education Conference (EDUCON), pages 1--10. DOI: 10.1109/EDUCON60312.2024.10578665. Online publication date: 8-May-2024.
• (2024) Project VoLL-KI. KI - Künstliche Intelligenz. DOI: 10.1007/s13218-024-00846-9. Online publication date: 5-May-2024.
• (2024) Conducting Final Programming Exams with Auto-Grading and Code Evaluation Tools. International Joint Conferences, pages 269--278. DOI: 10.1007/978-3-031-75016-8_25. Online publication date: 16-Nov-2024.
• (2024) Towards an Automatic Scoring for Code Assessment in the Universities: A Case Study the Algerian University. 13th International Conference on Information Systems and Advanced Technologies "ICISAT 2023", pages 50--60. DOI: 10.1007/978-3-031-60591-8_5. Online publication date: 28-Aug-2024.
