DOI: 10.1145/2538862.2538896

Increasing the effectiveness of automated assessment by increasing marking granularity and feedback units

Published: 05 March 2014

Abstract

Computer-based assessment is a useful tool for handling large classes and is extensively used in the automated assessment of student programming assignments in Computer Science. The forms this assessment takes, however, vary widely, from a simple acknowledgement of submission to a detailed analysis of output, structure and code. This study focuses on output analysis of submitted student assignment code and the degree to which changes in automated feedback influence student marks and persistence in submission. Data was collected over a four-year period across 22 courses, but this paper focuses on one course. Assignments were grouped by the number of distinct units of automated feedback delivered per assignment, to investigate whether students changed their submission behaviour or performance as the set of marks a student could achieve changed. We found that pre-deadline results improved as the number of feedback units increased, and that post-deadline activity also increased as more feedback units became available.
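
To make the mechanism concrete: the abstract describes output analysis in which marks are split across several independently reported feedback units. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' system; the unit names, test data, equal per-unit weighting, and the run_submission helper are all invented for the example. A submission is run against each test case, the tests are partitioned into named units, and the student receives one line of feedback and a partial mark per unit rather than a single pass/fail result.

    import subprocess

    # Hypothetical sketch of output-based automated assessment with
    # multiple feedback units. The test suite is partitioned into
    # named units; each unit is marked independently and reported
    # separately. None of this is taken from the paper's system.
    FEEDBACK_UNITS = {
        # unit name -> list of (stdin, expected stdout) test cases
        "handles empty input": [("", "")],
        "sums two numbers":    [("1 2\n", "3\n"), ("10 -4\n", "6\n")],
        "rejects bad input":   [("one two\n", "error\n")],
    }

    def run_submission(executable, stdin_text, timeout=5):
        """Run the student's program on one test input; return its stdout."""
        result = subprocess.run(
            [executable], input=stdin_text,
            capture_output=True, text=True, timeout=timeout,
        )
        return result.stdout

    def mark(executable):
        """Return (mark, feedback): one feedback line and equal weight per unit."""
        feedback, earned = [], 0.0
        per_unit = 100.0 / len(FEEDBACK_UNITS)
        for unit, cases in FEEDBACK_UNITS.items():
            passed = all(
                run_submission(executable, given) == expected
                for given, expected in cases
            )
            if passed:
                earned += per_unit
            feedback.append(("PASS" if passed else "FAIL") + ": " + unit)
        return round(earned), feedback

With three units, the coarsest scheme (a single unit) can only report 0 or 100, while this version can report 0, 33, 67 or 100 and tells the student which behaviour failed. Increasing the number of units widens the set of achievable marks, which is the granularity effect the study investigates.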



    Published In

    SIGCSE '14: Proceedings of the 45th ACM technical symposium on Computer science education
    March 2014
    800 pages
    ISBN:9781450326056
    DOI:10.1145/2538862
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. automated assessment
    2. feedback
    3. student performance

    Qualifiers

    • Research-article

    Conference

    SIGCSE '14

    Acceptance Rates

    SIGCSE '14 paper acceptance rate: 108 of 274 submissions (39%)
    Overall acceptance rate: 1,595 of 4,542 submissions (35%)



    Article Metrics

    • Downloads (last 12 months): 49
    • Downloads (last 6 weeks): 6
    Reflects downloads up to 19 Dec 2024


    Cited By

    • (2023) Relationship Between Implicit Intelligence Beliefs and Maladaptive Self-Regulation of Learning. ACM Transactions on Computing Education, 23(3), 1-23. DOI: 10.1145/3595187. Online publication date: 20-Jun-2023.
    • (2023) Unit Testing Challenges with Automated Marking. 2023 30th Asia-Pacific Software Engineering Conference (APSEC), 544-548. DOI: 10.1109/APSEC60848.2023.00067. Online publication date: 4-Dec-2023.
    • (2022) Automated Code Assessment for Education: Review, Classification and Perspectives on Techniques and Tools. Software, 1(1), 3-30. DOI: 10.3390/software1010002. Online publication date: 8-Feb-2022.
    • (2022) Adaptive Assessment and Content Recommendation in Online Programming Courses: On the Use of Elo-rating. ACM Transactions on Computing Education, 22(3), 1-27. DOI: 10.1145/3511886. Online publication date: 9-Jun-2022.
    • (2022) Write a line. Proceedings of the ACM/IEEE 44th International Conference on Software Engineering: Software Engineering Education and Training, 265-276. DOI: 10.1145/3510456.3514159. Online publication date: 21-May-2022.
    • (2022) Metacognition and Self-Regulation in Programming Education: Theories and Exemplars of Use. ACM Transactions on Computing Education, 22(4), 1-31. DOI: 10.1145/3487050. Online publication date: 15-Sep-2022.
    • (2022) LARVA: Learning Analytics Recollection and Visualization Agents. 2022 International Symposium on Computers in Education (SIIE), 1-6. DOI: 10.1109/SIIE56031.2022.9982310. Online publication date: 17-Nov-2022.
    • (2022) Write a Line: Tests with Answer Templates and String Completion Hints for Self-Learning in a CS1 Course. 2022 IEEE/ACM 44th International Conference on Software Engineering: Software Engineering Education and Training (ICSE-SEET), 265-276. DOI: 10.1109/ICSE-SEET55299.2022.9794157. Online publication date: May-2022.
    • (2022) Efficient Structural Analysis of Source Code for Large Scale Applications in Education. 2022 IEEE Global Engineering Education Conference (EDUCON), 24-30. DOI: 10.1109/EDUCON52537.2022.9766748. Online publication date: 28-Mar-2022.
    • (2022) Automated Assessment in Computer Science: A Bibliometric Analysis of the Literature. Learning Technologies and Systems, 122-134. DOI: 10.1007/978-3-031-33023-0_11. Online publication date: 21-Nov-2022.
