DOI: 10.1145/3059009.3059033
research-article

Examining a Student-Generated Question Activity Using Random Topic Assignment

Published: 28 June 2017

Abstract

Students and instructors expend significant effort, respectively, preparing to be examined and preparing students for exams. This paper investigates question authoring, where students create practice questions as a preparation activity prior to an exam, in an introductory programming context. The key contribution of this study as compared to previous work is an improvement to the design of the experiment. Students were randomly assigned the topics that their questions should target, removing a selection bias that has been a limitation of earlier work. We conduct a large-scale between-subjects experiment (n = 700) and find that students exhibit superior performance on exam questions that relate to the topics they were assigned when compared to those students preparing questions on other assigned topics.



    Published In

    ITiCSE '17: Proceedings of the 2017 ACM Conference on Innovation and Technology in Computer Science Education
    June 2017
    412 pages
    ISBN:9781450347044
    DOI:10.1145/3059009

Publisher

Association for Computing Machinery, New York, NY, United States


    Author Tags

    1. question authoring
    2. self-testing
    3. study behaviours

    Qualifiers

    • Research-article


    Acceptance Rates

    ITiCSE '17 Paper Acceptance Rate: 56 of 175 submissions, 32%
    Overall Acceptance Rate: 552 of 1,613 submissions, 34%


    Cited By

    • (2023) Chat Overflow: Artificially Intelligent Models for Computing Education - renAIssance or apocAIypse? Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1, pages 3-4. DOI: 10.1145/3587102.3588773. Online publication date: 29-Jun-2023.
    • (2023) Experiences from Learnersourcing SQL Exercises: Do They Cover Course Topics and Do Students Use Them? Proceedings of the 25th Australasian Computing Education Conference, pages 123-131. DOI: 10.1145/3576123.3576137. Online publication date: 30-Jan-2023.
    • (2023) Crowdsourcing the Evaluation of Multiple-Choice Questions Using Item-Writing Flaws and Bloom's Taxonomy. Proceedings of the Tenth ACM Conference on Learning @ Scale, pages 25-34. DOI: 10.1145/3573051.3593396. Online publication date: 20-Jul-2023.
    • (2023) Reducing Procrastination Without Sacrificing Students' Autonomy Through Optional Weekly Presentations of Student-Generated Content. Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1, pages 151-157. DOI: 10.1145/3545945.3569839. Online publication date: 2-Mar-2023.
    • (2023) Lessons Learned From Four Computing Education Crowdsourcing Systems. IEEE Access, 11:22982-22992. DOI: 10.1109/ACCESS.2023.3253642. Online publication date: 2023.
    • (2022) Experience Report on a Student-Organized AI Course. Proceedings of the 27th ACM Conference on Innovation and Technology in Computer Science Education Vol. 1, pages 103-109. DOI: 10.1145/3502718.3524805. Online publication date: 7-Jul-2022.
    • (2022) Automatic Generation of Programming Exercises and Code Explanations Using Large Language Models. Proceedings of the 2022 ACM Conference on International Computing Education Research - Volume 1, pages 27-43. DOI: 10.1145/3501385.3543957. Online publication date: 3-Aug-2022.
    • (2022) Assessing the Quality of Student-Generated Short Answer Questions Using GPT-3. Educating for a New Future: Making Sense of Technology-Enhanced Learning Adoption, pages 243-257. DOI: 10.1007/978-3-031-16290-9_18. Online publication date: 5-Sep-2022.
    • (2021) Selecting student-authored questions for summative assessments. Research in Learning Technology, 29. DOI: 10.25304/rlt.v29.2517. Online publication date: 3-Feb-2021.
    • (2021) The Impact of Multiple Choice Question Design on Predictions of Performance. Proceedings of the 23rd Australasian Computing Education Conference, pages 66-72. DOI: 10.1145/3441636.3442306. Online publication date: 2-Feb-2021.
