DOI: 10.1145/3632620.3671103
Research article
Open access

Evaluating Contextually Personalized Programming Exercises Created with Generative AI

Published: 12 August 2024

Abstract

Programming skills are typically developed through completing various hands-on exercises. Such programming problems can be contextualized to students’ interests and cultural backgrounds. Prior research in educational psychology has demonstrated that context personalization of exercises stimulates learners’ situational interests and positively affects their engagement. However, creating a varied and comprehensive set of programming exercises for students to practice on is a time-consuming and laborious task for computer science educators. Previous studies have shown that large language models can generate conceptually and contextually relevant programming exercises; they thus offer the possibility of automatically producing personalized programming problems that fit students’ interests and needs. This article reports on a user study conducted in an elective introductory programming course that included contextually personalized programming exercises created with GPT-4. The quality of the exercises was evaluated by both the students and the authors. Additionally, this work investigated student attitudes towards the created exercises and their engagement with the system. The results demonstrate that the quality of exercises generated with GPT-4 was generally high. What is more, the course participants found them engaging and useful. This suggests that AI-generated programming problems can be a worthwhile addition to introductory programming courses, as they provide students with a practically unlimited pool of practice material tailored to their personal interests and educational needs.
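
To make the general approach concrete, below is a minimal sketch, in Python, of how a contextually personalized exercise might be requested from GPT-4 through the OpenAI chat completions API. It is an illustration under stated assumptions, not the prompts or system used in the study: the prompt wording, model identifier, sampling temperature, and the example concept/theme pairing are all hypothetical.

    # Minimal sketch (assumptions: openai>=1.0 Python SDK installed, OPENAI_API_KEY
    # set in the environment; prompt wording and model ID are illustrative, not the
    # study's actual setup).
    from openai import OpenAI

    client = OpenAI()

    def generate_exercise(concept: str, interest: str) -> str:
        """Request a beginner exercise on `concept`, themed around `interest`."""
        prompt = (
            f"Write an introductory Python programming exercise that practices {concept}. "
            f"Set the problem statement in the context of {interest}. "
            "Include a short problem description, one sample input/output pair, "
            "and a reference solution."
        )
        response = client.chat.completions.create(
            model="gpt-4",  # assumption: the paper reports using GPT-4
            messages=[
                {"role": "system", "content": "You author exercises for an introductory programming course."},
                {"role": "user", "content": prompt},
            ],
            temperature=0.7,  # allow some variety across generated exercises
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        # Example: a loops exercise personalized to a student interested in football.
        print(generate_exercise("for loops and list processing", "football match statistics"))

In a course setting, exercises produced this way would still need the kind of quality screening described in the paper (evaluation by instructors and students) before being released to learners.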


Cited By

  • (2024) Embrace, Don’t Avoid: Reimagining Higher Education with Generative Artificial Intelligence. Journal of Educational Management and Learning 2(2), 81–90. https://doi.org/10.60084/jeml.v2i2.233. Online publication date: 28 November 2024.



Published In

ICER '24: Proceedings of the 2024 ACM Conference on International Computing Education Research - Volume 1
August 2024
539 pages
ISBN:9798400704758
DOI:10.1145/3632620
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 12 August 2024

Author Tags

  1. automatic exercise generation
  2. context personalization
  3. generative AI
  4. large language models

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • Research Council of Finland

Conference

ICER 2024

Acceptance Rates

Overall Acceptance Rate 189 of 803 submissions, 24%



Article Metrics

  • Downloads (last 12 months): 579
  • Downloads (last 6 weeks): 163

Reflects downloads up to 04 January 2025.

