DOI: 10.1145/3501385.3543971

Launching Registered Report Replications in Computer Science Education Research

Published: 03 August 2022

Abstract

Background and Context. The quality of a research field is largely determined by the quality of its publications. In many fields, however, this quality is endangered by biases towards publishing positive results, which have fostered questionable research practices that undermine the reliability of reported findings.
Objectives. We explored two approaches that could guard against these questionable research practices in computer science education research: (1) replicating prior studies to confirm (or refute) previous findings, and (2) requiring authors to submit a registered report, containing hypotheses, methods, and analytic procedures, before conducting their studies.
Method. Over the span of 18 months, we organized a special issue of the Computer Science Education journal that accepted only registered reports replicating a previous computer science education study. Registered reports involve peer review and approval at the research design stage, before data collection begins. The editorial process was therefore modified to accommodate multiple rounds of review and a longer period between initial and final submissions. We believe this is the first use of registered reports in computer science education research. A questionnaire gathering feedback on the new process was also administered to the authors of the accepted reports.
Findings. We found seven author teams willing to submit a manuscript for the special issue. Of these, registered reports from five teams were accepted to proceed. One team then withdrew because the ethics procedure at their institution could not be completed within the special issue timeline. The remaining four author teams conducted their studies and resubmitted full papers, all of which were accepted pending final corrections. Authors' feedback about the registered reports process was positive.
Implications. We demonstrated that it is possible to attract interest in registered report replications for a computer science education journal issue and to carry out the necessary editorial steps successfully. Future efforts should address challenges related to the modified research and journal submission timelines and consider adding a second review cycle to the first stage of registered reports. While the procedures used in this special issue may suit many research approaches, further discussion is warranted on how they can be combined with exploratory research (as opposed to hypothesis-testing research) and how they can be adapted to non-positivist research.

Published In

ICER '22: Proceedings of the 2022 ACM Conference on International Computing Education Research - Volume 1
August 2022
372 pages
ISBN: 9781450391948
DOI: 10.1145/3501385

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. Metascience
  2. Preregistration
  3. Registered Reports
  4. Replication

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

ICER 2022: ACM Conference on International Computing Education Research
August 7–11, 2022
Lugano and Virtual Event, Switzerland

Acceptance Rates

Overall acceptance rate: 189 of 803 submissions, 24%
