DOI: 10.1145/3491140.3528323

Discrimination of Automatically Generated Questions Used as Formative Practice

Published: 01 June 2022

Abstract

Advances in artificial intelligence and automatic question generation (AQG) have made it possible to generate the volume of formative practice questions needed to engage students in learning by doing. These automatically generated (AG) questions can be integrated with textbook content in a courseware environment so that students can practice as they read. Scaling this learning-by-doing method is a valuable pursuit, as it has been shown to cause better learning outcomes (i.e., the doer effect). However, it is also necessary to ensure that these AG questions perform as well as human-authored (HA) questions. Previous studies found AG and HA questions to be essentially equivalent with respect to student engagement, difficulty, and persistence. While those question performance metrics expanded existing AQG research, this paper extends it further by evaluating question discrimination using student data from a university Neuroscience course. The AG questions are found to perform as well as HA questions with respect to discrimination.
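The paper evaluates discrimination with an item response theory model fit to real student data. As a rough illustration of the underlying idea only (not the authors' method), a classical corrected item-total discrimination index can be computed from a binary response matrix; the data below is simulated, and all names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated binary response matrix: 200 students x 10 questions.
# (Illustrative only -- the paper uses real responses from a
# university Neuroscience course.)
ability = rng.normal(size=(200, 1))
difficulty = rng.normal(size=(1, 10))
p_correct = 1.0 / (1.0 + np.exp(-(ability - difficulty)))
responses = (rng.random((200, 10)) < p_correct).astype(int)

def discrimination_index(responses):
    """Corrected item-total (point-biserial) correlation per item:
    how well each item separates high- from low-scoring students."""
    n_items = responses.shape[1]
    out = np.empty(n_items)
    total = responses.sum(axis=1)
    for j in range(n_items):
        rest = total - responses[:, j]  # exclude the item itself
        out[j] = np.corrcoef(responses[:, j], rest)[0, 1]
    return out

disc = discrimination_index(responses)
print(np.round(disc, 2))
```

Items with near-zero or negative values fail to separate stronger from weaker students; an IRT discrimination parameter captures the same intuition within a latent-trait model.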




Published In

L@S '22: Proceedings of the Ninth ACM Conference on Learning @ Scale
June 2022
491 pages
ISBN:9781450391580
DOI:10.1145/3491140
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. artificial intelligence
  2. automatic question generation
  3. automatically generated questions
  4. courseware
  5. formative practice
  6. in vivo experimentation
  7. item response theory
  8. natural language processing
  9. question discrimination

Qualifiers

  • Short-paper

Conference

L@S '22
L@S '22: Ninth ACM Conference on Learning @ Scale
June 1-3, 2022
New York City, NY, USA

Acceptance Rates

Overall acceptance rate: 117 of 440 submissions (27%)

Article Metrics

  • Downloads (last 12 months): 23
  • Downloads (last 6 weeks): 8
Reflects downloads up to 31 Dec 2024

Cited By
  • (2024) "Automatically Generated Practice in the Classroom: Exploring Performance and Impact Across Courses," 2024 International Conference on Software, Telecommunications and Computer Networks (SoftCOM), pp. 1-6. DOI: 10.23919/SoftCOM62040.2024.10721828. Online publication date: 26-Sep-2024.
  • (2024) "Forums, Feedback, and Two Kinds of AI: A Selective History of Learning @ Scale," Proceedings of the Eleventh ACM Conference on Learning @ Scale, pp. 376-382. DOI: 10.1145/3657604.3664667. Online publication date: 9-Jul-2024.
  • (2024) "An Investigation of Automatically Generated Feedback on Student Behavior and Learning," Proceedings of the 14th Learning Analytics and Knowledge Conference, pp. 850-856. DOI: 10.1145/3636555.3636901. Online publication date: 18-Mar-2024.
  • (2024) "Automatic Question Generation for Spanish Textbooks: Evaluating Spanish Questions Generated with the Parallel Construction Method," International Journal of Artificial Intelligence in Education. DOI: 10.1007/s40593-024-00394-1. Online publication date: 1-Apr-2024.
  • (2023) "Exploring Student Persistence with Automatically Generated Practice through Interaction Patterns," 2023 International Conference on Software, Telecommunications and Computer Networks (SoftCOM), pp. 1-6. DOI: 10.23919/SoftCOM58365.2023.10271578. Online publication date: 21-Sep-2023.
  • (2023) "Encouraging Critical Thinking Support System: Question Generation and Lecture Slide Recommendations," Proceedings of the Tenth ACM Conference on Learning @ Scale, pp. 287-291. DOI: 10.1145/3573051.3596173. Online publication date: 20-Jul-2023.
  • (2023) "Engaging in Student-Centered Educational Data Science Through Learning Engineering," Educational Data Science: Essentials, Approaches, and Tendencies, pp. 3-40. DOI: 10.1007/978-981-99-0026-8_1. Online publication date: 30-Apr-2023.
