DOI: 10.1145/3573051.3593381

Assessing the Fairness of Course Success Prediction Models in the Face of (Un)equal Demographic Group Distribution

Published: 20 July 2023

Abstract

In recent years, predictive models have been increasingly used by education practitioners and stakeholders to derive actionable insights that support student success. Usually, model selection (i.e., the decision of which predictive model to use) is based largely on the predictive performance of the models. Nevertheless, it has become important to consider fairness as an integral part of the criteria for model selection. Might a model be unfair towards certain demographic groups? Might it systematically perform poorly for certain demographic groups? Indeed, prior studies affirm this. Which model, then, should we choose? Additionally, prior studies suggest that demographic group imbalance in the training dataset is a source of such unfairness. If so, would the fairness of the predictive models improve if the demographic group distribution in the training dataset became balanced? This study seeks to answer these questions. Firstly, we analyze the fairness of 4 commonly used state-of-the-art models that predict course success for 3 IT courses at a large public Australian university. Specifically, we investigate whether the models serve different demographic groups equally. Secondly, to address the identified unfairness, supposedly caused by the demographic group imbalance, we train the models on 3 types of balanced data and investigate again whether the unfairness was mitigated. We found that none of the predictive models was consistently fair in all 3 courses. This suggests that model selection decisions should be made carefully by both researchers and stakeholders, as per the requirements of the domain of application. Furthermore, we found that balancing demographic groups (and class labels), although it can be an initial step, is not enough to ensure the fairness of predictive models in education. An implication of this is that sometimes the source of unfairness may not be immediately apparent.
Therefore, "blindly" attributing the unfairness to demographic group imbalance may cause the unfairness to persist even when the data becomes balanced. We hope that our findings can guide practitioners and relevant stakeholders in making well-informed decisions.
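The abstract describes two building blocks of the study: measuring whether a model serves demographic groups equally, and retraining on demographically balanced data. The sketch below illustrates both ideas in plain Python. It is an assumption on our part, not the paper's actual method: the abstract names neither the fairness metric nor the balancing technique, so we use the equal-opportunity gap (difference in true-positive rate between groups) as an example metric and random oversampling as an example balancing step; the function names (`equal_opportunity_gap`, `oversample_to_balance`) are hypothetical.

```python
import random
from collections import Counter

def equal_opportunity_gap(y_true, y_pred, groups):
    """Largest difference in true-positive rate (recall on the
    'success' class) between demographic groups. Equal opportunity is
    one common group-fairness notion; the paper's exact metric choice
    is not stated in the abstract, so this is an assumption."""
    tpr = {}
    for g in set(groups):
        # Indices of actual successes belonging to group g.
        pos = [i for i, t in enumerate(y_true) if t == 1 and groups[i] == g]
        tpr[g] = sum(y_pred[i] for i in pos) / len(pos) if pos else 0.0
    return max(tpr.values()) - min(tpr.values())

def oversample_to_balance(rows, group_of, seed=0):
    """Randomly oversample minority demographic groups until all groups
    are equally represented -- one simple way to produce the 'balanced'
    training data the abstract describes."""
    rng = random.Random(seed)
    by_group = {}
    for r in rows:
        by_group.setdefault(group_of(r), []).append(r)
    target = max(len(members) for members in by_group.values())
    balanced = []
    for members in by_group.values():
        balanced.extend(members)
        balanced.extend(rng.choices(members, k=target - len(members)))
    return balanced

# Toy illustration: group "f" has TPR 0.5, group "m" has TPR 1.0,
# so the equal-opportunity gap is 0.5.
y_true = [1, 1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 1, 1]
groups = ["f", "f", "f", "m", "m", "m"]
gap = equal_opportunity_gap(y_true, y_pred, groups)

# Balancing a group-imbalanced training set (2 "f" rows vs 1 "m" row).
rows = [("alice", "f"), ("bea", "f"), ("carl", "m")]
balanced = oversample_to_balance(rows, group_of=lambda r: r[1])
counts = Counter(g for _, g in balanced)  # each group now has 2 rows
```

Note that, as the abstract's findings warn, driving such a gap down by balancing alone is not guaranteed to work: a model retrained on the oversampled data can still be unfair if the imbalance was not the true source of the disparity.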



      Published In

      L@S '23: Proceedings of the Tenth ACM Conference on Learning @ Scale
      July 2023
      445 pages
      ISBN:9798400700255
      DOI:10.1145/3573051

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. course success prediction
      2. fairness in learning analytics
      3. protected group imbalance

      Qualifiers

      • Research-article

      Funding Sources

      • Postgraduate Research Scholarship of University of South Australia
      • Australian Research Council

      Conference

      L@S '23
      L@S '23: Tenth ACM Conference on Learning @ Scale
      July 20 - 22, 2023
      Copenhagen, Denmark

      Acceptance Rates

      Overall Acceptance Rate 117 of 440 submissions, 27%


      Cited By

      • (2025) Fairness for machine learning software in education. Journal of Systems and Software 219:C (1 Jan 2025). https://doi.org/10.1016/j.jss.2024.112244
      • (2024) Contexts Matter but How? Course-Level Correlates of Performance and Fairness Shift in Predictive Model Transfer. Proceedings of the 14th Learning Analytics and Knowledge Conference (18 Mar 2024), 713-724. https://doi.org/10.1145/3636555.3636936
      • (2024) Algorithmic Bias in BERT for Response Accuracy Prediction: A Case Study for Investigating Population Validity. Journal of Educational Measurement (27 Oct 2024). https://doi.org/10.1111/jedm.12420
      • (2024) Using Keystroke Behavior Patterns to Detect Nonauthentic Texts in Writing Assessments: Evaluating the Fairness of Predictive Models. Journal of Educational Measurement (18 Oct 2024). https://doi.org/10.1111/jedm.12416
      • (2024) Towards Fair Detection of AI-Generated Essays in Large-Scale Writing Assessments. Artificial Intelligence in Education. Posters and Late Breaking Results, Workshops and Tutorials, Industry and Innovation Tracks, Practitioners, Doctoral Consortium and Blue Sky (2 Jul 2024), 317-324. https://doi.org/10.1007/978-3-031-64312-5_38
