
A Personalized Framework for Consumer and Producer Group Fairness Optimization in Recommender Systems

Published: 05 June 2024

Abstract

In recent years, there has been growing recognition that when machine learning (ML) algorithms are used to automate decisions, they may mistreat individuals or groups, with legal, ethical, or economic consequences. Recommender systems are prominent examples of such ML systems that aid users in making decisions. Most prior research on recommender system fairness treats user and item fairness concerns independently, ignoring the fact that recommender systems operate in a two-sided marketplace. In this article, we propose CP-FairRank, an optimization-based re-ranking algorithm that seamlessly integrates fairness constraints from both the consumer and producer side in a joint objective framework. A key characteristic of the framework is that it is generalizable and can accommodate varied fairness settings based on group segmentation, recommendation model, and domain. For instance, we demonstrate that the system can jointly improve consumer and producer fairness when (un)protected consumer groups are defined by their activity level and mainstreamness, while producer groups are defined by their popularity level. For empirical validation, through large-scale experiments on eight datasets and four mainstream collaborative filtering recommendation models, we demonstrate that our proposed strategy improves both consumer and producer fairness with little or no loss in overall recommendation quality, demonstrating the role algorithms can play in mitigating data biases. Our results across different group segmentations also indicate that the amount of improvement varies with the segmentation: how much bias is produced, and how much of it the algorithm can mitigate, depend on the protected group definition, a factor that, to our knowledge, has not been examined in depth in previous studies and is highlighted by the results of this study.
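
To make the joint consumer-producer objective concrete, the sketch below (Python) shows a simplified greedy re-ranker that trades off predicted relevance against exposure for long-tail items, with an extra relevance weight for the protected consumer group. The function name rerank, the weights lambda_c and lambda_p, and the greedy heuristic itself are illustrative assumptions for exposition; CP-FairRank formulates the re-ranking as a constrained joint optimization rather than this heuristic.

    # Illustrative sketch only: a greedy two-sided fairness-aware re-ranker.
    # Assumed names and weights; not the paper's exact formulation.
    from collections import defaultdict

    def rerank(candidates, long_tail, protected_users, k=10,
               lambda_c=0.1, lambda_p=0.1):
        """candidates: {user: [(item, relevance_score), ...]}.
        long_tail: set of less-popular (protected producer) items.
        protected_users: set of disadvantaged (protected consumer) users.
        Returns {user: [item, ...]} top-k lists.
        """
        exposure = defaultdict(int)   # how often each item has been recommended so far
        reranked = {}
        for user, scored_items in candidates.items():
            # Protected consumers get a relevance boost so the producer-fairness
            # term does not degrade their list quality further.
            c_boost = lambda_c if user in protected_users else 0.0
            chosen, pool = [], list(scored_items)
            while len(chosen) < k and pool:
                # Greedily pick the item with the best combined utility:
                # boosted relevance + long-tail bonus - mild over-exposure penalty.
                best = max(
                    pool,
                    key=lambda pair: (1 + c_boost) * pair[1]
                    + (lambda_p if pair[0] in long_tail else 0.0)
                    - 0.01 * exposure[pair[0]],
                )
                pool.remove(best)
                chosen.append(best[0])
                exposure[best[0]] += 1
            reranked[user] = chosen
        return reranked

    # Example usage with toy data:
    # rerank({"u1": [("i1", 0.9), ("i2", 0.8), ("i3", 0.4)]},
    #        long_tail={"i3"}, protected_users={"u1"}, k=2)

In this sketch, raising lambda_p pushes more long-tail items into the recommendation lists, while lambda_c compensates protected consumers for the relevance they might otherwise lose; this mirrors, in a simplified per-user form, the trade-off that the joint objective controls globally.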


Cited By

  • (2024) Are We Explaining the Same Recommenders? Incorporating Recommender Performance for Evaluating Explainers. In Proceedings of the 18th ACM Conference on Recommender Systems, 1113–1118. DOI: 10.1145/3640457.3691709. Online publication date: 8-Oct-2024.


    Information

    Published In

    ACM Transactions on Recommender Systems, Volume 2, Issue 3
    September 2024
    245 pages
    EISSN: 2770-6699
    DOI: 10.1145/3613671

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 05 June 2024
    Online AM: 05 March 2024
    Accepted: 24 January 2024
    Revised: 19 December 2023
    Received: 26 February 2023
    Published in TORS Volume 2, Issue 3

    Author Tags

    1. Responsible IR
    2. recommender systems
    3. fairness
    4. ranking
    5. bias mitigation
    6. consumer and provider
    7. multi-stakeholder

    Qualifiers

    • Research-article

    Article Metrics

    • Downloads (Last 12 months): 294
    • Downloads (Last 6 weeks): 63
    Reflects downloads up to 11 Dec 2024
