DOI: https://doi.org/10.1145/3630106.3658932
Research article
Open access

Auditing GPT's Content Moderation Guardrails: Can ChatGPT Write Your Favorite TV Show?

Published: 05 June 2024

Abstract

Large language models (LLMs) are increasingly appearing in consumer-facing products. To prevent problematic use, the organizations behind these systems have put content moderation guardrails in place that prevent the models from generating content they consider harmful. However, most of these enforcement standards and processes are opaque. Although they play a major role in the user experience of these tools, automated content moderation tools have received relatively little attention compared to other aspects of the models. This study undertakes an algorithm audit of OpenAI’s ChatGPT with the goal of better understanding its content moderation guardrails and their potential biases. To evaluate performance on a broad cultural range of content, we generate a dataset of 100 popular United States television shows with one to three synopses for each episode in the first season of each show (3,309 total synopses). We probe GPT’s content moderation endpoint (ME) to identify violating content both in the synopses themselves and in GPT’s own outputs when asked to generate a script based on each synopsis, also comparing with ME outputs on 81 real scripts from the same TV shows (269,578 total ME outputs). Our findings show that a large share of both GPT-generated and real scripts are flagged as content violations (about 18% of GPT scripts and 69% of real ones). Using metadata, we find that TV maturity ratings, as well as certain genres (Animation, Crime, Fantasy, and others), are statistically significantly related to a script’s likelihood of being flagged. We conclude by discussing the implications of LLM self-censorship and directions for future research on their moderation procedures.
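
To make the probing procedure concrete, the sketch below shows how a single synopsis might be run through the loop the abstract describes: generate a script with the chat API, then submit both the synopsis and the generated script to the moderation endpoint. This is a minimal illustration assuming the openai Python client (>= 1.0); the model name, prompt wording, and helper functions are assumptions for illustration, not the authors' actual pipeline.

```python
# A minimal sketch of the audit loop described in the abstract, assuming the
# openai Python client (>= 1.0). The model name, prompt wording, and helper
# structure are illustrative assumptions, not the authors' actual pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def moderate(text: str):
    """Send text to the moderation endpoint; return (flagged, categories)."""
    result = client.moderations.create(input=text).results[0]
    # `flagged` is the overall boolean verdict; `categories` breaks the
    # decision down by category (e.g. violence, sexual, hate).
    return result.flagged, result.categories


def generate_script(synopsis: str) -> str:
    """Ask the chat model to write a script from an episode synopsis."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed; the paper's exact model may differ
        messages=[{
            "role": "user",
            "content": f"Write a TV script based on this synopsis:\n{synopsis}",
        }],
    )
    return response.choices[0].message.content


# One pass of the audit: moderate the synopsis itself, then the script GPT
# generates from it. The synopsis here is a made-up example, not paper data.
synopsis = "A small-town detective investigates a string of burglaries."
script = generate_script(synopsis)
print("synopsis flagged:", moderate(synopsis)[0])
print("script flagged:  ", moderate(script)[0])
```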

Information

Published In

FAccT '24: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency
June 2024
2580 pages
ISBN: 9798400704505
DOI: 10.1145/3630106
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 05 June 2024


Author Tags

  1. AI system audit
  2. content moderation
  3. text generation

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

FAccT '24
