DOI: 10.1145/3514094.3534177
Research article

Mimetic Models: Ethical Implications of AI that Acts Like You

Published: 27 July 2022

Abstract

An emerging theme in artificial intelligence research is the creation of models to simulate the decisions and behavior of specific people, in domains including game-playing, text generation, and artistic expression. These models go beyond earlier approaches in the way they are tailored to individuals, and the way they are designed for interaction rather than simply the reproduction of fixed, pre-computed behaviors. We refer to these as mimetic models, and in this paper we develop a framework for characterizing the ethical and social issues raised by their growing availability. Our framework includes a number of distinct scenarios for the use of such models, and considers the impacts on a range of different participants, including the target being modeled, the operator who deploys the model, and the entities that interact with it.

Supplementary Material

MP4 File (AIES22-mm158.mp4)
Mimetic Models: Ethical Implications of AI that Acts like You, summary and discussion of four scenarios






Published In

AIES '22: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society
July 2022
939 pages
ISBN:9781450392471
DOI:10.1145/3514094
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. artificial intelligence
  2. ethics
  3. generative models
  4. machine learning
  5. mimetic models

Qualifiers

  • Research-article

Funding Sources

  • Simons Foundation
  • Canada Foundation for Innovation (CFI)
  • Ontario Research Fund (ORF)
  • Natural Sciences and Engineering Research Council of Canada (NSERC)
  • Vannevar Bush Faculty Fellows Program
  • MacArthur Foundation

Conference

AIES '22: AAAI/ACM Conference on AI, Ethics, and Society
May 19 - 21, 2021
Oxford, United Kingdom

Acceptance Rates

Overall Acceptance Rate 61 of 162 submissions, 38%


Article Metrics

  • Downloads (Last 12 months)146
  • Downloads (Last 6 weeks)21
Reflects downloads up to 09 Jan 2025

Cited By
  • (2024)Dittos: Personalized, Embodied Agents That Participate in Meetings When You Are UnavailableProceedings of the ACM on Human-Computer Interaction10.1145/36870338:CSCW2(1-28)Online publication date: 8-Nov-2024
  • (2024)A Culturally Sensitive Test to Evaluate Nuanced GPT HallucinationIEEE Transactions on Artificial Intelligence10.1109/TAI.2023.33328375:6(2739-2751)Online publication date: Jun-2024
  • (2023)Stepping Stones for Self-LearningGenerative AI in Teaching and Learning10.4018/979-8-3693-0074-9.ch005(85-142)Online publication date: 5-Dec-2023
  • (2022)Learning Models of Individual Behavior in ChessProceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining10.1145/3534678.3539367(1253-1263)Online publication date: 14-Aug-2022
