research-article
Public Access

Attitudes and Folk Theories of Data Subjects on Transparency and Accuracy in Emotion Recognition

Published: 07 April 2022

Abstract

The growth of technologies promising to infer emotions raises political and ethical concerns, including concerns regarding their accuracy and transparency. A marginalized perspective in these conversations is that of data subjects potentially affected by emotion recognition. Taking social media as one emotion recognition deployment context, we conducted interviews with data subjects (i.e., social media users) to investigate their notions about accuracy and transparency in emotion recognition and interrogate stated attitudes towards these notions and related folk theories. We find that data subjects see accurate inferences as uncomfortable and as threatening their agency, pointing to privacy and ambiguity as desired design principles for social media platforms. While some participants argued that contemporary emotion recognition must be accurate, others raised concerns about possibilities for contesting the technology and called for better transparency. Furthermore, some challenged the technology altogether, highlighting that emotions are complex, relational, performative, and situated. In interpreting our findings, we identify new folk theories about accuracy and meaningful transparency in emotion recognition. Overall, our analysis shows an unsatisfactory status quo for data subjects that is shaped by power imbalances and a lack of reflexivity and democratic deliberation within platform governance.

Supplementary Material

ZIP File (v6cscw1078aux.zip)
Interview Protocols




    Published In

    Proceedings of the ACM on Human-Computer Interaction, Volume 6, Issue CSCW1
    April 2022
    2511 pages
    EISSN: 2573-0142
    DOI: 10.1145/3530837
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 07 April 2022
    Published in PACMHCI Volume 6, Issue CSCW1


    Author Tags

    1. accuracy
    2. affect recognition
    3. algorithm
    4. emotion recognition
    5. emotional artificial intelligence
    6. ethics
    7. folk theories
    8. social media
    9. transparency

    Qualifiers

    • Research-article

    Article Metrics

    • Downloads (Last 12 months)472
    • Downloads (Last 6 weeks)65
    Reflects downloads up to 12 Dec 2024


    Cited By

    • (2025) Google knows me too well! Coping with perceived surveillance in an algorithmic profiling context. Computers in Human Behavior 165: 108536. https://doi.org/10.1016/j.chb.2024.108536. Online publication date: Apr-2025.
    • (2024) Exploring the Utility of Emotion Recognition Systems in Healthcare. In Using Machine Learning to Detect Emotions and Predict Human Psychology, 245--271. https://doi.org/10.4018/979-8-3693-1910-9.ch011. Online publication date: 12-Apr-2024.
    • (2024) Emotion Algorithm Analysis and Expression Optimization of Film and Television Drama Lines. Applied Mathematics and Nonlinear Sciences 9, 1. https://doi.org/10.2478/amns-2024-1930. Online publication date: 5-Aug-2024.
    • (2024) Borderline decisions? Lack of justification for automatic deception detection at EU borders. TATuP - Zeitschrift für Technikfolgenabschätzung in Theorie und Praxis 33, 1: 34--40. https://doi.org/10.14512/tatup.33.1.34. Online publication date: 15-Mar-2024.
    • (2024) Reparations of the horse? Algorithmic reparation and overspecialized remedies. Big Data & Society 11, 3. https://doi.org/10.1177/20539517241270670. Online publication date: 24-Sep-2024.
    • (2024) U.S. Job-Seekers' Organizational Justice Perceptions of Emotion AI-Enabled Interviews. Proceedings of the ACM on Human-Computer Interaction 8, CSCW2: 1--42. https://doi.org/10.1145/3686993. Online publication date: 8-Nov-2024.
    • (2024) Reflective Design for Informal Participatory Algorithm Auditing: A Case Study with Emotion AI. In Proceedings of the 13th Nordic Conference on Human-Computer Interaction, 1--17. https://doi.org/10.1145/3679318.3685411. Online publication date: 13-Oct-2024.
    • (2024) What should we do with Emotion AI? Towards an Agenda for the Next 30 Years. In Companion Publication of the 2024 Conference on Computer-Supported Cooperative Work and Social Computing, 98--101. https://doi.org/10.1145/3678884.3689135. Online publication date: 11-Nov-2024.
    • (2024) Powered by AI. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7, 4: 1--24. https://doi.org/10.1145/3631414. Online publication date: 12-Jan-2024.
    • (2024) Constructing Capabilities: The Politics of Testing Infrastructures for Generative AI. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 1838--1849. https://doi.org/10.1145/3630106.3659009. Online publication date: 3-Jun-2024.
