
How to Evaluate Trust in AI-Assisted Decision Making? A Survey of Empirical Methodologies

Published: 18 October 2021

Abstract

The spread of AI-embedded systems involved in human decision making makes studying human trust in these systems critical. However, empirically investigating trust is challenging. One reason is the lack of standard protocols for designing trust experiments. In this paper, we present a survey of existing methods to empirically investigate trust in AI-assisted decision making and analyse the corpus along the constitutive elements of an experimental protocol. We find that the definition of trust is not commonly integrated into experimental protocols, which can lead to findings that are overclaimed or hard to interpret and compare across studies. Drawing from empirical practices in social and cognitive studies on human-human trust, we provide practical guidelines to improve the methodology of studying Human-AI trust in decision-making contexts. In addition, we bring forward research opportunities of two types: one focusing on further investigation of trust methodologies and the other on factors that impact Human-AI trust.

References

[1]
Sander Ackermans, Debargha Dey, Peter Ruijten, Raymond H. Cuijpers, and Bastian Pfleging. 2020. The Effects of Explicit Intention Communication, Conspicuous Sensors, and Pedestrian Attitude in Interactions with Automated Vehicles. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20). Association for Computing Machinery, New York, NY, USA, 1--14. https://doi.org/10.1145/3313831.3376197
[2]
Ighoyota Ben. Ajenaghughrure, Sonia C. Sousa, Ilkka Johannes Kosunen, and David Lamas. 2019. Predictive Model to Assess User Trust: A Psycho-Physiological Approach. In Proceedings of the 10th Indian Conference on Human-Computer Interaction (IndiaHCI '19). Association for Computing Machinery, New York, NY, USA, 10. https://doi.org/10.1145/3364183.3364195
[3]
Ban Al-Ani, Matthew J. Bietz, Yi Wang, Erik Trainer, Benjamin Koehne, Sabrina Marczak, David Redmiles, and Rafael Prikladnicki. 2013. Globally Distributed System Developers: Their Trust Expectations and Processes. In Proceedings of the 2013 Conference on Computer Supported Cooperative Work (San Antonio, Texas, USA) (CSCW '13). Association for Computing Machinery, New York, NY, USA, 563--574. https://doi.org/10.1145/2441776.2441840
[4]
Alper Alan, Enrico Costanza, Joel Fischer, Sarvapali D. Ramchurn, Tom Rodden, and Nicholas R. Jennings. 2014. A Field Study of Human-Agent Interaction for Electricity Tariff Switching. In Proceedings of the 2014 International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS '14). International Foundation for Autonomous Agents and Multiagent Systems, New York, NY, USA, 965--972.
[5]
Carlos Alós-Ferrer and Federica Farolfi. 2019. Trust Games and Beyond. Frontiers in Neuroscience 13, 887 (Sept. 2019), 1--14. https://doi.org/10.3389/fnins.2019.00887
[6]
Saleema Amershi, Dan Weld, Mihaela Vorvoreanu, Adam Fourney, Besmira Nushi, Penny Collisson, Jina Suh, Shamsi Iqbal, Paul N. Bennett, Kori Inkpen, and et al. 2019. Guidelines for Human-AI Interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland Uk) (CHI '19). Association for Computing Machinery, New York, NY, USA, Article 3, 13 pages. https://doi.org/10.1145/3290605.3300233
[7]
Bengt-Erik Andersson and Stig-Göran Nilsson. 1964. Studies in the reliability and validity of the critical incident technique. Journal of Applied Psychology 48 (1964), 398--403. https://doi.org/10.1037/h0042025
[8]
Sean Andrist, Erin Spannan, and Bilge Mutlu. 2013. Rhetorical Robots: Making Robots More Effective Speakers Using Linguistic Cues of Expertise. In Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI '13). IEEE Press, New York, NY, USA, 341--348.
[9]
Melanie J. Ashleigh and Edgar Meyer. 2011. Deepening the understanding of trust: combining repertory grid and narrative to explore the uniqueness of trust. In Handbook of Research Methods on Trust, Fergus Lyon, Guido Möllering, and Mark Saunders (Eds.). Edward Elgar, Cheltenham, UK; Northampton, MA, USA, Chapter 14, 138--148.
[10]
Maryam Ashoori and Justin D. Weisz. 2019. In AI We Trust? Factors That Influence Trustworthiness of AI-infused Decision-Making Processes.
[11]
Benoit A. Aubert and Barbara L. Kelsey. 2003. Further Understanding of Trust and Performance in Virtual Teams. Small Group Research 34, 5 (2003), 575--618. https://doi.org/10.1177/1046496403256011 arXiv:https://doi.org/10.1177/1046496403256011
[12]
Reinhard Bachmann. 2011. Utilising repertory grids in macro- level comparative studies. In Handbook of Research Methods on Trust, Fergus Lyon, Guido Möllering, and Mark Saunders (Eds.). Edward Elgar, Cheltenham, UK; Northampton, MA, USA, Chapter 13, 130--137.
[13]
Brad M. Barber and Terrance Odean. 2001. Boys Will be Boys: Gender, Overconfidence, and Common Stock Investment. The Quarterly Journal of Economics 116, 1 (2001), 261--292. http://www.jstor.org/stable/2696449
[14]
Roy F. Baumeister. 1984. Choking under pressure: Self-consciousness and paradoxical effects of incentives on skillful performance. Journal of Personality and Social Psychology 46, 3 (1984), 610--620. https://doi.org/10.1037/0022--3514.46.3.610
[15]
Defense Innovation Board. 2019. AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense. Technical Report. United States Department of Defense, Virginia, United States. 11 pages. https://media.defense.gov/2019/Oct/31/2002204458/-1/-1/0/DIB$_$AI$_$PRINCIPLES$_$PRIMARY$_$DOCUMENT.PDF
[16]
Gerd Bohner and Nina Dickel. 2011. Attitudes and Attitude Change. Annual Review of Psychology 62, 1 (2011), 391--417. https://doi.org/10.1146/annurev.psych.121208.131609 20809791.
[17]
Iris Bohnet, Fiona Greig, Benedikt Herrmann, and Richard Zeckhauser. 2008. Betrayal Aversion: Evidence from Brazil, China, Oman, Switzerland, Turkey, and the United States. American Economic Review 98, 1 (2008), 294--310. http://dx.doi.org/10.1257/aer.98.1.294
[18]
S. Boon and J. Holmes. 1991. The dynamics of interpersonal trust: resolving uncertainty in the ace of risk. In Cooperation and Prosocial Behaviour, R. Hinde and J. Gorebel (Eds.). Cambridge University Press, Cambridge, 190--211.
[19]
Gerard Breeman. 2011. Hermeneutic methods in trust research. In Handbook of Research Methods on Trust, Fergus Lyon, Guido Möllering, and Mark Saunders (Eds.). Edward Elgar, Cheltenham, UK; Northampton, MA, USA, Chapter 15, 149--160.
[20]
Gerard Engelbert Breeman. 2006. Cultivating trust : how do public policies become trusted. Ph.D. Dissertation. Dept. of Public Administration, Faculty of Social and Behavioural Sciences, Leiden University.
[21]
Tom Bridgwater, Manuel Giuliani, Anouk van Maris, Greg Baker, Alan Winfield, and Tony Pipe. 2020. Examining Profiles for Robotic Risk Assessment: Does a Robot's Approach to Risk Affect User Trust?. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (HRI '20). Association for Computing Machinery, New York, NY, USA, 23--31. https://doi.org/10.1145/3319502.3374804
[22]
Anna Brown, Alexandra Chouldechova, Emily Putnam-Hornstein, Andrew Tobin, and Rhema Vaithianathan. 2019. Toward Algorithmic Accountability in Public Services: A Qualitative Study of Affected Community Perspectives on Algorithmic Decision-Making in Child Welfare Services. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19). Association for Computing Machinery, New York, NY, USA, 1--12. https://doi.org/10.1145/3290605.3300271
[23]
Marc Brysbaert. 2019. How many participants do we have to include in properly powered experiments? A tutorial of power analysis with reference tables. Journal of Cognition 2, 1, 28. https://doi.org/10.5334/joc.72
[24]
Zana Buçinca, Phoebe Lin, Krzysztof Z. Gajos, and Elena L. Glassman. 2020. Proxy Tasks and Subjective Measures Can Be Misleading in Evaluating Explainable AI Systems. In Proceedings of the 25th International Conference on Intelligent User Interfaces (IUI '20). Association for Computing Machinery, New York, NY, USA, 454--464. https://doi.org/10.1145/3377325.3377498
[25]
Terence Burnham, Kevin McCabe, and Vernon Smith. 2000. Friend-or-foe intentionality priming in an extensive form trust game. Journal of Economic Behavior & Organization 43, 1 (2000), 57--73. https://EconPapers.repec.org/RePEc:eee:jeborg:v:43:y:2000:i:1:p:57--73
[26]
Carrie J. Cai, Emily Reif, Narayan Hegde, Jason Hipp, Been Kim, Daniel Smilkov, Martin Wattenberg, Fernanda Viegas, Greg S. Corrado, Martin C. Stumpe, and Michael Terry. 2019. Human-Centered Tools for Coping with Imperfect Algorithms During Medical Decision-Making. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19). Association for Computing Machinery, New York, NY, USA, 1--14. https://doi.org/10.1145/3290605.3300234
[27]
Carrie J. Cai, Samantha Winter, David Steiner, Lauren Wilcox, and Michael Terry. 2019. "Hello AI": Uncovering the Onboarding Needs of Medical Practitioners for Human-AI Collaborative Decision-Making. Proc. ACM Hum.-Comput. Interact. 3, CSCW (2019), 24. https://doi.org/10.1145/3359206
[28]
Kelly Caine. 2016. Local Standards for Sample Size at CHI. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (San Jose, California, USA) (CHI '16). Association for Computing Machinery, New York, NY, USA, 981--992. https://doi.org/10.1145/2858036.2858498
[29]
COLIN F. CAMERER and ROBIN M. HOGARTH. 1999. The Effects of Financial Incentives in Experiments: A Review and Capital-Labor-Production Framework. Journal of Risk and Uncertainty 19, 1/3 (1999), 7--42. http://www.jstor.org/stable/41760945
[30]
Cristiano Castelfranchi and Rino Falcone. 2010. Socio-Cognitive Model of Trust: Basic Ingredients. John Wiley & Sons, Ltd, Chichester, United Kingdom, Chapter 2, 35--94. https://doi.org/10.1002/9780470519851.ch2
[31]
Raja Chatila, Virginia Dignum, Michael Fisher, Fosca Giannotti, Katharina Morik, Stuart Russell, and Karen Yeung. 2021. Trustworthy AI. Springer International Publishing, Cham, 13--39. https://doi.org/10.1007/978--3-030--69128--8_2
[32]
A. Chatzimparmpas, R. M. Martins, I. Jusufi, K. Kucher, F. Rossi, and A. Kerren. 2020. The State of the Art in Enhancing Trust in Machine Learning Models with the Use of Visualizations. Computer Graphics Forum 39, 3 (2020), 713--756. https://doi.org/10.1111/cgf.14034 arXiv:https://onlinelibrary.wiley.com/doi/pdf/10.1111/cgf.14034
[33]
Shih-Yi Chien, Michael Lewis, Katia Sycara, Jyi-Shane Liu, and Asiye Kumru. 2018. The Effect of Culture on Trust in Automation: Reliability and Workload. ACM Trans. Interact. Intell. Syst. 8, 4, Article 29 (Nov. 2018), 31 pages. https://doi.org/10.1145/3230736
[34]
Michael Chromik, Florian Lachner, and Andreas Butz. 2020. ML for UX? - An Inventory and Predictions on the Use of Machine Learning Techniques for UX Research. Association for Computing Machinery, New York, NY, USA. https://doi.org/10.1145/3419249.3420163
[35]
I. Glenn Cohen. 2020. Informed Consent and Medical Artificial Intelligence: What to Tell the Patient? Georgetown Law Journal 108 (2020), 1425--1469. https://doi.org/10.2139/ssrn.3529576
[36]
European Commission. 2020. On Artificial Intelligence - A European approach to excellence and trust. Technical Report. European Commission, Brussels, Belgium. 27 pages. https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020$_$en.pdf
[37]
James Cox. 2004. How to identify trust and reciprocity. Games and Economic Behavior 46, 2 (2004), 260--281. https://EconPapers.repec.org/RePEc:eee:gamebe:v:46:y:2004:i:2:p:260--281
[38]
Henriette Cramer, Vanessa Evers, Nicander Kemper, and Bob Wielinga. 2008. Effects of Autonomy, Traffic Conditions and Driver Personality Traits on Attitudes and Trust towards In-Vehicle Agents. In Proceedings of the 2008 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology - Volume 03 (WI-IAT '08). IEEE Computer Society, New York, NY, USA, 477--482. https://doi.org/10.1109/WIIAT.2008.326
[39]
Steven C. Currall and Timothy A. Judge. 1995. Measuring trust between organizational boundary role persons. Organizational Behavior and Human Decision Processes 64, 2 (1995), 151--170. https://doi.org/10.1006/obhd.1995.1097
[40]
Shuchisnigdha Deb, Lesley Strawderman, Daniel W. Carruth, Janice DuBien, Brian Smith, and Teena M. Garrison. 2017. Development and validation of a questionnaire to assess pedestrian receptivity toward fully autonomous vehicles. Transportation Research Part C: Emerging Technologies 84 (2017), 178 -- 195. https://doi.org/10.1016/j.trc.2017.08.029
[41]
Lifang Deng and Wai Chan. 2017. Testing the Difference Between Reliability Coefficients Alpha and Omega. Educational and psychological measurement 77, 2 (Apr 2017), 185--203. https://doi.org/10.1177/0013164416658325
[42]
Munjal Desai, Poornima Kaniarasu, Mikhail Medvedev, Aaron Steinfeld, and Holly Yanco. 2013. Impact of Robot Failures and Feedback on Real-Time Trust. In Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI '13). IEEE Press, New York, NY, USA, 251--258.
[43]
Morton Deutsch. 1958. Trust and suspicion. Journal of Conflict Resolution 2, 4 (1958), 265--279. https://doi.org/10.1177/002200275800200401 arXiv:https://doi.org/10.1177/002200275800200401
[44]
Morton Deutsch. 1960. The Effect of Motivational Orientation upon Trust and Suspicion. Human Relations 13, 2 (1960), 123--139. https://doi.org/10.1177/001872676001300202 arXiv:https://doi.org/10.1177/001872676001300202
[45]
M Deutsch. 1960. Trust, trustworthiness, and the F scale. Journal of abnormal and social psychology 61 (July 1960), 138-140. https://doi.org/10.1037/h0046501
[46]
Graham Dietz and Deanne N. Den Hartog. 2006. Measuring trust inside organisations. Personnel Review 35 (2006), 557--588. https://doi.org/10.1108/00483480610682299
[47]
Kurt Dirks and Donald Ferrin. 2002. Trust in Leadership: Meta-Analytic Findings and Implications for Research and Practice. The Journal of applied psychology 87 (09 2002), 611--28. https://doi.org/10.1037//0021--9010.87.4.611
[48]
Jaimie Drozdal, Justin Weisz, Dakuo Wang, Gaurav Dass, Bingsheng Yao, Changruo Zhao, Michael Muller, Lin Ju, and Hui Su. 2020. Trust in AutoML: Exploring Information Needs for Establishing Trust in Automated Machine Learning Systems. In Proceedings of the 25th International Conference on Intelligent User Interfaces (IUI '20). Association for Computing Machinery, New York, NY, USA, 297--307. https://doi.org/10.1145/3377325.3377501
[49]
Thomas J. Dunn, Thom Baguley, and Vivienne Brunsden. 2014. From alpha to omega: A practical solution to the pervasive problem of internal consistency estimation. British Journal of Psychology 105, 3 (2014), 399--412. https://doi.org/10.1111/bjop.12046 arXiv:https://onlinelibrary.wiley.com/doi/pdf/10.1111/bjop.12046
[50]
Malin Eiband, Sarah Theres Völkel, Daniel Buschek, Sophia Cook, and Heinrich Hussmann. 2019. When People and Algorithms Meet: User-Reported Problems in Intelligent Everyday Applications. In Proceedings of the 24th International Conference on Intelligent User Interfaces (IUI '19). Association for Computing Machinery, New York, NY, USA, 96--106. https://doi.org/10.1145/3301275.3302262
[51]
Fredrick Ekman, Mikael Johansson, and Jana Sochor. 2016. Creating Appropriate Trust for Autonomous Vehicle Systems: A Framework for HMI Design. IEEE Transactions on Human-Machine Systems 48, 1 (01 2016), 95--101.
[52]
Anthony M. Evans and Joachim I. Krueger. 2009. The Psychology (and Economics) of Trust. Social and Personality Psychology Compass 3, 6 (2009), 1003--1017. https://doi.org/10.1111/j.1751--9004.2009.00232.x
[53]
Xiaocong Fan, Sooyoung Oh, Michael McNeese, John Yen, Haydee Cuevas, Laura Strater, and Mica R. Endsley. 2008. The Influence of Agent Reliability on Trust in Human-Agent Collaboration. In Proceedings of the 15th European Conference on Cognitive Ergonomics: The Ergonomics of Cool Interaction (ECCE '08). Association for Computing Machinery, New York, NY, USA, 8. https://doi.org/10.1145/1473018.1473028
[54]
D. S. Fareri, L. J. Chang, and M. R. Delgado. 2012. Effects of direct social experience on trust decisions and neural reward circuitry. Front Neurosci 6 (2012), 148.
[55]
Ernst Fehr. 2009. ON THE ECONOMICS AND BIOLOGY OF TRUST. Journal of the Euro- pean Economic Association 7, 2--3 (2009), 235--266. https://doi.org/10.1162/JEEA.2009.7.2--3.235 arXiv:https://onlinelibrary.wiley.com/doi/pdf/10.1162/JEEA.2009.7.2--3.235
[56]
Shi Feng and Jordan Boyd-Graber. 2019. What Can AI Do for Me? Evaluating Machine Learning Interpretations in Cooperative Play. In Proceedings of the 24th International Conference on Intelligent User Interfaces (IUI '19). Association for Computing Machinery, New York, NY, USA, 229--239. https://doi.org/10.1145/3301275.3302265
[57]
J. C. Flanagan. 1954. The critical incident technique. The Psychological Bulletin 51, 4 (1954), 327--358.
[58]
Jerry Floersch, Jeffrey L. Longhofer, Derrick Kranke, and Lisa Townsend. 2010. Integrating Thematic, Grounded Theory and Narrative Analysis: A Case Study of Adolescent Psychotropic Treatment. Qualitative Social Work 9, 3 (2010), 407--425. https://doi.org/10.1177/1473325010362330 arXiv:https://doi.org/10.1177/1473325010362330
[59]
Jonathan B. Freeman. 2018. Doing Psychological Science by Hand. Current Directions in Psychological Science 27, 5 (2018), 315--323. https://doi.org/10.1177/0963721417746793 arXiv:https://doi.org/10.1177/0963721417746793
[60]
Anna-Katharina Frison, Laura Aigner, Philipp Wintersberger, and Andreas Riener. 2018. Who is Generation A? Investigating the Experience of Automated Driving for Different Age Groups. In Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI '18). Association for Computing Machinery, New York, NY, USA, 94--104. https://doi.org/10.1145/3239060.3239087
[61]
Anna-Katharina Frison, Philipp Wintersberger, Andreas Riener, Clemens Schartmüller, Linda Ng Boyle, Erika Miller, and Klemens Weigl. 2019. In UX We Trust: Investigation of Aesthetics and Usability of Driver-Vehicle Interfaces and Their Impact on the Perception of Automated Driving. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19). Association for Computing Machinery, New York, NY, USA, 1--13. https://doi.org/10.1145/3290605.3300374
[62]
Ernestine Fu, Mishel Johns, David A. B. Hyde, Srinath Sibi, Martin Fischer, and David Sirkin. 2020. Is Too Much System Caution Counterproductive? Effects of Varying Sensitivity and Automation Levels in Vehicle Collision Avoidance Systems. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20). Association for Computing Machinery, New York, NY, USA, 1--13. https://doi.org/10.1145/3313831.3376300
[63]
C. Ashley Fulmer and Michele J. Gelfand. 2012. At What Level (and in Whom) We Trust: Trust Across Multiple Organizational Levels. Journal of Management 38, 4 (2012), 1167--1230. https://doi.org/10.1177/0149206312439327 arXiv:https://doi.org/10.1177/0149206312439327
[64]
AXA Research Fund. 2019. Artificial Intelligence: Fostering Trust. Technical Report. AXA. 45 pages. https://www.axa-research.org/en/news/AI-research-guide
[65]
G20. 2019. G20 Ministerial Statement on Trade and Digital Economy. Technical Report. G20, Brussels, Belgium. 14 pages. http://trade.ec.europa.eu/doclib/press/index.cfm?id=2027
[66]
Diego Gambetta. [n.d.]. Can We Trust Trust? Department of Sociology,University of Oxford, Oxford, United Kingdom.
[67]
Bhavya Ghai, Q. Vera Liao, Yunfeng Zhang, Rachel Bellamy, and Klaus Mueller. 2021. Explainable Active Learning (XAL): Toward AI Explanations as Interfaces for Machine Teachers. Proc. ACM Hum.-Comput. Interact. 4, 3, Article 235 (Jan. 2021), 28 pages. https://doi.org/10.1145/3432934
[68]
Nicole Gillespie. 2003. Measuring trust in working relationships: The behavioral trust inventory. Melbourne Business School, Melbourne, Australia.
[69]
Nicole Gillespie. 2011. Measuring trust in organizational contexts: An overview of survey-based measures. In Handbook of Research Methods on Trust, Fergus Lyon, Guido Möllering, and Mark Saunders (Eds.). Edward Elgar, Cheltenham, UK; Northampton, MA, USA, Chapter 17, 175--188.
[70]
Leilani H Gilpin, David Bau, Ben Z Yuan, Ayesha Bajwa, Michael Specter, and Lalana Kagal. 2018. Explaining explanations: An overview of interpretability of machine learning. 2018 IEEE 5th International Conference on data science and advanced analytics (DSAA) (2018), 80--89.
[71]
Alyssa Glass, Deborah L. McGuinness, and Michael Wolverton. 2008. Toward Establishing Trust in Adaptive Agents. In Proceedings of the 13th International Conference on Intelligent User Interfaces (IUI '08). Association for Computing Machinery, New York, NY, USA, 227--236. https://doi.org/10.1145/1378773.1378804
[72]
Ella Glikson and Anita Woolley. 2020. Human trust in artificial intelligence: Review of empirical research (in press). The Academy of Management Annals 14, 2 (August 2020), 62. https://doi.org/10.5465/annals.2018.0057
[73]
Uri Gneezy and Aldo Rustichini. 2000. Pay Enough or Don't Pay at All. The Quarterly Journal of Economics 115, 3 (2000), 791--810. http://www.jstor.org/stable/2586896
[74]
Dietz Graham and Den Hartog Deanne N. 2006. Measuring trust inside organisations. Personnel Review 35, 5 (01 Jan 2006), 557--588. https://doi.org/10.1108/00483480610682299
[75]
Dara Gruber, Ashley Aune, and Wilma Koutstaal. 2018. Can Semi-Anthropomorphism Influence Trust and Compliance? Exploring Image Use in App Interfaces. In Proceedings of the Technology, Mind, and Society (TechMindSociety '18). Association for Computing Machinery, New York, NY, USA, 6. https://doi.org/10.1145/3183654.3183700
[76]
Jonathan Grudin. 2009. AI and HCI: Two Fields Divided by a Common Focus. AI Magazine 30, 4 (September 2009), 48--57. https://doi.org/10.1609/aimag.v30i4.2271
[77]
Kunal Gupta, Ryo Hajika, Yun Suen Pai, Andreas Duenser, Martin Lochner, and Mark Billinghurst. 2019. In AI We Trust: Investigating the Relationship between Biosignals, Trust and Cognitive Load in VR. In 25th ACM Symposium on Virtual Reality Software and Technology (VRST '19). Association for Computing Machinery, New York, NY, USA, 10. https://doi.org/10.1145/3359996.3364276
[78]
Andreas Gutscher. 2007. A Trust Model for an Open, Decentralized Reputation System. In Trust Management. Springer US, New Brunswick, Canada, 285--300. https://doi.org/10.1007/978-0--387--73655--6_19
[79]
Özgür Gürerk, Andrea Bönsch, Lucas Braun, Christian Grund, Christine Harbring, Thomas Kittsteiner, and Andreas Staffeldt. 2014. Experimental Economics in Virtual Reality.
[80]
Peter A. Hancock, Deborah R. Billings, Kristin E. Schaefer, Jessie Y. C. Chen, Ewart J. de Visser, and Raja Parasuraman. 2011. A Meta-Analysis of Factors Affecting Trust in Human-Robot Interaction. Human Factors 53, 5 (2011), 517--527. https://doi.org/10.1177/0018720811417254
[81]
Jason L. Harman, John O'Donovan, Tarek Abdelzaher, and Cleotilde Gonzalez. 2014. Dynamics of Human Trust in Recommender Systems. In Proceedings of the 8th ACM Conference on Recommender Systems (RecSys '14). Association for Computing Machinery, New York, NY, USA, 305--308. https://doi.org/10.1145/2645710.2645761
[82]
IBM Watson Health. 2020. Artificial Intelligence in medicine. https://www.ibm.com/watson-health/learn/artificial-intelligence-medicine
[83]
Rebecca Heilweil. 2019. Artificial intelligence will help determine if you get your next job. https://www.vox.com/recode/2019/12/12/20993665/artificial-intelligence-ai-job-screen
[84]
Tove Helldin, Göran Falkman, Maria Riveiro, and Staffan Davidsson. 2013. Presenting System Uncertainty in Automotive UIs for Supporting Trust Calibration in Autonomous Driving. In Proceedings of the 5th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI '13). Association for Computing Machinery, New York, NY, USA, 210--217. https://doi.org/10.1145/2516540.2516554
[85]
Ralph Hertwig and Andreas Ortmann. 2003. Economists' and Psychologists' Experimental Practices: How They Differ, Why They Differ, And How they Could Converge. Vol. 1. Oxford University Press, Oxford, United Kingdom, Chapter 13, 253--272. https://books.google.fr/books?id=fOI31h_G6UkC&pg=PA260&lpg=PA260&dq=financial+incentives+and+trust+experiment&source=bl&ots=-CRrjQeHv_&sig=ACfU3U2ID0VJinKgmlUgpFsomoQMDO2GnQ&hl=en&sa=X&ved=2ahUKEwiA4O3C27XqAhVNOBoKHWGJBakQ6AEwDnoECAsQAQ#v=onepage&q=financial% 20incentives%20and%20trust%20experiment&f=false
[86]
Kevin Anthony Hoff and Masooda Bashir. 2015. Trust in Automation: Integrating Empirical Evidence on Factors That Influence Trust. Human Factors 57, 3 (2015), 407--434. https://doi.org/10.1177/0018720814547570
[87]
Kai Holländer, Philipp Wintersberger, and Andreas Butz. 2019. Overtrust in External Cues of Automated Vehicles: An Experimental Investigation. In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI '19). Association for Computing Machinery, New York, NY, USA, 211--221. https://doi.org/10.1145/3342197.3344528
[88]
John Holmes and John Rempel. 1985. Trust in Close Relationships. Journal of Personality and Social Psychology 49 (07 1985). https://doi.org/10.1037//0022--3514.49.1.95
[89]
Sungsoo Ray Hong, Jessica Hullman, and Enrico Bertini. 2020. Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs. Proc. ACM Hum.-Comput. Interact. 4, 1, Article 068 (May 2020), 26 pages. https://doi.org/10.1145/3392878
[90]
Larue Tone Hosmer. 1995. Trust: The Connecting Link between Organizational Theory and Philosophical Ethics. The Academy of Management Review 20, 2 (1995), 379--403. http://www.jstor.org/stable/258851
[91]
Hsiao-Ying Huang and Masooda Bashir. 2017. Personal Influences on Dynamic Trust Formation in Human-Agent Interaction. In Proceedings of the 5th International Conference on Human Agent Interaction (HAI '17). Association for Computing Machinery, New York, NY, USA, 233--243. https://doi.org/10.1145/3125739.3125749
[92]
Lenard Huff and Lane Kelley. 2003. Levels of Organizational Trust in Individualist versus Collectivist Societies: A Seven-Nation Study. Organization Science 14, 1 (2003), 81--90. http://www.jstor.org/stable/3086035
[93]
J. S. Hyde. 2005. The gender similarities hypothesis. Am Psychol 60, 6 (Sep 2005), 581--592.
[94]
Brett W. Israelsen and Nisar R. Ahmed. 2019. "Dave...I Can Assure You ...That It's Going to Be All Right ..." A Definition, Case for, and Survey of Algorithmic Assurances in Human-Autonomy Trust Relationships. ACM Comput. Surv. 51, 6, Article 113 (Jan. 2019), 37 pages. https://doi.org/10.1145/3267338
[95]
Joi Ito. 2018. What the Boston School Bus Schedule Can Teach Us About AI. https://www.wired.com/story/joi-ito-ai-and-bus-routes/
[96]
Jiun-Yin Jian, Ann M. Bisantz, and Colin G. Drury. 2000. Foundations for an Empirically Determined Scale of Trust in Automated Systems. International Journal of Cognitive Ergonomics 4, 1 (2000), 53--71. https://doi.org/10.1207/S15327566IJCE0401_04
[97]
Zhuochen Jin, Shuyuan Cui, Shunan Guo, David Gotz, Jimeng Sun, and Nan Cao. 2020. CarePre: An Intelligent Clinical Decision Assistance System. ACM Trans. Comput. Healthcare 1, 1 (2020), 20. https://doi.org/10.1145/3344258
[98]
Anna Jobin, Marcello Ienca, and Effy Vayena. 2019. The global landscape of AI ethics guidelines. Nature Machine Intelligence 1, 9 (01 Sep 2019), 389--399. https://doi.org/10.1038/s42256-019-0088--2
[99]
Noel D. Johnson and Alexandra A. Mislin. 2011. Trust games: A meta-analysis. Journal of Economic Psychology 32, 5 (June 2011), 865--889. https://doi.org/10.1016/j.joep.2011.05.00
[100]
Angie M. Johnston, Candice M. Mills, and Asheley R. Landrum. 2015. How do children weigh competence and benevolence when deciding whom to trust? Cognition 144 (2015), 76 -- 90. https://doi.org/10.1016/j.cognition.2015.07.015
[101]
K. G. Jöreskog. 1967. A GENERAL APPROACH TO CONFIRMATORY MAXIMUM LIKELIHOOD FACTOR ANALYSIS. ETS Research Bulletin Series 1967, 2 (1967), 183--202. https://doi.org/10.1002/j.2333--8504.1967.tb00991.x arXiv:https://onlinelibrary.wiley.com/doi/pdf/10.1002/j.2333--8504.1967.tb00991.x
[102]
Daniel Kahneman. 2000. Evaluation by Moments: Past and Future. Cambridge University Press & Russell Sage Foundation, New York, USA, Chapter 38, 693--708. https://doi.org/10.1017/CBO9780511803475.039
[103]
Peter H. Kim, Cecily D. Cooper, Kurt T. Dirks, and Donald L. Ferrin. 2013. Repairing trust with individuals vs. groups. Organizational Behavior and Human Decision Processes 120, 1 (2013), 1--14. https://doi.org/10.1016/j.obhdp.2012.08.0
[104]
F. H. Knight. 1921. Risk, Uncertainty, and Profit. Houghton Mifflin, New York, USA. https://fraser.stlouisfed.org/files/docs/publications/books/risk/riskuncertaintyprofit.pdf
[105]
Bran Knowles, Mark Rouncefield, Mike Harding, Nigel Davies, Lynne Blair, James Hannon, John Walden, and Ding Wang. 2015. Models and Patterns of Trust. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing (Vancouver, BC, Canada) (CSCW '15). Association for Computing Machinery, New York, NY, USA, 328--338. https://doi.org/10.1145/2675133.2675154
[106]
Melissa A. Koenig and Vikram K. Jaswal. 2011. Characterizing Children's Expectations About Expertise and Incompetence: Halo or Pitchfork Effects? Child Development 82, 5 (2011), 1634--1647. http://www.jstor.org/stable/41289869
[107]
Agnieszka Kolasinska, Ivano Lauriola, and Giacomo Quadrio. 2019. Do People Believe in Artificial Intelligence? A Cross-Topic Multicultural Study. In Proceedings of the 5th EAI International Conference on Smart Objects and Technologies for Social Good (GoodTechs '19). Association for Computing Machinery, New York, NY, USA, 31--36. https://doi.org/10.1145/3342428.3342667
[108]
KPMG. 2019. Controlling AI: The imperative for transparency and explainability. Technical Report. KPMG. 28 pages. https://advisory.kpmg.us/articles/2019/controlling-ai.html
[109]
Matthias Kraus, Nicolas Wagner, and Wolfgang Minker. 2020. Effects of Proactive Dialogue Strategies on Human-Computer Trust. In Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization (Genoa, Italy) (UMAP '20). Association for Computing Machinery, New York, NY, USA, 107--116. https://doi.org/10.1145/3340631.3394840
[110]
Sari Kujala, Virpi Roto, Kaisa Väänänen, Evangelos Karapanos, and Arto Sinnelä. 2011. UX Curve: A method for evaluating long-term user experience. Interact. Comput. 23 (2011), 473--483. https://doi.org/10.1016/j.intcom.2011.06.005
[111]
Philipp Kulms and Stefan Kopp. 2019. More Human-Likeness, More Trust? The Effect of Anthropomorphism on Self-Reported and Behavioral Trust in Continued and Interdependent Human-Agent Cooperation. In Proceedings of Mensch Und Computer 2019 (MuC'19). Association for Computing Machinery, New York, NY, USA, 31--42. https://doi.org/10.1145/3340764.3340793
[112]
Vivian Lai and Chenhao Tan. 2019. On Human Predictions with Explanations and Predictions of Machine Learning Models: A Case Study on Deception Detection. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* '19). Association for Computing Machinery, New York, NY, USA, 29--38. https://doi.org/10.1145/3287560.3287590
[113]
Asheley R. Landrum, Candice M. Mills, and Angie M. Johnston. 2013. When do children trust the expert? Benevolence information influences children's trust more than expertise. Developmental Science 16, 4 (2013), 622--638. https://doi.org/10.1111/desc.12059 arXiv:https://onlinelibrary.wiley.com/doi/pdf/10.1111/desc.12059
[114]
Alexander Lascaux. 2008. Trust and uncertainty: a critical re-assessment. International Review of Sociology 18 (03 2008), 1--18. https://doi.org/10.1080/03906700701823613
[115]
John Lee and Neville Moray. 1992. Trust, control strategies and allocation of function in human-machine systems. Ergonomics 35, 10 (1992), 1243--1270. https://doi.org/10.1080/00140139208967392
[116]
John D. Lee and Katrina A. See. 2004. Trust in Automation: Designing for Appropriate Reliance. Human Factors 46, 1 (2004), 50--80. https://doi.org/10.1518/hfes.46.1.50.30392
[117]
John D. Lee and Neville Moray. 1994. Trust, self-confidence, and operators' adaptation to automation. International Journal of Human-Computer Studies 40, 1 (1994), 153 -- 184. https://doi.org/10.1006/ijhc.1994.1007
[118]
Min Hun Lee, Daniel P. Siewiorek, Asim Smailagic, Alexandre Bernardino, and Sergi Bermúdez i Badia. 2020. Co-Design and Evaluation of an Intelligent Decision Support System for Stroke Rehabilitation Assessment. Proc. ACM Hum.-Comput. Interact. 4, 2, Article 156 (Oct. 2020), 27 pages. https://doi.org/10.1145/3415227
[119]
Roy Lewicki and Chad Brinsfield. 2011. Measuring trust beliefs and behaviours. In Handbook of Research Methods on Trust, Fergus Lyon, Guido Möllering, and Mark Saunders (Eds.). Edward Elgar, Cheltenham, UK; Northampton, MA, USA, Chapter 3, 29--39. https://doi.org/10.4337/9781781009246.00013
[120]
Roy J. Lewicki, Daniel J. McAllister, and Robert J. Bies. 1998. Trust and Distrust: New Relationships and Realities. The Academy of Management Review 23, 3 (1998), 438--458. http://www.jstor.org/stable/259288
[121]
J. David Lewis and Andrew Weigert. 1985. Trust as a Social Reality. Social Forces 63, 4 (1985), 967--985. http://www.jstor.org/stable/2578601
[122]
Ian Li, Jodi Forlizzi, Anind Dey, and Sara Kiesler. 2007. My Agent as Myself or Another: Effects on Credibility and Listening to Advice. In Proceedings of the 2007 Conference on Designing Pleasurable Products and Interfaces (DPPI '07). Association for Computing Machinery, New York, NY, USA, 194--208. https://doi.org/10.1145/1314161.1314179
[123]
Q. Vera Liao, Daniel Gruen, and Sarah Miller. 2020. Questioning the AI: Informing Design Practices for Explainable AI User Experiences. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20). Association for Computing Machinery, New York, NY, USA, 1--15. https://doi.org/10.1145/3313831.3376590
[124]
Sarah Lichtenstein, Baruch Fischhoff, and Lawrence D. Phillips. 1977. Calibration of Probabilities: The State of the Art. In Decision Making and Change in Human Affairs. Springer Netherlands, Netherlands, 275--324. https://doi.org/10.1007/978-94-010-1276-8_19
[125]
James L. Loomis. 1959. Communication, the Development of Trust, and Cooperative Behavior. Human Relations 12, 4 (1959), 305--315. https://doi.org/10.1177/001872675901200402
[126]
Ewa Luger and Abigail Sellen. 2016. "Like Having a Really Bad PA": The Gulf between User Expectation and Experience of Conversational Agents. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '16). Association for Computing Machinery, New York, NY, USA, 5286--5297. https://doi.org/10.1145/2858036.2858288
[127]
Niklas Luhmann. 1979. Trust and Power (1 ed.). Wiley, Chichester, Toronto.
[128]
Niklas Luhmann. 2000. Familiarity, Confidence, Trust: Problems and Alternatives. In Trust: Making and Breaking Cooperative Relations, Diego Gambetta (Ed.). Basil Blackwell, Oxford, United Kingdom, 94--107.
[129]
Fergus Lyon, Guido Möllering, and Mark Saunders. 2015. Handbook of Research Methods on Trust: Second Edition. Edward Elgar Publishing, Cheltenham, United Kingdom. 1--343 pages. https://doi.org/10.4337/9781782547419
[130]
Maria A. Madsen and Shirley Gregor. 2000. Measuring Human-Computer Trust. In Proceedings of the 11th Australasian Conference on Information Systems. Australasian Conference on Information Systems (ACIS), Brisbane, Australia, 6--8.
[131]
Danielle Magaldi and Matthew Berler. 2020. Semi-structured Interviews. Springer International Publishing, Cham, 4825--4830. https://doi.org/10.1007/978-3-319-24612-3_857
[132]
Mora Maldonado, Ewan Dunbar, and Emmanuel Chemla. 2019. Mouse tracking as a window into decision making. Behavior Research Methods 51, 3 (01 Jun 2019), 1085--1101. https://doi.org/10.3758/s13428-018-01194-x
[133]
C. Mantzavinos. 2020. Hermeneutics. In The Stanford Encyclopedia of Philosophy (spring 2020 ed.), Edward N. Zalta (Ed.). Metaphysics Research Lab, Stanford University, Stanford, CA, USA.
[134]
Ronald Marshall. 2003. Building trust early: The influence of first and second order expectations on trust in international channels of distribution. International Business Review 12, 4 (2003), 421--443. https://doi.org/10.1016/S0969-5931(03)00037-4
[135]
Rob Matheson. 2019. Automating artificial intelligence for medical decision-making. http://news.mit.edu/2019/automating-ai-medical-decisions-0806
[136]
Steffen Maurer, Rainer Erbach, Issam Kraiem, Susanne Kuhnert, Petra Grimm, and Enrico Rukzio. 2018. Designing a Guardian Angel: Giving an Automated Vehicle the Possibility to Override Its Driver. In Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI '18). Association for Computing Machinery, New York, NY, USA, 341--350. https://doi.org/10.1145/3239060.3239078
[137]
Roger C. Mayer and James H. Davis. 1999. The effect of the performance appraisal system on trust for management: A field quasi-experiment. Journal of Applied Psychology 84, 1 (1999), 123--136. https://doi.org/10.1037/0021-9010.84.1.123
[138]
Roger C. Mayer, James H. Davis, and F. David Schoorman. 1995. An Integrative Model of Organizational Trust. The Academy of Management Review 20, 3 (1995), 709--734. http://www.jstor.org/stable/258792
[139]
Daniel J. McAllister. 1995. Affect- and Cognition-Based Trust as Foundations for Interpersonal Cooperation in Organizations. The Academy of Management Journal 38, 1 (1995), 24--59. http://www.jstor.org/stable/256727
[140]
Bill McEvily and Marco Tortoriello. 2011. Measuring trust in organisational research: Review and recommendations. Journal of Trust Research 1, 1 (2011), 23--63. https://doi.org/10.1080/21515581.2011.552424
[141]
D. Harrison McKnight and Norman L. Chervany. 2001. Trust and Distrust Definitions: One Bite at a Time. In Trust in Cyber-societies: Integrating the Human and Artificial Perspectives, R. Falcone, M. Singh, and Y. H. Tan (Eds.). Springer, Heidelberg, Germany, 27--54. https://doi.org/10.1007/3-540-45547-7_3
[142]
D. Harrison McKnight, Vivek Choudhury, and Charles Kacmar. 2002. Developing and Validating Trust Measures for e-Commerce: An Integrative Typology. Information Systems Research 13, 3 (2002), 334--359. https://doi.org/10.1287/isre.13.3.334.81
[143]
D. Harrison McKnight, Larry L. Cummings, and Norman L. Chervany. 1998. Initial Trust Formation in New Organizational Relationships. Academy of Management Review 23, 3 (1998), 473--490. https://doi.org/10.5465/amr.1998.926622
[144]
Stephanie M. Merritt. 2011. Affective Processes in Human--Automation Interactions. Human Factors 53, 4 (2011), 356--370. https://doi.org/10.1177/0018720811411912
[145]
Joachim Meyer and John D. Lee. 2013. Trust, Reliance, and Compliance. In The Oxford Handbook of Cognitive Engineering, John D. Lee and Alex Kirlik (Eds.). Oxford University Press, Oxford, UK, 1--29. https://www.oxfordhandbooks.com/view/10.1093/oxfordhb/9780199757183.001.0001/oxfordhb-9780199757183-e-6
[146]
Microsoft. 2018. Responsible bots: 10 guidelines for developers of conversational AI. Technical Report. Microsoft, USA. 5 pages. https://www.microsoft.com/en-us/research/publication/responsible-bots/
[147]
Timothy A. Brown and Michael T. Moore. 2012. Confirmatory Factor Analysis. In Handbook of Structural Equation Modeling, Rick H. Hoyle (Ed.). The Guilford Press, New York, NY, USA, 361--379.
[148]
Drew M. Morris, Jason M. Erno, and June J. Pilcher. 2017. Electrodermal Response and Automation Trust during Simulated Self-Driving Car Use. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 61, 1 (2017), 1759--1762. https://doi.org/10.1177/1541931213601921
[149]
B.M. Muir. 1989. Operators' Trust in and Use of Automatic Controllers in a Supervisory Process Control Task. University of Toronto, Toronto, Canada. https://books.google.fr/books?id=T94NSwAACAAJ
[150]
Lea S. Müller, Sarah M. Meeßen, Meinald T. Thielsch, Christoph Nohe, Dennis M. Riehle, and Guido Hertel. 2020. Do Not Disturb! Trust in Decision Support Systems Improves Work Outcomes under Certain Conditions. In Proceedings of Mensch und Computer 2020 (MuC '20). Association for Computing Machinery, New York, NY, USA, 229--237. https://doi.org/10.1145/3404983.3405515
[151]
Deirdre K. Mulligan, Joshua A. Kroll, Nitin Kohli, and Richmond Y. Wong. 2019. This Thing Called Fairness: Disciplinary Confusion Realizing a Value in Technology. Proc. ACM Hum.-Comput. Interact. 3, CSCW, Article 119 (Nov. 2019), 36 pages. https://doi.org/10.1145/3359221
[152]
Robert Münscher and Torsten M. Kühlmann. 2011. Using critical incident technique in trust research. In Handbook of Research Methods on Trust, Fergus Lyon, Guido Möllering, and Mark Saunders (Eds.). Edward Elgar, Cheltenham, UK; Northampton, MA, USA, Chapter 14, 161--172.
[153]
Michael Naef and Jürgen Schupp. 2009. Measuring Trust: Experiments and Surveys in Contrast and Combination. IZA Discussion Papers 4087. Institute of Labor Economics (IZA). https://ideas.repec.org/p/iza/izadps/dp4087.html
[154]
Manisha Natarajan and Matthew Gombolay. 2020. Effects of Anthropomorphism and Accountability on Trust in Human Robot Interaction. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (HRI '20). Association for Computing Machinery, New York, NY, USA, 33--42. https://doi.org/10.1145/3319502.3374839
[155]
Domen Novak. 2014. Engineering Issues in Physiological Computing. In Advances in Physiological Computing. Springer, London, UK, 17--38. https://doi.org/10.1007/978-1-4471-6392-3_2
[156]
Kenya Freeman Oduor and Christopher S. Campbell. 2007. Deciding When to Trust Automation in a Policy-Based City Management Game: Policity. In Proceedings of the 2007 Symposium on Computer Human Interaction for the Management of Information Technology (CHIMIT '07). Association for Computing Machinery, New York, NY, USA, 2--es. https://doi.org/10.1145/1234772.1234775
[157]
Institute of Business Ethics. 2018. Business Ethics and Artificial Intelligence. Technical Report. Institute of Business Ethics, London, UK. https://www.ibe.org.uk/resource/ibe-briefing-58-business-ethics-and-artificial-intelligence-pdf.html
[158]
Royal College of Physicians. 2018. Artificial intelligence (AI) in health. Technical Report. Royal College of Physicians, London, UK. 1 pages. https://www.rcplondon.ac.uk/projects/outputs/artificial-intelligence-ai-health
[159]
White House Office of Science and Technology Policy. 2020. American AI Initiative: Year One Annual Report. Technical Report. White House Office of Science and Technology Policy, Washington, DC, USA. 36 pages. https://www.whitehouse.gov/ai/
[160]
Claus Offe. 1999. How can we trust our fellow citizens? In Democracy and Trust, Mark E. Warren (Ed.). Cambridge University Press, Cambridge, United Kingdom.
[161]
Roobina Ohanian. 1990. Construction and Validation of a Scale to Measure Celebrity Endorsers' Perceived Expertise, Trustworthiness, and Attractiveness. Journal of Advertising 19, 3 (oct 1990), 39--52. https://doi.org/10.1080/00913367.1990.10673191
[162]
Special Interest Group on Artificial Intelligence. 2019. Dutch Artificial Intelligence Manifesto. Technical Report. ICT Research Platform Nederland, The Netherlands. 15 pages. http://ii.tudelft.nl/bnvki/wp-content/uploads/2018/09/Dutch-AI-Manifesto.pdf
[163]
Tobias O. Nyumba, Kerrie Wilson, Christina J. Derrick, and Nibedita Mukherjee. 2018. The use of focus group discussion methodology: Insights from two decades of application in conservation. Methods in Ecology and Evolution 9, 1 (2018), 20--32. https://doi.org/10.1111/2041-210X.12860
[164]
Cathy O'Neil. 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown Publishing Group, USA.
[165]
Joon Sung Park, Rick Barber, Alex Kirlik, and Karrie Karahalios. 2019. A Slow Algorithm Improves Users' Assessments of the Algorithm's Accuracy. In Proceedings of the 2019 Conference on Computer Supported Cooperative Work (CSCW '19), Vol. 3. Association for Computing Machinery, New York, NY, USA, 15. https://doi.org/10.1145/3359204
[166]
Dhaval Parmar, Stefán Ólafsson, Dina Utami, Prasanth Murali, and Timothy Bickmore. 2020. Navigating the Combinatorics of Virtual Agent Design Space to Maximize Persuasion. International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, 1010--1018.
[167]
Samir Passi and Steven J. Jackson. 2018. Trust in Data Science: Collaboration, Translation, and Accountability in Corporate Data Science Projects. Proc. ACM Hum.-Comput. Interact. 2, CSCW, Article 136 (Nov. 2018), 28 pages. https://doi.org/10.1145/3274405
[168]
Ivan P. Pavlov. 2010. Conditioned reflexes: An investigation of the physiological activity of the cerebral cortex. Annals of Neurosciences 17, 3 (Jul 2010), 136--141. https://doi.org/10.5214/ans.0972-7531.1017309 25205891[pmid].
[169]
Carl J. Pearson, Allaire K. Welk, William A. Boettcher, Roger C. Mayer, Sean Streck, Joseph M. Simons-Rudolph, and Christopher B. Mayhorn. 2016. Differences in Trust between Human and Automated Decision Aids. In Proceedings of the Symposium and Bootcamp on the Science of Security (HotSos '16). Association for Computing Machinery, New York, NY, USA, 95--98. https://doi.org/10.1145/2898375.2898385
[170]
Brandon S. Perelman, Arthur W. Evans III, and Kristin E. Schaefer. 2020. Where Do You Think You're Going? Characterizing Spatial Mental Models from Planned Routes. J. Hum.-Robot Interact. 9, 4, Article 23 (May 2020), 55 pages. https://doi.org/10.1145/3385008
[171]
Patricia Perry. 2011. Concept Analysis: Confidence/Self-confidence. Nursing Forum 46, 4 (2011), 218--230. https://doi.org/10.1111/j.1744-6198.2011.00230.x
[172]
J. Paul Peter. 1979. Reliability: A Review of Psychometric Basics and Recent Marketing Practices. Journal of Marketing Research 16, 1 (1979), 6--17. http://www.jstor.org/stable/3150868
[173]
Gjalt-Jorn Peters. 2014. The alpha and the omega of scale reliability and validity: Why and how to abandon Cronbach's alpha and the route towards more comprehensive assessment of scale quality. The European Health Psychologist 16 (2014), 56--69.
[174]
Jonathan A. Plucker. 2003. Exploratory and Confirmatory Factor Analysis in Gifted Education: Examples with Self-Concept Data. Journal for the Education of the Gifted 27, 1 (2003), 20--35. https://doi.org/10.1177/016235320302700103
[175]
J. Potter and D. Edwards. 1996. Discourse Analysis. Macmillan Education UK, London, 419--425. https://doi.org/10.1007/978-1-349-24483-6_63
[176]
Pearl Pu and Li Chen. 2006. Trust Building with Explanation Interfaces. In Proceedings of the 11th International Conference on Intelligent User Interfaces (IUI '06). Association for Computing Machinery, New York, NY, USA, 93--100. https://doi.org/10.1145/1111449.1111475
[177]
David V. Pynadath, Ning Wang, Ericka Rovira, and Michael J. Barnes. 2018. Clustering Behavior to Recognize Subjective Beliefs in Human-Agent Teams. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS '18). International Foundation for Autonomous Agents and Multiagent Systems, New York, NY, USA, 1495--1503.
[178]
Bako Rajaonah, Françoise Anceaux, Nicolas Tricot, and Marie-Pierre Pacaux-Lemoine. 2006. Trust, Cognitive Control, and Control: The Case of Drivers Using an Auto-Adaptive Cruise Control. In Proceedings of the 13th European Conference on Cognitive Ergonomics: Trust and Control in Complex Socio-Technical Systems (ECCE '06). Association for Computing Machinery, New York, NY, USA, 17--24. https://doi.org/10.1145/1274892.1274896
[179]
Bako Rajaonah, Françoise Anceaux, and Fabrice Vienne. 2006. Study of driver trust during cooperation with adaptive cruise control. Le travail humain 69, 2 (2006), 99--127. https://doi.org/10.3917/th.692.0099
[180]
Samantha Reig, Selena Norman, Cecilia G. Morales, Samadrita Das, Aaron Steinfeld, and Jodi Forlizzi. 2018. A Field Study of Pedestrians and Autonomous Vehicles. In Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI '18). Association for Computing Machinery, New York, NY, USA, 198--209. https://doi.org/10.1145/3239060.3239064
[181]
Robin M. Richter, Maria Jose Valladares, and Steven C. Sutherland. 2019. Effects of the Source of Advice and Decision Task on Decisions to Request Expert Advice. In Proceedings of the 24th International Conference on Intelligent User Interfaces (IUI '19). Association for Computing Machinery, New York, NY, USA, 469--475. https://doi.org/10.1145/3301275.3302279
[182]
Robert Loo. 2002. A caveat on using single-item versus multiple-item scales. Journal of Managerial Psychology 17, 1 (2002), 68--75. https://doi.org/10.1108/02683940210415933
[183]
Lionel P. Robert. 2016. Monitoring and Trust in Virtual Teams. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing (San Francisco, California, USA) (CSCW '16). Association for Computing Machinery, New York, NY, USA, 245--259. https://doi.org/10.1145/2818048.2820076
[184]
Robert B. Lount Jr., Chen-Bo Zhong, Niro Sivanathan, and J. Keith Murnighan. 2008. Getting Off on the Wrong Foot: The Timing of a Breach and the Restoration of Trust. Personality and Social Psychology Bulletin 34, 12 (2008), 1601--1612. https://doi.org/10.1177/0146167208324512 19050335[pmid].
[185]
Paul Robinette, Wenchen Li, Robert Allen, Ayanna M. Howard, and Alan R. Wagner. 2016. Overtrust of Robots in Emergency Evacuation Scenarios. In The Eleventh ACM/IEEE International Conference on Human Robot Interaction (HRI '16). IEEE Press, New York, NY, USA, 101--108.
[186]
Mark A. Robinson. 2018. Using multi-item psychometric scales for research and practice in human resource management. Human Resource Management 57, 3 (2018), 739--750. https://doi.org/10.1002/hrm.21852
[187]
John T. Roscoe. 1969. Fundamental research statistics for the behavioral sciences. Holt, Rinehart and Winston, New York, NY, USA. http://openlibrary.org/books/OL5685768M
[188]
Elizabeth Rosenzweig. 2015. Usability Testing. In Successful User Experience: Strategies and Roadmaps, Elizabeth Rosenzweig (Ed.). Morgan Kaufmann, Boston, MA, USA, Chapter 7, 131--154. https://doi.org/10.1016/B978-0-12-800985-7.00007-7
[189]
Casey Ross and Ike Swetlitz. 2017. IBM pitched its Watson supercomputer as a revolution in cancer care. It's nowhere close. https://www.statnews.com/2017/09/05/watson-ibm-cancer/
[190]
Jennifer M. Ross. 2008. Moderators of trust and reliance across multiple decision aids. Ph.D. Dissertation. Department of Psychology in the College of Sciences at the University of Central Florida.
[191]
Julian B. Rotter. 1980. Interpersonal trust, trustworthiness, and gullibility. American Psychologist 35, 1 (1980), 1--7. https://doi.org/10.1037/0003-066X.35.1.1
[192]
Denise Rousseau, Sim Sitkin, Ronald Burt, and Colin Camerer. 1998. Not So Different After All: A Cross-discipline View of Trust. Academy of Management Review 23 (July 1998). https://doi.org/10.5465/AMR.1998.926617
[193]
Nicole Salomons, Michael van der Linden, Sarah Strohkorb Sebo, and Brian Scassellati. 2018. Humans Conform to Robots: Disambiguating Trust, Truth, and Conformity. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (HRI '18). Association for Computing Machinery, New York, NY, USA, 187--195. https://doi.org/10.1145/3171221.3171282
[194]
Willem E. Saris and Irmtraud N. Gallhofer. 2007. Criteria for the Quality of Survey Measures. John Wiley & Sons, Ltd, Hoboken, New Jersey, USA, 173--217. https://doi.org/10.1002/9780470165195.ch9
[195]
Kristin E. Schaefer. 2013. The Perception And Measurement Of Human-robot Trust. Ph.D. Dissertation. Department of Psychology in the College of Sciences at the University of Central Florida.
[196]
James Schaffer, John O'Donovan, James Michaelis, Adrienne Raglin, and Tobias Höllerer. 2019. I Can Do Better than Your AI: Expertise and Explanations. In Proceedings of the 24th International Conference on Intelligent User Interfaces (IUI '19). Association for Computing Machinery, New York, NY, USA, 240--251. https://doi.org/10.1145/3301275.3302308
[197]
Hanna Schneider, Julia Wayrauther, Mariam Hassib, and Andreas Butz. 2019. Communicating Uncertainty in Fertility Prognosis. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19). Association for Computing Machinery, New York, NY, USA, 1--11. https://doi.org/10.1145/3290605.3300391
[198]
Haeseung Seo, Aiping Xiong, and Dongwon Lee. 2019. Trust It or Not: Effects of Machine-Learning Warnings in Helping Individuals Mitigate Misinformation. In Proceedings of the 10th ACM Conference on Web Science (WebSci '19). Association for Computing Machinery, New York, NY, USA, 265--274. https://doi.org/10.1145/3292522.3326012
[199]
Accenture Federal Services. 2019. Responsible AI: A Framework for Building Trust in your AI Solutions. Technical Report. Accenture. 13 pages. https://www.accenture.com/us-en/insights/us-federal-government/ai-is-ready-are-we
[200]
Fred Shaffer and J. P. Ginsberg. 2017. An Overview of Heart Rate Variability Metrics and Norms. Frontiers in public health 5 (28 Sep 2017), 258--258. https://doi.org/10.3389/fpubh.2017.00258 29034226[pmid].
[201]
Ameneh Shamekhi, Q. Vera Liao, Dakuo Wang, Rachel K. E. Bellamy, and Thomas Erickson. 2018. Face Value? Exploring the Effects of Embodiment for a Group Facilitation Agent. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). Association for Computing Machinery, New York, NY, USA, 1--13. https://doi.org/10.1145/3173574.3173965
[202]
Klaas Sijtsma. 2008. On the Use, the Misuse, and the Very Limited Usefulness of Cronbach's Alpha. Psychometrika 74, 1 (11 Dec 2008), 107. https://doi.org/10.1007/s11336-008-9101-0
[203]
Sim B. Sitkin and Nancy L. Roth. 1993. Explaining the Limited Effectiveness of Legalistic "Remedies" for Trust/Distrust. Organization Science 4, 3 (1993), 367--392. http://www.jstor.org/stable/2634950
[204]
Internet Society. 2017. Artificial intelligence and machine learning: policy paper. Technical Report. Internet Society, Reston, Virginia, United States. https://www.internetsociety.org/resources/doc/2017/artificial-intelligence-and-machine-learning-policy-paper/
[205]
Cassie Solomon, Mark Schneider, and Gregory P. Shea. 2018. How AI-based Systems Can Improve Medical Outcomes. https://knowledge.wharton.upenn.edu/article/ai-based-systems-can-improve-medical-outcomes/
[206]
Donna Spencer and Todd Warfel. 2004. Card sorting: a definitive guide. https://boxesandarrows.com/card-sorting-a-definitive-guide/
[207]
Nicole Sultanum, Michael Brudno, Daniel Wigdor, and Fanny Chevalier. 2018. More Text Please! Understanding and Supporting the Use of Visualization for Clinical Text Overview. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). Association for Computing Machinery, New York, NY, USA, 1--13. https://doi.org/10.1145/3173574.3173996
[208]
Haoye Sun, Willem J. M. I. Verbeke, Rumen Pozharliev, Richard P. Bagozzi, Fabio Babiloni, and Lei Wang. 2019. Framing a trust game as a power game greatly affects interbrain synchronicity between trustor and trustee. Social Neuroscience 14, 6 (Dec. 2019), 635--648. https://doi.org/10.1080/17470919.2019.1566171
[209]
Harini Suresh, Natalie Lao, and Ilaria Liccardi. 2020. Misplaced Trust: Measuring the Interference of Machine Learning in Human Decision-Making. In 12th ACM Conference on Web Science (Southampton, United Kingdom) (WebSci '20). Association for Computing Machinery, New York, NY, USA, 315--324. https://doi.org/10.1145/3394231.3397922
[210]
Steven C. Sutherland, Casper Harteveld, and Michael E. Young. 2015. The Role of Environmental Predictability and Costs in Relying on Automation. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15). Association for Computing Machinery, New York, NY, USA, 2535--2544. https://doi.org/10.1145/2702123.2702609
[211]
Richard S. Sutton and Andrew G. Barto. 1998. Introduction to Reinforcement Learning (1st ed.). MIT Press, Cambridge, MA, USA.
[212]
Jason Tashea. 2017. Courts Are Using AI to Sentence Criminals. https://www.wired.com/2017/04/courts-using-ai-sentence-criminals-must-stop-now/
[213]
AI Taskforce. 2019. Report of Estonia's AI Taskforce. Technical Report. Republic of Estonia Government Office and Republic of Estonia Ministry of Economic Affairs and Communications, Estonia. 47 pages. https://ec.europa.eu/knowledge4policy/ai-watch/estonia-ai-strategy-report
[214]
Hiroyuki Tokushige, Takuji Narumi, Sayaka Ono, Yoshitaka Fuwamoto, Tomohiro Tanikawa, and Michitaka Hirose. 2017. Trust Lengthens Decision Time on Unexpected Recommendations in Human-Agent Interaction. In Proceedings of the 5th International Conference on Human Agent Interaction (HAI '17). Association for Computing Machinery, New York, NY, USA, 245--252. https://doi.org/10.1145/3125739.3125751
[215]
Ilaria Torre, Emma Carrigan, Rachel McDonnell, Katarina Domijan, Killian McCabe, and Naomi Harte. 2019. The Effect of Multimodal Emotional Expression and Agent Appearance on Trust in Human-Agent Interaction. In Motion, Interaction and Games (MIG '19). Association for Computing Machinery, New York, NY, USA, 6. https://doi.org/10.1145/3359566.3360065
[216]
Ilaria Torre, Jeremy Goslin, Laurence White, and Debora Zanatto. 2018. Trust in Artificial Voices: A "Congruency Effect" of First Impressions and Behavioural Experience. In Proceedings of the Technology, Mind, and Society (TechMindSociety '18). Association for Computing Machinery, New York, NY, USA, 6. https://doi.org/10.1145/3183654.3183691
[217]
Italo Trizano-Hermosilla and Jesús M. Alvarado. 2016. Best Alternatives to Cronbach's Alpha Reliability in Realistic Conditions: Congeneric and Asymmetrical Measurements. Frontiers in Psychology 7 (2016), 769. https://doi.org/10.3389/fpsyg.2016.00769
[218]
UNI Global Union. 2017. 10 Principles for Ethical AI. Technical Report. UNI Global Union, Nyon, Switzerland. 10 pages. http://www.thefutureworldofwork.org/opinions/10-principles-for-ethical-ai/
[219]
Hanneke Hooft van Huysduynen, Jacques Terken, and Berry Eggen. 2018. Why Disable the Autopilot?. In Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI '18). Association for Computing Machinery, New York, NY, USA, 247--257. https://doi.org/10.1145/3239060.3239063
[220]
Peter-Paul van Maanen, Francien Wisse, Jurriaan van Diggelen, and Robbert-Jan Beun. 2011. Effects of Reliance Support on Team Performance by Advising and Adaptive Autonomy. In Proceedings of the 2011 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology - Volume 02 (WI-IAT '11). IEEE Computer Society, New York, NY, USA, 280--287. https://doi.org/10.1109/WI-IAT.2011.117
[221]
Michael Veale, Max Van Kleek, and Reuben Binns. 2018. Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). Association for Computing Machinery, New York, NY, USA, 1--14. https://doi.org/10.1145/3173574.3174014
[222]
Cédric Villani, Yann Bonnet, Bertrand Rondepierre, et al. 2018. For a meaningful artificial intelligence: Towards a French and European strategy. Conseil national du numérique, France.
[223]
Rudolf von Sinner. 2005. Trust and Convivência. The Ecumenical Review 57, 3 (2005), 322--341. https://doi.org/10.1111/j.1758-6623.2005.tb00554.x
[224]
Danding Wang, Qian Yang, Ashraf Abdul, and Brian Y. Lim. 2019. Designing Theory-Driven User-Centric Explainable AI. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland, UK) (CHI '19). Association for Computing Machinery, New York, NY, USA, 1--15. https://doi.org/10.1145/3290605.3300831
[225]
Lin Wang, Pei-Luen Patrick Rau, Vanessa Evers, Benjamin Krisper Robinson, and Pamela Hinds. 2010. When in Rome: The Role of Culture & Context in Adherence to Robot Recommendations. In Proceedings of the 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI '10). IEEE Press, New York, NY, USA, 359--366.
[226]
M. Wang, A. Hussein, R. F. Rojas, K. Shafi, and H. A. Abbass. 2018. EEG-Based Neural Correlates of Trust in Human-Autonomy Interaction. In 2018 IEEE Symposium Series on Computational Intelligence (SSCI). IEEE, Bangalore, India, 350--357.
[227]
Ning Wang, David V. Pynadath, and Susan G. Hill. 2016. The Impact of POMDP-Generated Explanations on Trust and Performance in Human-Robot Teams. In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems (AAMAS '16). International Foundation for Autonomous Agents and Multiagent Systems, New York, NY, USA, 997--1005.
[228]
Ning Wang, David V. Pynadath, and Susan G. Hill. 2016. Trust Calibration within a Human-Robot Team: Comparing Automatically Generated Explanations. In The Eleventh ACM/IEEE International Conference on Human Robot Interaction (HRI '16). IEEE Press, New York, NY, USA, 109--116.
[229]
Eva K. Wendt, Bengt Fridlund, and Evy Lidell. 2004. Trust and confirmation in a gynecologic examination situation: a critical incident technique analysis. Acta Obstetricia et Gynecologica Scandinavica 83, 12 (2004), 1208--1215.
[230]
Lawrence R. Wheeless and Janis Grotz. 1977. The measurement of trust and its relationship to self-disclosure. Human Communication Research 3, 3 (1977), 250--257. https://doi.org/10.1111/j.1468-2958.1977.tb00523.x
[231]
T. Whelan. 2008. Social Presence in Multi-User Virtual Environments: A Review and Measurement Framework for Organizational Research.
[232]
Philipp Wintersberger, Tamara von Sawitzky, Anna-Katharina Frison, and Andreas Riener. 2017. Traffic Augmentation as a Means to Increase Trust in Automated Driving Systems. In Proceedings of the 12th Biannual Conference on Italian SIGCHI Chapter (CHItaly '17). Association for Computing Machinery, New York, NY, USA, 7. https://doi.org/10.1145/3125571.3125600
[233]
Allison Woodruff, Sarah E. Fox, Steven Rousso-Schindler, and Jeffrey Warshaw. 2018. A Qualitative Exploration of Perceptions of Algorithmic Fairness. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). Association for Computing Machinery, New York, NY, USA, 1--14. https://doi.org/10.1145/3173574.3174230
[234]
Jun Xiao, John Stasko, and Richard Catrambone. 2007. The Role of Choice and Customization on Users' Interaction with Embodied Conversational Agents: Effects on Perception and Performance. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '07). Association for Computing Machinery, New York, NY, USA, 1293--1302. https://doi.org/10.1145/1240624.1240820
[235]
Yaqi Xie, Indu P Bodala, Desmond C. Ong, David Hsu, and Harold Soh. 2019. Robot Capability and Intention in Trust-Based Decisions across Tasks. In Proceedings of the 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI '19). IEEE Press, New York, NY, USA, 39--47.
[236]
Rodrigo Yáñez-Gallardo and Sandra Valenzuela-Suazo. 2012. Critical incidents of trust erosion in leadership of head nurses. Revista Latino-Americana de Enfermagem 20, 1 (2012), 143--150. http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0104-11692012000100019&nrm=iso
[237]
Fumeng Yang, Zhuanyi Huang, Jean Scholtz, and Dustin L. Arendt. 2020. How Do Visual Explanations Foster End Users' Appropriate Trust in Machine Learning?. In Proceedings of the 25th International Conference on Intelligent User Interfaces (IUI '20). Association for Computing Machinery, New York, NY, USA, 189--201. https://doi.org/10.1145/3377325.3377480
[238]
Qian Yang, John Zimmerman, Aaron Steinfeld, Lisa Carey, and James F. Antaki. 2016. Investigating the Heart Pump Implant Decision Process: Opportunities for Decision Support Tools to Help. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '16). Association for Computing Machinery, New York, NY, USA, 4477--4488. https://doi.org/10.1145/2858036.2858373
[239]
X. Jessie Yang, Vaibhav V. Unhelkar, Kevin Li, and Julie A. Shah. 2017. Evaluating Effects of User Experience and System Transparency on Trust in Automation. In Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction (HRI '17). Association for Computing Machinery, New York, NY, USA, 408--416. https://doi.org/10.1145/2909824.3020230
[240]
J. Frank Yates. 1990. Judgment and decision making. Prentice-Hall, Inc, Englewood Cliffs, NJ, US. xvi, 430 pages.
[241]
Ming Yin, Jennifer Wortman Vaughan, and Hanna Wallach. 2019. Understanding the Effect of Accuracy on Trust in Machine Learning Models. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19). Association for Computing Machinery, New York, NY, USA, 1--12. https://doi.org/10.1145/3290605.3300509
[242]
Louise C. Young and Gerald S. Albaum. 2002. Developing a measure of trust in retail relationships: a direct selling application. School of Marketing, University of Technology of Sydney, Sydney Broadway, N.S.W, Australia.
[243]
Bowen Yu, Ye Yuan, Loren Terveen, Zhiwei Steven Wu, Jodi Forlizzi, and Haiyi Zhu. 2020. Keeping Designers in the Loop: Communicating Inherent Algorithmic Trade-Offs Across Multiple Objectives. Association for Computing Machinery, New York, NY, USA, 1245--1257. https://doi.org/10.1145/3357236.3395528
[244]
Kun Yu, Shlomo Berkovsky, Dan Conway, Ronnie Taib, Jianlong Zhou, and Fang Chen. 2016. Trust and Reliance Based on System Accuracy. In Proceedings of the 2016 Conference on User Modeling Adaptation and Personalization (UMAP '16). Association for Computing Machinery, New York, NY, USA, 223--227. https://doi.org/10.1145/2930238.2930290
[245]
Kun Yu, Shlomo Berkovsky, Ronnie Taib, Dan Conway, Jianlong Zhou, and Fang Chen. 2017. User Trust Dynamics: An Investigation Driven by Differences in System Performance. In Proceedings of the 22nd International Conference on Intelligent User Interfaces (IUI '17). Association for Computing Machinery, New York, NY, USA, 307--317. https://doi.org/10.1145/3025171.3025219
[246]
Kun Yu, Shlomo Berkovsky, Ronnie Taib, Jianlong Zhou, and Fang Chen. 2019. Do I Trust My Machine Teammate? An Investigation from Perception to Decision. In Proceedings of the 24th International Conference on Intelligent User Interfaces (IUI '19). Association for Computing Machinery, New York, NY, USA, 460--468. https://doi.org/10.1145/3301275.3302277
[247]
Beste F. Yuksel, Penny Collisson, and Mary Czerwinski. 2017. Brains or Beauty: How to Engender Trust in User-Agent Interactions. ACM Trans. Internet Technol. 17, 1 (2017), 20. https://doi.org/10.1145/2998572
[248]
Yunfeng Zhang, Q. Vera Liao, and Rachel K. E. Bellamy. 2020. Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* '20). Association for Computing Machinery, New York, NY, USA, 295--305. https://doi.org/10.1145/3351095.3372852


Published In

Proceedings of the ACM on Human-Computer Interaction, Volume 5, Issue CSCW2 (October 2021), 5376 pages. EISSN: 2573-0142. DOI: 10.1145/3493286
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 18 October 2021
Published in PACMHCI Volume 5, Issue CSCW2

Badges

  • Honorable Mention

Author Tags

  1. artificial intelligence
  2. decision making
  3. methodology
  4. trust

Qualifiers

  • Research-article

Funding Sources

  • ANR-11-LABX-65

Cited By

  • (2025) Human-AI collaboration is not very collaborative yet: a taxonomy of interaction patterns in AI-assisted decision making from a systematic review. Frontiers in Computer Science 6. https://doi.org/10.3389/fcomp.2024.1521066
  • (2025) Artificial intelligence-based organizational decision-making in recruitment practice. Human Systems Management 44, 1, 173--186. https://doi.org/10.3233/HSM-240044
  • (2025) Assessing the interplay of trust dynamics, personalization, ethical AI practices, and tourist behavior in the adoption of AI-driven smart tourism technologies. Journal of Open Innovation: Technology, Market, and Complexity 11, 1, 100455. https://doi.org/10.1016/j.joitmc.2024.100455
  • (2025) The critical role of trust in adopting AI-powered educational technology for learning: An instrument for measuring student perceptions. Computers and Education: Artificial Intelligence 8, 100368. https://doi.org/10.1016/j.caeai.2025.100368
  • (2024) Enhancing user experience and trust in advanced LLM-based conversational agents. Computing and Artificial Intelligence 2, 2, 1467. https://doi.org/10.59400/cai.v2i2.1467
  • (2024) Do patients prefer a human doctor, artificial intelligence, or a blend, and is this preference dependent on medical discipline? Empirical evidence and implications for medical practice. Frontiers in Psychology 15. https://doi.org/10.3389/fpsyg.2024.1422177
  • (2024) Exploring the Concept of Explainable AI and Developing Information Governance Standards for Enhancing Trust and Transparency in Handling Customer Data. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4879025
  • (2024) Comparative Analysis of Ethical Decision-Making and Trust Dynamics: Human Reasoning vs. ChatGPT-3 Narratives. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4771255
  • (2024) A Systematic Review on Fostering Appropriate Trust in Human-AI Interaction: Trends, Opportunities and Challenges. ACM Journal on Responsible Computing 1, 4, 1--45. https://doi.org/10.1145/3696449
  • (2024) Broken Trust: Does the Agent Matter? In Proceedings of the 12th International Conference on Human-Agent Interaction, 34--43. https://doi.org/10.1145/3687272.3688307
