Understanding User Reliance on AI in Assisted Decision-Making

Published: 11 November 2022

Abstract

Proper calibration of human reliance on AI is fundamental to achieving complementary performance in AI-assisted human decision-making. Most previous work has assessed user reliance, and more broadly trust, retrospectively, through user perceptions and task-based measures. In this work, we explore the relationship between eye gaze and reliance under varying task difficulties and AI performance levels in a spatial reasoning task. Our results show a strong positive correlation between percent gaze duration on the AI suggestion and user-AI task agreement, as well as user-perceived reliance. Moreover, user agency is preserved particularly when the task is easy and when AI performance is low or inconsistent. Our results also reveal nuanced differences between reliance and trust. We discuss the potential of using eye gaze to gauge human reliance on AI in real time, enabling adaptive AI assistance for optimal human-AI team performance.
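The abstract's key measure, percent gaze duration on the AI suggestion, and its correlation with user-AI agreement can be sketched in a few lines. This is a minimal illustration under assumed data structures (the fixation format and area-of-interest rectangle are hypothetical, not the paper's actual pipeline):

```python
# Hedged sketch: share of fixation time spent on an AI-suggestion area of
# interest (AOI), plus a plain Pearson correlation against per-trial
# agreement. Fixation format and AOI bounds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float            # screen coordinates of the fixation
    y: float
    duration_ms: float  # dwell time of this fixation

def pct_gaze_on_aoi(fixations, aoi):
    """Fraction of total fixation time landing inside the AOI rectangle."""
    x0, y0, x1, y1 = aoi
    total = sum(f.duration_ms for f in fixations)
    if total == 0:
        return 0.0
    on_aoi = sum(f.duration_ms for f in fixations
                 if x0 <= f.x <= x1 and y0 <= f.y <= y1)
    return on_aoi / total

def pearson_r(xs, ys):
    """Pearson correlation coefficient, no external dependencies."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy)
```

In use, `pct_gaze_on_aoi` would be computed per trial and correlated (via `pearson_r` or `scipy.stats.pearsonr`) against a binary agreement indicator across trials.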



Information

Published In

Proceedings of the ACM on Human-Computer Interaction, Volume 6, Issue CSCW2 (CSCW)
November 2022, 8205 pages
EISSN: 2573-0142
DOI: 10.1145/3571154
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. decision making
  2. decision support tools
  3. gaze
  4. human-ai interaction
  5. trust

Qualifiers

  • Research-article

Funding Sources

  • National Science Foundation

