
Predicting the need for XAI from high-granularity interaction data

Published: 01 July 2023

Abstract

Recent advances in Artificial Intelligence (AI) and Machine Learning (ML) have brought to light the need for explainability in multiple domains (e.g., healthcare, finance, justice, and recruiting). Explainability, or Explainable AI (XAI), can be defined as everything that makes AI more understandable to human beings. However, XAI features may vary according to the AI algorithm used. Beyond XAI features, different AI algorithms vary in terms of speed, performance, and the costs associated with training/running models. Knowing when to choose the right algorithm for the task at hand is therefore fundamental in multiple AI systems, for instance, AutoML and AutoAI. In this paper, we propose a method to analyze patterns of high-granularity user interface (UI) events (i.e., mouse, keyboard, and additional custom events triggered on the millisecond scale) to predict when users will interact with UI elements that provide explainability for the AI in place. In this context, this paper presents: (1) a user study involving 37 participants (7 in the pilot phase and 30 in the main experiment phase) in which people performed the task of reporting a bug using a text form associated with an AI data quality meter and its XAI UI element, and (2) an approach to model micro behavior using node2vec to predict when the interaction with the XAI UI element will occur. The proposed approach uses a rich dataset (approximately 129k events) and combines node2vec with a Logistic Regression classifier. The results show an event-by-event prediction of the interaction with XAI with an average F-score of 0.90 (σ = 0.06). These results are expected to support researchers in the realm of UI personalization in considering high-granularity interaction data when predicting the need for XAI while users interact with AI model outputs.
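
To make the pipeline concrete, below is a minimal sketch of the kind of approach the abstract describes: build a transition graph over UI event types, embed the nodes with node2vec, and train a Logistic Regression classifier to predict, event by event, whether the next interaction targets the XAI UI element. This is not the authors' implementation: the event stream is synthetic, event names such as xai_hover and xai_click and the windowed features are illustrative assumptions, and it relies on the open-source node2vec package together with scikit-learn.

    # Minimal sketch: embed UI event types with node2vec over an event-transition
    # graph, then classify event-by-event whether the next interaction targets
    # the XAI UI element. Synthetic data throughout; event names such as
    # "xai_hover" and "xai_click" are hypothetical stand-ins for logged events.
    # Requires: pip install networkx node2vec scikit-learn numpy
    import random

    import networkx as nx
    import numpy as np
    from node2vec import Node2Vec
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split

    PLAIN = ["mousemove", "mouseover", "click", "keydown", "keyup", "scroll"]

    def next_event(prev):
        # Toy structure so the task is learnable: hovering tends to precede
        # interaction with the (hypothetical) XAI UI element.
        if prev == "mouseover" and random.random() < 0.4:
            return "xai_hover"
        if prev == "xai_hover" and random.random() < 0.5:
            return "xai_click"
        return random.choice(PLAIN)

    random.seed(0)
    stream = ["mousemove"]
    for _ in range(5000):
        stream.append(next_event(stream[-1]))

    # Directed transition graph: nodes are event types, edge weights count
    # how often one event type immediately follows another.
    graph = nx.DiGraph()
    for a, b in zip(stream, stream[1:]):
        w = graph[a][b]["weight"] + 1 if graph.has_edge(a, b) else 1
        graph.add_edge(a, b, weight=w)

    # node2vec: biased random walks over the graph + skip-gram embeddings.
    n2v = Node2Vec(graph, dimensions=32, walk_length=20, num_walks=100,
                   p=1.0, q=1.0, workers=1)
    wv = n2v.fit(window=5, min_count=1).wv
    embed = {e: wv[e] for e in graph.nodes}

    # Feature per step: the embedding of the most recent event concatenated
    # with the mean embedding of the last WINDOW events.
    # Label: does the *next* event target the XAI element?
    WINDOW = 10
    X = [np.concatenate([embed[stream[i - 1]],
                         np.mean([embed[e] for e in stream[i - WINDOW:i]], axis=0)])
         for i in range(WINDOW, len(stream))]
    y = [int(stream[i].startswith("xai_")) for i in range(WINDOW, len(stream))]

    X_train, X_test, y_train, y_test = train_test_split(
        np.array(X), np.array(y), test_size=0.3, random_state=0, stratify=y)
    clf = LogisticRegression(max_iter=1000, class_weight="balanced")
    clf.fit(X_train, y_train)
    print("F1:", round(f1_score(y_test, clf.predict(X_test)), 3))

Using the concatenation of the last event's embedding with a window mean is one simple way to give the classifier both local and short-term context; the paper's actual feature construction may differ.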

Highlights

A study detailing different behaviors while users interact with explainability.
A user modeling approach to predicting interaction with explainability.
The proposed approach obtained an average F-score of 0.90.
Groups do not differ in terms of task time or event stream length (a statistical test sketch follows this list).
An extension for XAI taxonomy encompassing interaction with XAI.
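
The group-comparison highlight implies a standard hypothesis-testing step. Below is a minimal sketch of one common way to run such a comparison with SciPy, assuming two independent groups of task times; the data is synthetic and the exact tests the authors applied may differ.

    # Minimal sketch of a two-group comparison: check normality, then pick a
    # parametric or non-parametric test. Synthetic data; illustrative only.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    group_a = rng.lognormal(mean=5.0, sigma=0.4, size=15)  # task times (s)
    group_b = rng.lognormal(mean=5.1, sigma=0.4, size=15)  # task times (s)

    # Shapiro-Wilk normality check per group.
    normal = all(stats.shapiro(g).pvalue > 0.05 for g in (group_a, group_b))

    if normal:
        # Levene's test for equal variances, then the appropriate t-test.
        equal_var = stats.levene(group_a, group_b).pvalue > 0.05
        result = stats.ttest_ind(group_a, group_b, equal_var=equal_var)
    else:
        # Non-parametric alternative for independent samples.
        result = stats.mannwhitneyu(group_a, group_b)

    print(f"p-value: {result.pvalue:.3f} (> 0.05: no detectable difference)")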


Cited By

  • (2024) Challenges and Opportunities for Responsible Prompting. Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 10.1145/3613905.3636268, pp. 1-4. Online publication date: 11-May-2024.
  • (2024) Is It AI or Is It Me? Understanding Users’ Prompt Journey with Text-to-Image Generative AI Tools. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 10.1145/3613904.3642861, pp. 1-13. Online publication date: 11-May-2024.
  • (2024) Is mouse dynamics information credible for user behavior research? An empirical investigation. Computer Standards & Interfaces, 10.1016/j.csi.2024.103849, 90:C. Online publication date: 1-Aug-2024.



Published In

International Journal of Human-Computer Studies  Volume 175, Issue C
Jul 2023
112 pages

Publisher

Academic Press, Inc.

United States

Publication History

Published: 01 July 2023

Author Tags

  1. Explainability prediction
  2. Fine-grained interaction
  3. Micro behavior
  4. User behavior analysis
  5. Interaction log analysis
  6. Interaction prediction
  7. Node2vec

Qualifiers

  • Research-article

