DOI: 10.1145/3639476.3639759

XAIport: A Service Framework for the Early Adoption of XAI in AI Model Development

Published: 24 May 2024

Abstract

In this study, we propose the early adoption of Explainable AI (XAI) in AI model development, focusing on three properties. Quality of explanation: explanation summaries should be consistent across multiple XAI methods. Architectural compatibility: for effective integration, the architectural styles of both the XAI methods and the models to be explained must be compatible with the framework. Configurable operations: XAI explanations are operable, akin to machine learning operations; thus, an explanation of an AI model should be reproducible and tractable to be trustworthy. We present XAIport, a framework of XAI microservices encapsulated in Open APIs that delivers early explanations as observations for learning-model quality assurance. XAIport enables configurable XAI operations alongside machine learning development. We quantify the operational costs of incorporating XAI with three cloud computer vision services: Microsoft Azure Cognitive Services, Google Cloud Vertex AI, and Amazon Rekognition. Our findings show comparable operational costs between XAI and traditional machine learning, with XAIport significantly improving both cloud AI model performance and explanation stability.
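To make the configurable-operations idea concrete, the sketch below shows how a client might drive an XAIport-style Open API from Python: register a cloud vision model, configure which XAI methods to run, and request an explanation for a single prediction. This is a minimal illustration only; the base URL, endpoint paths, field names, and method identifiers are assumptions made for this sketch and are not taken from the paper.

# Illustrative client for an XAIport-style Open API. All endpoints, field
# names, and the base URL are hypothetical; consult the actual XAIport
# documentation for the real interface.
import requests

XAIPORT_URL = "http://localhost:8000"  # hypothetical local deployment

# 1. Register the cloud vision model whose predictions should be explained.
model_cfg = {
    "model_id": "azure-vision-demo",          # hypothetical identifier
    "provider": "azure_cognitive_services",   # one of the three clouds studied
    "task": "image_classification",
}
requests.post(f"{XAIPORT_URL}/models", json=model_cfg, timeout=30).raise_for_status()

# 2. Configure an explanation operation: which XAI methods to run and whether
#    to check that their explanation summaries are consistent with each other.
xai_cfg = {
    "model_id": "azure-vision-demo",
    "methods": ["grad_cam", "shap"],          # placeholder method names
    "consistency_check": True,
}
requests.post(f"{XAIPORT_URL}/xai/configurations", json=xai_cfg, timeout=30).raise_for_status()

# 3. Request an explanation for one prediction and inspect the result,
#    e.g. per-method saliency summaries plus a stability score.
with open("sample.jpg", "rb") as f:
    resp = requests.post(
        f"{XAIPORT_URL}/xai/explanations",
        files={"image": f},
        data={"model_id": "azure-vision-demo"},
        timeout=60,
    )
resp.raise_for_status()
print(resp.json())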


Cited By

  • An Open API Architecture to Discover the Trustworthy Explanation of Cloud AI Services. IEEE Transactions on Cloud Computing 12, 2 (Apr 2024), 762--776. DOI: 10.1109/TCC.2024.3398609
  • Cloud-Based XAI Services for Assessing Open Repository Models Under Adversarial Attacks. In 2024 IEEE International Conference on Software Services Engineering (SSE), 141--152. Online publication date: 7 Jul 2024. DOI: 10.1109/SSE62657.2024.00031



        Published In

        ICSE-NIER'24: Proceedings of the 2024 ACM/IEEE 44th International Conference on Software Engineering: New Ideas and Emerging Results
        April 2024
        127 pages
        ISBN:9798400705007
        DOI:10.1145/3639476
        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

        In-Cooperation

        • Faculty of Engineering of University of Porto

        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Publication History

        Published: 24 May 2024


        Author Tags

        1. XAI
        2. MLOps
        3. operational cost analysis
        4. deployment strategy

        Qualifiers

        • Research-article

        Conference

        ICSE-NIER'24

        Article Metrics

        • Downloads (Last 12 months)78
        • Downloads (Last 6 weeks)8
        Reflects downloads up to 14 Dec 2024

