xAI 2023: Lisbon, Portugal
- Luca Longo: Explainable Artificial Intelligence - First World Conference, xAI 2023, Lisbon, Portugal, July 26-28, 2023, Proceedings, Part II. Communications in Computer and Information Science 1902, Springer 2023, ISBN 978-3-031-44066-3
Surveys, Benchmarks, Visual Representations and Applications for xAI
- Igor Cherepanov, David Sessler, Alex Ulmer, Hendrik Lücke-Tieke, Jörn Kohlhammer: Towards the Visualization of Aggregated Class Activation Maps to Analyse the Global Contribution of Class Features. 3-23
- Antonin Poché, Lucas Hervier, Mohamed Chafik Bakkay: Natural Example-Based Explainability: A Survey. 24-47
- Blerta Abazi Chaushi, Besnik Selimi, Agron Chaushi, Marika Apostolova: Explainable Artificial Intelligence in Education: A Comprehensive Review. 48-71
- Xiaowei Liu, Kevin McAreavey, Weiru Liu: Contrastive Visual Explanations for Reinforcement Learning via Counterfactual Rewards. 72-87
- Mohamed Karim Belaid, Richard Bornemann, Maximilian Rabus, Ralf Krestel, Eyke Hüllermeier: Compare-xAI: Toward Unifying Functional Testing Methods for Post-hoc XAI Algorithms into a Multi-dimensional Benchmark. 88-109
- Laura State, Hadrien Salat, Stefania Rubrichi, Zbigniew Smoreda: Explainability in Practice: Estimating Electrification Rates from Mobile Phone Data in Senegal. 110-125
- Daniel Gierse, Felix Neubürger, Thomas Kopinski: A Novel Architecture for Robust Explainable AI Approaches in Critical Object Detection Scenarios Based on Bayesian Neural Networks. 126-147
xAI for Decision-Making and Human-AI Collaboration, for Machine Learning on Graphs with Ontologies and Graph Neural Networks
- Luca Corbucci, Riccardo Guidotti, Anna Monreale: Explaining Black-Boxes in Federated Learning. 151-163
- Erwin Walraven, Ajaya Adhikari, Cor J. Veenman: PERFEX: Classifier Performance Explanations for Trustworthy AI Systems. 164-180
- Charles Wan, Rodrigo Belo, Leid Zejnilovic, Susana Lavado: The Duet of Representations and How Explanations Exacerbate It. 181-197
- Thales Bertaglia, Stefan Huber, Catalina Goanta, Gerasimos Spanakis, Adriana Iamnitchi: Closing the Loop: Testing ChatGPT to Generate Model Explanations to Improve Human Labelling of Sponsored Content on Social Media. 198-213
- Arthur Picard, Yazan Mualla, Franck Gechter, Stéphane Galland: Human-Computer Interaction and Explainability: Intersection and Terminology. 214-236
- Javier Jiménez Raboso, Antonio Manjavacas, Alejandro Campoy-Nieves, Miguel Molina-Solana, Juan Gómez-Romero: Explaining Deep Reinforcement Learning-Based Methods for Control of Building HVAC Systems. 237-255
- Martina Cinquini, Fosca Giannotti, Riccardo Guidotti, Andrea Mattei: Handling Missing Values in Local Post-hoc Explainability. 256-278
- Christophe Labreuche, Roman Bresson: Necessary and Sufficient Explanations of Multi-Criteria Decision Aiding Models, with and Without Interacting Criteria. 279-302
- Eli J. Laird, Ayesh Madushanka, Elfi Kraka, Corey Clark: XInsight: Revealing Model Insights for GNNs with Flow-Based Explanations. 303-320
- Hongbo Bo, Yiwen Wu, Zinuo You, Ryan McConville, Jun Hong, Weiru Liu: What Will Make Misinformation Spread: An XAI Perspective. 321-337
- Jonas Teufel, Luca Torresi, Patrick Reiser, Pascal Friederich: MEGAN: Multi-explanation Graph Attention Network. 338-360
- Jonas Teufel, Luca Torresi, Pascal Friederich: Quantifying the Intrinsic Usefulness of Attributional Explanations for Graph Neural Networks with Artificial Simulatability Studies. 361-381
- Claudio Borile, Alan Perotti, André Panisson: Evaluating Link Prediction Explanations for Graph Neural Networks. 382-401
Actionable eXplainable AI, Semantics and Explainability, and Explanations for Advice-Giving Systems
- Danilo Cavaliere, Mariacristina Gallo, Claudio Stanzione: Propaganda Detection Robustness Through Adversarial Attacks Driven by eXplainable AI. 405-419
- Leonardo Arrighi, Sylvio Barbon Junior, Felice Andrea Pellegrino, Michele Simonato, Marco Zullich: Explainable Automated Anomaly Recognition in Failure Analysis: is Deep Learning Doing it Correctly? 420-432
- Deepan Chakravarthi Padmanabhan, Paul G. Plöger, Octavio Arriaga, Matias Valdenegro-Toro: DExT: Detector Explanation Toolkit. 433-456
- Md Shajalal, Sebastian Denef, Md. Rezaul Karim, Alexander Boden, Gunnar Stevens: Unveiling Black-Boxes: Explainable Deep Learning Models for Patent Classification. 457-474
- Francesco Dibitonto, Fabio Garcea, André Panisson, Alan Perotti, Lia Morra: HOLMES: HOLonym-MEronym Based Semantic Inspection for Convolutional Image Classifiers. 475-498
- Georgii Mikriukov, Gesina Schwalbe, Christian Hellert, Korinna Bade: Evaluating the Stability of Semantic Concept Representations in CNNs for Robust Explainability. 499-524
- Alan Perotti, Simone Bertolotto, Eliana Pastor, André Panisson: Beyond One-Hot-Encoding: Injecting Semantics to Drive Image Classifiers. 525-548
- Kirill Bykov, Laura Kopf, Marina M.-C. Höhne: Finding Spurious Correlations with Function-Semantic Contrast Analysis. 549-572
- Zhangyi Wu, Tim Draws, Federico Cau, Francesco Barile, Alisa Rieger, Nava Tintarev: Explaining Search Result Stances to Opinionated People. 573-596
- Roan Schellingerhout, Francesco Barile, Nava Tintarev: A Co-design Study for Multi-stakeholder Job Recommender System Explanations. 597-620
- Clara Punzi, Aleksandra Maslennikova, Gizem Gezici, Roberto Pellungrini, Fosca Giannotti: Explaining Socio-Demographic and Behavioral Patterns of Vaccination Against the Swine Flu (H1N1) Pandemic. 621-635
- Jacqueline Höllig, Aniek F. Markus, Jef de Slegte, Prachi Bagave: Semantic Meaningfulness: Evaluating Counterfactual Approaches for Real-World Plausibility and Feasibility. 636-659