
A survey of visual analytics for Explainable Artificial Intelligence methods

Published: 01 February 2022

Abstract

With advances in computing power and technology, deep learning (DL) models have achieved impressive performance in domains such as medicine, finance, and autonomous vehicles. However, because of their black-box structure, the decisions of these models often need to be explained to end-users. Explainable Artificial Intelligence (XAI) provides tools, techniques, and algorithms that reveal the behavior and underlying decision-making mechanisms of black-box models. Visualization techniques help present model and prediction explanations in a more understandable and interpretable way. This survey reviews current trends and challenges in visual analytics for interpreting DL models through XAI methods, and presents future research directions in this area. We reviewed the literature along two aspects: model usage and visual approaches. We address several research questions based on our findings, and then discuss missing points, research gaps, and potential future research directions. The survey provides guidelines for developing better interpretations of neural networks through XAI methods in the field of visual analytics.
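To make the idea of a post-hoc, model-agnostic explanation concrete, the following is a minimal sketch of a perturbation-based feature-attribution method in the spirit of the occlusion/perturbation XAI techniques such surveys cover. The black-box model, baseline value, and data here are illustrative assumptions, not taken from the paper.

```python
def black_box(x):
    # Stand-in for an opaque model: the explainer only calls it,
    # never inspects its internals.
    return 3.0 * x[0] - 2.0 * x[1] + 0.5 * x[2]

def occlusion_attribution(model, x, baseline=0.0):
    """Score each feature by how much the prediction changes when
    that feature is replaced with a baseline value."""
    base_pred = model(x)
    attributions = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline  # "occlude" one feature at a time
        attributions.append(base_pred - model(perturbed))
    return attributions

x = [1.0, 1.0, 1.0]
attrs = occlusion_attribution(black_box, x)
print(attrs)  # -> [3.0, -2.0, 0.5]
```

Attribution scores like these are exactly the kind of per-feature explanation that visual analytics tools then render, for example as saliency maps over images or bar charts over tabular features.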


Highlights

A comprehensive survey of visual analytics for interpreting neural networks, particularly work that adopts explainable artificial intelligence (XAI) methods, is conducted.
We reviewed the literature based on model usage and visual approaches.
We identify visual approaches commonly used to support the illustration of XAI methods for various data types and machine learning models; however, a generic approach is still needed for the field.
We list several directions for future work, including data manipulation, scalability, bias in data representation, and generalizable real-time visualizations integrating XAI.



        Published In

Computers and Graphics, Volume 102, Issue C, February 2022, 670 pages

Publisher

Pergamon Press, Inc., United States


        Author Tags

        1. Explainable Artificial Intelligence
        2. Interpretable neural networks
        3. Visual analytics
        4. Black-box models
