Going Beyond Saliency Maps: Training Deep Models to Interpret Deep Models

  • Conference paper
Information Processing in Medical Imaging (IPMI 2021)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12729)

Abstract

Interpretability is a critical factor in applying complex deep learning models to advance the understanding of brain disorders in neuroimaging studies. To interpret the decision process of a trained classifier, existing techniques typically rely on saliency maps to quantify the voxel-wise or feature-level importance for classification through partial derivatives. Despite providing some level of localization, these maps are not human-understandable from the neuroscience perspective as they often do not inform the specific type of morphological changes linked to the brain disorder. Inspired by the image-to-image translation scheme, we propose to train simulator networks to inject (or remove) patterns of the disease into a given MRI based on a warping operation, such that the classifier increases (or decreases) its confidence in labeling the simulated MRI as diseased. To increase the robustness of training, we propose to couple the two simulators into a unified model based on conditional convolution. We applied our approach to interpreting classifiers trained on a synthetic dataset and two neuroimaging datasets to visualize the effect of Alzheimer’s disease and alcohol dependence. Compared to the saliency maps generated by baseline approaches, our simulations and visualizations based on the Jacobian determinants of the warping field reveal meaningful and understandable patterns related to the diseases.
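
To make the mechanics above concrete, the following is a minimal PyTorch sketch, not the authors' released implementation (see the code link under Notes), of the three ingredients the abstract describes: a simulator that predicts a voxel-wise displacement field and warps the input MRI with it, a conditionally parameterized convolution that blends two kernel banks according to an inject/remove flag so that both simulation directions share one network, and the Jacobian determinant of the warping field used for visualization. Module names (CondConv3d, WarpSimulator), network depth, and the toy volume size are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


class CondConv3d(nn.Module):
    """Conditionally parameterized 3-D convolution: two kernel banks (one per
    simulation direction) are blended per sample by the condition vector."""

    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(2, out_ch, in_ch, k, k, k))
        self.bias = nn.Parameter(torch.zeros(2, out_ch))
        self.pad = k // 2

    def forward(self, x, cond):
        # cond: (B, 2) one-hot or soft weights for [inject, remove]
        w = torch.einsum("bc,coikjl->boikjl", cond, self.weight)  # per-sample kernels
        b = cond @ self.bias
        out = [F.conv3d(x[i:i + 1], w[i], b[i], padding=self.pad) for i in range(x.shape[0])]
        return torch.cat(out, dim=0)


class WarpSimulator(nn.Module):
    """Predicts a displacement field from an MRI and warps the MRI with it."""

    def __init__(self, ch=8):
        super().__init__()
        self.enc = CondConv3d(1, ch)
        self.dec = CondConv3d(ch, 3)  # 3 channels = (dx, dy, dz) voxel offsets

    def forward(self, img, cond):
        feat = F.relu(self.enc(img, cond))
        disp = self.dec(feat, cond)          # (B, 3, D, H, W)
        return warp(img, disp), disp


def warp(img, disp):
    """Resample img at (identity grid + displacement); displacement in voxels."""
    B, _, D, H, W = img.shape
    zz, yy, xx = torch.meshgrid(
        torch.arange(D, dtype=img.dtype, device=img.device),
        torch.arange(H, dtype=img.dtype, device=img.device),
        torch.arange(W, dtype=img.dtype, device=img.device),
        indexing="ij")
    grid = torch.stack((xx, yy, zz), dim=-1)            # (D, H, W, 3) in (x, y, z)
    new = grid + disp.permute(0, 2, 3, 4, 1)            # add (dx, dy, dz)
    size = torch.tensor([W, H, D], dtype=img.dtype, device=img.device)
    new = 2.0 * new / (size - 1) - 1.0                  # normalize to [-1, 1]
    return F.grid_sample(img, new, align_corners=True)


def jacobian_determinant(disp):
    """Finite-difference Jacobian determinant of x -> x + disp(x).
    Values < 1 mark local shrinkage (atrophy-like), > 1 local expansion."""
    ddx = torch.gradient(disp, dim=4)[0]   # d(disp)/dx, each (B, 3, D, H, W)
    ddy = torch.gradient(disp, dim=3)[0]
    ddz = torch.gradient(disp, dim=2)[0]
    rows = [
        torch.stack([1 + ddx[:, 0], ddy[:, 0], ddz[:, 0]], dim=-1),
        torch.stack([ddx[:, 1], 1 + ddy[:, 1], ddz[:, 1]], dim=-1),
        torch.stack([ddx[:, 2], ddy[:, 2], 1 + ddz[:, 2]], dim=-1),
    ]
    jac = torch.stack(rows, dim=-2)        # (B, D, H, W, 3, 3)
    return torch.linalg.det(jac)           # (B, D, H, W)


if __name__ == "__main__":
    mri = torch.randn(2, 1, 32, 32, 32)              # toy 3-D volumes
    cond = torch.tensor([[1.0, 0.0], [0.0, 1.0]])    # inject vs. remove
    warped, disp = WarpSimulator()(mri, cond)
    logdet = jacobian_determinant(disp).clamp(min=1e-6).log()
    print(warped.shape, logdet.shape)                # log|J| is the visualization map

In the setting described above, the warped image would additionally be passed to the frozen classifier and the simulator trained so that the classifier's confidence moves in the requested direction; voxel-wise log|J| < 0 then indicates simulated tissue shrinkage and log|J| > 0 expansion.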

Notes

  1. Source code can be found at https://github.com/ZucksLiu/DeepInterpret.

Acknowledgements

This work was supported by NIH Grants MH113406, AA005965, AA010723, and AA017347, and by Stanford HAI AWS Cloud Credit.

Author information

Corresponding author

Correspondence to Qingyu Zhao.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Liu, Z., Adeli, E., Pohl, K.M., Zhao, Q. (2021). Going Beyond Saliency Maps: Training Deep Models to Interpret Deep Models. In: Feragen, A., Sommer, S., Schnabel, J., Nielsen, M. (eds) Information Processing in Medical Imaging. IPMI 2021. Lecture Notes in Computer Science, vol. 12729. Springer, Cham. https://doi.org/10.1007/978-3-030-78191-0_6

  • DOI: https://doi.org/10.1007/978-3-030-78191-0_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-78190-3

  • Online ISBN: 978-3-030-78191-0

  • eBook Packages: Computer Science, Computer Science (R0)
