VITAL: More Understandable Feature Visualization through Distribution Alignment and Relevant Information Flow
Ada Görgün, Bernt Schiele, Jonas Fischer
🚧 Full code is coming soon! Stay tuned.
🔗 Project Page: adagorgun.github.io/VITAL-Project
📄 Paper: arXiv 2503.22399
The code was tested in a virtual environment with Python 3.11. Install the following requirements:
- torch
- torchvision
- numpy
- Pillow
- matplotlib
- tqdm
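Assuming a standard pip workflow, the list above corresponds to:

```
pip install torch torchvision numpy Pillow matplotlib tqdm
```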
Class neuron visualization aims to reveal what a neural network "sees" when it thinks about a specific class (e.g., "dog" or "airplane"). This is done by generating an image that maximally activates the output neuron corresponding to that class. The resulting visualization gives insight into the features the model associates with that category—such as shapes, textures, or patterns.
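For reference, this is the classic activation-maximization baseline that VITAL builds on, shown as a minimal PyTorch sketch. The model, class index, learning rate, and step count are illustrative placeholders, and input normalization and regularization are omitted for brevity.

```python
import torch
import torchvision.models as models

# Classic activation maximization: optimize an input image so that the
# logit of a chosen class becomes as large as possible.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).eval()
for p in model.parameters():
    p.requires_grad_(False)

target_class = 207  # placeholder ImageNet class index
image = torch.randn(1, 3, 224, 224, requires_grad=True)  # start from noise
optimizer = torch.optim.Adam([image], lr=0.05)

for _ in range(512):
    optimizer.zero_grad()
    logits = model(image)
    loss = -logits[0, target_class]  # maximize the class logit
    loss.backward()
    optimizer.step()
```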
VITAL enhances this by aligning these visualizations with real-world feature distributions, resulting in clearer and more realistic class representations. This is achieved by matching the generated image's feature distribution to that of real images from the same class through the sort matching algorithm. The result is a more interpretable and meaningful visualization that can help us understand how the model perceives different classes.
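Below is a minimal sketch of the sort-matching idea, assuming feature maps flattened to (channels, spatial positions): sorting each channel and penalizing differences between the sorted values matches the two empirical distributions channel-wise (1D optimal transport under a squared-error cost). Which layers are matched and how the terms are weighted in VITAL follow the paper; treat this as a schematic.

```python
import torch

def sort_matching_loss(gen_feats, real_feats):
    # Both inputs: (C, N) tensors, C channels with N flattened spatial values.
    # Sorting each channel aligns the empirical distributions value-by-value.
    gen_sorted, _ = torch.sort(gen_feats, dim=1)
    real_sorted, _ = torch.sort(real_feats, dim=1)
    return torch.mean((gen_sorted - real_sorted) ** 2)

# Example: align a generated feature map with real class statistics.
gen = torch.randn(256, 14 * 14, requires_grad=True)
real = torch.randn(256, 14 * 14)  # stand-in for real-image features
sort_matching_loss(gen, real).backward()
```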
👉 You can explore the full implementation in the ./class_fvis/ directory.
Intermediate neuron visualization focuses on understanding how information is represented deep inside the network, rather than just at the classification layer. These internal neurons often respond to abstract concepts like "fur texture" or "wheel shapes," even if they're not directly tied to a class. By visualizing what activates these hidden neurons, we can uncover emergent concepts and compositional features the model builds up to make decisions.
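Concretely, visualizing an inner neuron only requires reading its activation out of a forward pass, e.g. with a PyTorch forward hook, and then maximizing it just as for a class logit. In this sketch the layer (layer3) and channel index are arbitrary placeholders.

```python
import torch
import torchvision.models as models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).eval()
for p in model.parameters():
    p.requires_grad_(False)

captured = {}

def hook(module, inputs, output):
    captured["feats"] = output  # feature map of the hooked layer

model.layer3.register_forward_hook(hook)  # placeholder layer choice

channel = 42  # placeholder neuron (channel) index
image = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for _ in range(512):
    optimizer.zero_grad()
    model(image)
    # Traditional objective: maximize the mean activation of one channel.
    loss = -captured["feats"][0, channel].mean()
    loss.backward()
    optimizer.step()
```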
VITAL improves this process by filtering neurons based on their relevance and by guiding the visualizations with real feature statistics, leading to more meaningful and interpretable representations. Instead of simply maximizing a neuron's activation as in traditional methods, VITAL traces how much relevant information flows from the neuron toward the model's final decision for that class and aligns the feature distribution of generated images with that of the real images that activate the target neuron the most.
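As a rough stand-in for "relevant information flow", the sketch below ranks channels by a simple gradient × activation score toward the target-class logit. VITAL's actual relevance computation is defined in the paper; this proxy is an assumption used purely for illustration.

```python
import torch
import torchvision.models as models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).eval()

captured = {}
model.layer3.register_forward_hook(lambda m, i, o: captured.update(feats=o))

image = torch.randn(1, 3, 224, 224)  # stand-in for a real input image
target_class = 207                   # placeholder class index

logits = model(image)
feats = captured["feats"]
feats.retain_grad()  # keep the gradient of this intermediate tensor
logits[0, target_class].backward()

# Gradient x activation, summed spatially: a crude per-channel proxy for
# how much each channel contributes to the target-class logit.
relevance = (feats.grad * feats).sum(dim=(2, 3)).squeeze(0)
top_channels = relevance.topk(10).indices  # candidate neurons to visualize
```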
👉 You can explore the full implementation in the ./inner_fvis/ directory.
Coming soon!
For questions, feel free to contact:
Ada Görgün
📧 agoerguen@mpi-inf.mpg.de
🔗 adagorgun.github.io
If you use this work in your research, please cite:
@misc{gorgun2025vitalunderstandablefeaturevisualization,
      title={VITAL: More Understandable Feature Visualization through Distribution Alignment and Relevant Information Flow},
      author={Ada Gorgun and Bernt Schiele and Jonas Fischer},
      year={2025},
      eprint={2503.22399},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2503.22399},
}
This repository will be updated shortly. Thank you for your interest!