MedVAE: Efficient Automated Interpretation of Medical Images with Large-Scale Generalizable Autoencoders

Hugging Face · PyPI · arXiv · License

This repository contains the official PyTorch implementation for MedVAE: Efficient Automated Interpretation of Medical Images with Large-Scale Generalizable Autoencoders.

Overview

🫁 What is MedVAE?

MedVAE is a family of six large-scale, generalizable 2D and 3D variational autoencoders (VAEs) designed for medical imaging, trained on over one million medical images spanning multiple anatomical regions and modalities. MedVAE autoencoders encode medical images into downsized latent representations and decode those latents back into high-resolution images. Across diverse tasks drawn from 20 medical image datasets, we demonstrate that using MedVAE latent representations in place of high-resolution images when training downstream models yields efficiency benefits (up to a 70x improvement in throughput) while preserving clinically relevant features.

⚡️ Installation

To install MedVAE, you can simply run:

pip install medvae

For an editable installation, use the following commands to clone and install this repository.

git clone https://github.com/StanfordMIMI/MedVAE.git
cd MedVAE
pip install -e ".[dev]"
pre-commit install
pre-commit
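
To confirm that the package is importable after installation, a quick sanity check (illustrative, not part of the official docs) is to import the MVAE class used in the inference example below:

python -c "from medvae import MVAE; print('MedVAE import OK')"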

🚀 Inference Instructions

import torch
from medvae import MVAE

fpath = "documentation/data/mmg_data/isJV8hQ2hhJsvEP5rdQNiy.png"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load a 2D MedVAE model for x-ray images and move it to the available device.
model = MVAE(model_name="medvae_4_3_2d", modality="xray").to(device)

# Apply the model's built-in preprocessing transform to the input image.
img = model.apply_transform(fpath).to(device)

# Freeze the weights and switch to evaluation mode for inference.
model.requires_grad_(False)
model.eval()

# Encode the image into its downsized latent representation.
with torch.no_grad():
    latent = model(img)
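
If you plan to reuse the latents for downstream training, one simple option is to cache them to disk with standard PyTorch utilities. The snippet below is an illustrative sketch, not part of the MedVAE API; the output directory and file name are arbitrary examples.

import os

os.makedirs("latents", exist_ok=True)
# Move the latent to CPU and save it for later use (path is an example).
torch.save(latent.cpu(), "latents/isJV8hQ2hhJsvEP5rdQNiy.pt")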

We also developed an easy-to-use CLI inference tool for compressing your high-dimensional medical images into usable latents:

medvae_inference -i INPUT_FOLDER -o OUTPUT_FOLDER -model_name MED_VAE_MODEL -modality MODALITY
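
For example, an invocation mirroring the Python snippet above might look like the following (the input and output folder names are placeholders):

medvae_inference -i ./mmg_data -o ./latents -model_name medvae_4_3_2d -modality xray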

For more information, please check our inference documentation and demo.

🔧 Finetuning Instructions

Easily finetune MedVAE on your own dataset! Follow the instructions below (requires Python 3.9 and a cloned copy of this repository).

Run the following commands depending on your finetuning scenario:

Stage 1 (2D) Finetuning:

medvae_finetune experiment=medvae_4x_1c_2d_finetuning

Stage 2 (2D) Finetuning:

medvae_finetune_s2 experiment=medvae_4x_1c_2d_s2_finetuning

Stage 2 (3D) Finetuning:

medvae_finetune experiment=medvae_4x_1c_3d_finetuning

This setup supports multi-GPU training and includes integration with Weights & Biases for experiment tracking.

For detailed finetuning guidelines, see the Finetuning Documentation.

To create classification models using downsized latent representations, refer to the Classification Documentation.
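
As a rough illustration of that idea (not the repository's actual classification pipeline), a downstream classifier can consume cached latents directly. The dataset layout, label handling, and architecture below are assumptions made for this sketch:

import glob
import torch
from torch import nn
from torch.utils.data import Dataset, DataLoader

class LatentDataset(Dataset):
    """Loads pre-computed MedVAE latents saved as .pt files (hypothetical layout)."""

    def __init__(self, latent_dir, labels):
        self.paths = sorted(glob.glob(f"{latent_dir}/*.pt"))
        self.labels = labels  # assumed: one integer label per latent file

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        latent = torch.load(self.paths[idx]).squeeze(0)  # drop a leading batch dim if present
        return latent, self.labels[idx]

# A small classifier over latent feature maps (architecture and sizes are illustrative).
classifier = nn.Sequential(
    nn.AdaptiveAvgPool2d(1),  # pool the latent spatial grid to 1x1
    nn.Flatten(),
    nn.Linear(3, 2),          # assumes 3 latent channels and 2 classes
)

# Example usage (paths and labels are placeholders):
# loader = DataLoader(LatentDataset("latents", labels=[0, 1]), batch_size=8)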

📎 Citation

If you find this repository useful for your work, please cite the following paper:

@misc{varma2025medvaeefficientautomatedinterpretation,
      title={MedVAE: Efficient Automated Interpretation of Medical Images with Large-Scale Generalizable Autoencoders}, 
      author={Maya Varma and Ashwin Kumar and Rogier van der Sluijs and Sophie Ostmeier and Louis Blankemeier and Pierre Chambon and Christian Bluethgen and Jip Prince and Curtis Langlotz and Akshay Chaudhari},
      year={2025},
      eprint={2502.14753},
      archivePrefix={arXiv},
      primaryClass={eess.IV},
      url={https://arxiv.org/abs/2502.14753}, 
}

This repository is powered by Hydra and HuggingFace Accelerate. Our implementation of MedVAE is inspired by prior work on diffusion models from CompVis and Stability AI.
