
ChenLiu-1996/BrainFM


Brain Foundation Model

Krishnaswamy Lab, Yale University

Twitter

Steps to reproduce

  1. Data preparation. If you work on Misha, you can find the preprocessed data at /gpfs/radev/home/cl2482/project/BrainFM/data/Dynamic_Natural_Vision.

1.1 Download Dynamic Natural Vision data.

cd src/data_download
python download_DNV.py

1.2 Unzip the downloaded archives (sorry, this step is messy). The desired structure is: data/Dynamic_Natural_Vision/{subject1,subject2,subject3,video}. Under {subject1,subject2,subject3} there are {fmri,smri} folders. Under {video} there are {seg1.mp4,seg2.mp4,...,test5.mp4}.
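Since the unzip step is manual, a small check of the resulting layout can save debugging later. The helper below is not part of the repo; it just enumerates the directory structure described above (folder names taken from this README) so you can list what is still missing:

```python
from pathlib import Path

# Hypothetical helper (not part of the repo): enumerate the directory
# layout described above so the unzip result can be verified.
def expected_layout(root="data/Dynamic_Natural_Vision"):
    root = Path(root)
    paths = []
    for subject in ("subject1", "subject2", "subject3"):
        for modality in ("fmri", "smri"):
            paths.append(root / subject / modality)
    paths.append(root / "video")
    return paths

def missing_dirs(root="data/Dynamic_Natural_Vision"):
    # Directories still absent after unzipping.
    return [p for p in expected_layout(root) if not p.is_dir()]
```

Run `missing_dirs()` from the repo root; an empty list means the layout matches.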

1.3 Preprocess the video. This will create a {video_frames} folder.

cd src/preprocessing
python preprocess_DNV_videos.py
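Extracting frames implicitly aligns video time with fMRI time. As a rough sketch of that alignment, the function below maps an fMRI TR index to the video frame indices it covers. The 30 fps frame rate and 2 s TR are assumptions based on the Dynamic Natural Vision dataset description, not values read from preprocess_DNV_videos.py:

```python
# Sketch (assumed values): map each fMRI TR to the video frames it spans,
# given a frame rate of 30 fps and a repetition time of 2 s.
def frame_indices_for_tr(tr_index, tr_seconds=2.0, fps=30):
    start = round(tr_index * tr_seconds * fps)
    stop = round((tr_index + 1) * tr_seconds * fps)
    return list(range(start, stop))
```

With these defaults, each TR corresponds to 60 consecutive frames.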

1.4 Preprocess the fMRI. This will create a {fMRI_scans} folder.

cd src/preprocessing
python preprocess_DNV_fmri.py
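The exact operations inside preprocess_DNV_fmri.py are not documented here. One step such a script typically includes is per-voxel temporal normalization; the sketch below is an assumed example of that, not the repo's actual code. A 4-D volume would usually be loaded first with nibabel (`nib.load(path).get_fdata()`):

```python
import numpy as np

# Assumed normalization sketch: z-score each voxel's time series.
# This is an illustration, not the actual contents of preprocess_DNV_fmri.py.
def zscore_voxels(fmri, eps=1e-8):
    # fmri: array of shape (X, Y, Z, T); normalize along the time axis.
    mean = fmri.mean(axis=-1, keepdims=True)
    std = fmri.std(axis=-1, keepdims=True)
    return (fmri - mean) / (std + eps)
```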
  2. Train.

cd src/
python main.py --batch-size 4 --max-training-iters 64 --max-validation-iters 64
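For reference, a minimal argparse setup matching the command line above would look like the sketch below. The real main.py almost certainly defines more options and different defaults, so treat this only as an illustration of the three flags shown:

```python
import argparse

# Minimal parser matching the training command above; the defaults here
# are assumptions taken from the example invocation, not from main.py.
def build_parser():
    parser = argparse.ArgumentParser(description="BrainFM training")
    parser.add_argument("--batch-size", type=int, default=4)
    parser.add_argument("--max-training-iters", type=int, default=64)
    parser.add_argument("--max-validation-iters", type=int, default=64)
    return parser
```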

Dependencies

We developed the codebase in a miniconda environment, created as follows:

# Optional: Update to libmamba solver.
conda update -n base conda
conda install -n base conda-libmamba-solver
conda config --set solver libmamba

conda create --name brainfm pytorch==2.1.0 torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia -c anaconda -c conda-forge -y
conda activate brainfm
conda install scikit-learn scikit-image pillow matplotlib seaborn tqdm -c pytorch -c anaconda -c conda-forge -y
conda install accelerate -c conda-forge -y
conda install nibabel -y
python -m pip install opencv-python
python -m pip install torch_geometric einops
python -m pip install git+https://github.com/openai/CLIP.git


# For NeuroClips
python -m pip install webdataset pytorch-lightning einops kornia open-clip-torch omegaconf transformers
python -m pip install git+https://github.com/openai/CLIP.git
python -m pip install "diffusers[torch]==0.21.4" transformers huggingface_hub==0.25.2
python -m pip install xformers==0.0.22.post7
python -m pip install dalle2-pytorch==1.15.6
python -m pip install huggingface_hub
python -m pip install natsort
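After the install commands above, a quick import check catches missing packages before a long training run. Note that several import names differ from their pip/conda package names (scikit-learn → sklearn, scikit-image → skimage, pillow → PIL, opencv-python → cv2, CLIP → clip):

```python
import importlib.util

# Import names for the key packages installed above; the list mirrors
# the conda/pip commands in this README.
REQUIRED = ["torch", "torchvision", "sklearn", "skimage", "PIL",
            "matplotlib", "seaborn", "tqdm", "accelerate", "nibabel",
            "cv2", "torch_geometric", "einops", "clip"]

def missing_packages(names=REQUIRED):
    # find_spec returns None when a top-level package cannot be resolved.
    return [n for n in names if importlib.util.find_spec(n) is None]
```

An empty list from `missing_packages()` means the environment resolves all of the key imports.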
