CNN-VAE

A ResNet-style VAE with an adjustable perceptual loss computed using a pre-trained VGG19.
Based on "Deep Feature Consistent Variational Autoencoder".
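
As a rough illustration of how a VGG-based perceptual (feature) loss can be wired up, the sketch below compares feature maps from a frozen, pre-trained VGG19 between the reconstruction and the target. This is a minimal sketch, not the repository's exact implementation; the layer indices and equal weighting are assumptions.

```python
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class VGG19PerceptualLoss(nn.Module):
    """Minimal sketch of a feature/perceptual loss using a frozen VGG19.

    The layer indices below are illustrative assumptions, not the
    repository's exact configuration.
    """
    def __init__(self, layer_ids=(3, 8, 17, 26)):
        super().__init__()
        vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()
        for p in vgg.parameters():
            p.requires_grad = False
        self.vgg = vgg
        self.layer_ids = set(layer_ids)

    def forward(self, recon, target):
        loss = 0.0
        x, y = recon, target
        for i, layer in enumerate(self.vgg):
            x, y = layer(x), layer(y)
            if i in self.layer_ids:
                # Match intermediate feature maps of reconstruction and target
                loss = loss + F.mse_loss(x, y)
        return loss
```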

NEW!

Added a training script with loss logging etc. The dataset loader uses the PyTorch "ImageFolder" dataset; the code assumes there is no pre-defined train/test split and creates one with a fixed random seed, so the split is the same every time the code is run (see the sketch after the command below).
Basic train command:
python3 train_vae.py -mn test_run --dataset_root #path to dataset root#
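
A minimal sketch of the kind of seeded split described above (the 90/10 ratio, seed value, and transforms here are assumptions for illustration, not the script's actual settings):

```python
import torch
from torchvision import datasets, transforms
from torch.utils.data import random_split, DataLoader

transform = transforms.Compose([
    transforms.Resize(64),
    transforms.CenterCrop(64),
    transforms.ToTensor(),
])

# ImageFolder expects root/<class_name>/<image files>
full_set = datasets.ImageFolder(root="/path/to/dataset_root", transform=transform)

# Fixed generator seed -> identical train/test split on every run
n_test = len(full_set) // 10
generator = torch.Generator().manual_seed(42)
train_set, test_set = random_split(
    full_set, [len(full_set) - n_test, n_test], generator=generator
)

train_loader = DataLoader(train_set, batch_size=64, shuffle=True)
test_loader = DataLoader(test_set, batch_size=64, shuffle=False)
```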

Results

Latent space interpolation (figure)
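
An interpolation figure like the one above is typically produced by encoding two images, linearly interpolating between their latent codes, and decoding each intermediate point. A minimal sketch, assuming a model with `encoder`/`decoder` methods that return and consume the latent mean (these method names are assumptions, not the repository's API):

```python
import torch

@torch.no_grad()
def interpolate_latents(model, img_a, img_b, steps=8):
    """Decode a straight line between the latent codes of two images.

    Assumes model.encoder returns (mu, logvar) and model.decoder maps a
    latent tensor back to image space -- hypothetical method names.
    """
    mu_a, _ = model.encoder(img_a.unsqueeze(0))
    mu_b, _ = model.encoder(img_b.unsqueeze(0))
    frames = []
    for t in torch.linspace(0.0, 1.0, steps):
        z = (1 - t) * mu_a + t * mu_b   # linear interpolation in latent space
        frames.append(model.decoder(z))
    return torch.cat(frames, dim=0)
```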

Results on validation images of the STL10 dataset at 64x64 with a latent vector size of 512 (the images on the top row are the reconstructions). NOTE: RES_VAE_64_old.py was used to generate the results below.
With perception loss
Figure: VAE trained with perception/feature loss

Without perception loss
Figure: VAE trained without perception/feature loss

Additional Results - CelebA

The images in STL10 have a lot of variation, meaning more "features" need to be encoded in the latent space to achieve a good reconstruction. Using a dataset with less variation (and the same latent vector size) should result in higher-quality reconstructed images.

Figure: CelebA trained with perception loss

New model: test images from a VAE trained on CelebA at 128x128 resolution (the latent space is therefore 512x2x2), using all layers of the VGG model for the perception loss.
Figure: CelebA 128x128 test images, trained with perception loss
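
The 512x2x2 latent shape follows from repeated stride-2 downsampling: six halvings take 128x128 down to 2x2, with 512 channels at the bottleneck. A minimal sketch of such an encoder stack (the block widths and plain conv blocks are assumptions for illustration, not the repository's exact ResNet blocks):

```python
import torch
import torch.nn as nn

# Six stride-2 convolutions: 128 -> 64 -> 32 -> 16 -> 8 -> 4 -> 2
channels = [3, 64, 128, 256, 256, 512, 512]
encoder = nn.Sequential(*[
    nn.Sequential(
        nn.Conv2d(channels[i], channels[i + 1], kernel_size=3, stride=2, padding=1),
        nn.BatchNorm2d(channels[i + 1]),
        nn.ReLU(inplace=True),
    )
    for i in range(len(channels) - 1)
])

x = torch.randn(1, 3, 128, 128)
print(encoder(x).shape)  # torch.Size([1, 512, 2, 2])
```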
