Evaluate class-incremental learning under task shifts with popular continual learning algorithms.
The benchmarks come from the following contributions (a minimal replay sketch follows the list):
- A-GEM: paper (Efficient Lifelong Learning with A-GEM)
- EMR: paper (On Tiny Episodic Memories in Continual Learning)
- iCaRL: paper (iCaRL: Incremental Classifier and Representation Learning)
- LUCIR: paper (Learning a Unified Classifier Incrementally via Rebalancing)
- LwF: paper (Learning without Forgetting)
- EWC: paper (Overcoming Catastrophic Forgetting in Neural Networks)
- ABD: paper (Always Be Dreaming: A New Approach for Data-Free Class-Incremental Learning)
- SCR: paper (Supervised Contrastive Replay: Revisiting the Nearest Class Mean Classifier in Online Class-Incremental Continual Learning)
- S&B: paper (Split-and-Bridge: Adaptable Class Incremental Learning within a Single Neural Network)
- E2E: paper (End-to-End Incremental Learning)
- BiC: paper (Large Scale Incremental Learning)
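Several of these methods (A-GEM, EMR, iCaRL, SCR) are rehearsal-based: they keep a small episodic memory of past examples and mix it into each update. The following is a minimal PyTorch sketch of that shared idea; it is illustrative only, not this repository's implementation, and all names in it are hypothetical.

```python
# Minimal sketch of episodic-memory replay -- the mechanism shared by the
# rehearsal-based methods listed above. Not this repo's implementation.
import random
import torch
import torch.nn.functional as F

class ReservoirBuffer:
    """Fixed-size episodic memory filled by reservoir sampling."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []   # list of (x, y) example pairs
        self.seen = 0    # total examples offered so far

    def add_batch(self, x, y):
        for xi, yi in zip(x, y):
            self.seen += 1
            if len(self.data) < self.capacity:
                self.data.append((xi, yi))
            else:
                j = random.randrange(self.seen)
                if j < self.capacity:
                    self.data[j] = (xi, yi)

    def sample(self, n):
        xs, ys = zip(*random.sample(self.data, min(n, len(self.data))))
        return torch.stack(xs), torch.stack(ys)

def replay_step(model, optimizer, buffer, x_new, y_new):
    """One update on a batch that mixes new data with replayed memories."""
    x, y = x_new, y_new
    if buffer.data:
        mx, my = buffer.sample(x_new.size(0))
        x, y = torch.cat([x_new, mx]), torch.cat([y_new, my])
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
    buffer.add_batch(x_new, y_new)  # memory only ever stores raw new data
    return loss.item()
```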
To install the requirements:

```
python == 3.6
pytorch == 1.8.1
torch == 1.7.0
torchvision >= 0.8
numpy == 1.19.5
matplotlib == 3.3.4
opencv-python == 4.5.1.48
```
- Install Anaconda: https://www.anaconda.com/distribution/
- Set up a conda environment with Python 3.6, e.g.:

```sh
conda create --name <env_name> python=3.6
conda activate <env_name>
conda env create -f environment.yml -p <anaconda>/envs/<env_name>
```
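Once the environment is active, a quick sanity check (hypothetical, not part of this repo) confirms that the installed versions match the list above and that a GPU is visible:

```python
# Hypothetical sanity check for the freshly created environment.
import cv2
import matplotlib
import numpy
import torch
import torchvision

print("torch:", torch.__version__, "| torchvision:", torchvision.__version__)
print("numpy:", numpy.__version__, "| matplotlib:", matplotlib.__version__)
print("opencv-python:", cv2.__version__)
print("CUDA available:", torch.cuda.is_available())
```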
We conduct experiments on commonly used incremental learning benchmarks: CIFAR100 and miniImageNet.
CIFAR100 is available at cs.toronto.edu. Download CIFAR100 and put it under the `dataset` directory. miniImageNet is available at our Google Drive. Download miniImageNet and arrange it as follows:
```
mini-imagenet/
├── images/
│   ├── n0210891500001298.jpg
│   ├── n0287152500001298.jpg
│   └── ...
├── test.csv
├── val.csv
├── train.csv
└── imagenet_class_index.json
```
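If you prefer to script the dataset preparation, CIFAR100 can be fetched straight into the expected directory with torchvision (which pulls the same archive hosted at cs.toronto.edu). This is a hedged sketch: it assumes the repo expects the raw `cifar-100-python` folder under `dataset`, and that the miniImageNet CSVs use the usual `filename,label` columns.

```python
# Hedged sketch (assumptions noted above): scripted dataset preparation.
import csv
from torchvision.datasets import CIFAR100

# Downloads and unpacks cifar-100-python/ under ./dataset
CIFAR100(root="dataset", train=True, download=True)
CIFAR100(root="dataset", train=False, download=True)

# Inspect a miniImageNet split, assuming the usual filename,label columns
with open("mini-imagenet/train.csv") as f:
    rows = list(csv.DictReader(f))
print(len(rows), "training entries; first:", rows[0])
```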
You can download the pretrained model here:

- My pre-trained model trained on CIFAR100.
You can download the test model here, then put it under the `model` directory:

- My test model trained on CIFAR100.
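A checkpoint saved as `.pth.tar` is typically loadable with `torch.load`; the sketch below only inspects it, since the exact keys inside are specific to this repo and hypothetical here.

```python
# Hedged sketch: inspect a downloaded checkpoint (keys are repo-specific).
import torch

ckpt = torch.load("pretrained/cifar100-pretrained.pth.tar", map_location="cpu")
print(type(ckpt))
if isinstance(ckpt, dict):
    print(list(ckpt.keys()))  # e.g. 'state_dict', 'epoch', ... (hypothetical)
```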
All commands should be run under the project root directory.
To train the model(s) in the paper, run this command:
```sh
sh ./main.sh --input_data dataset --pre_trained pretrained/cifar100-pretrained.pth.tar --network resnet
```
To evaluate my model, run:
```sh
sh ./test.sh --model_file model/test-20classes.pth.tar
```
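Class-incremental results are usually reported as top-1 accuracy over all classes seen so far after each step, plus the average of those per-step accuracies. A hedged helper for that measurement, separate from `test.sh`:

```python
# Hypothetical helpers, not part of test.sh.
import torch

@torch.no_grad()
def top1_accuracy(model, loader, device="cpu"):
    """Top-1 accuracy over all classes seen so far."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        pred = model(x.to(device)).argmax(dim=1).cpu()
        correct += (pred == y).sum().item()
        total += y.size(0)
    return correct / total

def average_incremental_accuracy(step_accs):
    """Mean of the accuracies measured after each incremental step."""
    return sum(step_accs) / len(step_accs)
```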
Special thanks to https://github.com/DRSAD/iCaRL for the iCaRL networks implementation, parts of which were used in this implementation. More details on iCaRL: https://arxiv.org/abs/1611.07725