- This project provides the source code for our Joint Data Augmentation (JDA) and Cosine Confidence Distillation (CCD).
- The paper is publicly available in the official ACCV proceedings.
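For intuition, a cosine-based distillation term can be sketched as below. This is an illustrative plain-Python sketch of a generic cosine-similarity loss between student and teacher logits, not necessarily the paper's exact CCD formulation; see the paper for the real definition.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two logit vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def cosine_distillation_loss(student_logits, teacher_logits):
    """Encourage the student's logits to point in the teacher's direction.
    The loss is 1 - cos(student, teacher): zero when the directions
    coincide, and largest when they are opposed."""
    return 1.0 - cosine_similarity(student_logits, teacher_logits)

# Identical directions give zero loss; orthogonal directions give 1.0.
print(cosine_distillation_loss([1.0, 0.0], [2.0, 0.0]))  # 0.0
```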
-
- 🏆 __SOTA__ of Knowledge Distillation for a student ResNet-18 trained by a teacher ResNet-34 on ImageNet.
- pytorch
- numpy
- Ubuntu 20.04 LTS
- Please create the conda environment first.
conda create -n torch python==3.7.10
- Then activate the environment.
source activate torch
- Then install PyTorch.
conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
CIFAR-100: download
- Download CIFAR-100 to a folder of your choice, referred to below as [cifar100_folder].
CKPT: download
- Download the teacher checkpoints to a folder of your choice, referred to below as [ckpt_folder].
-
Modify the paths to the teacher pre-trained weights and datasets in these two files: multirunc100_jda.sh and multirunc100_jda_ccd.sh
-
Run them:
bash multirunc100_jda.sh
bash multirunc100_jda_ccd.sh
-
Download the ImageNet dataset to YOUR_IMAGENET_PATH and move the validation images into labeled subfolders.
- The script may be helpful.
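The relabeling step above can be sketched in Python as follows. The filename-to-class mapping shown here (`labels`) is a hypothetical example dictionary, not an official mapping file; in practice the mapping comes from the ILSVRC devkit or a helper script.

```python
import os
import shutil
import tempfile

def sort_val_into_classes(val_dir, labels):
    """Move each validation image into a subfolder named after its class.
    `labels` maps image filename -> WordNet ID (e.g. 'n01440764')."""
    for fname, wnid in labels.items():
        cls_dir = os.path.join(val_dir, wnid)
        os.makedirs(cls_dir, exist_ok=True)
        shutil.move(os.path.join(val_dir, fname),
                    os.path.join(cls_dir, fname))

# Tiny demonstration on a temporary directory with one fake image:
val = tempfile.mkdtemp()
open(os.path.join(val, "ILSVRC2012_val_00000001.JPEG"), "w").close()
sort_val_into_classes(val, {"ILSVRC2012_val_00000001.JPEG": "n01440764"})
print(os.path.exists(
    os.path.join(val, "n01440764", "ILSVRC2012_val_00000001.JPEG")))  # True
```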
-
Create a datasets subfolder and a symlink to the ImageNet dataset
Folder of the ImageNet dataset:
data/ImageNet
├── train
└── val
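The symlink step can be sketched as below; `YOUR_IMAGENET_PATH` is the placeholder from the download step above, and the fallback `/path/to/imagenet` is only an illustrative default.

```shell
# Create the data subfolder and point data/ImageNet at the real dataset.
mkdir -p data
ln -sfn "${YOUR_IMAGENET_PATH:-/path/to/imagenet}" data/ImageNet
```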
The pre-trained ResNet-34 backbone weights are resnet34-333f7ec4.pth, downloaded from the official PyTorch release.
-
Modify the paths to the teacher pre-trained weights and dataset in this file: train_student_imagenet.py
-
Run the training script (pass --ccd to enable CCD on top of JDA):
python train_student_imagenet.py
python train_student_imagenet.py --ccd
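A minimal argparse sketch of how such a `--ccd` switch might be wired; the flag names and defaults here are illustrative assumptions, not the repo's actual code.

```python
import argparse

def build_parser():
    # Illustrative arguments only; the real script's options may differ.
    p = argparse.ArgumentParser(description="Student training (sketch)")
    p.add_argument("--ccd", action="store_true",
                   help="enable Cosine Confidence Distillation on top of JDA")
    p.add_argument("--data", default="data/ImageNet",
                   help="path to the ImageNet folder prepared above")
    return p

args = build_parser().parse_args(["--ccd"])
print(args.ccd)  # True
```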
Our code builds on torchdistill (e.g., the SPKD, KD, SSKD, and CRD implementations) and HSAKD (the training framework).
Email: shaoshitong@seu.edu.cn
Name: shaoshitong
- The code is provided for academic research only; any commercial use is prohibited.
- Any use of our code requires citing this paper.