Which GPU did you use? #14
I am also using this implementation. Is it not possible to run distributed training?
Transfer learning was performed using a V100. Check the relative time in TensorBoard for the training time.
Distributed training on a single node can be executed as follows:
python3 -m torch.distributed.launch --nproc_per_node=NUM_OF_GPU train.py --train_batch_size BATCH_SIZE_PER_GPU --name cifar10-100_500 --dataset cifar10 --model_type ViT-B_16 --pretrained_dir checkpoint/ViT-B_16.npz
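For example, on a node with 4 GPUs and a per-GPU batch size of 128 (illustrative values, not from the original comment), this would be `python3 -m torch.distributed.launch --nproc_per_node=4 train.py --train_batch_size 128 --name cifar10-100_500 --dataset cifar10 --model_type ViT-B_16 --pretrained_dir checkpoint/ViT-B_16.npz`.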
@jeonsworld Hello, how can I use multiple GPUs?
There are two ways to use multiple GPUs in PyTorch: DataParallel and DistributedDataParallel; see the sketch below.
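Roughly, the two options look like this. This is a minimal sketch with a placeholder model, not the repository's actual train.py:

```python
# Minimal sketch of the two multi-GPU options in PyTorch.
# The model and batch shapes are illustrative placeholders.
import torch
import torch.nn as nn

model = nn.Linear(16, 4)  # stands in for ViT or any nn.Module

# Option 1: DataParallel -- a single process replicates the model across
# all visible GPUs and splits each input batch along dimension 0.
if torch.cuda.device_count() > 1:
    dp_model = nn.DataParallel(model).cuda()
    out = dp_model(torch.randn(8, 16).cuda())

# Option 2: DistributedDataParallel -- one process per GPU, launched with
# torch.distributed.launch as in the command above. Each process would run
# roughly the following (shown as comments since it needs the launcher's
# environment to execute):
#
#   torch.distributed.init_process_group(backend="nccl")
#   local_rank = int(os.environ["LOCAL_RANK"])  # or the --local_rank argument
#   torch.cuda.set_device(local_rank)
#   ddp_model = nn.parallel.DistributedDataParallel(
#       model.cuda(local_rank), device_ids=[local_rank])
#
# DDP generally scales better than DataParallel because each GPU gets its
# own process, avoiding single-process replication overhead.
```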
Sorry, the training time is shown in your experiment. I wonder which GPU you used, and how many of them?