torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 9.18 GiB. GPU 3 has a total capacity #12
During the validation process, I'm running out of VRAM even with a batch_size of 1. My GPU is a 4090. How can I solve this?
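A minimal diagnostic sketch (not from the thread): running a single validation batch under torch.inference_mode() and reporting the peak allocation shows whether autograd bookkeeping, rather than the model itself, is what exhausts the 4090's 24 GB. Here `model` and `val_loader` are hypothetical stand-ins for the actual network and dataloader:

```python
import torch

# Sketch only: `model` and `val_loader` are hypothetical stand-ins, and the
# (images, targets) batch format is an assumption about the dataloader.
def peak_val_memory_gib(model, val_loader, device="cuda"):
    model.eval().to(device)
    torch.cuda.reset_peak_memory_stats(device)
    with torch.inference_mode():  # no autograd graph, no saved activations
        images, _ = next(iter(val_loader))
        model(images.to(device))
    return torch.cuda.max_memory_allocated(device) / 1024**3
```

If this number is far below 24 GB while the full validation run still fails, the extra memory is coming from somewhere other than the forward pass (e.g. gradients left enabled, or accumulated predictions kept on the GPU).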
Comments

Could you provide additional information? A quick workaround would be to use a smaller model at a lower resolution. Best,
We use the model vit_base_patch14_reg4_dinov2 with img_size: [640, 640] and a batch_size of 1 per GPU. The dataset is a private dataset whose annotations are consistent with COCO. The OOM is raised a few batches into validation (4/84 [00:52<17:32, 0.08it/s]); the truncated traceback begins:

[rank1]: Traceback (most recent call last):
I can confirm that validating a ViT-B at resolution 640x640 with a batch size of 1 takes less than 3 GB of memory for both panoptic and instance inference on COCO. Can you reproduce the same error using default COCO? Best,
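If the error reproduces on default COCO as well, one generic cause worth ruling out (an aside, not something raised in the thread) is allocator fragmentation: recent PyTorch versions accept an `expandable_segments` option through the `PYTORCH_CUDA_ALLOC_CONF` environment variable, which must be set before the first CUDA allocation. A sketch:

```python
import os

# Must be set before PyTorch initializes the CUDA caching allocator.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

import torch  # imported afterwards so the allocator sees the setting
```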