[BUG] Low GPU Utilization #8278
Comments
FYI @phoeenniixx, @PranavBhatP, @agobbifbk - any ideas?
@jobs-git can I ask why
Interesting note, though I am not sure why setting it lower makes each epoch complete faster and speeds up overall training.
Hard to say; such a large batch size can also cause bottlenecks (transferring data between CPU and GPU), as can the number of workers (try the defaults).
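To make the suggestion above concrete, here is a minimal sketch of a `DataLoader` setup where `batch_size`, `num_workers`, and `pin_memory` are the knobs to benchmark. The dataset sizes and values below are hypothetical placeholders, not from the original report:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical in-memory dataset; substitute your own data.
dataset = TensorDataset(torch.randn(10_000, 32), torch.randn(10_000, 1))

loader = DataLoader(
    dataset,
    batch_size=512,   # very large batches amplify per-sample collation cost
    shuffle=True,
    num_workers=0,    # default; try 0, 2, 4, ... and time one epoch for each
    pin_memory=True,  # pinned host memory speeds up host-to-GPU copies
)

for xb, yb in loader:
    # xb, yb arrive as collated batches; on GPU you would do e.g.
    # xb = xb.to("cuda", non_blocking=True)
    break
```

The usual procedure is to time a full epoch for each `num_workers` setting rather than trusting a rule of thumb, since the optimum depends on dataset, storage, and CPU count.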
I was able to trace it to `torch.utils.data.DataLoader` and raised an issue here: manual batching is faster, but the DataLoader is just too slow. Is there any way to bypass the DataLoader and use manual batching?
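For data that already fits in memory as tensors, the bypass asked about above can be sketched as a plain generator that shuffles once per epoch and yields index slices, avoiding DataLoader's per-sample collation. The function name and data shapes are illustrative, not from the issue:

```python
import torch

# Hypothetical in-memory data; replace with your own tensors.
X = torch.randn(10_000, 32)
y = torch.randn(10_000, 1)

def manual_batches(X, y, batch_size):
    """Yield shuffled mini-batches by fancy-indexing whole tensors.

    Indexing a tensor with a permutation slice batches all samples in
    one vectorized op, so there is no per-sample Python overhead as in
    DataLoader's default collate path.
    """
    n = X.shape[0]
    perm = torch.randperm(n)  # reshuffle each time the generator is created
    for start in range(0, n, batch_size):
        idx = perm[start:start + batch_size]
        yield X[idx], y[idx]

# Usage: iterate directly in the training loop.
for xb, yb in manual_batches(X, y, batch_size=512):
    pass  # forward/backward pass here
```

This only works when the dataset fits in RAM and needs no per-sample transforms; for on-the-fly loading or augmentation, tuning the DataLoader is still the better path.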
Describe the bug
GPU utilization shows long idle periods during training, which lengthens overall training time.
Related: sktime/pytorch-forecasting#1426
To Reproduce
Expected behavior
The GPU should stay mostly busy, with only occasional idle time.
Additional context
Versions