Usage of data.cache() causes running out of memory #258

Open
HamLaertes opened this issue Nov 6, 2020 · 5 comments
Comments

@HamLaertes

Hi guys, when I run the command lqz TrainR2BStrongBaseline, my machine runs out of memory. My server has 64GB of RAM.
I checked the code and found that it caches both the train_data and the validation_data, and I think this is why the memory is exhausted.
I didn't see any existing issue about this problem, so I am curious: do you guys really have enough memory to cache the whole ImageNet dataset (up to 150GB)?
For now I have deleted the code that caches the dataset, and the training process runs smoothly. Is there any way to cache only part of the training_data?
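
For reference, the caching pattern I mean looks roughly like this (an illustrative sketch of a tf.data pipeline, not the exact larq_zoo code; `build_split` is my own name):

```python
import tensorflow as tf
import tensorflow_datasets as tfds

def build_split(split, batch_size, cache=True):
    # Illustrative only: larq_zoo builds something similar for train/validation.
    dataset = tfds.load("imagenet2012", split=split, as_supervised=True)
    if cache:
        # .cache() with no argument keeps every decoded example in RAM after
        # the first epoch, which is where the huge memory footprint comes from.
        dataset = dataset.cache()
    return (
        dataset.shuffle(10_000)
        .batch(batch_size)
        .prefetch(tf.data.experimental.AUTOTUNE)
    )
```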

@jneeven
Contributor
jneeven commented Nov 6, 2020

Hi! ImageNet is indeed quite large, which is why we use multi-GPU training. Since each of the GPUs comes with several CPUs and 64GB of RAM on the compute platform we use, ImageNet does fit into memory if you use four GPUs in parallel. Unfortunately I don't think there is currently a clean way to cache only part of a dataset; you could try to split the dataset up and get creative with tf.data.Dataset.concatenate and then do the caching before concatenating both parts of the dataset, but I doubt this would be very efficient either way. Not caching the dataset is probably your best solution (although it is unfortunately a bit slow). Good luck!
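
Something along these lines, purely as an untested sketch of the split-and-concatenate idea (the names are placeholders, not larq_zoo code):

```python
import tensorflow as tf

def partially_cached(dataset: tf.data.Dataset, num_cached: int) -> tf.data.Dataset:
    """Cache only the first `num_cached` examples; re-read the rest every epoch."""
    cached_part = dataset.take(num_cached).cache()
    uncached_part = dataset.skip(num_cached)
    return cached_part.concatenate(uncached_part)
```

Note that you'd still want to shuffle after concatenating, so the cached and uncached examples don't always come out in the same order.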

@AdamHillier
Contributor

Just to add: if you're on a single-GPU machine, disabling caching shouldn't have too much of an impact, especially if your dataset is stored on fast storage, e.g. an SSD.
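
For example, an uncached pipeline that relies on parallel decoding and prefetching to hide most of the disk latency could look like this (a rough sketch only; `preprocess_fn` stands for whatever per-example preprocessing you already use):

```python
import tensorflow as tf

def uncached_pipeline(dataset, preprocess_fn, batch_size):
    return (
        dataset.map(preprocess_fn, num_parallel_calls=tf.data.experimental.AUTOTUNE)
        .shuffle(10_000)
        .batch(batch_size)
        # Overlap reading/decoding of the next batch with training on the current one.
        .prefetch(tf.data.experimental.AUTOTUNE)
    )
```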

@sfalkena

Hi, I have an additional question about the caching of ImageNet. I can configure my training setup with enough RAM to cache ImageNet. However, when I run a multistage experiment, RAM usage increases again during the second stage. I think the problem lies in the fact that each TrainLarqZooModel caches the dataset again. Are you aware of any way to reuse the dataset across stages, or of how to release the dataset from RAM? Perhaps it would make sense to move data loading one level up so that it gets called once per experiment, regardless of whether that is a single- or multistage experiment.
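
Roughly what I have in mind, as a sketch only (`run_stage` and `stage_configs` are made-up names, not the actual larq_zoo API):

```python
def run_multistage_experiment(stage_configs, train_data, val_data, run_stage):
    # Cache once, before the stage loop, so every stage iterates over the same
    # in-memory copy instead of each TrainLarqZooModel caching the data again.
    train_data = train_data.cache()
    val_data = val_data.cache()
    for config in stage_configs:
        run_stage(config, train_data, val_data)
```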

@jneeven
Contributor
jneeven commented Nov 12, 2021

Hi @sfalkena, I have indeed run into this problem before and didn't realise it would apply here as well. Unfortunately I have not found any robust way to remove the dataset from RAM, so your suggestion of moving the data loading one level up sounds like the best approach. In my case I wasn't using ImageNet and was only running slightly over the memory limit, so it was feasible to just increase RAM a little, but that is of course not a maintainable solution, especially at ImageNet scale...

@sfalkena

Thanks for your fast answer. I am currently testing whether keeping the cache file on disk works without slowing things down too much. Otherwise I'll use a workaround for now and train the stages separately. If I have a bit more time in the future, I'm happy to contribute to moving data loading a level up.
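
For reference, the on-disk variant I'm testing just passes a filename to `.cache()` (the path below is a placeholder, and the toy dataset stands in for the real ImageNet pipeline):

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(10)  # stand-in for the real ImageNet pipeline
dataset = dataset.cache("/tmp/imagenet_train.cache")

# The first full pass writes the cache files to disk; subsequent epochs read
# from them, trading RAM usage for disk I/O.
for _ in range(2):
    for _ in dataset:
        pass
```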
