Description
🐛 Describe the bug
Hi,
I'm setting the following environment variables (see the sketch below):
TORCHINDUCTOR_FX_GRAPH_CACHE
TORCHINDUCTOR_CACHE_DIR
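For reference, this is roughly how I set them before compiling (a minimal sketch; the cache path and the toy function are placeholders, not my actual workload):

```python
import os

# Enable Inductor's FX graph cache and point it at a directory I can move
# around or share. The path here is just a placeholder for my real cache dir.
os.environ["TORCHINDUCTOR_FX_GRAPH_CACHE"] = "1"
os.environ["TORCHINDUCTOR_CACHE_DIR"] = "/data/inductor_cache"

# Set the variables before importing torch so Inductor reads them at startup.
import torch


def f(x):
    # Stand-in for my real model.
    return torch.relu(x) * 2


compiled = torch.compile(f)
compiled(torch.randn(16, 16, device="cuda"))
```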
After compiling, I see the cache folder is populated with about 3.8 GB of data.
I then create a tar archive of the cache, copy it to another instance with the same H100 GPU, and untar it there. However, the compile time on the second instance shows the cache has not been used.
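On the second instance, after untarring the cache into the same TORCHINDUCTOR_CACHE_DIR, I check roughly like this whether the first compiled call is fast (sketch only; the function and tensor sizes are placeholders for my real model):

```python
import time

import torch


def f(x):
    # Placeholder for my actual workload; anything torch.compile can trace.
    return torch.nn.functional.gelu(x @ x.T)


x = torch.randn(1024, 1024, device="cuda")
compiled = torch.compile(f)

start = time.perf_counter()
compiled(x)  # first call: full compile if the cache is cold, much faster on a hit
torch.cuda.synchronize()
print(f"first-call time: {time.perf_counter() - start:.1f} s")
```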
If I instead set the variables on two instances that share the same network drive, compile on one, and then run on the other, the compile time is still very high, as if the cache were not taken into account.
What goes into the signature (cache key) of each cache entry? If I understand better what triggers cache retrieval, I might find a configuration where I can reuse the cache between instances.
Thanks for your help!
Versions
torch @ https://download.pytorch.org/whl/nightly/cu124/torch-2.6.0.dev20240918%2Bcu124-cp311-cp311-linux_x86_64.whl
torchaudio @ https://download.pytorch.org/whl/nightly/cu124/torchaudio-2.5.0.dev20240918%2Bcu124-cp311-cp311-linux_x86_64.whl
torchvision @ https://download.pytorch.org/whl/nightly/cu124/torchvision-0.20.0.dev20240918%2Bcu124-cp311-cp311-linux_x86_64.whl
pytorch_triton @ https://download.pytorch.org/whl/nightly/pytorch_triton-3.1.0%2B5fe38ffd73-cp311-cp311-linux_x86_64.whl