Torch compile cache #144859
Closed
@christopher5106

Description

🐛 Describe the bug

Hi,

I'm setting the following environment variables:

TORCHINDUCTOR_FX_GRAPH_CACHE
TORCHINDUCTOR_CACHE_DIR
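
For reference, a minimal sketch of how I set them, before torch is imported (the cache path here is just an example):

```python
import os

# Set before importing torch so Inductor picks these up from the environment.
os.environ["TORCHINDUCTOR_FX_GRAPH_CACHE"] = "1"
os.environ["TORCHINDUCTOR_CACHE_DIR"] = "/mnt/inductor-cache"  # example path

import torch  # noqa: E402
```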

I see the cache folder is populated with 3.8 GB of data.

I'm creating a tar archive of the cache, copying it to another instance with the same H100 GPU, and untarring it there. But the compile time shows the cache has not been used.
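
Concretely, the transfer looks like this (a minimal sketch; the paths are examples, and TORCHINDUCTOR_CACHE_DIR points to the same location on both machines):

```python
import tarfile

# On the source instance: pack the populated cache directory.
with tarfile.open("inductor-cache.tar.gz", "w:gz") as tar:
    tar.add("/mnt/inductor-cache", arcname="inductor-cache")

# On the target instance (same H100, same torch nightly): unpack to the
# same TORCHINDUCTOR_CACHE_DIR path before the first torch.compile call.
with tarfile.open("inductor-cache.tar.gz", "r:gz") as tar:
    tar.extractall("/mnt")
```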

If I set the variables on two instances that share the same network drive, compile on one, then run on the other, the compile time is still very high, as if the cache had not been taken into account.

What goes into the signatures of the cache entries? If I understood better what triggers a cache hit, I might find a configuration that lets me reuse the cache between instances.
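
To debug this, I'd like to surface the cache hit/miss decisions. A minimal sketch of what I'm trying, assuming the fx_graph_cache logging artifact available in recent nightlies (equivalently TORCH_LOGS=fx_graph_cache in the environment):

```python
import torch

# Assumption: "fx_graph_cache" is a registered logging artifact in this
# nightly; it reports FX graph cache hits, misses, and bypasses.
torch._logging.set_logs(fx_graph_cache=True)

def f(x):
    return torch.sin(x) + torch.cos(x)

compiled = torch.compile(f)
compiled(torch.randn(8, device="cuda"))  # output shows whether the cache hit
```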

Thanks for your help!

Versions

torch @ https://download.pytorch.org/whl/nightly/cu124/torch-2.6.0.dev20240918%2Bcu124-cp311-cp311-linux_x86_64.whl
torchaudio @ https://download.pytorch.org/whl/nightly/cu124/torchaudio-2.5.0.dev20240918%2Bcu124-cp311-cp311-linux_x86_64.whl
torchvision @ https://download.pytorch.org/whl/nightly/cu124/torchvision-0.20.0.dev20240918%2Bcu124-cp311-cp311-linux_x86_64.whl
pytorch_triton @ https://download.pytorch.org/whl/nightly/pytorch_triton-3.1.0%2B5fe38ffd73-cp311-cp311-linux_x86_64.whl

cc @chauhang @penguinwu

Metadata

Labels

oncall: pt2, triaged
