torch.compile with InvokeAI: 'UserDefinedObjectVariable' object has no attribute 'proxy' / compute_ancestors KeyError op4 #155266
Open

@keturn

Description

🐛 Describe the bug

Using the current nightly at the encouragement of @StrongerXi in invoke-ai/InvokeAI#8031 (comment): when attempting to evaluate the Chroma model, it fails with 'UserDefinedObjectVariable' object has no attribute 'proxy'.

I'm a long way from a minimal reproducer, but it does happen consistently.

To some extent this is a regression from 2.7.1 stable. I saw a similar error there (though possibly at a different location in user code), but on 2.7.1 it errored once and then ran on the second attempt, whereas with this nightly it doesn't run no matter how many retries I give it.
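For context, here is a rough sketch of the pattern the stack traces below point at: a torch.compile'd Linear layer whose weight and bias are moved to the input's device inside forward via InvokeAI's cast_to_device helper (see custom_linear.py in the traces). In the failing run the weights are GGUF-quantized tensor subclasses built with TensorBase._make_wrapper_subclass, which this sketch does not include, so it is not a verified reproducer; the class and helper names below are simplified stand-ins.

import torch
import torch.nn as nn


def cast_to_device(t, device):
    # Simplified stand-in for InvokeAI's cast_to_device: move a weight to
    # the device the input lives on.
    return t if t is None else t.to(device)


class AutocastLinear(nn.Linear):
    # Hypothetical stand-in for the CustomLinear._autocast_forward path in the traces.
    def forward(self, input: torch.Tensor) -> torch.Tensor:
        weight = cast_to_device(self.weight, input.device)  # graph breaks around these casts
        bias = cast_to_device(self.bias, input.device)
        return nn.functional.linear(input, weight, bias)


layer = torch.compile(AutocastLinear(16, 16))
print(layer(torch.randn(2, 16)).shape)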

Error logs

W0605 21:23:56.485000 28353 torch/_dynamo/convert_frame.py:997] [2/8] torch._dynamo hit config.recompile_limit (8)
W0605 21:23:56.485000 28353 torch/_dynamo/convert_frame.py:997] [2/8]    function: 'dequantize_and_run' (/home/ubuntu/src/InvokeAI/invokeai/backend/quantization/gguf/ggml_tensor.py:13)
W0605 21:23:56.485000 28353 torch/_dynamo/convert_frame.py:997] [2/8]    last reason: 2/5: args[0].stride()[0] == args[0].size()[1]*args[0].size()[2]  # (unknown source args[0].stride()[0], please file a bug)
W0605 21:23:56.485000 28353 torch/_dynamo/convert_frame.py:997] [2/8] To log all recompilation reasons, use TORCH_LOGS="recompiles".
W0605 21:23:56.485000 28353 torch/_dynamo/convert_frame.py:997] [2/8] To diagnose recompilation issues, see https://pytorch.org/docs/main/torch.compiler_troubleshooting.html.
/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/variables/functions.py:1468: UserWarning: Dynamo does not know how to trace the builtin `<unknown module>.TensorBase._make_wrapper_subclass.` This function is either a Python builtin (e.g. _warnings.warn) or a third-party C/C++ Python extension (perhaps created with pybind).
If it is a Python builtin, please file an issue on GitHub so the PyTorch team can add support for it and see the next case for a workaround.
If it is a third-party C/C++ Python extension, please either wrap it into a PyTorch-understood custom operator (see https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html for more details) or, if it is traceable, use `torch.compiler.allow_in_graph`.
  torch._dynamo.utils.warn_once(explanation + "\n" + "\n".join(hints))
denoising:   0%|          | 0/16 [00:19<?, ?it/s]
[2025-06-05 21:24:00,492]::[InvokeAI]::ERROR --> Error while invoking session 7bd22a44-fd1a-4fbf-8127-487b9c405529, invocation 359c8693-838c-4a5b-8683-afe01b3fcc90 (chroma_denoise): AttributeError: 'UserDefinedObjectVariable' object has no attribute 'proxy'

from user code:
   File "/home/ubuntu/src/InvokeAI/invokeai/backend/model_manager/load/model_cache/torch_module_autocast/custom_modules/custom_linear.py", line 81, in torch_dynamo_resume_in__autocast_forward_at_80
    bias = cast_to_device(self.bias, input.device)
[2025-06-05 21:24:00,492]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "/home/ubuntu/src/InvokeAI/invokeai/app/services/session_processor/session_processor_default.py", line 129, in run_node
    output = invocation.invoke_internal(context=context, services=self._services)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/src/InvokeAI/invokeai/app/invocations/baseinvocation.py", line 241, in invoke_internal
    output = self.invoke(context)
             ^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/opt/InvokeAI/nodes/chroma/invocations.py", line 347, in invoke
    latents = sampling.denoise_cfg(
              ^^^^^^^^^^^^^^^^^^^^^
  File "/opt/InvokeAI/nodes/chroma/sampling.py", line 79, in denoise_cfg
    pred = model(
           ^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1767, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1778, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/InvokeAI/nodes/chroma/chroma_model.py", line 262, in forward
    img = self.final_layer(img, vec=final_mod)  # (N, T, patch_size ** 2 * out_channels)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1767, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1778, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/InvokeAI/nodes/chroma/layers.py", line 242, in forward
    x = self.linear(x)
        ^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1765, in _wrapped_call_impl
    return self._compiled_call_impl(*args, **kwargs)  # type: ignore[misc]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 699, in compile_wrapper
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1778, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/src/InvokeAI/invokeai/backend/model_manager/load/model_cache/torch_module_autocast/custom_modules/custom_linear.py", line 88, in forward
    return self._autocast_forward(input)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/src/InvokeAI/invokeai/backend/model_manager/load/model_cache/torch_module_autocast/custom_modules/custom_linear.py", line 80, in _autocast_forward
    weight = cast_to_device(self.weight, input.device)
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 1469, in __call__
    return self._torchdynamo_orig_callable(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 1248, in __call__
    result = self._inner_convert(
             ^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 625, in __call__
    return _compile(
           ^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 1144, in _compile
    raise InternalTorchDynamoError(
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 1092, in _compile
    guarded_code = compile_inner(code, one_graph, hooks, transform)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_utils_internal.py", line 97, in wrapper_function
    return function(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 779, in compile_inner
    return _compile_inner(code, one_graph, hooks, transform)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 818, in _compile_inner
    out_code = transform_code_object(code, transform)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/bytecode_transformation.py", line 1424, in transform_code_object
    transformations(instructions, code_options)
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 265, in _fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 743, in transform
    tracer.run()
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3484, in run
    super().run()
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1359, in run
    while self.step():
          ^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1263, in step
    self.dispatch_table[inst.opcode](self, inst)
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2287, in LOAD_ATTR
    self._load_attr(inst)
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2275, in _load_attr
    result = BuiltinVariable(getattr).call_function(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/variables/builtin.py", line 1168, in call_function
    return handler(tx, args, kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/variables/builtin.py", line 801, in <lambda>
    tx, [v.realize() for v in args], kwargs
         ^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
    self._cache.realize()
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/variables/lazy.py", line 32, in realize
    self.vt = builder.VariableBuilder(tx, self.source)(self.value)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/variables/builder.py", line 438, in __call__
    vt = self._wrap(value)
         ^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/variables/builder.py", line 751, in _wrap
    return self.wrap_module(value)
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/variables/builder.py", line 1736, in wrap_module
    self.mark_static_input(p, guard=freezing)
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/variables/builder.py", line 1636, in mark_static_input
    var.proxy.node.meta["tensor_dict"]["_dynamo_static_input_type"] = (
    ^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: AttributeError: 'UserDefinedObjectVariable' object has no attribute 'proxy'

from user code:
   File "/home/ubuntu/src/InvokeAI/invokeai/backend/model_manager/load/model_cache/torch_module_autocast/custom_modules/custom_linear.py", line 81, in torch_dynamo_resume_in__autocast_forward_at_80
    bias = cast_to_device(self.bias, input.device)

…and also sometimes…

[2025-06-05 21:21:35,836]::[InvokeAI]::ERROR --> Error while invoking session c932655a-70b0-4876-b280-65fc19a3eb2e, invocation ad7791c9-4dd2-486f-a079-fe7c5c435bc6 (chroma_denoise): KeyError: 'op4'

[2025-06-05 21:21:35,836]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "/home/ubuntu/src/InvokeAI/invokeai/app/services/session_processor/session_processor_default.py", line 129, in run_node
    output = invocation.invoke_internal(context=context, services=self._services)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/src/InvokeAI/invokeai/app/invocations/baseinvocation.py", line 241, in invoke_internal
    output = self.invoke(context)
             ^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/opt/InvokeAI/nodes/chroma/invocations.py", line 347, in invoke
    latents = sampling.denoise_cfg(
              ^^^^^^^^^^^^^^^^^^^^^
  File "/opt/InvokeAI/nodes/chroma/sampling.py", line 97, in denoise_cfg
    pred_neg = model(
               ^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1767, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1778, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/InvokeAI/nodes/chroma/chroma_model.py", line 220, in forward
    img, txt = CustomDoubleStreamBlockProcessor.custom_double_block_forward(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/InvokeAI/nodes/chroma/layers.py", line 151, in custom_double_block_forward
    img, txt, img_q = block(img, txt, vec, pe, attn_mask=attn_mask)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1765, in _wrapped_call_impl
    return self._compiled_call_impl(*args, **kwargs)  # type: ignore[misc]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 712, in compile_wrapper
    raise e.remove_dynamo_frames() from None  # see TORCHDYNAMO_VERBOSE=1
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 699, in compile_wrapper
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1778, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/InvokeAI/nodes/chroma/layers.py", line 102, in forward
    img_qkv = self.img_attn.qkv(img_modulated)
  File "/opt/InvokeAI/nodes/chroma/layers.py", line 104, in torch_dynamo_resume_in_forward_at_102
    img_q, img_k = self.img_attn.norm(img_q, img_k, img_v)
  File "/opt/InvokeAI/nodes/chroma/layers.py", line 109, in torch_dynamo_resume_in_forward_at_104
    txt_qkv = self.txt_attn.qkv(txt_modulated)
  File "/opt/InvokeAI/nodes/chroma/layers.py", line 111, in torch_dynamo_resume_in_forward_at_109
    txt_q, txt_k = self.txt_attn.norm(txt_q, txt_k, txt_v)
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 1469, in __call__
    return self._torchdynamo_orig_callable(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 1248, in __call__
    result = self._inner_convert(
             ^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 625, in __call__
    return _compile(
           ^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 1092, in _compile
    guarded_code = compile_inner(code, one_graph, hooks, transform)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_utils_internal.py", line 97, in wrapper_function
    return function(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 779, in compile_inner
    return _compile_inner(code, one_graph, hooks, transform)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 818, in _compile_inner
    out_code = transform_code_object(code, transform)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/bytecode_transformation.py", line 1424, in transform_code_object
    transformations(instructions, code_options)
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 265, in _fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 743, in transform
    tracer.run()
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3484, in run
    super().run()
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1359, in run
    while self.step():
          ^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1263, in step
    self.dispatch_table[inst.opcode](self, inst)
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 829, in wrapper
    return handle_graph_break(self, inst, speculation.reason)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 891, in handle_graph_break
    all_stack_locals_metadata = self.output.compile_subgraph(
                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/output_graph.py", line 1410, in compile_subgraph
    self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/output_graph.py", line 1683, in compile_and_call_fx_graph
    compiled_fn = self.call_user_compiler(gm, self.example_inputs())
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/output_graph.py", line 1795, in call_user_compiler
    return self._call_user_compiler(gm, example_inputs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/output_graph.py", line 1830, in _call_user_compiler
    compiled_fn = compiler_fn(gm, example_inputs)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/repro/after_dynamo.py", line 150, in __call__
    compiled_gm = compiler_fn(gm, example_inputs)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/__init__.py", line 2365, in __call__
    return compile_fx(model_, inputs_, config_patches=self.config)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 2389, in compile_fx
    raise e.remove_dynamo_frames() from None  # see TORCHDYNAMO_VERBOSE=1
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 2375, in compile_fx
    return aot_autograd(
           ^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/backends/common.py", line 106, in __call__
    cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 1189, in aot_module_simplified
    compiled_fn = AOTAutogradCache.load(
                  ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/autograd_cache.py", line 1057, in load
    compiled_fn = dispatch_and_compile()
                  ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 1174, in dispatch_and_compile
    compiled_fn, _ = create_aot_dispatcher_function(
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 576, in create_aot_dispatcher_function
    return _create_aot_dispatcher_function(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 836, in _create_aot_dispatcher_function
    compiled_fn, fw_metadata = compiler_fn(
                               ^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 240, in aot_dispatch_base
    compiled_fw = compiler(fw_module, updated_flat_args)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 483, in __call__
    return self.compiler_fn(gm, example_inputs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 2207, in fw_compiler_base
    return inner_compile(
           ^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 717, in compile_fx_inner
    return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_dynamo/repro/after_aot.py", line 124, in debug_wrapper
    inner_compiled_fn = compiler_fn(gm, example_inputs)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 887, in _compile_fx_inner
    raise InductorError(e, currentframe()).with_traceback(
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 871, in _compile_fx_inner
    mb_compiled_graph = fx_codegen_and_compile(
                        ^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1524, in fx_codegen_and_compile
    return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1402, in codegen_and_compile
    compiled_module = graph.compile_to_module()
                      ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2285, in compile_to_module
    return self._compile_to_module()
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2291, in _compile_to_module
    self.codegen_with_cpp_wrapper() if self.cpp_wrapper else self.codegen()
                                                             ^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2226, in codegen
    self._update_scheduler()
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2220, in _update_scheduler
    self.scheduler = Scheduler(self.operations)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py", line 2025, in __init__
    self._init(nodes)
  File "/home/ubuntu/venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py", line 2091, in _init
    self.compute_ancestors()
  File "/home/ubuntu
9BFF
/venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py", line 2574, in compute_ancestors
    ancestors |= name_to_ancestors[dep_node_name]
                 ~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^
torch._inductor.exc.InductorError: KeyError: 'op4'

Versions

$ python -m torch.utils.collect_env
<frozen runpy>:128: RuntimeWarning: 'torch.utils.collect_env' found in sys.modules after import of package 'torch.utils', but prior to execution of 'torch.utils.collect_env'; this may result in unpredictable behaviour
Collecting environment information...             
PyTorch version: 2.8.0.dev20250605+cu128              
Is debug build: False                             
CUDA used to build PyTorch: 12.8                                                                                        
ROCM used to build PyTorch: N/A                                                                                         
                                                                                                                                                                                                                                                
OS: Ubuntu 24.04.2 LTS (x86_64)                            
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0                                                                      
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
                                                            
Python version: 3.12.9 (main, Feb 12 2025, 14:50:50) [Clang 19.1.6 ] (64-bit runtime)
Python platform: Linux-6.14.0-15-generic-x86_64-with-glibc2.39
Is CUDA available: True               
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY        
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060 
Nvidia driver version: 570.148.08  
cuDNN version: Could not collect    
HIP runtime version: N/A              
MIOpen runtime version: N/A           
Is XNNPACK available: True          
                                                            
CPU:                                 
Architecture:                         x86_64
CPU op-mode(s):                       32-bit, 64-bit
Address sizes:                        39 bits physical, 48 bits virtual
Byte Order:                           Little Endian
CPU(s):                               8
On-line CPU(s) list:                  0-7
Vendor ID:                            GenuineIntel
Model name:                           Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz
CPU family:                           6
Model:                                94   
Thread(s) per core:                   2
Core(s) per socket:                   4
Socket(s):                            1
Stepping:                             3
CPU(s) scaling MHz:                   95%
CPU max MHz:                          4000.0000
CPU min MHz:                          800.0000
BogoMIPS:                             6799.81
Flags:                                fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl 
xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fau
lt pti ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_w
indow hwp_epp vnmi md_clear flush_l1d arch_capabilities
Virtualization:                       VT-x
L1d cache:                            128 KiB (4 instances)
L1i cache:                            128 KiB (4 instances)
L2 cache:                             1 MiB (4 instances)
L3 cache:                             8 MiB (1 instance)
NUMA node(s):                         1
NUMA node0 CPU(s):                    0-7
Vulnerability Gather data sampling:   Vulnerable: No microcode
Vulnerability Ghostwrite:             Not affected
Vulnerability Itlb multihit:          KVM: Mitigation: Split huge pages
Vulnerability L1tf:                   Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds:                    Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown:               Mitigation; PTI
Vulnerability Mmio stale data:        Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed:               Mitigation; IBRS
Vulnerability Spec rstack overflow:   Not affected
Vulnerability Spec store bypass:      Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:             Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:             Mitigation; IBRS; IBPB conditional; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds:                  Mitigation; Microcode
Vulnerability Tsx async abort:        Mitigation; TSX disabled

Versions of relevant libraries:
[pip3] mypy==1.15.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.3
[pip3] nvidia-cublas-cu12==12.8.4.1
[pip3] nvidia-cuda-cupti-cu12==12.8.90
[pip3] nvidia-cuda-nvrtc-cu12==12.8.93
[pip3] nvidia-cuda-runtime-cu12==12.8.90
[pip3] nvidia-cudnn-cu12==9.8.0.87
[pip3] nvidia-cufft-cu12==11.3.3.83
[pip3] nvidia-curand-cu12==10.3.9.90
[pip3] nvidia-cusolver-cu12==11.7.3.90
[pip3] nvidia-cusparse-cu12==12.5.8.93
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.5
[pip3] nvidia-nvjitlink-cu12==12.8.93
[pip3] nvidia-nvtx-cu12==12.8.90
[pip3] pytorch-triton==3.3.1+gitc8757738
[pip3] torch==2.8.0.dev20250605+cu128
[pip3] torchsde==0.2.6
[pip3] torchvision==0.23.0.dev20250605+cu128
[pip3] triton==3.3.1
[conda] Could not collect

cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov

Metadata

Labels

module: dynamo, module: inductor, oncall: pt2, triaged
