[FX] concrete_args unpacking erroneously carries over type annotations #81902

Closed

jamesr66a opened this issue Jul 21, 2022 · 0 comments

jamesr66a (Collaborator) commented Jul 21, 2022

🐛 Describe the bug

import torch
import torch.fx
from typing import List

def foo(x : torch.Tensor, y : torch.Tensor, l : List[torch.Tensor]):
return torch.cat([x, y] + l)

concrete_args = {'l': [torch.fx._symbolic_trace.PH] * 10}
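# PH is FX's placeholder sentinel: each PH entry keeps that element of `l` as a
# separate placeholder input when the list is flattened during tracing.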
traced = torch.fx.symbolic_trace(foo, concrete_args=concrete_args)

Traceback (most recent call last):
  File "/fsx/users/jamesreed/fx_test.py", line 9, in <module>
    traced = torch.fx.symbolic_trace(foo, concrete_args=concrete_args)
  File "/fsx/users/jamesreed/pytorch/torch/fx/_symbolic_trace.py", line 1035, in symbolic_trace
    return GraphModule(tracer.root, graph, name)
  File "/fsx/users/jamesreed/pytorch/torch/fx/graph_module.py", line 366, in __init__
    self.graph = graph
  File "/fsx/users/jamesreed/pytorch/torch/nn/modules/module.py", line 1309, in __setattr__
    super().__setattr__(name, value)
  File "/fsx/users/jamesreed/pytorch/torch/fx/graph_module.py", line 404, in graph
    self.recompile()
  File "/fsx/users/jamesreed/pytorch/torch/fx/graph_module.py", line 641, in recompile
    cls.forward = _forward_from_src(self._code, python_code.globals)
  File "/fsx/users/jamesreed/pytorch/torch/fx/graph_module.py", line 77, in _forward_from_src
    _exec_with_source(src, globals_copy)
  File "/fsx/users/jamesreed/pytorch/torch/fx/graph_module.py", line 71, in _exec_with_source
    exec(compile(src, key, 'exec'), globals)
  File "<eval_with_key>.0", line 5
    x : torch.Tensor, y : torch.Tensor, l_1, l_2, l_3, l_4, l_5, l_6, l_7, l_8, l_9, l_10, = fx_pytree.tree_flatten_spec([x, y, l], self._in_spec)
                    ^
SyntaxError: invalid syntax

When you print out the code that it's trying to exec, you get:

def forward(self, x, y, l):
    x : torch.Tensor, y : torch.Tensor, l_1, l_2, l_3, l_4, l_5, l_6, l_7, l_8, l_9, l_10, = fx_pytree.tree_flatten_spec([x, y, l], self._in_spec)
    cat = torch.cat([x, y, l_1, l_2, l_3, l_4, l_5, l_6, l_7, l_8, l_9, l_10]);  x = y = l_1 = l_2 = l_3 = l_4 = l_5 = l_6 = l_7 = l_8 = l_9 = l_10 = None
    return pytree.tree_unflatten([cat], self._out_spec)
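
The failure is plain Python grammar rather than anything FX-specific: an annotated name is only legal as a single assignment target, so carrying the `x : torch.Tensor` annotations into the tuple-unpacking targets of the `tree_flatten_spec` line yields invalid syntax. A minimal, FX-independent illustration (hypothetical snippets, just to show the grammar rule):

# A single annotated assignment target compiles fine.
compile("x : torch.Tensor = y", "<demo>", "exec")

# Annotations on targets inside tuple unpacking do not -- this is the same shape
# as the generated line above and raises the same SyntaxError.
try:
    compile("x : torch.Tensor, y : torch.Tensor, l_1, = values", "<demo>", "exec")
except SyntaxError as e:
    print(e)  # invalid syntax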

This points to this line of codegen:

{', '.join(free_vars)}, = fx_pytree.tree_flatten_spec([{', '.join(function_args)}], self._in_spec)"""

which joins `free_vars` entries that are built with the type annotation still attached:

free_vars.append(f'{node.target}{maybe_type_annotation}{maybe_default_arg}')
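
So the `maybe_type_annotation` piece survives even when the placeholder names end up as tuple-unpacking targets. A hedged sketch of the fix direction (illustrative only, not the actual FX codegen; `format_flatten_targets`, `names`, and `annotations` are made-up helpers):

def format_flatten_targets(names, annotations):
    # names: placeholder names in order; annotations: name -> annotation string or None.
    if len(names) == 1 and annotations.get(names[0]):
        # A lone target may keep its annotation: "x : torch.Tensor = ..."
        return f"{names[0]} : {annotations[names[0]]}"
    # Tuple unpacking: bare names only; the trailing comma preserves the
    # unpacking behavior of the generated line even for a single name.
    return ", ".join(names) + ","

In other words, annotations should only be emitted when the name is a lone assignment target, and dropped whenever the placeholders are flattened into the `tree_flatten_spec` unpacking line.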

Versions

Collecting environment information...
PyTorch version: 1.13.0a0+git08b9544
Is debug build: False
CUDA used to build PyTorch: 11.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.19.6
Libc version: glibc-2.27

Python version: 3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.0-1051-aws-x86_64-with-glibc2.27
Is CUDA available: False
CUDA runtime version: 11.1.105
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/local/cuda-10.1/targets/x86_64-linux/lib/libcudnn.so.7.6.5
/usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn.so.7.6.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

Versions of relevant libraries:
[pip3] functorch==0.3.0a0+347334c
[pip3] mypy==0.812
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.2
[pip3] torch==1.13.0a0+git08b9544
[pip3] torchdynamo==0.2.0
[pip3] torchtext==0.9.0a0+4de31fc
[pip3] torchvision==0.12.0a0+3e79d14
[conda] blas 1.0 mkl
[conda] functorch 0.3.0a0+347334c dev_0
[conda] mkl 2021.3.0 h06a4308_520
[conda] mkl-include 2021.3.0 h06a4308_520
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.21.2 py39h20f2e39_0
[conda] numpy-base 1.21.2 py39h79a1101_0
[conda] torch 1.13.0a0+git08b9544 dev_0
[conda] torchdynamo 0.2.0 dev_0
[conda] torchtext 0.9.0a0+4de31fc pypi_0 pypi
[conda] torchvision 0.12.0a0+3e79d14 dev_0

cc @ezyang @SherlockNoMad
