🐛 Describe the bug
The `torch.jit.trace_module` API does not respect `__jit_ignored_attributes__`: attributes listed there (including `@property` attributes) are still accessed during tracing. This can raise unexpected exceptions when those attributes are not supposed to be called.
To Reproduce
```python
import torch
import torch.nn as nn
from torch.jit import trace_module

# Define a module with ignored properties
class A(nn.Module):
    __jit_ignored_attributes__ = ["ignored", "ignored_return_val"]

    def __init__(self):
        super().__init__()

    @property
    def ignored(self):
        # This property should not be called during tracing.
        raise ValueError("shouldn't be called")

    @property
    def ignored_return_val(self):
        return 1

    def forward(self, x):
        return x + self.ignored_return_val

def test_bug():
    example_input = torch.tensor([1.0])
    inputs = {'forward': example_input}
    traced_module = trace_module(A(), inputs)

if __name__ == "__main__":
    test_bug()
```
Output
```
ValueError: shouldn't be called
```
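Note that the exception comes from `trace_module` itself rather than from `forward`, which never reads `ignored`; running the same module eagerly works fine (a minimal sketch reusing the `A` class from the repro above):

```python
import torch

module = A()                        # same module as in the repro above
print(module(torch.tensor([1.0])))  # tensor([2.]); `ignored` is never accessed in eager mode
```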
Expected behavior
This should not error: attributes listed in `__jit_ignored_attributes__` are supposed to be ignored and never accessed during tracing.
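For comparison, `torch.jit.script` appears to honor `__jit_ignored_attributes__` for the same pattern. A minimal sketch (the module below is a hypothetical variant of the repro whose `forward` avoids the ignored attribute so it can be scripted):

```python
import torch
import torch.nn as nn

class B(nn.Module):
    __jit_ignored_attributes__ = ["ignored"]

    @property
    def ignored(self):
        # Listed as ignored, so scripting should never evaluate this property.
        raise ValueError("shouldn't be called")

    def forward(self, x):
        return x + 1

scripted = torch.jit.script(B())      # compiles without touching `ignored`
print(scripted(torch.tensor([1.0])))  # tensor([2.])
```

I would expect `trace_module` to follow the same rule.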
Versions
PyTorch version: 2.7.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 21.0.0 (++20250526042847+95756e67c230-1exp1~20250526042959.2439)
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.11.7 (main, Dec 15 2023, 18:12:31) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-138-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX A6000
Nvidia driver version: 570.133.20
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True