🐛 Describe the bug
To Reproduce
#!/usr/bin/env python3
import torch
import torch.nn as nn


# TorchScript class with one method dropped via torch.jit.ignore(drop=True)
@torch.jit.script
class Ignored(object):
    def __init__(self):
        self.count: int = 0
        self.items: list = []

    def used(self):
        self.count += 1
        return self.count

    @torch.jit.ignore(drop=True)
    def ignored(self, x: int, y: str) -> int:
        return 99

    def uses_ignored(self) -> int:
        # Calls the dropped method from a compiled method
        return self.ignored(5, "hello")


class ModuleWithIgnored(nn.Module):
    def __init__(self):
        super().__init__()
        self.obj = Ignored()

    def forward(self):
        return self.obj.used()

    @torch.jit.export
    def calls_ignored(self):
        return self.obj.ignored(5, "hello")

    @torch.jit.export
    def calls_ignored_indirectly(self):
        return self.obj.uses_ignored()


python_module = ModuleWithIgnored()
Output
Segmentation fault
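
For contrast, the same decorator appears to work when applied to a method of an nn.Module rather than a method of a @torch.jit.script class. This is a minimal sketch, assuming the documented torch.jit.ignore(drop=True) behavior (the dropped method is simply excluded from compilation); PlainModule and helper are hypothetical names, not part of the repro above:

#!/usr/bin/env python3
import torch
import torch.nn as nn


class PlainModule(nn.Module):
    @torch.jit.ignore(drop=True)
    def helper(self, x: int, y: str) -> int:
        # Dropped from compilation; not called from any scripted method
        return 99

    def forward(self, x: int) -> int:
        return x + 1


scripted = torch.jit.script(PlainModule())
print(scripted(1))  # expected: 2, no crash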
Versions
PyTorch version: 2.7.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 21.0.0 (++20250526042847+95756e67c230-1~exp1~20250526042959.2439)
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.11.7 (main, Dec 15 2023, 18:12:31) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-138-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX A6000
Nvidia driver version: 570.133.20
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True