[Windows UT] CpuTests.test_fractional_max_pool2d2_cpu failed with PyTorch 2025-05-25 nightly wheel · Issue #154697 · pytorch/pytorch
Closed
@LifengWang

Description

🐛 Describe the bug

Reproducer:

python test\inductor\test_torchinductor.py CpuTests.test_fractional_max_pool2d2_cpu

Error message:

__________________ CpuTests.test_fractional_max_pool2d2_cpu ___________________
Traceback (most recent call last):
  File "C:\ProgramData\miniforge3\envs\pt_nightly\lib\unittest\case.py", line 59, in testPartExecutor
    yield
  File "C:\ProgramData\miniforge3\envs\pt_nightly\lib\unittest\case.py", line 591, in run
    self._callTestMethod(testMethod)
  File "C:\ProgramData\miniforge3\envs\pt_nightly\lib\unittest\case.py", line 549, in _callTestMethod
    method()
  File "C:\ProgramData\miniforge3\envs\pt_nightly\lib\site-packages\torch\testing\_internal\common_utils.py", line 3141, in wrapper
    method(*args, **kwargs)
  File "C:\pytorch\test\inductor\test_torchinductor.py", line 13402, in new_test
    return value(self)
  File "C:\pytorch\test\inductor\test_torchinductor.py", line 830, in wrapper
    return fn(self, *args, **kwargs)
  File "C:\pytorch\test\inductor\test_torchinductor.py", line 4872, in test_fractional_max_pool2d2
    assertGeneratedKernelCountEqual(self, 0)
  File "C:\pytorch\test\inductor\test_torchinductor.py", line 759, in assertGeneratedKernelCountEqual
    self.assertEqual(torch._inductor.metrics.generated_kernel_count, expected)
  File "C:\ProgramData\miniforge3\envs\pt_nightly\lib\site-packages\torch\testing\_internal\common_utils.py", line 4089, in assertEqual
    raise error_metas.pop()[0].to_error(  # type: ignore[index]
AssertionError: Scalars are not equal!

Expected 0 but got 4.
Absolute difference: 4
Relative difference: inf

To execute this test, run the following from the base repo dir:
    python test\inductor\test_torchinductor.py CpuTests.test_fractional_max_pool2d2_cpu

This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
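The reported differences follow directly from the scalar comparison: the test expected `generated_kernel_count == 0` but got 4, and because the expected value is 0, the relative difference degenerates to infinity. A minimal sketch of that arithmetic (my own illustration of how the message is derived, not torch's actual implementation):

```python
def scalar_diffs(expected: float, actual: float) -> tuple[float, float]:
    """Compute the absolute and relative differences as reported in the
    failure message (illustration only; assumes relative difference is
    abs_diff / |expected|, which is inf when expected is 0)."""
    abs_diff = abs(actual - expected)
    rel_diff = abs_diff / abs(expected) if expected != 0 else float("inf")
    return abs_diff, rel_diff

# The failing assertion compared generated_kernel_count (4) against 0:
print(scalar_diffs(0, 4))  # (4, inf)
```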
---------------------------- Captured stdout call -----------------------------
inline_call []
stats [('calls_captured', 3), ('unique_graphs', 1)]
aot_autograd [('total', 1), ('autograd_cache_miss', 1), ('autograd_cache_saved', 1), ('ok', 1)]
inductor [('fxgraph_cache_miss', 1)]
graph_break []

cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @seemethere @malfet @pytorch/pytorch-dev-infra @chuanqi129 @yuchengliu1 @xuhancn @leslie-fang-intel

Versions

Collecting environment information...
PyTorch version: 2.8.0.dev20250525+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: Microsoft Windows Server 2022 Datacenter Evaluation (10.0.20348 64-bit)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A

Python version: 3.10.17 | packaged by conda-forge | (main, Apr 10 2025, 22:06:35) [MSC v.1943 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.20348-SP0
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:

Name: Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz
Manufacturer: GenuineIntel
Family: 179
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 2594
MaxClockSpeed: 2594
L2CacheSize: 40960
L2CacheSpeed: None
Revision: 27142

Name: Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz
Manufacturer: GenuineIntel
Family: 179
Architecture: 9
ProcessorType: 3
DeviceID: CPU1
CurrentClockSpeed: 2594
MaxClockSpeed: 2594
L2CacheSize: 40960
L2CacheSpeed: None
Revision: 27142

Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] torch==2.8.0.dev20250525+cpu
[pip3] torchaudio==2.6.0.dev20250526+cpu
[pip3] torchvision==0.22.0.dev20250526+cpu
[conda] numpy 2.1.2 pypi_0 pypi
[conda] torch 2.8.0.dev20250525+cpu pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250526+cpu pypi_0 pypi
[conda] torchvision 0.22.0.dev20250526+cpu pypi_0 pypi

Metadata

Assignees

No one assigned

    Labels

    module: ci (Related to continuous integration)
    module: windows (Windows support for PyTorch)
    needs reproduction (Someone else needs to try reproducing the issue given the instructions. No action needed from user)
    triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
