DISABLED test_shape_padding_mps (__main__.GPUTests) · Issue #153719 · pytorch/pytorch
Closed
@pytorch-bot

Description

Platforms: mac, macos

This test was disabled because it is failing in CI. See recent examples and the most recent trunk workflow logs.

Over the past 3 hours, it has been determined to be flaky in 3 workflow(s), with 5 failures and 3 successes.

Debugging instructions (after clicking on the recent samples link):
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. Flaky tests are now shielded from developers, so CI will be green, but the logs will be harder to parse.
To find relevant log snippets:

  1. Click on the workflow logs linked above
  2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
  3. Grep for test_shape_padding_mps
  4. There should be several runs (flaky tests are rerun in CI) whose logs you can study.
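In practice, step 3 might look like the following (hypothetical sketch: it assumes you have saved the raw log of the expanded Test step locally as ci.log from the workflow run page):

```shell
# Hypothetical: ci.log is assumed to be the raw "Test" step log downloaded
# from the workflow run page. Print each match with its line number and
# 5 lines of trailing context, so the rerun attempts are visible.
grep -n -A 5 "test_shape_padding_mps" ci.log
```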
Sample error message
Traceback (most recent call last):
  File "/Users/ec2-user/runner/_work/pytorch/pytorch/test/inductor/test_torchinductor.py", line 10484, in test_shape_padding
    self.common(lambda x, y: torch.bmm(x, y), (x, y))
  File "/Users/ec2-user/runner/_work/_temp/conda_environment_15063491922/lib/python3.9/contextlib.py", line 79, in inner
    return func(*args, **kwds)
  File "/Users/ec2-user/runner/_work/pytorch/pytorch/test/inductor/test_torchinductor.py", line 658, in check_model_gpu
    check_model(
  File "/Users/ec2-user/runner/_work/pytorch/pytorch/test/inductor/test_torchinductor.py", line 499, in check_model
    actual = run(*example_inputs, **kwargs)
  File "/Users/ec2-user/runner/_work/_temp/conda_environment_15063491922/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 702, in _fn
    raise e.remove_dynamo_frames() from None  # see TORCHDYNAMO_VERBOSE=1
  File "/Users/ec2-user/runner/_work/_temp/conda_environment_15063491922/lib/python3.9/site-packages/torch/_dynamo/output_graph.py", line 1636, in _call_user_compiler
    raise BackendCompilerFailed(
  File "/Users/ec2-user/runner/_work/_temp/conda_environment_15063491922/lib/python3.9/site-packages/torch/_dynamo/output_graph.py", line 1611, in _call_user_compiler
    compiled_fn = compiler_fn(gm, self.example_inputs())
  File "/Users/ec2-user/runner/_work/_temp/conda_environment_15063491922/lib/python3.9/site-packages/torch/_dynamo/repro/after_dynamo.py", line 150, in __call__
    compiled_gm = compiler_fn(gm, example_inputs)
  File "/Users/ec2-user/runner/_work/_temp/conda_environment_15063491922/lib/python3.9/site-packages/torch/__init__.py", line 2409, in __call__
    return self.compiler_fn(model_, inputs_, **self.kwargs)
  File "/Users/ec2-user/runner/_work/pytorch/pytorch/test/inductor/test_torchinductor.py", line 491, in compile_fx_wrapper
    return compile_fx(model_, example_inputs_)
  File "/Users/ec2-user/runner/_work/_temp/conda_environment_15063491922/lib/python3.9/site-packages/torch/_inductor/compile_fx.py", line 2324, in compile_fx
    return aot_autograd(
  File "/Users/ec2-user/runner/_work/_temp/conda_environment_15063491922/lib/python3.9/site-packages/torch/_dynamo/backends/common.py", line 106, in __call__
    cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
  File "/Users/ec2-user/runner/_work/_temp/conda_environment_15063491922/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 1189, in aot_module_simplified
    compiled_fn = AOTAutogradCache.load(
  File "/Users/ec2-user/runner/_work/_temp/conda_environment_15063491922/lib/python3.9/site-packages/torch/_functorch/_aot_autograd/autograd_cache.py", line 923, in load
    compiled_fn = dispatch_and_compile()
  File "/Users/ec2-user/runner/_work/_temp/conda_environment_15063491922/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 1174, in dispatch_and_compile
    compiled_fn, _ = create_aot_dispatcher_function(
  File "/Users/ec2-user/runner/_work/_temp/conda_environment_15063491922/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 576, in create_aot_dispatcher_function
    return _create_aot_dispatcher_function(
  File "/Users/ec2-user/runner/_work/_temp/conda_environment_15063491922/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 836, in _create_aot_dispatcher_function
    compiled_fn, fw_metadata = compiler_fn(
  File "/Users/ec2-user/runner/_work/_temp/conda_environment_15063491922/lib/python3.9/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 243, in aot_dispatch_base
    compiled_fw = compiler(fw_module, updated_flat_args)
  File "/Users/ec2-user/runner/_work/_temp/conda_environment_15063491922/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 483, in __call__
    return self.compiler_fn(gm, example_inputs)
  File "/Users/ec2-user/runner/_work/_temp/conda_environment_15063491922/lib/python3.9/site-packages/torch/_inductor/compile_fx.py", line 2094, in fw_compiler_base
    _recursive_joint_graph_passes(gm)
  File "/Users/ec2-user/runner/_work/_temp/conda_environment_15063491922/lib/python3.9/site-packages/torch/_inductor/compile_fx.py", line 456, in _recursive_joint_graph_passes
    joint_graph_passes(gm)
  File "/Users/ec2-user/runner/_work/_temp/conda_environment_15063491922/lib/python3.9/contextlib.py", line 126, in __exit__
    next(self.gen)
  File "/Users/ec2-user/runner/_work/_temp/conda_environment_15063491922/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 723, in dynamo_timed
    yield
  File "/Users/ec2-user/runner/_work/_temp/conda_environment_15063491922/lib/python3.9/contextlib.py", line 532, in __exit__
    raise exc_details[1]
  File "/Users/ec2-user/runner/_work/_temp/conda_environment_15063491922/lib/python3.9/contextlib.py", line 517, in __exit__
    if cb(*exc_details):
  File "/Users/ec2-user/runner/_work/_temp/conda_environment_15063491922/lib/python3.9/site-packages/torch/autograd/profiler.py", line 788, in __exit__
    torch.ops.profiler._record_function_exit._RecordFunction(record)
  File "/Users/ec2-user/runner/_work/_temp/conda_environment_15063491922/lib/python3.9/site-packages/torch/_ops.py", line 1045, in __call__
    return self._op(*args, **kwargs)
  File "/Users/ec2-user/runner/_work/_temp/conda_environment_15063491922/lib/python3.9/site-packages/torch/_ops.py", line 890, in handler
    return torch._library.utils.handle_dispatch_mode(
  File "/Users/ec2-user/runner/_work/_temp/conda_environment_15063491922/lib/python3.9/site-packages/torch/_library/utils.py", line 296, in handle_dispatch_mode
    return curr_mode.__torch_dispatch__(op_overload, overload_types, args, kwargs)
  File "/Users/ec2-user/runner/_work/_temp/conda_environment_15063491922/lib/python3.9/site-packages/torch/utils/_stats.py", line 27, in wrapper
    return fn(*args, **kwargs)
  File "/Users/ec2-user/runner/_work/_temp/conda_environment_15063491922/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 1346, in __torch_dispatch__
    return self.dispatch(func, types, args, kwargs)
  File "/Users/ec2-user/runner/_work/_temp/conda_environment_15063491922/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 2021, in dispatch
    return self._cached_dispatch_impl(func, types, args, kwargs)
  File "/Users/ec2-user/runner/_work/_temp/conda_environment_15063491922/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 1444, in _cached_dispatch_impl
    entry = cache.get(key, None)
  File "/Users/ec2-user/runner/_work/_temp/conda_environment_15063491922/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 1074, in __eq__
    return isinstance(other, _DispatchCacheKey) and self.key == other.key
torch._dynamo.exc.BackendCompilerFailed: backend='compile_fx_wrapper' raised:
NotImplementedError: '__eq__' is not implemented for __torch__.torch.classes.profiler._RecordFunction

Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
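The traceback reduces to a plain Python-level pattern: the fake-tensor dispatch cache compares its key tuples with `==`, and one element of the key is a torchbind `_RecordFunction` object whose `__eq__` raises `NotImplementedError`. The sketch below reproduces that pattern in isolation; the classes here are simplified hypothetical stand-ins, not the real PyTorch types:

```python
class FakeRecordFunction:
    """Stand-in for __torch__.torch.classes.profiler._RecordFunction,
    a torchbind object that does not implement __eq__."""

    def __eq__(self, other):
        raise NotImplementedError(
            "'__eq__' is not implemented for _RecordFunction"
        )

    # Defining __eq__ would otherwise set __hash__ to None.
    __hash__ = object.__hash__


class DispatchCacheKey:
    """Simplified stand-in for the dispatch-cache key in
    torch._subclasses.fake_tensor (see _DispatchCacheKey.__eq__
    in the traceback above)."""

    def __init__(self, key):
        self.key = key

    def __eq__(self, other):
        # Comparing the key tuples element-wise invokes each element's
        # __eq__, so an element that raises propagates the error here.
        return isinstance(other, DispatchCacheKey) and self.key == other.key


a = DispatchCacheKey(("aten::op", FakeRecordFunction()))
b = DispatchCacheKey(("aten::op", FakeRecordFunction()))
try:
    a == b
except NotImplementedError as e:
    print("raised:", e)  # prints: raised: '__eq__' is not implemented for _RecordFunction
```

Any cache-key comparison that reaches such an element fails the same way, which is why the error surfaces from deep inside `_cached_dispatch_impl` rather than from the test body itself.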


To execute this test, run the following from the base repo dir:
    python test/inductor/test_torchinductor.py GPUTests.test_shape_padding_mps

This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0

Test file path: inductor/test_torchinductor.py

For all disabled tests (by GitHub issue), see https://hud.pytorch.org/disabled.

cc @clee2000 @malfet @albanD @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov

Metadata


Assignees

No one assigned

    Labels

    module: flaky-tests (Problem is a flaky test in CI)
    module: inductor
    module: macos (Mac OS related issues)
    oncall: pt2
    skipped (Denotes a (flaky) test currently skipped in CI.)
    triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

    Type

    No type

    Projects

    No projects

    Milestone

    No milestone

    Relationships

    None yet

    Development

    No branches or pull requests
