int / long tensors are summing to float #251
Fixed in #250.
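For context, here is a minimal sketch of the invariant the fix restores, written against the current ATen C++ API (the original report predates this API, so this is an illustration rather than the original reproducer): summing an integral tensor should yield an integral result, not a float.

```
#include <ATen/ATen.h>
#include <iostream>

int main() {
  // Sketch of the expected behavior after #250: reducing a Long tensor
  // should keep an integral dtype. The bug reported in this issue was
  // that the sum came back as a float.
  at::Tensor t = at::ones({4}, at::kLong);
  at::Tensor s = t.sum();
  std::cout << "sum dtype: " << s.scalar_type() << std::endl;  // expected: Long
  return s.scalar_type() == at::kLong ? 0 : 1;
}
```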
mrshenli pushed a commit to mrshenli/pytorch that referenced this issue on Apr 11, 2020
KyleCZH pushed a commit to KyleCZH/pytorch that referenced this issue on Sep 20, 2021
Fixing -q flag of yum install
eellison pushed a commit to eellison/pytorch that referenced this issue on Jun 29, 2022
Summary: Enables tracing through this syntax:

```
funcs = frozenset([torch.add])

def fn(x, func):
    if func in funcs:
        x = torch.add(x, 1.0)
    x = torch.mul(x, 1.0)
    return x
```

This is useful for DBR quantization.

Test plan:

```
pytest -vsk test_frozenset_torch_func_contains
```
pytorchmergebot pushed a commit that referenced this issue on May 12, 2023
When a tensor is resized, a reference to the array of its sizes may become invalid. Make a copy in advance.

<details> <summary>ASAN report</summary> ``` ================================================================= ==1115867==ERROR: AddressSanitizer: heap-use-after-free on address 0x61000013d790 at pc 0x03ff8e7da360 bp 0x03fff53c83a0 sp 0x03fff53c8390 READ of size 8 at 0x61000013d790 thread T0 #0 0x3ff8e7da35f in c10::SymInt::is_heap_allocated() const /home/user/pytorch/c10/core/SymInt.h:154 #1 0x3ff8e7da35f in c10::SymInt::maybe_as_int() const /home/user/pytorch/c10/core/SymInt.h:215 #2 0x3ff8e7d0a6d in c10::SymInt::sym_eq(c10::SymInt const&) const /home/user/pytorch/c10/core/SymInt.cpp:69 #3 0x3ff7a9ab0bd in c10::SymInt::operator==(c10::SymInt const&) const /home/user/pytorch/c10/core/SymInt.h:177 #4 0x3ff7a9aaedd in bool std::__equal<false>::equal<c10::SymInt const*, c10::SymInt const*>(c10::SymInt const*, c10::SymInt const*, c10::SymInt const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1162 #5 0x3ff7a9aae4b in bool std::__equal_aux1<c10::SymInt const*, c10::SymInt const*>(c10::SymInt const*, c10::SymInt const*, c10::SymInt const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1211 #6 0x3ff7a9aae05 in bool std::__equal_aux<c10::SymInt const*, c10::SymInt const*>(c10::SymInt const*, c10::SymInt const*, c10::SymInt const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1219 #7 0x3ff7a9aad97 in bool std::equal<c10::SymInt const*, c10::SymInt const*>(c10::SymInt const*, c10::SymInt const*, c10::SymInt const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1556 #8 0x3ff4b23c771 in c10::ArrayRef<c10::SymInt>::equals(c10::ArrayRef<c10::SymInt>) const /home/user/pytorch/c10/util/ArrayRef.h:188 #9 0x3ff4cb91bc1 in bool c10::operator!=<c10::SymInt>(c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>) /home/user/pytorch/c10/util/ArrayRef.h:341 #10 0x3ff6d1b57ff in torch::ADInplaceOrView::resize_(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/torch/csrc/autograd/VariableTypeManual.cpp:408 #11 0x3ff6d1e59c7 in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > >::operator()(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13 #12 0x3ff6d1e59c7 in c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)
/home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:480 #13 0x3ff51ca5129 in at::Tensor const& c10::callUnboxedKernelFunction<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(void*, c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>&&, c10::optional<c10::MemoryFormat>&&) /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50 #14 0x3ff51ca6e8f in at::Tensor const& c10::KernelFunction::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::OperatorHandle const&, c10::D ispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:90 #15 0x3ff51ca6e8f in at::Tensor const& c10::Dispatcher::redispatch<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Ten sor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)> const&, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:656 #16 0x3ff5182006b in c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::redispatch(c10::DispatchKeySet, at::Tensor const&, c 10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:492 #17 0x3ff5182006b in at::_ops::resize_::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) aten/src/ATen/Operators_4.cpp:2144 #18 0x3ff6d1d5e07 in at::redispatch::resize__symint(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) aten/src/ATen/RedispatchFunctions.h:2847 #19 0x3ff6d1bbb67 in torch::autograd::VariableType::(anonymous namespace)::resize_(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pyto rch/torch/csrc/autograd/VariableTypeManual.cpp:243 #20 0x3ff6d1bd197 in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c1 0::MemoryFormat>), &torch::autograd::VariableType::(anonymous namespace)::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10 ::optional<c10::MemoryFormat> > >::operator()(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFu nctionIntoFunctor.h:13 #21 0x3ff6d1bd197 in c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10: :ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>), &torch::autograd::VariableType::(anonymous namespace)::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(c 10::OperatorKernel*, 
c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor .h:480 #22 0x3ff51ca5129 in at::Tensor const& c10::callUnboxedKernelFunction<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(void*, c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>&&, c10::optional<c10::MemoryFormat>&&) /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50 #23 0x3ff5181ead1 in at::Tensor const& c10::KernelFunction::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::OperatorHandle const&, c10::D ispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:90 #24 0x3ff5181ead1 in at::Tensor const& c10::Dispatcher::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Tensor co nst& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)> const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/at en/src/ATen/core/dispatch/Dispatcher.h:639 #25 0x3ff5181ead1 in c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:487 #26 0x3ff5181ead1 in at::_ops::resize_::call(at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) aten/src/ATen/Operators_4.cpp:2137 #27 0x3ff79b44fcf in at::Tensor::resize__symint(c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const aten/src/ATen/core/TensorBody.h:2452 #28 0x3ff79a802db in torch::autograd::THPVariable_resize_(_object*, _object*, _object*)::$_0::operator()(at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/us er/pytorch/torch/csrc/autograd/generated/python_variable_methods.cpp:13417 #29 0x3ff7999f1eb in torch::autograd::THPVariable_resize_(_object*, _object*, _object*) /home/user/pytorch/torch/csrc/autograd/generated/python_variable_methods.cpp:13419 #30 0x3ffa2c9b009 in method_vectorcall_VARARGS_KEYWORDS Objects/descrobject.c:344 #31 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 #32 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 #33 0x3ffa2e05447 in call_function Python/ceval.c:5891 #34 0x3ffa2dff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198 #35 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 #36 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 #37 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 #38 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255 #39 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290 #40 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317 #41 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943 #42 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277 #43 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 #44 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 #45 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 #46 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255 #47 
0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290 #48 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317 #49 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943 #50 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277 #51 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 #52 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 #53 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 #54 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 #55 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 #56 0x3ffa2e05447 in call_function Python/ceval.c:5891 #57 0x3ffa2dff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198 #58 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 #59 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 #60 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 #61 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 #62 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53 #63 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 #64 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 #65 0x3ffa2e05447 in call_function Python/ceval.c:5891 #66 0x3ffa2dff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213 #67 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 #68 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 #69 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 #70 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 #71 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 #72 0x3ffa2e05447 in call_function Python/ceval.c:5891 #73 0x3ffa2dff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198 #74 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 #75 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 #76 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 #77 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 #78 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53 #79 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 #80 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 #81 0x3ffa2e05447 in call_function Python/ceval.c:5891 #82 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231 #83 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 #84 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 #85 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 #86 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 #87 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53 #88 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 #89 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 #90 0x3ffa2e05447 in call_function Python/ceval.c:5891 #91 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231 #92 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 #93 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 #94 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 #95 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 #96 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53 #97 0x3ffa2c8ab9b in PyVectorcall_Call Objects/call.c:267 #98 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290 #99 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317 #100 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943 #101 
0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277 #102 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 #103 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 #104 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 #105 0x3ffa2c8a695 in _PyObject_FastCallDictTstate Objects/call.c:153 #106 0x3ffa2c8b271 in _PyObject_Call_Prepend Objects/call.c:431 #107 0x3ffa2d3f307 in slot_tp_call Objects/typeobject.c:7494 #108 0x3ffa2c8a933 in _PyObject_MakeTpCall Objects/call.c:215 #109 0x3ffa2df0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112 #110 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 #111 0x3ffa2e05447 in call_function Python/ceval.c:5891 #112 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231 #113 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 #114 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 #115 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 #116 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 #117 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 #118 0x3ffa2e05447 in call_function Python/ceval.c:5891 #119 0x3ffa2dff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198 #120 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 #121 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 #122 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 #123 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255 #124 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290 #125 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317 #126 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943 #127 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277 #128 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 #129 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 #130 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 #131 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 #132 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 #133 0x3ffa2e05447 in call_function Python/ceval.c:5891 #134 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181 #135 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 #136 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 #137 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 #138 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 #139 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53 #140 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 #141 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 #142 0x3ffa2e05447 in call_function Python/ceval.c:5891 #143 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181 #144 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 #145 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 #146 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 #147 0x3ffa2c8a695 in _PyObject_FastCallDictTstate Objects/call.c:153 #148 0x3ffa2c8b271 in _PyObject_Call_Prepend Objects/call.c:431 #149 0x3ffa2d3f307 in slot_tp_call Objects/typeobject.c:7494 #150 0x3ffa2c8ad17 in _PyObject_Call Objects/call.c:305 #151 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317 #152 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943 #153 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277 #154 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 #155 
0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 #156 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 #157 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 #158 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 #159 0x3ffa2e05447 in call_function Python/ceval.c:5891 #160 0x3ffa2dff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213 #161 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 #162 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 #163 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 #164 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 #165 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53 #166 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 #167 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 #168 0x3ffa2e05447 in call_function Python/ceval.c:5891 #169 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231 #170 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 #171 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 #172 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 #173 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255 #174 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290 #175 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317 #176 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943 #177 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277 #178 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 #179 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 #180 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 #181 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 #182 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 #183 0x3ffa2e05447 in call_function Python/ceval.c:5891 #184 0x3ffa2dff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213 #185 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 #186 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 #187 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 #188 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 #189 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 #190 0x3ffa2e05447 in call_function Python/ceval.c:5891 #191 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231 #192 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 #193 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 #194 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 #195 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255 #196 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290 #197 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317 #198 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943 #199 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277 #200 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 #201 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 #202 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 #203 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 #204 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 #205 0x3ffa2e05447 in call_function Python/ceval.c:5891 #206 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181 #207 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 #208 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 #209 0x3ffa2c8aec1 in 
_PyFunction_Vectorcall Objects/call.c:342 #210 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 #211 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53 #212 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 #213 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 #214 0x3ffa2e05447 in call_function Python/ceval.c:5891 #215 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181 #216 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 #217 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 #218 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 #219 0x3ffa2c8a695 in _PyObject_FastCallDictTstate Objects/call.c:153 #220 0x3ffa2c8b271 in _PyObject_Call_Prepend Objects/call.c:431 #221 0x3ffa2d3f307 in slot_tp_call Objects/typeobject.c:7494 #222 0x3ffa2c8a933 in _PyObject_MakeTpCall Objects/call.c:215 #223 0x3ffa2df0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112 #224 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 #225 0x3ffa2e05447 in call_function Python/ceval.c:5891 #226 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231 #227 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 #228 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 #229 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 #230 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255 #231 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290 #232 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317 #233 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943 #234 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277 #235 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 #236 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 #237 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 #238 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 #239 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 #240 0x3ffa2e05447 in call_function Python/ceval.c:5891 #241 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181 #242 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 #243 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 #244 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 #245 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 #246 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53 #247 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 #248 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 #249 0x3ffa2e05447 in call_function Python/ceval.c:5891 #250 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181 #251 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 #252 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 #253 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 #254 0x3ffa2c8a695 in _PyObject_FastCallDictTstate Objects/call.c:153 #255 0x3ffa2c8b271 in _PyObject_Call_Prepend Objects/call.c:431 #256 0x3ffa2d3f307 in slot_tp_call Objects/typeobject.c:7494 #257 0x3ffa2c8a933 in _PyObject_MakeTpCall Objects/call.c:215 0x61000013d790 is located 80 bytes inside of 192-byte region [0x61000013d740,0x61000013d800) freed by thread T0 here: #0 0x3ffa3237de5 in operator delete(void*) /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:160 #1 0x3ff8e7e3221 in c10::TensorImpl::~TensorImpl()
/home/user/pytorch/c10/core/TensorImpl.cpp:75 previously allocated by thread T0 here: #0 0x3ffa323734f in operator new(unsigned long) /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:99 #1 0x3ff4aeeb3d1 in c10::intrusive_ptr<c10::TensorImpl, c10::detail::intrusive_target_default_null_type<c10::TensorImpl> > c10::intrusive_ptr<c10::TensorImpl, c10::detail::intrusive_target_default_nul l_type<c10::TensorImpl> >::make<c10::intrusive_ptr<c10::StorageImpl, c10::detail::intrusive_target_default_null_type<c10::StorageImpl> >, c10::DispatchKeySet&, caffe2::TypeMeta&>(c10::intrusive_ptr<c10::S torageImpl, c10::detail::intrusive_target_default_null_type<c10::StorageImpl> >&&, c10::DispatchKeySet&, caffe2::TypeMeta&) /home/user/pytorch/c10/util/intrusive_ptr.h:498 #2 0x3ff76f79e17 (/home/user/pytorch/build/lib.linux-s390x-cpython-310/torch/lib/libtorch_cpu.so+0x2fb79e17) SUMMARY: AddressSanitizer: heap-use-after-free /home/user/pytorch/c10/core/SymInt.h:154 in c10::SymInt::is_heap_allocated() const Shadow bytes around the buggy address: 0x100c2000027aa0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd 0x100c2000027ab0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 0x100c2000027ac0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd 0x100c2000027ad0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 0x100c2000027ae0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd =>0x100c2000027af0: fd fd[fd]fd fd fd fd fd fd fd fd fd fd fd fd fd 0x100c2000027b00: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00 0x100c2000027b10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x100c2000027b20: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00 0x100c2000027b30: 00 00 00 00 04 fa fa fa fa fa fa fa fa fa fa fa 0x100c2000027b40: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa Shadow byte legend (one shadow byte represents 8 application bytes): Addressable: 00 Partially addressable: 01 02 03 04 05 06 07 Heap left redzone: fa Freed heap region: fd Stack left redzone: f1 Stack mid redzone: f2 Stack right redzone: f3 Stack after return: f5 Stack use after scope: f8 Global redzone: f9 Global init order: f6 Poisoned by user: f7 Container overflow: fc Array cookie: ac Intra object redzone: bb ASan internal: fe Left alloca redzone: ca Right alloca redzone: cb Shadow gap: cc ==1115867==ABORTING ``` </details> <details> <summary>Additional backtraces (not full)</summary> Memory deallocation: ``` #0 operator delete (ptr=0x61000013d740) at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:160 #1 0x000003ffa77e3222 in c10::TensorImpl::~TensorImpl (this=0x61000013d740) at /home/user/pytorch/c10/core/TensorImpl.cpp:75 #2 0x000003ff63e76e8c in c10::intrusive_ptr<c10::TensorImpl, c10::UndefinedTensorImpl>::reset_ (this=0x3ffd7ec8230) at /home/user/pytorch/c10/util/intrusive_ptr.h:291 #3 0x000003ff63e76910 in c10::intrusive_ptr<c10::TensorImpl, c10::UndefinedTensorImpl>::~intrusive_ptr (this=0x3ffd7ec8230) at /home/user/pytorch/c10/util/intrusive_ptr.h:370 #4 0x000003ff63e67240 in at::TensorBase::~TensorBase (this=0x3ffd7ec8230) at /home/user/pytorch/aten/src/ATen/core/TensorBase.h:80 #5 0x000003ff63e85ee0 in at::Tensor::~Tensor (this=0x3ffd7ec8230) at aten/src/ATen/core/TensorBody.h:90 #6 0x000003ff63f67304 in resize__functionalization (dispatchKeySet=..., self=..., size=..., memory_format=...) 
at /home/user/pytorch/aten/src/ATen/FunctionalizeFallbackKernel.cpp:173 #7 0x000003ff63f89258 in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>), &(resize__functionalization(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>))>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat> > >::operator()(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>) ( this=0x6030000390a0, args=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13 #8 c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>), &(resize__functionalization(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>))>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>) (functor=0x6030000390a0, dispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:480 #9 0x000003ff6aca560a in c10::callUnboxedKernelFunction<at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat> > ( unboxed_kernel_func=0x3ff63f88a80 <c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tenso r const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>), &(resize__functionalization(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>))>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>)>, functor=0x6030000390a0, dispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50 #10 0x000003ff6aca715c in c10::KernelFunction::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > (this=0x6210005e1b28, opHandle=..., dispatchKeySet=..., args=..., args=..., args=...) 
at /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:96 #11 c10::Dispatcher::redispatch<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)> const&, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const ( this=0x3ff919400e0 <c10::Dispatcher::realSingleton()::_singleton>, op=..., currentDispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:656 #12 0x000003ff6a82006c in c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const ( this=0x3ff919a07e0 <at::_ops::resize_::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)::op>, currentDispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:492 #13 at::_ops::resize_::redispatch (dispatchKeySet=..., self=..., size=..., memory_format=...) at /home/user/pytorch/build/aten/src/ATen/Operators_4.cpp:2144 #14 0x000003ff861d5e08 in at::redispatch::resize__symint (dispatchKeySet=..., self=..., size=..., memory_format=...) at aten/src/ATen/RedispatchFunctions.h:2847 #15 0x000003ff861b579e in torch::ADInplaceOrView::resize_ (ks=..., self=..., size=..., optional_memory_format=...) at /home/user/pytorch/torch/csrc/autograd/VariableTypeManual.cpp:401 ``` Memory access: ``` #0 c10::SymInt::maybe_as_int (this=0x61000013d790) at /home/user/pytorch/c10/core/SymInt.h:215 #1 0x000003ff734d0a6e in c10::SymInt::sym_eq (this=0x61000013d790, sci=...) at /home/user/pytorch/c10/core/SymInt.cpp:69 #2 0x000003ff5f6ab0be in c10::SymInt::operator== (this=0x61000013d790, o=...) at /home/user/pytorch/c10/core/SymInt.h:177 #3 0x000003ff5f6aaede in std::__equal<false>::equal<c10::SymInt const*, c10::SymInt const*> (__first1=0x61000013d790, __last1=0x61000013d7a0, __first2=0x602000015c30) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1162 #4 0x000003ff5f6aae4c in std::__equal_aux1<c10::SymInt const*, c10::SymInt const*> (__first1=0x61000013d790, __last1=0x61000013d7a0, __first2=0x602000015c30) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1211 #5 0x000003ff5f6aae06 in std::__equal_aux<c10::SymInt const*, c10::SymInt const*> (__first1=0x61000013d790, __last1=0x61000013d7a0, __first2=0x602000015c30) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1219 #6 0x000003ff5f6aad98 in std::equal<c10::SymInt const*, c10::SymInt const*> (__first1=0x61000013d790, __last1=0x61000013d7a0, __first2=0x602000015c30) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1556 #7 0x000003ff2ff3c772 in c10::ArrayRef<c10::SymInt>::equals (this=0x3ffed7c9900, RHS=...) at /home/user/pytorch/c10/util/ArrayRef.h:188 #8 0x000003ff31891bc2 in c10::operator!=<c10::SymInt> (a1=..., a2=...) at /home/user/pytorch/c10/util/ArrayRef.h:341 #9 0x000003ff51eb5800 in torch::ADInplaceOrView::resize_ (ks=..., self=..., size=..., optional_memory_format=...) 
at /home/user/pytorch/torch/csrc/autograd/VariableTypeManual.cpp:408 #10 0x000003ff51ee59c8 in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c 10::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > >::operator()(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) (this=0x6030007dca40, args=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13 #11 c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt >, c10::optional<c10::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional< c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tenso r const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) (functor=0x6030007dca40, dispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:480 #12 0x000003ff369a512a in c10::callUnboxedKernelFunction<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > ( unboxed_kernel_func=0x3ff51ee51f0 <c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tenso r const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::Ar rayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKern el*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>, functor=0x6030007dca40, dispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50 #13 0x000003ff369a6e90 in c10::KernelFunction::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > (this=0x6210005e1bc8, opHandle=..., dispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:90 #14 c10::Dispatcher::redispatch<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::Arr ayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)> const&, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const ( this=0x3ff5d6400e0 <c10::Dispatcher::realSingleton()::_singleton>, op=..., currentDispatchKeySet=..., args=..., args=..., args=...) 
at /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:656 #15 0x000003ff3652006c in c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const ( this=0x3ff5d6a07e0 <at::_ops::resize_::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)::op>, currentDispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:492 #16 at::_ops::resize_::redispatch (dispatchKeySet=..., self=..., size=..., memory_format=...) at /home/user/pytorch/build/aten/src/ATen/Operators_4.cpp:2144 #17 0x000003ff51ed5e08 in at::redispatch::resize__symint (dispatchKeySet=..., self=..., size=..., memory_format=...) at aten/src/ATen/RedispatchFunctions.h:2847 #18 0x000003ff51ebbb68 in torch::autograd::VariableType::(anonymous namespace)::resize_ (ks=..., self=..., size=..., optional_memory_format=...) at /home/user/pytorch/torch/csrc/autograd/VariableTypeManual.cpp:243 ``` </details> Pull Request resolved: #101064 Approved by: https://github.com/Skylion007, https://github.com/albanD
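The commit message above is terse, so here is a self-contained sketch of the failure mode it describes, using a plain `c10::ArrayRef` over a `std::vector` as a stand-in for the tensor's size array (the names are illustrative; the actual fix lives in `VariableTypeManual.cpp`): an `ArrayRef` is a non-owning view, so any operation that reallocates the underlying storage leaves the view dangling, and the remedy is to copy the values out before mutating.

```
#include <c10/util/ArrayRef.h>
#include <cstdint>
#include <vector>

int main() {
  // Buggy pattern: take a non-owning view of the sizes, then mutate the
  // owner. resize_() can reallocate the sizes array much like assign()
  // can reallocate a vector's buffer, so `before` dangles afterwards.
  std::vector<int64_t> sizes = {2, 3};
  c10::ArrayRef<int64_t> before(sizes);  // view into sizes' current buffer
  sizes.assign({4, 5, 6});               // may reallocate: `before` now dangles
  // Comparing `before` against anything here would read freed memory,
  // which is the heap-use-after-free ASAN flagged in ADInplaceOrView::resize_.

  // Fixed pattern, per the commit message: make a copy in advance.
  std::vector<int64_t> sizes2 = {2, 3};
  std::vector<int64_t> saved(sizes2.begin(), sizes2.end());  // owning copy
  sizes2.assign({4, 5, 6});
  bool changed = c10::ArrayRef<int64_t>(saved) != c10::ArrayRef<int64_t>(sizes2);
  return changed ? 0 : 1;
}
```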
pytorchmergebot pushed a commit that referenced this issue on May 26, 2023
3 disabled functions are attempting out of bounds reads. Disable them until sleef library is fixed. <details> <summary>ASAN report</summary> ``` ================================================================= ==2030580==ERROR: AddressSanitizer: global-buffer-overflow on address 0x03ff70f54570 at pc 0x03ff6704e960 bp 0x03ffce128940 sp 0x03ffce128930 READ of size 4 at 0x03ff70f54570 thread T0 #0 0x3ff6704e95f in vgather_vf_p_vi2 /home/user/pytorch/third_party/sleef/src/arch/helpers390x_128.h:129 #1 0x3ff6704e95f in rempif /home/user/pytorch/third_party/sleef/src/libm/sleefsimdsp.c:550 #2 0x3ff6704e95f in Sleef_cosf4_u10vxe2 /home/user/pytorch/third_party/sleef/src/libm/sleefsimdsp.c:1021 #3 0x3ff67029cfb in Sleef_cosf4_u10 /home/user/pytorch/build/sleef/src/libm/disps390x_128.c:182 #4 0x3ff55d21941 in at::vec::ZVECTOR::Vectorized<float, void> at::vec::ZVECTOR::Vectorized<float, void>::mapSleef<float __vector(4) const (*)(float __vector(4)), double __vector(2) const (*)(double __ vector(2)), float, 0>(float __vector(4) const (*)(float __vector(4)), double __vector(2) const (*)(double __vector(2))) const /home/user/pytorch/aten/src/ATen/cpu/vec/vec256/zarch/vec256_zarch.h:991 #5 0x3ff5689ad01 in at::vec::ZVECTOR::Vectorized<float, void>::cos() const /home/user/pytorch/aten/src/ATen/cpu/vec/vec256/zarch/vec256_zarch.h:1074 #6 0x3ff5685df97 in at::vml::ZVECTOR::vcos<float>(float*, float const*, long)::{lambda(at::vec::ZVECTOR::Vectorized<float, void>)#1}::operator()(at::vec::ZVECTOR::Vectorized<float, void>) const /home/ user/pytorch/aten/src/ATen/cpu/vml.h:71 #7 0x3ff5689b691 in void at::vec::map<float, at::vml::ZVECTOR::vcos<float>(float*, float const*, long)::{lambda(at::vec::ZVECTOR::Vectorized<float, void>)#1}, 0>(at::vml::ZVECTOR::vcos<float>(float*, float const*, long)::{lambda(at::vec::ZVECTOR::Vectorized<float, void>)#1} const&, float*, float const*, long) /home/user/pytorch/aten/src/ATen/cpu/vec/functional_base.h:239 #8 0x3ff5685e0df in void at::vml::ZVECTOR::vcos<float>(float*, float const*, long) /home/user/pytorch/aten/src/ATen/cpu/vml.h:71 #9 0x3ff563fdde3 in operator() /home/user/pytorch/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp:770 #10 0x3ff5648e4a3 in operator() /home/user/pytorch/aten/src/ATen/TensorIterator.h:406 #11 0x3ff5663cae1 in callback_fn<at::TensorIteratorBase::loop_2d_from_1d<at::native::ZVECTOR::cos_kernel(at::TensorIteratorBase&)::<lambda()>::<lambda()>::<lambda(char**, const int64_t*, int64_t)> >(c onst at::native::ZVECTOR::cos_kernel(at::TensorIteratorBase&)::<lambda()>::<lambda()>::<lambda(char**, const int64_t*, int64_t)>&)::<lambda(char**, const int64_t*, int64_t, int64_t)> > /home/user/pytorch/ c10/util/FunctionRef.h:43 #12 0x3ff4d45a933 in c10::function_ref<void (char**, long const*, long, long)>::operator()(char**, long const*, long, long) const /home/user/pytorch/c10/util/FunctionRef.h:64 #13 0x3ff4d455133 in at::internal::serial_for_each(c10::ArrayRef<long>, c10::ArrayRef<long>, char**, unsigned long, c10::function_ref<void (char**, long const*, long, long)>, at::Range) /home/user/pyt orch/aten/src/ATen/TensorIteratorInternal.h:52 #14 0x3ff4d43b703 in at::TensorIteratorBase::serial_for_each(c10::function_ref<void (char**, long const*, long, long)>, at::Range) const /home/user/pytorch/aten/src/ATen/TensorIterator.cpp:777 #15 0x3ff4d43ab59 in at::TensorIteratorBase::for_each(c10::function_ref<void (char**, long const*, long, long)>, long) /home/user/pytorch/aten/src/ATen/TensorIterator.cpp:749 #16 0x3ff5648e851 in 
for_each<at::native::ZVECTOR::cos_kernel(at::TensorIteratorBase&)::<lambda()>::<lambda()>::<lambda(char**, const int64_t*, int64_t)> > /home/user/pytorch/aten/src/ATen/TensorItera tor.h:421 #17 0x3ff563fe5f9 in operator() /home/user/pytorch/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp:770 #18 0x3ff56400915 in operator() /home/user/pytorch/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp:770 #19 0x3ff56400f1d in at::native::ZVECTOR::cos_kernel(at::TensorIteratorBase&) /home/user/pytorch/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp:770 #20 0x3ff4f303007 in void at::native::DispatchStub<void (*)(at::TensorIteratorBase&), at::native::cos_stub>::operator()<at::native::structured_cos_out&>(c10::DeviceType, at::native::structured_cos_out &) /home/user/pytorch/aten/src/ATen/native/DispatchStub.h:158 #21 0x3ff4f2edb3f in at::native::structured_cos_out::impl(at::Tensor const&, at::Tensor const&) /home/user/pytorch/aten/src/ATen/native/UnaryOps.cpp:330 #22 0x3ff526ef739 in wrapper_CPU_cos /home/user/pytorch/build/aten/src/ATen/RegisterCPU.cpp:4307 #23 0x3ff52c651d9 in operator() /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13 #24 0x3ff52c651d9 in call /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:463 #25 0x3ff5076df2f in at::Tensor c10::callUnboxedKernelFunction<at::Tensor, at::Tensor const&>(void*, c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&) /home/user/pytorch/aten/src/ATen/core /boxing/KernelFunction_impl.h:50 #26 0x3ff5009a93f in at::Tensor c10::KernelFunction::call<at::Tensor, at::Tensor const&>(c10::OperatorHandle const&, c10::DispatchKeySet, at::Tensor const&) const /home/user/pytorch/aten/src/ATen/core /boxing/KernelFunction_impl.h:103 #27 0x3ff5009a93f in at::Tensor c10::Dispatcher::call<at::Tensor, at::Tensor const&>(c10::TypedOperatorHandle<at::Tensor (at::Tensor const&)> const&, at::Tensor const&) const /home/user/pytorch/aten/s rc/ATen/core/dispatch/Dispatcher.h:639 #28 0x3ff5009a93f in c10::TypedOperatorHandle<at::Tensor (at::Tensor const&)>::call(at::Tensor const&) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:487 #29 0x3ff5009a93f in at::_ops::cos::call(at::Tensor const&) /home/user/pytorch/build/aten/src/ATen/Operators_0.cpp:2215 #30 0x3ff7d813741 in at::Tensor::cos() const /home/user/pytorch/build/aten/src/ATen/core/TensorBody.h:2107 #31 0x3ff7dc0f2b7 in operator() /home/user/pytorch/torch/csrc/autograd/generated/python_torch_functions_2.cpp:2953 #32 0x3ff7dc0faf7 in THPVariable_cos /home/user/pytorch/torch/csrc/autograd/generated/python_torch_functions_2.cpp:2955 #33 0x3ffa5ef5ae1 in cfunction_call Objects/methodobject.c:543 #34 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305 #35 0x3ffa5e84483 in PyObject_Call Objects/call.c:317 #36 0x3ffa5feb50d in do_call_core Python/ceval.c:5915 #37 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277 #38 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 #39 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065 #40 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342 #41 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255 #42 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290 #43 0x3ffa5e84483 in PyObject_Call Objects/call.c:317 #44 0x3ff7f87a393 in torch::impl::dispatch::PythonKernelHolder::operator()(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) /home/user/pytorch/ torch/csrc/utils/python_dispatch.cpp:175 #45 0x3ff7f8871a7 in 
c10::BoxedKernel::makeFromFunctor<torch::impl::dispatch::PythonKernelHolder>(std::unique_ptr<torch::impl::dispatch::PythonKernelHolder, std::default_delete<torch::impl::dispatch:: PythonKernelHolder> >)::{lambda(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*)#1}::operator()(c10::OperatorKernel*, c10::Op eratorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/boxing/BoxedKernel_impl.h:87 #46 0x3ff7f887261 in c10::BoxedKernel::makeFromFunctor<torch::impl::dispatch::PythonKernelHolder>(std::unique_ptr<torch::impl::dispatch::PythonKernelHolder, std::default_delete<torch::impl::dispatch:: PythonKernelHolder> >)::{lambda(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*)#1}::_FUN(c10::OperatorKernel*, c10::Operator Handle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) /home/user/pytorch/aten/src/ATen/core/boxing/BoxedKernel_impl.h:86 #47 0x3ff7e0d10ab in c10::BoxedKernel::callBoxed(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/b oxing/BoxedKernel_impl.h:41 #48 0x3ff7e0d1459 in c10::KernelFunction::callBoxed(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/cor e/boxing/KernelFunction_impl.h:43 #49 0x3ff7f876421 in c10::Dispatcher::callBoxed(c10::OperatorHandle const&, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:6 91 #50 0x3ff4d22bcdd in c10::OperatorHandle::callBoxed(std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:417 #51 0x3ff65a092d5 in c10::OperatorHandle::callBoxed(std::vector<c10::IValue, std::allocator<c10::IValue> >&) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:421 #52 0x3ff65a05641 in operator() /home/user/pytorch/torch/csrc/jit/runtime/register_c10_ops.cpp:15 #53 0x3ff65a08cb5 in __invoke_impl<void, torch::jit::(anonymous namespace)::createOperatorFromC10(const c10::OperatorHandle&)::<lambda(torch::jit::Stack&)>&, std::vector<c10::IValue, std::allocator<c1 0::IValue> >&> /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/invoke.h:61 #54 0x3ff65a0897b in __invoke_r<void, torch::jit::(anonymous namespace)::createOperatorFromC10(const c10::OperatorHandle&)::<lambda(torch::jit::Stack&)>&, std::vector<c10::IValue, std::allocator<c10:: IValue> >&> /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/invoke.h:111 #55 0x3ff65a084e1 in _M_invoke /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/std_function.h:290 #56 0x3ff7eb2cb21 in std::function<void (std::vector<c10::IValue, std::allocator<c10::IValue> >&)>::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) const /usr/lib/gcc/s390x-ibm-lin ux-gnu/11/include/g++-v11/bits/std_function.h:590 #57 0x3ff7eb1b659 in torch::jit::Operation::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) /home/user/pytorch/aten/src/ATen/core/stack.h:41 #58 0x3ff7eb08449 in torch::jit::invokeOperatorFromPython(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, pybind11::args, pybind11:: kwargs const&, 
c10::optional<c10::DispatchKey>) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:764
#59 0x3ff7eb09d85 in torch::jit::_get_operation_for_overload_or_packet(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, c10::Symbol, pybind11::args, pybind11::kwargs const&, bool, c10::optional<c10::DispatchKey>) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:829
#60 0x3ff7e573eb9 in operator() /home/user/pytorch/torch/csrc/jit/python/init.cpp:1549
#61 0x3ff7e6728dd in call_impl<pybind11::object, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&, 0, 1, pybind11::detail::void_type> /home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1439
#62 0x3ff7e64312f in call<pybind11::object, pybind11::detail::void_type, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&> /home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1408
#63 0x3ff7e5da259 in operator() /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:249
#64 0x3ff7e5da441 in _FUN /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:224
#65 0x3ff7d317a1f in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:929
#66 0x3ffa5ef5ae1 in cfunction_call Objects/methodobject.c:543
#67 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
#68 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#69 0x3ffa5feb50d in do_call_core Python/ceval.c:5915
#70 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#71 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#72 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#73 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#74 0x3ffa5e83d1f in _PyObject_FastCallDictTstate Objects/call.c:142
#75 0x3ffa5e84937 in _PyObject_Call_Prepend Objects/call.c:431
#76 0x3ffa5f2f577 in slot_tp_call Objects/typeobject.c:7494
#77 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
#78 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#79 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
#80 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#81 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#82 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#83 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#84 0x3ffa5fd76a3 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#85 0x3ffa5fd772f in PyObject_Vectorcall Include/cpython/abstract.h:123
#86 0x3ffa5feb289 in call_function Python/ceval.c:5891
#87 0x3ffa5fe5c3b in _PyEval_EvalFrameDefault Python/ceval.c:4213
#88 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#89 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#90 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#91 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
#92 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
#93 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#94 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
#95 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#96 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#97 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#98 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#99 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
#100 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
#101 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#102 0x3ff7f87a393 in torch::impl::dispatch::PythonKernelHolder::operator()(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) /home/user/pytorch/torch/csrc/utils/python_dispatch.cpp:175
#103 0x3ff7f8871a7 in c10::BoxedKernel::makeFromFunctor<torch::impl::dispatch::PythonKernelHolder>(std::unique_ptr<torch::impl::dispatch::PythonKernelHolder, std::default_delete<torch::impl::dispatch::PythonKernelHolder> >)::{lambda(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*)#1}::operator()(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/boxing/BoxedKernel_impl.h:87
#104 0x3ff7f887261 in c10::BoxedKernel::makeFromFunctor<torch::impl::dispatch::PythonKernelHolder>(std::unique_ptr<torch::impl::dispatch::PythonKernelHolder, std::default_delete<torch::impl::dispatch::PythonKernelHolder> >)::{lambda(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*)#1}::_FUN(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) /home/user/pytorch/aten/src/ATen/core/boxing/BoxedKernel_impl.h:86
#105 0x3ff7e0d10ab in c10::BoxedKernel::callBoxed(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/boxing/BoxedKernel_impl.h:41
#106 0x3ff7e0d1459 in c10::KernelFunction::callBoxed(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:43
#107 0x3ff7f876421 in c10::Dispatcher::callBoxed(c10::OperatorHandle const&, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:691
#108 0x3ff4d22bcdd in c10::OperatorHandle::callBoxed(std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:417
#109 0x3ff65a092d5 in c10::OperatorHandle::callBoxed(std::vector<c10::IValue, std::allocator<c10::IValue> >&) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:421
#110 0x3ff65a05641 in operator() /home/user/pytorch/torch/csrc/jit/runtime/register_c10_ops.cpp:15
#111 0x3ff65a08cb5 in __invoke_impl<void, torch::jit::(anonymous namespace)::createOperatorFromC10(const c10::OperatorHandle&)::<lambda(torch::jit::Stack&)>&, std::vector<c10::IValue, std::allocator<c10::IValue> >&> /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/invoke.h:61
#112 0x3ff65a0897b in __invoke_r<void, torch::jit::(anonymous namespace)::createOperatorFromC10(const c10::OperatorHandle&)::<lambda(torch::jit::Stack&)>&, std::vector<c10::IValue, std::allocator<c10::IValue> >&> /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/invoke.h:111
#113 0x3ff65a084e1 in _M_invoke /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/std_function.h:290
#114 0x3ff7eb2cb21 in std::function<void (std::vector<c10::IValue, std::allocator<c10::IValue> >&)>::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) const /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/std_function.h:590
#115 0x3ff7eb1b659 in torch::jit::Operation::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) /home/user/pytorch/aten/src/ATen/core/stack.h:41
#116 0x3ff7eb08449 in torch::jit::invokeOperatorFromPython(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, pybind11::args, pybind11::kwargs const&, c10::optional<c10::DispatchKey>) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:764
#117 0x3ff7eb09d85 in torch::jit::_get_operation_for_overload_or_packet(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, c10::Symbol, pybind11::args, pybind11::kwargs const&, bool, c10::optional<c10::DispatchKey>) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:829
#118 0x3ff7e573eb9 in operator() /home/user/pytorch/torch/csrc/jit/python/init.cpp:1549
#119 0x3ff7e6728dd in call_impl<pybind11::object, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&, 0, 1, pybind11::detail::void_type> /home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1439
#120 0x3ff7e64312f in call<pybind11::object, pybind11::detail::void_type, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&> /home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1408
#121 0x3ff7e5da259 in operator() /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:249
#122 0x3ff7e5da441 in _FUN /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:224
#123 0x3ff7d317a1f in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:929
#124 0x3ffa5ef5ae1 in cfunction_call Objects/methodobject.c:543
#125 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
#126 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#127 0x3ffa5feb50d in do_call_core Python/ceval.c:5915
#128 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#129 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#130 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#131 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#132 0x3ffa5e83d1f in _PyObject_FastCallDictTstate Objects/call.c:142
#133 0x3ffa5e84937 in _PyObject_Call_Prepend Objects/call.c:431
#134 0x3ffa5f2f577 in slot_tp_call Objects/typeobject.c:7494
#135 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
#136 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#137 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
#138 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#139 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#140 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#141 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#142 0x3ffa5e87d2b in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#143 0x3ffa5e882dd in method_vectorcall Objects/classobject.c:83
#144 0x3ffa5e836d3 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#145 0x3ffa5e84b6f in _PyObject_CallFunctionVa Objects/call.c:485
#146 0x3ffa5e84f2d in callmethod Objects/call.c:557
#147 0x3ffa5e85039 in PyObject_CallMethod Objects/call.c:577
#148 0x3ff7f7efa05 in torch::handle_torch_function_no_python_arg_parser(c10::ArrayRef<pybind11::handle>, _object*, _object*, char const*, _object*, char const*, torch::TorchFunctionName) /home/user/pytorch/torch/csrc/utils/python_arg_parser.cpp:338
#149 0x3ff7eb09b67 in torch::jit::_get_operation_for_overload_or_packet(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, c10::Symbol, pybind11::args, pybind11::kwargs const&, bool, c10::optional<c10::DispatchKey>) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:827
#150 0x3ff7e573eb9 in operator() /home/user/pytorch/torch/csrc/jit/python/init.cpp:1549
#151 0x3ff7e6728dd in call_impl<pybind11::object, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&, 0, 1, pybind11::detail::void_type> /home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1439
#152 0x3ff7e64312f in call<pybind11::object, pybind11::detail::void_type, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&> /home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1408
#153 0x3ff7e5da259 in operator() /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:249
#154 0x3ff7e5da441 in _FUN /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:224
#155 0x3ff7d317a1f in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:929
#156 0x3ffa5ef5ae1 in cfunction_call Objects/methodobject.c:543
#157 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
#158 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#159 0x3ffa5feb50d in do_call_core Python/ceval.c:5915
#160 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#161 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#162 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#163 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#164 0x3ffa5e83d1f in _PyObject_FastCallDictTstate Objects/call.c:142
#165 0x3ffa5e84937 in _PyObject_Call_Prepend Objects/call.c:431
#166 0x3ffa5f2f577 in slot_tp_call Objects/typeobject.c:7494
#167 0x3ffa5e84027 in _PyObject_MakeTpCall Objects/call.c:215
#168 0x3ffa5fd767b in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
#169 0x3ffa5fd772f in PyObject_Vectorcall Include/cpython/abstract.h:123
#170 0x3ffa5feb289 in call_function Python/ceval.c:5891
#171 0x3ffa5fe5ad1 in _PyEval_EvalFrameDefault Python/ceval.c:4181
#172 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#173 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#174 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#175 0x3ffa5fd76a3 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#176 0x3ffa5fd772f in PyObject_Vectorcall Include/cpython/abstract.h:123
#177 0x3ffa5feb289 in call_function Python/ceval.c:5891
#178 0x3ffa5fe5c3b in _PyEval_EvalFrameDefault Python/ceval.c:4213
#179 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#180 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#181 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#182 0x3ffa5e8427f in PyVectorcall_Call Objects/call.c:267
#183 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
#184 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#185 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
#186 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#187 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#188 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#189 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#190 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
#191 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
#192 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#193 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
#194 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#195 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#196 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#197 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#198 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
#199 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
#200 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#201 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
#202 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#203 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#204 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#205 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#206 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
#207 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
#208 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#209 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
#210 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#211 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#212 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#213 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#214 0x3ffa5e83d1f in _PyObject_FastCallDictTstate Objects/call.c:142
#215 0x3ffa5e84937 in _PyObject_Call_Prepend Objects/call.c:431
#216 0x3ffa5f2f577 in slot_tp_call Objects/typeobject.c:7494
#217 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
#218 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#219 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
#220 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#221 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#222 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#223 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#224 0x3ffa5fd76a3 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#225 0x3ffa5fd772f in PyObject_Vectorcall Include/cpython/abstract.h:123
#226 0x3ffa5feb289 in call_function Python/ceval.c:5891
#227 0x3ffa5fe5b21 in _PyEval_EvalFrameDefault Python/ceval.c:4198
#228 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#229 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#230 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#231 0x3ffa5e8427f in PyVectorcall_Call Objects/call.c:267
#232 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
#233 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#234 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
#235 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#236 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#237 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#238 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#239 0x3ffa5e8427f in PyVectorcall_Call Objects/call.c:267
#240 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
#241 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#242 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
#243 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#244 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#245 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#246 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#247 0x3ffa5e8427f in PyVectorcall_Call Objects/call.c:267
#248 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
#249 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#250 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
#251 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#252 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#253 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#254 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#255 0x3ffa5e8427f in PyVectorcall_Call Objects/call.c:267

0x03ff70f54570 is located 0 bytes to the right of global variable 'Sleef_rempitabsp' defined in '/home/user/pytorch/third_party/sleef/src/libm/rempitab.c:986:34' (0x3ff70f53f00) of size 1648
SUMMARY: AddressSanitizer: global-buffer-overflow /home/user/pytorch/third_party/sleef/src/arch/helpers390x_128.h:129 in vgather_vf_p_vi2
Shadow bytes around the buggy address:
  0x10007fee1ea850: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea860: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea870: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea880: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea890: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x10007fee1ea8a0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00[f9]f9
  0x10007fee1ea8b0: f9 f9 f9 f9 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea8c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea8d0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea8e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea8f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
  Shadow gap:              cc
==2030580==ABORTING
```
</details>

It reproduces when running `pytest -v test/test_ops.py -k test_python_ref__refs_cos_cpu_bfloat16` under address sanitizer on s390x.

See also: shibatch/sleef#464

Pull Request resolved: #102266
Approved by: https://github.com/malfet
Somehow,
torch.IntTensor(3).random_(10).sum()
returns a float instead of an int. It's not a huge problem, but it is weird, and it requires calling `int()` where it's not really expected.
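For reference, a minimal sketch of the reported behavior and the workaround it forces. The comments describe the behavior as reported for builds of that era; on newer releases the result keeps an integer dtype (as a zero-dim tensor), so the cast is no longer needed:

```python
import torch

# An int32 tensor of length 3, filled with random integers in [0, 10).
t = torch.IntTensor(3).random_(10)

s = t.sum()
# As reported: s comes back as a float even though every element of t
# is an integer, so code that expects an int needs an explicit cast.
total = int(s)

print(type(s), total)
```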