Extend CSR constructor to support batched indices and values #74542
Conversation
💊 CI failures summary and remediations: as of commit 0c79aa7 (more details on the Dr. CI page), ✅ none of the CI failures appear to be your fault 💚
🚧 1 fixed upstream failure: these were probably caused by upstream breakages that were already fixed. Please rebase on the viable/strict branch.
```cpp
// std::array<int64_t, 2> size = {0, 0};
auto size = DimVector(IntArrayRef(col_indices.sizes().data(), col_indices.dim() - 1));
size.push_back(crow_indices.size(-1) - 1);
size.push_back(col_indices.max().item<int64_t>() + 1);
```
col_indices are always guaranteed to be int64_t now?
No, but here `.max().item()` gives a Scalar and `.item<int64_t>()` casts the Scalar to int64_t:

```cpp
return item().to##name(); \
```
Okay, a few tests are really failing. I'll resolve the failures.
```python
from functools import reduce
from operator import mul  # needed for reduce(mul, ...)

for batch_shape in ((), (2,), (2, 3)):
    prod = reduce(mul, batch_shape, 1)
    crow_indices = torch.tensor([0, 2, 4], device=device).repeat(prod, 1).reshape(*batch_shape, -1)
```
Technically these indices don't have to be the same for each batch entry. A more thorough test would also vary them so that each batch entry gets different indices.
I think generally this looks fine, but let's wait until you've resolved the current CI failures.
Enable batched csr support for torch.add
Enable batched_csr.to_dense()
We have recently simplified the CIFlow labels and
@IvanYashchuk - could you rebase this on top of a green commit please? See https://hud.pytorch.org/ (e.g. c5872e6). Hopefully that'll fix the lint CI error.
The base of this is the viable/strict branch with bf16552, which is green.
@IvanYashchuk - well then let's try rerunning those jobs again.
@IvanYashchuk - FYI there's a PR that aims to prevent a broken master lint job from holding up PRs that are built upon viable strict. #75199
@pytorchbot merge this
It caused a number of internal only compilation failures, for example see: #74425 (comment) and #74542 (comment) [ghstack-poisoned]
It caused a number of internal only compilation failures, for example see: #74425 (comment) and #74542 (comment) ghstack-source-id: 14889fa Pull Request resolved: #75085
It caused a number of internal only compilation failures, for example see: #74425 (comment) and #74542 (comment) Pull Request resolved: #75085 Approved by: https://github.com/ngimel, https://github.com/albanD
@pytorchbot revert this
Internal errors: Summary:
This reverts commit eead599. Reverted #74542 on behalf of https://github.com/b0noI
@malfet - We might want to make this error part of the CI too
@IvanYashchuk - I merged master and added a simple fix for this and will attempt to merge again once the CI runs green
[Edit] This warning is only generated by clang (see a really old gcc feature request), and we do not have CUDA+clang builds configured in our CI at the moment (trying to add this in #75293).
Summary: It caused a number of internal only compilation failures, for example see: #74425 (comment) and #74542 (comment) Pull Request resolved: #75085 Approved by: https://github.com/ngimel, https://github.com/albanD Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/90a56fc515dbac9534a1a14110f9edf089430f81 Reviewed By: b0noI Differential Revision: D35404322 Pulled By: malfet fbshipit-source-id: aaa7033d0b7cbfcc1d4b3eeff86d09eba428f068
@cpuhrsch, let's try one more time? 🤞
@pytorchbot merge this
Summary: This is the first portion of changes required to enable Batched CSR format described in #60854 (comment). Currently, only the same batch shape for indices and values is allowed. In the future, we could enable "broadcasting" of indices and batched values, as done in xFormers (https://github.com/facebookresearch/xformers/blob/dd96b8d8beda5308fb433c1ef3ff04b7f178c263/xformers/components/attention/_sputnik_sparse.py#L441). This PR adds possibility to construct a batched CSR matrix with `torch.sparse_csr_tensor` and this batched CSR can be converted to a dense tensor with a `.to_dense()` call. Pull Request resolved: #74542 Approved by: https://github.com/cpuhrsch Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/c7ae23b50e5f96261889ab9d55df1be7a6b1d55f Reviewed By: b0noI Differential Revision: D35485699 fbshipit-source-id: fa1c0c5cf256ac886717a9016a83e62ea2772f75
This is the first portion of changes required to enable Batched CSR format described in #60854 (comment).
Currently, only the same batch shape for indices and values is allowed. In the future, we could enable "broadcasting" of indices and batched values, as done in xFormers (https://github.com/facebookresearch/xformers/blob/dd96b8d8beda5308fb433c1ef3ff04b7f178c263/xformers/components/attention/_sputnik_sparse.py#L441).
This PR adds the possibility to construct a batched CSR matrix with `torch.sparse_csr_tensor`, and this batched CSR can be converted to a dense tensor with a `.to_dense()` call.
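The conversion the PR enables can be illustrated with a minimal pure-Python analogue of batched-CSR-to-dense (nested lists instead of tensors; `batched_csr_to_dense` and `csr_to_dense` are illustrative names, not PyTorch APIs):

```python
def csr_to_dense(crow, col, values, ncols):
    # Expand one CSR matrix (plain lists) into a dense list of lists.
    nrows = len(crow) - 1
    dense = [[0] * ncols for _ in range(nrows)]
    for r in range(nrows):
        for i in range(crow[r], crow[r + 1]):
            dense[r][col[i]] = values[i]
    return dense

def batched_csr_to_dense(crow_b, col_b, values_b, ncols):
    # One leading batch dimension: convert each batch entry independently,
    # mirroring how batched_csr.to_dense() treats each entry as its own matrix.
    return [csr_to_dense(c, j, v, ncols)
            for c, j, v in zip(crow_b, col_b, values_b)]

dense = batched_csr_to_dense(
    [[0, 2, 4], [0, 2, 4]],          # crow_indices per batch entry
    [[0, 1, 0, 1], [0, 1, 0, 1]],    # col_indices per batch entry
    [[1, 2, 3, 4], [5, 6, 7, 8]],    # values per batch entry
    ncols=2,
)
print(dense)  # -> [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
```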