Remove native_functions.yaml dependency from TensorTopK.cu #66794
Closed
Commits (11)
27280aa  Remove native_functions.yaml dependency from TensorTopK.cu
18664f6 (peterbell10)  Update on "Remove native_functions.yaml dependency from TensorTopK.cu"
19fa3c1 (peterbell10)  Update on "Remove native_functions.yaml dependency from TensorTopK.cu"
c060b8e (peterbell10)  Update on "Remove native_functions.yaml dependency from TensorTopK.cu"
bd58d34 (peterbell10)  Update on "Remove native_functions.yaml dependency from TensorTopK.cu"
7d6ec9d (peterbell10)  Update on "Remove native_functions.yaml dependency from TensorTopK.cu"
2dabfce (peterbell10)  Update on "Remove native_functions.yaml dependency from TensorTopK.cu"
5817275  Update on "Remove native_functions.yaml dependency from TensorTopK.cu"
c31fa87 (peterbell10)  Update on "Remove native_functions.yaml dependency from TensorTopK.cu"
65945dd (peterbell10)  Update on "Remove native_functions.yaml dependency from TensorTopK.cu"
9d472d6  Update on "Remove native_functions.yaml dependency from TensorTopK.cu"
@@ -0,0 +1,53 @@
#include <ATen/native/cuda/TensorTopK.h>
#include <ATen/Functions.h>
#include <ATen/NativeFunctions.h>
#include <ATen/WrapDimUtils.h>
#include <ATen/native/cuda/Sort.h>

namespace at {
namespace native {

TORCH_IMPL_FUNC(topk_out_cuda)
(const Tensor& self,
 int64_t k, int64_t dim, bool largest, bool sorted,
 const Tensor& values,
 const Tensor& indices) {
  TensorArg topK_arg{values, "topK", 1}, indices_arg{indices, "indices", 2}, input_arg{self, "self", 3};
  checkAllSameGPU(__func__, {topK_arg, indices_arg, input_arg});
  dim = at::maybe_wrap_dim(dim, self);

  // If k is 0 the result is an empty tensor, so we don't need to launch a kernel.
  if (k == 0) {
    return;
  }

  launch_gather_topk_kernel(self, k, dim, largest, sorted, values, indices);

  // Sort the results if the user wants them sorted, since our
  // selection routine does not ensure sorting
  if (sorted && values.numel() > 1) {
    if (should_use_small_sort(values, dim)) {
      // This avoids any memory allocations and performs all sorting
      // work inplace along the slice

      sortKeyValueInplace(values, indices, dim, largest);
    } else {
      // Depend upon the backup sort that returns indices, which we
      // can use in conjunction with gather to produce the original
      // indices.
      // This is not the most efficient implementation, especially since
      // there are memory allocations performed here. If the user desires
      // greater performance, they should torch.gather() the results
      // themselves using the reported indices, providing previously
      // allocated tensors to receive the results.

      Tensor sortedIndices = at::empty_like(indices);
      Tensor sortedValues = at::empty_like(values);
      sort_out_cuda(values, dim, largest, sortedValues, sortedIndices);
      indices.copy_(indices.gather(dim, sortedIndices));
      values.copy_(sortedValues);
    }
  }
}

}} // namespace at::native
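An aside on the fallback branch above: sorting `values` yields a permutation along `dim`, and gathering the original `indices` through that permutation keeps each value paired with its source index. Below is a minimal sketch of the same idea using the public ATen API (`at::sort` in place of the internal `sort_out_cuda`; the helper name is invented for illustration):

#include <ATen/ATen.h>
#include <cstdint>

// Hypothetical helper mirroring the fallback branch: sort topk results
// along `dim` while preserving the value/index pairing.
void sort_topk_results(at::Tensor& values, at::Tensor& indices,
                       int64_t dim, bool largest) {
  // Sorting the values yields sortedIndices, a permutation along `dim`...
  auto [sortedValues, sortedIndices] = at::sort(values, dim, /*descending=*/largest);
  // ...which is applied to the original indices so that entry i of
  // `indices` still names where entry i of `values` came from.
  indices = indices.gather(dim, sortedIndices);
  values = sortedValues;
}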
@@ -0,0 +1,14 @@
#pragma once
#include <cstdint>

namespace at {
class TensorBase;
}

namespace at {
namespace native {
void launch_gather_topk_kernel(
    const TensorBase& self,
    int64_t k, int64_t dim, bool largest, bool sorted,
    const TensorBase& values, const TensorBase& indices);
}}
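The header above is the heart of the decoupling: it forward-declares `at::TensorBase` rather than including any generated ATen headers, so the `.cu` translation unit no longer rebuilds whenever `native_functions.yaml` changes. A hedged sketch of the same pattern applied to a made-up operator (every name below is invented for illustration):

// my_op.h -- shared between the host (.cpp) and device (.cu) translation
// units. A forward declaration is enough to name the argument types, so
// neither file needs the generated operator headers for this interface.
#pragma once
#include <cstdint>

namespace at {
class TensorBase;  // lightweight declaration; the full Tensor type is not needed
}

namespace at { namespace native {
// Implemented in my_op.cu (compiled by nvcc); called from my_op.cpp,
// which does the structured-kernel checks and output allocation first.
void launch_my_op_kernel(const TensorBase& self, int64_t scale, const TensorBase& out);
}} // namespace at::native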
Out of curiosity, what is the criterion by which these four lines remain in the .cu file but the `if (k == 0)` check moves?

My understanding of your work is that you are prioritizing the following:

Is that roughly the prioritization of your approach here? Under that, moving this block and the `k == 0` check both fit under 3) as the lowest priority.

Reordering the `k == 0` check does change the behavior, since it now avoids the check about having too many dimensions. Is that OK? FWIW, I like the idea of being stringent on inputs rather than letting a loophole like this let the user get away with an invalid input.

Changing topic altogether: do you think that splitting code up this way causes any meaningful harm by creating cross-module optimization barriers?
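To make the behavioral point concrete: with the early return hoisted ahead of the kernel launcher, a `k == 0` call never reaches the launcher's dimension-count check. A schematic sketch (the names and the constant are placeholders; the real check involves `MAX_DIMS` inside the CUDA launcher):

#include <cstdint>
#include <stdexcept>

constexpr int kMaxDims = 25;  // placeholder standing in for MAX_DIMS

// Schematic only: after the reorder, k == 0 on an over-dimensioned tensor
// returns an empty result instead of hitting the dimension check.
void topk_sketch(int64_t k, int ndim) {
  if (k == 0) {
    return;  // early out: no kernel launch, and none of the checks below run
  }
  if (ndim > kMaxDims) {  // only reached when k > 0 after the reordering
    throw std::runtime_error("tensor has too many dimensions");
  }
  // ... select the top-k elements ...
}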
The `return` statement must go in the `.cpp` file function so we don't launch the sorting kernels. It would make sense to keep the `MAX_DIMS` checks together with it, but `MAX_DIMS` is defined in a `.cuh` header file and so needs `nvcc`:

pytorch/aten/src/ATen/cuda/detail/OffsetCalculator.cuh, line 19 in 383c1f5

This is mostly right, although `.cu` isn't actually in my criteria anywhere. I'm currently focusing on files that depend on `native_functions.yaml`, prioritized by their compile time (to maximize impact). It just so happens that CUDA code is much slower to compile, so the top of the list is all CUDA files. Somewhat interestingly, `GridSample.cpp` was above `GridSample.cu` in compile time, which is why that PR changes both.

The top of the list at the moment looks like this:

I wouldn't say that applies here, since `MAX_DIMS` is an implementation limitation, not an invalid input. If, for example, `matmul` allowed empty tensors to have a shape mismatch, then I would agree.

I don't think there's much the compiler can do here, but things like calling the same tensor method in both functions will have some impact (especially for virtual methods). Generally speaking, though, the heavy lifting of these functions is done by the CUDA runtime actually launching the kernel, so if there is any slowdown I expect it to be fairly minimal.