Support view() on batch dimensions for non-contiguous tensors? #3653
Such views are needed to implement a 4D+ `bmm` that treats all dimensions except the last two as batch dimensions (similarly to the `Linear` module's behavior). Unless I move the transpose inside the `bmm` function (which would not match the existing interface, but well), an extra `contiguous` call is needed.

Does it make sense to support such a `view` call? On one hand it breaks the invariant that `view` always returns a contiguous tensor; on the other, the situation with several batch dimensions may be common.

Earlier discussed in #764.
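A minimal sketch of the setup being described, assuming 4-D inputs with the matrix in the last two dimensions (shapes are illustrative, not from the issue):

```python
import torch

A = torch.randn(2, 3, 4, 5)   # two batch dims, then a 4x5 matrix
At = A.transpose(-1, -2)      # (2, 3, 5, 4), no longer contiguous

# torch.bmm only accepts 3-D inputs, so the two batch dims must be merged.
# Merging them leaves the transposed strides untouched, so in principle it
# is pure stride arithmetic; the extra `contiguous()` copy is the
# workaround this issue is about:
flat = At.contiguous().view(-1, 5, 4)
out = torch.bmm(flat, torch.randn(6, 4, 5))
print(out.shape)              # torch.Size([6, 5, 5])
```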
Comments
There's a good reason for the …

I think Vadim's point is that if the subspace of the non-contiguous tensor that we are modifying is contiguous, then the stride tricks should work, and we could reduce some of the restrictions in `view`.
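(For concreteness, a sketch of the stride trick being described here, written with `torch.Tensor.as_strided`; this is an illustration, not code from the thread:)

```python
import torch

At = torch.randn(2, 3, 4, 5).transpose(-1, -2)  # (2, 3, 5, 4), non-contiguous
s = At.stride()                                 # (60, 20, 1, 5)
# The two batch dims are contiguous with respect to each other
# (stride 60 == 20 * 3), so they can be merged without any copy:
flat = At.as_strided((6, 5, 4), (s[1], s[2], s[3]))
print(torch.equal(flat, At.reshape(6, 5, 4)))   # True
```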
Ohhh, I hadn't noticed this, my bad. What does "works but doesn't fit some interfaces" mean? I think it should give results equivalent to the previous line (if it worked). I think relaxing the constraint on `view` …

Yep, you got me right :) By "works but doesn't fit some interfaces" I meant that …

I'm not sure I understand; can't you just call `contiguous()`?
That's what I currently do. Though:

```python
import torch

def bmm(A, B):
    print(A.is_contiguous(), B.is_contiguous())
    # the arguments are truncated in the source; a plausible completion
    # that flattens all batch dimensions into one, as described above:
    return torch.bmm(A.contiguous().view(-1, A.size(-2), A.size(-1)),
                     B.contiguous().view(-1, B.size(-2), B.size(-1)))
```

If …
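For example, with one transposed (hence non-contiguous) operand, the wrapper above would print the contiguity flags and still pay for the copy (a usage sketch, not from the thread):

```python
A = torch.randn(2, 3, 4, 5).transpose(-1, -2)  # (2, 3, 5, 4), non-contiguous
B = torch.randn(2, 3, 4, 5)
out = bmm(A, B)                                # prints: False True
print(out.shape)                               # torch.Size([6, 5, 5])
```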
working on this

closed via #4062
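For later readers: on recent PyTorch, `view` no longer requires full contiguity; it succeeds whenever the requested shape is compatible with the existing strides, which covers the batch-merging case above, and `torch.matmul` treats all leading dimensions as batch dimensions. A quick check (my sketch, assuming a current PyTorch version):

```python
import torch

At = torch.randn(2, 3, 4, 5).transpose(-1, -2)  # (2, 3, 5, 4), non-contiguous
v = At.view(6, 5, 4)                            # merges only batch dims: no copy
print(v.data_ptr() == At.data_ptr())            # True: same underlying storage

# torch.matmul batches over all leading dimensions directly:
out = torch.matmul(At, torch.randn(2, 3, 4, 5))
print(out.shape)                                # torch.Size([2, 3, 5, 5])
```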