expanded scalar is not contiguous #3919
On 2., the docs could be a little cleaner; right now they say "Returns True if this tensor is contiguous in memory in C order," but it's not really clear (to me at least) what "in C order" means in the case of 0-strided (i.e. expanded) tensors. The condition is actually something like "contiguous in memory in C order and each point is non-aliased", "contiguous in memory in C order with no strides of size 0", or "memory is equivalent to a contiguous C N-dimensional array of the corresponding size".

On 1., we probably need more "viewing" functions: a "reshape" (that is a no-op for input of the same size), a "view" that always succeeds, and maybe the current view (that requires contiguity).
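A minimal sketch of what such a "reshape" could do; the name and the copy-fallback strategy here are my assumption for illustration, not a settled API:

```python
import torch

def reshape(t, *shape):
    # Hypothetical semantics: return a no-copy view when the strides allow it,
    # otherwise fall back to a contiguous copy, so the call always succeeds.
    try:
        return t.view(*shape)
    except RuntimeError:
        return t.contiguous().view(*shape)

x = torch.arange(6).expand(2, 6)  # stride 0 along dim 0: not contiguous
y = reshape(x, 12)                # view() fails here; the copy fallback succeeds
```

This is the always-succeeds flavor; the stricter variants discussed above would drop the fallback and instead raise when a view is impossible.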
proposed a similar thing to (1) in #3653
I checked how this works in NumPy, and it supports all the mentioned cases, including zero strides and the proposal in #3653.
You can verify that a and b share the same memory and inspect the strides.
The only catch is that this does not work for CUDA tensors, since the device data pointer is not accessible from NumPy. But I hope I made my point: Tensor.view can work this way, and it would be more clear and consistent.
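For reference, this is roughly what the shared-memory check looks like in NumPy; the names a and b are chosen to match the comment above, and the snippet is my reconstruction, not the original:

```python
import numpy as np

a = np.arange(6)
b = a.reshape(2, 3)   # a view: reshaping a contiguous array copies nothing
print(np.shares_memory(a, b))  # True: both map onto the same buffer
print(b.strides)               # the strides show how b walks a's memory
b[0, 0] = 42                   # writing through the view...
print(a[0])                    # ...is visible through the original array
```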
An expanded Scalar is not C-contiguous in either PyTorch or NumPy.
C-contiguous means that moving to the next element in the array moves one element over in the underlying memory block. This is not the case for expanded scalars, since multiple logical elements alias the same memory location. Having the memory block be contiguous in memory is not sufficient. We should clarify this in the docs.
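The aliasing can be seen directly in current PyTorch (a quick sketch):

```python
import torch

s = torch.tensor([3.14])
e = s.expand(4)           # logical size 4, but stride 0: every element aliases
print(e.stride())         # (0,)
print(e.is_contiguous())  # False, even though the backing block is one element
print(e.data_ptr() == s.data_ptr())  # True: expand allocated no new memory
```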
Thanks, I understand C-contiguous. But is it the right prerequisite for view? I believe these cases work in NumPy, and they do not require all the mechanics of reshape:

array([[1, 1, 1, 1]])
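The NumPy snippet behind that output was lost in extraction; a reconstruction that produces the same result (my best guess at the original, not the poster's exact code) is:

```python
import numpy as np

a = np.array([1])
b = np.broadcast_to(a, (1, 4))  # zero-stride expansion, like Tensor.expand
print(b.strides)                # (0, 0)
print(np.shares_memory(a, b))   # True: still the same one-element buffer
print(b.reshape(1, 4))          # array([[1, 1, 1, 1]])
```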
That I cannot argue with. I just suggest that there is a good deal of use cases where one wants a view of such tensors.
Yes, agreed. But we want to solve it by adding a new reshape function rather than changing view.
In that case, what would be the conditions that the new function requires?
```
>>> torch.Tensor(1,1).expand(1,2).view(1,2)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: invalid argument 1: input is not contiguous at /pytorch/torch/lib/TH/generic/THTensor.c:231
```
There are two problems:
I think these are bugs.
As a feature request, may I suggest that some reshaping operations should also succeed on non-contiguous Tensors, as long as only dimensions that are contiguous among themselves are affected. Example:
size (4,4, X), strides (1,4, Y) -> size (16, X), stride (1, Y)
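A concrete instance of that shape/stride pattern can be built in current PyTorch with permute (taking X=2 for illustration; whether a no-copy view should be allowed here is exactly the feature being requested):

```python
import torch

x = torch.arange(32).reshape(2, 4, 4).permute(2, 1, 0)
print(x.size())    # torch.Size([4, 4, 2])
print(x.stride())  # (1, 4, 16)
# Dims 0 and 1 together cover offsets 0..15 of the buffer contiguously,
# so a no-copy merge to size (16, 2) with stride (1, 16) is well defined.
# Today only reshape (which may copy) accepts this input:
y = x.reshape(16, 2)
```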