🐛 Describe the bug
The snippet below works on MPS in PyTorch stable 1.12.1, and I presume also on CUDA (it comes from the stable-diffusion model, which was originally trained on CUDA).
Note: you will need to launch Python with PYTORCH_ENABLE_MPS_FALLBACK=1 to get a CPU fallback for aten::index.Tensor; that might also explain why it works in stable.
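For completeness, a minimal sketch (my own illustration, not from the original report) of enabling that fallback from inside the script instead of on the command line, assuming it runs before torch is first imported:
import os
# In-script equivalent of launching with PYTORCH_ENABLE_MPS_FALLBACK=1;
# assumed to take effect only if set before the first `import torch`.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"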
from torch import tensor
tensor([1.], device='cpu')[tensor([0], device='mps')]
# result: tensor([1.])
In nightly 1.13.0.dev20220917, the same snippet fails with:
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)
I was able to change the model code to fix it, but since this worked on CUDA and it's a pattern people use in the wild, other models may be affected.
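A minimal sketch of the kind of change that works around the error (an illustration of the idea, not the exact fix applied to the model): move the index tensor onto the same device as the tensor being indexed before indexing.
from torch import tensor
values = tensor([1.], device='cpu')
indices = tensor([0], device='mps')
values[indices.to(values.device)]  # move the index to the indexed tensor's device
# result: tensor([1.])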
Versions
PyTorch version: 1.13.0.dev20220917
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.5 (arm64)
GCC version: Could not collect
Clang version: 13.0.0 (clang-1300.0.29.30)
CMake version: version 3.22.1
Libc version: N/A
Python version: 3.10.4 (main, Mar 31 2022, 03:37:37) [Clang 12.0.0 ] (64-bit runtime)
Python platform: macOS-12.5-arm64-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] pytorch-lightning==1.7.5
[pip3] torch==1.13.0.dev20220917
[pip3] torch-fidelity==0.3.0
[pip3] torchdiffeq==0.2.3
[pip3] torchmetrics==0.9.3
[pip3] torchtyping==0.1.4
[pip3] torchvision==0.14.0.dev20220917
[conda] numpy 1.22.4 pypi_0 pypi
[conda] pytorch-lightning 1.7.5 pypi_0 pypi
[conda] torch 1.13.0.dev20220917 pypi_0 pypi
[conda] torch-fidelity 0.3.0 pypi_0 pypi
[conda] torchdiffeq 0.2.3 pypi_0 pypi
[conda] torchmetrics 0.9.3 pypi_0 pypi
[conda] torchtyping 0.1.4 pypi_0 pypi
[conda] torchvision 0.14.0.dev20220917 pypi_0 pypi