```python
import torch

def drop_connect(inputs, p, training):
    """Drop connect: randomly drop whole examples and rescale the survivors."""
    if not training:
        return inputs
    batch_size = inputs.shape[0]
    keep_prob = 1 - p
    random_tensor = keep_prob
    random_tensor += torch.rand([batch_size, 1, 1, 1], dtype=inputs.dtype)  # uniform [0, 1)
    binary_tensor = torch.floor(random_tensor)
    output = inputs / keep_prob * binary_tensor  # error happens here
    return output
```
Error raised:

```
RuntimeError: expected backend CUDA and dtype Float but got backend CPU and dtype Float
```

This happens when I try to run on GPU; the traceback points to the line marked above. Since `torch.rand` allocates on the CPU by default while `inputs` lives on the GPU, I think we should move `binary_tensor` to `inputs.device`:

```python
binary_tensor = torch.floor(random_tensor).to(inputs.device)
```
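Alternatively, here is a minimal sketch that avoids the mismatch at the source by passing `device=inputs.device` to `torch.rand`, so no extra copy is needed (the docstring wording is mine, not from the original repo):

```python
import torch

def drop_connect(inputs, p, training):
    """Drop connect, creating the random mask on the same device as inputs."""
    if not training:
        return inputs
    batch_size = inputs.shape[0]
    keep_prob = 1 - p
    # Allocate the uniform noise directly on inputs.device so the
    # elementwise multiply below never mixes CPU and CUDA tensors.
    random_tensor = keep_prob + torch.rand(
        [batch_size, 1, 1, 1], dtype=inputs.dtype, device=inputs.device)
    binary_tensor = torch.floor(random_tensor)  # 0 or 1 per example
    return inputs / keep_prob * binary_tensor
```

This also works unchanged on CPU, since `inputs.device` is then simply `cpu`.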