CUDA -> CPU issue · Issue #32 · lukemelas/EfficientNet-PyTorch · GitHub
CUDA -> CPU issue #32
Closed
@kamwoh

def drop_connect(inputs, p, training):
    """ Drop connect. """
    if not training: return inputs
    batch_size = inputs.shape[0]
    keep_prob = 1 - p
    random_tensor = keep_prob
    random_tensor += torch.rand([batch_size, 1, 1, 1], dtype=inputs.dtype)  # uniform [0,1)
    binary_tensor = torch.floor(random_tensor)
    output = inputs / keep_prob * binary_tensor # error happens here
    return output
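
For reference, a call along these lines reproduces the mismatch on a GPU (the shape and drop probability are just illustrative values):

import torch

# A CUDA input meets the CPU-resident binary_tensor created inside
# drop_connect, so the elementwise multiply fails with a backend mismatch.
inputs = torch.randn(8, 32, 56, 56, device="cuda")
out = drop_connect(inputs, p=0.2, training=True)  # raises the RuntimeError below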

Faced error: RuntimeError: expected backend CUDA and dtype Float but got backend CPU and dtype Float

When I try to run on a GPU, this error happens and points me to the line marked above. I think we should move binary_tensor to inputs.device:

binary_tensor = torch.floor(random_tensor).to(inputs.device)
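
A minimal sketch of the patched function with only that one-line change applied (docstring and names as in the snippet above):

import torch

def drop_connect(inputs, p, training):
    """ Drop connect. """
    if not training:
        return inputs
    batch_size = inputs.shape[0]
    keep_prob = 1 - p
    # One uniform sample per example; torch.rand returns a CPU tensor by default.
    random_tensor = keep_prob
    random_tensor += torch.rand([batch_size, 1, 1, 1], dtype=inputs.dtype)  # uniform [0,1)
    # Move the binary mask onto the inputs' device before the multiply.
    binary_tensor = torch.floor(random_tensor).to(inputs.device)
    output = inputs / keep_prob * binary_tensor
    return output

Another option would be to create the random tensor directly on the right device, e.g. torch.rand([batch_size, 1, 1, 1], dtype=inputs.dtype, device=inputs.device), which avoids the extra host-to-device transfer.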
