Per channel weight observer for ConvTranspose · Issue #54816 · pytorch/pytorch · GitHub
Per channel weight observer for ConvTranspose #54816
Open
@evrrn

Description


I’ve just tried to quantize a model with ConvTranspose1d using torch 1.8 and the fbgemm backend, but got this error message: AssertionError: Per channel weight observer is not supported yet for ConvTranspose{nx}d.
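
For reference, here's a minimal eager-mode static quantization sketch of the kind of flow that hits this; the Upsampler module and tensor shapes are made up for illustration, and as far as I can tell the assertion fires during the convert step:

```python
import torch
import torch.nn as nn

# Hypothetical module, only to illustrate the failure.
class Upsampler(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.deconv = nn.ConvTranspose1d(8, 8, kernel_size=3)
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.deconv(self.quant(x)))

model = Upsampler().eval()
# The fbgemm default qconfig uses a per-channel weight observer.
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(model, inplace=True)
model(torch.randn(1, 8, 32))  # calibration pass
torch.quantization.convert(model, inplace=True)
# -> AssertionError: Per channel weight observer is not supported yet
#    for ConvTranspose{nx}d.
```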

Pitch

Quantization of a model with ConvTranspose should work out of the box with the default qconfig, i.e. with the more accurate per-channel weight quantization.

Alternatives

There's the option to use a per-tensor weight configuration instead, though it's typically less accurate.
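
For completeness, a sketch of that fallback, reusing the hypothetical Upsampler module from the snippet above; the idea is to keep the per-channel default everywhere and override only the ConvTranspose layer with a per-tensor weight observer:

```python
import torch

# Per-tensor weight qconfig (default_weight_observer is a per-tensor
# MinMaxObserver in torch 1.8).
per_tensor_qconfig = torch.quantization.QConfig(
    activation=torch.quantization.HistogramObserver.with_args(reduce_range=True),
    weight=torch.quantization.default_weight_observer,
)

model = Upsampler().eval()  # Upsampler as defined in the repro sketch above
# Keep the per-channel default for the rest of the network...
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
# ...and fall back to per-tensor weights only for the ConvTranspose layer.
model.deconv.qconfig = per_tensor_qconfig

torch.quantization.prepare(model, inplace=True)
model(torch.randn(1, 8, 32))
torch.quantization.convert(model, inplace=True)  # no per-channel assertion here
```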

cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @dzhulgakov


Labels

low priority (We're unlikely to get around to doing this in the near future)
oncall: quantization (Quantization support in PyTorch)
triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
