torch.quantized_batch_norm returns different outputs when the input is NaN/Inf or the computation is mathematically undefined #154708
Open
@yjmyl

🐛 Describe the bug

During boundary-value testing, we found that torch.quantized_batch_norm produced different output values for identical inputs.

Theoretically, both 0 and 98 are possible results, depending on the calculation method.
The suspected root cause is the float-to-int conversion during quantization:

A float NaN converts to 0 when cast to int.
Two quantization formulas are possible (y and scale are float; offset is int):

compute method 1: (int)(y / scale) + offset → output 98 (real, i.e. dequantized, value)
compute method 2: (int)(y / scale + (float)offset) → output 0 (real value)
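
A minimal sketch of the divergence, assuming (as described above) a cast that maps a float NaN to 0. Note that casting NaN to int is undefined behavior in C/C++ and the observed result is platform-dependent, and Python's int() raises on NaN, so the cast is emulated here with a helper. With PyTorch's (q - zero_point) * scale dequantization convention and the zero point -98 from the repro below, the two orderings produce exactly the two observed real values, 0 and 98 (which formula yields which depends on the sign convention chosen for the offset):

import math

def cast_to_int(x: float) -> int:
    # Emulates a float->int cast that maps NaN to 0, as the issue assumes.
    # (In C/C++ this cast is undefined behavior for NaN.)
    return 0 if math.isnan(x) else int(x)

def quantize_method_1(y: float, scale: float, offset: int) -> int:
    # Cast first, then add the integer offset.
    return cast_to_int(y / scale) + offset

def quantize_method_2(y: float, scale: float, offset: int) -> int:
    # Add the offset in float first, then cast; NaN propagates through the sum.
    return cast_to_int(y / scale + float(offset))

y, scale, offset = float("nan"), 1.0, -98
q1 = quantize_method_1(y, scale, offset)   # 0 + (-98) = -98
q2 = quantize_method_2(y, scale, offset)   # cast(NaN + (-98)) = cast(NaN) = 0
print((q1 - offset) * scale)  # dequantized: 0.0
print((q2 - offset) * scale)  # dequantized: 98.0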

Our test code is:

import torch

# scale = 0 quantization plus a negative running variance (-10) makes the
# batch-norm float result NaN before output quantization (output scale 1,
# output zero point -98).
qx = torch.quantize_per_tensor(torch.rand(1, 10, 1, 1), 0, 0, torch.qint8)
print(torch.quantized_batch_norm(qx, torch.ones(10), torch.zeros(10), torch.zeros(10),
                                 torch.ones(10) * -10, 0.00001, 1, -98))

Running this, we get different outputs for the same input.
Please check it.
We would expect the output to consistently match compute method 2.
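
A hypothetical harness (assuming the repro above) that reruns the call on a fixed input and collects the distinct dequantized outputs. Depending on where the divergence originates, the difference may only show up across processes, builds, or platforms rather than within a single loop:

import torch

torch.manual_seed(0)
x = torch.rand(1, 10, 1, 1)  # fixed input across iterations

outputs = set()
for _ in range(100):
    qx = torch.quantize_per_tensor(x, 0, 0, torch.qint8)
    out = torch.quantized_batch_norm(qx, torch.ones(10), torch.zeros(10),
                                     torch.zeros(10), torch.ones(10) * -10,
                                     0.00001, 1, -98)
    # The dequantized values are finite (quantized ints cannot be NaN),
    # so tuples compare reliably inside the set.
    outputs.add(tuple(out.dequantize().flatten().tolist()))

print(len(outputs), "distinct outputs observed for identical input")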

Versions

[Environment/version information was attached as a screenshot in the original issue.]

cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim

Labels

module: NaNs and Infs (Problems related to NaN and Inf handling in floating point)
needs reproduction (Someone else needs to try reproducing the issue given the instructions; no action needed from user)
oncall: quantization (Quantization support in PyTorch)
