Description
🐛 Describe the bug
During boundary value testing, identical inputs produced different output values.
Theoretically, both 0 and 98 are possible results, depending on the calculation method.
The suspected root cause is the float-to-int cast in the quantization step:
A float NaN converts to 0 when cast to int.
Two quantization formulas exist (here y and scale are float, while offset is int):
(int)(y / scale) + offset → outputs 98 --- compute method 1
(int)(y / scale + (float)offset) → outputs 0 --- compute method 2
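To make the divergence concrete, below is a minimal Python sketch of the two orderings. It is not the actual kernel code: the helper nan_safe_int is hypothetical and only models the assumption stated above that a NaN float casts to 0, and offset = 98 is chosen so that the results line up with the two outputs quoted above.

```python
import math

def nan_safe_int(x: float) -> int:
    """Hypothetical model of the (int) cast above: a NaN float is assumed to become 0."""
    return 0 if math.isnan(x) else int(x)

y = float("nan")  # the float batch-norm result is NaN (see the test code below)
scale = 1.0
offset = 98       # integer offset implied by the outputs quoted above

# compute method 1: cast the float quotient first, then add the int offset
method1 = nan_safe_int(y / scale) + offset           # 0 + 98 = 98

# compute method 2: add the offset in float first, then cast
method2 = nan_safe_int(y / scale + float(offset))    # NaN + 98.0 is NaN -> 0

print(method1, method2)  # 98 0 -- same NaN input, different quantized results
```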
Our test code is:
import torch
qx = torch.quantize_per_tensor(torch.rand(1, 10, 1, 1), 0, 0, torch.qint8)  # scale 0, zero_point 0
# args: input, weight, bias, mean, var, eps, output_scale, output_zero_point
print(torch.quantized_batch_norm(qx, torch.ones(10), torch.zeros(10), torch.zeros(10),
                                 torch.ones(10) * -10, 0.00001, 1, -98))
Running this, we get different outputs for the same input.
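For reference, the var argument passed here (torch.ones(10) * -10) is negative, so under the usual normalization (x - mean) / sqrt(var + eps) the float batch-norm result is NaN before requantization. A quick check of that assumption:

```python
import torch

var = torch.ones(10) * -10
eps = 0.00001
# sqrt of a negative number is NaN, so the normalized float values are NaN
print(torch.sqrt(var + eps))  # tensor of nan
```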
Please check it.
We hope the output will match compute method 2.
Versions
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim