Hello, I'm trying to quantize the BatchNorm2d layer, but when I use BatchNorm2dToQuantScaleBias the loss doesn't converge (it stays at the same value). When I use a regular nn.BatchNorm2d with the rest of the layers quantized, training works, so the problem seems to be in the quantized BatchNorm. I couldn't find any examples of how to use this type of layer.
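
For reference, here is a minimal sketch of the substitution I'm attempting (the model is hypothetical, and I'm assuming `BatchNorm2dToQuantScaleBias` takes `num_features` like `nn.BatchNorm2d`, since I couldn't find a documented example):

```python
import torch
import torch.nn as nn
from brevitas.nn import QuantConv2d, QuantReLU, BatchNorm2dToQuantScaleBias

class QuantBlock(nn.Module):
    """Conv -> BatchNorm -> ReLU block with all layers quantized."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = QuantConv2d(
            in_ch, out_ch, kernel_size=3, padding=1, weight_bit_width=8)
        # This is the swap in question: with nn.BatchNorm2d(out_ch) here,
        # training converges; with the line below, the loss stays flat.
        # The constructor argument is my assumption, mirroring nn.BatchNorm2d.
        self.bn = BatchNorm2dToQuantScaleBias(out_ch)
        self.act = QuantReLU(bit_width=8)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.conv(x)))

# Smoke test: one forward/backward pass on random data.
block = QuantBlock(3, 16)
out = block(torch.randn(2, 3, 32, 32))
out.sum().backward()
```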
Thanks