How to use BatchNorm2dToQuantScaleBias #393

Open
MiguelAReis opened this issue Mar 14, 2022 · 2 comments

Comments

MiguelAReis commented Mar 14, 2022

Hello, I'm trying to quantize the BatchNorm2d layer, but when I use BatchNorm2dToQuantScaleBias the loss doesn't converge (it stays at the same value). When I use a regular nn.BatchNorm2d with the rest of the layers quantized, training works, so the problem is in the quantized BatchNorm. I couldn't find any examples of how to use this type of layer.
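
For reference, a simplified sketch of what I'm doing (layer sizes are arbitrary, and I'm assuming `num_features` is the first constructor argument, mirroring `nn.BatchNorm2d`; the real model is a larger CNN):

```python
import torch.nn as nn
from brevitas.nn import QuantConv2d, QuantReLU, BatchNorm2dToQuantScaleBias

class QuantBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = QuantConv2d(in_ch, out_ch, kernel_size=3, padding=1,
                                weight_bit_width=8)
        # Swapping nn.BatchNorm2d(out_ch) for the quantized version;
        # with nn.BatchNorm2d here instead, training converges fine.
        self.bn = BatchNorm2dToQuantScaleBias(out_ch)
        self.relu = QuantReLU(bit_width=8)

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))
```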
Thanks

tafk7 commented Jan 15, 2023

Bumping this; I would really appreciate a simple example beyond the basic functionality validation in the test_merge_bn.py file.
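
For anyone else searching: from what I can tell, that test only checks that a float nn.BatchNorm2d's parameters can be folded into BatchNorm2dToQuantScaleBias and reproduce the BN output. A rough sketch of that idea, under my assumptions (that loading the BN state dict triggers the fold, and that `strict=False` and the tolerance are needed) rather than the actual test code:

```python
import torch
import torch.nn as nn
from brevitas.nn import BatchNorm2dToQuantScaleBias

CHANNELS = 64  # arbitrary

# A float BN standing in for one taken from a pretrained model.
float_bn = nn.BatchNorm2d(CHANNELS)
float_bn.eval()

# Assumption: the quant layer maps running_mean/running_var/weight/bias
# from the BN state dict into its own scale and bias on load.
quant_bn = BatchNorm2dToQuantScaleBias(CHANNELS)
quant_bn.load_state_dict(float_bn.state_dict(), strict=False)
quant_bn.eval()

x = torch.randn(1, CHANNELS, 8, 8)
# Loose tolerance: quantizing the scale/bias introduces small error.
print(torch.allclose(float_bn(x), quant_bn(x), atol=1e-2))
```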

ardeal commented Sep 25, 2023

@MiguelAReis @Rellek72
I am experiencing the same issue.
Do you have any solution?
