
Compatibility of Quant3Linear and 4-bit quantization #48

Open
mynotwo opened this issue Dec 10, 2023 · 0 comments

Comments

mynotwo commented Dec 10, 2023

Hi! I've noticed that the quantization layer packs the quantized weights using the class Quant3Linear, as shown below:

[screenshot of the Quant3Linear weight-packing code]

However, it seems to me that this only works for 2-bit and 3-bit weights. If the original weights in intweight are 4-bit, some bits would be lost.

Could you explain the logic behind this? Thanks!
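To make the concern concrete, here is a minimal, simplified sketch of fixed-width bit packing into 32-bit words (this is my own illustration, not the repository's actual Quant3Linear code): when values wider than the packing width are masked to `bits` bits, their high bits are silently dropped.

```python
def pack(values, bits):
    """Pack unsigned ints into 32-bit words, `bits` bits per value.
    Bits of a value above `bits` are masked off, i.e. lost."""
    stream = 0
    for i, v in enumerate(values):
        stream |= (v & ((1 << bits) - 1)) << (i * bits)
    nwords = (len(values) * bits + 31) // 32
    return [(stream >> (32 * j)) & 0xFFFFFFFF for j in range(nwords)]

def unpack(words, bits, count):
    """Inverse of pack: recover `count` values of `bits` bits each."""
    stream = 0
    for j, w in enumerate(words):
        stream |= w << (32 * j)
    return [(stream >> (i * bits)) & ((1 << bits) - 1) for i in range(count)]

weights = [5, 9, 14, 3]  # 4-bit values (range 0..15)

# Packing at 4 bits round-trips exactly:
print(unpack(pack(weights, 4), 4, len(weights)))  # [5, 9, 14, 3]

# Packing the same values at 3 bits loses the top bit of 9 and 14:
print(unpack(pack(weights, 3), 3, len(weights)))  # [5, 1, 6, 3]
```

If the real packing code hard-codes a 3-bit layout, feeding it 4-bit intweight values would corrupt them in exactly this way, which is what prompted my question.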
