Transformers 4.46.1 compat #24
It is a pity that I have no time to support this, but I think you can try to do it yourself, as it is not that complex.
@HandH1998 Understood. Second question: will the vLLM QQQ kernel be maintained by you or someone associated with QQQ, or will that kernel also be left to the open-source community?
The vLLM QQQ kernel is now maintained by the vLLM team. The open-source community can also modify it for their own use; they only need to keep the copyright statement and cite our paper.
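For reference, a minimal sketch of loading a QQQ-quantized checkpoint through vLLM. It assumes the kernel is exposed under the quantization name `"qqq"` (check your vLLM version's supported methods), and the model path is a placeholder:

```python
# Minimal sketch: serve a QQQ-quantized model with vLLM.
# Assumptions: vLLM registers the kernel as quantization="qqq";
# "path/to/qqq-quantized-model" is a placeholder checkpoint path.
from vllm import LLM, SamplingParams

llm = LLM(model="path/to/qqq-quantized-model", quantization="qqq")
outputs = llm.generate(["Hello"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```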
@HandH1998 I will be doing some testing next week. If QQQ quantization quality is stable and inference is good, I will ask my team to integrate QQQ into GPTQModel via …
That is great! If you have any questions, chat with me.
@HandH1998 Is there a plan to bring the llama/qwen2.5 modeling code up to date with the latest transformers 4.46.1? In testing, I found the modeling code is out of sync, and QQQ only runs with transformers pinned to 4.38.
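For anyone hitting the same incompatibility, a minimal sketch of a version guard to run before loading the QQQ modeling code. The 4.38.x range mirrors the report above; the exact compatible bound is an assumption:

```python
# Minimal sketch: fail fast if the installed transformers version is outside
# the range the QQQ modeling code was written against (assumed 4.38.x here).
import transformers
from packaging import version

_v = version.parse(transformers.__version__)
if not (version.parse("4.38.0") <= _v < version.parse("4.39.0")):
    raise RuntimeError(
        f"QQQ's patched llama/qwen2 modeling code tracks transformers 4.38.x; "
        f"found {transformers.__version__}. "
        f"Pin it with: pip install 'transformers==4.38.2'"
    )
```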