
Info about other tokenizer #26

Open
chri002 opened this issue Nov 22, 2024 · 2 comments

Comments

@chri002

chri002 commented Nov 22, 2024

Amazing work.
A question about the tokenizer: the tokenizer and embedding code contain references to Qwen2-0.5B-Instruct and the 1.5B variant. Have any tests been run with them, and if so, what precision and fidelity did they achieve? Using them could greatly reduce VRAM usage.

One other question: is there any chance that inference with offloading of the various models is planned, for low-end PCs?

@lawrence-cj
Collaborator

Qwen is not tested in Sana. And what do you mean by offloading for low-end PCs?

@chri002
Author

chri002 commented Nov 22, 2024

Thanks for the info on Qwen. Regarding offloading, I mean loading and unloading the various parts or layers between VRAM and RAM so that Sana fits on an 8 GB or 6 GB GPU (and maybe even 4 GB), besides running, for example, the VAE on the CPU, or even part of Gemma. After all, Gemma also runs on 6 GB desktop GPUs and on Android devices with 8 GB of RAM.
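The offloading pattern described above can be sketched as a toy simulation: each pipeline component is moved to the GPU only for the duration of its stage and evicted right after, so peak VRAM is bounded by the largest single module rather than the sum of all of them. The module names and sizes below are illustrative placeholders, not Sana's actual components.

```python
class FakeModule:
    """Stand-in for a model component with a known weight size (in MB)."""
    def __init__(self, name, size_mb):
        self.name = name
        self.size_mb = size_mb
        self.device = "cpu"  # weights start in host RAM

    def to(self, device):
        self.device = device
        return self


class SequentialOffloader:
    """Move each module to the GPU right before use and back afterwards,
    so peak VRAM usage is bounded by the largest single module."""
    def __init__(self, vram_budget_mb):
        self.vram_budget_mb = vram_budget_mb

    def run(self, module, fn):
        if module.size_mb > self.vram_budget_mb:
            raise MemoryError(f"{module.name} alone exceeds the VRAM budget")
        module.to("cuda")      # load this stage's weights into VRAM
        try:
            return fn(module)  # run this stage of the pipeline
        finally:
            module.to("cpu")   # evict immediately to free VRAM


# Illustrative pipeline: a Gemma-class text encoder, the diffusion
# transformer, and a small VAE (sizes are made up for the sketch).
pipeline = [
    FakeModule("text_encoder", 5400),
    FakeModule("transformer", 3200),
    FakeModule("vae", 300),
]
offloader = SequentialOffloader(vram_budget_mb=6144)  # a 6 GB card
for stage in pipeline:
    offloader.run(stage, lambda m: f"ran {m.name} on {m.device}")

# Peak simulated VRAM is the largest single module, not the total.
peak = max(m.size_mb for m in pipeline)
print(f"peak simulated VRAM: {peak} MB")  # → peak simulated VRAM: 5400 MB
```

In practice, Hugging Face diffusers exposes this pattern through `pipeline.enable_model_cpu_offload()` (whole-module granularity) and `pipeline.enable_sequential_cpu_offload()` (layer granularity, slower but lower peak VRAM); whether Sana's released pipeline supports these hooks is an assumption to verify.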
