Amazing work.
A question about the tokenizer: the tokenizer and embedding code contain references to Qwen2-0.5B-Instruct and Qwen2-1.5B. Have any tests been run with those models, and if so, what precision and fidelity did they achieve? Using them could greatly reduce VRAM usage.
Another question: is there any chance that inference with offloading of the various models is planned, for low-end PCs?
Thanks for the info on Qwen. Regarding offloading, I mean loading and unloading the various parts or layers into RAM so that Sana fits on an 8 GB or 6 GB GPU (and maybe even 4 GB), as well as running, for example, the VAE on the CPU, or even part of Gemma. After all, Gemma already runs on 6 GB desktop GPUs and on Android phones with 8 GB of RAM.
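
For example, diffusers exposes generic CPU-offloading hooks that implement exactly this kind of load/unload scheme. A rough, untested sketch of what I mean (the checkpoint id is a guess based on the model card naming; it assumes a Sana pipeline is available in diffusers and that `accelerate` is installed):

```python
import torch
from diffusers import SanaPipeline

# Checkpoint id is an assumption; substitute the actual Sana
# diffusers checkpoint from the model card.
pipe = SanaPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_diffusers",
    torch_dtype=torch.bfloat16,
)

# Whole-model offload: each component (Gemma text encoder, transformer,
# VAE) is moved to the GPU only while it runs, then back to CPU RAM.
# Peak VRAM is roughly that of the largest single component.
pipe.enable_model_cpu_offload()

# More aggressive alternative: layer-by-layer offload that streams
# weights to the GPU one submodule at a time. Much slower, but can fit
# into a few GB of VRAM. (Use one or the other, not both.)
# pipe.enable_sequential_cpu_offload()

image = pipe(prompt="a cyberpunk cat holding a neon sign").images[0]
image.save("sana_offload_test.png")
```

With model-level offload the VRAM floor is set by the largest single component; sequential offload trades a lot of speed for fitting into very small VRAM budgets.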