I downloaded GPT4All recently and would like to try Zicklein with it, but unfortunately I'm a bit stumped on how to get it into GPT4All so I can try it.
It might be worth expanding the README on that part?
To explain: GPT4All uses LLaMA ggml q4_0 models in a single .bin file,
like: wizardlm-13b-v1.1-superhot-8k.ggmlv3.q4_0.bin
So we need a way to get the three models into one; that's what my scrambled brain tells me.
From what I understood, it's:
the base model: decapoda-research/llama-7b-hf
the stanford mod: tloen/alpaca-lora-7b
and the German mod: avocardio/alpaca-lora-7b-german-base-52k
but since I'm no developer I can't write a python converter.py that does this for us, and I also guess my hardware isn't up to the fight.
If it were, I guess running generator.py would be the correct way to get it.
"I'm just too clumsy for this right now..."
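For anyone who wants to attempt the merge themselves, here is a minimal sketch using the `peft` and `transformers` libraries, with the model names taken from this thread. It folds the LoRA adapter weights into the base model and saves a plain HF checkpoint; whether the Stanford adapter (tloen/alpaca-lora-7b) also needs to be merged first depends on how the German adapter was trained, so treat this as an assumption, not a verified recipe. It also needs a lot of RAM/disk (7B model in fp16 is ~13 GB).

```python
# Sketch: merge a LoRA adapter into the LLaMA base model so it can
# later be converted to a single ggml q4_0 .bin for GPT4All.
# Assumes `transformers` and `peft` are installed; model IDs are the
# ones mentioned in this issue.
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

BASE = "decapoda-research/llama-7b-hf"
ADAPTER = "avocardio/alpaca-lora-7b-german-base-52k"

base = LlamaForCausalLM.from_pretrained(BASE)
model = PeftModel.from_pretrained(base, ADAPTER)
model = model.merge_and_unload()  # bake the LoRA deltas into the base weights

model.save_pretrained("./zicklein-7b-merged")
LlamaTokenizer.from_pretrained(BASE).save_pretrained("./zicklein-7b-merged")
```

After that, the merged HF checkpoint would still need to be converted to ggml and quantized to q4_0 with llama.cpp's conversion and quantize tools before GPT4All can load it as one .bin file.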