Errors loading ggml models #107
I'm pretty sure the issue is with GPT4All, as I can load all of the models mentioned with:
Are there any major differences between loading the model through Llama and loading it through GPT4All?
I have the same issue.

My environment:

Code:

I can load ggml-gpt4all-l13b-snoozy.bin and https://huggingface.co/mrgaang/aira/blob/main/gpt4all-converted.bin, and they work fine, but the following models fail to load:

Loading ggml-wizardLM-7b.q4_2.bin and ggml-vicuna-7b-1.1-q4_2.bin gives
Loading ggml-mpt-7b-chat.bin gives
I tried using llama.cpp/migrate-ggml-2023-03-30-pr613.py on ggml-mpt-7b-chat.bin, but got the error:
I tried using llama.cpp/convert-unversioned-ggml-to-ggml.py to fix this error, but got the error:
I tried using llama.cpp/migrate-ggml-2023-03-30-pr613.py on ggml-wizardLM-7b.q4_2.bin, but got the message:

I have no idea how to fix this or why it happens.
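A quick way to narrow down this kind of conversion failure is to check which GGML container version the file actually uses. The magic values below are taken from llama.cpp's source history (`ggml` = unversioned pre-2023-03-30 files, `ggmf` = the first versioned format, `ggjt` = the mmap-able format current at the time of this issue); the script name and messages are my own, so treat this as a diagnostic sketch rather than an official tool:

```python
import struct

# Known GGML container magics from llama.cpp, stored as a
# little-endian uint32 at offset 0 of the .bin file.
MAGICS = {
    0x67676D6C: "ggml (unversioned, pre-2023-03-30; run convert-unversioned-ggml-to-ggml.py first)",
    0x67676D66: "ggmf (versioned; migrate with migrate-ggml-2023-03-30-pr613.py)",
    0x67676A74: "ggjt (current mmap-able format at the time of this issue)",
}

GGML_UNVERSIONED = 0x67676D6C


def identify_ggml(path):
    """Return a human-readable description of the file's GGML container."""
    with open(path, "rb") as f:
        header = f.read(8)
    if len(header) < 4:
        return "file too short to be a GGML model"
    magic, = struct.unpack("<I", header[:4])
    desc = MAGICS.get(magic)
    if desc is None:
        return f"unknown magic 0x{magic:08x} (not a GGML file?)"
    # Versioned containers store a uint32 format version after the magic.
    if magic != GGML_UNVERSIONED and len(header) >= 8:
        version, = struct.unpack("<I", header[4:8])
        return f"{desc}, format version {version}"
    return desc


if __name__ == "__main__":
    import sys
    for model_path in sys.argv[1:]:
        print(model_path, "->", identify_ggml(model_path))
```

Note that the magic only identifies the container layout, not the model architecture, so a file like ggml-mpt-7b-chat.bin can carry a plausible header and still fail in a LLaMA-only loader.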
I'm having the same issue on
and models all result in
Hello,
I'm just starting to explore the models made available by gpt4all, but I'm having trouble loading a few of them.
My environment details:
Code to reproduce error (vicuna_test.py):
Issue:
The issue is that I can't seem to load some of the models listed here - https://github.com/nomic-ai/gpt4all-chat#manual-download-of-models. The models I've failed to load are:
As shown below, the ggml-gpt4all-l13b-snoozy.bin model loads without issue. I also managed to load this version - https://huggingface.co/mrgaang/aira/blob/main/gpt4all-converted.bin.
Errors loading the listed models:
Can anyone please advise on how to resolve these issues?