This repository has been archived by the owner on May 12, 2023. It is now read-only.
Below is my Python code:
from pygpt4all import GPT4All

model = GPT4All(r'./models/ggml-stable-vicuna-13B.q4_2.bin')

while True:
    try:
        # input() does not accept flush=; prompt the user and skip empty input
        prompt = input("You: ")
        if prompt == '':
            continue
        print("AI: ", end='')
        # stream tokens as they are generated
        for token in model.generate(prompt):
            print(token, end='', flush=True)
        print()
    except KeyboardInterrupt:
        break
It fails with the error below (bad f16 value 5):
llama_model_load: loading model from './models/ggml-stable-vicuna-13B.q4_2.bin' - please wait ...
llama_model_load: n_vocab = 32001
llama_model_load: n_ctx = 512
llama_model_load: n_embd = 5120
llama_model_load: n_mult = 256
llama_model_load: n_head = 40
llama_model_load: n_layer = 40
llama_model_load: n_rot = 128
llama_model_load: f16 = 5
llama_model_load: n_ff = 13824
llama_model_load: n_parts = 2
llama_model_load: type = 2
llama_model_load: invalid model file './models/ggml-stable-vicuna-13B.q4_2.bin' (bad f16 value 5)
llama_init_from_file: failed to load model
The file ggml-stable-vicuna-13B.q4_2.bin was downloaded from: https://gpt4all.io/models/ggml-stable-vicuna-13B.q4_2.bin
Can someone help me? Thank you very much!