Welcome to use the llama on the ITREX!
AVX:1 AVX2:1 AVX512F:0 AVX_VNNI:1 AVX512_VNNI:0 AMX_INT8:0 AMX_BF16:0 AVX512_BF16:0 AVX512_FP16:0
Loading the bin file with GGUF format...
main: seed = 1712361979
model.cpp: loading model from /models/llama-2-7b.Q4_K_S.gguf
error loading model: unrecognized tensor type 12
model_init_from_file: failed to load model
I got this error when trying to load the Q4_K_M and Q4_K_S quantized models for Llama-2-7B-GGUF. I would appreciate it if support for these quantization types could be added.
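For reference, a minimal reproduction sketch of how such a GGUF file is typically loaded through the ITREX Transformers-style API, assuming the usual `from_pretrained(..., model_file=...)` path; the repository id, tokenizer name, and prompt below are illustrative and not taken from the report above. Tensor type 12 most likely refers to the Q4_K block format in the ggml/GGUF type enumeration, which the loader apparently does not handle.

```python
# Minimal reproduction sketch (model ids, file names, and prompt are illustrative).
from transformers import AutoTokenizer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM

model_name = "TheBloke/Llama-2-7B-GGUF"    # assumed Hugging Face repo id
model_file = "llama-2-7b.Q4_K_S.gguf"      # K-quant file that triggers the error

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# Loading a Q4_K_S / Q4_K_M file here is expected to fail with
# "error loading model: unrecognized tensor type 12".
model = AutoModelForCausalLM.from_pretrained(model_name, model_file=model_file)

prompt = "Once upon a time"
inputs = tokenizer(prompt, return_tensors="pt").input_ids
outputs = model.generate(inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```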