Your [tesseract_with_llama2_corrections.py] code snippet uses the llama2 chat ggml q3 k_s.bin LLM model, but huggingface.co now recommends GGUF, saying that GGML is deprecated. I need to know whether I can point model_file_path in the code snippet at a GGUF file instead.
I need confirmation before downloading 108 GB of data.
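For what it's worth, llama.cpp-based loaders such as llama-cpp-python read GGUF natively, so swapping the path to a .gguf file generally works if the snippet's loader is llama.cpp-based. Below is a minimal sketch; the GGUF filename is hypothetical (check the actual file name on the Hugging Face repo before downloading), and it assumes llama-cpp-python rather than whatever loader the original snippet uses:

```python
from pathlib import Path

# Hypothetical GGUF filename -- verify the exact name on Hugging Face first.
model_file_path = "llama-2-7b-chat.Q3_K_S.gguf"

def load_model(path: str):
    """Load a GGUF model with llama-cpp-python if the file exists locally."""
    if not Path(path).exists():
        # Avoids importing llama_cpp (and failing) when the model isn't downloaded yet.
        print(f"Model file not found: {path}")
        return None
    from llama_cpp import Llama  # pip install llama-cpp-python
    # Llama() accepts a GGUF file directly via model_path.
    return Llama(model_path=path, n_ctx=2048)

model = load_model(model_file_path)
```

Note that GGUF quantized files are typically a few gigabytes each; you likely do not need the full 108 GB repo, only the one quantization you intend to run.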