Trying to run from local llama.cpp server #67
Unanswered
DefamationStation asked this question in Questions
Hello,

I'm running a llama.cpp server at http://127.0.0.1:8080/v1, and this is my config:
```toml
default_model = "Llama-3-8b-Q4_K_S"
system_prompt = ""
message_code_theme = "dracula"

[[models]]
name = "Llama-3-8b-Q4_K_S"
api_base = "http://127.0.0.1:8080/v1"
api_key = "api-key-if-required"
```
But I am getting this error:
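For reference, here is roughly how the same endpoint can be checked outside the app — a minimal sketch assuming llama.cpp's standard OpenAI-compatible `/v1/chat/completions` route, with the model name and key taken from the config above (the Authorization header should only matter if the server was started with an API key):

```python
# Sanity check: send one chat request straight to the local llama.cpp
# server using the same api_base as the config above.
import requests

resp = requests.post(
    "http://127.0.0.1:8080/v1/chat/completions",
    # Only needed if the server was launched with an --api-key (assumption).
    headers={"Authorization": "Bearer api-key-if-required"},
    json={
        "model": "Llama-3-8b-Q4_K_S",
        "messages": [{"role": "user", "content": "Say hello"}],
    },
    timeout=30,
)
print(resp.status_code)
print(resp.text)
```

If this returns a normal completion, the server side is presumably fine and the problem is in the client config.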