
Problem talking to the backend #188

Open
andreas-bulling opened this issue Jul 23, 2024 · 2 comments
Labels
bug Something isn't working

Comments

andreas-bulling commented Jul 23, 2024

Installation went fine, but I get the following error when trying to invoke the assistant:

Sorry, there was a problem talking to the backend: RuntimeError('llama_decode returned 1')

[screenshot of the error in the Home Assistant UI]

acon96 (Owner) commented Jul 30, 2024

Can you provide more information? What model were you using? How many entities do you have exposed? Were there any errors or warnings in the HA logs?

Teagan42 (Contributor) commented

llama_decode is a function in llama.cpp: https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#llama_cpp.llama_cpp.llama_decode. Per the llama.cpp headers, a return value of 1 means it could not find a KV slot for the batch; the usual advice is to reduce the batch size or increase the context size.

Does your model work if you call it directly (i.e. outside of Home Assistant, using the runner's CLI or API)?
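
For reference, a minimal sketch of such a direct call with llama-cpp-python (the model path and prompt are placeholders; adjust n_ctx to your model):

```python
# Minimal llama-cpp-python smoke test, bypassing Home Assistant entirely.
# Assumes: pip install llama-cpp-python, and a local GGUF model file.
from llama_cpp import Llama

# Load the model directly; n_ctx is the context window size.
llm = Llama(model_path="./models/your-model.gguf", n_ctx=2048)

# Run a single completion. If llama_decode fails here too, the problem
# is in the model or runner rather than in the integration.
output = llm("Q: What is the capital of France? A:", max_tokens=32)
print(output["choices"][0]["text"])
```

If this runs cleanly, the next thing to check is the size of the prompt the integration builds (e.g. how many entities are exposed) relative to the configured context size.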
