
v0.2.16 - intregration failed to configure bug #140

Open
pbn42 opened this issue May 4, 2024 · 10 comments
Labels
bug Something isn't working

Comments

pbn42 commented May 4, 2024

Describe the bug

Using v0.2.16, installation works fine, but when I finished creating the integration, I got a "failed to configure" message.

Expected behavior
The integration should start and appear as a conversation agent.

Logs
If applicable, please upload any error or debug logs output by Home Assistant.

```
Logger: homeassistant.config_entries
Source: config_entries.py:575
First occurred: 22:11:11 (1 occurrences)
Last logged: 22:11:11

Error setting up entry LLM Model 'acon96/Home-3B-v3-GGUF' (llama.cpp) for llama_conversation
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/config_entries.py", line 575, in async_setup
    result = await component.async_setup_entry(hass, self)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/custom_components/llama_conversation/__init__.py", line 67, in async_setup_entry
    agent = await hass.async_add_executor_job(create_agent, backend_type)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/custom_components/llama_conversation/__init__.py", line 63, in create_agent
    return agent_cls(hass, entry)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/config/custom_components/llama_conversation/agent.py", line 139, in __init__
    self._load_model(entry)
  File "/config/custom_components/llama_conversation/agent.py", line 542, in _load_model
    validate_llama_cpp_python_installation()
  File "/config/custom_components/llama_conversation/utils.py", line 97, in validate_llama_cpp_python_installation
    raise Exception(f"Failed to properly initialize llama-cpp-python. (Exit code {process.exitcode}.)")
Exception: Failed to properly initialize llama-cpp-python. (Exit code -4.)
```

Thanks a lot

pbn42 added the bug label May 4, 2024
acon96 (Owner) commented May 5, 2024

Can you post the contents of /proc/cpuinfo for the system this is running on? The error is that the integration is trying to execute an illegal instruction because it is installing the wrong variant of llama-cpp-python. See #99 for more info.
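(Assuming a Linux shell is available, a quick way to pull just the relevant parts — the model name plus the `flags` line listing the supported instruction-set extensions — instead of pasting the whole file:)

```shell
# Print the CPU model and its instruction-set extensions.
grep -m1 'model name' /proc/cpuinfo
grep -m1 '^flags' /proc/cpuinfo
```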

pbn42 (Author) commented May 5, 2024

Sure:

```
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 156
model name : Intel(R) Celeron(R) N5095A @ 2.00GHz
stepping : 0
microcode : 0x1b
cpu MHz : 1996.800
cache size : 16384 KB
physical id : 0
siblings : 2
core id : 0
cpu cores : 2
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 27
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 cx16 pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave rdrand hypervisor lahf_lm 3dnowprefetch cpuid_fault ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust smep erms rdseed smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves arat vnmi umip waitpkg gfni rdpid movdiri movdir64b md_clear flush_l1d arch_capabilities
vmx flags : vnmi preemption_timer posted_intr invvpid ept_x_only ept_ad ept_1gb flexpriority apicv tsc_offset vtpr mtf vapic ept vpid unrestricted_guest vapic_reg vid shadow_vmcs pml tsc_scaling
bugs : spectre_v1 spectre_v2 spec_store_bypass swapgs srbds mmio_stale_data rfds
bogomips : 3993.60
clflush size : 64
cache_alignment : 64
address sizes : 39 bits physical, 48 bits virtual
power management:

processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 156
model name : Intel(R) Celeron(R) N5095A @ 2.00GHz
stepping : 0
microcode : 0x1b
cpu MHz : 1996.800
cache size : 16384 KB
physical id : 0
siblings : 2
core id : 1
cpu cores : 2
apicid : 1
initial apicid : 1
fpu : yes
fpu_exception : yes
cpuid level : 27
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 cx16 pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave rdrand hypervisor lahf_lm 3dnowprefetch cpuid_fault ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust smep erms rdseed smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves arat vnmi umip waitpkg gfni rdpid movdiri movdir64b md_clear flush_l1d arch_capabilities
vmx flags : vnmi preemption_timer posted_intr invvpid ept_x_only ept_ad ept_1gb flexpriority apicv tsc_offset vtpr mtf vapic ept vpid unrestricted_guest vapic_reg vid shadow_vmcs pml tsc_scaling
bugs : spectre_v1 spectre_v2 spec_store_bypass swapgs srbds mmio_stale_data rfds
bogomips : 3993.60
clflush size : 64
cache_alignment : 64
address sizes : 39 bits physical, 48 bits virtual
power management:
```
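For what it's worth, the `flags` line above lists `sse4_2` but no `avx`, `avx2`, `f16c`, or `fma`, which would be consistent with a wheel built for a more capable CPU variant crashing with SIGILL. A rough sketch of that check (the exact set of extensions each prebuilt wheel assumes is my guess, not confirmed against the project's wheels):

```python
# "flags" line abbreviated from the cpuinfo output above.
flags_line = (
    "flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov "
    "pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx rdtscp lm "
    "constant_tsc pni pclmulqdq vmx ssse3 cx16 pdcm sse4_1 sse4_2 x2apic "
    "movbe popcnt aes xsave rdrand lahf_lm"
)

# Everything after the colon is a space-separated list of extensions.
flags = set(flags_line.split(":", 1)[1].split())
for ext in ("sse4_2", "avx", "avx2", "f16c", "fma"):
    print(f"{ext}: {'present' if ext in flags else 'MISSING'}")
```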

mikaelheidrich commented:

For what it's worth, I am having the exact same issue using v0.2.17:

```
Logger: homeassistant.config_entries
Source: config_entries.py:575
First occurred: 10:26:19 (1 occurrences)
Last logged: 10:26:19

Error setting up entry LLM Model 'acon96/Home-3B-v3-GGUF' (llama.cpp) for llama_conversation
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/config_entries.py", line 575, in async_setup
    result = await component.async_setup_entry(hass, self)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/custom_components/llama_conversation/__init__.py", line 67, in async_setup_entry
    agent = await hass.async_add_executor_job(create_agent, backend_type)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/custom_components/llama_conversation/__init__.py", line 63, in create_agent
    return agent_cls(hass, entry)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/config/custom_components/llama_conversation/agent.py", line 139, in __init__
    self._load_model(entry)
  File "/config/custom_components/llama_conversation/agent.py", line 542, in _load_model
    validate_llama_cpp_python_installation()
  File "/config/custom_components/llama_conversation/utils.py", line 97, in validate_llama_cpp_python_installation
    raise Exception(f"Failed to properly initialize llama-cpp-python. (Exit code {process.exitcode}.)")
Exception: Failed to properly initialize llama-cpp-python. (Exit code -4.)
```

The CPU is an Intel Celeron 2955U.

mikaelheidrich commented:

To follow up: I tried the new release and still have this error. I have tried manually adding various wheels to the custom_components/llama_conversation/ directory and continuing the installation. All result in the same error:

```
Logger: homeassistant.config_entries
Source: config_entries.py:594
First occurred: 12:40:25 (6 occurrences)
Last logged: 14:11:04

Error setting up entry LLM Model 'acon96/Home-3B-v3-GGUF' (llama.cpp) for llama_conversation
Error setting up entry LLM Model 'llama_cpp_python-0.2.77-cp312-cp312-musllinux_1_2_x86_64-noavx.whl' (llama.cpp) for llama_conversation
Error setting up entry LLM Model 'llama_cpp_python-0.2.77-cp312-cp312-musllinux_1_2_x86_64-avx512.whl' (llama.cpp) for llama_conversation
Error setting up entry LLM Model 'llama_cpp_python-0.2.77-cp312-cp312-musllinux_1_2_x86_64.whl' (llama.cpp) for llama_conversation
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/config_entries.py", line 594, in async_setup
    result = await component.async_setup_entry(hass, self)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/custom_components/llama_conversation/__init__.py", line 80, in async_setup_entry
    agent = await hass.async_add_executor_job(create_agent, backend_type)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/custom_components/llama_conversation/__init__.py", line 76, in create_agent
    return agent_cls(hass, entry)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/config/custom_components/llama_conversation/agent.py", line 152, in __init__
    self._load_model(entry)
  File "/config/custom_components/llama_conversation/agent.py", line 776, in _load_model
    validate_llama_cpp_python_installation()
  File "/config/custom_components/llama_conversation/utils.py", line 132, in validate_llama_cpp_python_installation
    raise Exception(f"Failed to properly initialize llama-cpp-python. (Exit code {process.exitcode}.)")
Exception: Failed to properly initialize llama-cpp-python. (Exit code -4.)
```

The CPU is an Intel Celeron 2955U.

acon96 (Owner) commented Jun 9, 2024

Your CPU says that it supports all of the required instructions but keeps crashing because of a missing instruction.

The solution to get around this is to follow the directions here to build wheels that are compatible with the machine you are using: https://github.com/acon96/home-llm/blob/develop/docs/Backend%20Configuration.md#build-your-own

mikaelheidrich commented:

Hey! Thanks for the reply. I’m running Home Assistant OS supervised and the command line won’t let me execute docker or git commands. Guess I’ll have to find a different route. Thanks again!

pbn42 (Author) commented Jun 10, 2024

> Your CPU says that it supports all of the required instructions but keeps crashing because of a missing instruction.
>
> The solution to get around this is to follow the directions here to build wheels that are compatible with the machine you are using: https://github.com/acon96/home-llm/blob/develop/docs/Backend%20Configuration.md#build-your-own

Thanks a lot. I just generated it for my Intel(R) Celeron(R) N5095A @ 2.00GHz (see attachment below)

Can you just tell us where to store it in HA, please?

llama_cpp_python-0.2.77-cp312-cp312-musllinux_1_2_x86_64.zip

benbender commented:

@pbn42 "Take the appropriate wheel and copy it to the custom_components/llama_conversation/ directory." (See https://github.com/acon96/home-llm/blob/develop/docs/Backend%20Configuration.md#wheels)

benbender commented:

I followed the given instructions and placed the newly created wheel inside the custom_components/llama_conversation/ folder.

I'm still getting:

```
2024-06-10 13:10:45.828 ERROR (MainThread) [homeassistant.config_entries] Error setting up entry LLM Model 'acon96/Home-3B-v3-GGUF' (llama.cpp) for llama_conversation
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/config_entries.py", line 594, in async_setup
    result = await component.async_setup_entry(hass, self)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/custom_components/llama_conversation/__init__.py", line 80, in async_setup_entry
    agent = await hass.async_add_executor_job(create_agent, backend_type)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/custom_components/llama_conversation/__init__.py", line 76, in create_agent
    return agent_cls(hass, entry)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/config/custom_components/llama_conversation/agent.py", line 152, in __init__
    self._load_model(entry)
  File "/config/custom_components/llama_conversation/agent.py", line 776, in _load_model
    validate_llama_cpp_python_installation()
  File "/config/custom_components/llama_conversation/utils.py", line 132, in validate_llama_cpp_python_installation
    raise Exception(f"Failed to properly initialize llama-cpp-python. (Exit code {process.exitcode}.)")
```

Latest home-llm, latest Home Assistant, on an Intel(R) Celeron(R) N5105 @ 2.00GHz CPU.

bigboo3000 commented Jul 9, 2024

I have exactly the same problem with an Intel Celeron J4105. I first used a "noavx" prebuilt wheel: not working.

Then I built a custom wheel on my machine and placed it in the correct folder, but I still get:
Exception: Failed to properly initialize llama-cpp-python. (Exit code -4.)

5 participants