
TypeError: expected CFunctionType instance instead of function in _pyllmodel.py #3310

Open
omin23 opened this issue Dec 16, 2024 · 1 comment
Labels
bug-unconfirmed chat gpt4all-chat issues

Comments

omin23 commented Dec 16, 2024

Bug Report

When calling gpt4all's GPT4All() constructor to initialise a model, the program throws an error:

Traceback (most recent call last):
  File "/path/to/script/trygpt.py", line 4, in <module>
    model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")
  File "/path/to/packages/gpt4all/gpt4all-bindings/python/gpt4all/gpt4all.py", line 263, in __init__
    self.model = LLModel(self.config["path"], n_ctx, ngl, backend)
  File "/path/to/packages/gpt4all/gpt4all-bindings/python/gpt4all/_pyllmodel.py", line 294, in __init__
    llmodel.llmodel_model_foreach_special_token(
ctypes.ArgumentError: argument 2: TypeError: expected CFunctionType instance instead of function
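For background (not part of the original report): this class of ctypes error occurs when a bare Python function is passed for an argument whose argtype was declared with ctypes.CFUNCTYPE; ctypes requires the function to be wrapped in the CFUNCTYPE prototype first. A minimal sketch, using a hypothetical callback prototype (the real signature used by llmodel_model_foreach_special_token may differ):

```python
import ctypes

# Hypothetical two-string callback prototype, loosely analogous to the
# special-token callback in _pyllmodel.py.
TokenCallback = ctypes.CFUNCTYPE(None, ctypes.c_char_p, ctypes.c_char_p)

def on_token(name, token):
    pass

# Passing the bare Python function fails ctypes argument conversion,
# which is what produces the error in the traceback above:
try:
    TokenCallback.from_param(on_token)
except TypeError as e:
    print(e)  # "expected CFunctionType instance instead of function"

# Wrapping the function in the prototype yields an acceptable
# CFunctionType instance. (Keep a reference to the wrapper alive for
# as long as the C side may invoke it, or it can be garbage-collected.)
wrapped = TokenCallback(on_token)
TokenCallback.from_param(wrapped)  # accepted
```

The fix that landed in the bindings follows this pattern: wrap the callback once in the CFUNCTYPE prototype and pass the wrapper, rather than the raw function, to the foreign call.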

Steps to Reproduce

Run test script:

from gpt4all import GPT4All

model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

with model.chat_session():
    print(model.generate("How can I run LLMs efficiently on my laptop?", max_tokens=1024))

Expected Behavior

The model should be downloaded and instantiated without error.

Your Environment

  • GPT4All version: 2.8.3.dev0 (0f27359c)
  • Python: 3.10.15
  • Operating System: Ubuntu 18.04.6 LTS
  • Chat model used (if applicable): n/a
@omin23 omin23 added bug-unconfirmed chat gpt4all-chat issues labels Dec 16, 2024
mgoltzsche commented Dec 17, 2024

Right, I also just ran into this while trying to build the latest release (v3.5.3) for arm64/Raspberry Pi and run the CLI app.
Apparently the issue has existed since gpt4all v3.5.0. I was finally able to build and run it using gpt4all v3.4.2.

FWIW, this is how I built a working Alpine-based gpt4all v3.4.2 Python CLI container. Building it with --build-arg GPT4ALL_VERSION=v3.5.3 reproduces the issue.

Btw, it is a pity that the latest gpt4all Python package released to PyPI (2.8.2) does not support arm64. I had to build the previously mentioned Dockerfile on arm64 myself in order to run a recent gpt4all version on the Raspberry Pi.
