This repository has been archived by the owner on May 12, 2023. It is now read-only.

pyllamacpp does not support M1-chip MacBooks #57

Open
laihenyi opened this issue Apr 12, 2023 · 9 comments

Comments

@laihenyi

Traceback (most recent call last):
File "/Users/laihenyi/Documents/GitHub/gpt4all-ui/app.py", line 29, in
from pyllamacpp.model import Model
File "/Users/laihenyi/Documents/GitHub/gpt4all-ui/env/lib/python3.11/site-packages/pyllamacpp/model.py", line 21, in
import _pyllamacpp as pp
ImportError: dlopen(/Users/laihenyi/Documents/GitHub/gpt4all-ui/env/lib/python3.11/site-packages/_pyllamacpp.cpython-311-darwin.so, 0x0002): tried: '/Users/laihenyi/Documents/GitHub/gpt4all-ui/env/lib/python3.11/site-packages/_pyllamacpp.cpython-311-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64')), '/System/Volumes/Preboot/Cryptexes/OS/Users/laihenyi/Documents/GitHub/gpt4all-ui/env/lib/python3.11/site-packages/_pyllamacpp.cpython-311-darwin.so' (no such file), '/Users/laihenyi/Documents/GitHub/gpt4all-ui/env/lib/python3.11/site-packages/_pyllamacpp.cpython-311-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64'))
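
The error means the compiled extension in the virtualenv was built for x86_64 while the interpreter is running as arm64. A quick way to confirm the mismatch (a minimal sketch, assuming the same virtualenv layout as in the traceback above):

  # architecture the interpreter itself runs as; should print arm64 on Apple Silicon
  python3 -c "import platform; print(platform.machine())"

  # architecture(s) the installed extension was built for; here it reports x86_64, hence the ImportError
  file env/lib/python3.11/site-packages/_pyllamacpp.cpython-311-darwin.so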

@covalspace

I'm having a similar issue on an M1 chip: the import fails, but pip install says the requirements are already satisfied.

@shivam-singhal

Similar issue, except I just get "[1] 79802 illegal hardware instruction python" upon running "from pyllamacpp.model import Model".

@NickAnastasoff

I'm having a similar issue on an M1 chip: the import fails, but pip install says the requirements are already satisfied.

I was having the same issue because I had multiple versions of Python installed. It might be worth a shot to try switching your Python version.
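
With several Pythons installed, it helps to check which interpreter is actually being picked up and what architecture it runs as (a minimal sketch; replace python3 with whatever command you launch):

  # list every python3 on the PATH, in resolution order
  which -a python3

  # the interpreter that actually runs, and whether it is arm64 or x86_64
  python3 -c "import sys, platform; print(sys.executable, platform.machine())"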

@laihenyi
Author

Any luck? Days have passed, and it seems the developer has not fixed this issue yet.

@absadiki
Collaborator

Sorry @laihenyi, I don't have a Mac, so I can't debug the issue myself.
Could you please try to build from source in that case? It is a straightforward process!
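
Roughly, building from source on the M1 machine looks like this (a sketch only; the repository URL and the submodule step are assumptions, and the Xcode command line tools plus CMake need to be installed):

  # clone the sources and build the extension natively on arm64
  git clone https://github.com/abdeladim-s/pyllamacpp.git   # URL assumed
  cd pyllamacpp
  git submodule update --init --recursive   # llama.cpp is assumed to be vendored as a submodule
  pip install .                             # compiles and installs an arm64 build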

@shivam-singhal

I encountered two problems:

  1. My conda install was for the x86_64 platform, and I should have installed an arm64 build instead.
  2. Installing the wheel from PyPI was pulling the x86_64 version of pyllamacpp, not the arm64 version.

This ultimately kept the binary from linking against BLAS, which macOS provides through the Accelerate framework (specifically, it could not find _cblas_sgemm, the single-precision matrix-multiplication routine). In simple terms, it was a dependency issue caused by incompatible platforms.

I found that I had to fix both of the above issues; fixing just one did not work. After creating a separate conda environment for arm64 and installing pyllamacpp from source (sketched below), I am able to run the sample code.
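
For anyone hitting the same thing, the fix described above looks roughly like this (a sketch only; the environment name and Python version are arbitrary):

  # create a conda environment that targets arm64 instead of x86_64
  CONDA_SUBDIR=osx-arm64 conda create -n llama-arm64 python=3.10
  conda activate llama-arm64
  conda config --env --set subdir osx-arm64   # keep future installs in this env on arm64

  # then, from a source checkout (see the build-from-source sketch above), install natively
  pip install .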

@absadiki
Collaborator

Thanks so much @shivam-singhal for the solution. I really appreciate it.

@laihenyi
Author

I encountered two problems:

  1. My conda install was for the x86_64 platform, and I should have installed an arm64 build instead.
  2. Installing the wheel from PyPI was pulling the x86_64 version of pyllamacpp, not the arm64 version.

This ultimately kept the binary from linking against BLAS, which macOS provides through the Accelerate framework (specifically, it could not find _cblas_sgemm, the single-precision matrix-multiplication routine). In simple terms, it was a dependency issue caused by incompatible platforms.

I found that I had to fix both of the above issues; fixing just one did not work. After creating a separate conda environment for arm64 and installing pyllamacpp from source, I am able to run the sample code.

Sorry, I am not a developer, but I am an M1 MacBook user.
What could I do to help you fix this bug?
Remote-connect to my laptop so you can compile an M1 binary?

@hsgarcia22

I encountered two problems:

  1. My conda install was for the x86_64 platform, and I should have installed an arm64 build instead.
  2. Installing the wheel from PyPI was pulling the x86_64 version of pyllamacpp, not the arm64 version.

This ultimately kept the binary from linking against BLAS, which macOS provides through the Accelerate framework (specifically, it could not find _cblas_sgemm, the single-precision matrix-multiplication routine). In simple terms, it was a dependency issue caused by incompatible platforms.

I found that I had to fix both of the above issues; fixing just one did not work. After creating a separate conda environment for arm64 and installing pyllamacpp from source, I am able to run the sample code.

This doesn't make sense; I'm not running this in conda, it's native python3. What did you modify to correct the original issue, and why is everyone linking this to the "from pygpt4all import GPT4All" import when it seems to be a separate issue?
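
One way to check whether the same mismatch applies outside conda is to look at what the native interpreter and pip report (a sketch, assuming the stock python3 on the Mac):

  # architecture of the native interpreter; a universal2 binary lists both arm64 and x86_64
  file "$(which python3)"

  # wheel tags pip considers compatible; if only x86_64 macosx tags appear, pip will keep pulling x86_64 wheels
  python3 -m pip debug --verbose | grep -i macosx | head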
