Atome-FE/llama-node is mentioned in the docs as a Node.js/JavaScript binding for rwkv.cpp.
Right now, however, it does not seem to work with the current version of rwkv.cpp (as mentioned in issue 121), presumably because of the changes made in commit 8db73b1 ("update ggml"). The error message thrown is:
llama.cpp: loading model from /Users/andreas/rozek/AI/RWKV/RWKV-5-World-0.1B-v1-20230803-ctx4096-Q4_1.bin
error loading model: unknown (magic, version) combination: 67676d66, 00000065; is this really a GGML file?
llama_init_from_file: failed to load model
node:internal/process/promises:288
triggerUncaughtException(err, true /* fromPromise */);
^
[Error: Failed to initialize LLama context from file: /Users/andreas/rozek/AI/RWKV/RWKV-5-World-0.1B-v1-20230803-ctx4096-Q4_1.bin] {
code: 'GenericFailure'
}
Node.js v18.17.0
Unfortunately, since I'm not a C++ programmer, I'm not able to revert just those changes that actually affect the GGML handling; simply going back one commit further did not help (the resulting code did not compile).
Unless somebody is able to help me continue, I would recommend adding an appropriate note to the docs until "llama-node" has been fixed.
That said, the version of rwkv.cpp that llama-node uses is 6 months old. You would be missing out on proper sequence mode support (a major optimization), the soon-to-come RWKV v5 support (which finally makes RWKV competitive with Transformers), and other optimizations in ggml that are often invisible to end users but still significant. My personal recommendation, if possible, is to use rwkv.cpp directly in your app. Being "not a C++ programmer" should not be an issue, since we have a simple and powerful Python API.
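A minimal sketch of that route (assuming the `rwkv_cpp_model` / `rwkv_cpp_shared_library` helpers from rwkv.cpp's `python/` directory are on `sys.path`; their exact signatures and return types may differ between versions, so treat the `run` function as illustrative rather than definitive):

```python
def greedy_token(logits):
    """Pick the index of the highest logit (greedy sampling)."""
    best_index = 0
    for i, value in enumerate(logits):
        if value > logits[best_index]:
            best_index = i
    return best_index

def run(model_path, prompt_tokens):
    # These imports assume rwkv.cpp's python/ directory is importable.
    from rwkv_cpp import rwkv_cpp_shared_library, rwkv_cpp_model

    library = rwkv_cpp_shared_library.load_rwkv_shared_library()
    model = rwkv_cpp_model.RWKVModel(library, model_path)

    state = None
    logits = None
    # Feed the prompt token by token, threading the recurrent state through.
    for token in prompt_tokens:
        logits, state = model.eval(token, state)

    # Return the greedily sampled next token.
    return greedy_token(logits)
```

Tokenization (mapping text to `prompt_tokens` and back) is handled separately, e.g. with the World tokenizer scripts shipped alongside the Python examples.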