Hello, I am excited that gpt-neox now supports the Llama model. However, the script in tools/convert_raw_llama_weights_to_neox.py only supports the original Llama weights. Given the large number of users currently on Hugging Face, would it be possible to provide a script for converting the Hugging Face Llama model into the NeoX format?

In my experiments, training speed and memory usage in gpt-neox are much better than in other language-model frameworks, even when training with LoRA. So I would like to use gpt-neox for training if it supports the model.
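For reference, the core of such a converter is remapping state-dict keys. The sketch below is a hypothetical illustration, not a working converter: the left-hand names are the real parameter names used by Hugging Face's `LlamaForCausalLM`, but the right-hand NeoX-style names are placeholders I made up — the exact keys gpt-neox expects depend on its checkpoint layout (and attention weights may additionally need head reordering for rotary embeddings, as the raw-weight script does).

```python
import re

# Per-layer key translations. HF names (left) are real; the NeoX-style
# targets (right) are illustrative placeholders, not gpt-neox's actual keys.
_LAYER_MAP = {
    "self_attn.q_proj.weight": "attention.query.weight",
    "self_attn.k_proj.weight": "attention.key.weight",
    "self_attn.v_proj.weight": "attention.value.weight",
    "self_attn.o_proj.weight": "attention.dense.weight",
    "mlp.gate_proj.weight": "mlp.w1.weight",
    "mlp.up_proj.weight": "mlp.w3.weight",
    "mlp.down_proj.weight": "mlp.w2.weight",
    "input_layernorm.weight": "input_layernorm.scale",
    "post_attention_layernorm.weight": "post_attention_layernorm.scale",
}

# Top-level (non-layer) key translations, same caveat on the targets.
_TOP_MAP = {
    "model.embed_tokens.weight": "embed.word_embeddings.weight",
    "model.norm.weight": "final_norm.scale",
    "lm_head.weight": "final_linear.weight",
}

def remap_key(hf_key: str) -> str:
    """Translate one HF Llama state-dict key to a NeoX-style key."""
    if hf_key in _TOP_MAP:
        return _TOP_MAP[hf_key]
    m = re.match(r"model\.layers\.(\d+)\.(.+)", hf_key)
    if m and m.group(2) in _LAYER_MAP:
        return f"layers.{m.group(1)}.{_LAYER_MAP[m.group(2)]}"
    raise KeyError(f"unhandled key: {hf_key}")
```

A full script would iterate over the HF checkpoint's state dict, apply a mapping like this, permute the attention projections as needed, and then reshard the tensors to the target model- and pipeline-parallel layout.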