@b4rtaz Hey, thank you for your wonderful work. Could you please offer some details about how to add a supported model? For example, how to convert some Ollama models like command-r-plus, starcoder2, or llama3 70b to distributed-llama?
To convert Llama 3 you have two options. You can use the Meta files and convert them with the convert-llama.py script (here is the tutorial). The second option is to download the .safetensors weights from Huggingface and convert them with convert-hf.py.
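For context on the second option: a .safetensors file stores an 8-byte little-endian length followed by a JSON header that maps tensor names to their dtype, shape, and byte offsets, which is what a converter like convert-hf.py has to walk through. A minimal sketch of reading that header (the tensor name and file below are made up for illustration, not taken from any real checkpoint):

```python
import json
import struct

def read_safetensors_header(path):
    """Parse the JSON header of a .safetensors file.

    Layout: 8-byte little-endian uint64 N, then N bytes of JSON
    mapping tensor names to {"dtype", "shape", "data_offsets"}.
    """
    with open(path, "rb") as f:
        (n,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(n))

# Build a tiny dummy .safetensors file to demonstrate (hypothetical tensor name).
header = {"tok_embeddings.weight": {"dtype": "F32", "shape": [4, 2], "data_offsets": [0, 32]}}
header_bytes = json.dumps(header).encode("utf-8")
with open("dummy.safetensors", "wb") as f:
    f.write(struct.pack("<Q", len(header_bytes)))
    f.write(header_bytes)
    f.write(b"\x00" * 32)  # 4*2 float32 values of tensor payload

parsed = read_safetensors_header("dummy.safetensors")
print(parsed["tok_embeddings.weight"]["shape"])
```

A converter iterates over this header, reads each tensor's bytes at its data_offsets, and rewrites them into the target format.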
I think this depends on the specific architecture. Some architectures will be easy, some not. Adding a new architecture is always a non-zero effort. Currently DL supports: llama, mixtral, and grok1.
https://ollama.com/library/command-r-plus
https://ollama.com/library/llama3:70b
https://ollama.com/library/starcoder2