Issues: b4rtaz/distributed-llama

Issues list

- Branching out on model support (#175, opened Feb 19, 2025 by pcfreak30)
- Work with cline (#145, opened Dec 22, 2024 by piotreq7)
- dllama-api & chat ui (#119, opened Aug 19, 2024 by twuerfl)
- Support for Gemma 2? (#115, opened Aug 4, 2024 by sdmorrey)
- [Feature request] Kubernetes setup (#101, opened Jul 12, 2024 by kami4ka)
- What about mobile phones? (#89, opened Jun 12, 2024 by dcale)
- Support nSlices > nKvHeads (#70, opened May 27, 2024 by b4rtaz)
- network utilization (#58, opened May 18, 2024 by zhengpeirong)
- How To Add Suppoerted Model (#55, opened May 16, 2024 by hyperbolic-c)
- Need help in set up all the devices (#21, opened Apr 16, 2024 by MarcuXu)