Issues: b4rtaz/distributed-llama
#151: Weird bug where malformed API request causes model to analyze error message (opened Jan 22, 2025 by jkeegan)
#96: [New Feature] Add new route for dllama api for embedding models (opened Jul 1, 2024 by testing0mon21)
#40: Unknown header keys while converting llama 3 70b to distributed format (opened May 8, 2024 by DifferentialityDevelopment)
#20: Why does the synchronization time suddenly increase when going from 4 Pis to 8 Pis? (opened Apr 6, 2024 by yuezhan0721)
#19: What about multi-core support on stand-alone dual-socket motherboards? (opened Apr 5, 2024 by win10ogod)