Issues: nebuly-ai/optimate
Does this fine-tuning code not work on a single A6000 GPU for LLaMA-2-7B with LoRA?
#359 · opened Nov 4, 2024 by 01choco
[Speedster] optimize_model took 10 hours, and it's not over yet
#354 · opened Sep 14, 2023 by wanglongwork
[Speedster] TensorRT OSError: [WinError 127] The specified procedure could not be found
#353 · opened Aug 22, 2023 by nemeziz69
[Speedster] _dl_check_map_versions assertion error with optimize_model and ONNX compilers
#346 · opened Jun 20, 2023 by trent-s
yolov8 + nebuly | AttributeError: type object 'DummyClass' has no attribute 'models'
#342 · opened May 20, 2023 by scraus
[Chatllama] What's supposed to be in the Actor checkpoint dir?
#337 · opened Apr 18, 2023 by StrangeTcy
Support for torch 2.0
Label: speedster (Issue related to the Speedster App)
#325 · opened Apr 3, 2023 by lminer
[Chatllama] Merge the datasets to create more insightful training data
Labels: chatllama (Issue related to the ChatLLaMA module), good first issue (Good for newcomers)
#321 · opened Mar 31, 2023 by PierpaoloSorbellini · 3 tasks
[Chatllama] Support inference for trained models
Labels: chatllama (Issue related to the ChatLLaMA module), good first issue (Good for newcomers)
#320 · opened Mar 31, 2023 by PierpaoloSorbellini · 4 tasks
[Chatllama] Evaluation function and loop with metrics
Labels: chatllama (Issue related to the ChatLLaMA module), good first issue (Good for newcomers)
#319 · opened Mar 31, 2023 by PierpaoloSorbellini · 1 of 5 tasks
[Chatllama] How to reduce the CUDA memory consumption of a LLaMA-7B model
#315 · opened Mar 31, 2023 by balcklive