Issues: unslothai/unsloth
#1343: Adding New Tokens, then Saving & Re-loading Model Adapter (opened Nov 26, 2024 by laura-burdick-sil)
#1337: Will using Unsloth affect the training results, or does it only serve to accelerate the process? (opened Nov 25, 2024 by lichaoahcil)
#1336: [Issue] Triton Compilation Error in Unsloth Fine-Tuning Script on Kernel 5.4.0 [fixed - pending confirmation] (opened Nov 25, 2024 by gityeop)
#1331: Can we use a custom chat template (or no template at all) for vision fine-tuning? (opened Nov 24, 2024 by Any-Winter-4079)
#1330: add generation prompt enforcement is too severe [currently fixing] (opened Nov 24, 2024 by RonanKMcGovern)
#1329: Fail to Load LoRA Model for VLM Fine-Tune [fixed - pending confirmation] [URGENT BUG] (opened Nov 23, 2024 by krittaprot)
#1327: [Urgent] After reinstalling unsloth, Llama 3.2/3.1 fine tuning gets error with customized compute_metrics function [fixed - pending confirmation] [URGENT BUG] (opened Nov 22, 2024 by yuan-xia)
#1326: qwen2-vl 2b 4-bit always getting OOM, yet llama3.2 11b works! (opened Nov 22, 2024 by mehamednews)
#1325: Llama 3.2 vision finetuning error (Unsupported: hasattr ConstDictVariable to) [fixed - pending confirmation] [URGENT BUG] (opened Nov 22, 2024 by adi7820)
#1324: Unsloth Phi-3.5 LoRA: 3x the Number of Trainable Parameters with the Same Hyperparameters (opened Nov 22, 2024 by KristianMoellmann)
#1323: Saving the model with save_pretrained_merged failed. [currently fixing] (opened Nov 22, 2024 by WATCHARAPHON6912)
#1319: How to fine-tune LLaMA 3.2 11B Vision using LoRA with the recent update? (opened Nov 21, 2024 by yukiarimo)
#1314: failed finetune qwen32b_awq_int4 using lora with llama-factory (opened Nov 21, 2024 by Daya-Jin)
#1312: The tokenizer does not have a {% if add_generation_prompt %} (opened Nov 21, 2024 by Galaxy-Husky)
#1311: Not able to load model from huggingface repo with correct path (FileNotFoundError: invalid repository id) (opened Nov 20, 2024 by ygl1020)
ProTip! Find all open issues with in-progress development work using the linked:pr search qualifier.