Have you tried experimenting with lower-parameter models like Flan-T5, ALBERT, BERT, etc., or even Qwen 0.5B? With fine-tuning they might suffice in this specific domain (a quick way to try one is sketched below).
I have a low-end machine and even TinyLlama is kinda slow.
I've tried tinkering with your existing codebase, but I lack the skills as well as the horsepower to do so.
Hats off for working on this awesome project, btw.
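For anyone who wants to quickly poke at one of these smaller checkpoints, here is a minimal sketch using the Hugging Face `transformers` pipeline. The model id and prompt are just illustrative placeholders, not a recommendation:

```python
# Minimal sketch: run a ~0.5B-parameter model locally.
# The model id below is an assumption; swap in any small checkpoint you like.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2-0.5B-Instruct",  # placeholder; a Flan-T5 would use "text2text-generation" instead
    device=-1,  # -1 = CPU; use 0 for the first CUDA GPU
)

result = generator(
    "Turn off the bedroom lights.",  # example smart-home-style prompt
    max_new_tokens=32,
    do_sample=False,  # greedy decoding for deterministic output
)
print(result[0]["generated_text"])
```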
I definitely think this project shows a great use case for fine-tuning smaller models instead of relying on the zero-shot performance of larger models (7B+) with in-context learning examples.
I'll see if I can get some time in the next few weeks to re-run the training, because that specific run I linked had an issue with the model not wanting to use the EOS token and rambling on about random stuff after turning on your lights (which, while it can be hilarious, is not ideal).
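For anyone hitting the same rambling symptom, the usual suspects are the EOS token never appearing in the training targets, or being masked out along with padding. A minimal sketch of the common fix, assuming the Hugging Face tokenizer API (the model id is a placeholder, and this may not be what actually went wrong in that particular run):

```python
# Sketch: make the model learn to stop by guaranteeing EOS in every target.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("your-base-model")  # placeholder id

if tokenizer.pad_token is None:
    # Reusing EOS as padding is common, but the data collator must then keep
    # the *final* EOS unmasked in the labels, or the model never learns to
    # emit it and will keep generating past the answer.
    tokenizer.pad_token = tokenizer.eos_token

def build_example(prompt: str, completion: str) -> str:
    # Explicitly terminate every training target with EOS so that generation
    # stops instead of rambling.
    return prompt + completion + tokenizer.eos_token
```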
So Hugging Face released a new series of LLMs called SmolLM. I want to experiment with the 135M-parameter one, but I only have a GPU with 4 GB of VRAM 🥲. With my limited knowledge, I tried fine-tuning that model with LoRA on my GPU but ran into CUDA OOM errors. Do I have any chance of making it work?
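For what it's worth, a 135M model in fp16 is only a few hundred MB of weights, so OOM on 4 GB usually comes from activations, optimizer state, batch size, or sequence length rather than the model itself. A rough sketch of the usual memory savers, assuming the `transformers` + `peft` + `bitsandbytes` stack (the repo id, target module names, and hyperparameters are assumptions to adjust):

```python
# Rough sketch: LoRA fine-tuning with the usual memory savers for a small GPU.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "HuggingFaceTB/SmolLM-135M"  # assumed repo id for the 135M SmolLM

# 4-bit quantization shrinks the frozen base weights (QLoRA-style).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Trade compute for memory: recompute activations during the backward pass.
model = prepare_model_for_kbit_training(model)
model.gradient_checkpointing_enable()

# Only train small low-rank adapters; the base model stays frozen.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed attention projection names
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Tiny batches plus gradient accumulation keep peak memory low while
# preserving an effective batch size of 16.
args = TrainingArguments(
    output_dir="smollm-lora",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    fp16=True,
    learning_rate=2e-4,
    max_steps=1000,
)
```

Also worth trying: a shorter max sequence length when tokenizing, and if 4-bit quantization causes trouble on a model this small, plain fp16 LoRA with batch size 1 may already fit in 4 GB.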