
Support for lower parameter models #184

Open
panikinator opened this issue Jul 13, 2024 · 2 comments
Labels: enhancement (New feature or request)

Comments

@panikinator

Have you tried experimenting with lower-parameter models like Flan-T5, ALBERT, BERT, etc., or even Qwen 0.5B?
With fine-tuning, they might suffice in this specific domain.
I have a low-end machine, and even TinyLlama is kind of slow.
I have tried tinkering with your existing codebase, but I lack the skills as well as the horsepower to do so.

Hats off for working on this awesome project, by the way.

panikinator added the enhancement label Jul 13, 2024
@acon96 (Owner) commented Jul 13, 2024

I did some experiments with Qwen2 0.5B and the results were quite impressive compared to models like Phi-2 from last year: https://github.com/acon96/home-llm/blob/feature/polish-dataset/docs/experiment-notes-qwen.md#tinyhome-qwen-rev3

I definitely think this project shows a great use case for fine-tuning smaller models instead of relying on the zero-shot performance of larger models (7B+) with in-context learning examples.

I'll see if I can get some time in the next few weeks to re-run the training, because that specific run I linked had an issue where the model didn't want to emit the EOS token and would ramble on about random stuff after turning on your lights (which, while it can be hilarious, is not ideal).
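For reference, a minimal sketch of the usual way to address that failure mode, assuming a generic Hugging Face fine-tuning setup rather than this project's actual training code: append the EOS token to every training target so the model learns to emit it, and stop decoding on that token at inference. The model name and helper function below are illustrative placeholders.

```python
# Hedged sketch: teach the model to stop by terminating training targets with EOS,
# and stop generation on that token at inference time.
# "Qwen/Qwen2-0.5B" and build_example() are placeholders, not this project's setup.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2-0.5B"  # assumption: a small base model like the one mentioned above
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# During dataset preparation: explicitly terminate every target with EOS so the
# model actually sees the stop token in training and learns to produce it.
def build_example(prompt: str, response: str) -> dict:
    text = prompt + response + tokenizer.eos_token
    return tokenizer(text, truncation=True, max_length=512)

example = build_example("User: turn on the lights\nAssistant: ", "turning on the lights")

# During inference: stop decoding as soon as EOS is produced.
inputs = tokenizer("turn on the living room lights", return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_new_tokens=64,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```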

@panikinator (Author)

So Hugging Face released a new series of LLMs called SmolLM. I want to experiment with the 135M-parameter one, but I only have a GPU with 4 GB of VRAM 🥲. With my limited knowledge, I tried fine-tuning that model with LoRA on my GPU but ran into CUDA OOM errors. Do I have any chance of making it work?
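For what it's worth, below is a minimal sketch of the kind of settings that usually let a LoRA run fit on a ~4 GB card: fp16 base weights, gradient checkpointing, batch size 1 with gradient accumulation, and Adafactor instead of AdamW. The model name, toy dataset, and hyperparameters are illustrative assumptions, not a recipe from this project.

```python
# Hedged sketch: memory-frugal LoRA fine-tuning of a small causal LM on a ~4 GB GPU.
import torch
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "HuggingFaceTB/SmolLM-135M"  # assumption: the 135M checkpoint mentioned above
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
model.gradient_checkpointing_enable()   # trade compute for activation memory
model.enable_input_require_grads()      # needed when checkpointing a PEFT-wrapped model

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumption: Llama-style attention module names
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Toy dataset just to keep the sketch self-contained.
texts = ["turn on the kitchen lights", "set the thermostat to 21 degrees"]
dataset = Dataset.from_dict({"text": texts}).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=256),
    remove_columns=["text"],
)

args = TrainingArguments(
    output_dir="smollm-135m-lora",
    per_device_train_batch_size=1,    # keep per-step activations small
    gradient_accumulation_steps=16,   # recover an effective batch size of 16
    fp16=True,
    optim="adafactor",                # lighter optimizer state than AdamW
    num_train_epochs=1,
    logging_steps=10,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

If it still OOMs at this point, the usual next steps are shortening `max_length` or offloading the optimizer state, but the settings above are normally enough for a 135M-parameter model on 4 GB.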
