
MediGemma

(This project was completed as a selection task for research internships at NUS.)

The paper did not have any associated code or results to reproduce. It presented a benchmark and instruction-prompt-tuned the Flan-PaLM model to create Med-PaLM. PaLM models are older-generation, closed-source models, so to demonstrate my coding proficiency I fine-tuned Google's Gemma (google/gemma-2-2b-it) on the MedQuAD dataset, which contains 16,000+ medical question-answer pairs. Limited by computing resources, I fine-tuned it with LoRA on 2,000 question-answer pairs. The notebook used for fine-tuning is in this repository; I further plan to share the fine-tuned model on Hugging Face so it can be accessed as an API.
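
For reference, here is a minimal sketch of what a LoRA fine-tuning setup for this model looks like with the Hugging Face transformers, peft, and datasets libraries. This is an illustration of the technique, not the exact notebook code: the dataset identifier, column names, and hyperparameters below are assumptions.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "google/gemma-2-2b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# LoRA freezes the base weights and trains small low-rank adapter matrices
# injected into the attention projections, so only a tiny fraction of
# parameters need gradients.
lora_config = LoraConfig(
    r=8,                 # rank of the adapter matrices (illustrative)
    lora_alpha=16,       # adapter scaling factor
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# A MedQuAD-style Q&A dataset; the identifier and column names are
# assumptions for illustration. Only the first 2,000 pairs are used.
dataset = load_dataset("keivalya/MedQuad-MedicalQnADataset", split="train[:2000]")

def tokenize(example):
    # Format each pair as a single causal-LM training string.
    text = f"Question: {example['Question']}\nAnswer: {example['Answer']}"
    return tokenizer(text, truncation=True, max_length=512)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="medigemma-lora",
        per_device_train_batch_size=1,
        num_train_epochs=1,
        learning_rate=2e-4,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

After training, only the small adapter weights need to be saved and shared; they can be merged back into the base model or loaded on top of it at inference time.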

About

Notebook for fine-tuning google/gemma-2-2b-it on the MedQuAD dataset using Low-Rank Adaptation (LoRA) of large language models.
