NyKxo1/SigPhi-Med


SigPhi-Med: A Lightweight Vision-Language Assistant for Biomedicine

Introduction

SigPhi-Med is a lightweight vision-language model designed for biomedical applications. It leverages compact architectures while maintaining strong performance in visual question answering (VQA) and related multimodal tasks. This repository provides code for training, evaluation, and model deployment.

Results

Installation

To set up the environment, follow the installation instructions of TinyLLaVA Factory and install the dependencies listed in the requirements.txt file.
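A minimal setup sketch, assuming a conda-based workflow as in TinyLLaVA Factory; the environment name and Python version below are illustrative assumptions, not values confirmed by this repository:

```shell
# Clone the repository and enter it.
git clone https://github.com/NyKxo1/SigPhi-Med.git
cd SigPhi-Med

# Create and activate an isolated environment (name and version are assumptions).
conda create -n sigphi-med python=3.10 -y
conda activate sigphi-med

# Install the pinned dependencies.
pip install -r requirements.txt
```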

Model Weights

Datasets

SigPhi-Med is trained and evaluated on the following biomedical multimodal datasets:

Training

To train SigPhi-Med, modify the training script as needed:

  1. Edit the configuration in scripts/train/train_phi.sh.
  2. Run the training script:
     sh scripts/train/train_phi.sh
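The configuration in step 1 typically means pointing the script at your data and output locations. A hypothetical excerpt of scripts/train/train_phi.sh is shown below; every variable name and path here is an illustrative assumption, not the script's actual contents:

```shell
# Hypothetical configuration variables inside scripts/train/train_phi.sh.
# Adjust these to your local paths before launching training.
DATA_PATH=/path/to/train_annotations.json   # multimodal training annotations
IMAGE_FOLDER=/path/to/images                # root directory of training images
OUTPUT_DIR=./checkpoints/sigphi-med         # where checkpoints are saved
```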

Evaluation

To evaluate the model on biomedical VQA tasks, use:

sh scripts/eval/VQA.sh

Acknowledgements

We appreciate the contributions of the following projects:
