csjihwanh/CLIP2GPT3
CLIP2GPT3

Visual Question Answering using LLaMA with a CLIP module

figure1
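The idea behind the pipeline can be sketched as: a CLIP image encoder produces an embedding, and a learned projection maps it into the LLM's embedding space so the language model can condition on the image. The sketch below is illustrative only; the dimensions (512 for CLIP ViT-B/32, 4096 for LLaMA-7B) and the random projection are assumptions, not taken from this repository's code.

```python
import numpy as np

# Illustrative dimensions -- assumed, not read from the repo's code.
CLIP_DIM = 512   # e.g. CLIP ViT-B/32 image-embedding size
LLM_DIM = 4096   # e.g. LLaMA-7B hidden size

rng = np.random.default_rng(0)

# In the real model this projection is learned during training;
# random weights stand in for it here.
W = rng.standard_normal((CLIP_DIM, LLM_DIM)) * 0.02

def project_clip_feature(image_feature: np.ndarray) -> np.ndarray:
    """Map one CLIP image embedding to a vector in the LLM's embedding space."""
    return image_feature @ W

clip_feature = rng.standard_normal(CLIP_DIM)  # stand-in for a CLIP encoder output
soft_prompt = project_clip_feature(clip_feature)
print(soft_prompt.shape)  # (4096,)
```

The projected vector can then be prepended to the question's token embeddings as a "soft prompt" before it is fed to the LLM.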

How to use

  • Activate conda environment

conda env create -f clip2gpt3_env.yaml

conda activate clip2gpt3_env

  • To train

python main.py --load_state_dict params/params.pth --mode train

  • To test

python main.py --load_state_dict params/params.pth --mode test
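The commands above imply a command-line interface roughly like the following. This is a hypothetical reconstruction for illustration; the repository's actual `main.py` may define its arguments differently.

```python
import argparse

# Hypothetical sketch of the CLI implied by the README's commands;
# not the repository's actual argument parser.
def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="CLIP2GPT3 entry point")
    parser.add_argument("--load_state_dict", type=str, default=None,
                        help="path to a saved checkpoint, e.g. params/params.pth")
    parser.add_argument("--mode", choices=["train", "test"], required=True,
                        help="run training or evaluation")
    return parser

args = build_parser().parse_args(
    ["--load_state_dict", "params/params.pth", "--mode", "train"])
print(args.mode)  # train
```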
