# COMP5212 Visual Entailment - GivBERT

Part of the code is modified from the GitHub repo vilbert-multi-task.

## Video Introduction

GivBERT

## Setup

1. Download the repository from GitHub

   ```bash
   git clone git@github.com:NoOneUST/COMP5212-Project-GivBERT.git
   cd COMP5212-Project-GivBERT
   ```

2. Install the requirements

   ```bash
   pip install -r requirements.txt
   ```

3. Install PyTorch, matching it to your CUDA version (see below)

If you want to run GivBERT, we recommend CUDA 10.2 and

```bash
pip install torch==1.5 torchvision==0.6
```

If you want to run VilBERT, use

```bash
conda install pytorch==1.4 torchvision cudatoolkit=10.1 -c pytorch
```
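Whichever build you install, a quick way to confirm the environment matches the recommendation is a minimal check like the following (not part of the repo):

```python
import torch

# Print the installed PyTorch build and the CUDA toolkit it was compiled against.
print("PyTorch version:", torch.__version__)       # expect ~1.5 for GivBERT, ~1.4 for VilBERT
print("Built against CUDA:", torch.version.cuda)   # expect 10.2 or 10.1 respectively
print("CUDA available:", torch.cuda.is_available())
```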

## Data Setup

To set up the data, either download the data provided by vilbert-multi-task, or download the pruned version prepared specifically for this project from Google Drive.

TBC
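In the meantime, a minimal sanity check might look like the sketch below; the local `data/` directory it looks for is an assumption about where the downloaded features will live, not something documented here.

```python
from pathlib import Path

# Assumed location of the downloaded features/annotations; adjust to the actual layout.
data_dir = Path("data")
if data_dir.is_dir():
    print("Found data directory with", sum(1 for _ in data_dir.rglob("*")), "entries")
else:
    print("No ./data directory yet - download the features first")
```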

## Model Setup

To proceed, you need the pre-trained VilBERT models from 12-in-1: Multi-Task Vision and Language Representation Learning. Put the models under the `model` folder. The download links are listed below:

| Model | Download |
| --- | --- |
| VilBERT | Download link |
| VilBERT-MT | Download link |
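Once downloaded, the checkpoints should sit directly under `model/`. A small check like the one below can confirm they are in place (the `.bin` extension is an assumption about how the released checkpoints are named):

```python
from pathlib import Path

# List anything that looks like a checkpoint under ./model.
model_dir = Path("model")
checkpoints = sorted(model_dir.glob("*.bin")) if model_dir.is_dir() else []
print("Checkpoints found:", [p.name for p in checkpoints] or "none - download VilBERT / VilBERT-MT first")
```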

## Working directory

- GivBERT: `cd ./GivBERT`
- VilBERT: `cd ./`

## Command lines for experiments

```bash
python main.py --bert_model bert-base-uncased --from_pretrained model/<model_name> --config_file config/bert_base_6layer_6conect.json --lr_scheduler 'warmup_linear' --train_iter_gap 4 --save_name <finetune_from_multi_task_model>
```
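Here `<model_name>` is the checkpoint file you placed under `model/` in the Model Setup step, and `<finetune_from_multi_task_model>` appears to set the name under which the fine-tuned model is saved; replace both placeholders with your own values.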