You should organize the files in the following structure.
MuPlon
├── data
├── lgnn
├── pretrained_models
├── path
├── data_load_utils.py
├── cvae_models.py
├── cvae_pretrain_small.py
├── models.py
├── LLM_Test.py
├── LLM_Test_two.py
├── fever.py
├── po2.py
├── po3.py
├── train.sh
├── llama_test.py
├── sladder.py
└── utils.py
conda create -n MuPlon python=3.9
conda activate MuPlon
pip install -r requirements.txt
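After creating the environment, it can help to confirm that a CUDA-capable PyTorch build is available before launching the training commands below. This is only a sanity check and assumes PyTorch is among the packages pinned in requirements.txt:

# Print the installed PyTorch version and whether a GPU is visible.
# Assumes PyTorch was installed via requirements.txt.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"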
fever.py: the FEVER dataset
po2.py: the PO2 dataset
po3.py: the PO3 dataset
sladder.py: the sladder dataset
llama_test.py: runs the LLaMA model on FEVER
LLM_Test.py and LLM_Test_two.py: OLLAMA test
path: saves the running path of the model
lgnn: local generation feature model
https://drive.google.com/drive/folders/1ORZ7SjvKvmKvmpzJDRb4OydiYYs0FqZ6?usp=drive_link
https://huggingface.co/meta-llama/Llama-2-7b-chat-hf
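The Llama-2-7b-chat-hf checkpoint is gated, so request access on Hugging Face and authenticate before downloading. One possible way to fetch the weights locally; the target directory pretrained_models/Llama-2-7b-chat-hf is only a suggestion, not a path the repository prescribes:

# Clone the model weights with Git LFS once access has been granted
# (you will be prompted for your Hugging Face credentials/token).
git lfs install
git clone https://huggingface.co/meta-llama/Llama-2-7b-chat-hf pretrained_models/Llama-2-7b-chat-hf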
CUDA_VISIBLE_DEVICES=0 python fever.py \
--seed 1234 \
--batch_size 16 \
--lr 2e-5 \
--epochs 20 \
--weight_decay 5e-4 \
--evi_num 5 \
--max_seq_length 128
CUDA_VISIBLE_DEVICES=0 python po3.py \
--seed 1234 \
--batch_size 4 \
--lr 1e-5 \
--epochs 20 \
--weight_decay 5e-4 \
--evi_num 20 \
--max_seq_length 128
CUDA_VISIBLE_DEVICES=0 python po2.py \
--seed 1234 \
--batch_size 4 \
--lr 1e-5 \
--epochs 20 \
--weight_decay 5e-4 \
--evi_num 20 \
--max_seq_length 128
CUDA_VISIBLE_DEVICES=0 python sladder.py \
--seed 1234 \
--batch_size 16 \
--lr 2e-5 \
--epochs 20 \
--weight_decay 5e-4 \
--evi_num 5 \
--max_seq_length 128
CUDA_VISIBLE_DEVICES=0 python llama_test.py \
--seed 1234 \
--batch_size 16 \
--lr 2e-5 \
--epochs 20 \
--weight_decay 5e-4 \
--evi_num 5 \
--max_seq_length 128
bash train.sh
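A minimal sketch of what train.sh could look like, assuming it simply chains the commands above; the actual flags, datasets, and order are whatever the repository's train.sh contains:

#!/usr/bin/env bash
# Hypothetical sketch of train.sh: run the experiments in sequence and
# stop on the first failure. The real script in the repository may differ.
set -e

CUDA_VISIBLE_DEVICES=0 python fever.py --seed 1234 --batch_size 16 --lr 2e-5 \
    --epochs 20 --weight_decay 5e-4 --evi_num 5 --max_seq_length 128

CUDA_VISIBLE_DEVICES=0 python po2.py --seed 1234 --batch_size 4 --lr 1e-5 \
    --epochs 20 --weight_decay 5e-4 --evi_num 20 --max_seq_length 128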