Annotation Download: Download the annotations of the evaluation datasets from csuhan/OneLLM_Eval and put them under `datasets/Eval`.
- Download COCO2014 Val and put it under `datasets/InstructionTuning/image/coco/val2014`.
- Fill in `pretrained_path` in `eval/image_cap_cococap.py` and run: `python eval/image_cap_cococap.py`.
- Install [pycocoevalcap](https://github.com/salaniz/pycocoevalcap).
- Evaluate with `eval/caption_eval.py`.
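The captioning script produces predictions that `eval/caption_eval.py` scores with pycocoevalcap. A minimal sketch of writing predictions in the standard COCO caption result format (a JSON list of `{"image_id", "caption"}` records); the prediction values here are hypothetical placeholders:

```python
import json

# Hypothetical model outputs: one generated caption per COCO image id.
predictions = {
    391895: "a man riding a motorcycle on a dirt road",
    522418: "a woman cutting a cake with a knife",
}

# pycocoevalcap's loadRes expects a JSON list of
# {"image_id": int, "caption": str} records.
results = [{"image_id": i, "caption": c} for i, c in predictions.items()]

with open("cococap_results.json", "w") as f:
    json.dump(results, f)
```

The resulting file can then be loaded against the COCO2014 Val annotations for metric computation.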
- Download MMVet from mm-vet.zip and put it under `datasets/Eval/image/mm-vet`.
- Fill in `pretrained_path` in `eval/image_bench_mmvet.py` and run: `python eval/image_bench_mmvet.py`.
- Submit the result file to the online eval server.
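The MMVet server grades a single JSON file mapping each question id to the model's answer string. A minimal sketch of assembling such a submission; the question ids and answers below are hypothetical, and the exact id scheme is an assumption to verify against the downloaded benchmark:

```python
import json

# Hypothetical answers keyed by MMVet question id.
answers = {
    "v1_0": "yes",
    "v1_1": "a red apple on a wooden table",
}

# One JSON object: question id -> answer string (format assumed here).
with open("onellm_mmvet.json", "w") as f:
    json.dump(answers, f, indent=2)
```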
- Download MSVD video clips from this link and put them under `datasets/Eval/video/MSVD/YouTubeClips`.
- Fill in `pretrained_path` in `eval/video_qa_msvd.py` and run: `python eval/video_qa_msvd.py`.
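Video QA on MSVD is typically scored by exact-match accuracy over short answers. A minimal sketch of that scoring, assuming case-insensitive matching of hypothetical (prediction, ground truth) pairs; the repo's script may normalize answers differently:

```python
# Hypothetical (prediction, ground-truth) answer pairs.
pairs = [
    ("Dancing", "dancing"),
    ("cat", "dog"),
    ("playing", "playing"),
]

def exact_match_accuracy(pairs):
    """Case-insensitive exact match over (pred, gt) answer pairs."""
    hits = sum(p.strip().lower() == g.strip().lower() for p, g in pairs)
    return hits / len(pairs)

acc = exact_match_accuracy(pairs)  # 2 of 3 pairs match
```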
- Download the Clothov2 evaluation set from this link and put it under `datasets/Eval/audio/clothov2/evaluation`.
- Fill in `pretrained_path` in `eval/audio_cap_clothov2.py` and run: `python eval/audio_cap_clothov2.py`.
- Evaluate with `eval/caption_eval.py`.
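Clotho ships its references as a CSV listing each audio file with five captions. A minimal sketch of collecting those references per file for caption scoring; the column names (`file_name`, `caption_1` … `caption_5`) and the tiny inline CSV are assumptions standing in for the real evaluation CSV:

```python
import csv
import io

# Tiny stand-in for the Clotho v2 evaluation CSV (column names assumed).
csv_text = """file_name,caption_1,caption_2,caption_3,caption_4,caption_5
rain.wav,rain falls,rain is falling,heavy rain,rain on a roof,it rains
"""

# Map each audio file to its five reference captions.
references = {}
for row in csv.DictReader(io.StringIO(csv_text)):
    references[row["file_name"]] = [row[f"caption_{i}"] for i in range(1, 6)]
```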
- Download PointLLM data from this link.
- Fill in `pretrained_path` in `eval/point_cap_pointllm.py` and run: `python eval/point_cap_pointllm.py`.
- Evaluate with `eval/caption_eval.py`. The annotation file is at `datasets/Eval/point/pointllm_test_cococap.json`.
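`eval/caption_eval.py` computes the full COCO caption metrics; before running it, a quick sanity check on generated captions can catch obviously broken outputs. A sketch of clipped unigram precision (an illustrative metric, not the repo's), with hypothetical candidate and reference strings:

```python
from collections import Counter

def unigram_precision(candidate, reference):
    """Clipped unigram precision of a candidate caption against one
    reference: shared-word count (clipped) over candidate length."""
    cand = candidate.lower().split()
    ref = Counter(reference.lower().split())
    clipped = sum(min(c, ref[w]) for w, c in Counter(cand).items())
    return clipped / len(cand)

p = unigram_precision(
    "a small wooden chair",
    "a wooden chair with four legs",
)  # 3 of 4 candidate words appear in the reference
```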
TODO
- Download Ego4D IMU data. Please refer to docs/Data.md.
- Fill in `IMU_PATH` and `pretrained_path` in `eval/imu_cap_ego4d.py` and run: `python eval/imu_cap_ego4d.py`.
- Evaluate with `eval/caption_eval.py`.
- Download NSD data. Please refer to docs/Data.md.
- Fill in `pretrained_path` in `eval/fmri_cap_nsd.py` and run: `python eval/fmri_cap_nsd.py`.
- Evaluate with `eval/caption_eval.py`.