# ruImageCaptioning


Inference Notebook: Hugging Face 🤗 Spaces

A Russian version of CLIP prefix captioning, trained with ruGPT-small + CLIP (OpenAI). It can be used for VQA, image captioning, and similar tasks. Inference takes under 1 second per image, and the model can be efficiently quantized or exported to ONNX. Training took about 3 days on 2×1080 Ti GPUs.
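Below is a minimal inference sketch of the CLIP-prefix approach described above. It is not the repository's own script: the ruGPT checkpoint name (`sberbank-ai/rugpt3small_based_on_gpt2`), the prefix length, the mapping-MLP shape, and the weights file `ruclipcap_mlp.pt` are all assumptions or placeholders; only the overall flow (CLIP image embedding → prefix embeddings → GPT generation) follows the ClipCap design.

```python
# Minimal sketch: CLIP prefix captioning inference (checkpoint names are placeholders).
import torch
import clip
from PIL import Image
from transformers import GPT2LMHeadModel, GPT2Tokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"

# CLIP image encoder (OpenAI ViT-B/32) and a Russian GPT decoder (assumed checkpoint).
clip_model, preprocess = clip.load("ViT-B/32", device=device)
tokenizer = GPT2Tokenizer.from_pretrained("sberbank-ai/rugpt3small_based_on_gpt2")
gpt = GPT2LMHeadModel.from_pretrained("sberbank-ai/rugpt3small_based_on_gpt2").to(device)

# Prefix-mapping MLP: CLIP embedding -> prefix_length GPT token embeddings.
prefix_length = 10  # assumption
mlp = torch.nn.Sequential(
    torch.nn.Linear(512, (512 * prefix_length) // 2),
    torch.nn.Tanh(),
    torch.nn.Linear((512 * prefix_length) // 2, gpt.config.n_embd * prefix_length),
).to(device)
# mlp.load_state_dict(torch.load("ruclipcap_mlp.pt"))  # placeholder weights file

@torch.no_grad()
def caption(image_path: str, max_new_tokens: int = 30) -> str:
    # Encode the image with CLIP and map it to a sequence of prefix embeddings.
    image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
    clip_emb = clip_model.encode_image(image).float()                   # (1, 512)
    prefix = mlp(clip_emb).view(1, prefix_length, gpt.config.n_embd)    # (1, L, d)

    # Greedy decoding: feed the prefix as input embeddings and append generated tokens.
    generated, tokens = prefix, None
    for _ in range(max_new_tokens):
        logits = gpt(inputs_embeds=generated).logits[:, -1, :]
        next_token = logits.argmax(dim=-1, keepdim=True)
        tokens = next_token if tokens is None else torch.cat([tokens, next_token], dim=1)
        generated = torch.cat([generated, gpt.transformer.wte(next_token)], dim=1)
    return tokenizer.decode(tokens[0], skip_special_tokens=True)
```

For CPU deployment, one standard option for the quantization mentioned above is dynamic quantization of the decoder's linear layers, e.g. `torch.quantization.quantize_dynamic(gpt, {torch.nn.Linear}, dtype=torch.qint8)`.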

Trained and validated on ruCOCO:

- BLEU: 37.3
- chrF: 32.4
- ROUGE-1-F: 33.0
- ROUGE-2-F: 14.1
- ROUGE-L-F: 30.3

Captioning example and zero-shot example.

This work is based on https://github.com/rmokady/CLIP_prefix_caption (the English version):

@article{mokady2021clipcap,
  title={ClipCap: CLIP Prefix for Image Captioning},
  author={Mokady, Ron and Hertz, Amir and Bermano, Amit H},
  journal={arXiv preprint arXiv:2111.09734},
  year={2021}
}
@article{AlexWortega,
  title={ruImage captioning},
  author={Aleksandr Nikolic and Asta gpu server},
}

## Acknowledgments

This repository is heavily based on the CLIP and Hugging Face repositories. For training we used the COCO dataset and Conceptual Captions, translated into Russian by ALEX WORTEGA (ruCOCO).