Thank you for your novel work on low-resource image captioning datasets.
However, I wonder why you do not provide checkpoints for the baselines you used and for your proposed method on MS-COCO and IU X-Ray, as well as your 0.1%, 0.5%, and 1% MS-COCO training subsets.
It also seems that this repo is only set up to train on the MS-COCO dataset; what about IU X-Ray? Did you modify https://github.com/cuhksz-nlp/R2Gen for those experiments, or use this repo directly?
I think the above points should be made clear.
Thank you very much.