Thank you so much for this repo! It has been a pleasure to work with.
I am setting up a chart captioning finetuning task. My dataset contains pairs of chart images and chart scenegraphs (textual representations of the chart spec). I also have ground truth natural language captions.
I have finetuned your pretrained VLT5 model on my data. It is generating informative captions, but the generated captions are much shorter than the ground truth captions. The ground truth captions are on average 450 characters, whereas the generated captions are on average 181 characters.
Would you expect VL-T5 to prefer short captions (i.e., because it was pretrained on short text)? Or do you suspect I have a parameter set incorrectly? I have set gen_max_length = 512 and max_text_length = 512.
In my experiments, most of the target text was quite short (< 20 tokens), so I don't have experience using VL-T5 to generate such long text. In theory, the model should learn the length distribution of the target data, but language models can degenerate for various reasons (e.g., when trained on a small dataset).
For your use case, how about controlling the min_length parameter in the generate() method?
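To illustrate the suggestion above, here is a minimal sketch of length control using the Hugging Face transformers generation API, which VL-T5's generate() is built on. The tiny randomly initialized T5 below exists only to make the example self-contained; in practice you would call generate() on your finetuned VL-T5 checkpoint, and the dimensions, token ids, and penalty values are illustrative assumptions, not tuned settings.

```python
import torch
from transformers import T5Config, T5ForConditionalGeneration

# Tiny random T5 stand-in so the snippet runs without downloading weights.
# Replace with your finetuned VL-T5 model in real use.
config = T5Config(
    vocab_size=100, d_model=32, d_kv=8, d_ff=64,
    num_layers=2, num_heads=2, decoder_start_token_id=0,
)
model = T5ForConditionalGeneration(config)
model.eval()

input_ids = torch.randint(0, 100, (1, 16))  # dummy encoder input

with torch.no_grad():
    out = model.generate(
        input_ids,
        max_length=120,      # upper bound, analogous to gen_max_length
        min_length=100,      # EOS is suppressed until this many tokens
        num_beams=4,
        length_penalty=1.5,  # > 1.0 biases beam search toward longer outputs
        early_stopping=True,
    )

print(out.shape[1])  # generated sequence length, at least min_length
```

Besides min_length, raising length_penalty above 1.0 is another lever worth trying, since beam search otherwise tends to favor shorter hypotheses; min_length is a hard floor, while length_penalty is a soft bias.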