
does it limit the length of outputs? #7

Open
nnakamura3 opened this issue Sep 1, 2020 · 1 comment

Comments

@nnakamura3

When decoding, I noticed that the output caption length is less than 16 tokens. Can I change this maximum caption length?

@simaoh
Collaborator

simaoh commented Jan 7, 2021

@nnakamura3 ,
The model's sample functions use the sequence length given by opt.seq_length, which, as you can see in the training script, is set during training to the maximum length of any sequence the dataloader finds in the data:
https://github.com/yahoo/object_relation_transformer/blob/master/dataloader.py#L60-L63

This makes sense during training, since no larger sentence will be found in the data.
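To illustrate the idea (names here are illustrative only, not the actual dataloader code), seq_length is simply the longest tokenized caption in the training data:

```python
# Hypothetical sketch of how a dataloader might derive seq_length:
# scan all tokenized captions and keep the maximum length.
captions = [
    ["a", "dog", "runs"],
    ["two", "people", "walking", "on", "a", "beach"],
]
seq_length = max(len(c) for c in captions)
print(seq_length)  # 6
```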

If you need to change the maximum length of a generated sequence at inference time for your application, you can do so by adding a seq_length parameter to your opts.py file.
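Assuming opts.py is built on argparse (a sketch, not the repository's exact option names), the override could look like this:

```python
import argparse

# Illustrative only: expose seq_length as a command-line flag so inference
# can override the value that was baked in during training.
parser = argparse.ArgumentParser()
parser.add_argument('--seq_length', type=int, default=16,
                    help='maximum number of tokens in a generated caption')

# Simulate passing the flag on the command line.
opt = parser.parse_args(['--seq_length', '30'])
print(opt.seq_length)  # 30
```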

Does this answer your question?
