
fix(gpt): set attention mask and address other warnings #114

Merged 1 commit into dev on Oct 26, 2024
Conversation

eginhard (Member)
Sets the attention mask in Tortoise and XTTS to avoid the following warning: "The attention mask is not set and cannot be inferred from input because pad token is same as eos token. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results."

Also addresses other warnings raised during XTTS inference, to stay compatible with upcoming changes in the transformers library.

Obsoletes #106
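For context, the warning fires because when the pad token id equals the eos token id, the model cannot tell trailing padding apart from a real end-of-sequence token, so the caller must supply an explicit attention mask. The following minimal sketch (hypothetical ids and helper, not the actual XTTS code) shows how such a mask is built from known sequence lengths:

```python
# Hedged sketch: constructing an explicit attention mask when the pad
# token id coincides with the eos token id, so padding positions cannot
# be inferred from token ids alone.
PAD_ID = EOS_ID = 0  # hypothetical ids; in this PR's models they coincide

def build_attention_mask(batch, lengths):
    """Return a 0/1 mask per sequence: 1 for real tokens, 0 for padding.

    `lengths` carries the true (unpadded) length of each sequence,
    because pad == eos makes the token ids themselves ambiguous.
    """
    return [
        [1 if i < length else 0 for i in range(len(seq))]
        for seq, length in zip(batch, lengths)
    ]

# Two sequences padded to length 5; the trailing EOS of the first
# sequence shares its id with the padding, so the mask must come from
# the recorded lengths rather than from scanning for PAD_ID.
batch = [
    [5, 6, 7, 0, 0],  # 3 tokens + EOS (id 0), then 1 pad (also id 0)
    [8, 9, 0, 0, 0],  # 2 tokens + EOS, then 2 pads
]
lengths = [4, 3]
mask = build_attention_mask(batch, lengths)
# mask == [[1, 1, 1, 1, 0], [1, 1, 1, 0, 0]]
```

In the real fix, a mask of this shape is passed as the `attention_mask` argument during generation so the model attends only to real tokens.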

@eginhard eginhard merged commit 88de5c4 into dev Oct 26, 2024
49 checks passed
@eginhard eginhard deleted the gpt-warnings branch October 26, 2024 14:47