[feat] Request response refusal validator #1189
Comments
Ooh. That's a neat idea. It's not on the current sprint listing, but I'd like to take a swing at it.
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 14 days.
This is not stale.
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 14 days.
This is a good feature to have. Azure OpenAI can detect response refusals.
Description
Request a validator that determines whether an LLM refuses a prompt by generating an output that starts with phrases such as "I cannot", "I can't", and "It is illegal".
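A minimal sketch of the prefix-matching idea described above (the phrase list and function name are illustrative assumptions, not part of the request):

```python
# Illustrative refusal-prefix list; a real validator would likely need a
# larger, configurable set of phrases.
REFUSAL_PREFIXES = (
    "i cannot",
    "i can't",
    "i can not",
    "it is illegal",
    "it is not legal",
)

def is_refusal(response: str) -> bool:
    """Return True if the response begins with a known refusal phrase."""
    # Normalize whitespace and case before comparing prefixes.
    return response.strip().lower().startswith(REFUSAL_PREFIXES)
```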
Why is this needed
If the response is a refusal, it should not be returned to the client application for display. The validator should throw a validation error that the application can handle appropriately.
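One way the throw-on-refusal behavior could look, sketched with a hypothetical exception type and prefix check (all names here are assumptions for illustration):

```python
class RefusalError(Exception):
    """Raised when an LLM output is detected as a refusal."""

def validate_response(response: str) -> str:
    """Return the response unchanged, or raise RefusalError if it is a refusal."""
    refusal_prefixes = ("i cannot", "i can't", "it is illegal")
    if response.strip().lower().startswith(refusal_prefixes):
        raise RefusalError("LLM refused the prompt; output withheld from client.")
    return response
```

The client application would catch the error and, for example, retry the prompt or show a fallback message instead of displaying the refusal text.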
Implementation details
I suppose Hugging Face provides models suitable for response-refusal checking.
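A model-based check could sit behind a simple scoring interface; the wrapper below is a sketch, where `classify` stands in for, say, a Hugging Face text-classification pipeline adapted to return a refusal probability (no specific model is named in the issue, so none is named here):

```python
from typing import Callable

def make_refusal_validator(
    classify: Callable[[str], float], threshold: float = 0.5
) -> Callable[[str], bool]:
    """Build a refusal check from any scoring function.

    `classify` should return the probability that the text is a refusal;
    a transformers text-classification pipeline could be adapted to this
    signature.
    """
    def is_refusal(response: str) -> bool:
        return classify(response) >= threshold
    return is_refusal
```

Keeping the model behind a callable means a simple prefix heuristic and a learned classifier can be swapped without changing the validator's interface.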
End result
After an LLM generates text, the validator checks whether the response is a refusal, as indicated by (but not limited to) phrases such as "I can not", "I can't", "It is not legal", etc.