I want to use Self-Refine for a reasoning task, such as open-book QA.
For the few-shot examples used in the initial generation: do the examples have to be bad examples?
If I have good examples, could I use them for the initial stage and hope that, through iterations, the output gets even better?
However, if I were to use already-good examples, it might be tough to come up with even better ones for the few-shot examples in the refine stage.
I think it depends on the task, and on how much "prior" knowledge you expect the users to have.
For example, in tasks like dialog response generation and code optimization, we provide a prompt containing all the information. Other tasks, like code readability, use only an instruction.
In general, it is possible that a better-engineered prompt will lead to a better initial output. But realistically, who wants to do endless prompt engineering? Isn't it better to start with something lightweight/minimal and let the model refine the outputs? 😄 That is the key idea behind Self-Refine.
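To make the loop concrete, here is a minimal sketch of the generate → feedback → refine cycle applied to open-book QA. `call_llm`, `self_refine`, and the prompt wording are hypothetical placeholders for illustration, not the repo's actual API; in practice, each step would typically carry its own few-shot examples.

```python
# Minimal sketch of the Self-Refine loop for open-book QA.
# `call_llm` is a hypothetical helper: plug in your own LLM client.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this up to your model of choice")

def self_refine(question: str, context: str, max_iters: int = 3) -> str:
    # Initial generation: a lightweight prompt is enough; good few-shot
    # examples are fine here -- they do not have to be deliberately bad.
    answer = call_llm(
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    for _ in range(max_iters):
        # Feedback step: ask the model to critique its own answer.
        feedback = call_llm(
            f"Context:\n{context}\n\nQuestion: {question}\n"
            f"Answer: {answer}\n"
            "Point out any factual errors or gaps in this answer. "
            "If it is already correct and complete, reply 'STOP'."
        )
        if "STOP" in feedback:
            break
        # Refine step: rewrite the answer using the feedback.
        answer = call_llm(
            f"Context:\n{context}\n\nQuestion: {question}\n"
            f"Previous answer: {answer}\nFeedback: {feedback}\n"
            "Rewrite the answer, fixing the issues above:"
        )
    return answer
```

Note that the few-shot examples for the refine stage do not need answers "better than good": they only need to demonstrate the act of taking feedback and producing a revision, so pairing a flawed draft with its corrected version works well.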