
Commit

typo
GreenWizard2015 committed Sep 29, 2023
1 parent 36a2b69 commit 2a7ba3d
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion README.md
@@ -57,7 +57,7 @@ Here are the main lessons I've learned from my first experience with the ChatGPT API

1. **One Prompt, One Task:** When using ChatGPT, it's best to stick to one task per prompt. While it may be tempting to create a branching prompt with different outcomes, this approach can lead to instability in the responses. It's generally more reliable to keep each prompt focused on a single task or question.
2. **Custom Response Format:** While many recommend enforcing ChatGPT responses in JSON format, I found this approach to be somewhat cumbersome. Instead, I developed my own response format that takes into account the nuances of how the language model works. Simplicity and clarity in the response format can make working with ChatGPT more straightforward.
- 3. **Flags:** To gauge the complexity of responses, I moved away from using a simple rating scale and instead began detecting elements like sarcasm, humor, or complex topics. The model responds with "Yes" or "No" to indicate the presence of these elements, and I count the number of "Yes" answers to determine if a more complex reply is needed. This approach proved to be both simple and stable. In general, it's best to **keep as much of the logic as possible on the client side, rather than relying on the LLM response**. See [this template](https://github.com/GreenWizard2015/AIEnhancedTranslator/blob/fd7bdd567100f09050ac13431032e682db0a92be/data/translate_shallow.txt) for more details.
+ 3. **Flags:** To gauge the quality of translation, I moved away from using a simple rating scale and instead began detecting elements like sarcasm, humor, or complex topics. The model responds with "Yes" or "No" to indicate the presence of these elements, and I count the number of "Yes" answers to determine if a more complex reply is needed. This approach proved to be both simple and stable. In general, it's best to **keep as much of the logic as possible on the client side, rather than relying on the LLM response**. See [this template](https://github.com/GreenWizard2015/AIEnhancedTranslator/blob/fd7bdd567100f09050ac13431032e682db0a92be/data/translate_shallow.txt) for more details.
4. **Explicit Requests:** Sometimes, it's not enough to ask for something in a general way, like "issues with ...: {list}". To get more precise responses, it can be helpful to request a list, specify the minimum number of elements, and explicitly begin the list while describing the meaning of its elements. Providing clear context in your prompts can lead to more accurate and relevant responses. See [this template](https://github.com/GreenWizard2015/AIEnhancedTranslator/blob/fd7bdd567100f09050ac13431032e682db0a92be/data/translate_deep.txt#L8-L9) for more details.
5. **Translation of the notifications:** It's interesting that you can request a translation of some UI messages directly in the prompt. This is extremely unusual for me as a programmer. See [this template](https://github.com/GreenWizard2015/AIEnhancedTranslator/blob/e1c0975202e926e339ee10766810f26d710a2f4a/prompts/translate_shallow.txt#L14) for more details. Ultimately, I chose not to pursue this approach, because it reduces the stability of the system. But it's a really interesting idea, in my opinion.
6. **Prompt optimization:** After receiving the first results, I started optimizing the size and stability of the prompts. AI doesn't care about grammar and spelling, so we can shorten the prompt to the minimum necessary for stable text generation. This improves the stability of the output and reduces the cost of requests. However, I haven't shortened the critically important parts of the prompt. By the way, basic optimization can be done quite well with ChatGPT. I assume that the process of refining prompts can be automated without significant costs.
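The client-side flag counting described in point 3 can be sketched as follows. This is a minimal illustration, not the project's actual code: `ask_llm`, the flag questions, and the threshold are all hypothetical stand-ins for whatever API wrapper and prompts are actually used.

```python
# Hypothetical stand-ins: the real project uses its own prompt templates
# and API client; only the counting idea is taken from the text above.
FLAG_QUESTIONS = [
    "Does the text contain sarcasm?",
    "Does the text contain humor?",
    "Does the text touch on complex topics?",
]

def needs_deep_translation(text, ask_llm, threshold=1):
    """Ask the model one Yes/No question per flag and count the 'Yes' answers.

    The decision of whether to run the more expensive "deep" prompt is made
    here, on the client side, rather than trusting the LLM to decide.
    """
    yes_count = 0
    for question in FLAG_QUESTIONS:
        answer = ask_llm(f"{question}\nText: {text}\nAnswer Yes or No.")
        if answer.strip().lower().startswith("yes"):
            yes_count += 1
    return yes_count > threshold
```

Because each question is a plain Yes/No, a single malformed answer just fails the `startswith("yes")` check instead of breaking a parser, which is what makes this more stable than asking the model for a numeric rating.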
