I'm probably overthinking this since the current models have much higher rate limits, but would it be worth adding exponential backoff here so that Mentat doesn't stop automatically when it hits an API rate limit from the LLM in use?
The tenacity Python library, recommended by OpenAI in one of their cookbooks, makes this easy to implement. I wanted to know whether it actually solves the problem here, and what you all think.
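For reference, here is a minimal sketch of what that could look like, roughly following the OpenAI cookbook pattern. The client setup and the wrapper name `call_llm_with_backoff` are just illustrative, not Mentat's actual code path:

```python
# Sketch of tenacity-based exponential backoff around an LLM call.
# Assumptions: openai>=1.0 client; the wrapper name is hypothetical.
from openai import OpenAI, RateLimitError
from tenacity import (
    retry,
    retry_if_exception_type,
    stop_after_attempt,
    wait_random_exponential,
)

client = OpenAI()

@retry(
    retry=retry_if_exception_type(RateLimitError),  # only retry on rate limits
    wait=wait_random_exponential(min=1, max=60),    # jittered exponential backoff
    stop=stop_after_attempt(6),                     # give up after 6 attempts
)
def call_llm_with_backoff(**kwargs):
    return client.chat.completions.create(**kwargs)

response = call_llm_with_backoff(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```

Limiting the retry to `RateLimitError` keeps genuine failures (bad requests, auth errors) from being silently retried, and the jittered wait avoids hammering the API in lockstep if several requests hit the limit at once.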