I might work on this myself, but I'm highlighting the discrepancy between building the prompt and making the standard API call with all messages as an array in the payload.
In each test, I use the same set of messages.
ChatGPTClient
initial message prompt tokens: 58
followup prompt tokens: 82
2nd followup prompt tokens: 114
final: 179
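For context, the buildPrompt approach flattens the conversation into one prompt string, and the label/separator overhead is where the extra prompt tokens come from. A rough sketch of that style (the labels and separators here are illustrative assumptions, not the client's exact format):

```javascript
// Hypothetical sketch of a buildPrompt-style approach: the whole
// conversation is flattened into a single string for the completion call.
// Labels and separators are assumptions for illustration only.
function buildPromptString(messages) {
    const lines = messages.map(
        (m) => `${m.role === 'user' ? 'User' : 'ChatGPT'}:\n${m.content}`,
    );
    // A trailing label cues the model to respond as the assistant.
    return `${lines.join('\n\n')}\n\nChatGPT:\n`;
}

const prompt = buildPromptString([
    { role: 'user', content: 'Hello' },
    { role: 'assistant', content: 'Hi! How can I help?' },
]);
```

Every turn pays for its label and separators on every request, which is consistent with the gap in the numbers above.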
Raw API Calls
initial message prompt tokens: 9
followup prompt tokens: 32
2nd followup prompt tokens: 63
final: 127
The results I get are very comparable, and I could save tokens this way.
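To be concrete about what I mean by the raw API call: the conversation is sent as a messages array in the request payload instead of a single built-up prompt string. A minimal sketch (model name and message contents are placeholders):

```javascript
// Sketch: build the payload for the standard Chat Completions call,
// sending the conversation as a messages array rather than one prompt string.
// The model name and message contents are placeholders.
function buildChatPayload(messages) {
    return {
        model: 'gpt-3.5-turbo',
        messages, // e.g. [{ role: 'user', content: 'Hello' }, ...]
    };
}

const payload = buildChatPayload([
    { role: 'user', content: 'Hello' },
    { role: 'assistant', content: 'Hi! How can I help?' },
    { role: 'user', content: 'Tell me a joke.' },
]);
// The payload would then be POSTed to /v1/chat/completions.
```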
With System Message
Note that the current buildPrompt method is more useful when you want a system message, as it's comparable in token usage, and it often nets better results for gpt-3.5.
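For the system-message case, the raw call just prepends a system-role message to the array, which is what the token counts below compare against. A sketch (the system prompt text is a placeholder):

```javascript
// Sketch: the raw messages-array call with a system message prepended.
// The system prompt text here is a placeholder.
function buildChatPayloadWithSystem(systemPrompt, messages) {
    return {
        model: 'gpt-3.5-turbo',
        messages: [{ role: 'system', content: systemPrompt }, ...messages],
    };
}

const sysPayload = buildChatPayloadWithSystem('You are a helpful assistant.', [
    { role: 'user', content: 'Hello' },
]);
```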
ChatGPTClient
initial message prompt tokens: 47
followup prompt tokens: 111
2nd followup prompt tokens: 194
final: 280
Raw API Calls
initial message prompt tokens: 34
followup prompt tokens: 98
2nd followup prompt tokens: 181
final: 267