Describe the bug
The Semantic Kernel package in C# inflates usage tokens by roughly 20% for my prompts because it escapes Unicode characters and URL-encodes the string in the `content` field of the `messages` array.
I tested this on a prompt of 49k input tokens (per the usage metadata returned by the Azure OpenAI API); when the same prompt is sent through Semantic Kernel, it counts as 63k input tokens.
The difference in the prompts is easy to see when they contain JSON.
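A minimal sketch of the escaping behavior described above, using plain `System.Text.Json` (this is an illustration of how the default encoder inflates a JSON payload, not a claim about Semantic Kernel's internal serializer; the `UnsafeRelaxedJsonEscaping` encoder is shown only for comparison):

```csharp
using System;
using System.Text.Encodings.Web;
using System.Text.Json;

class EscapingDemo
{
    static void Main()
    {
        // Sample message content containing non-ASCII text and embedded JSON.
        string content = "Résumé – 日本語 {\"key\": \"value\"}";

        // Default options escape non-ASCII and HTML-sensitive characters
        // as \uXXXX sequences, so the serialized payload grows.
        string escaped = JsonSerializer.Serialize(content);

        // A relaxed encoder emits the characters literally (quotes and
        // backslashes are still escaped, as JSON requires).
        var relaxed = new JsonSerializerOptions
        {
            Encoder = JavaScriptEncoder.UnsafeRelaxedJsonEscaping
        };
        string literal = JsonSerializer.Serialize(content, relaxed);

        Console.WriteLine($"default encoder: {escaped.Length} chars");
        Console.WriteLine($"relaxed encoder: {literal.Length} chars");
    }
}
```

With the default encoder every non-ASCII character becomes a six-character `\uXXXX` sequence, which is what the tokenizer then sees, so the tokenized prompt is larger than the original text.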
Platform
OS: Windows
IDE: Visual Studio
Language: C#
Source: NuGet package version 1.15
github-actions bot changed the title to ".Net: Bug: Semantic Kernel inflates token usage of input prompt by 20% due to Unicode escaping and URL encoding of the prompt" on Jun 26, 2024
And it's not just the token consumption: it also costs more for the LLM to process the prompt. When the model receives encoded data, its responses come back encoded as well, which should not happen. In my case this occurs with Gemini.