In `LLM` there is a function called `tokensFromMessages`. The current default implementation uses the model's encoding (from `ModelType`) to compute the token count locally.

Problem: as far as I know, the encoding is not made publicly available by Google, so we have to make an API call to GCP instead (https://cloud.google.com/vertex-ai/docs/generative-ai/get-token-count). A sketch of such a call follows.
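A rough sketch of what that call could look like, using the plain JDK HTTP client. The project ID, region, and model name are placeholders, and the request/response shape follows the docs linked above at the time of writing, so treat this as an assumption rather than the final implementation:

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Hedged sketch of the Vertex AI countTokens call; PROJECT_ID, the region,
// and the model name are placeholders.
fun countTokensViaGcp(prompt: String, accessToken: String): String {
    val url = "https://us-central1-aiplatform.googleapis.com/v1/" +
        "projects/PROJECT_ID/locations/us-central1/" +
        "publishers/google/models/text-bison:countTokens"
    // NOTE: prompt is not JSON-escaped here, for brevity only.
    val body = """{"instances": [{"prompt": "$prompt"}]}"""
    val request = HttpRequest.newBuilder()
        .uri(URI.create(url))
        .header("Authorization", "Bearer $accessToken")
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build()
    val response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
    // The response JSON carries the count (e.g. a totalTokens field).
    return response.body()
}
```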
TODO: the default implementation of `tokensFromMessages` has to be removed and replaced by provider-specific implementations (for OpenAI based on the encoding, and for GCP on an external API call), along the lines of the sketch below.

depends on #393
depends on #405
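A minimal Kotlin sketch of what the provider-specific split could look like. Only `LLM`, `tokensFromMessages`, and `ModelType` come from this issue; `Message`, `Encoding`, `VertexAIClient`, and the class names are hypothetical stand-ins for illustration:

```kotlin
// Illustrative stand-ins; the real types in the codebase will differ.
data class Message(val role: String, val content: String)
interface ModelType
interface Encoding { fun countTokens(text: String): Int }
interface VertexAIClient { suspend fun countTokens(text: String): Int }

interface LLM {
    val modelType: ModelType

    // No default implementation any more: each provider supplies its own.
    suspend fun tokensFromMessages(messages: List<Message>): Int
}

class OpenAIChat(
    override val modelType: ModelType,
    private val encoding: Encoding // OpenAI encodings are public, so count locally
) : LLM {
    override suspend fun tokensFromMessages(messages: List<Message>): Int =
        messages.sumOf { encoding.countTokens(it.content) }
}

class GcpChat(
    override val modelType: ModelType,
    private val client: VertexAIClient // wraps the countTokens endpoint above
) : LLM {
    // Google does not publish the encoding, so delegate to the Vertex AI API.
    override suspend fun tokensFromMessages(messages: List<Message>): Int =
        client.countTokens(messages.joinToString("\n") { it.content })
}
```

Dropping the default from the interface makes forgetting a provider a compile-time error rather than a silently wrong local count.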