Currently, when setting `max_tokens` for a conversation buffer memory within langchain, we use simple string matching on the model name to pick the token limit: 29,000 if the model is a gpt-4 model, and 100,000 if it is one of the preview models (which have 128k context).
It would be nicer to have some sort of `get_max_conversation_tokens` helper that returns the correct bound for a conversation buffer memory given the model name, something like the sketch below.
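
A minimal sketch of what such a helper could look like, reusing the two limits described above. The function name comes from this issue; the exact matching rules (substring check for "preview", prefix check for "gpt-4") and the fallback value are illustrative assumptions, not an existing API:

```python
def get_max_conversation_tokens(model_name: str) -> int:
    """Return a max token bound for a conversation buffer memory,
    based on the model name (matching rules are illustrative)."""
    if "preview" in model_name:
        # 128k-context preview models get the larger limit.
        return 100_000
    if model_name.startswith("gpt-4"):
        return 29_000
    # Conservative fallback for unrecognized models (assumption).
    return 29_000
```

The memory could then be constructed with something like `max_token_limit=get_max_conversation_tokens(model_name)` instead of the inline string selection, keeping all model-specific limits in one place.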