Add a question caching feature. Pseudocode based on another repo:
```python
def chat_completion(user_input):
    # Generate embeddings from the user input
    # (openai_client, dbsource, and the helper functions are assumed to be
    # defined elsewhere in the repo)
    user_embeddings = generate_embeddings(openai_client, user_input)

    # Query the cache first to see if this question has been asked before
    cache_results = cache_search(vectors=user_embeddings, num_results=1)
    if len(cache_results) > 0:
        # Cache hit: return the stored completion; True flags a cached answer
        return cache_results[0]['completion'], True

    # Cache miss: perform a vector search over the source documents
    search_results = get_similar_docs(openai_client, dbsource, user_input, 3)

    print("Getting Chat History\n")
    # Retrieve recent chat history for conversational context
    chat_history = get_chat_history(1)

    # Generate the completion
    print("Generating completions\n")
    completions_results = generate_completion(user_input, search_results, chat_history)

    # Cache the response so repeat questions can skip the LLM call
    cache_response(user_input, user_embeddings, completions_results)

    # Return the generated LLM completion; False flags a cache miss
    return completions_results['choices'][0]['message']['content'], False
```
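The helpers above (`generate_embeddings`, `cache_search`, `cache_response`, `get_similar_docs`, `get_chat_history`) are assumed to come from the other repo. For reference, here is a minimal sketch of what the two cache helpers might look like, using an in-memory list as the store and cosine similarity for matching; the entry fields, the 0.99 threshold, and the helper names are illustrative assumptions, not the actual repo's API:

```python
import numpy as np

# Hypothetical in-memory cache: each entry holds the question, its
# embedding vector, and the completion text that was returned for it.
_question_cache = []

SIMILARITY_THRESHOLD = 0.99  # assumed cutoff for "same question"

def cache_search(vectors, num_results=1):
    """Return up to num_results cached entries whose stored embedding is
    close enough (by cosine similarity) to the query embedding."""
    query = np.asarray(vectors, dtype=float)
    scored = []
    for entry in _question_cache:
        stored = np.asarray(entry["vector"], dtype=float)
        sim = float(np.dot(query, stored) /
                    (np.linalg.norm(query) * np.linalg.norm(stored)))
        if sim >= SIMILARITY_THRESHOLD:
            scored.append((sim, entry))
    # Best match first
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [entry for _, entry in scored[:num_results]]

def cache_response(user_input, user_embeddings, completions_results):
    """Store the question, its embedding, and the generated completion."""
    _question_cache.append({
        "prompt": user_input,
        "vector": user_embeddings,
        "completion": completions_results["choices"][0]["message"]["content"],
    })
```

In a real implementation the cache would presumably live in the same database as the documents so hits survive restarts, and the similarity threshold controls how loosely "the same question" is interpreted: lower values return cached answers for more paraphrases at the risk of wrong matches.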