
A wrong answer from Cache record #385

Answered by SimFG
terryweijian asked this question in Q&A

@terryweijian There is currently no way to control the weight of individual keywords in the vector similarity calculation. What you can do is skip the cache lookup when you think the cached answer doesn't meet your requirements; the LLM result from that call is still saved to the cache, so the next time you ask the same question you will get the accurate answer.

cache_skip param usage:

from gptcache.adapter import openai  # GPTCache's OpenAI adapter

openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "what's github"}],
    cache_skip=True,  # bypass the cache lookup, but still store the result
)
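To illustrate the mechanism, here is a minimal sketch of the skip-then-store flow in plain Python. `SimpleCache` and `ask` are hypothetical stand-ins, not the GPTCache API: with cache_skip the lookup is bypassed, but the fresh LLM answer is still written back, overwriting the bad entry.

```python
class SimpleCache:
    """Toy exact-match cache standing in for the real semantic cache."""

    def __init__(self):
        self._store = {}

    def get(self, question):
        return self._store.get(question)

    def put(self, question, answer):
        self._store[question] = answer


def ask(cache, question, llm, cache_skip=False):
    if not cache_skip:
        cached = cache.get(question)
        if cached is not None:
            return cached
    answer = llm(question)
    cache.put(question, answer)  # store even when the lookup was skipped
    return answer


cache = SimpleCache()
cache.put("what's github", "a stale or wrong cached answer")

# cache_skip=True bypasses the bad cached entry and refreshes it
fresh = ask(cache, "what's github",
            lambda q: "GitHub is a code hosting platform",
            cache_skip=True)

# The next call (without cache_skip) now hits the corrected entry
again = ask(cache, "what's github", lambda q: "llm should not be called")
```

After the skipped lookup, both `fresh` and `again` return the corrected answer, which is exactly the behavior described in the reply above.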

Answer selected by SimFG