
[Feature]: Reduce the LLM request count to lower the time cost #83

Closed

SimFG opened this issue May 23, 2023 · 2 comments

SimFG commented May 23, 2023

Is your feature request related to a problem? Please describe.

Awesome project!!! Writing duplicate or similar SQL could take much less time. LLM reasoning usually takes a long time, and processing Chinese takes even longer than English. There may be ways to reduce the response time and, at the same time, reduce the load on compute resources. I believe this would make the project more attractive.

Describe the solution you'd like

GPTCache is a semantic cache library for LLMs, and it's fully integrated with LangChain and llama_index. If you run into any problems using it, I'm happy to help.
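For reference, here is a minimal sketch based on the GPTCache quick-start showing how it can wrap OpenAI calls so that repeated or semantically similar prompts are served from the cache; the model name and the example prompts below are just placeholders:

```python
# Minimal sketch based on the GPTCache quick-start.
# init_similar_cache() enables semantic (embedding-based) matching with
# default components, so similar prompts can hit the cache.
# Assumes the OPENAI_API_KEY environment variable is set.
from gptcache.adapter import openai          # drop-in replacement for the openai module
from gptcache.adapter.api import init_similar_cache

init_similar_cache()

def ask(question: str) -> str:
    # The first call goes to the LLM; repeated or similar questions are
    # answered from the cache, cutting request count and latency.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": question}],
    )
    return response["choices"][0]["message"]["content"]

print(ask("Write a SQL query that lists all users created in the last 7 days."))
print(ask("Give me SQL for users created within the past week."))  # likely a cache hit
```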

csunny (Collaborator) commented May 24, 2023

It's cool! We will use it and look forward to some interesting things happening. Stay in touch...

csunny (Collaborator) commented Nov 24, 2023

Thanks again, we already support caching; see #803.

csunny closed this as completed Nov 24, 2023