
feat(model): Support model cache and new Agentic Workflow Expression Language(AWEL) #803

Merged 4 commits into eosphoros-ai:main on Nov 17, 2023

Conversation

@fangyinc (Collaborator) commented on Nov 16, 2023

Model cache

  • Cache model output in memory.
  • Cache model output in local RocksDB (see the sketch after this list).
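
As a rough sketch of the idea (the names here are hypothetical, not the classes added in this PR): the cache keys a model output by the full request, so identical requests hit the cache and any change to the prompt or parameters misses it.

```python
# Hypothetical sketch of a model-output cache; names do not mirror DB-GPT's API.
import hashlib
import json
from typing import Dict, Optional


def cache_key(model: str, prompt: str, params: Dict) -> str:
    # Hash the whole request so the key is stable and parameter-sensitive.
    payload = json.dumps(
        {"model": model, "prompt": prompt, "params": params}, sort_keys=True
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()


class MemoryCache:
    """Simple in-memory store; the disk variant would persist via RocksDB."""

    def __init__(self) -> None:
        self._data: Dict[str, str] = {}

    def get(self, key: str) -> Optional[str]:
        return self._data.get(key)

    def set(self, key: str, value: str) -> None:
        self._data[key] = value
```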

Agentic Workflow Expression Language (AWEL)

  • Common workflow operators
  • Implement LocalWorkflowRunner
  • Integrate model serving and the model result cache with AWEL in base_chat.py (see the sketch after this list)
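
As a rough, hypothetical illustration of the pattern (the real operator classes live in this PR's diff; these names are made up for the sketch): small async operators are chained with `>>` and executed in-process by a local runner.

```python
# Hypothetical AWEL-style sketch: chained operators run by a local runner.
# Names are illustrative only; see the PR diff for the real classes.
import asyncio
from typing import Any, Awaitable, Callable, Optional


class MapOperator:
    """Wraps an async function; `a >> b` feeds a's output into b."""

    def __init__(self, fn: Callable[[Any], Awaitable[Any]]) -> None:
        self.fn = fn
        self.next: Optional["MapOperator"] = None

    def __rshift__(self, other: "MapOperator") -> "MapOperator":
        self.next = other
        return other


class LocalWorkflowRunner:
    """Executes a chain of operators in the local process."""

    async def execute(self, head: MapOperator, value: Any) -> Any:
        node: Optional[MapOperator] = head
        while node is not None:
            value = await node.fn(value)
            node = node.next
        return value


async def build_prompt(question: str) -> str:
    return f"Question: {question}"


async def call_model(prompt: str) -> str:
    return f"[model output for: {prompt}]"  # stand-in for real model serving


async def main() -> None:
    head = MapOperator(build_prompt)
    head >> MapOperator(call_model)
    print(await LocalWorkflowRunner().execute(head, "What is AWEL?"))


asyncio.run(main())
```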

Note:

DB-GPT uses a local disk cache by default. You need to install the related dependencies with:

pip install -e ".[cache]"

If the disk cache dependencies are not installed, DB-GPT will use the in-memory cache instead.
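
That fallback can be pictured like this (a sketch only; that the `cache` extra installs the dict-like `rocksdict` binding is an assumption, so adjust to whatever your install provides):

```python
# Sketch of the fallback described above (assumes the `cache` extra installs
# the dict-like `rocksdict` binding; adjust if your install differs).
try:
    from rocksdict import Rdict
    _HAS_DISK_CACHE = True
except ImportError:
    _HAS_DISK_CACHE = False


def create_cache(path: str = "./model_cache"):
    """Prefer the persistent disk cache; fall back to a plain dict in memory."""
    if _HAS_DISK_CACHE:
        return Rdict(path)  # persists model outputs across restarts
    return {}  # in-memory fallback when disk-cache deps are missing
```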

Closes #788

@github-actions bot added the enhancement (New feature or request) and model (Module: model) labels on Nov 16, 2023
@Aries-ckt (Collaborator) left a comment:

neat code!

@Aries-ckt merged commit 1240352 into eosphoros-ai:main on Nov 17, 2023
3 checks passed
Development

Successfully merging this pull request may close these issues:

  • [Feature][cache] LLM cache support