docs: optimize model deployment layout
junewgl committed Nov 27, 2023
1 parent 346e0b6 commit bca701e
Showing 1 changed file with 16 additions and 4 deletions.
20 changes: 16 additions & 4 deletions docs/docs/quickstart.md
@@ -1,7 +1,6 @@
---
sidebar_position: 0
---

# Quickstart
DB-GPT supports the installation and use of a variety of open-source and closed-source models. Different models have different environment and resource requirements: deploying a model locally requires GPU resources, while the API proxy mode needs relatively few resources and can be deployed and started on a CPU-only machine.
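
If you are unsure which mode fits your machine, a quick check such as the following (a convenience sketch, not part of the official setup steps) shows whether an NVIDIA GPU is visible:

```bash
# Detect an NVIDIA GPU; without one, the API proxy mode is the lighter option
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=name,memory.total --format=csv
else
  echo "No NVIDIA GPU detected; consider the OpenAI proxy deployment"
fi
```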

@@ -56,8 +55,18 @@ cp .env.template .env
Two deployment methods are provided so you can quickly start experiencing DB-GPT.

:::
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

<Tabs
defaultValue="openai"
values={[
{label: 'OpenAI', value: 'openai'},
{label: 'Vicuna', value: 'vicuna'},
]}>

<TabItem value="openai" label="openai">

### Method 1. OpenAI agent mode deployment
:::info note

⚠️ You need to ensure that git-lfs is installed
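
For example, you can verify the installation (and install it if missing) roughly like this; the package manager commands are assumptions that depend on your OS:

```bash
# Check whether git-lfs is already available
git lfs version || sudo apt-get install -y git-lfs   # Debian/Ubuntu; on macOS use `brew install git-lfs`
# Enable the Git LFS hooks for your user account
git lfs install
```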
@@ -90,9 +99,9 @@ LLM_MODEL=chatgpt_proxyllm
PROXY_API_KEY={your-openai-sk}
PROXY_SERVER_URL=https://api.openai.com/v1/chat/completions
```
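
Optionally, before starting DB-GPT, you can sanity-check the key and endpoint configured above with a direct request. This is a minimal sketch, assuming your key is exported in the shell as `PROXY_API_KEY`; it is not a required step:

```bash
# Send a one-off request to the configured proxy endpoint
curl -s https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${PROXY_API_KEY}" \
  -d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "ping"}]}'
```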
</TabItem>


### Method 2. Vicuna local deployment
<TabItem value="vicuna" label="vicuna">

#### Hardware requirements description
| Model | Quantize | VRAM Size |
@@ -122,6 +131,9 @@ git clone https://huggingface.co/lmsys/vicuna-13b-v1.5
# .env
LLM_MODEL=vicuna-13b-v1.5
```
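
Before starting the local model, it can help to confirm that your GPU actually has the VRAM listed in the hardware requirements table above (an optional check, not part of the original steps):

```bash
# Show total and currently free VRAM for each visible GPU
nvidia-smi --query-gpu=name,memory.total,memory.free --format=csv
```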
</TabItem>

</Tabs>


## Test data (optional)
