
Commit

Merge branch 'v0.1.0' of github.com:kuwaai/genai-os into v0.1.0
taifu9920 committed Apr 8, 2024
2 parents 420bdd3 + e747096 commit b4409be
Showing 3 changed files with 49 additions and 2 deletions.
23 changes: 23 additions & 0 deletions docker/README.md
@@ -120,4 +120,27 @@ The files are as follows:
Start the basic Kuwa GenAI OS system, PostgreSQL, and the Gemini-Pro Executor using Docker Compose:
```sh
docker compose -f compose.yaml -f pgsql.yaml -f gemini.yaml up --build
```

## Advanced Usage

### 1. Launch Debug Mode
By default, the Docker version does not display error messages on the Multi-Chat web frontend. If you encounter an error, you can enable debug mode with the following command.
```sh
docker compose -f compose.yaml -f dev.yaml -f <other yaml files...> up --build
```
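
For example, to start the same PostgreSQL and Gemini-Pro setup described above with debug output enabled, the command would look roughly like this:
```sh
# dev.yaml switches on debug mode; the remaining files are the ones used for a normal start.
docker compose -f compose.yaml -f dev.yaml -f pgsql.yaml -f gemini.yaml up --build
```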

### 2. Run Multiple Executors
The settings for each Executor are already written in the corresponding YAML files (gemini.yaml, chatgpt.yaml, huggingface.yaml, llamacpp.yaml); refer to these configuration files and extend them according to your needs.
You may need to refer to the [Executor documentation](../src/executor/README.md).
Once the configuration files are complete, you can use the following command to start the entire system:
```sh
docker compose -f compose.yaml -f pgsql.yaml -f <executor1 configuration file> -f <executor2 configuration file...> up --build
```
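
For example, a sketch that runs the Gemini-Pro and llama.cpp Executors together, using the configuration files mentioned above:
```sh
# Each additional -f <executor yaml> adds one Executor service to the deployment.
docker compose -f compose.yaml -f pgsql.yaml -f gemini.yaml -f llamacpp.yaml up --build
```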

### 3. Force Upgrade
If your database is accidentally lost or damaged, you can force-upgrade it.
Make sure the system is running, then use the following command to force-upgrade the database:
```sh
docker exec -it kuwa-multi-chat-1 docker-entrypoint force-upgrade
```
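
Note that the container name may differ on your machine, since Docker Compose derives it from the project name. If the command above cannot find `kuwa-multi-chat-1`, you can list the running containers first to confirm the name:
```sh
# Show the containers started by this Compose project; use the listed multi-chat name in the exec command.
docker compose ps
```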
4 changes: 2 additions & 2 deletions docker/README_TW.md
@@ -131,11 +131,11 @@ docker compose -f compose.yaml -f dev.yaml -f <other yaml files...> up --build
```

### 2. Run Multiple Executors
The settings for each Executor are already written in the corresponding YAML files (gemini.yaml, chatgpt.yaml, huggingface.yaml); refer to these configuration files and extend them according to your needs.
The settings for each Executor are already written in the corresponding YAML files (gemini.yaml, chatgpt.yaml, huggingface.yaml, llamacpp.yaml); refer to these configuration files and extend them according to your needs.
You may need to refer to the [Executor documentation](../src/executor/README_TW.md).
Once the configuration files are complete, you can use the following command to start the entire system:
```sh
docker compose -f compose.yaml -f pgsql.yaml -f <Executor1 configuration file> -f <Executor2 configuration file...> up --build
docker compose -f compose.yaml -f pgsql.yaml -f <executor1 configuration file> -f <executor2 configuration file...> up --build
```

### 3. Force Upgrade
24 changes: 24 additions & 0 deletions docker/llamacpp.yaml
@@ -0,0 +1,24 @@
services:
  llamacpp-executor:
    build:
      context: ../
      dockerfile: docker/executor/Dockerfile
    image: kuwa-executor
    environment:
      EXECUTOR_TYPE: llamacpp
      EXECUTOR_ACCESS_CODE: taide-4bit
      EXECUTOR_NAME: TAIDE 4bit
    depends_on:
      - kernel
      - multi-chat
    command: ["--model_path", "/var/model/taide-4bit.gguf", "--ngl", "-1", "--temperature", "0"]
    restart: unless-stopped
    volumes: ["/path/to/taide/model.gguf:/var/model/taide-4bit.gguf"]
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ['0']
              capabilities: [gpu]
    networks: ["backend"]
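
A file like this is launched together with the base and database files, following the same pattern as the README above; note that `/path/to/taide/model.gguf` is a placeholder that must point to a real GGUF model on the host:
```sh
# Start the core system, PostgreSQL, and the llama.cpp Executor defined in the new file.
docker compose -f compose.yaml -f pgsql.yaml -f llamacpp.yaml up --build
```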
