Commit

docs(project): update
tpoisonooo committed Jan 12, 2024
1 parent 9b9f200 commit 09c5e9d
Showing 2 changed files with 8 additions and 7 deletions.
9 changes: 5 additions & 4 deletions README.md
@@ -98,7 +98,7 @@ The first run will automatically download the configuration of internlm2-7B.
# config.ini
[llm]
..
-client_url = "http://10.140.24.142:39999/inference" # example
+client_url = "http://10.140.24.142:8888/inference" # example
python3 main.py workdir
```
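The hunk above updates `client_url` in the `[llm]` section of `config.ini`. A minimal sketch of reading that key with Python's stdlib `configparser` (the inline config text and the quote-stripping step are assumptions for illustration; the project may parse `config.ini` with a TOML-style reader instead):

```python
import configparser

# Hypothetical config.ini fragment mirroring the diff above;
# the real file contains more sections and keys.
CONFIG_TEXT = """
[llm]
client_url = "http://10.140.24.142:8888/inference"
"""

parser = configparser.ConfigParser()
parser.read_string(CONFIG_TEXT)

# The value is written with TOML-style quotes, so strip them.
client_url = parser["llm"]["client_url"].strip('"')
print(client_url)  # http://10.140.24.142:8888/inference
```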
@@ -124,7 +124,8 @@ python3 main.py workdir
If you still need to read Feishu group messages, see [Feishu Developer Square - Add Application Capabilities - Robots](https://open.feishu.cn/app?lang=zh-CN).
## STEP4. Advanced Version [Optional]
-To further enhance the experience of the assistant's response, the more features you turn on, the better.
+The basic version may not perform well. You can enable these features to enhance performance. The more features you turn on, the better.
1. Use higher accuracy local LLM
@@ -134,7 +135,7 @@ To further enhance the experience of the assistant's response, the more features
2. Hybrid LLM Service
For LLM services that support the openai interface, HuixiangDou can utilize their Long Context ability.
-Using Kimi as an example, below is an example of `config.ini` configuration:
+Using [kimi](https://platform.moonshot.cn/) as an example, below is an example of `config.ini` configuration:
```shell
# config.ini
@@ -146,7 +147,7 @@ To further enhance the experience of the assistant's response, the more features
remote_llm_max_text_length = 128000
remote_llm_model = "moonshot-v1-128k"
```
-We also support the gpt API. Note that this feature will increase response time and operating costs.
+We also support the chatgpt API. Note that this feature will increase response time and operating costs.
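The hybrid setup implied by `remote_llm_max_text_length` routes by prompt length: short prompts can stay on the local LLM, while long ones go to the remote long-context model. A minimal sketch of that routing idea (the local threshold, function name, and return values are illustrative assumptions, not HuixiangDou's actual API):

```python
# remote_llm_max_text_length from the config.ini snippet above.
REMOTE_MAX_TEXT_LENGTH = 128000
# Hypothetical budget for the local model's context window.
LOCAL_MAX_TEXT_LENGTH = 3000

def choose_backend(prompt: str) -> str:
    """Pick a backend based on prompt length (illustrative only)."""
    if len(prompt) <= LOCAL_MAX_TEXT_LENGTH:
        return "local"
    if len(prompt) <= REMOTE_MAX_TEXT_LENGTH:
        # e.g. moonshot-v1-128k via the openai-style interface
        return "remote"
    raise ValueError("prompt exceeds all configured context lengths")

print(choose_backend("short question"))  # local
print(choose_backend("x" * 50000))       # remote
```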
3. Repo search enhancement
6 changes: 3 additions & 3 deletions README_zh.md
@@ -18,7 +18,7 @@

The following are the hardware requirements for running HuixiangDou. It is recommended to follow the deployment process, starting with the basic version and gradually trying the advanced features.

-| Version | Hardware Requirement | Description | Verified Devices on Linux |
+| Version | GPU Memory Requirement | Description | Verified Devices on Linux |
| :-: | :-: | :-: | :-: |
| Basic | 20GB | Can answer basic questions about domain knowledge, zero cost to run | ![](https://img.shields.io/badge/3090%2024G-passed-blue?style=for-the-badge) |
| Advanced | 40GB | Can answer source-code-level questions, zero cost to run | ![](https://img.shields.io/badge/A100%2080G-passed-blue?style=for-the-badge) |
@@ -139,7 +139,7 @@ python3 main.py workdir
2. Hybrid LLM Service

For LLM services that support the openai interface, HuixiangDou can leverage their Long Context ability.
-Using kimi as an example, below is an example of `config.ini` configuration:
+Using [kimi](https://platform.moonshot.cn/) as an example, below is an example of `config.ini` configuration:

```shell
# config.ini
@@ -151,7 +151,7 @@ python3 main.py workdir
remote_llm_max_text_length = 128000
remote_llm_model = "moonshot-v1-128k"
```
-We also support the gpt API. Note that this feature will increase response time and operating costs.
+We also support the chatgpt API. Note that this feature will increase response time and operating costs.

3. Repo search enhancement

