
[BUG] Concise description of the issue #5150

Open
hz-jamszzy opened this issue Dec 22, 2024 · 2 comments
Labels
bug Something isn't working

Comments

@hz-jamszzy

Problem Description
Asking a question in the multi-function dialogue fails immediately with: ERROR | chatchat.server.api_server.openai_routes:generator:105 - openai request error: An error occurred during streaming

Steps to Reproduce

  1. Load the local model glm-4-9b-chat-1m-hf or glm-4-9b-chat-hf with xinference
  2. Initialize langchain-chatchat; the service port is then reachable
  3. Start a multi-function dialogue, which fails

Expected Result
The corresponding answer is returned.

Actual Result
Error: ERROR | chatchat.server.api_server.openai_routes:generator:105 - openai request error: An error occurred during streaming

Environment Information

  • Langchain-Chatchat version / commit: 0.3.1
  • Deployment method (pypi install / source / docker): pypi install
  • Model inference framework (Xinference / Ollama / OpenAI API, etc.): Xinference
  • LLM model (GLM-4-9B / Qwen2-7B-Instruct, etc.): glm-4-9b-chat-1m-hf
  • Embedding model (bge-large-zh-v1.5 / m3e-base, etc.): bge-large-zh-v1.5
  • Vector store type (faiss / milvus / pg_vector, etc.): faiss
  • Operating system and version: macOS
  • Python version: 3.9
  • Inference hardware (GPU / CPU / MPS / NPU, etc.): CPU
  • Other relevant environment information:

Additional Information
Add any other information related to the issue.

@hz-jamszzy hz-jamszzy added the bug Something isn't working label Dec 22, 2024

dosubot bot commented Dec 22, 2024

Hello @hz-jamszzy! I'm Dosu, a bot that can help you fix bugs, answer questions, and become a contributor. I'll assist you while you wait for a human maintainer.

I found a similar open issue titled "接入xinference多功能对话报An error occurred during streaming" on the Langchain-Chatchat repository, but no solution was found in the comments [1]. Additionally, there are several related closed issues, such as "【bug】选择用chatglm-api 加载模型时报错" which might provide some context, although they do not contain solutions directly applicable to your problem [2].

For the error "An error occurred during streaming," potential failure points in Langchain-Chatchat include model compatibility, asynchronous task management, and error handling in streaming. The system uses an asynchronous iterator to stream responses, and issues can arise if the model does not support streaming or if there are interruptions in task management [3][4][5].
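The failure mode described above can be sketched in a few lines. This is a hypothetical illustration, not the actual chatchat source: an async generator wraps the upstream model stream, and any mid-stream exception from the backend (e.g. xinference dropping the connection) is collapsed into the single generic message seen in the log. That is why the root cause usually has to be found in the backend's own logs rather than in the chatchat error message.

```python
# Hypothetical sketch of a streaming wrapper, assumed to resemble the
# generator in openai_routes; not the real implementation.
import asyncio
from typing import AsyncIterator

async def model_stream_that_fails() -> AsyncIterator[str]:
    # Hypothetical upstream: yields a few tokens, then the backend
    # (e.g. xinference) drops the connection mid-stream.
    yield "Hello"
    yield ", "
    raise RuntimeError("backend connection reset")

async def generator(stream: AsyncIterator[str]) -> AsyncIterator[str]:
    # Any exception raised while iterating the upstream stream is
    # reduced to one generic error string, hiding the root cause.
    try:
        async for chunk in stream:
            yield chunk
    except Exception:
        yield "openai request error: An error occurred during streaming"

async def main() -> list:
    return [chunk async for chunk in generator(model_stream_that_fails())]

chunks = asyncio.run(main())
print(chunks[-1])  # the generic message masks the real backend failure
```

The client only ever sees the last chunk's generic text, which matches the log line in this report regardless of what actually went wrong upstream.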

To diagnose the issue further, enabling detailed logging might help. You can do this by setting the --verbose option to "true" using the chatchat-config command [6].


@948024326

Could this be an xinference problem? Check the xinference logs for errors.
