
Error loading local ChatGLM2-6b: generation_config.json not found #2568

Closed
1 of 3 tasks
congge27 opened this issue Nov 21, 2024 · 1 comment

@congge27

System Info / 系統信息

CUDA 12.4
CentOS Stream 9
Python 3.12.10
Running Xinference with Docker?

  • docker
  • pip install
  • installation from source

Version info / 版本信息

latest

The command used to start Xinference / 用以启动 xinference 的命令

python -m xinference-cmdline ……

Reproduction / 复现过程

1. Load the model.
2. Use the chat feature on the web UI page.
3. An error reports that generation_config.json cannot be found in the local model folder. ChatGLM2-6b does not ship this file, so is it unsupported, or is something else wrong?
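The failing step above boils down to the loader expecting a `generation_config.json` next to the model weights. As a minimal diagnostic sketch (a hypothetical helper, not part of Xinference), one can verify up front whether a local checkpoint folder ships that file and whether it parses as JSON:

```python
import json
from pathlib import Path

def has_generation_config(model_dir: str) -> bool:
    """Return True if the local model folder contains a readable
    generation_config.json.

    ChatGLM2-6b checkpoints predate this file, so the check is
    expected to return False for them.
    """
    cfg = Path(model_dir) / "generation_config.json"
    if not cfg.is_file():
        return False
    # A present-but-corrupt file would also break loading, so
    # validate that it parses as JSON.
    try:
        json.loads(cfg.read_text(encoding="utf-8"))
    except json.JSONDecodeError:
        return False
    return True
```

Running this against the local ChatGLM2-6b folder before starting Xinference distinguishes "file genuinely missing from the checkpoint" from "wrong model path".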

Expected behavior / 期待表现

I am not sure whether chatglm2-6b is simply unsupported.

@XprobeBot XprobeBot added the gpu label Nov 21, 2024
@XprobeBot XprobeBot added this to the v0.16 milestone Nov 21, 2024
@qinxuye
Contributor

qinxuye commented Nov 21, 2024

ChatGLM2 is too old; Xinference no longer supports it.

@qinxuye qinxuye closed this as completed Nov 21, 2024
3 participants