
Refactor chat template and support accurate name matching. #1216

Merged 33 commits on Mar 12, 2024

Conversation

@AllentDan (Collaborator) commented Feb 29, 2024

  • Refactor the chat template: removed from_config, decorate_prompt, _translate_messages, and sampling_param from the chat template classes (see the interface sketch below).
  • Accurate name matching.
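
For context, a minimal sketch of the template surface left after this refactor; get_prompt and messages2prompt are the method names used elsewhere in this PR, but the bodies below are illustrative, not the actual diff.

class BaseChatTemplate:

    def get_prompt(self, prompt, sequence_start=True):
        # single-turn convenience wrapper around messages2prompt
        return self.messages2prompt([dict(role='user', content=prompt)],
                                    sequence_start)

    def messages2prompt(self, messages, sequence_start=True):
        # render an OpenAI-style messages list into one prompt string
        raise NotImplementedError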

@AllentDan removed the WIP label on Mar 1, 2024
@AllentDan (Collaborator, Author) commented:

Needs an update after #1168 is merged.

@lvhan028 (Collaborator) commented Mar 1, 2024:

#1168 has been merged.

lmdeploy/model.py: review threads (outdated, resolved)
Review on lmdeploy/model.py:

eoh='\n',
assistant='ASSISTANT: ',
eoa='</s>',
separator='\n',
Collaborator: Does the vicuna template actually have \n in it?

Collaborator: The vicuna prompt after decorate differs from the previous output; please take a look.

Collaborator (Author): It is a bit odd: the template in the FastChat documentation and the code produce slightly different results. The documentation shows this:

A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.

USER: Hello!
ASSISTANT: Hello!</s>
USER: How are you?
ASSISTANT: I am good.</s>

And the code produces this:

A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hello! ASSISTANT: Hi!</s>USER: How are you? ASSISTANT:
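
For reference, a minimal sketch reproducing the code behaviour quoted above, assuming FastChat's vicuna v1.1 style (separator ' ' after user turns, '</s>' after assistant turns); the function is illustrative, not FastChat's actual API.

def vicuna_prompt(system, turns):
    seps = [' ', '</s>']  # after user turns / after assistant turns
    ret = system + seps[0]
    for i, (role, content) in enumerate(turns):
        if content is None:  # open slot for the model's next reply
            ret += role + ':'
        else:
            ret += f'{role}: {content}{seps[i % 2]}'
    return ret

system = ("A chat between a curious user and an artificial intelligence "
          "assistant. The assistant gives helpful, detailed, and polite "
          "answers to the user's questions.")
print(vicuna_prompt(system, [('USER', 'Hello!'), ('ASSISTANT', 'Hi!'),
                             ('USER', 'How are you?'), ('ASSISTANT', None)]))
# prints the single-line output shown above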

lmdeploy/model.py: review thread (outdated, resolved)
examples/vl/qwen_model.py: review thread (outdated, resolved)
Comment on lines 169 to 180
box_map = dict(user=self.user,
assistant=self.assistant,
system=self.system)
eox_map = dict(user=self.eoh,
assistant=self.eoa + self.separator,
system=self.eosys)
ret = ''
for message in messages:
role = message['role']
content = message['content']
ret += f'{box_map[role]}{content}{eox_map[role]}'
ret += f'{self.assistant}'
Collaborator: So messages2prompt means that if messages contains no system entry, no system prompt is added? Previously a default seemed to be included.

Collaborator (Author): Right, this behavior is BC-breaking (except for the internlm models) and needs discussion. Previously, if the user did not pass meta_instruction, the default one was used. Now only what the user passes is rendered; if nothing is passed, nothing is added.

Collaborator: It is indeed a bit of a dilemma. The motivation for this change is to simplify the logic so that translate_messages is no longer needed, right?

Collaborator (Author):

> It is indeed a bit of a dilemma. The motivation for this change is to simplify the logic so that translate_messages is no longer needed, right?

Right. We actually do not know what ChatGPT does internally, but FastChat seems to fall back to a default system prompt when the user does not pass one.
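
To make the BC point concrete, a hedged sketch of the two behaviours; DEFAULT_META and the function names are hypothetical, not lmdeploy's actual constants.

DEFAULT_META = 'You are a helpful assistant.'  # hypothetical default

def old_behaviour(messages):
    # before this PR: inject a default system prompt when none is given
    if not any(m['role'] == 'system' for m in messages):
        messages = [dict(role='system', content=DEFAULT_META)] + messages
    return messages

def new_behaviour(messages):
    # after this PR: render only what the caller supplies; no system
    # message means no system section in the prompt
    return messages

msgs = [dict(role='user', content='hi')]
print(old_behaviour(msgs))  # default system message injected
print(new_behaviour(msgs))  # left untouched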

@grimoire (Collaborator) commented Mar 6, 2024:

tokenizer_config.json contains a lot of useful information that might help simplify the chat template. For example, gemma-7b-it ships bos_token, eos_token, and chat_template.
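
A short sketch of this suggestion: because tokenizer_config.json already ships the template, a Hugging Face tokenizer can render the prompt directly (requires transformers >= 4.34; the model id is only an example, and gated models need authentication).

from transformers import AutoTokenizer

# gemma-7b-it carries bos_token, eos_token and chat_template in its
# tokenizer_config.json, so the tokenizer can build the prompt itself
tok = AutoTokenizer.from_pretrained('google/gemma-7b-it')
messages = [{'role': 'user', 'content': 'hi, how are you'}]
prompt = tok.apply_chat_template(messages, tokenize=False,
                                 add_generation_prompt=True)
print(prompt)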

lmdeploy/cli/utils.py: review thread (outdated, resolved)
Merge conflicts resolved in lmdeploy/cli/cli.py.
lmdeploy/model.py: review thread (outdated, resolved)
@RunningLeon (Collaborator) left a comment:

LGTM

@lvhan028 (Collaborator) commented Mar 7, 2024:

I am running a BC-breaking test, comparing the results between v0.2.5 and this PR. They differ:

prompt = 'hi, how are you'

from lmdeploy.model import MODELS

chat_templates = [
    'internlm', 'llama', 'base',
    'wizardlm', 'vicuna',
    'internlm-chat', 'internlm-chat-7b', 'internlm-chat-20b', 'internlm-chat-7b-8k',
    'internlm2-chat', 'internlm2-chat-1_8b', 'internlm2-chat-7b', 'internlm2-chat-20b',
    'baichuan-7b',
    'baichuan2-7b',
    'llama2', 'llama-2', 'llama-2-chat',
    'qwen-7b', 'qwen-14b',
    'codellama',
    'falcon',
    'chatglm2-6b',
    'solar', 'solar-70b',
    'ultracm',
    'ultralm',
    'yi', 'yi-chat', 'yi-200k', 'yi-34b',
    'Mistral-7B-Instruct', 'Mixtral-8x7B-Instruct',
    'gemma',
    'deepseek-chat'
]

for _template in chat_templates:
    print(f'---------{_template}---------')
    model = MODELS.get(_template)()  # instantiate the registered template
    print(model.get_prompt(prompt))  # render a single-turn prompt

@AllentDan (Collaborator, Author) commented:

One more issue: base-model name matching now almost always fails to find a match.
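
A hedged sketch of what accurate name matching could look like, and why base models fall through; this is an illustration, not the PR's actual algorithm.

def best_match(model_path, registered_names):
    # prefer an exact registered name, then the longest registered name
    # contained in the model path; base models usually contain none of
    # the registered chat names, so the caller must fall back to 'base'
    name = model_path.lower()
    if name in registered_names:
        return name
    hits = [n for n in registered_names if n in name]
    return max(hits, key=len) if hits else None

names = {'internlm2-chat-7b', 'internlm2-chat', 'vicuna', 'base'}
print(best_match('internlm2-chat-7b', names))  # 'internlm2-chat-7b'
print(best_match('my-pretrained-7b', names))   # None -> 'base' fallback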

lmdeploy/model.py: review thread (outdated, resolved)
@lvhan028 merged commit 24bd4b9 into InternLM:main on Mar 12, 2024. 9 checks passed.