Add function to set end_id and apply chat template for GAP Triton In-Process #183

Open
wants to merge 19 commits into main
@@ -52,7 +52,10 @@ def convert(

for file_data in generic_dataset.files_data.values():
    for row in file_data.rows:
-       token_ids = config.tokenizer.encode(row.texts[0])
+       if not config.apply_chat_template:
+           token_ids = config.tokenizer.encode(row.texts[0])
+       else:
+           token_ids = config.tokenizer.apply_chat_template([{"role": "user", "content": row.texts[0]}])
        payload = {
            "input_ids": {
                "content": token_ids,
@@ -80,6 +83,8 @@ def _add_request_params(self, payload: Dict, config: InputsConfig) -> None:
        payload["request_output_len"] = [num_tokens]
        if config.output_tokens_deterministic:
            payload["min_length"] = [num_tokens]
+       if config.set_end_id:
+           payload["end_id"] = [config.tokenizer._tokenizer.eos_token_id]
Contributor:
I am a bit concerned about adding a new CLI option for this specific use case, since we try to avoid adding too many options to the tool, and this can be achieved through --extra-inputs <name>:<value> as well.
cc @dyastremsky

Contributor:

Yeah, if this can be done via --extra-inputs, let's skip this change.

Author:
It is moved to --extra-inputs. However, a code change is still necessary unless the user directly provides the EOS token id (instead of having it fetched from the tokenizer). Please let me know if the current approach looks good. Thanks.


        for key, value in config.extra_inputs.items():
            payload[key] = [value]
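
A minimal sketch of the resolution step the author describes, assuming the user opts in via --extra-inputs; the "tokenizer_eos" sentinel value and the surrounding structure are illustrative assumptions, not the PR's actual code:

    # Hypothetical: the user passes `--extra-inputs end_id:tokenizer_eos`,
    # and the converter swaps in the tokenizer's real EOS token id before
    # forwarding the value into the trtllm payload.
    for key, value in config.extra_inputs.items():
        if key == "end_id" and value == "tokenizer_eos":
            value = config.tokenizer._tokenizer.eos_token_id
        payload[key] = [value]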
6 changes: 6 additions & 0 deletions genai-perf/genai_perf/inputs/inputs_config.py
@@ -142,3 +142,9 @@ class InputsConfig:

    # Seed used to generate random values
    random_seed: int = DEFAULT_RANDOM_SEED

+   # whether to set end_id in triton converter
+   set_end_id: bool = False
Contributor:
Is this still necessary? We don't want endpoint-specific fields in inputs_config.py.

Author:
Good catch, I'll remove this.


+   # whether to apply chat template in triton converter
+   apply_chat_template: bool = False
Contributor:
Update the comment to make this generic. You'd also want apply_chat_template to be a string if we're making it generic based on how other endpoints use chat templating.

Newline after this.

2 changes: 2 additions & 0 deletions genai-perf/genai_perf/main.py
@@ -97,6 +97,8 @@ def create_config_options(args: Namespace) -> InputsConfig:
        batch_size_image=args.batch_size_image,
        batch_size_text=args.batch_size_text,
        output_dir=args.artifact_dir,
+       set_end_id=args.triton_converter_set_end_id,
+       apply_chat_template=args.triton_converter_apply_chat_template,
    )


15 changes: 15 additions & 0 deletions genai-perf/genai_perf/parser.py
@@ -571,6 +571,21 @@ def _add_image_input_args(parser):
        "If format is not selected, format of generated image is selected at random",
    )

+   input_group.add_argument(
Contributor:
I agree with Hyunjae's point below. Also, the image input arg function does not seem like the right place for this.

"--triton-converter-set-end-id",
action="store_true",
required=False,
help="If specified, the input to trtllm engines in triton server will "
"contain end_id set to EOS token."
)

+   input_group.add_argument(
+       "--triton-converter-apply-chat-template",
Contributor:
I think apply_chat_template could be a useful CLI option to add: https://huggingface.co/docs/transformers/main/en/chat_templating

But I would go for adding it in a more generic way, to support chat templating for any use case, not just Triton in-process.
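
For reference, a minimal sketch of Hugging Face chat templating as described in the linked docs (the model name is only an example; with tokenize=True, the default, apply_chat_template returns token ids directly):

    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
    messages = [{"role": "user", "content": "Hello!"}]
    # add_generation_prompt appends the assistant turn header so the model
    # responds as the assistant rather than continuing the user's message.
    token_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True)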

Author:

Please correct me if I'm wrong: is a chat template only necessary when benchmarking raw engines? Benchmarking through API endpoints does not need GAP to apply a chat template, right?

Contributor:

Yes, you're right. I guess what I wanted to say was that we want to make it more generic, so the option is not limited to Triton in-process even though that is the current use case. This way we have a stronger reason to add it to our CLI options.

Author:

Got it. Does GAP have any other route to benchmark raw engines that might need chat templates? I am not familiar with the codebase, and it would be great if someone from your team has some bandwidth to help.

Contributor:

The link from Hyunjae shows how chat templates are used with some APIs. The only endpoint that I know supports templates right now is the chat endpoint.

The team is bandwidth-constrained at the moment, though that could be a good addition.

action="store_true",
required=False,
help="If specified, the input to trtllm engines in triton server will "
"be wrapped with chat template."
)

def _add_profile_args(parser):
    profile_group = parser.add_argument_group("Profiling")
3 changes: 3 additions & 0 deletions genai-perf/genai_perf/tokenizer.py
@@ -68,6 +68,9 @@ def __call__(self, text, **kwargs) -> "BatchEncoding":
    def encode(self, text, **kwargs) -> List[int]:
        self._encode_args.update(kwargs)
        return self._tokenizer.encode(text, **self._encode_args)

+   def apply_chat_template(self, text, **kwargs) -> List[int]:
+       return self._tokenizer.apply_chat_template(text, **kwargs)

    def decode(self, token_ids, **kwargs) -> str:
        self._decode_args.update(kwargs)
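
To illustrate why the converter branches between these two methods, a small sketch (model name is an example; exact token ids vary by tokenizer):

    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

    # encode() tokenizes the raw prompt text as-is, while apply_chat_template()
    # first wraps it in the model's chat template (role markers and special
    # tokens), so the two paths generally produce different token id sequences
    # and therefore different effective input lengths for benchmarking.
    raw_ids = tokenizer.encode("Hello!")
    chat_ids = tokenizer.apply_chat_template([{"role": "user", "content": "Hello!"}])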
2 changes: 2 additions & 0 deletions genai-perf/genai_perf/wrapper.py
@@ -108,6 +108,8 @@ def build_cmd(args: Namespace, extra_args: Optional[List[str]] = None) -> List[str]:
        "tokenizer",
        "tokenizer_trust_remote_code",
        "tokenizer_revision",
+       "triton_converter_set_end_id",
+       "triton_converter_apply_chat_template"
    ]

    utils.remove_file(args.profile_export_file)