Improve tokenizer scalability #202

Merged: 2 commits merged into main on Dec 3, 2024
Conversation

@dyastremsky (Contributor) commented Dec 3, 2024

This pull request gets tokens directly from the prompt to avoid needing to prepend "hi" multiple times to reach an exact token count. This is more efficient and avoids higher-than-random cache hits when using KV caching.

In addition, the corpus was switched from the farewell address to Shakespeare's sonnets, extending the number of tokens from 7,229 to 146,236 (for the GPT-2 tokenizer). This increased size is necessary at larger benchmarking scales.

This also lays the blueprint for further scalability. Experiments to speed up or parallelize synthetic data retrieval did not bear fruit for this PR (perhaps multi-threading/multi-processing is only useful for larger prompt sizes). However, quickly parsing a larger corpus worked out well. Some experimentation led to using 150 threads. I timed all of the new code as I added it.

This should also prevent input token counts from being wrong, as they could be with the previous "hi" approach (e.g., for some niche tokenizers), since we now use the tokenizer itself to produce the right number of tokens.
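
For illustration, a minimal sketch of the idea, assuming a Hugging Face-style GPT-2 tokenizer; the file name and helper are illustrative, not the actual GenAI-Perf code:

```python
from transformers import AutoTokenizer

# Sketch only: tokenize the corpus once, then slice out exactly the number of
# tokens needed instead of appending filler words until the count matches.
tokenizer = AutoTokenizer.from_pretrained("gpt2")

with open("sonnets.txt") as f:
    corpus_tokens = tokenizer.encode(f.read())

def make_prompt(num_tokens: int, offset: int = 0) -> str:
    """Return a prompt containing exactly `num_tokens` tokens under this tokenizer."""
    return tokenizer.decode(corpus_tokens[offset : offset + num_tokens])

prompt = make_prompt(128)
# Decoding and re-encoding is usually stable for GPT-2, but is not guaranteed
# for every tokenizer (see the chunk-boundary discussion later in this thread).
assert len(tokenizer.encode(prompt)) == 128
```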

The timing is posted below, with GenAI-Perf run until it had generated the inputs. The changes result in a 45% drop in runtime for the original text; the improvement is likely much larger with the new corpus, which is 20x bigger.

Before
With the previous farewells.txt (1000 prompts and default 100 prompts):
[timing screenshot]

After
[timing screenshot]
(Using 10 threads due to the smaller corpus.)

With the new corpus
No change in runtime:
[timing screenshot]

Exact count provided:
[timing screenshot]

Commits:
Draft tokenizer update
Revert temp changes
@dyastremsky marked this pull request as ready for review on December 3, 2024 04:15
@dyastremsky changed the title from "Improve tokenizer" to "Improve tokenizer scalability" on Dec 3, 2024
Args:
tokenizer: Tokenizer for tokenizing the corpus.
"""
farewell_path = pathlib.Path(__file__).parent / "sonnets.txt"
Contributor

Technically, this is no longer farewell_path ;-) Maybe we should just use a more generic name for this variable.

Contributor Author

Definitely. Fixed! :)

def tokenize_chunk(chunk):
    return tokenizer.encode(" ".join(chunk))

num_threads = 150
Contributor

Isn't this a bit excessive a number? Usually we'd go with the number of CPU cores, os.cpu_count().

Contributor Author

Nice catch! I was doing empirical testing in case the I/O made a difference, but this approach works better, and it shaved off another ~0.06s on my machine. Updated!
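
For reference, a hedged sketch of what the chunked, threaded tokenization discussed in this thread could look like with `os.cpu_count()` workers; the function and variable names are illustrative, not the exact GenAI-Perf implementation:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def tokenize_corpus(tokenizer, words, chunk_size=1000):
    """Tokenize a list of corpus words in parallel chunks and return all token IDs."""
    def tokenize_chunk(chunk):
        return tokenizer.encode(" ".join(chunk))

    chunks = [words[i : i + chunk_size] for i in range(0, len(words), chunk_size)]
    num_threads = os.cpu_count() or 1  # replaces the hard-coded 150 threads
    with ThreadPoolExecutor(max_workers=num_threads) as executor:
        chunked_tokens = list(executor.map(tokenize_chunk, chunks))

    # Flatten the per-chunk token lists in order.
    return [token for chunk_tokens in chunked_tokens for token in chunk_tokens]
```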

@IzzyPutterman (Contributor)

For the mooncake dataset, if we generate 10 different chunks of size 512 and join them, will the new total chunk be of size 5120? Asking basically whether this approach works when merging chunks with a space while retaining the expected token count. I would assume it does.

@dyastremsky (Contributor Author) commented Dec 3, 2024

> For the mooncake dataset, if we generate 10 different chunks of size 512 and join them, will the new total chunk be of size 5120? Asking basically whether this approach works when merging chunks with a space while retaining the expected token count. I would assume it does.

It worked consistently in testing. The testing was fairly extensive (many runs, many tokens, higher thread counts), so it seems to work, but I only tested with GPT-2.

Right now, this just optimizes how we do our synthetic approach (i.e., from the static file with a corpus). I don't know of a tokenizer for which this would break, but there may be cases where the last token of one chunk and the first token of the next (or of the wrap-around when we reach the corpus end) merge and change the token count when the chunks are combined. I think we even saw that with the "hi" approach for a niche tokenizer. If we start seeing variance reported, we can modify the approach.
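
A minimal sanity check along these lines could look like the sketch below, assuming any tokenizer with an `encode()` method (illustrative only, not part of this PR):

```python
def chunks_merge_cleanly(tokenizer, text_a: str, text_b: str) -> bool:
    """Check whether joining two chunks with a space keeps token counts additive."""
    count_a = len(tokenizer.encode(text_a))
    count_b = len(tokenizer.encode(text_b))
    merged_count = len(tokenizer.encode(text_a + " " + text_b))
    # Held consistently for GPT-2 in this PR's testing; other tokenizers may
    # merge tokens across the boundary and report a different count.
    return merged_count == count_a + count_b
```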

@dyastremsky merged commit 8ee6e0d into main on Dec 3, 2024
6 of 7 checks passed
@dyastremsky deleted the dyas-tokenizer branch on December 3, 2024 20:31