Use `token_ids` to track the FSM state for each sequence in the vLLM integration #539 (Closed)
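The core idea: vLLM can reorder, fork (beam search), and reuse sequence slots between scheduler steps, so keying a sequence's FSM state by its `seq_id` is unsafe; keying it by the tuple of token ids generated so far identifies the sequence unambiguously. Below is a minimal sketch of that approach, with a hypothetical class name and an assumed FSM interface (`initial_state`, `next_state`, `allowed_token_ids`), not the PR's actual code:

```python
import torch


class TokenKeyedLogitsProcessor:
    """Hypothetical illustration: FSM state tracked per token_ids tuple."""

    def __init__(self, fsm):
        self.fsm = fsm  # assumed interface: initial_state, next_state, allowed_token_ids
        self.states = {(): fsm.initial_state}  # one FSM state per generated prefix

    def __call__(self, input_ids, scores):
        key = tuple(input_ids)
        if key not in self.states:
            # The parent prefix (everything but the last token) was handled on
            # the previous decode step, so its state is already cached.
            parent = self.states[key[:-1]]
            self.states[key] = self.fsm.next_state(parent, key[-1])
        allowed = self.fsm.allowed_token_ids(self.states[key])
        mask = torch.full_like(scores, float("-inf"))
        mask[allowed] = 0  # keep only transitions the regex FSM permits
        return scores + mask
```

With this keying, a beam fork simply yields two distinct keys on the next step, so no state can leak from one sequence to another.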
Commits (16)
- `db6ef24` Regression test case (viktor-ferenczi)
- `37d9e01` Fixed test_regex to expect the final state (viktor-ferenczi)
- `8c02922` fix beam search and multiple concurrent sequences using token_id tupl…
- `e1347ca` include viktor-ferenczi refactor
- `2f63743` fix tests
- `f3043d9` don't recurse, assume previous input was handled by logits processor
- `28e3fdc` move tests
- `b490e8b` fix tests
- `70bcc52` integrate CachedRegexFSM from @viktor-ferenczi
- `e913c6e` dead code
- `f6e6743` fix tests s.t. they mock forgetting the logits processor
- `142eb0d` Revert "fix tests s.t. they mock forgetting the logits processor"
- `751bd62` Revert "Regression test case"
- `3a86332` fix bad rebase
- `7790c25` make adjustments for vllm 0.3.0
- `7d15257` fix bad rebase again
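Commit `7790c25` ("make adjustments for vllm 0.3.0") likely relates to vLLM 0.3.0 wrapping its tokenizer in a tokenizer group, so the real tokenizer sits one attribute deeper; this is also why the test below mocks a `MockTokenizerGroup` layer. A hedged illustration of the shape difference, with the helper name invented here and the attribute layout taken from the mocks:

```python
# Assumed object shapes, mirroring the mocks in the test below:
#   pre-0.3.0:  llm.tokenizer            -> tokenizer
#   0.3.0:      llm.tokenizer.tokenizer  -> tokenizer (wrapped in a group)
def unwrap_tokenizer(llm):
    group_or_tokenizer = llm.tokenizer
    return getattr(group_or_tokenizer, "tokenizer", group_or_tokenizer)
```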
Files changed
New file (118 lines added):

```python
import re

import torch

from outlines.serve.vllm import RegexLogitsProcessor, _patched_apply_logits_processors


class MockTokenizer:
    # Byte-level vocabulary: ids 0-255 are single characters, 256 is "eos".
    vocabulary = {
        **{chr(i): i for i in range(256)},
        **{"eos": 256},
    }
    special_tokens = {"eos"}
    eos_token_id = 256

    @property
    def inverse_vocabulary(self):
        return {v: k for k, v in self.vocabulary.items()}

    def decode(self, token_ids):
        return "".join([self.inverse_vocabulary[t] for t in token_ids])

    ####
    # vLLM tokenizer features
    ####
    all_special_tokens = list(special_tokens)

    def convert_tokens_to_string(self, token):
        return token[0]

    def get_vocab(self):
        return MockTokenizer.vocabulary


class MockTokenizerGroup:
    # vLLM 0.3.0-style wrapping: the real tokenizer lives one attribute deeper.
    tokenizer = MockTokenizer()


class MockModel:
    tokenizer = MockTokenizerGroup()


def sample_from_logits(logits):
    # Draw one token id from a softmax over the (masked) logits.
    probs = torch.exp(logits) / torch.sum(torch.exp(logits))
    return torch.multinomial(probs, 1).item()


def test_time_regexp():
    pattern = r"(0?[1-9]|1[0-2]):[0-5]\d\s?(am|pm)?"
    llm = MockModel()
    tokenizer = llm.tokenizer.tokenizer  # unwrap the mocked tokenizer group
    logits_processor = RegexLogitsProcessor(pattern, llm)

    token_ids = []
    while True:
        random_scores = -10 + 20 * torch.rand(len(tokenizer.vocabulary))
        logits = logits_processor(
            input_ids=token_ids,
            scores=random_scores,
        )
        new_token_id = sample_from_logits(logits)
        if new_token_id == tokenizer.eos_token_id:
            break
        token_ids.append(new_token_id)

    assert re.fullmatch(pattern, tokenizer.decode(token_ids)) is not None


def test_time_regexp_multiple_samples():
    num_seq = 64

    pattern = r"(0?[1-9]|1[0-2]):[0-5]\d\ ?(am|pm)?"
    llm = MockModel()
    tokenizer = llm.tokenizer.tokenizer  # unwrap the mocked tokenizer group

    class MockSeqData:
        def __init__(self):
            self.output_token_ids = []

    class MockSamplingParams:
        # A single processor shared by every sequence in the group.
        logits_processors = [RegexLogitsProcessor(pattern, llm)]

    class MockSamplingMeta:
        seq_groups = [[range(num_seq), MockSamplingParams()]]  # seq_ids
        seq_data = {seq_id: MockSeqData() for seq_id in range(num_seq)}

    sampling_meta = MockSamplingMeta()

    results = []
    while True:
        complete_seq_ids = set()

        logits = torch.randn(len(sampling_meta.seq_data), len(tokenizer.vocabulary))
        new_logits = _patched_apply_logits_processors(logits, sampling_meta)
        seq_ids = sorted(sampling_meta.seq_groups[0][0])
        for logits_row, seq_id in zip(new_logits, seq_ids):
            new_token_id = sample_from_logits(logits_row)
            if new_token_id == tokenizer.eos_token_id:
                complete_seq_ids.add(seq_id)
                results.append(sampling_meta.seq_data[seq_id].output_token_ids)
            else:
                sampling_meta.seq_data[seq_id].output_token_ids.append(new_token_id)

        if complete_seq_ids:
            # Mimic the vLLM scheduler: drop finished sequences and reindex
            # the survivors, so seq ids shift between steps.
            seq_datas = [
                sd
                for seq_id, sd in sampling_meta.seq_data.items()
                if seq_id not in complete_seq_ids
            ]
            sampling_meta.seq_data = {
                i: seq_data for i, seq_data in enumerate(seq_datas)
            }
            sampling_meta.seq_groups[0][0] = range(len(sampling_meta.seq_data))

        if not sampling_meta.seq_data:
            break

    assert len(results) == num_seq
    for result in results:
        assert re.fullmatch(pattern, tokenizer.decode(result)) is not None
```
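The completion-handling block in `test_time_regexp_multiple_samples` is what makes this a regression test: finished sequences are dropped and the survivors reindexed, so seq ids shift between steps, exactly as they can under vLLM's scheduler. A stripped-down illustration of that reindexing, with invented values:

```python
# After sequence 1 finishes, sequence 2's data moves to id 1. Any FSM state
# still keyed by seq_id 1 would now be applied to the wrong sequence.
seq_data = {0: "seq-a", 1: "seq-b", 2: "seq-c"}
finished = {1}
survivors = [sd for sid, sd in seq_data.items() if sid not in finished]
seq_data = {i: sd for i, sd in enumerate(survivors)}
assert seq_data == {0: "seq-a", 1: "seq-c"}  # "seq-c" now lives at id 1
```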
What is this testing?
I observed a lack of stability in sequence order when using beam search with Outlines: a new token intended for one sequence could be applied to a different sequence. This test reproduces that behavior; it fails on `main` and passes with these changes. I will leave an explanatory docstring.
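A toy illustration of the failure mode described here, with invented values: two beams fork from the same prefix and are then reordered between steps. State stored per slot (or per reused seq id) follows the slot; state keyed by the token_ids tuple follows the actual sequence.

```python
# Two beams forked from the prefix (1,). The scheduler then swaps their order.
states_by_slot = {0: "after (1, 7)", 1: "after (1, 9)"}
states_by_tokens = {(1, 7): "after (1, 7)", (1, 9): "after (1, 9)"}

beams = [(1, 9), (1, 7)]  # reordered this step
for slot, beam in enumerate(beams):
    assert states_by_tokens[beam] == f"after {beam}"  # always correct
    # states_by_slot[slot] now describes the *other* beam: exactly the
    # "new token for one sequence applied to a different sequence" bug.
```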