Conversation

@aolemila aolemila commented Sep 25, 2025

Motivation

From sgl-project#202.
This PR adds support for the seed feature, which can be specified per request. Note that it must be used together with --enable-deterministic-sampling.

note

  • multinomial_with_seed is ported from the SGLang codebase.
  • The hash uses four large prime constants; per the referenced issue, these numbers can be changed freely.
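The note above can be made concrete with a minimal JAX sketch of hash-based seeded sampling via the Gumbel-max trick. The four prime constants and the exact mixing formula below follow the common SGLang pattern and are illustrative only; the merged code may differ.

```python
import jax
import jax.numpy as jnp

# 64-bit ints are needed because one of the prime constants exceeds 2**32.
jax.config.update("jax_enable_x64", True)

def multinomial_with_seed(probs, seed, positions):
    """Deterministically sample one token id per row of `probs`.

    probs:     (batch, vocab) probabilities
    seed:      (batch,) per-request seeds
    positions: (batch,) decode positions, so each step gets fresh noise
    """
    n, m = probs.shape
    cols = jnp.arange(m, dtype=jnp.int64)
    # Mix the request seed with the token position, then spread the
    # result across the vocab dimension with two more primes.
    step = seed.astype(jnp.int64) * 19349663 ^ positions.astype(jnp.int64) * 73856093
    hashed = step[:, None] * 8589934591 ^ cols[None, :] * 479001599
    # Map the hash to uniform(0, 1) noise, then apply the Gumbel-max trick:
    # argmax(log p + Gumbel noise) is a sample from the categorical p.
    u = (hashed % (2 ** 24)).astype(jnp.float32) / (2 ** 24)
    eps = 1e-9
    gumbel = -jnp.log(-jnp.log(u + eps) + eps)
    return jnp.argmax(jnp.log(probs + eps) + gumbel, axis=-1)
```

Because the noise is a pure function of (seed, position), the same request replayed with the same seed yields the same tokens, regardless of batch composition.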

Command

JAX_COMPILATION_CACHE_DIR=/tmp/jit_cache \
python3 -u -m sgl_jax.launch_server \
--model-path Qwen/Qwen3-8B \
--trust-remote-code  \
--tp-size=4 \
--device=tpu \
--mem-fraction-static=0.8 \
--chunked-prefill-size=2048 \
--download-dir=/tmp \
--dtype=bfloat16 \
--max-running-requests 256 \
--skip-server-warmup \
--page-size=128  \
--disable-radix-cache \
--enable-deterministic-sampling
# Note: pass `--enable-deterministic-sampling` when you want deterministic sampling.

evalscope eval  --model Qwen-8B --api-url http://127.0.0.1:30000/v1/chat/completions --api-key EMPTY --eval-type service --datasets gsm8k --eval-batch-size 64

python3 -m sgl_jax.bench_serving --backend sgl-jax --dataset-name random --num-prompts 48 --random-input 1024 --random-output 1024 --random-range-ratio 1 --warmup-requests 0 --max-concurrency=16
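As a hypothetical usage sketch, a per-request seed could be passed through the sampling parameters of the native `/generate` endpoint. The field name `sampling_seed` follows the `SamplingParams` attribute this PR adds; the exact request schema of the endpoint is an assumption here.

```python
import json
import urllib.request

# Hypothetical request body; `sampling_seed` mirrors the new SamplingParams
# field, but the /generate schema below is illustrative, not authoritative.
payload = {
    "text": "What is 2 + 2?",
    "sampling_params": {
        "temperature": 0.7,
        "max_new_tokens": 32,
        # Per-request seed; requires the server to run with
        # --enable-deterministic-sampling.
        "sampling_seed": 42,
    },
}
req = urllib.request.Request(
    "http://127.0.0.1:30000/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# Uncomment against a running server:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read()))
```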

Accuracy

not deterministic

+---------+-----------+-----------------+----------+-------+---------+---------+
| Model   | Dataset   | Metric          | Subset   |   Num |   Score | Cat.0   |
+=========+===========+=================+==========+=======+=========+=========+
| Qwen-8B | gsm8k     | AverageAccuracy | main     |  1319 |  0.9045 | default |
+---------+-----------+-----------------+----------+-------+---------+---------+

deterministic

+---------+-----------+-----------------+----------+-------+---------+---------+
| Model   | Dataset   | Metric          | Subset   |   Num |   Score | Cat.0   |
+=========+===========+=================+==========+=======+=========+=========+
| Qwen-8B | gsm8k     | AverageAccuracy | main     |  1319 |  0.9113 | default |
+---------+-----------+-----------------+----------+-------+---------+---------+

Benchmark

not deterministic

============ Serving Benchmark Result ============
Backend:                                 sgl-jax
Traffic request rate:                    inf
Max request concurrency:                 16
Successful requests:                     48
Benchmark duration (s):                  19.03
Total input tokens:                      49152
Total generated tokens:                  49152
Total generated tokens (retokenized):    49113
Request throughput (req/s):              2.52
Input token throughput (tok/s):          2583.44
Output token throughput (tok/s):         2583.44
Total token throughput (tok/s):          5166.89
Concurrency:                             15.99
----------------End-to-End Latency----------------
Mean E2E Latency (ms):                   6337.80
Median E2E Latency (ms):                 6339.11
---------------Time to First Token----------------
Mean TTFT (ms):                          156.09
Median TTFT (ms):                        161.65
P99 TTFT (ms):                           261.80
---------------Inter-Token Latency----------------
Mean ITL (ms):                           6.04
Median ITL (ms):                         5.92
P95 ITL (ms):                            6.24
P99 ITL (ms):                            6.43
Max ITL (ms):                            227.31
==================================================

deterministic

============ Serving Benchmark Result ============
Backend:                                 sgl-jax
Traffic request rate:                    inf
Max request concurrency:                 16
Successful requests:                     48
Benchmark duration (s):                  19.31
Total input tokens:                      49152
Total generated tokens:                  49152
Total generated tokens (retokenized):    49113
Request throughput (req/s):              2.49
Input token throughput (tok/s):          2545.20
Output token throughput (tok/s):         2545.20
Total token throughput (tok/s):          5090.39
Concurrency:                             15.99
----------------End-to-End Latency----------------
Mean E2E Latency (ms):                   6433.04
Median E2E Latency (ms):                 6427.44
---------------Time to First Token----------------
Mean TTFT (ms):                          161.56
Median TTFT (ms):                        165.14
P99 TTFT (ms):                           268.46
---------------Inter-Token Latency----------------
Mean ITL (ms):                           6.13
Median ITL (ms):                         6.02
P95 ITL (ms):                            6.27
P99 ITL (ms):                            6.51
Max ITL (ms):                            230.81
==================================================

Summary of Changes

Hello @aolemila, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a significant enhancement to the token sampling mechanism by implementing a multinomial_with_seed function. This new capability allows for deterministic and reproducible sampling outcomes by incorporating a unique seed and token position for each sample. The changes are integrated throughout the sampling stack, from user-defined sampling parameters to the core sampler logic and model runner, ensuring that seeded sampling can be consistently applied. This feature is particularly valuable for scenarios requiring precise control over randomness, such as debugging or specific research applications like Tunix.

Highlights

  • Seeded Multinomial Sampling: Introduced a new multinomial_with_seed function to enable deterministic token sampling using a provided seed and token position, crucial for reproducibility.
  • Sampling Parameter Integration: Added sampling_seed to SamplingParams and SamplingMetadata to propagate the seed consistently through the entire sampling pipeline.
  • Sampler and Model Runner Updates: Modified the Sampler and ModelRunner components to accept and utilize the new positions and sampling_seed arguments during the token sampling process.
  • JAX X64 Environment Handling: The multinomial_with_seed function explicitly manages the JAX_ENABLE_X64 environment variable, temporarily enabling it for precise calculations and then disabling it, addressing a noted kernel error.
  • New Unit Tests: A new test file, test_sampler.py, was added to thoroughly validate the deterministic and correct behavior of the multinomial_with_seed function under various conditions.
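The propagation described in the highlights can be illustrated with a hypothetical sketch of how a per-request seed threads from `SamplingParams` into batched sampling metadata. Class and field names mirror the highlight list, but this is not the merged implementation.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SamplingParams:
    # Per-request knobs; `sampling_seed` is the field this PR adds.
    temperature: float = 1.0
    top_k: int = -1
    top_p: float = 1.0
    sampling_seed: Optional[int] = None

@dataclass
class SamplingMetadata:
    # Batched, per-request values the sampler consumes each step.
    temperatures: List[float]
    sampling_seeds: List[int]

def build_metadata(reqs: List[SamplingParams]) -> SamplingMetadata:
    # The model runner batches per-request params; requests without a
    # seed fall back to a default so the batch stays uniform.
    return SamplingMetadata(
        temperatures=[r.temperature for r in reqs],
        sampling_seeds=[
            r.sampling_seed if r.sampling_seed is not None else 0 for r in reqs
        ],
    )
```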

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces a valuable feature for deterministic sampling with multinomial_with_seed and adds corresponding tests. The propagation of sampling_seed and positions parameters through the system is handled well. However, there are a couple of critical issues that need addressing. The new multinomial_with_seed function modifies os.environ to enable 64-bit support in JAX, which is a dangerous side-effect within a JIT-compiled function and can lead to race conditions and unpredictable behavior. This should be configured at application startup. Additionally, a typo np.concat instead of np.concatenate will cause a runtime error. I've also included several medium-severity suggestions to improve code style, readability, and fix minor typos.
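The two critical points in the review can be sketched as follows: enable 64-bit support once at application startup through `jax.config.update`, rather than mutating `os.environ` inside a JIT-compiled function, and use `np.concatenate` (the `np.concat` alias only exists from NumPy 2.0 onward). A hedged sketch of the suggested direction, not the merged fix:

```python
import jax
import jax.numpy as jnp
import numpy as np

# At application startup, before any JAX computation: enable x64 once.
# This avoids per-call os.environ mutation inside jitted code, which is
# a side effect and can race across worker threads.
jax.config.update("jax_enable_x64", True)

# 64-bit ints are now available everywhere, e.g. for large prime constants.
x = jnp.asarray(8589934591)  # > 2**32, needs int64

# The typo fix: np.concatenate, not np.concat (pre-NumPy-2.0).
a = np.concatenate([np.array([1, 2]), np.array([3])])
```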

@aolemila aolemila force-pushed the feat/add-seed-for-sample branch 3 times, most recently from 024e890 to bec5ffc Compare September 28, 2025 08:03
…, add enable-deterministic-sampling arg, support temperature, top_k, top_p, min_p and sampling_seed without cache_miss
@aolemila aolemila force-pushed the feat/add-seed-for-sample branch 3 times, most recently from df971ab to 2e6e64e Compare September 28, 2025 08:36
@aolemila aolemila force-pushed the feat/add-seed-for-sample branch from 2e6e64e to d29ada1 Compare September 28, 2025 08:40
@aolemila aolemila changed the title add multinomial_with_seed for sampler and test_sampler.py. note: JAX_… add multinomial_with_seed for sampler and test_sampler.py Sep 28, 2025
@pathfinder-pf

/LGTM

@aolemila aolemila merged commit 41ce0a8 into feat/integrate-tunix Sep 28, 2025
1 check failed