who knows i just don't want to lose anything
gnpinkert committed Sep 16, 2024
1 parent 5ce45eb · commit 5874d30
Showing 1 changed file with 4 additions and 0 deletions.
4 changes: 4 additions & 0 deletions vllm/model_executor/layers/fused_moe/fused_moe.py
@@ -550,6 +550,10 @@ def fused_experts(hidden_states: torch.Tensor,
         sorted_token_ids, expert_ids, num_tokens_post_padded = (
             moe_align_block_size(curr_topk_ids, config['BLOCK_SIZE_M'], E))
 
+        print(f"Top_k IDs: {curr_topk_ids}")
+        print(f"sorted_token_ids IDs: {sorted_token_ids}")
+        print(f"expert ids: {expert_ids}")
+
         invoke_fused_moe_kernel(curr_hidden_states,
                                 w1,
                                 intermediate_cache1,
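For context on what these debug prints surface: moe_align_block_size groups the flattened top-k routing IDs by expert and pads each expert's token list up to a multiple of BLOCK_SIZE_M, so the fused Triton kernel can sweep fixed-size blocks that each belong to a single expert. Below is a minimal pure-PyTorch sketch of that alignment, illustrative only and not vllm's actual implementation; the padding sentinel value and the exact output layout are assumptions made for this sketch.

import torch

def moe_align_block_size_sketch(topk_ids: torch.Tensor, block_size: int,
                                num_experts: int):
    """Illustrative stand-in for vllm's moe_align_block_size.

    Groups flattened token indices by the expert they were routed to and
    pads each expert's group to a multiple of `block_size`. Padding slots
    use a sentinel equal to topk_ids.numel() (an assumption, not
    necessarily what vllm does).
    """
    flat = topk_ids.flatten()
    pad_sentinel = flat.numel()  # assumed sentinel for padded slots
    sorted_token_ids, expert_ids = [], []
    for expert in range(num_experts):
        token_idx = (flat == expert).nonzero(as_tuple=True)[0].tolist()
        # Round this expert's token count up to a multiple of block_size.
        padded = -(-len(token_idx) // block_size) * block_size if token_idx else 0
        token_idx += [pad_sentinel] * (padded - len(token_idx))
        sorted_token_ids += token_idx
        # One expert ID per block of block_size token slots.
        expert_ids += [expert] * (padded // block_size)
    num_tokens_post_padded = len(sorted_token_ids)
    return (torch.tensor(sorted_token_ids), torch.tensor(expert_ids),
            torch.tensor([num_tokens_post_padded]))

# Example: 4 tokens, top-2 routing across 3 experts, BLOCK_SIZE_M = 4.
topk_ids = torch.tensor([[0, 1], [0, 2], [1, 2], [0, 1]])
out = moe_align_block_size_sketch(topk_ids, block_size=4, num_experts=3)
print(f"sorted_token_ids: {out[0]}")  # token slots grouped/padded per expert
print(f"expert_ids: {out[1]}")        # which expert owns each block

With this layout, every block of BLOCK_SIZE_M entries in sorted_token_ids maps to exactly one entry in expert_ids, which is what lets invoke_fused_moe_kernel select a single expert's weights per block.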
