Conversation

@RMLYC RMLYC commented Oct 10, 2025

  1. Add a seqlen_q parameter to the mha_decode kernel in top/mha.py and to tests/test_mha_decode.py

  2. Add try-except blocks around the ref_program execution of both the mha_decode and mha kernels in top/mha.py to capture runtime errors

  3. Add the profile/mha_profile_test.py and profile/mha_decode_profile_test.py test scripts, which run multi-group performance tests for mha and mha_decode based on input parameter lists. Performance test results are saved as CSV files, and runtime output is logged

  4. Add a profile-test.sh script for profile testing in CI; this test runs daily and outputs a performance table for each operator test under different input shapes
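A minimal sketch of the error-capture pattern from item 2, assuming a `ref_program` callable and a module-level `logger` (both names are illustrative here; the actual code lives in top/mha.py):

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("mha_profile")

def run_ref_program(ref_program, *args):
    """Run the reference implementation, capturing runtime errors
    so one failing shape does not abort the whole profiling sweep."""
    try:
        return ref_program(*args)
    except Exception as exc:  # e.g. a CUDA out-of-memory error from PyTorch
        logger.error("ref_program failed: %s", exc)
        return None
```

On failure the sweep can record an empty result for that configuration and move on to the next row instead of crashing.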

@gemini-code-assist
Contributor

Summary of Changes

Hello @RMLYC, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the project's profiling capabilities by introducing a structured framework for performance testing of Multi-Head Attention (MHA) and MHA-Decode kernels. It centralizes common utilities, allows for parameterizing performance sweeps via CSV files, and improves the MHA-Decode kernel's flexibility by enabling variable query sequence lengths. The changes also bolster the robustness of the profiling process with better error handling, ensuring more reliable performance data collection.

Highlights

  • New Profiling Framework: Introduced a comprehensive system for profiling Multi-Head Attention (MHA) and MHA-Decode operations.
  • Centralized Utilities: Added "common_tools.py" to consolidate shared functionalities like CSV parsing, logging, and subprocess management for performance sweeps.
  • Configurable Sweeps: Implemented dedicated scripts ("mha_profile_test.py", "mha_decode_profile_test.py") and CSV input files ("mha_params.csv", "mha_decode_params.csv") to define and run MHA and MHA-Decode profiling sweeps.
  • MHA-Decode Enhancements: Updated the MHA-Decode kernel to support a variable query sequence length ("seqlen_q"), making its profiling more flexible.
  • Improved Error Handling: Added "try-except" blocks in the MHA kernel's check and profile methods to gracefully handle and report Pytorch-related errors during execution.
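The CSV-driven sweep described above can be pictured roughly as below; the parameter names and command-line flags are illustrative, not the exact contents of mha_profile_test.py:

```python
import csv
import subprocess
import sys

def run_sweep(params_csv: str, script: str):
    """Read one kernel configuration per CSV row and profile it in a
    separate subprocess, so one failing shape cannot kill the sweep."""
    with open(params_csv, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    results = []
    for idx, row in enumerate(rows, start=1):
        # e.g. python mha_profile_test.py --batch=1 --heads=8 ...
        cmd = [sys.executable, script] + [f"--{k}={v}" for k, v in row.items()]
        proc = subprocess.run(cmd, capture_output=True, text=True)
        results.append({**row, "returncode": proc.returncode})
        print(f"[{idx}/{len(rows)}] {' '.join(cmd)} -> rc={proc.returncode}")
    return results
```

Each row becomes one subprocess invocation; its return code (and, in the real scripts, parsed stdout metrics) is appended to the results CSV.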
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature              | Command             | Description
---------------------|---------------------|------------------------------------------------------------------------------------------
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist bot left a comment


Code Review

This pull request introduces a robust profiling test suite for MHA and MHA-Decode operators, which is a great addition. The structure is well-thought-out, with a central shell script orchestrating the tests and a common Python module for shared utilities. I've identified a critical bug related to a missing import that would cause the script to fail, a high-severity bug in logging incorrect metrics, and a significant code duplication issue between the shell script and a Python module. I've also included a few medium-severity suggestions to improve code clarity and maintainability. Addressing these points will make the new test suite more reliable and easier to maintain.

Comment on lines +58 to +120
print_csv_as_table() {
  local csv_path="$1"
  local max_col_width="$2"
  "$PYTHON_BIN" - "$csv_path" "$max_col_width" <<'PYCODE'
# -*- coding: utf-8 -*-
# Render a CSV as a fixed-width ASCII table (truncates long cells).

import csv, sys, os
from typing import List, Dict

def truncate(s: str, limit: int) -> str:
    if limit <= 3 or len(s) <= limit:
        return s[:limit]
    return s[:limit-3] + "..."

def col_widths(headers: List[str], rows: List[Dict[str, str]], maxw: int) -> List[int]:
    w = [len(h) for h in headers]
    for r in rows:
        for i, h in enumerate(headers):
            val = str(r.get(h, ""))
            w[i] = min(max(w[i], len(val)), maxw)
    return w

def draw_row(cells: List[str], widths: List[int]) -> str:
    parts = []
    for c, w in zip(cells, widths):
        parts.append(" " + c.ljust(w) + " ")
    return "|" + "|".join(parts) + "|"

def draw_sep(widths: List[int]) -> str:
    parts = ["-" * (w + 2) for w in widths]
    return "+" + "+".join(parts) + "+"

def main():
    if len(sys.argv) < 3:
        print("Usage: <script> <csv_path> <max_col_width>", file=sys.stderr)
        sys.exit(2)
    path = sys.argv[1]
    maxw = int(sys.argv[2])
    if not os.path.exists(path):
        print(f"[WARN] CSV not found: {path}", file=sys.stderr)
        sys.exit(0)

    with open(path, "r", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        headers = reader.fieldnames or []
        rows = list(reader)

    trunc_rows = [{h: truncate(str(r.get(h, "")), maxw) for h in headers} for r in rows]
    widths = col_widths(headers, trunc_rows, maxw)
    sep = draw_sep(widths)

    print(sep)
    print(draw_row(headers, widths))
    print(sep)
    for r in trunc_rows:
        print(draw_row([str(r.get(h, "")) for h in headers], widths))
    print(sep)

if __name__ == "__main__":
    main()
PYCODE
}

Severity: high

The print_csv_as_table shell function contains a large embedded Python script that duplicates the table-printing functionality already implemented in profile/common_tools.py. This creates a maintenance burden, as any changes to the table format would need to be applied in two places.

To resolve this, I recommend the following refactoring:

  1. Create a new, small Python script (e.g., profile/print_table.py).
  2. This new script should parse a CSV path from its command-line arguments.
  3. It should then import and call the print_table function from common_tools.py to render the table.
  4. Finally, update this shell script to call the new Python script instead of using the embedded heredoc.
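A sketch of what that wrapper might look like, assuming common_tools.py exposes a print_table(headers, rows, max_col_width) helper (the exact signature is an assumption; a stub fallback is included so the sketch runs standalone):

```python
# profile/print_table.py -- hypothetical wrapper suggested above
import argparse
import csv

try:
    from common_tools import print_table  # the real helper in profile/common_tools.py
except ImportError:
    def print_table(headers, rows, max_col_width):
        # Standalone stub so this sketch runs without the real module.
        print(" | ".join(h[:max_col_width] for h in headers))
        for r in rows:
            print(" | ".join(str(r.get(h, ""))[:max_col_width] for h in headers))

def main(argv=None):
    parser = argparse.ArgumentParser(description="Render a CSV as an ASCII table.")
    parser.add_argument("csv_path")
    parser.add_argument("max_col_width", type=int)
    args = parser.parse_args(argv)
    with open(args.csv_path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        headers = reader.fieldnames or []
        rows = list(reader)
    print_table(headers, rows, args.max_col_width)

# Invoked from profile-test.sh as:
#   "$PYTHON_BIN" profile/print_table.py "$csv_path" "$max_col_width"
```

The print_csv_as_table shell function then shrinks to a one-line call, and the table format lives in exactly one place.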

Comment on lines +460 to +462
print(f"[DONE {idx}/{len(rows)}] latency_ms={metrics.get('latency_ms','')} TFLOPs={metrics.get('tflops','')}")
logger.info("[DONE %d/%d] latency_ms=%s TFLOPs=%s",
idx, len(rows), metrics.get("latency_ms", ""), metrics.get("tflops", ""))

Severity: high

The metrics dictionary returned by parse_stdout_metrics contains keys like fwd_latency_ms and fwd_tflops, but the keys latency_ms and tflops are no longer populated (the code for them is commented out). These print and log statements will therefore display empty values for latency and TFLOPs. You should use the forward pass metrics for this summary message.

Suggested change
-print(f"[DONE {idx}/{len(rows)}] latency_ms={metrics.get('latency_ms','')} TFLOPs={metrics.get('tflops','')}")
-logger.info("[DONE %d/%d] latency_ms=%s TFLOPs=%s",
-            idx, len(rows), metrics.get("latency_ms", ""), metrics.get("tflops", ""))
+print(f"[DONE {idx}/{len(rows)}] latency_ms={metrics.get('fwd_latency_ms','')} TFLOPs={metrics.get('fwd_tflops','')}")
+logger.info("[DONE %d/%d] latency_ms=%s TFLOPs=%s",
+            idx, len(rows), metrics.get("fwd_latency_ms", ""), metrics.get("fwd_tflops", ""))

Comment on lines +90 to +97
# Optional: keep backward-compat fields empty or map to fwd values if desired.
# Here we keep them empty to avoid confusion; uncomment if you need fallback.
# out.update({
#     "ref_latency_ms": out["fwd_ref_latency_ms"],
#     "ref_tflops": out["fwd_ref_tflops"],
#     "latency_ms": out["fwd_latency_ms"],
#     "tflops": out["fwd_tflops"],
# })

Severity: medium

This commented-out block for backward compatibility appears to be dead code. If it's no longer needed, removing it would improve code clarity and simplify maintenance.

RMLYC and others added 2 commits October 10, 2025 20:25
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>