Fusion rule for handling transformers exported models #2632
Draft: justinchuby wants to merge 4 commits into main from justinchu/attention-key-rule
Conversation
Signed-off-by: Justin Chu <[email protected]>
Codecov Report
❌ Patch coverage is

Additional details and impacted files:

@@           Coverage Diff           @@
##             main    #2632   +/-   ##
=======================================
  Coverage   70.38%   70.39%
=======================================
  Files         222      223    +1
  Lines       26288    26309   +21
  Branches     2629     2629
=======================================
+ Hits        18503    18519   +16
- Misses       6865     6870    +5
  Partials      920      920

☔ View full report in Codecov by Sentry.
    
  gramalingam 
      pushed a commit
      that referenced
      this pull request
    
      Oct 15, 2025 
    
    
      
  
    
      
    
  
Output present key value from the Attention op because past key value is provided. Previously the Attention op created would consume past key/value but not produce present key/value, which is not correct for ORT. <img width="1377" height="1225" alt="image" src="https://github.com/user-attachments/assets/118958b4-bc27-4912-b70b-000549887c0f" /> Replaces #2632 Signed-off-by: Justin Chu <[email protected]>
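For readers who want to check this invariant on their own exported model, here is a minimal sketch. It assumes the ONNX opset-23 Attention signature (past_key/past_value at input positions 4 and 5, present_key/present_value at output positions 1 and 2) and a hypothetical model path:

```python
import onnx

# Hypothetical path to a torch.onnx-exported transformers model.
model = onnx.load("model.onnx")

for node in model.graph.node:
    if node.op_type != "Attention":
        continue
    # Optional inputs/outputs are empty strings (or absent) in ONNX.
    has_past = len(node.input) > 5 and node.input[4] and node.input[5]
    has_present = len(node.output) > 2 and node.output[1] and node.output[2]
    if has_past and not has_present:
        # This is the situation ORT rejects: the op consumes the KV cache
        # but does not emit the updated cache.
        print(f"{node.name}: consumes past KV but does not produce present KV")
```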
This is still useful when enable_gqa=True.
  
When torch.onnx exports a transformers model that uses SDPA, it generates Concat nodes that concatenate past_key/past_value with the new key/value to produce the graph outputs for the KV cache. This pattern can be fused into the Attention node, which has present_key and present_value outputs. The fusion is necessary for ONNX Runtime, which requires these outputs to be produced by the Attention node whenever the past_key and past_value inputs are provided. A sketch of the before/after graph structure follows.
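To make the pattern concrete, here is a minimal sketch of the node structure before and after fusion, built with onnx.helper. The tensor names and the concatenation axis (2, assuming a [batch, heads, seq, head_dim] layout) are illustrative assumptions, not the exact names produced by the exporter:

```python
from onnx import helper

# Before fusion: the exporter emits Concat nodes that build the KV-cache
# graph outputs, and Attention consumes the concatenated key/value.
concat_key = helper.make_node(
    "Concat", ["past_key", "key"], ["present_key"], axis=2
)
concat_value = helper.make_node(
    "Concat", ["past_value", "value"], ["present_value"], axis=2
)
attention_unfused = helper.make_node(
    "Attention", ["query", "present_key", "present_value"], ["attn_output"]
)

# After fusion: a single Attention node takes past_key/past_value directly
# and produces present_key/present_value itself, which is what ORT requires.
# The empty string stands for the omitted optional attn_mask input.
attention_fused = helper.make_node(
    "Attention",
    ["query", "key", "value", "", "past_key", "past_value"],
    ["attn_output", "present_key", "present_value"],
)
```

After the rewrite, the standalone Concat nodes are removed and the present_key/present_value graph outputs are wired to the Attention node's own outputs.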