Fix llava_next for llama 3.2 vision cross attention states (#641)
tgaddair authored Oct 15, 2024
1 parent bea8834 commit 4c84b4f
1 changed file, 1 addition, 0 deletions: server/lorax_server/models/custom_modeling/llava_next.py
```diff
@@ -258,6 +258,7 @@ def forward(
             input_lengths=input_lengths,
             max_s=max_s,
             prefill_cache_indices=None,
+            cross_attention_states=None,
             adapter_data=adapter_data,
         )
         if lm_head_indices is not None:
```
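A plausible reading of the fix, sketched with hypothetical names (not the actual LoRAX classes): the shared language-model `forward()` gained a `cross_attention_states` parameter for Llama 3.2 vision's cross-attention path. If that parameter has no default, text-only callers such as `llava_next` must now pass it explicitly, even as `None`:

```python
# Hypothetical stand-in for the shared text-model forward pass.
# All names here are illustrative, not the real LoRAX signatures.
def text_model_forward(
    input_ids,
    input_lengths,
    max_s,
    prefill_cache_indices,
    cross_attention_states,  # newer parameter, assumed here to have no default
    adapter_data,
):
    if cross_attention_states is None:
        # llava_next path: no cross-attention inputs are supplied.
        return "text-only"
    # Llama 3.2 vision path: cross-attention states feed the decoder.
    return "cross-attention"


# Before the fix, llava_next omitted the argument and the call would
# fail with a TypeError. Passing cross_attention_states=None, as the
# diff does, keeps the call valid for the text-only model:
mode = text_model_forward(
    input_ids=[1, 2, 3],
    input_lengths=[3],
    max_s=3,
    prefill_cache_indices=None,
    cross_attention_states=None,
    adapter_data=None,
)
print(mode)  # -> text-only
```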
