handle cache_position kwarg in updated llama modeling
winglian committed Feb 21, 2024
1 parent 3eb834d commit 9bd96e2
Showing 1 changed file with 3 additions and 0 deletions.
src/axolotl/monkeypatch/llama_attn_hijack_flash.py
@@ -688,6 +688,9 @@ def llama_model_forward(
     output_attentions: Optional[bool] = None,
     output_hidden_states: Optional[bool] = None,
     return_dict: Optional[bool] = None,
+    cache_position: Optional[  # pylint: disable=unused-argument
+        torch.LongTensor
+    ] = None,
+) -> Union[Tuple, BaseModelOutputWithPast]:
     output_attentions = (
         output_attentions
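
Context for the change: newer llama modeling code in transformers passes a cache_position keyword into the model forward, and because this monkeypatch replaces that forward, the patched signature has to accept the kwarg (here it is simply ignored, hence the pylint disable) or the call fails with a TypeError. The sketch below uses hypothetical stand-in functions rather than axolotl's actual patch to illustrate the failure mode and the fix, assuming a transformers version that forwards cache_position.

# Minimal sketch with hypothetical stand-ins (not axolotl's real patch): a
# replacement forward that predates the cache_position kwarg breaks as soon as
# callers start passing it; one that accepts-and-ignores it keeps working.
from typing import Optional

import torch


def patched_forward_old(input_ids: torch.LongTensor) -> torch.LongTensor:
    # Pre-fix signature: no cache_position parameter at all.
    return input_ids


def patched_forward_new(
    input_ids: torch.LongTensor,
    cache_position: Optional[torch.LongTensor] = None,  # accepted but unused
) -> torch.LongTensor:
    # Post-fix signature: the extra kwarg is swallowed, mirroring the diff above.
    return input_ids


ids = torch.tensor([[1, 2, 3]])
pos = torch.arange(ids.shape[1])

patched_forward_new(ids, cache_position=pos)  # works
try:
    patched_forward_old(ids, cache_position=pos)
except TypeError as err:
    print(f"old patch on newer transformers: {err}")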
