Fix llama3.2-1b inference error by handling tie_word_embedding (#2568)
grimoire authored Oct 9, 2024
1 parent c722ff5 commit 231e5bb
Showing 1 changed file with 5 additions and 0 deletions.
5 changes: 5 additions & 0 deletions lmdeploy/pytorch/models/llama.py
@@ -371,6 +371,11 @@ def forward(
         )
         return hidden_states

+    def update_weights(self):
+        """update weights."""
+        if self.config.tie_word_embeddings:
+            self.lm_head.weight = self.model.embed_tokens.weight
+
     def get_logits(self, hidden_states: torch.Tensor):
         """compute logits of the model output."""
         return self.lm_head(hidden_states)
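The patch addresses models such as llama3.2-1b that set `tie_word_embeddings`: their checkpoints store only the embedding matrix, so after loading, the output projection (`lm_head`) must alias the embedding table rather than keep an uninitialized weight of its own. A minimal, framework-free sketch of this tying (the `SimpleConfig` and `TinyLM` classes here are hypothetical stand-ins, not lmdeploy's real classes):

```python
# Minimal sketch of weight tying, assuming hypothetical
# SimpleConfig/TinyLM/Linear classes (lmdeploy's real code uses
# torch.nn modules and its own model classes).

class SimpleConfig:
    def __init__(self, tie_word_embeddings: bool):
        self.tie_word_embeddings = tie_word_embeddings

class Linear:
    """Stand-in for a linear layer: just holds a weight matrix."""
    def __init__(self, weight):
        self.weight = weight

class TinyLM:
    def __init__(self, config, embed_weight):
        self.config = config
        self.embed_tokens = Linear(embed_weight)  # token embedding table
        # lm_head starts with a dummy weight, as if the checkpoint
        # contained no lm_head.weight entry (the tied case).
        self.lm_head = Linear([[0.0] * len(row) for row in embed_weight])

    def update_weights(self):
        """Mirror of the patch: alias lm_head to the embedding table."""
        if self.config.tie_word_embeddings:
            self.lm_head.weight = self.embed_tokens.weight

vocab = [[0.1, 0.2], [0.3, 0.4]]
model = TinyLM(SimpleConfig(tie_word_embeddings=True), vocab)
model.update_weights()
# Both layers now reference the same underlying weight object.
assert model.lm_head.weight is model.embed_tokens.weight
```

Because the assignment aliases (rather than copies) the tensor, any later in-place update to the embedding table is automatically reflected in the output projection, which is the usual contract of tied word embeddings.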
