
[Perf] Reduce peak memory usage of llama (vllm-project#10339)
Signed-off-by: andoorve <[email protected]>
Signed-off-by: Maxime Fournioux <[email protected]>
andoorve authored and mfournioux committed Nov 20, 2024
1 parent 7a2b6c6 commit 32005ce
4 changes: 2 additions & 2 deletions vllm/model_executor/models/llama.py
vllm/model_executor/models/llama.py
@@ -90,8 +90,8 @@ def __init__(
         self.act_fn = SiluAndMul()
 
     def forward(self, x):
-        gate_up, _ = self.gate_up_proj(x)
-        x = self.act_fn(gate_up)
+        x, _ = self.gate_up_proj(x)
+        x = self.act_fn(x)
         x, _ = self.down_proj(x)
         return x
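
The memory saving comes from rebinding `x` instead of introducing a second name: after `x, _ = self.gate_up_proj(x)`, the input activation's last reference is dropped, so it can be freed before the next op allocates its output, lowering peak usage. A minimal sketch of this refcount effect, using hypothetical `Tensor`, `proj`, and `act` stand-ins rather than actual vLLM code:

```python
import weakref

class Tensor:
    """Stand-in for a large activation tensor (hypothetical)."""
    def __init__(self, tag):
        self.tag = tag

def proj(t):
    # Mimics gate_up_proj: returns (output, extra), like vLLM linear layers.
    return Tensor(t.tag + "->proj"), None

def act(t):
    # Mimics the SiluAndMul activation.
    return Tensor(t.tag + "->act")

x = Tensor("input")
ref_to_input = weakref.ref(x)

# Before the patch, `gate_up, _ = proj(x)` would keep the input alive in `x`
# alongside `gate_up`. Rebinding `x` drops the input's last reference, so
# CPython frees it before `act` allocates its own output.
x, _ = proj(x)
assert ref_to_input() is None  # input tensor already collected
x = act(x)
```

With real CUDA tensors the effect is the same: the freed input returns to the allocator's pool early, so the activation and the projection output never coexist at peak.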

