MiniCPM-V 2.6 memory leak occurred !!! #1886

Open

Liwx1014 opened this issue Dec 30, 2024 · 3 comments

Comments

@Liwx1014
Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • [ ] I am running the latest code. Development is very rapid so there are no tagged versions as of now.
  • [✅] I carefully followed the README.md.
  • [✅] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • [✅] I reviewed the Discussions, and have a new bug or useful enhancement to share.

Expected Behavior

Memory usage should remain stable across repeated MiniCPM-V 2.6 inferences (no memory leak).

Current Behavior

I am testing MiniCPM-V 2.6; here is part of my test code:
[screenshot: test code]
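Since the screenshot is not reproduced here, the loop below is a minimal sketch of the kind of test described: repeated image inference through llama-cpp-python's MiniCPM-V 2.6 chat handler. The model paths, image, prompt, and iteration count are placeholders, not the reporter's actual code.

```python
import base64

from llama_cpp import Llama
from llama_cpp.llama_chat_format import MiniCPMv26ChatHandler

# Placeholder paths: substitute your local GGUF model and mmproj files.
chat_handler = MiniCPMv26ChatHandler(clip_model_path="./mmproj-model-f16.gguf")
llama = Llama(
    model_path="./MiniCPM-V-2_6-Q4_K_M.gguf",
    chat_handler=chat_handler,
    n_ctx=4096,
)

# Encode a test image as a base64 data URI, as the multimodal chat API expects.
with open("test.jpg", "rb") as f:
    image_uri = "data:image/jpeg;base64," + base64.b64encode(f.read()).decode()

# Repeated inference over the same image; resident memory grows ~10 MB per call.
for _ in range(100):
    llama.create_chat_completion(
        messages=[{
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": image_uri}},
                {"type": "text", "text": "Describe this image."},
            ],
        }]
    )
```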

When I run the code, memory usage grows by about 10 MB per inference. Using memory_profiler, I traced the growth to this line:

embed = self._embed_image_bytes(image_bytes, llama.context_params.n_threads_batch)

[screenshot: memory_profiler line-by-line output]
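For reference, a wrapper like the one below is one way to get that kind of line-by-line output from memory_profiler; the function name here is hypothetical (the reporter profiled the handler code in llama_chat_format.py directly).

```python
from memory_profiler import profile

@profile  # prints per-line memory deltas each time the function runs
def run_one_inference(llama, messages):
    # In the trace above, the per-call increment showed up at the
    # _embed_image_bytes(...) call inside the chat handler.
    return llama.create_chat_completion(messages=messages)
```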

At first I thought the `embed` variable wasn't being released, so I added a manual release on line 2856, but the program raised an error.
I then replaced `embed` with `self._last_image_embed`, and the memory leak still occurred.
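As a guess at what that manual release looked like (the exact line-2856 edit isn't shown): freeing the embed with `llava_cpp.llava_image_embed_free` would invalidate the pointer the handler caches as `self._last_image_embed`, which could explain the error. This fragment sits inside the chat handler's method in llama_cpp/llama_chat_format.py and is not standalone code.

```python
from llama_cpp import llava_cpp

# Inside the chat handler (fragment, not standalone):
embed = self._embed_image_bytes(image_bytes, llama.context_params.n_threads_batch)
# ... image tokens are evaluated from `embed` here ...
llava_cpp.llava_image_embed_free(embed)  # frees the C-side buffer; the cached
# self._last_image_embed now points at freed memory, so a later reuse errors out
```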

Environment and Context

Please provide detailed information about your computer setup. This is important in case the issue is not reproducible except under certain specific conditions.

Example environment info:


Python 3.10.10
Ubuntu 18.04
CUDA Toolkit 12.1
llama-cpp-python 0.2.90

@woojh3690

I have the same issue. It might be related to ggerganov/llama.cpp#9879.

@Liwx1014
Author

Liwx1014 commented Jan 2, 2025

> I have the same issue. It might be related to ggerganov/llama.cpp#9879.

OK, thanks! By the way, have you tried that solution? Does it resolve the problem?

@woojh3690

No, I haven't tried applying the solution from ggerganov/llama.cpp#9879.
