[Bug]: IndexNode OOM after upgrading from 2.3.12 to 2.4.4 #34273
Comments
Checking the logs, I did not see anything suspicious. The default max segment size changed from 512 MB to 1024 MB; that is the only suspect I can think of. @artinshahverdian quick question: how did you observe the index memory usage?
According to the log information, the size of the new segment to be indexed is
/assign @artinshahverdian
I can confirm the segment size default value was changed in 2.4.4:
these are my configs now. If I reduce these to:
and trigger compaction, will I get smaller segments and be able to use an 8 GB machine for the indexnode, or can the existing segments no longer change?
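For reference, the segment size limit being discussed lives under `dataCoord` in `milvus.yaml`. A minimal sketch of what reverting it might look like (the value shown is the pre-2.4 default the poster is considering, not a recommendation, and other keys under `segment` are omitted):

```yaml
dataCoord:
  segment:
    # Max size of a sealed segment, in MB. The 2.4.x default is 1024;
    # 512 was the earlier default the poster is considering reverting to.
    maxSize: 512
```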
There is no way to reduce the segment size through compaction. The recommended approach is to scale the indexnode memory up to 10 GB; for a 2.3 GB segment, 10 GB of memory should be sufficient for building the index.
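As a rough sanity check on that sizing advice, here is a back-of-envelope sketch. The ~3x build-time expansion factor is an assumption for illustration, not a documented Milvus constant:

```python
def index_build_peak_mem_gb(segment_gb: float, expansion: float = 3.0) -> float:
    """Estimate peak memory needed to build an in-memory index for one segment.

    Assumes peak usage is roughly the raw segment plus `expansion` times its
    size for intermediate build structures (illustrative, not a Milvus spec).
    """
    return segment_gb * (1.0 + expansion)

# A 2.3 GB segment would need roughly 9.2 GB at peak under this assumption,
# which fits a 10 GB indexnode but not an 8 GB one.
print(index_build_peak_mem_gb(2.3))
```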
@artinshahverdian I have not changed the segment size or built a disk index. Is there any way I can find the big segment and verify its size?
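One way to answer this is to pull per-segment row counts via pymilvus's `utility.get_query_segment_info` and rank them. The helper below is a sketch that works on plain `(segment_id, num_rows)` pairs so the ranking logic can be verified offline; the byte-size estimate from row counts is an assumption, since the API reports rows, not bytes:

```python
from typing import Iterable, List, Tuple

def largest_segments(
    seg_rows: Iterable[Tuple[int, int]],
    bytes_per_row: int,
    top_n: int = 3,
) -> List[Tuple[int, float]]:
    """Rank segments by estimated size in GB, largest first.

    seg_rows: (segment_id, num_rows) pairs, e.g. built from pymilvus:
        [(s.segmentID, s.num_rows)
         for s in utility.get_query_segment_info("my_collection")]
    bytes_per_row: an estimate you supply (e.g. vector dim * 4 bytes
    for float vectors, plus scalar fields).
    """
    sized = [(sid, rows * bytes_per_row / 1024**3) for sid, rows in seg_rows]
    return sorted(sized, key=lambda item: item[1], reverse=True)[:top_n]

# Example with made-up segment stats: 768-dim float vectors ~= 3072 bytes/row.
print(largest_segments([(101, 500_000), (102, 750_000), (103, 120_000)], 3072, 2))
```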
@xiaocai2333 do you see any downside of changing the segment size back to 512?
Is there an existing issue for this?
Environment
Current Behavior
I am running Milvus 2.4.4 in cluster mode on AWS EKS. I am seeing the indexnode crash while it is trying to build an index. I have just upgraded from 2.3.12 to 2.4.4 and have a dedicated nodegroup for the indexnode. The machine has 8 GB of memory. Why would the indexnode work fine in 2.3.12 with the same memory and get OOM-killed after upgrading to 2.4.4? Anything I'm missing? Logs for the indexnode are included from start until the crash, at info level.
After upgrading to a 16 GB node, the memory usage didn't go above 6 GB; it dropped and grew multiple times. I suspect Milvus is not monitoring memory usage and doesn't kick off garbage collection before using more memory.
My segment size and max segment size are the defaults; I have not overridden anything.
indexnode.log
Expected Behavior
The indexnode should work fine on an 8 GB machine, as it did in 2.3.12, and run garbage collection periodically.
Steps To Reproduce
No response
Milvus Log
indexnode.log
Anything else?
No response