
[BUG] memory fragmentation #592

Closed
yun-yeo opened this issue Oct 25, 2021 · 18 comments · Fixed by #593
Assignees
Labels
bug Something isn't working

Comments


yun-yeo commented Oct 25, 2021

Describe the bug

The [email protected] series is hitting OOM problems. Memory usage and its growth rate decreased slightly after adopting jemalloc in the wasmvm part, but we still see memory allocation growing linearly (about 1 GB per day).

When I attach the bcc memory-leak tool to the core process, the tracked allocations are quite stable, so I assume there is no actual leak; instead it looks like a memory fragmentation issue.
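For reference, a minimal sketch of attaching such a leak tracker; the tool path /usr/share/bcc/tools/memleak and a single terrad process are assumptions (on Ubuntu the packaged binary is usually named memleak-bpfcc).

# Track outstanding allocations of the running terrad process, reporting every 30 seconds.
# Stable totals over time suggest fragmentation rather than a true leak.
sudo /usr/share/bcc/tools/memleak -p "$(pidof terrad)" 30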

Reported memory usages


@yun-yeo yun-yeo added the bug Something isn't working label Oct 25, 2021
@yun-yeo yun-yeo changed the title [BUG] memory leak or fragmentation [BUG] memory fragmentation Oct 25, 2021

yun-yeo commented Oct 25, 2021

To address the problem, I'm first running some tests with the jemalloc shared library, replacing the memory allocator for the whole process.

Install jemalloc 5.2.1

JEMALLOC_VERSION=5.2.1
wget https://github.com/jemalloc/jemalloc/releases/download/$JEMALLOC_VERSION/jemalloc-$JEMALLOC_VERSION.tar.bz2 
tar -xf ./jemalloc-$JEMALLOC_VERSION.tar.bz2 
cd jemalloc-$JEMALLOC_VERSION
# for the node with high query rate or wasm cache size, recommend below config
# ./configure --with-malloc-conf=background_thread:true,dirty_decay_ms:5000,muzzy_decay_ms:5000
./configure --with-malloc-conf=background_thread:true,metadata_thp:auto,dirty_decay_ms:30000,muzzy_decay_ms:30000
make
sudo make install

Start terrad with the jemalloc shared library preloaded

LD_PRELOAD=/usr/local/lib/libjemalloc.so terrad start
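As an alternative sketch (not from the original report), the same jemalloc options can also be supplied at runtime via the MALLOC_CONF environment variable; values compiled in with --with-malloc-conf only act as defaults, and MALLOC_CONF overrides them.

# Runtime tuning sketch: preload jemalloc and override its compiled-in defaults.
LD_PRELOAD=/usr/local/lib/libjemalloc.so \
MALLOC_CONF=background_thread:true,dirty_decay_ms:5000,muzzy_decay_ms:5000 \
terrad start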


yun-yeo commented Oct 26, 2021

After applying jemalloc process-wide, I can see memory usage has stabilized.

I will post this metric data again after longer-term monitoring.



yun-yeo commented Oct 27, 2021

It has looked calm for 3 days (the monitoring data covers only one day, but the node was started 3 days ago).



yun-yeo commented Oct 27, 2021

When the following config is applied to query nodes, they show higher memory consumption than normal nodes.

./configure --with-malloc-conf=background_thread:true,metadata_thp:auto,dirty_decay_ms:30000,muzzy_decay_ms:30000

So the modified config below is applied to query nodes with a large wasm cache size:

./configure --with-malloc-conf=background_thread:true,dirty_decay_ms:5000,muzzy_decay_ms:5000
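A quick sanity check (my sketch, assuming a single terrad process) that the preloaded allocator is actually mapped into the running node:

# A non-zero count means the LD_PRELOAD of libjemalloc took effect.
grep -c libjemalloc /proc/"$(pidof terrad)"/maps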

@yun-yeo yun-yeo self-assigned this Oct 27, 2021

begetan commented Oct 28, 2021

I found that version v0.5.9 not only has a memory leak, but it also affects syncing performance.
After a restart, the memory consumption of a Mainnet full node quickly climbs to 11 GB and the sync speed is around 1-2 blocks per second. After several hours, memory consumption keeps growing and the sync speed drops to 3-5 seconds between blocks. It is also clearly visible in the CPU usage.


Oct 27 23:30:47 ip-10-5-1-60 kernel: [91722.105495] amazon-cloudwat invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
Oct 27 23:30:47 ip-10-5-1-60 kernel: [91722.105522]  oom_kill_process.cold+0xb/0x10
Oct 27 23:30:47 ip-10-5-1-60 kernel: [91722.105751] [  pid  ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
Oct 27 23:30:47 ip-10-5-1-60 kernel: [91722.105834] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/system.slice/terra.service,task=terrad,pid=1015,uid=7010
Oct 27 23:30:47 ip-10-5-1-60 kernel: [91722.105880] Out of memory: Killed process 1015 (terrad) total-vm:95496940kB, anon-rss:31765820kB, file-rss:484kB, shmem-rss:0kB, UID:7010 pgtables:115840kB oom_score_adj:0
Oct 27 23:30:49 ip-10-5-1-60 kernel: [91724.245277] oom_reaper: reaped process 1015 (terrad), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
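For anyone reproducing this, a hedged sketch of pulling such OOM-killer events out of the kernel log on a systemd host:

# List recent kernel OOM-killer activity; adjust --since to the window of interest.
journalctl -k --since "2 days ago" | grep -iE 'out of memory|oom_reaper|oom-kill'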


begetan commented Oct 28, 2021

We also found an OOM issue when intensively querying the Tendermint RPC endpoint. But we should investigate it further, and we may have to wait until this ticket is resolved.


yun-yeo commented Oct 29, 2021

Yeah, memory reclamation spends a lot of resources.

According to this article:

"What we found was that, as allocations went up, memory would also go up. However, as objects were deleted, memory would not go back down unless all objects created at the top of the address range were also removed, exposing the stack-like behavior of the glibc allocator. In order to avoid this, you would need to make sure that any allocations that you expected to stick around would not be assigned to a high order address space."

I also suspect the wasm cache. In the wasm cache structure, each code cache can hold a bunch of child memories which are not released, because the cache memory is still accessible.


yun-yeo commented Oct 30, 2021

The most stable config is

./configure --with-malloc-conf=background_thread:true,dirty_decay_ms:0,muzzy_decay_ms:0

together with libwasmvm built with jemalloc.
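As a sketch (assuming the stock jemalloc build), the zero-decay settings can also be combined with jemalloc's stats_print option to dump allocator statistics at process exit and see how much dirty/muzzy memory is retained:

# Dump jemalloc statistics when terrad exits, using the zero-decay configuration.
LD_PRELOAD=/usr/local/lib/libjemalloc.so \
MALLOC_CONF=stats_print:true,background_thread:true,dirty_decay_ms:0,muzzy_decay_ms:0 \
terrad start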


yun-yeo commented Oct 31, 2021

Also found something interesting.

When we use a small wasm cache, like 100 MB, the memory is quite stable.

From RomanS: (memory usage screenshot)

From samwise: (memory usage screenshot)

@gitcoinbot

Issue Status: 1. Open 2. Started 3. Submitted 4. Done


This issue now has a funding of 5000.0 UST (5000.0 USD @ $1.0/UST) attached to it.

@gitcoinbot

Issue Status: 1. Open 2. Started 3. Submitted 4. Done


Workers have applied to start work.

These users each claimed they can complete the work by 265 years from now.
Please review their action plans below:

1) pmlambert has applied to start work.

I can try to reproduce it and spend some time debugging it.

Learn more on the Gitcoin Issue Details page.

@gitcoinbot

Issue Status: 1. Open 2. Started 3. Submitted 4. Done


Workers have applied to start work.

These users each claimed they can complete the work by 265 years from now.
Please review their action plans below:

1) bradlet has applied to start work.

Just wanting to find out if this issue is still active. The linked PR was merged w/ comments showing memory usage was stable. But, it was reopened shortly thereafter. Is the bounty still available, and if so, what is the remaining work request?

Learn more on the Gitcoin Issue Details page.


shupcode commented Nov 12, 2021

I noticed that my validator node uses nearly 2x the memory of a non-validator node.
The rate of increase in memory usage on my validator node is also much faster than on my sentry node.
Does this mean the memory issue is related to how a validator signs/commits blocks?

On 0.5.11.

sentry node

  • initial mem usage 9 GiB, grows slowly to 11 GiB
  • not a validator
  • 40 external peers
  • pex disabled

validator node

  • initial mem usage 11 GiB, grows quickly to 22 GiB
  • 2 sentry node peers
  • pex disabled
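A rough sketch (mine, not from this report) of one way to log resident memory over time and produce growth comparisons like the two profiles above; it assumes a single terrad process:

# Append terrad's RSS in KiB to a log once per minute.
while true; do
  echo "$(date -Is) $(ps -o rss= -p "$(pidof terrad)")"
  sleep 60
done >> terrad-rss.log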

@themarpe

Hi @YunSuk-Yeo
Just as an idea for a possible malloc replacement to reduce memory fragmentation:

It supposedly works better than jemalloc. However, it is Linux & macOS only at the current time, AFAIK.


yun-yeo commented Nov 18, 2021

When we use a cache smaller than 300 MB, memory usage is stable.
The CosmWasm team plans to implement a TTL for the contract cache: CosmWasm/wasmvm#264 (comment)
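Since a smaller cache keeps memory stable, here is a hypothetical sketch of shrinking it; the key name contract-memory-cache-size and the ~/.terra/config/app.toml path are my assumptions and may differ by core version:

# Hypothetical: cap the wasm contract memory cache at ~100 MB in app.toml.
# Verify the actual key name and config path for your core version before running.
sed -i 's/^contract-memory-cache-size *=.*/contract-memory-cache-size = 100/' ~/.terra/config/app.toml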

@terra-money terra-money deleted a comment from gitcoinbot Nov 20, 2021
@terra-money terra-money deleted a comment from gitcoinbot Nov 20, 2021
@terra-money terra-money deleted a comment from gitcoinbot Nov 20, 2021
@terra-money terra-money deleted a comment from gitcoinbot Nov 20, 2021
@gitcoinbot

Issue Status: 1. Open 2. Started 3. Submitted 4. Done


Workers have applied to start work.

These users each claimed they can complete the work by 264 years, 11 months from now.
Please review their action plans below:

1) jadelaawar has applied to start work.

I have reviewed your bug and have already figured out a solution for it!

Learn more on the Gitcoin Issue Details page.

@gitcoinbot

Issue Status: 1. Open 2. Started 3. Submitted 4. Done


Work for 5000.0 UST (5010.00 USD @ $1.0/UST) has been submitted by:



yun-yeo commented Feb 21, 2022

@yun-yeo yun-yeo closed this as completed Feb 21, 2022