
Memory Usage Question (Why is object_used_memory only about 30% of used_memory?) #4296

Open
wernermorgenstern opened this issue Dec 11, 2024 · 3 comments
Labels
bug Something isn't working

Comments

@wernermorgenstern

This is not really a bug, just a question that needs an explanation.

DragonFly Version: 1.25.4

info memory output on two Databases.

Database 1:

# Memory
used_memory:736326864
used_memory_human:702.22MiB
used_memory_peak:736735664
used_memory_peak_human:702.61MiB
fibers_stack_vms:20961280
fibers_count:640
used_memory_rss:875552768
used_memory_rss_human:834.99MiB
used_memory_peak_rss:929742848
maxmemory:8589934592
maxmemory_human:8.00GiB
used_memory_lua:0
object_used_memory:214915592
type_used_memory_string:10424880
type_used_memory_list:147160
type_used_memory_zset:186432
type_used_memory_hash:204157120
table_used_memory:82123904
num_buckets:88380
num_entries:647430
inline_keys:225
listpack_blobs:18446744073709551056
listpack_bytes:864678
small_string_bytes:10424880
pipeline_cache_bytes:1189037
dispatch_queue_bytes:0
dispatch_queue_subscriber_bytes:0
dispatch_queue_peak_bytes:937176
client_read_buffer_peak_bytes:212736
tls_bytes:3033496
snapshot_serialization_bytes:0
cache_mode:cache
maxmemory_policy:eviction
replication_streaming_buffer_bytes:0
replication_full_sync_buffer_bytes:0

Database 2:

# Memory
used_memory:260999952
used_memory_human:248.91MiB
used_memory_peak:261838544
used_memory_peak_human:249.71MiB
fibers_stack_vms:27839200
fibers_count:850
used_memory_rss:356663296
used_memory_rss_human:340.14MiB
used_memory_peak_rss:395956224
maxmemory:8589934592
maxmemory_human:8.00GiB
used_memory_lua:0
object_used_memory:141040800
type_used_memory_string:4758864
type_used_memory_hash:136281936
table_used_memory:14134840
num_buckets:16680
num_entries:152296
inline_keys:6538
listpack_blobs:18446744073709549234
listpack_bytes:707087
small_string_bytes:4758864
pipeline_cache_bytes:8160891
dispatch_queue_bytes:147936
dispatch_queue_subscriber_bytes:0
dispatch_queue_peak_bytes:844452
client_read_buffer_peak_bytes:419328
tls_bytes:3033496
snapshot_serialization_bytes:0
cache_mode:cache
maxmemory_policy:eviction
replication_streaming_buffer_bytes:0
replication_full_sync_buffer_bytes:0

In our Grafana dashboard (fed by the metrics scrape), the memory shown is used_memory.

However, for Database 1, for example

used_memory:736326864
maxmemory:8589934592
object_used_memory:214915592
type_used_memory_string:10424880
type_used_memory_list:147160
type_used_memory_zset:186432
type_used_memory_hash:204157120

Why is object_used_memory only about 30% of used_memory?

What else is using up the memory?
How can we fine-tune memory usage so that more memory is available for actual objects and keys?
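As a rough sanity check (my own arithmetic from the INFO MEMORY output above, not an official breakdown), summing the components the server itemizes for Database 1 still leaves over half of used_memory unaccounted for:

```python
# Rough accounting from the Database 1 INFO MEMORY output above.
# Which buckets overlap (e.g. whether listpack_bytes is already counted
# inside object_used_memory) is my assumption, so treat "accounted" as
# an upper bound on what the itemized metrics explain.
used_memory = 736_326_864

itemized = {
    "object_used_memory": 214_915_592,   # all value objects
    "table_used_memory": 82_123_904,     # main hash-table buckets
    "fibers_stack_vms": 20_961_280,      # fiber stacks (virtual size)
    "pipeline_cache_bytes": 1_189_037,
    "tls_bytes": 3_033_496,
}

accounted = sum(itemized.values())
print(f"accounted:   {accounted:>12,}")                # 322,223,309
print(f"unaccounted: {used_memory - accounted:>12,}")  # 414,103,555 (~56%)
```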

This is our current configuration:

      "--alsologtostderr",
      "--cluster_mode=emulated",
      "--maxclients=1000000",
      "--maxmemory=0",
      "--dbnum=1",
      "--cache_mode",
      "--conn_use_incoming_cpu",
      "--slowlog_max_len=1024",
      "--slowlog_log_slower_than=10000",

I also want to add the following flags to fine-tune eviction and defragmentation (applied only in our development environment for now, as we need to make sure it works):

      "--alsologtostderr",
      "--cluster_mode=emulated",
      "--maxclients=1000000",
      "--maxmemory=0",
      "--dbnum=1",
      "--cache_mode",
      "--conn_use_incoming_cpu",
      "--max_eviction_per_heartbeat=200",
      "--max_segment_to_consider=8",
      "--mem_defrag_page_utilization_threshold=0.7",
      "--mem_defrag_threshold=0.6",
      "--mem_defrag_waste_threshold=0.3",
      "--oom_deny_ratio=1.0",
      "--slowlog_max_len=1024",
      "--slowlog_log_slower_than=10000",
      "--vmodule=dns_resolve=1",

Can somebody help me fine-tune the memory allocation, defragmentation, and eviction settings?

Thank you very much

@wernermorgenstern wernermorgenstern added the bug Something isn't working label Dec 11, 2024
@wernermorgenstern wernermorgenstern changed the title Memory Usage Question (**Why is object_used_memory only about 30% of used_memory?**) Memory Usage Question (Why is object_used_memory only about 30% of used_memory?) Dec 11, 2024
@romange
Collaborator

romange commented Dec 12, 2024

@wernermorgenstern we have a pending issue where list memory usage is not tracked correctly. It affects both the type_used_memory_list and object_used_memory metrics. used_memory is correct and is not affected by this.

@wernermorgenstern
Author

wernermorgenstern commented Dec 12, 2024

@romange , ok, thank you.

For what it's worth, I went through all keys (using SCAN), ran MEMORY USAGE on each key, and summed the sizes. The total actually matches object_used_memory very closely (off by a few bytes, depending on when I ran INFO MEMORY versus the script).
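For reference, this is roughly what my script does (a minimal sketch assuming the redis-py client; the helper name and connection details are illustrative, not part of any API):

```python
def total_object_memory(client, count=500):
    """Sum MEMORY USAGE over every key reached via SCAN.

    Works with any client exposing redis-py style scan()/memory_usage().
    """
    total = 0
    cursor = 0
    while True:
        # SCAN returns (next_cursor, batch_of_keys); cursor 0 means done.
        cursor, keys = client.scan(cursor=cursor, count=count)
        for key in keys:
            size = client.memory_usage(key)
            if size is not None:  # key may expire between SCAN and MEMORY USAGE
                total += size
        if cursor == 0:
            break
    return total

if __name__ == "__main__":
    import redis  # pip install redis
    r = redis.Redis(host="localhost", port=6379)
    # Compare this total against object_used_memory from INFO MEMORY.
    print(total_object_memory(r))
```

Because SCAN is cursor-based, the result is only a snapshot: keys written or evicted mid-scan account for the few bytes of drift I saw.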

@romange
Collaborator

romange commented Dec 13, 2024

MEMORY USAGE is also affected. You may try the ghcr.io/dragonflydb/dragonfly-weekly:ubuntu image (pre-release), where the bug should be fixed.
