ADBDEV-6442: Refactor diskquota local_table_stats_map #34
Merged
Conversation
silent-observer previously approved these changes on Oct 10, 2024
andr-sokolov approved these changes on Oct 16, 2024
silent-observer approved these changes on Oct 16, 2024
Refactor diskquota local_table_stats_map
During initialization, diskquota used a suboptimal structure for the local
hashmap local_table_stats_map. Each hashmap entry carries a significant fixed
overhead, so a large number of small entries led to increased RAM consumption
during cluster startup. This patch changes the map layout: the key is now the
table OID and the value is an array of per-segment sizes. This significantly
reduces memory consumption, because the map now holds SEGCOUNT times fewer
entries (see the sketch below). It also fixes a small bug with duplicate table
OIDs in the active_oids string array in the dispatch_rejectmap function.
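A minimal sketch of the new layout, assuming PostgreSQL's dynahash API; the struct, field, and function names here (TableSizeEntry, sizes, create_local_table_stats_map) are illustrative and may not match the exact diskquota identifiers:

```c
#include "postgres.h"
#include "utils/hsearch.h"

/* Illustrative entry: one record per table, holding all per-segment sizes.
 * The real diskquota struct and field names may differ. */
typedef struct TableSizeEntry
{
	Oid		reloid;							/* hash key: table OID */
	int64	sizes[FLEXIBLE_ARRAY_MEMBER];	/* one size per segment */
} TableSizeEntry;

static HTAB *
create_local_table_stats_map(int segcount)
{
	HASHCTL		ctl;

	memset(&ctl, 0, sizeof(ctl));
	ctl.keysize = sizeof(Oid);
	ctl.entrysize = offsetof(TableSizeEntry, sizes) + segcount * sizeof(int64);
	ctl.hcxt = CurrentMemoryContext;

	return hash_create("local_table_stats_map (sketch)", 1024, &ctl,
					   HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
}
```

With this layout, a lookup by table OID returns one entry whose sizes array covers every segment, instead of SEGCOUNT separate entries each paying the per-entry overhead.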
Tests are not provided, but the hashmap size can be estimated with the
hash_estimate_size function; a rough sketch of such an estimate follows.
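As an illustration (not the exact calculation from the original description), hash_estimate_size can be called with the entry counts and entry sizes of the old and new layouts; the per-entry sizes below are assumptions:

```c
#include "postgres.h"
#include "utils/hsearch.h"

/* Rough comparison of the old and new map footprints using PostgreSQL's
 * hash_estimate_size(). Entry sizes are assumptions, not the exact
 * diskquota struct sizes. */
static void
compare_map_estimates(void)
{
	const long	num_tables = 1000000L;	/* 1,000,000 tables (from the PR text) */
	const int	segcount = 1000;		/* 1000-segment cluster (from the PR text) */

	/* Old layout: one entry per (table OID, segment id) pair. */
	Size		old_entry = sizeof(Oid) + sizeof(int) + sizeof(int64);
	Size		old_total = hash_estimate_size(num_tables * segcount, old_entry);

	/* New layout: one entry per table OID with an array of per-segment sizes. */
	Size		new_entry = sizeof(Oid) + segcount * sizeof(int64);
	Size		new_total = hash_estimate_size(num_tables, new_entry);

	elog(LOG, "old map estimate: %zu bytes, new map estimate: %zu bytes",
		 (size_t) old_total, (size_t) new_total);
}
```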
That is, the memory consumption for 1,000,000 tables on a 1000-segment cluster dropped from 38 gigabytes to 7.5 gigabytes.
It is easier to view the changes with the "Hide whitespace" option enabled.