hugepage_reset: Test compatible with different NUMA topologies #4237
base: master
@@ -21,6 +21,33 @@ def run(test, params, env):
     :param env: Dictionary with the test environment.
     """
 
+    def allocate_largepages_per_node():
+        """
+        The function is intended to set 1G hugepages per NUMA
+        node when the system has four or more of these nodes.
+        For this function to work, the hugepage size should be 1G.
+        This way a QEMU failure can be avoided if it is unable
+        to allocate memory.
+        """
+        node_list = host_numa_node.online_nodes_withcpumem
+        if len(node_list) >= 4:
+            allocated_memory = False
+            try:
+                for node in node_list:
+                    node_mem_free = int(
+                        host_numa_node.read_from_node_meminfo(node, "MemFree")
+                    )
+                    if node_mem_free > (mem * 1024):
+                        hp_config.set_node_num_huge_pages(4, node, "1048576")
+                        allocated_memory = True
+                        break
+            except ValueError as e:
+                test.cancel(e)
+            if not allocated_memory:
+                test.fail(
+                    "There is no NUMA node with enough memory for running the test"
+                )
+
     def set_hugepage():
         """Set nr_hugepages"""
         try:
@@ -107,9 +134,9 @@ def heavyload_install():
             "No node on your host has sufficient free memory for " "this test."
         )
     hp_config = test_setup.HugePageConfig(params)
+    if params.get("on_numa_node"):
+        allocate_largepages_per_node()
     hp_config.target_hugepages = origin_nr
     test.log.info("Setup hugepage number to %s", origin_nr)
     hp_config.setup()
     hugepage_size = utils_memory.get_huge_page_size()
     params["hugepage_path"] = hp_config.hugepage_path
     params["start_vm"] = "yes"

PaulYuuu: @mcasquer, code LGTM, I just want to confirm with you: if the node memory is not enough, do we still set up, or is it better to raise an error or skip the test?

mcasquer: Mmmm @PaulYuuu, good point, I think that situation should be handled, perhaps with a try block. I'll send an update of this.

mcasquer: @PaulYuuu added a try block that will cancel the case if there's not enough memory, faked example:
PaulYuuu: I am not sure when we will meet ValueError from read_from_node_meminfo? When all nodes lack the memory we want, the current loop will continue the test, but we should skip the test as well.

mcasquer: @PaulYuuu ValueError is hit when set_node_num_huge_pages fails allocating hugepages in the node, for example because there's not enough memory. Added a break and a condition to really check whether the test can run.
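The pattern the thread converges on (break on the first successful allocation, cancel on ValueError, fail when no node fits) can be sketched outside Avocado-VT. The exception classes and the injected `set_huge_pages` callable below are stand-ins for the framework pieces, not real Avocado-VT APIs:

```python
class TestCancel(Exception):
    """Stand-in for test.cancel() (illustrative only)."""


class TestFail(Exception):
    """Stand-in for test.fail() (illustrative only)."""


def allocate_on_first_fitting_node(nodes, mem_free_kb, required_mb, set_huge_pages):
    """Try nodes in order; allocate on the first one with enough free
    memory, cancel if the allocation itself raises ValueError, and fail
    when no node has enough free memory at all."""
    allocated = False
    try:
        for node in nodes:
            if mem_free_kb[node] > required_mb * 1024:
                set_huge_pages(node)  # may raise ValueError on allocation failure
                allocated = True
                break  # stop at the first node that fits
    except ValueError as exc:
        raise TestCancel(str(exc))
    if not allocated:
        raise TestFail("There is no NUMA node with enough memory")
```

The flag plus `break` keeps a single allocation attempt per run and makes the "no node fits" case an explicit failure instead of silently continuing, which is exactly the gap the review pointed out.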