
Resolve test issue with aarch64 and 4k default hp size #5917

Merged

Conversation

hholoubk
Contributor

The test configuration for aarch64 expects a 64k default hugepage size, so the test is unable to start the machine with a 4k default hugepage size.

  • added the expected default_hp_size to the configuration
  • added a check in the test that the VM's default_hp_size matches the expected value, updating the parameters otherwise

@dzhengfy
Contributor

dzhengfy commented Oct 8, 2024

@liang-cong-red-hat Could you help review it?

@dzhengfy
Contributor

dzhengfy commented Oct 8, 2024

@hholoubk As a general rule for a PR fixing a failure, could you please also provide the information below in a comment?

  • What was the previous error message without your fix?
  • What is the test result with your fix applied?

target_hugepages = 1024
aarch64:
    default_hp_size = 524288
Contributor

I think we could use the VM memory size / (default hugepage size) to get the target_hugepages number, just like the code does in the Python file. Then parameters like default_hp_size and target_hugepages could be dropped.

if actual_hp_size != default_hp_size:
    # the actual hp size differs from the one expected for the defined number of hugepages
    # as the size is not the main focus of the test, we have to recalculate and replace the count
    new_target_hugepages = int((default_hp_size * target_hugepages) / actual_hp_size)
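For reference, a minimal sketch (not necessarily the PR's exact code) of where actual_hp_size could come from; the get_actual_hp_size helper and the direct /proc/meminfo parsing are assumptions for illustration only:

def get_actual_hp_size():
    # Illustrative assumption: read the host's default hugepage size (in KiB)
    # straight from /proc/meminfo; the test code may obtain it differently.
    with open("/proc/meminfo") as meminfo:
        for line in meminfo:
            if line.startswith("Hugepagesize:"):
                return int(line.split()[1])  # e.g. "Hugepagesize:   2048 kB" -> 2048
    raise RuntimeError("Hugepagesize not found in /proc/meminfo")

As an illustrative worked instance of the recalculation: with default_hp_size = 524288 (KiB) and target_hugepages = 4 (2 GiB in total), a host whose actual hugepage size is 2048 KiB would get new_target_hugepages = 1024, keeping the overall hugepage memory the same.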
Contributor

Here we calculate the target hugepage number according to the actual hugepage size. As I mentioned before, we could directly use this approach to calculate the hugepage number we need.

Contributor Author

I left the parameters there because I was not sure whether the desired total size of huge pages (currently 2048x1024) might depend on anything or change in the future. If the overall size is always the same, they can of course be removed.

Contributor

You could get the memory size from the domain XML; that is the total memory you want. Then use total memory / actual_hp_size to get the hugepage number.
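A minimal sketch of that suggestion, assuming the domain XML is read via virsh dumpxml and that its <memory> element is expressed in KiB; the function name and the direct virsh call are only for illustration (inside the test, the avocado-vt XML helpers would more likely be used):

import subprocess
import xml.etree.ElementTree as ET

def hugepages_for_vm(vm_name, actual_hp_size):
    # actual_hp_size: the host's default hugepage size in KiB
    xml_text = subprocess.check_output(["virsh", "dumpxml", vm_name], text=True)
    root = ET.fromstring(xml_text)
    memory_kib = int(root.findtext("memory"))  # <memory unit='KiB'>...</memory>
    # total memory / hugepage size, rounded up so the backing always fits
    return -(-memory_kib // actual_hp_size)

For example, a guest with 2097152 KiB (2 GiB) of memory on a host with 2048 KiB hugepages needs 1024 pages.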

…tual hp size and required VM memory. Therefore some configuration settings are no longer necessary.
@hholoubk
Contributor Author

@liang-cong-red-hat Could you please review it again? Thank you. I ran the test manually; I will also schedule it via Jenkins.

@hholoubk
Contributor Author

hholoubk commented Oct 10, 2024

The PR resolves an issue with the test:

  • rhel.svirt.umask.files_accessed_by_qemu
  • VMStartError: VM 'avocado-vt-vm1' failed to start: error: Failed to start domain 'avocado-vt-vm1'error: internal error: process exited while connecting to monitor: 2023-10-30T16:19:18.249090Z qemu-kvm: unable to map backing store for guest RAM: Cannot allocate memory(exit status: 1)

Contributor

@liang-cong-red-hat left a comment


LGTM

Contributor

@dzhengfy left a comment


LGTM

@dzhengfy merged commit f0520d3 into autotest:master Oct 29, 2024
1 check passed