virtio_fs_localfs_migration: Migrate localfs with different cache modes #4184
Conversation
hi @hellohellenmao , would you please review this patch? I am not sure whether it meets your expectations; I just followed the steps from your Polarion case.
hi @hellohellenmao , would you please review it again? The whole test matrix is covered IMO, thanks. The following testing matrix is covered: single-node migration, with different localfs.
Tested with Win2022 and RHEL9.3 guest OS, all passed.
@xiagao Could you please help to take a review from the Windows perspective?
Sure, will work on it after other tasks on hand.
@fbq815 Could you please help to review here? Thanks.
@zhencliu we use memory-backend-file on s390x when we use a memory backend with virtio-fs; please refer to the usage in avocado-framework/avocado-vt@85915c0
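For reference, a hedged sketch of how such a platform-specific override could look in the cfg; the vm_mem_backend parameter name is an assumption based on common avocado-vt usage and should be checked against the referenced commit:

    # Hypothetical per-arch override per the discussion: plain
    # memory-backend-file breaks live migration on s390x (RHEL-58831),
    # so switch to memfd for this test
    s390x:
        vm_mem_backend = memory-backend-memfd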
Test result on s390x: RHEL.10. @zhencliu the case LGTM and I'll ack once you update the note in the py file
Updated; the note in the cfg clarifies that memory-backend-file can cause an error, while the note in the commit body shows the details.
Talked with @hellohellenmao: for the multifs scenario, we only cover cache mode auto. I'll have to sign the commit (Verified) later because something went wrong with my account.
Tested with a RHEL guest, all the cases passed.
So the only comment from me for this patch is the same as Xiaoling mentioned: it's better to test that the dir is writable after migration.
The following testing matrix is covered:
- cache mode: auto, never, always, metadata
- writeback: enabled and disabled
- allow-direct-io: enabled and disabled
- count of fs: single fs and two fs

Note: we usually use memory-backend-file on s390x for the virtio-fs testing on RHEL, but for the live migration scenario, we have to use memory-backend-memfd due to RHEL-58831.

Signed-off-by: Zhenchao Liu <[email protected]>
Writing check added after migration. @hellohellenmao @xiagao , passed on RHEL9 and Win2025.
(1/8) Host_RHEL.m10.u0.ovmf.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.10.0.x86_64.io-github-autotest-qemu.virtio_fs_localfs_migration.cache_mode_auto.q35: STARTED
LGTM.
    - cache_mode_never:
        fs_binary_extra_options += " --allow-direct-io --cache never"
    - cache_mode_metadata:
        fs_binary_extra_options += " --cache metadata"
For the matrix here we also need to cover the other modes (auto, always, never) standalone, besides the ones above.
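Something like the following standalone variants, mirroring the quoted cfg style, would cover that; the variant names are illustrative, and the --cache values follow virtiofsd's documented modes:

    - cache_mode_auto:
        fs_binary_extra_options += " --cache auto"
    - cache_mode_always:
        fs_binary_extra_options += " --cache always"
    - cache_mode_never:
        fs_binary_extra_options += " --cache never"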
    session = vm.wait_for_login()

    for fs_dest in guest_mnts.values():
        out = session.cmd_output(params["read_file_cmd"] % fs_dest).strip()
The first step here should be to check that the virtiofs mount is still working well in the destination VM, e.g. that the virtiofs type shows up (as it does in the source VM) in the output of the mount command on the destination VM.
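A minimal sketch of such a check for a Linux guest, reusing the session from the quoted hunk; the guest_mnts mapping of fs targets to mount points is an assumption about the surrounding code:

    # Hypothetical pre-read check: every virtiofs target must still be
    # mounted on the destination VM after migration, as on the source VM
    mount_out = session.cmd_output("mount -t virtiofs")
    for fs_target, fs_dest in guest_mnts.items():
        if fs_dest not in mount_out:
            test.fail(
                f"virtiofs {fs_target} is not mounted at {fs_dest} "
                "on the destination VM after migration"
            )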
5. Run virtiofsd daemons to share the directories in step 4
6. Boot the target guest and mount the virtiofs targets in step 4
7. Do migration from the source guest to the target
8. No error occurs and the file content stays the same
Here we need to check that the virtiofs directory is still mounted correctly on the destination guest.
        ),
        tmo,
    ):
        test.fail(f"Failed to mount {fs_target}")
The error info here should be something like "There is no active virtiofs mounted on the destination VM after migration, please check.", not "Failed to mount ...": the latter makes it look like the operation here is mounting the device, while we are just checking.
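In code, the reworded failure could read as follows (a sketch, keeping the test and fs_target names from the quoted hunk):

    # Hypothetical rewording per the review: report a failed check,
    # not a failed mount operation
    test.fail(
        f"There is no active virtiofs mounted at {fs_target} "
        "on the destination VM after migration"
    )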
    fs_binary_extra_options += " --allow-direct-io"
    variants:
        - @default:
        - multifs:
For multifs here, I think we do not need to cover the whole cache mode matrix above; covering just the default one should be enough.
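In cfg terms, that restriction could be expressed with a filter inside the multifs variant; a sketch, assuming the standalone variant is named cache_mode_auto:

    - multifs:
        only cache_mode_auto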