virtio_fs_migration_on_error: Migration test with migration-on-error #4196
base: master
Conversation
Force-pushed from 93fd9a0 to c65b91d
Covered the following virtiofsd options:
--migration-on-error abort
--migration-on-error guest-error
Signed-off-by: Zhenchao Liu <[email protected]>
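For context, the two policies under test control how virtiofsd reacts when its state cannot be transferred during migration. A minimal launch sketch, assuming a Rust virtiofsd build that supports `--migration-on-error` (the socket path and shared directory below are placeholders, not values from this PR):

```shell
# Sketch: start virtiofsd with each migration-on-error policy under test.
# /tmp/vfsd.sock and /mnt/shared are placeholder paths.

# Policy 1: abort the migration on the source side on error.
/usr/libexec/virtiofsd --socket-path=/tmp/vfsd.sock \
    --shared-dir=/mnt/shared \
    --migration-on-error=abort

# Policy 2: let the migration complete, but surface the error
# to the guest instead of failing the migration itself.
/usr/libexec/virtiofsd --socket-path=/tmp/vfsd.sock \
    --shared-dir=/mnt/shared \
    --migration-on-error=guest-error
```

These invocations are a configuration sketch only; the actual test drives virtiofsd through the framework rather than by hand.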
Force-pushed from c65b91d to 12f0380
hi @hellohellenmao, would you please review this patch? For migration-on-error=guest-error, as we discussed before, I didn't check the warning message; I selected the bold message for the check.
@zhencliu we use memory-backend-file on s390x when we use a memory backend with virtio-fs; please refer to the usage in avocado-framework/avocado-vt@85915c0
Thanks, I didn't use memory-backend-file because it causes a core dump during migration (refer to RHEL-58831). cc @hellohellenmao, does the core dump issue only happen on x86?
Sorry, I have not tried it on other platforms, but my understanding is that this should be a common issue, not related to the platform. Maybe @fbq815 could double-confirm here? Thanks
Never mind, I guessed this was a common issue too, but in that case we may not be able to test it on s390x currently.
@zhencliu @hellohellenmao I agree with using memfd given the issue, but could we add a note with the issue link so people know why we use memfd here on s390x?
Hi @fbq815, makes sense. I will update the commit body later to describe why we are using memfd instead of memory-backend-file here, thanks.
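To illustrate the memfd alternative being agreed on above: vhost-user-fs requires shared guest memory, which QEMU can provide via memory-backend-memfd rather than memory-backend-file. A hedged sketch of the QEMU side (object IDs, sizes, socket path, and tag are placeholders, not values from this PR):

```shell
# Sketch: back guest RAM with memfd (share=on is required for
# vhost-user devices), then attach the virtio-fs device.
qemu-system-x86_64 \
    -m 4G \
    -object memory-backend-memfd,id=mem,size=4G,share=on \
    -numa node,memdev=mem \
    -chardev socket,id=char0,path=/tmp/vfsd.sock \
    -device vhost-user-fs-pci,chardev=char0,tag=myfs
```

With memory-backend-file the `-object` line would instead point at a file on a hugetlbfs or tmpfs mount, which is the variant reported to trigger the core dump in RHEL-58831.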
Results on s390x:
LGTM
session.close()

def start_service(session):
def start_multifs_instance(fs_tag, fs_target, fs_volume_label):
No multifs in this case.
Good catch! The reason I handled all of these in the 'multifs' way is that multifs is a test matrix. As discussed with Tingting, we currently cover this scenario in just one test case, but in the future we may cover it with more test cases, so in the automation code I designed it to be easy to extend to multifs later.
@zhencliu Thanks for your explanation. I prefer to remove them for the moment: 1) it is not clear when multifs will be enabled; 2) if there is an update or bug fix for this part, it would take extra effort to maintain it in more than one .py file.
I am OK with that, but it might be better to confirm this with Tingting. Hi @hellohellenmao, what's your opinion?
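To make the extensibility argument concrete: a hypothetical sketch of driving the filesystem setup from a parameter list, so the single-fs case is just a one-element list and multifs needs no structural change. The parameter layout and names below are illustrative only, not the actual avocado-vt API.

```python
# Hypothetical sketch of a params-driven filesystem setup: the key
# names ("filesystems", "fs_target_*", "fs_dest_*") are illustrative,
# not the real avocado-vt parameter schema.

def plan_filesystems(params):
    """Build a mount plan for every configured filesystem.

    With a single entry in "filesystems" this degenerates to the
    single-fs case; adding names extends it to multifs.
    """
    plan = []
    for name in params.get("filesystems", "fs").split():
        plan.append({
            "name": name,
            "tag": params.get("fs_target_%s" % name, "myfs"),
            "mountpoint": params.get("fs_dest_%s" % name, "/mnt/%s" % name),
        })
    return plan


if __name__ == "__main__":
    params = {
        "filesystems": "fs1 fs2",
        "fs_target_fs1": "myfs1", "fs_dest_fs1": "/mnt/fs1",
        "fs_target_fs2": "myfs2", "fs_dest_fs2": "/mnt/fs2",
    }
    for fs in plan_filesystems(params):
        print(fs["name"], fs["tag"], fs["mountpoint"])
```

The trade-off raised above still applies: this keeps one code path for both cases, at the cost of slightly more indirection in the single-fs test.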
ID: 2968, 2970