
virtio_fs_migration_on_error: Migration test with migration-on-error #4196

Open
zhencliu wants to merge 1 commit into master from virtiofs_migration_onerror_abort
Conversation


zhencliu commented Nov 6, 2024

ID: 2968, 2970

zhencliu force-pushed the virtiofs_migration_onerror_abort branch 3 times, most recently from 93fd9a0 to c65b91d on November 11, 2024 09:32
Covered the following virtiofsd options:
  --migration-on-error abort
  --migration-on-error guest-error

Signed-off-by: Zhenchao Liu <[email protected]>
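
For context, a minimal sketch (not the actual patch) of how these options could be wired into the test; the param names migration_on_error and fs_binary_extra_options below are illustrative assumptions, not confirmed avocado-vt keys:

```python
# Sketch only: append the migration-on-error policy to the extra options that
# end up on the virtiofsd command line. Param names are assumptions.
def build_virtiofsd_extra_options(params):
    policy = params.get("migration_on_error", "abort")   # "abort" or "guest-error"
    extra = params.get("fs_binary_extra_options", "")
    return ("%s --migration-on-error %s" % (extra, policy)).strip()
```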
zhencliu force-pushed the virtiofs_migration_onerror_abort branch from c65b91d to 12f0380 on November 11, 2024 13:43
@zhencliu

Hi @hellohellenmao, would you please review this patch?

For migration-on-error=guest-error, as we discussed before, I didn't check the warning message.
For abort, the following output is produced:
[qemu output] qemu-kvm: Error loading back-end state of virtio-user-fs device /machine/peripheral/vufs_virtiofs_targetfs/virtio-backend (tag: "myfs"): Back-end failed to process its internal state
[qemu output] qemu-kvm: Failed to load vhost-user-fs-backend:back-end
[qemu output] qemu-kvm: error while loading state for instance 0x0 of device '0000:00:01.3:00.0/vhost-user-fs'
[qemu output] qemu-kvm: load of migration failed: Input/output error

I selected the bolded message for the check.
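
For reference, a minimal sketch of how that check could look, assuming the destination QEMU/migration output is captured in a string (mig_output) and an avocado test object is available; this is an illustration, not the code in this PR:

```python
import re

# Sketch only: fail the test if the expected back-end error from the abort
# case is missing from the captured migration output.
def check_abort_error(mig_output, test):
    pattern = r"Back-end failed to process its internal state"
    if not re.search(pattern, mig_output):
        test.fail("Expected virtio-fs back-end abort error not found "
                  "in the migration output")
```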

@XueqiangWei

Hi @fbq815, @xiagao, could you please help review it? Many thanks.


fbq815 commented Dec 5, 2024

@zhencliu we use memory-backend-file on s390x when we use a memory backend with virtio-fs; please refer to the usage in avocado-framework/avocado-vt@85915c0


zhencliu commented Dec 5, 2024

> @zhencliu we use memory-backend-file on s390x when we use a memory backend with virtio-fs; please refer to the usage in avocado-framework/avocado-vt@85915c0

Thanks, I didn't use memory-backend-file because it causes a core dump during migration (see RHEL-58831). cc @hellohellenmao, does the core dump issue only happen on x86?

@hellohellenmao

Sorry, I have not tried it on other platforms, but my understanding is that this is a common issue not related to the platform. @fbq815, could you double-confirm here? Thanks


zhencliu commented Dec 5, 2024

> Sorry, I have not tried it on other platforms, but my understanding is that this is a common issue not related to the platform. @fbq815, could you double-confirm here? Thanks

Never mind, I guessed it is a common issue too; but in that case, we may not be able to test it on s390x currently.


fbq815 commented Dec 5, 2024

@zhencliu @hellohellenmao I agree with using memfd given that issue, but could we add a note with the issue link so people know why we use memfd here on s390x?


zhencliu commented Dec 5, 2024

> @zhencliu @hellohellenmao I agree with using memfd given that issue, but could we add a note with the issue link so people know why we use memfd here on s390x?

Hi @fbq815, makes sense. I will update the commit message later to describe why we are using memory-backend-memfd instead of memory-backend-file here, thanks.
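
Roughly, the intended setup looks like the sketch below; the param names are illustrative assumptions, not the exact avocado-vt keys:

```python
# Sketch only: use memory-backend-memfd instead of memory-backend-file,
# because memory-backend-file triggers a core dump during migration
# (RHEL-58831). Param names are assumptions for illustration.
def configure_shared_memory(params):
    params["vm_mem_share"] = "yes"                     # virtio-fs needs shared guest memory
    params["vm_mem_backend"] = "memory-backend-memfd"  # not memory-backend-file, see RHEL-58831
```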


BohdanMar commented Dec 5, 2024

Results on s390x:
JOB ID : 7f98ed6378895486129c14e85d706c9eca481c66
JOB LOG : /root/avocado/job-results/job-2024-12-05T10.28-7f98ed6/job.log

  1. s390x.io-github-autotest-qemu.unattended_install.cdrom.extra_cdrom_ks.default_install.aio_threads.s390-virtio: PASS (450.83 s)
  2. s390x.io-github-autotest-qemu.unattended_install.cdrom.extra_cdrom_ks.default_install.aio_threads.s390-virtio: PASS (428.23 s)
  3. s390x.io-github-autotest-qemu.unattended_install.cdrom.extra_cdrom_ks.default_install.aio_threads.s390-virtio: PASS (448.18 s)
  4. s390x.io-github-autotest-qemu.virtio_fs_migration_on_error.s390-virtio: PASS (50.06 s)
  5. s390x.io-github-autotest-qemu.virtio_fs_migration_on_error.abort.s390-virtio: PASS (48.60 s)
  6. s390x.io-github-autotest-qemu.virtio_fs_migration_on_error.guest_error.diff_dir.s390-virtio: PASS (58.24 s)
  7. s390x.io-github-autotest-qemu.virtio_fs_migration_on_error.s390-virtio: PASS (48.24 s)
  8. s390x.io-github-autotest-qemu.virtio_fs_migration_on_error.abort.s390-virtio: PASS (46.05 s)
  9. s390x.io-github-autotest-qemu.virtio_fs_migration_on_error.guest_error.diff_dir.s390-virtio: PASS (55.66 s)
  10. s390x.io-github-autotest-qemu.virtio_fs_migration_on_error.s390-virtio: PASS (54.65 s)
  11. s390x.io-github-autotest-qemu.virtio_fs_migration_on_error.abort.s390-virtio: PASS (53.19 s)
  12. s390x.io-github-autotest-qemu.virtio_fs_migration_on_error.guest_error.diff_dir.s390-virtio: PASS (62.29 s)
RESULTS : PASS 12 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB HTML : /root/avocado/job-results/job-2024-12-05T10.28-7f98ed6/results.html
JOB TIME : 1812.32 s

LGTM

Inline review comments on the diff, at def start_multifs_instance(fs_tag, fs_target, fs_volume_label):
Contributor:

No multifs in this case.

Contributor Author:

Good catch! The reason I handle all of these in the 'multifs' way is that multifs is a test matrix. After talking with Tingting, we decided to cover this scenario in just one test case for now; in the future we may cover it with more test cases. So in the automation code I kept it in a form that should be easy to extend to multifs later, as sketched below.
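
A rough sketch of that idea, assuming an avocado-vt style params object (the "filesystems" key and the setup_one_fs callback are illustrative):

```python
# Sketch only: iterate over the configured filesystem targets, so the current
# single-fs case is just a one-element loop and multifs needs no rework later.
def setup_filesystems(params, setup_one_fs):
    for fs_tag in params.objects("filesystems"):
        fs_params = params.object_params(fs_tag)
        setup_one_fs(fs_tag, fs_params)
```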

Contributor:

@zhencliu Thanks for your explanation. I prefer to remove them for the moment: 1) it is not clear when multifs will be enabled; 2) if there is an update or bug fix for this part, it would take extra effort to maintain it in more than one .py file.

Contributor Author:

I am OK with that, but it might be better to confirm with Tingting first. Hi @hellohellenmao, what's your opinion?
