lxc copy --refresh deletes latest snapshot and resends it even when trees are already in sync. ZFS storage backend #14472
Comments
Please may you update your reproducer steps with the exact snapshot commands used? Also, please can you confirm this issue isn't fixed in the latest/edge channel?
OK, just tried it with edge. Updated the reproduction steps to include the snapshot commands. It's using the plain lxc snapshot commands.
Indeed, unnecessary I/O is occurring when copying containers with identical snapshots:

                                               capacity     operations     bandwidth
pool                                         alloc   free   read  write   read  write
-------------------------------------------  -----  -----  -----  -----  -----  -----
default                                      2.90G  26.6G     64  1.06K   224K  3.60M
  /var/snap/lxd/common/lxd/disks/default.img 2.90G  26.6G     64  1.06K   224K  3.60M
-------------------------------------------  -----  -----  -----  -----  -----  -----

                                               capacity     operations     bandwidth
pool                                         alloc   free   read  write   read  write
-------------------------------------------  -----  -----  -----  -----  -----  -----
default                                      2.90G  26.6G      4    260  5.49K   649K
  /var/snap/lxd/common/lxd/disks/default.img 2.90G  26.6G      4    260  5.49K   649K
-------------------------------------------  -----  -----  -----  -----  -----  -----
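For reference, figures like the above can be gathered by watching the pool while the refresh runs; a minimal sketch, assuming the default LXD snap pool name:

# In a second terminal, print per-vdev I/O statistics every second
# while the refresh is running (Ctrl-C to stop).
zpool iostat -v default 1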
And the snapshot GUIDs are identical:

❯ zfs get guid default/containers/c1@snapshot-3
NAME                              PROPERTY  VALUE                SOURCE
default/containers/c1@snapshot-3  guid      6664040801001024508  -
❯ zfs get guid default/containers/c2@snapshot-3
NAME                              PROPERTY  VALUE                SOURCE
default/containers/c2@snapshot-3  guid      6664040801001024508  -
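As an aside, the GUIDs of all snapshots in both trees can be listed at once; a minimal sketch, assuming the same pool and dataset names as above:

# List name and guid for every snapshot of both containers in one go;
# pairwise-identical guid values mean the snapshot trees match.
zfs list -H -d 1 -t snapshot -o name,guid default/containers/c1
zfs list -H -d 1 -t snapshot -o name,guid default/containers/c2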
After further investigation, I've found that we already have logic to prevent re-sending snapshots with identical GUIDs: lxd/lxd/storage/drivers/driver_zfs_volumes.go, lines 943 to 957 (commit 98a7221).

Furthermore, see lxd/lxd/storage/drivers/driver_zfs_volumes.go, lines 1050 to 1064, and lines 1139 to 1144 (same commit).
But I do want the snapshots to be copied if they are missing, instead of just applying the "active" data outside the snapshots. In this example that's only 1GB, but on a production system it may be 100GB to 10TB depending on what the system is doing. This just happens to be a case where both trees are identical, so nothing should happen. But with the current implementation, re-sending the top-level snapshot when it already matches doesn't make sense.
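For illustration, with plain ZFS commands this would amount to an incremental send from the most recent snapshot common to both sides (the dataset, snapshot, and host names below are placeholders taken from the reproducer, not what LXD actually runs):

# Assuming @snapshot-2 is the newest snapshot present on both hosts,
# send only the increment up to @snapshot-3 instead of re-sending it all.
# The -F on receive rolls the target back to @snapshot-2 before receiving.
zfs send -i default/containers/c1@snapshot-2 default/containers/c1@snapshot-3 \
    | ssh sys2 zfs receive -F default/containers/c1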
After following the reproducer steps outlined in the issue, here are the recorded ZFS pool events:
Nov 28 2024 08:00:02.108390124 sysevent.fs.zfs.history_event
version = 0x0
class = "sysevent.fs.zfs.history_event"
pool = "default"
pool_guid = 0xdd325a595ddcd173
pool_state = 0x0
pool_context = 0x0
history_hostname = "devbox"
history_dsname = "default/containers/c2/%rollback"
history_internal_str = "parent=c2"
history_internal_name = "clone swap"
history_dsid = 0x1077
history_txg = 0x5fe
history_time = 0x67488572
time = 0x67488572 0x675e6ec
eid = 0xd47
Nov 28 2024 08:00:02.109390127 sysevent.fs.zfs.history_event
Nov 28 2024 08:00:02.123390164 sysevent.fs.zfs.history_event
Nov 28 2024 08:00:02.147390227 sysevent.fs.zfs.history_event
Nov 28 2024 08:00:02.160390262 sysevent.fs.zfs.history_event
Nov 28 2024 08:00:02.160390262 sysevent.fs.zfs.history_event
Nov 28 2024 08:00:02.160390262 sysevent.fs.zfs.history_event
Nov 28 2024 08:00:02.161390264 sysevent.fs.zfs.history_event
Nov 28 2024 08:00:02.230390447 sysevent.fs.zfs.history_event
Nov 28 2024 08:00:02.231390449 sysevent.fs.zfs.history_event
Nov 28 2024 08:00:02.256390515 sysevent.fs.zfs.history_event
Nov 28 2024 08:00:02.284390589 sysevent.fs.zfs.history_event
Nov 28 2024 08:00:02.334390721 sysevent.fs.zfs.history_event
Nov 28 2024 08:00:02.334390721 sysevent.fs.zfs.history_event
Nov 28 2024 08:00:02.472391086 sysevent.fs.zfs.history_event
Nov 28 2024 08:00:02.472391086 sysevent.fs.zfs.history_event

When the last snapshots are identical, no operations are performed on them. The expected operations are executed when running lxc copy --refresh. Could you please provide details about when and how you noticed the last snapshot being deleted and subsequently re-sent?
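For anyone wanting to capture the same trace, events of this kind can be gathered with zpool events; a minimal sketch, assuming the pool is named default as in the reproducer:

# Clear previously recorded pool events, run the refresh, then dump the
# verbose event log to see exactly which ZFS operations were performed.
zpool events -c
lxc copy --refresh c1 sys2:c1
zpool events -v default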
Required information
Issue description
When sending a copy of a container to another host using lxc copy --refresh, with a ZFS storage backend on both systems, the top-level (latest) snapshot is deleted on the remote and re-sent even if the snapshots are identical.
This can cause large amounts of unnecessary network traffic, disk I/O, etc.
One step that would help resolve this: before sending any snapshots, check the ZFS guid property of the snapshots on both hosts and compare them.
Since snapshots are read-only, their ZFS GUIDs can't change, so if the leading snapshots on both sides match then no operations are needed.
This might have to be a new flag to specify that you only want to check snapshot consistency instead of the live data as well.
Something like --snaps-only, or a better name.
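A minimal sketch of the kind of pre-check being suggested, using only the zfs command line (the remote host name sys2 and the ssh transport are placeholders for illustration; LXD would do this inside its own migration code):

# Read the GUID of the newest snapshot on each side; snapshot GUIDs are
# immutable, so equal GUIDs mean the trees are already in sync.
src_guid=$(zfs list -H -d 1 -t snapshot -o guid -s creation default/containers/c1 | tail -n 1)
dst_guid=$(ssh sys2 zfs list -H -d 1 -t snapshot -o guid -s creation default/containers/c1 | tail -n 1)

if [ "$src_guid" = "$dst_guid" ]; then
    echo "leading snapshots match, nothing to send"
else
    echo "snapshot trees differ, refresh required"
fi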
Steps to reproduce

1. lxc snapshot c1 1
2. lxc snapshot c1 2
3. lxc exec c1 -- dd if=/dev/urandom of=/root/dd.img bs=1M count=1000
4. lxc snapshot c1 3
5. lxc copy c1 sys2:c1
6. lxc info c1 should show snapshots 1, 2, 3
7. zfs list -t snapshot should show the snapshots for c1 (assumes your pool isn't managed by LXD)
8. lxc copy --refresh c1 sys2:c1
Information to attach

- dmesg
- lxc info NAME --show-log
- lxc config show NAME --expanded
- lxc monitor (while reproducing the issue; see the sketch below)
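A minimal sketch of collecting this information for the reproducer's container c1 (the output file names are arbitrary):

# Kernel log, container log, and expanded configuration.
dmesg > dmesg.txt
lxc info c1 --show-log > c1-info.txt
lxc config show c1 --expanded > c1-config.yaml

# In a separate terminal, leave this running while reproducing the issue.
lxc monitor > lxc-monitor.log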