Support VM disk resize (zfs and lvm) without reboot (from Incus) #14211
Conversation
Heads up @mionaalex - the "Documentation" label was applied to this issue.
What would prevent growing live the …
I'd like it if we could explore adding support for that. We support growing the raw disk file offline, so I'm not sure if there's a reason we can't do it online?
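For context, growing a raw disk file offline essentially amounts to extending the file. A minimal sketch under that assumption (illustrative helper name and path, not LXD's actual driver code):

```go
package main

import (
	"fmt"
	"os"
)

// growRawImage extends a raw disk image file to newSize bytes.
// Shrinking is refused because truncating would discard guest data.
func growRawImage(path string, newSize int64) error {
	fi, err := os.Stat(path)
	if err != nil {
		return err
	}

	if newSize < fi.Size() {
		return fmt.Errorf("shrinking %q from %d to %d bytes is not supported", path, fi.Size(), newSize)
	}

	// Truncate grows the file sparsely, so no zeroes are written out.
	return os.Truncate(path, newSize)
}

func main() {
	// Hypothetical image path; grow it to 11 GiB.
	if err := growRawImage("/var/lib/lxd/virtual-machines/v1/root.img", 11<<30); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```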
Needs a rebase too, please.
@simondeziel @tomponline Re: online disk resize, I don't see an issue with adding online disk resizing for Ceph. RBD has an exclusive-lock feature and supports online resizing with RBD client kernels newer than 3.10.
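As a rough sketch of what an online RBD grow could look like if we shell out to the rbd CLI (the helper, pool, and image names are illustrative assumptions, not LXD's actual Ceph driver code):

```go
package main

import (
	"fmt"
	"os/exec"
)

// growRBDImage grows a Ceph RBD image to newSizeMiB while it may be mapped.
// "rbd resize" interprets --size in MiB and only needs --allow-shrink when
// shrinking, so a plain grow is the safe direction here.
func growRBDImage(pool, image string, newSizeMiB int64) error {
	spec := fmt.Sprintf("%s/%s", pool, image)

	out, err := exec.Command("rbd", "resize", "--size", fmt.Sprintf("%d", newSizeMiB), spec).CombinedOutput()
	if err != nil {
		return fmt.Errorf("rbd resize %s failed: %w (%s)", spec, err, out)
	}

	return nil
}

func main() {
	// Hypothetical pool and volume names; grow to 11 GiB (11264 MiB).
	if err := growRBDImage("lxd", "virtual-machine_v1.block", 11*1024); err != nil {
		fmt.Println(err)
	}
}
```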
Thanks for checking on …
@tomponline rebased and good to go. Do we want to include support for live resizing Ceph disks in this PR, or open a separate issue and save it for later?
Let's try to do it as part of this PR, and then we can add a single API extension.
I've tested live resizing a Ceph RBD filesystem disk and it works as expected; it's just online resizing of Ceph RBD block volumes that doesn't work, which explains why I haven't been able to resize a Ceph-backed rootfs.
It doesn't look like we'll be able to add support for online growing of Ceph RBD root disks. Ceph-backed VMs have a read-only snapshot, used for instance creation, which can't be updated when the root disk size is updated (see lxd/lxd/storage/drivers/driver_ceph_volumes.go, lines 1332 to 1337 at 9ac2433).
Furthermore, online resizing of Ceph volumes is generally considered unsafe in LXD (see lxd/lxd/storage/drivers/driver_ceph_volumes.go, lines 192 to 205 at 9ac2433).
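For readers without the code open, the layering pattern referenced above boils down to the following sequence of rbd operations. This is an illustrative sketch with hypothetical names, not the driver code itself:

```go
package main

import (
	"fmt"
	"os/exec"
)

// cloneFromProtectedSnapshot mirrors the pattern discussed above: a snapshot
// of the source volume is created and protected (made read-only for layering),
// and new volumes are cloned from it. It is this protected parent snapshot
// that cannot simply be updated when the root disk size changes.
func cloneFromProtectedSnapshot(pool, image, snap, clone string) error {
	parent := fmt.Sprintf("%s/%s@%s", pool, image, snap)

	steps := [][]string{
		{"rbd", "snap", "create", parent},
		{"rbd", "snap", "protect", parent},
		{"rbd", "clone", parent, fmt.Sprintf("%s/%s", pool, clone)},
	}

	for _, step := range steps {
		if out, err := exec.Command(step[0], step[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v failed: %w (%s)", step, err, out)
		}
	}

	return nil
}

func main() {
	// Hypothetical names, loosely mirroring an image volume cloned into an instance volume.
	if err := cloneFromProtectedSnapshot("lxd", "image_abc123.block", "readonly", "virtual-machine_v1.block"); err != nil {
		fmt.Println(err)
	}
}
```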
Rebased and good to go. In summary, we're adding support for online resizing (growing) of any ZFS or LVM disks. Online resizing of Ceph RBD filesystems was possible before the changes in this PR, but we've confirmed that online resizing of Ceph RBD block volumes is not possible due to the read-only snapshot used during instance creation.
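To make the summary concrete, here is a rough sketch of the live-grow flow for the ZFS and LVM cases (assumed command invocations and names, not the actual implementation): grow the backing volume first, then let QEMU pick up the new size (e.g. via the QMP block_resize command) so the guest sees the extra capacity without a reboot.

```go
package main

import (
	"fmt"
	"os/exec"
)

// growBackingVolume grows the block device backing a running VM.
// Volume names and exact size handling are illustrative only.
func growBackingVolume(driver, volume string, newSizeBytes int64) error {
	var cmd *exec.Cmd

	switch driver {
	case "zfs":
		// Grow the zvol in place; the mapped /dev/zvol/... device follows.
		cmd = exec.Command("zfs", "set", fmt.Sprintf("volsize=%d", newSizeBytes), volume)
	case "lvm":
		// Grow the logical volume in place (size given in bytes).
		cmd = exec.Command("lvextend", "-L", fmt.Sprintf("%db", newSizeBytes), volume)
	default:
		return fmt.Errorf("online growing is not supported for driver %q", driver)
	}

	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("growing %q failed: %w (%s)", volume, err, out)
	}

	// The running QEMU process still has to be told about the new size
	// (QMP "block_resize" on the VM's root drive); until then the guest
	// keeps seeing the old capacity.
	return nil
}

func main() {
	// Hypothetical dataset name; grow to 11 GiB.
	if err := growBackingVolume("zfs", "default/virtual-machines/v1.block", 11<<30); err != nil {
		fmt.Println(err)
	}
}
```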
zvols have a similar read-only snapshot as their origin, so I guess it's an inherent limitation of how CoW is implemented in Ceph. Thanks for digging into it. I'm now wondering what's up with …
https://docs.ceph.com/en/reef/rbd/rbd-snapshot/#layering seems to suggest it should just work:
But since you ran into issues, maybe we need to flatten those cloned images before growing them? https://docs.ceph.com/en/reef/rbd/rbd-snapshot/#flattening-a-cloned-image
Should we add a row for live VM disk resize in the storage driver features table? See: https://documentation.ubuntu.com/lxd/en/latest/reference/storage_drivers/#feature-comparison
+1
Signed-off-by: Stéphane Graber <[email protected]> (cherry picked from commit d78b0a89e61afbb73790c561653acda1d79d6f9f) Signed-off-by: Kadin Sayani <[email protected]> License: Apache-2.0
Signed-off-by: Stéphane Graber <[email protected]> (cherry picked from commit 0d8561e95d0f0eac1f4a5c497916f950dc6a6db1) Signed-off-by: Kadin Sayani <[email protected]> License: Apache-2.0
Signed-off-by: Stéphane Graber <[email protected]> (cherry picked from commit c13e9298cc6341bdb522b91ea53bbb91e6865eb1) Signed-off-by: Kadin Sayani <[email protected]> License: Apache-2.0
Signed-off-by: Stéphane Graber <[email protected]> (cherry picked from commit 17fb18ef07b1369f59bc9181e9657f7c6e1ee3fa) Signed-off-by: Kadin Sayani <[email protected]> License: Apache-2.0
Signed-off-by: Stéphane Graber <[email protected]> (cherry picked from commit de3ea2ec6e7ac112ad0e91c0c08339adbae368b1) Signed-off-by: Kadin Sayani <[email protected]> License: Apache-2.0
Signed-off-by: Kadin Sayani <[email protected]>
Signed-off-by: Kadin Sayani <[email protected]>
Signed-off-by: Stéphane Graber <[email protected]> (cherry picked from commit 9df531e5ee9a5d0267cd74f15312d9ac031315da) Signed-off-by: Kadin Sayani <[email protected]> License: Apache-2.0
…esize Signed-off-by: Stéphane Graber <[email protected]> (cherry picked from commit 81f9c4b915830322871bb49d6f04f3009f63d01a) Signed-off-by: Kadin Sayani <[email protected]> License: Apache-2.0
Thanks for digging into this further :) Given my initial research, your new findings, and what I've seen in the LXD codebase, I believe it is theoretically possible to online resize (grow) Ceph RBD block volumes, dir, and .raw files. I think I have some more work to do for this PR.
I don't think flattening the cloned image is a safe approach. From the docs: …
So although it is possible to online grow a Ceph RBD backed root disk, I found another problem: when we create a Ceph RBD volume, a read-only snapshot is created. This read-only snapshot is used as the clone source for future non-image volumes. The read-only (protected) property of the snapshot is a precondition for creating RBD clones.
That the initial image is turned into a read-only snapshot that gets cloned really maps to my understanding of how it works with ZFS. Still not clear why/what's different with Ceph RBD volumes :/
For reference, here is the error I'm getting after modifying the behaviour to allow for online growing the root disk, and adding a file system resize:

root@testbox:~# lxc config device set v1 root size=11GiB
Error: Failed to update device "root": Could not grow underlying "ext4" filesystem for "/dev/rbd0": Failed to run: resize2fs /dev/rbd0: exit status 1 (resize2fs 1.47.0 (5-Feb-2023)
resize2fs: Bad magic number in super-block while trying to open /dev/rbd0)
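The failing step corresponds roughly to the sketch below. The "Bad magic number in super-block" message is resize2fs reporting that it found no ext superblock at the path it was given; probing the device first makes that explicit. This is an illustrative sketch (hypothetical helper, blkid-based probe), not the LXD code that produced the error:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// growExt4 probes devPath with blkid before handing it to resize2fs, so a
// missing ext superblock surfaces as a clear error instead of resize2fs's
// "Bad magic number in super-block" failure seen above.
func growExt4(devPath string) error {
	out, err := exec.Command("blkid", "-o", "value", "-s", "TYPE", devPath).CombinedOutput()
	if err != nil {
		// blkid exits non-zero when it cannot identify anything on the device.
		return fmt.Errorf("probing %q failed: %w (%s)", devPath, err, out)
	}

	fsType := strings.TrimSpace(string(out))
	if !strings.HasPrefix(fsType, "ext") {
		return fmt.Errorf("%q contains %q, not an ext filesystem", devPath, fsType)
	}

	// With no explicit size argument, resize2fs grows the filesystem to fill the device.
	if out, err := exec.Command("resize2fs", devPath).CombinedOutput(); err != nil {
		return fmt.Errorf("resize2fs %s failed: %w (%s)", devPath, err, out)
	}

	return nil
}

func main() {
	if err := growExt4("/dev/rbd0"); err != nil {
		fmt.Println(err)
	}
}
```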
Same for …
Signed-off-by: Kadin Sayani <[email protected]>
Signed-off-by: Kadin Sayani <[email protected]>
I don't mind (too much) having this feature land in a per-driver fashion. However, I suspect/hope that Ceph is the special case here and all our other drivers would support live growing. I didn't hear back from you regarding the easy-to-test …
This PR adds support for resizing (growing) VM disks without a reboot when using ZFS or LVM storage backends.
Resolves #13311.