
ERROR (refind_btrfs.state_management.refind_btrfs_machine/refind_btrfs_machine.py/run): Subvolume '@' is itself a snapshot #61

Open
Martan404 opened this issue Sep 5, 2024 · 12 comments
Labels: bug (Something isn't working)

@Martan404 commented Sep 5, 2024

I have an Arch system with BTRFS snapshots created by snapper and the standard Arch BTRFS subvolume layout. Previously I used GRUB with grub-btrfs as my bootloader, and now I use rEFInd instead.

When I try to run refind-btrfs I get this error:

Initializing the block devices using lsblk.
Initializing the physical partition table for device '/dev/sda' using lsblk.
Initializing the live partition table for device '/dev/sda' using findmnt.
Initializing the physical partition table for device '/dev/sdb' using lsblk.
Initializing the live partition table for device '/dev/sdb' using findmnt.
Initializing the physical partition table for device '/dev/nvme0n1' using lsblk.
Initializing the live partition table for device '/dev/nvme0n1' using findmnt.
Found the ESP mounted at '/boot' on '/dev/nvme0n1p1'.
Found the root partition on '/dev/nvme0n1p2'.
Found a separate boot partition on '/dev/nvme0n1p1'.
Searching for snapshots of the '@' subvolume in the '/.snapshots' directory.
Found subvolume '@' mounted as the root partition.
ERROR (refind_btrfs.state_management.refind_btrfs_machine/refind_btrfs_machine.py/run): Subvolume '@' is itself a snapshot (parent UUID - 'b1004b3f-ebee-0d4f-bc30-f8d763023dda'), exiting...

I am assuming it might be because of how I restore my snapshots? My system is not read-only and there is no reason I cannot make new snapshots of it.

Here's the script I use to restore a previous snapshot after I have booted into it from GRUB:

#!/bin/bash
# shellcheck disable=SC2162,SC2086
snapshot_layout="arch"
snap_manager="snapper"

echo -e "
                        Snapshot rollback script
-------------------------------------------------------------------------"

# Read the root filesystem UUID and the booted snapshot's number from the
# kernel command line.
root_disk=$(awk '{sub("root=UUID=", "", $2); print $2}' /proc/cmdline)
snapshot_number=$(awk -F '/' '{print $3}' /proc/cmdline)

# Build the snapshot's path (relative to the @snapshots subvolume) based on
# the chosen layout and snapshot manager.
if [[ $snapshot_layout == "snapper" ]]; then
	snapshot_path="$snapshot_number/snapshot"
elif [[ $snapshot_layout == "arch" ]] && [[ $snap_manager == "snapper" ]]; then
	snapshot_path="$snapshot_number/snapshot"
elif [[ $snapshot_layout == "arch" ]] && [[ $snap_manager == "yabsnap" ]]; then
	snapshot_path="$snapshot_number"
fi

echo -e "Mounting root on /mnt"

sudo mount "/dev/disk/by-uuid/$root_disk" /mnt

echo -e "-------------------------------------------------------------------------"
echo -e "Moving broken root"

sudo mv /mnt/@ /mnt/@broken

echo -e "-------------------------------------------------------------------------"
echo -e "Setting snapshot as root"

sudo btrfs subvolume snapshot /mnt/@snapshots/$snapshot_path /mnt/@ && success="yes"

echo -e "-------------------------------------------------------------------------"
echo -e "Removing broken root"

[[ $success == "yes" ]] && sudo rm -rf /mnt/@broken

if [ -e "/mnt/@/var/lib/pacman/db.lck" ]; then
	echo -e "-------------------------------------------------------------------------"
	echo -e "Removing pacman db.lck"

	sudo rm /mnt/@/var/lib/pacman/db.lck
fi

echo -e "-------------------------------------------------------------------------"
echo -e "Unmounting /mnt"

sudo umount -R /mnt

read -p "Press any key to reboot..."
reboot
@Venom1991 self-assigned this Sep 5, 2024
@Venom1991 added the "help wanted" (Extra attention is needed) label Sep 5, 2024
@Venom1991 (Owner)

Hi, the problem is that the snapshot still retains its parent subvolume's UUID even after you've restored it, even though its parent probably doesn't even exist anymore. I have to rework that validation.

Anyhow, you can turn this check off by setting the exit_if_root_is_snapshot config option to "false".
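
For reference, a minimal excerpt of what that might look like (assuming the default /etc/refind-btrfs.conf location and its TOML syntax):

# /etc/refind-btrfs.conf (excerpt)
# Do not exit when the mounted root subvolume is itself a snapshot.
exit_if_root_is_snapshot = false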

@Martan404 (Author) commented Sep 5, 2024

Thanks, I will change the config and have a thorough check through it! For the validation, would it be possible to just check whether the system is read-only or not? Try to write to a file or something like that.

@Venom1991 (Owner)

This tool is meant for booting into r/w snapshots, so that kind of test would always fail.
Anyhow, an idea I've gotten from somebody else is to simply have a config option which defines the expected root subvolume's name, something like:
root_subvolume_name = "@"

This string could then be compared with the actual root subvolume's name during runtime.
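
As a rough illustration of that idea (this is not the actual refind-btrfs code; the function and parameter names below are made up), the runtime check could look something like this:

# Hypothetical sketch of the proposed validation, not refind-btrfs's real API.
def validate_root_subvolume(actual_name: str, expected_name: str = "@") -> None:
    # "expected_name" would come from the proposed root_subvolume_name option.
    if actual_name != expected_name:
        raise RuntimeError(
            f"Subvolume '{actual_name}' does not match the expected root "
            f"subvolume '{expected_name}', exiting..."
        )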

@Martan404 (Author)

Oh, I see! It sounds like you've got a good solution there. Not too complicated.

@Martan404 (Author)

May I ask how you are restoring your snapshots to the root subvolume? I am having a hard time getting my rollback script to work with refind-btrfs because it doesn't boot the read-only snapshots that snapper or yabsnap create.

@Venom1991 (Owner) commented Sep 8, 2024

Given a snapshot boot stanza named "Arch Linux - Stable (rwsnap_2020-12-14_05-00-00_ID502)", ID502 is the ID of the original snapshot created by, for example, Snapper, so that's the one you want to restore.
Also, if you've set this option to "true", or if the snapshot was writable to begin with, you also have to change its fstab file so that the / mount point points to the actual root subvolume you want to restore. Otherwise there's no need to bother with that.
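
For illustration only (the UUID below is a placeholder), the / line in the restored snapshot's fstab should end up pointing back at the real root subvolume, e.g.:

# /etc/fstab inside the restored subvolume (illustrative values)
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  btrfs  rw,noatime,subvol=/@  0 0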

@Martan404 (Author)

I got my scripts working thanks to modify_read_only_flag. I made one script for quick restoration of the booted snapshot and another for choosing which snapshot to restore.
Is there a particular reason you need the snapshots to be writable? grub-btrfs just boots the read-only snapshots.
When booting from a snapshot located on the root subvolume, as it did with modify_read_only_flag = false, my system would completely crash when I deleted the previous root. When booting from a snapshot located on the @snapshots subvolume, you can delete the previous root immediately.

@Venom1991 (Owner) commented Sep 11, 2024

Is there a particular reason you need the snapshots to be writable? grub-btrfs just boots the read-only snapshots.

Yes, that feature (booting read-only snapshots directly) inherently adds considerable complexity, and I wanted to keep my implementation reasonably simple.
I might consider doing something similar to what grub-btrfs does.

Please close the issue if you're not experiencing problems anymore.

@Martan404 (Author)

Okay, thanks for letting me know! Shouldn't this issue be left open until the root snapshot validation is reworked? Or do you want me to close it?

@Venom1991 (Owner)

Shouldn't this issue be left open until the root snapshot validation is reworked?

Agreed, leave it open. 👍🏼

@Venom1991 added the "bug" (Something isn't working) label and removed the "help wanted" (Extra attention is needed) label Sep 11, 2024
@theepicflyer

Hey @Venom1991, just wanting to chime in as I am facing this issue too. I want to raise that I seem to get a different btrfs-snapshot-stanzas/arch_btrfs_vmlinuz-linux.conf if I set exit_if_root_is_snapshot to false. For context, I did this first on my PC and restored a backup to test it out, at which point I got the "subvolume is itself a snapshot" error. I then set it up again on my laptop, without the simulated restoration. Here's what I did, as best as I can remember:

  1. Created a manual boot stanza and tested that it works.
  2. Ran refind-btrfs successfully.
  3. Booted into one of the backups.
  4. Restored the backup to root by doing the following:
    4.1. mv @ to @_broken
    4.2. snapshot @snapshots/472/snapshot (the snapper-created snapshot, not the refind-btrfs one) to @
  5. Booted normally again (into the new @).
  6. Ran refind-btrfs, running into the "is itself a snapshot" error.
  7. Changed exit_if_root_is_snapshot to false and ran refind-btrfs again.

This is the new arch_btrfs_vmlinuz-linux.conf:

menuentry "Arch Linux (rwsnap_2024-09-16_11-37-06_ID750)" {
    icon /EFI/refind/themes/refind-theme-regular/icons/384-144/os_arch.png
    volume ARCH_BTRFS
    loader /@/root/.refind-btrfs/rwsnap_2024-09-16_11-37-06_ID750/boot/vmlinuz-linux
    initrd /@/root/.refind-btrfs/rwsnap_2024-09-16_11-37-06_ID750/boot/initramfs-linux.img
    options "root=PARTUUID=3e5dc849-bb6d-4e06-aa04-1b6766e74c33 rootflags=subvol=@/root/.refind-btrfs/rwsnap_2024-09-16_11-37-06_ID750 rw add_efi_memmap loglevel=3 quiet"
    submenuentry "Arch Linux (rwsnap_2024-09-16_11-36-14_ID749)" {
        loader /@/root/.refind-btrfs/rwsnap_2024-09-16_11-36-14_ID749/boot/vmlinuz-linux
        initrd /@/root/.refind-btrfs/rwsnap_2024-09-16_11-36-14_ID749/boot/initramfs-linux.img
        options "root=PARTUUID=3e5dc849-bb6d-4e06-aa04-1b6766e74c33 rootflags=subvol=@/root/.refind-btrfs/rwsnap_2024-09-16_11-36-14_ID749 rw add_efi_memmap loglevel=3 quiet"
    }
    submenuentry "Arch Linux (rwsnap_2024-09-16_11-00-14_ID747)" {
        loader /@/root/.refind-btrfs/rwsnap_2024-09-16_11-00-14_ID747/boot/vmlinuz-linux
        initrd /@/root/.refind-btrfs/rwsnap_2024-09-16_11-00-14_ID747/boot/initramfs-linux.img
        options "root=PARTUUID=3e5dc849-bb6d-4e06-aa04-1b6766e74c33 rootflags=subvol=@/root/.refind-btrfs/rwsnap_2024-09-16_11-00-14_ID747 rw add_efi_memmap loglevel=3 quiet"
    }
}

My concern is that the loader and initrd fields contain the /@/root/.refind-btrfs/rwsnap_2024-09-16_11-37-06_ID750 part. I'm not sure if this is the expected behaviour.

Redoing the setup process on my laptop, without the restoration, this is what I get, which is closer to what I expect. Ignore the lack of a /boot prefix for loader and initrd; on that setup my ESP is mounted at /boot.

menuentry "Arch Linux (rwsnap_2024-09-16_12-00-23_ID304)" {
    icon /EFI/refind/themes/refind-theme-regular/icons/384-144/os_arch.png
    volume SYSTEM_DRV
    loader /vmlinuz-linux
    initrd /initramfs-linux.img
    options "root=PARTUUID=1e244bdb-600d-4fee-9890-3516030bc612 rootflags=subvol=@/root/.refind-btrfs/rwsnap_2024-09-16_12-00-23_ID304 rw add_efi_memmap loglevel=3 quiet"
    submenuentry "Arch Linux (rwsnap_2024-09-14_10-02-47_ID303)" {
        options "root=PARTUUID=1e244bdb-600d-4fee-9890-3516030bc612 rootflags=subvol=@/root/.refind-btrfs/rwsnap_2024-09-14_10-02-47_ID303 rw add_efi_memmap loglevel=3 quiet"
    }
    submenuentry "Arch Linux (rwsnap_2024-09-14_09-24-57_ID302)" {
        options "root=PARTUUID=1e244bdb-600d-4fee-9890-3516030bc612 rootflags=subvol=@/root/.refind-btrfs/rwsnap_2024-09-14_09-24-57_ID302 rw add_efi_memmap loglevel=3 quiet"
    }
    submenuentry "Arch Linux (rwsnap_2024-09-14_09-24-54_ID301)" {
        options "root=PARTUUID=1e244bdb-600d-4fee-9890-3516030bc612 rootflags=subvol=@/root/.refind-btrfs/rwsnap_2024-09-14_09-24-54_ID301 rw add_efi_memmap loglevel=3 quiet"
    }
    submenuentry "Arch Linux (rwsnap_2024-09-14_09-19-55_ID300)" {
        options "root=PARTUUID=1e244bdb-600d-4fee-9890-3516030bc612 rootflags=subvol=@/root/.refind-btrfs/rwsnap_2024-09-14_09-19-55_ID300 rw add_efi_memmap loglevel=3 quiet"
    }
}

Hope this helps you in fixing this!

@Venom1991 (Owner) commented Sep 17, 2024

@theepicflyer

The second generated boot stanza (the one on your laptop) looks the way it does because you've set up a separate /boot partition (it doesn't matter that it also serves as the ESP). Also, due to that kind of setup, adjusting the paths of the "loader" and "initrd" options would render the generated boot stanza unusable.
This is expected behavior. Have a look at this config option. Also, here's a quote from this project's description:

In case a separate /boot partition is detected only the fields relevant to / are modified ("subvol" and/or "subvolid") while the "loader" and "initrd" fields (the former may also be nested within the "options" field) remain unaffected.
It goes without saying that the consequence of having this kind of a setup is being unable to mitigate a problematic kernel upgrade by simply booting into a snapshot.
