Adjust qubesd.service ordering once more #630
Conversation
Codecov Report
Attention: Patch coverage is
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #630      +/-   ##
==========================================
+ Coverage   69.31%   69.32%   +0.01%
==========================================
  Files          58       58
  Lines       11994    12005      +11
==========================================
+ Hits         8314     8323       +9
- Misses       3680     3682       +2
Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
This won't work, as LVM activation is event-based (it doesn't use lvm2-activation-generator anymore). The event-generated unit looks like this:
Note there is no common unit that qubesd.service could be ordered after.
I think qubesd needs to not assume that every volume in a pool is active. That’s a good idea anyway for performance reasons. The current assumption is technical debt that needs to be paid off.
I don't consider it technical debt; there are many benefits to keeping volumes activated all the time. In my experiments, the major disadvantage of activating on demand is the inability to check a volume's size/usage (LVM does not report this information for inactive volumes). Also, in my experiment with activating on demand (#382) it turned out there are many more corner cases and more potential for race conditions (since you can't rely on /dev/ file existence anymore, and calling lvm each time is slow, you need to rely on the cache much more). At the same time, it's unclear whether this approach brings any benefit (some rough tests show it isn't significantly faster, if at all) - it looks like just extra complexity...
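To make the size/usage limitation above concrete, here is a minimal sketch (not from the actual codebase; the function name and the volume group / volume names are placeholders) showing that lvs reports data_percent for a thin volume only while it is active:

import subprocess

def thin_usage(vg, lv):
    # Query activation state and usage of a single thin LV via lvs.
    fields = subprocess.check_output(
        ["lvs", "--noheadings", "-o", "lv_attr,data_percent", f"{vg}/{lv}"],
        text=True,
    ).split()
    active = fields[0][4] == "a"                       # 5th lv_attr char: activation state
    percent = fields[1] if len(fields) > 1 else None   # data_percent is empty when inactive
    return active, percent

# e.g. (True, "4.45") for an active thin volume, (False, None) for an inactive one
print(thin_usage("qubes_dom0", "vm-work-private"))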
How many volumes were active in these tests?
AFAIR I did that on a test system with 100+ VMs |
Force-pushed from 0288015 to d76622f
Normally, a VM gets snapshots of its volumes anyway, so it makes little sense to enforce that they are already active. This requires several changes:
1. Include inactive volumes in the size_cache (those will have 0 as usage, because inactive volumes do not report usage).
2. Do not check whether a volume is active in the verify() function.
3. With the above changed, check for volume existence by looking at size_cache instead of /dev/ node existence. For now, do this only in code paths related to VM startup.
4. In the rare case of not using a snapshot (or for a new volatile volume), activate the volume if needed.
5. Refresh size_cache after changing volumes.
Do not deactivate volumes on stop, to not lose information about usage.
All of that should solve an issue where qubesd (and possibly some VMs) is started while LVM is still activating volumes. A simpler (and more reliable) solution would be to order qubesd.service after activation of all volumes present at boot, but unfortunately the current LVM + systemd integration doesn't provide anything to set such ordering.
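A rough sketch of how points 1, 3 and 4 above could look. This is not the actual qubes-core-admin implementation, just an illustration under assumptions: build_size_cache, volume_exists, ensure_active and the vg_name/lv_name parameters are made up for this example; the lvs/lvchange invocations themselves are standard LVM2 commands.

import json
import subprocess


def build_size_cache(vg_name):
    # Point 1: include inactive volumes in the cache, with 0 usage.
    # 'lvs --reportformat json' is available in lvm2 >= 2.02.158.
    out = subprocess.check_output([
        "lvs", "--reportformat", "json", "--units", "b", "--nosuffix",
        "-o", "lv_name,lv_attr,lv_size,data_percent", vg_name,
    ])
    cache = {}
    for lv in json.loads(out)["report"][0]["lv"]:
        active = lv["lv_attr"][4] == "a"   # 5th attr char: activation state
        size = int(lv["lv_size"])
        # Inactive thin volumes report no data_percent; fall back to 0 usage.
        usage = int(size * float(lv["data_percent"]) / 100) if lv["data_percent"] else 0
        cache[lv["lv_name"]] = {"size": size, "usage": usage, "active": active}
    return cache


def volume_exists(size_cache, lv_name):
    # Point 3: existence is decided from the cache, not from the /dev/ node,
    # which is missing for inactive volumes.
    return lv_name in size_cache


def ensure_active(vg_name, lv_name, size_cache):
    # Point 4: activate only when the volume is going to be used directly
    # (no snapshot / new volatile volume).
    if not size_cache[lv_name]["active"]:
        subprocess.check_call(["lvchange", "-ay", f"{vg_name}/{lv_name}"])
        size_cache[lv_name]["active"] = True

The design choice matches the description above: the cache is refreshed after volume changes, and volumes are never deactivated on stop, so usage information stays available.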
Force-pushed from d76622f to 6dffbf9
OpenQA test summary
Complete test suite and dependencies: https://openqa.qubes-os.org/tests/overview?distri=qubesos&version=4.3&build=2024110718-4.3&flavor=pull-requests
Test run included the following:
New failures, excluding unstable (compared to: https://openqa.qubes-os.org/tests/overview?distri=qubesos&version=4.3&build=2024091704-4.3&flavor=update)
Failed tests: 9 failures
Fixed failures (compared to: https://openqa.qubes-os.org/tests/112766#dependencies): 201 fixed
Unstable tests
Order it after lvm2-activation.service, so that all volume groups present at boot are already activated (and the list of their volumes can be cached).