libvirtd: Add support for remote libvirt URIs #824
Conversation
@@ -47,6 +47,15 @@ in
     '';
   };

+  options = {
+    deployment.libvirtd.URI = mkOption {
+      type = types.str;
Could this benefit from types.enum?
This may or may not reveal how little I know about libvirt :-)
> Could this benefit from types.enum?
Probably not, this option can be an arbitrary URI string: https://libvirt.org/uri.html.
It would be nice to have types.URL and types.URI though. :)
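Since libvirt accepts arbitrary connection URIs, `types.str` is the pragmatic choice. Just to illustrate what a hypothetical `types.URI`-style check might verify, here is a minimal Python sketch (the function name and the check itself are illustrative, not part of nixops or libvirt):

```python
from urllib.parse import urlparse

# Illustrative sanity check for libvirt-style connection URIs
# (e.g. "qemu:///system", "qemu+ssh://user@host/system").
# libvirt itself accepts many more forms; this only checks for a scheme.
def looks_like_libvirt_uri(uri):
    parsed = urlparse(uri)
    # A libvirt URI always carries a scheme such as "qemu" or "qemu+ssh";
    # the host part is optional (local URIs like qemu:///system omit it).
    return bool(parsed.scheme)

print(looks_like_libvirt_uri("qemu:///system"))          # True
print(looks_like_libvirt_uri("qemu+ssh://host/system"))  # True
print(looks_like_libvirt_uri("not a uri"))               # False
```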
Force-pushed from 67ea37c to 749cef8 (Compare)
@@ -217,10 +224,15 @@ def read_file(stream, nbytes, f):
         stream.sendAll(read_file, f)
         stream.finish()

+    def _qemu_executable(self):
+        domaincaps_xml = self.conn.getDomainCapabilities(
+            emulatorbin=None, arch='x86_64', machine=None, virttype='kvm',
Hardcoding the arch could be a problem, couldn't it?
The current version has qemu-system-x86_64 hardcoded, so I decided to leave it as-is for now and maybe address it later in a separate PR. I didn't want to include too many unrelated changes in a single PR.
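For context, the domain-capabilities document that `getDomainCapabilities` returns carries the emulator path in its top-level `<path>` element, which is what `_qemu_executable` needs to extract. A minimal sketch of that parsing, using a made-up sample of the XML shape (the real document contains many more elements):

```python
import xml.etree.ElementTree as ET

# Reduced sample of the XML returned by conn.getDomainCapabilities();
# only the elements relevant to finding the emulator are shown.
SAMPLE_DOMCAPS = """
<domainCapabilities>
  <path>/run/current-system/sw/bin/qemu-system-x86_64</path>
  <domain>kvm</domain>
  <arch>x86_64</arch>
</domainCapabilities>
"""

def emulator_path(domcaps_xml):
    # The top-level <path> element holds the emulator binary
    # selected by libvirt for the requested arch/virttype.
    return ET.fromstring(domcaps_xml).find("./path").text

print(emulator_path(SAMPLE_DOMCAPS))
# /run/current-system/sw/bin/qemu-system-x86_64
```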
nixops/backends/libvirtd.py (Outdated)
-        qemu_executable = "qemu-system-x86_64"
-        qemu = spawn.find_executable(qemu_executable)
-        assert qemu is not None, "{} executable not found. Please install QEMU first.".format(qemu_executable)
+        qemu = self._qemu_executable()
Wouldn't it be better to keep the assert? I would miss the error message.
You are right: if QEMU is not installed, the call to libvirt.open() will fail with an unclear exception like "no connection driver available for qemu:///system". I'll try to handle the error and output a useful message to the user.
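One way such handling could look is to rewrap the failure with an actionable hint. A sketch under assumptions: `connect_fn` stands in for `libvirt.open`, and `failing_open` simulates the unclear driver error, so nothing here is the actual nixops implementation:

```python
def open_connection(uri, connect_fn):
    # connect_fn stands in for libvirt.open (hypothetical wiring);
    # rewrap any failure with a hint the user can act on.
    try:
        return connect_fn(uri)
    except Exception as exc:
        raise RuntimeError(
            "Could not connect to libvirt at '{0}': {1}. "
            "Is QEMU installed on the target host?".format(uri, exc))

def failing_open(uri):
    # Simulates the unclear error libvirt raises when the qemu driver
    # is missing, e.g. because QEMU is not installed.
    raise Exception("no connection driver available for " + uri)

try:
    open_connection("qemu:///system", failing_open)
except RuntimeError as err:
    msg = str(err)
    print(msg)
```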
Sounds interesting; do I need nixops on the remote node, or is just libvirt fine?
Only libvirtd is needed on the remote host. I'm currently testing it with remote libvirtd on Debian and it works fine.
I fear this might be time-consuming since I often had permission problems with libvirt/qemu, but I want to give it a try anyway (btw, any advice?). https://libvirt.org/remote.html looks like a good resource for remote usage. I quickly tried but it failed to connect. Does it need to be a hostname?
Should probably be :)
When deploying with this PR, I run into an error (I don't with master nixops).
I am on a slightly modified nixos-unstable so this could be the reason why. Hope someone else can try, or I might try on stock nixos-unstable. What version of nixpkgs do you use?
@teto Rebased the PR against recent master, hope that helps.
Before I test again, which version of nixpkgs do you use, so that I can try on that one? :)
@flokli thanks for the pointer, it seems the problem is indeed gone. With current nixos-unstable 5402412b97247bcc0d2b693e159d55d114d1327b and your PR rebased on nixops a232fc5 (https://github.com/teto/nixops/tree/libvirt_uri), I get
I discovered the lslocks command, but even with the -u or -J flags I could not get the full paths; I assume it's the correct one.
My deployment consists of 2 identical machines, with just one supposed to be remote.
Maybe building the 2 identical drives is the problem?
This is a bug in the current nixos-unstable; there is a PR for that: NixOS/nixpkgs#34052. Hope it is merged soon.
Ok, I give up ^^ tell me when planets align :>
@teto planets aligned, and the PR landed in nixpkgs master. Can you test again?
Takes too long to compile master, I'll wait for it to make it to a channel first.
@teto NixOS/nixpkgs#34052 should have landed in
I've nixos-unstable with #34052 in ~/nixpkgs3 and ran
Not sure what broke there. Maybe this file already exists, and/or is in use somehow? I did run

What still needs to be tested is the code after the initial setup (the ssh access part). Nixops tries to get a 'primary ip' to ssh into by using

I was only able to test it on hardware with this configuration, so nixops couldn't find an IP to ssh into and kept waiting for an IP address. When using the nixops-generated ssh privkey, I was however able to log in, so everything before seems to have worked.

Additionally, for convenience, we might need to tunnel the following ssh connection via the remote libvirt host, as the discovered IPs might be private and not routed to the libvirt host from elsewhere (or at least try to use a public IPv6 address if available). If we get the tunneling working, IPv6 link-local might be the best choice, as we can calculate it from the MAC address. I'm not sure if it's possible to abuse the libvirt connection as a jumphost to internal IPs. @erosennin, do you have an idea?
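The link-local idea works because an EUI-64 IPv6 link-local address is derived mechanically from the MAC (assuming the guest actually configures EUI-64 addresses rather than privacy/stable-privacy ones). A sketch of the derivation; the sample MAC is just QEMU's conventional 52:54:00 prefix:

```python
import ipaddress

def mac_to_link_local(mac):
    # EUI-64 construction: flip the universal/local bit of the first
    # octet, insert ff:fe in the middle, and prefix with fe80::/64.
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02  # flip the U/L bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
    groups = ["{0:02x}{1:02x}".format(eui64[i], eui64[i + 1])
              for i in range(0, 8, 2)]
    # ipaddress normalizes to the canonical compressed form.
    return str(ipaddress.IPv6Address("fe80::" + ":".join(groups)))

print(mac_to_link_local("52:54:00:12:34:56"))
# fe80::5054:ff:fe12:3456
```

Note that a link-local address is only reachable via a specific interface (e.g. `fe80::...%br0`), which is why this pairs naturally with tunneling through the libvirt host.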
I upgraded my remote VM to nixos-unstable even though I don't think it's necessary, but deployment still fails locally. When using master, it works.
Have you tried a deployment of 2 VMs with similar config?
Yes, I rebased on master. I added two commits below that clean up imports and fix the above issue:
@erosennin, could you cherry-pick those in here as well, and rebase to latest master?
@flokli thanks, added your changes and rebased.
Is there anything that can be done to move this PR forward? Really looking forward to having it merged.
@mbrgm I'll try to find some time for it this week.
bump (sorry xD).
@teto the PR is not really backward compatible, right? (The imageDir option was removed.) Is there a trivial fix for people, or can we support both storagePool and imageDir to maintain backward compatibility? I don't know much about the libvirtd backend, so the changes look fine except for the remark above. If the tests for the libvirtd backend succeed, I don't see any reasons (other than the above) against merging.
I suppose that when imageDir is set, _make_domain_xml could create the pool?
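That backward-compatibility route would amount to synthesizing a directory-backed storage pool from the old imageDir value. A sketch of what such a pool definition could look like (the pool name and directory are illustrative; only the general shape of a libvirt 'dir' pool is assumed here):

```python
def dir_pool_xml(pool_name, image_dir):
    # Minimal libvirt storage pool definition backed by a plain
    # directory; something of this shape could be passed to
    # conn.storagePoolDefineXML() when only imageDir is configured.
    return ("<pool type='dir'>"
            "<name>{0}</name>"
            "<target><path>{1}</path></target>"
            "</pool>").format(pool_name, image_dir)

print(dir_pool_xml("nixops-images", "/var/lib/libvirt/images"))
```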
@erosennin any intent to complete this? That would be great! I tried to keep backwards compatibility via the following patch that adds a
I solved the permission problem by hardcoding the permissions in the XML for now as teto:users, and it worked (according to the docs, libvirtd should use the parent's permissions when no permission is specified, but that was not the case for me). For the curious, here is my messy branch https://github.com/teto/nixops/tree/qemu_agent with fixes for this PR and #922. I hope that @erosennin can complete this PR, else I might take a stab at it when I get time.
@erosennin @teto is anyone willing to complete this PR? Could I do anything to help?
@jokogr that would be great! I don't have the time to untangle the 2/3 PRs I merged into https://github.com/teto/nixops/tree/qemu_agent, but with the info I've given on this thread, you should be able to complete the PR without too much hassle.
Otherwise, deployments with multiple VMs try to write to the same image. Also, do the temp_image_path calculation only once.
Does anyone know how much work is left here to get this ready to merge?
I've tried this out by rebasing it to master and it works well, good job! The only caveat seems to be that libvirt doesn't come anymore with a
The custom network that is present in the documentation seems unnecessary with the vanilla libvirt config, because the
To connect to the VM network with address 192.168.122.0/24 I've used the
I'd love to see this PR merged in order to do a remote deployment!
@makefu in the meantime I use the following code to have a specialized version of NixOps with this patch included:

```nix
let
  localPkgs = import <nixpkgs> {};
  nixopsPkg = builtins.fetchTarball {
    url = "https://github.com/azazel75/nixops/archive/remote-libvirt.tar.gz";
    sha256 = "1mg1bsgwrydyk2i0g5pzc35kr6ybjrihvfns8kd8d6b74qxnyh40";
  };
  libvirtNixops = (import "${nixopsPkg}/release.nix" {}).build.${localPkgs.system};
in
# rest of shell.nix
```
@azazel75 libvirt being without a
What else is preventing this PR from being merged? @domenkozar @AmineChikhaoui
This pull request has been mentioned on Nix community. There might be relevant details there: https://discourse.nixos.org/t/what-am-i-doing-wrong-here/2517/1
Any progress? This would be really nice to get included.
Thanks everybody and sorry that it took a while to get merged.
Wow! That's good news! May motivate me to share/update some PRs.
Wow @AmineChikhaoui thanks! Keep up the good trend and check out #1123 too; it's a one-liner but it's very useful!
This PR adds support for deploying to remote libvirtd hosts via qemu+ssh://... or qemu+tcp://... URIs, in addition to the local qemu:///system default. The QEMU executable is now discovered via the libvirt connection instead of assuming a local /run/current-system/sw/bin/qemu-system-x86_64.