local: virtualize all the things #45
Comments
@adamreese pointed to https://github.com/linuxkit/linuxkit/blob/master/docs/platform-hyperkit.md, which may be a good jumping-off point to spike this idea.
Could we Dockerise it? Or do we need a higher degree of virtualisation?
I recall someone mentioning that Nomad doesn't run well in Docker. Whether it's because of Docker-in-Docker issues or something else, I don't know, but it's worth exploring further.
That sounds plausible. This chap (https://github.com/vancluever/docker-nomad) claims to have got it working, but it seems to come with lengthy instructions and caveats. Hashi not releasing an official image might be a hint that it's not optimal...
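For context, a containerised setup along the lines of that repo would look roughly like the following. This is a sketch, not an official image: the image tag is hypothetical, and the privilege/networking flags illustrate why Nomad-in-Docker tends to be awkward rather than prescribe a working configuration.

```shell
# Illustrative only: the image tag is a placeholder; see
# vancluever/docker-nomad for the real build steps and caveats.
# --privileged: Nomad's task drivers need broad host privileges.
# --network host: exposes the agent's HTTP API (port 4646) directly.
# "agent -dev" runs a single-node server+client in dev mode.
docker run -d --name nomad-dev --privileged --network host \
  vancluever/nomad:latest agent -dev
```

The need for `--privileged` and host networking is precisely the kind of caveat those lengthy instructions cover.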
Hello all, I have tried to do something similar with Vagrant (VMware on Windows) plus some modifications to the local installer script and port forwarding. So far, with a simple `vagrant up` (and a few extra commands, because I have not finished yet) I can:
But I still have some issues with:
I chose Vagrant to get a full VM. This way I have a full dev/testing environment (with a local DB and other local software interacting with my Spin project) that I can kill and redeploy very quickly across several computers with no further setup.
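A minimal sketch of the Vagrant approach described above could look like this. The box name, forwarded ports, and provisioning script path are all assumptions for illustration, not the actual setup:

```ruby
# Vagrantfile sketch -- all values here are illustrative assumptions.
Vagrant.configure("2") do |config|
  # Base box: any recent Linux box the local installer supports.
  config.vm.box = "generic/ubuntu2004"

  # Forward the ports the demo exposes to the host.
  # 4646: Nomad HTTP API (example); 8080: demo app (example).
  config.vm.network "forwarded_port", guest: 4646, host: 4646
  config.vm.network "forwarded_port", guest: 8080, host: 8080

  # Run the local installer inside the VM on first boot.
  # "install.sh" is a placeholder for the actual installer script.
  config.vm.provision "shell", path: "install.sh"
end
```

With something like this, `vagrant destroy` gives the same clean-slate property as tearing down an EC2 instance.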
A number of issues have cropped up using the local installer, mostly related to local system setup, permissions, and zombie processes causing indeterminate behaviour on subsequent demo runs.
A short summary of these issues can be found here: adding `--preserve-env=PATH` to the `sudo` command (nomad-local-demo#13).
Additionally, some engineers have questioned whether "tire kickers" would be comfortable installing a large number of tools on their system just to run a demo.
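Concretely, the PATH workaround referenced above amounts to invoking the installer like this (`install.sh` is a placeholder name for the local installer script):

```shell
# sudo normally resets PATH to its secure_path default, so tools
# installed under the user's home directory (nomad, consul, etc.)
# stop resolving inside the root session.
# --preserve-env=PATH carries the caller's PATH through to root.
sudo --preserve-env=PATH ./install.sh
```

This is exactly the kind of host-specific fiddling a VM-based installer would make unnecessary.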
These issues have led to a very frustrating experience. In comparison, the aws installer is a breeze to debug and clean up because it all runs on a single EC2 instance. When the EC2 instance is torn down, the entire demo is cleaned up. Re-running the demo spins up a new EC2 instance. Orphaned processes from previous demo runs never collide.
IMO this class of issues would be moot if we virtualized the local installer. Users would only require the VM driver (xhyve for macOS, WSL 2 for Windows, and QEMU for Linux) and the `spin` CLI. When the VM is torn down, the entire demo is cleaned up, ready for a fresh run.