I have been using the managed-servers plugin for a few days, and I think I have found a rather odd bug/feature.

My build process starts from a libvirt base image, built via libvirt and Puppet. This setup uses NFS and exports mounts to the guest node.

Once the guest is built, I have found that when I run provisioning with managed-server over ssh/rsync, the install appears to corrupt the data on the NFS shares.

The ssh managed-servers provisioner doesn't seem to check whether the directories it wants to use are NFS-mounted or not. Is there anything that could be done to force it to use its own, new directories?

A workaround that I think will work is to make sure the NFS partitions are not mounted on the guest before provisioning.
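Until the plugin grows such a check, a pre-provision guard on the guest could refuse to sync into an NFS-backed path. This is only a sketch, not part of the plugin: the target path is an assumption (pass whichever directory the provisioner rsyncs into), and it relies on GNU coreutils' `stat -f -c %T`, which prints the filesystem type backing a path.

```shell
#!/bin/sh
# Hypothetical pre-provision check: abort if the sync target is NFS-mounted,
# so rsync cannot clobber the shared data. Target path defaults to /tmp here
# purely for illustration; adjust to the plugin's actual sync directory.
TARGET="${1:-/tmp}"

# GNU stat with -f reports the filesystem (not the file); %T is its type name,
# e.g. "ext2/ext3", "tmpfs", or "nfs".
fstype="$(stat -f -c %T "$TARGET")"

if [ "$fstype" = "nfs" ]; then
    echo "refusing to provision: $TARGET is NFS-mounted ($fstype)" >&2
    exit 1
fi
echo "ok: $TARGET is on a local filesystem ($fstype)"
```

Running this (or an equivalent `findmnt -T "$TARGET" -o FSTYPE` check) before kicking off the managed-server provision would catch the case where the NFS mounts are still attached.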