-
Hi, I am wondering how to create persistent disks outside of a `proxmox_virtual_environment_vm` resource, so that I can attach them when the VM is created and then delete them later. Without something like that, non-ephemeral storage becomes considerably more difficult and less performant (i.e. NFS, iSCSI, etc.). I'm willing to work on the implementation if needed. Thanks
Replies: 8 comments
-
Hi @mritalian! So far I haven't had such a use case in my environments. If you have some ideas on how this could be implemented directly in PVE, that would be a good starting point. Perhaps try to create this configuration in the PVE UI and document the steps, then we can see what is needed from the provider to implement it? Thanks!
-
It looks like PVE has no support for disks not owned by a VM (or a template). But there might be a workaround: one VM (101) can use disks owned by a different VM (102).
VM 102 then exists just as a container for data disks - no boot disk, no boot order, etc. Ideally PVE would have an additional VM type for this, in addition to the existing "VM" and "template". Note that using a single disk in multiple VMs at the same time would require a shared-disk clustered filesystem (like GFS2). Backups might work.
A second option is to handle data disks completely outside PVE's control: create the volumes manually (e.g. as ZFS zvols) and attach them by device path.
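The workaround described above can be sketched directly with the PVE CLI on the host. The VM IDs and the `local-zfs` datastore here are illustrative placeholders taken from this thread's examples:

```sh
# Create a stopped "holder" VM that only owns data disks and is never booted
qm create 102 --name data-holder

# Allocate a new 20 GiB volume on local-zfs and assign it to the holder VM
qm set 102 --scsi0 local-zfs:20

# Attach the holder VM's volume to the consumer VM as an extra disk
qm set 101 --scsi1 local-zfs:vm-102-disk-0

# Later, detach it from the consumer VM without touching the underlying volume
qm set 101 --delete scsi1
```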
-
Thanks for the useful info. As things stand now, it seems my only option is to create the data disks in a separate holder VM, attach them to the consumer VM at create time, and detach them again at destroy time.
As far as enhancing this provider goes, the main thing of value I see at this point is some mechanism to specify a volume to attach to the VM at create time, with the ability to remember which disks "belong" to the VM and should be deleted with it, and which were simply attached and should not be deleted with the VM. Just for comparison, the vSphere provider handles this behaviour in the provider (i.e. deleting the VM from the console always deletes all disks, irrespective of whether they were created with the VM or not, but destroying the resource through tf doesn't destroy disks that were created outside of the resource).
-
No, volumes can be created using terraform. E.g.:

```hcl
resource "proxmox_virtual_environment_vm" "data_vm" {
  ...
  vm_id   = 501
  name    = "test-data-vm"
  tags    = ["data-vm"]
  started = false

  # data disk to be used by another VM
  disk {
    datastore_id = "local-zfs"
    file_format  = "raw"
    interface    = "scsi0"
    size         = 20
  }
}
```

Volumes can be created in never-to-be-run VMs this way.
The following is a proof-of-concept showing what is possible now:

```hcl
resource "proxmox_virtual_environment_vm" "data_user_vm" {
  ...
  vm_id = 502
  name  = "test-data-user-vm"

  # boot disk
  disk {
    datastore_id = "local-zfs"
    file_format  = "raw"
    interface    = "scsi0"
    size         = 8
  }

  connection {
    type = "ssh"
    user = "root"
    # proxmox host
    host = "pve-01.lan"
  }

  # attach data disk
  provisioner "remote-exec" {
    inline = ["qm set ${self.vm_id} --scsi1 local-zfs:vm-501-disk-0"]
  }
}
```

At the moment, it has many unwanted qualities.
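Since the provisioner only runs at create time and Terraform has no visibility into the attachment, a quick sanity check on the PVE host (VM IDs as in the examples above) is to dump both VM configs and confirm they reference the same volume:

```sh
# Both configs should reference the shared volume local-zfs:vm-501-disk-0
qm config 501 | grep 'vm-501-disk-0'
qm config 502 | grep 'vm-501-disk-0'
```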
Making this actually usable would require some changes to the provider, e.g.:

```hcl
resource "proxmox_virtual_environment_vm" "data_user_vm" {
  ...
  # data disk
  disk {
    existing  = "local-zfs:vm-501-disk-0"
    interface = "scsi1"
  }
}
```
and ideally a version for volumes not managed by PVE at all (also useful for passing whole physical HDDs directly to VMs):

```hcl
resource "proxmox_virtual_environment_vm" "data_user_vm" {
  ...
  # data disk
  disk {
    existing  = "/dev/zvol/rpool/not-managed-by-pve/data-disk-123"
    interface = "scsi1"
  }
}
```
After running the proof-of-concept, it seems that when PVE destroys a VM, it destroys only disks "owned" by that VM.
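That ownership can be checked on the PVE host: `pvesm list` shows the owning VMID for each volume (assuming the `local-zfs` datastore from the examples):

```sh
# The VMID column shows which VM owns each volume;
# only those volumes are deleted together with that VM
pvesm list local-zfs
```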
Yes, that looks like a better way to wrap the `qm set` call.
Fortunately it seems that there is no need to remember that: it looks like `terraform destroy -target proxmox_virtual_environment_vm.data_user_vm` destroys only the disks owned by that VM, leaving the disks owned by the data VM in place. It seems that adding something like the proposed `existing` attribute to the provider would be enough.
-
Hi. This is very useful info. I much prefer this way of creating volumes. I will work with this new info and report back my results.
-
Because anything worth doing is worth overdoing...
This has the added benefit of not requiring SSH access, at the expense of being super fragile. Still, with your pointers the solution is very workable. That said, disk attachment is a somewhat rudimentary thing; I feel this could easily be added to the provider, and I'm willing to assist with that.
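For completeness, an SSH-free attach presumably goes through the PVE HTTP API instead of `qm` over SSH. A rough sketch with `curl`, where the host, node name, VM IDs, and API token are all placeholders:

```sh
# Attach the data VM's volume to VM 502 via the PVE API instead of SSH.
# The token, host, and node name below are placeholders.
curl -k -X PUT \
  -H "Authorization: PVEAPIToken=root@pam!terraform=<secret>" \
  --data-urlencode "scsi1=local-zfs:vm-501-disk-0" \
  "https://pve-01.lan:8006/api2/json/nodes/pve-01/qemu/502/config"
```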
-
#606 should fix it. @mritalian, could you verify that? Thanks!
-
Very cool. Will try it out.