[RC2 fixes] fix cloudinit disk slot and pm_parallel oddities #947
base: master
Conversation
```diff
@@ -456,8 +456,8 @@ func resourceVmQemu() *schema.Resource {
 		Schema: map[string]*schema.Schema{
 			"ide0": schema_Ide("ide0"),
 			"ide1": schema_Ide("ide1"),
 			"ide2": schema_Ide("ide2"),
```
Why the change from `ide3` to `ide2`?
Am I right about this change, since `ide2` should not be used directly? Or maybe it can be used, but it conflicts with cloud-init if someone wants to deploy a VM with an ISO file via Terraform?
Well, `ide2` used to be reserved for the `iso` property, but that was removed in #937. Currently only `ide3` is reserved for cloud-init, but we could move that to the new `disks` schema as well and let people choose on which device they'll mount the cloud-init disk. For now I would leave it as `ide3`, as it's more intuitive that ide 0, 1, 2 are usable instead of 0, 1, 3.
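For illustration, the layout described here would look roughly like this in the resource schema (a sketch based on the hunk quoted above; the provider's actual handling of the reserved slot may differ):

```go
Schema: map[string]*schema.Schema{
	"ide0": schema_Ide("ide0"),
	"ide1": schema_Ide("ide1"),
	"ide2": schema_Ide("ide2"),
	// "ide3" is deliberately not exposed here:
	// it stays reserved for the cloud-init disk.
},
```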
docs/resources/vm_qemu.md (Outdated)
Somewhere in the docs it should say that `ide2` is reserved, because of the change with the cloud-init disk.
```go
	}

	d.Set("reboot_required", rebootRequired)
	log.Print("[DEBUG][QemuVmCreate] vm creation done!")
	logger.Info().Str("vmid", d.Id()).Msgf("VM creation done!")
	lock.unlock()
```
There is a defer unlock on 802,803
```go
	// if vmState["status"] == "running" && d.Get("vm_state").(string) == "running" {
	// 	diags = append(diags, initConnInfo(ctx, d, pconf, client, vmr, &config, lock)...)
	// }
	lock.unlock()
```
There is a defer unlock on 1048,1042
```diff
@@ -1587,6 +1566,7 @@ func resourceVmQemuDelete(ctx context.Context, d *schema.ResourceData, meta inte
 	}

 	_, err = client.DeleteVm(vmr)
 	lock.unlock()
```
There is a defer unlock on 1320,1299
There are some `lock.unlock()` calls in there that are unnecessary due to the deferred unlocks. I don't know for sure what happens when you try to unlock an unlocked mutex.
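For reference, if the provider's lock is ultimately backed by a plain `sync.Mutex` (an assumption; `pmParallelBegin` may return something more forgiving), unlocking it twice is a fatal runtime error in Go, not a recoverable panic:

```go
package main

import "sync"

func main() {
	var mu sync.Mutex
	mu.Lock()
	mu.Unlock()
	// The second unlock aborts the program with
	// "fatal error: sync: unlock of unlocked mutex";
	// it cannot be caught with recover().
	mu.Unlock()
}
```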
Confused about changing the cloud-init disk to `ide2`; the docs don't reflect this change. Found some redundant unlocks which could cause issues.
So @Tinyblargon, regarding the unlocks: I leave the defer for failures during the execution of the create/update, and I explicitly unlock before the read if all goes well. Reading the docs about …
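One way the "defer for failure, explicit unlock on success" pattern can be made safe against the double-unlock concern raised above is an idempotent unlock. A minimal sketch, assuming nothing about the real type behind `pmParallelBegin`:

```go
package main

import "sync"

// parallelLock is a hypothetical single-use lock holder; the
// provider's actual type may look nothing like this.
type parallelLock struct {
	mu   sync.Mutex
	once sync.Once
}

func (l *parallelLock) lock() { l.mu.Lock() }

// unlock may run both explicitly on the success path and again
// via defer; sync.Once turns the second call into a no-op.
func (l *parallelLock) unlock() {
	l.once.Do(l.mu.Unlock)
}
```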
@mleone87 I get what you mean about calling the read from inside the update and create. Instead of managing all the nested locks, wouldn't it be easier to create a

```go
func resourceVmQemuRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
	pconf := meta.(*providerConfiguration)
	lock := pmParallelBegin(pconf)
	defer lock.unlock()
	return resourceVmQemuReadNoLock(ctx, d, meta)
}

func resourceVmQemuReadNoLock(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
	// everything that is in the current version of resourceVmQemuRead()
}
```
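Under that split, the create/update paths could presumably call the unlocked variant while still holding the lock, along these lines (a sketch; everything beyond the two functions quoted above is assumed):

```go
func resourceVmQemuCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
	pconf := meta.(*providerConfiguration)
	lock := pmParallelBegin(pconf)
	defer lock.unlock()

	// ... create the VM ...

	// Safe while holding the lock: the NoLock variant
	// does not try to acquire it again.
	return resourceVmQemuReadNoLock(ctx, d, meta)
}
```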
Thinking about your proposed solution, @Tinyblargon, it will not work either, since there is no free slot when entering the read function. I tested my code and it worked well in various scenarios.
I should be ready to merge this.
@mleone87 Still a bit confused about the change of the cloud-init disk slot.
@Tinyblargon In the end, I think the only part to keep is the pm_parallel fix, if it isn't fixed yet!
@mleone87 Yes, everything else has been superseded by other pull requests.