
Replacing agent VMs leaves orphaned data disks #137

Open
BC-atJG opened this issue Apr 5, 2022 · 0 comments

Description

I built an Ubuntu 18 VM with an attached Data Disk in order to capture a Managed Image for TeamCity to use. The NIC and both disks were configured to be deleted when the VM is deleted (not really relevant here, as that setting is not captured in the image). Within my Cloud Profile I configured an Agent Image that deploys resources to a Specific resource group using the Managed Image, and specified re-use of terminated VMs. All provisioned nicely.

When testing, I updated the Managed Image used within the Agent Image definition. TeamCity recognized that the existing agents no longer matched the new Managed Image in the definition and sought to replace them: the old VMs were deleted and new ones were spun up in their place. That was fantastic! But the additional Data Disks were left behind.

Here are the steps to reproduce and images to show what occurred.

  1. Define an Agent Image and spin up a couple of VMs to support load.
    (screenshots: TC_Agent_Pool_list, Resource_Group_resources)

  2. Update the Managed Image used by the Agent Image, and replace the VMs (either stop/start them or let them age out and get replaced by demand). In the screenshot below, I had stopped lin-sm-2. The virtual machine, network interface, and OS disk were deleted, but the Data Disk was left behind.
    (screenshot: Resource_Group_after_one_deletion)

  3. After both VMs were replaced and new ones started in their place, this is what remained in the Resource Group.
    (screenshots: New_TC_Agent_Pool_list, Resource_Group_after_both_replaced)
    The screenshot below lists the Disks, showing those that have been orphaned.
    (screenshot: Orphaned_disks)
    As you can see, I'm starting a disk collection, and when this grows to 100+ VMs it will become problematic.

I'm aware that specifying New resource group as the Agent Image deployment target is a workaround, but then our Resource Group list quickly becomes messy with 100+ RGs (which is the reason we chose a Specific resource group in the first place).
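Until the plugin deletes the data disks along with the VM, orphaned disks can at least be detected programmatically. Below is a minimal sketch, assuming the JSON shape returned by `az disk list` (an unattached disk has no `managedBy` VM reference); the disk names and the `find_orphaned_disks` helper are illustrative, not from this issue:

```python
# Hypothetical sketch: flag managed disks that are no longer attached to any VM.
# The record shape mirrors `az disk list` output, where "managedBy" holds the
# resource ID of the owning VM, or null for an unattached disk.

def find_orphaned_disks(disks):
    """Return the names of disks with no owning VM (candidates for cleanup)."""
    return [d["name"] for d in disks if not d.get("managedBy")]

# Illustrative listing after one agent VM (lin-sm-2) was replaced:
disks = [
    {"name": "lin-sm-2_DataDisk", "managedBy": None},             # orphaned
    {"name": "lin-sm-3_OsDisk", "managedBy": ".../vm/lin-sm-3"},  # attached
    {"name": "lin-sm-3_DataDisk", "managedBy": ".../vm/lin-sm-3"},
]

print(find_orphaned_disks(disks))  # ['lin-sm-2_DataDisk']
```

A scheduled job could apply the same filter to a real `az disk list` dump for the resource group and delete the matches with `az disk delete`, though that remains a stopgap rather than a fix.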

Environment

  • TeamCity version: TeamCity Enterprise 2021.1.1 (build 92714)
  • Azure plugin version: 0.9.8

Diagnostic logs

None provided with this issue.
