[BUG] Existing CloudFoundry environment cannot be imported #425
Comments
After thinking about this more after writing it up, I'm realizing this may be related to #305. |
@SeanKilleen I could not reproduce the behavior, so we have to sort out some things: the basic setup you referenced does not work for me; however, that can depend on the region your subaccount is located in. I therefore detail my flow so that we can find deviations.

The setup of the Cloud Foundry environment is executed via:

```tf
resource "btp_subaccount_environment_instance" "cloudfoundry" {
subaccount_id = var.subaccount_id
name = var.instance_name
environment_type = "cloudfoundry"
service_name = "cloudfoundry"
plan_name = var.plan_name
landscape_label = "cf-us10"
parameters = jsonencode({
instance_name = var.cloudfoundry_org_name
})
}
```

Some values are injected via variables, but nothing special here. In order to execute the import, I used the new import block feature of Terraform, available since 1.5.x, which makes the state import a bit more convenient. My setup for the import is:

```tf
import {
to = btp_subaccount_environment_instance.cloudfoundry
id = "<SUBACCOUNT_ID>,<ENVIRONMENT_INSTANCE_ID>"
}
resource "btp_subaccount_environment_instance" "cloudfoundry" {
subaccount_id = var.subaccount_id
name = var.instance_name
environment_type = "cloudfoundry"
service_name = "cloudfoundry"
plan_name = var.plan_name
landscape_label = "cf-us10"
parameters = jsonencode({
instance_name = var.cloudfoundry_org_name
})
}
```

Executing a `terraform apply` results in:

```
Apply complete! Resources: 1 imported, 0 added, 0 changed, 0 destroyed.
```

I then removed the import block and executed a `terraform plan` again; the result is:

```
No changes. Your infrastructure matches the configuration.
```

What is different in our setups:
In case you have several users attached to the org, you must add them to your parameters JSON section like:

```tf
resource "btp_subaccount_environment_instance" "cloudfoundry" {
subaccount_id = var.subaccount_id
name = var.instance_name
environment_type = "cloudfoundry"
service_name = "cloudfoundry"
plan_name = var.plan_name
landscape_label = "cf-us10"
parameters = jsonencode({
instance_name = var.cloudfoundry_org_name,
users = [
{
  id    = "[email protected]"
  email = "[email protected]"
}
]
})
}
```

Can you check if the addition of these values gets you into a consistent setup? |
@lechnerc77 my general setup was:
At this point, I see the extra values. With that said, perhaps Terraform import blocks offer some additional protection against this somehow. I'll repeat the process with those blocks and see if I can get it to work. Happy to jump on a screen share with you as well. To be clear, at no point in my workflow am I running apply (yet) - these are existing resources and so the goal is to import the resource and then reconcile the Terraform such that it says "no changes necessary".
FWIW I had those values previously, but will attempt with them again. |
As a side note, thanks for teaching me about import blocks, which I'd completely missed somehow and which seem much nicer than the CLI for our situation! |
@lechnerc77 OK, so the exact repro:

```tf
resource "btp_subaccount_environment_instance" "cloudfoundry_abap" {
subaccount_id = btp_subaccount.abap.id
name = "SCT_Software_abapsystem"
environment_type = "cloudfoundry"
service_name = "cloudfoundry"
plan_name = "standard"
landscape_label = "cf-${var.abap_environment_region}"
parameters = jsonencode({
instance_name = "SCT_Software_abapsystem",
landscapeLabel = "cf-${var.abap_environment_region}"
users = [
{
email = "[email protected]"
},
]
})
}
```
I add this import statement to my root module:
```tf
import {
to = module.product_environment.module.product_environment_btp.btp_subaccount_environment_instance.cloudfoundry_abap
id = "REDACTED_ORG_ID,REDACTED_INSTANCE_ID"
}
```

After making these changes, when I run `terraform plan`, I see:

```
# module.product_environment.module.product_environment_btp.btp_subaccount_environment_instance.cloudfoundry_abap will be updated in-place
# (imported from "6897d5a9-6e5f-4978-9ddd-d619d62e370c,5E8CE958-CE5D-4C70-84F2-C32D1D00E501")
~ resource "btp_subaccount_environment_instance" "cloudfoundry_abap" {
~ broker_id = "E181A0CD-B424-4D07-87DA-0767EF912337" -> (known after apply)
~ created_date = "2023-01-01T19:56:27Z" -> (known after apply)
~ custom_labels = {} -> (known after apply)
+ dashboard_url = (known after apply)
+ description = (known after apply)
environment_type = "cloudfoundry"
id = "5E8CE958-CE5D-4C70-84F2-C32D1D00E501"
~ labels = jsonencode(
{
- "API Endpoint" = "https://api.cf.us10.hana.ondemand.com"
- "Org ID" = "2bc5166d-53c1-4494-9e17-9ae787c6f782"
- "Org Memory Limit" = "204,800MB"
- "Org Name" = "SCT_Software_abapsystem"
}
) -> (known after apply)
landscape_label = "cf-us10"
~ last_modified = "2023-09-04T12:28:28Z" -> (known after apply)
name = "SCT_Software_abapsystem"
~ operation = "provision" -> (known after apply)
~ parameters = jsonencode( # whitespace changes
{
instance_name = "SCT_Software_abapsystem"
landscapeLabel = "cf-us10"
users = [
{
email = "[email protected]"
},
]
}
)
~ plan_id = "fc5abe63-2a7d-4848-babf-f63a5d316df1" -> (known after apply)
plan_name = "standard"
~ platform_id = "2bc5166d-53c1-4494-9e17-9ae787c6f782" -> (known after apply)
~ service_id = "fa31b750-375f-4268-bee1-604811a89fd9" -> (known after apply)
service_name = "cloudfoundry"
~ state = "OK" -> (known after apply)
subaccount_id = "REDACTED"
~ tenant_id = "REDACTED" -> (known after apply)
~ type = "Provision" -> (known after apply)
}
Plan: 1 to import, 0 to add, 1 to change, 0 to destroy.
```

The concerns I have are that:
|
I just cross-checked the APIs that are used, and you can remove the `users` and `landscapeLabel` entries from the `parameters`. Your file should look like this:

```tf
import {
to = module.product_environment.module.product_environment_btp.btp_subaccount_environment_instance.cloudfoundry_abap
id = "REDACTED_SUBACOUNT_ID,REDACTED_ENVINSTANCE_ID"
}
resource "btp_subaccount_environment_instance" "cloudfoundry_abap" {
subaccount_id = btp_subaccount.abap.id
name = "SCT_Software_abapsystem"
environment_type = "cloudfoundry"
service_name = "cloudfoundry"
plan_name = "standard"
landscape_label = "cf-${var.abap_environment_region}"
parameters = jsonencode({
instance_name = "SCT_Software_abapsystem",
})
}
```

If this doesn't do the trick, we should schedule a call.

Background concerning the users: the management of the users in the org should be done via CF-specific resources.
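For illustration, a minimal sketch of what that could look like with the community Cloud Foundry provider (the resource and argument names here are an assumption on my side; adapt them to the provider and org roles you actually use):

```tf
# Sketch only: manage org membership via a CF-specific resource instead of
# the users list inside the environment instance parameters.
# Resource and argument names are assumptions - check your provider's docs.
resource "cloudfoundry_org_users" "org_users" {
  org      = "2bc5166d-53c1-4494-9e17-9ae787c6f782" # CF org GUID (the "Org ID" label from the plan output)
  managers = ["[email protected]"]                   # the users formerly listed in parameters
}
```
|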
Hi @lechnerc77, my Terraform file looks like what you describe. I'll reach out to schedule a call. Thanks! |
The Cloud Foundry environment was created on 01.01.2023. |
Retested on existing, manually created CF instances (created in Feb 2023 and in 2021). The import was successful without deviations. However, the parameters of the environment looked different than in the account that is subject to this issue, namely the landscape label is not set. |
To add some additional detail: I attempted to do this for a Kyma environment, which I'd previously created via TF (but where I had incorrectly set the timeouts, and therefore it was created eventually but not registered in state). In this case, the import + resource combination once again shows that the environment will be torn down and recreated for similarly confusing reasons. Just wanted to add the note that the issue I'm seeing is happening consistently across both kinds of environments, including those created by TF. |
@lechnerc77 I think I might have made some progress here. I was thinking: the fact that I'm currently using two separate local states could be used to experiment. So:

The differences between the two appear to be the timeout statement (missing in my local tfstate after the import) and the dependencies (apparently not added as part of the import). Below is the tfstate from the machine that did the import (left) vs. the machine that did the creation (right).

When I added the timeout block and the dependencies into the tfstate file of the machine that had done the import, it too showed that it would not recreate the resource (success!). So it seems like something along the lines of the import not capturing the timeouts and dependencies is what produces the unexpected plan.
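For clarity, this is the kind of timeout configuration I mean on the resource side (a sketch only; I'm assuming the attribute shape here, so double-check it against the provider docs):

```tf
# Sketch: the repro resource with an explicit timeouts configuration added.
# The shape of the timeouts attribute is an assumption on my part; verify it
# against the btp provider documentation before relying on it.
resource "btp_subaccount_environment_instance" "cloudfoundry_abap" {
  subaccount_id    = btp_subaccount.abap.id
  name             = "SCT_Software_abapsystem"
  environment_type = "cloudfoundry"
  service_name     = "cloudfoundry"
  plan_name        = "standard"
  landscape_label  = "cf-${var.abap_environment_region}"

  parameters = jsonencode({
    instance_name = "SCT_Software_abapsystem"
  })

  timeouts = {
    create = "1h"
    update = "35m"
    delete = "30m"
  }
}
```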
At this point I'm a bit out of my depth 😄 but hoping that it points you in the right direction. |
@lechnerc77 pinging on this only to ensure you saw my last comment, because I added it when you were almost certainly out of office. (disregard if you've already seen it, and accept my apologies for the ping.) |
Some things that are good to know, but that do not explain the CF behavior, which is not 1:1 reproducible at the moment. |
@SeanKilleen I am still struggling to get the import issue reproduced. It would be great if you could send me the output of a `btp_subaccount_environment_instances` data source, i.e. of the following configuration:

```tf
data "btp_subaccount_environment_instances" "all" {
subaccount_id = "<YOUR SUBACCOUNT ID>"
}
output "result" {
value = data.btp_subaccount_environment_instances.all
}
```

Many thanks! |
This issue is stale because it has been open 15 days with no activity. Remove stale label or comment or this will be closed in 5 days. |
This issue was closed because it has been stalled for 5 days with no activity. |
@lechnerc77 I apologize I wasn't able to circle back on this sooner. The environment was removed after we went all in on Kyma, so I wasn't able to get the requisite information. 😞
Is there an existing issue for this?
What version of Terraform are you using?
1.5.6
What type of issue are you facing
bug report
Describe the bug
I am attempting to capture an existing SAP BTP environment setup for a CloudFoundry runtime for an ABAP environment. As far as I can tell, this lives outside of CloudFoundry and should be handled via the `btp_subaccount_environment_instance` resource. (Please let me know if I'm wrong above, since that could be the root of my issue.)
I define the resource based on the doc example:
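For reference, a trimmed sketch of that resource (the exact configuration, including the `users` and `landscapeLabel` parameters, is in the repro I posted earlier in this thread):

```tf
resource "btp_subaccount_environment_instance" "cloudfoundry_abap" {
  subaccount_id    = btp_subaccount.abap.id
  name             = "SCT_Software_abapsystem"
  environment_type = "cloudfoundry"
  service_name     = "cloudfoundry"
  plan_name        = "standard"
  landscape_label  = "cf-${var.abap_environment_region}"

  parameters = jsonencode({
    instance_name = "SCT_Software_abapsystem"
  })
}
```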
I then import the existing resource via the command line:

```
terraform import -var-file="env_dev.tfvars" module.product_environment.module.product_environment_btp.btp_subaccount_environment_instance.cloudfoundry_abap REDACTED_SUBACCOUNT_GUID,REDACTED_INSTANCE_ID
```
The import completes successfully.
However, when I next run `terraform plan`, I see:
I figured I might need to do some reconciliation, so as a first pass, I attempt to modify the `labels` field to match the changes that are being suggested. When I attempt that, I see:

It seems that the import captures a great deal of information into TF state, which it then cannot reconcile with my Terraform, and I can't update my script to bring it in line with the expectations due to the inability to set some fields.
Expected Behavior
No response
Steps To Reproduce
No response
Add screenshots to help explain your problem
No response
Additional context
No response