Can't transition from encryption_key to byok_key[0].id without cluster recreation #404
@Israphel thanks for creating the issue! Did you get a chance to look at #398 (comment)?
On a related note, could you share more details about it?
I have seen all the issues related to this, but I don't agree with the answer. In infrastructure as code, the code has to match the infra; you want us to remove part of the code instead of providing backward compatibility with existing clusters. Especially this:
Both the provider and the API belong to Confluent, as a company. There's no excuse for making them incompatible. About the schema registry, you can read my comment here: #399 (comment). I'm going to talk with Confluent about this in our monthly sync, because so far the 2.0.0 provider is a disaster.
@Israphel, there's an update from our side: we talked to the backend team, and we're considering reverting our deprecation change. More specifically:
In short, your existing instances of the …
Even though that's better than before, it's still not ideal. Why? Because if I update my Terraform module for Kafka creation to comply with 2.0.0 (which I already did), then existing clusters can't transition to byok_key, and they will always be in a legacy state. The ideal move here is being able to import an existing key, with the cluster seeing no changes (see the import sketch after the module fragments below). The same happened with s3_bucket back in the aws provider 4.x.x, where every single config was moved into its own Terraform resource, and existing buckets were not affected after doing the imports. Below are fragments of my Terraform module:

```hcl
# main.tf
# Confluent Kafka cluster
resource "confluent_kafka_cluster" "kafka_cluster" {
display_name = var.name
availability = var.type == "basic" ? "SINGLE_ZONE" : var.availability
cloud = var.cloud
region = var.region
dynamic "basic" {
for_each = var.type == "basic" ? [true] : []
content {}
}
dynamic "standard" {
for_each = var.type == "standard" ? [true] : []
content {}
}
dynamic "dedicated" {
for_each = var.type == "dedicated" ? [true] : []
content {
cku = var.type != "dedicated" ? null : var.cku
}
}
dynamic "byok_key" {
for_each = anytrue([local.encrypted.aws, local.encrypted.azure]) ? [true] : []
content {
id = one(confluent_byok_key.byok_key[*].id)
}
}
dynamic "network" {
for_each = var.type == "dedicated" && var.network_id != null ? [true] : []
content {
id = var.network_id
}
}
environment {
id = var.environment_id
}
}
# Confluent encryption key
resource "confluent_byok_key" "byok_key" {
count = anytrue([local.encrypted.aws, local.encrypted.azure]) ? 1 : 0
dynamic "aws" {
for_each = local.encrypted.aws ? [true] : []
content {
key_arn = var.kms_key_arn
}
}
dynamic "azure" {
for_each = local.encrypted.azure ? [true] : []
content {
tenant_id = var.tenant_id
key_vault_id = var.key_vault_id
key_identifier = var.key_identifier
}
}
}
```

```hcl
# locals.tf
locals {
# Determines encryption status
encrypted = {
aws = alltrue([var.type == "dedicated", var.cloud == "AWS", var.kms_key_arn != null])
azure = alltrue([var.type == "dedicated", var.cloud == "AZURE", var.tenant_id != null, var.key_vault_id != null, var.key_identifier != null])
}
}
```

So, as explained before, I can't transition my Confluent repo to my updated module, because the plan wants to recreate the cluster. Please consider supporting the byok import.
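For illustration only: a minimal sketch of the import flow being requested here, using a Terraform 1.5+ `import` block and a hypothetical key ID (`cck-abc123` is a placeholder, not a real ID):

```hcl
# Adopt an already-existing Confluent Cloud BYOK key into state,
# so that a subsequent plan should show no changes to the cluster.
import {
  to = confluent_byok_key.byok_key[0]
  id = "cck-abc123" # hypothetical BYOK key ID
}
```

The CLI equivalent would be `terraform import 'confluent_byok_key.byok_key[0]' cck-abc123`; either way, the expectation is that both the key and the cluster referencing it come out with a clean plan.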
This is exactly what I mentioned in #398
@petrkarytka @Israphel could you try to update the TF Provider to 2.0.0 by following this guide, then reimport your Kafka clusters, and see whether that resolves …? Your new TF config should still include ….
Thank you!
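A sketch of what that reimport might look like with a Terraform `import` block, assuming the `<Environment ID>/<Cluster ID>` import ID format from the provider docs and hypothetical IDs:

```hcl
# Re-adopt the existing cluster into state after upgrading the provider.
import {
  to = confluent_kafka_cluster.kafka_cluster
  id = "env-abc123/lkc-abc123" # hypothetical <Environment ID>/<Cluster ID>
}
```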
@linouk23 I don't have any issues with …
Nice!
We fully agree with this statement 💯. However, unfortunately, based on our conversation with the backend team, updating the old clusters to start using …
That's not what I want. Existing clusters should be able to move to the new resource, because I'm not changing the encryption key; it's still the same KMS ARN it always had. You changed some naming internally; that's not my fault.
I'm upgrading my own modules to comply with this guide: https://registry.terraform.io/providers/confluentinc/confluent/latest/docs/guides/version-2-upgrade
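For context, a rough sketch of the shape that guide asks for, with hypothetical names and a placeholder ARN (in 1.x the key was set inline via an `encryption_key` attribute; in 2.0 it becomes a separate `confluent_byok_key` resource referenced from a `byok_key` block — see the guide for the exact steps):

```hcl
# 2.0-style config: the key is its own resource...
resource "confluent_byok_key" "aws" {
  aws {
    key_arn = "arn:aws:kms:us-east-1:111111111111:key/placeholder" # hypothetical ARN
  }
}

# ...and the cluster references it instead of the old inline encryption_key.
resource "confluent_kafka_cluster" "dedicated" {
  display_name = "my-cluster" # hypothetical name
  availability = "SINGLE_ZONE"
  cloud        = "AWS"
  region       = "us-east-1"

  dedicated {
    cku = 2
  }

  byok_key {
    id = confluent_byok_key.aws.id
  }

  environment {
    id = "env-abc123" # hypothetical environment ID
  }
}
```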
I currently have a dedicated AWS cluster with encryption. I'd like to continue managing that with Terraform, so I wanted to import the (already in use) KMS key and link it to the existing cluster (so I expect 0 changes while doing that).
I ran:

```shell
confluent byok list
```

and I couldn't see the KMS key I always used, so I ran:

```shell
confluent byok create "arn:aws:kms:us-east-1:xxxxxx:key/xxxxx"
```

with the exact same ARN I'm using.
Then I imported the key and ran a plan, but Terraform wants to remove everything.
Why can't we keep managing this for existing clusters with Terraform? This change, alongside the schema registry change, seems to be removing more and more power from IaC.