Attempted to upgrade from 2.29.0 to 2.30.0 and suddenly the provider is throwing errors on a terraform plan. Looking at the release notes here, I'm not seeing anything relevant to our current configuration that would produce this error. Simply downgrading back to 2.29.0 is sufficient as a workaround. Also worth noting, this only takes place during initial cluster creation. After a successful run on 2.29.0, I can upgrade to 2.30.0 and everything works fine.
│ Error: Provider configuration: cannot load Kubernetes client config
│
│ with provider["registry.terraform.io/hashicorp/kubernetes"],
│ on main.tf line 17, in provider "kubernetes":
│ 17: provider "kubernetes" {
│
│ invalid configuration: default cluster has no server defined
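The downgrade workaround described above can be expressed as a provider version pin. A minimal sketch — the placement inside a terraform block in the root module is an assumption, not taken from the original report:

```hcl
terraform {
  required_providers {
    kubernetes = {
      source = "hashicorp/kubernetes"
      # Pin to 2.29.0 to work around the "cannot load Kubernetes client
      # config" regression during initial cluster creation; the pin can be
      # relaxed once the cluster exists.
      version = "2.29.0"
    }
  }
}
```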
The issue is only present on new builds; if I run a terraform plan against an environment that already has a cluster, I get no error.
Terraform Version, Provider Version and Kubernetes Version
Affected Resource(s)
tf plan failing due to "invalid configuration" in provider
Terraform Configuration Files
Provider definition
provider "kubernetes" {
  host                   = data.aws_eks_cluster.default.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.default.certificate_authority[0].data)

  exec {
    api_version = "client.authentication.k8s.io/v1"
    command     = "aws"
    # This requires the awscli to be installed locally where Terraform is executed
    args        = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.default.name, "--region", var.region]
    env = {
      AWS_PROFILE = var.profile
    }
  }
}
Expected Behavior
What should have happened?
A successful plan to create an EKS cluster.
Actual Behavior
What actually happened?
│ Error: Provider configuration: cannot load Kubernetes client config
│
│ with provider["registry.terraform.io/hashicorp/kubernetes"],
│ on main.tf line 17, in provider "kubernetes":
│ 17: provider "kubernetes" {
│
│ invalid configuration: default cluster has no server defined
Community Note
Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
If you are interested in working on this issue or have submitted a pull request, please leave a comment
Jumping on this thread: is there another proper solution? I don't like the approach of having to do two separate applies, one for the cluster and then one for the rest. I prefer having a single IaC codebase that can be applied on its own.
Edit: Fixed it by using this weird ternary behavior:
From my understanding, the provider tries to call the cluster if any of its parameters is "fixed", like a file here, which means the call happens whenever the configuration is statically set.
Based on this, I use this trick so that at the plan phase the provider uses the module output, which is still empty, and at the apply stage it uses the proper CA file (we have TLS inspection, etc.).
Nonetheless, note that this does not work when the cluster needs to be recreated, as we then hit the other weird bug: "unable to access localhost cluster".
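The commenter did not include the actual ternary, so the following is only a hypothetical sketch of the pattern described: the module name and output names are assumptions, not from the original comment.

```hcl
# Hypothetical sketch of the ternary trick. At plan time the module output
# is empty/unknown, so the provider is not treated as statically configured
# and skips the eager client-config validation; at apply time the real CA
# data exists and is decoded as usual.
provider "kubernetes" {
  host = module.eks.cluster_endpoint
  cluster_ca_certificate = (
    module.eks.cluster_ca_certificate == ""
    ? ""
    : base64decode(module.eks.cluster_ca_certificate)
  )
}
```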