
Setting context in kubernetes provider gives different results #2813

Closed
illegalnumbers opened this issue Feb 3, 2024 · 4 comments
Labels
awaiting-feedback (Blocked on input from the author), kind/bug (Some behavior is incorrect or out of spec)

Comments

@illegalnumbers

What happened?

I used Pulumi version 3.91.1 (chosen because of bug reports against other versions) to set up a context for my Kubernetes provider, which I use with custom resources. I ran pulumi up once and it was fine; on the next pulumi up it couldn't connect and couldn't read my kubeconfig, claiming it was incorrectly formatted. There were no code changes; this is running on a GitHub Action.

Example

I don't have repro steps because this behavior makes no sense to me.

Output of pulumi about

CLI
Version 3.91.1
Go Version go1.21.3
Go Compiler gc

Plugins
NAME VERSION
kubernetes 3.30.2
nodejs unknown

Host
OS darwin
Version 13.6.2
Arch x86_64

This project is written in nodejs: executable='/Users/illegalnumbers/.nvm/versions/node/v20.11.0/bin/node' version='v20.11.0'

...

Found no pending operations associated with streamnative/test

Backend
Name pulumi.com
URL https://app.pulumi.com/illegalnumbers
User illegalnumbers
Organizations illegalnumbers, streamnative
Token type personal

Dependencies:
NAME VERSION
@pulumi/pulumi 3.91.1
@types/node 20.11.16
@kubernetes/client-node 0.16.3
@pulumi/kubernetes 3.30.2
@pulumi/kubernetesx 0.1.6

Pulumi locates its logs in /var/folders/9r/pm823df979nfr_611khshpvm0000gn/T/ by default

Additional context

I am honestly at a loss. I spent hours looking at this. We use GCP, and the gcloud cluster command grabs a kubeconfig and sets it for the runner for all steps. That configuration should work, and did, and now it doesn't. It worked locally too. I have an open support ticket with more details (4668) which will include stacks, etc., but I honestly have no idea why one run would work and the next would fail with absolutely no changes.

Contributing

Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).

@illegalnumbers added the kind/bug (Some behavior is incorrect or out of spec) and needs-triage (Needs attention from the triage team) labels Feb 3, 2024
@illegalnumbers
Author

Code example for setting up kubernetes

import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Read the context name from stack config and fail fast on the placeholder value.
const kubeContext = new pulumi.Config("kubernetes").require("context");
if (kubeContext === "none") {
  throw new Error("kubernetes:context must be set to a valid context");
}

// Explicit provider bound to the configured kubecontext.
const k8sCluster = new k8s.Provider("k8s-cluster", { context: kubeContext });
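(For context, a minimal sketch of attaching such an explicit provider to a custom resource; the apiVersion, kind, and spec below are illustrative, not taken from the actual stack:)

// Hypothetical CustomResource using the explicit provider; all names illustrative.
const cr = new k8s.apiextensions.CustomResource("example-cr", {
  apiVersion: "example.com/v1",
  kind: "Widget",
  metadata: { name: "example-widget" },
  spec: { replicas: 1 },
}, { provider: k8sCluster });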

As I said, this worked locally, worked on the runner once, then failed on subsequent runs.

The failure is

 warning: configured Kubernetes cluster is unreachable: failed to parse kubeconfig data in `kubernetes:config:kubeconfig`- couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" }
    error: Preview failed: failed to read resource state due to unreachable cluster. If the cluster was deleted, you can remove this resource from Pulumi state by rerunning the operation with the PULUMI_K8S_DELETE_UNREACHABLE environment variable set to "true"

Which makes no sense, since the kubeconfig is generated via the gcloud container clusters get-credentials command.
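(For reference, a gcloud-generated kubeconfig normally authenticates via an exec plugin shaped roughly like this; a sketch from general GKE behavior, not the reporter's actual file, with illustrative names:)

# Sketch of the user stanza gcloud writes; the entry name is illustrative.
users:
- name: gke_my-project_us-central1_my-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: gke-gcloud-auth-plugin
      provideClusterInfo: true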

I'm at a loss even to debug this at this point. If someone could point me in any direction to move forward, it would be appreciated. I spent way too much time today trying to figure this out, trying different versions of Pulumi, different providers, etc., and got nothing.

We also configure this in the root of our Pulumi YAML:

pulumi:disable-default-providers: ["*"]
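(For anyone reproducing: that flag and the context setting normally sit under the stack's config block, roughly like this; the stack file name and context value are illustrative:)

# Pulumi.<stack>.yaml (sketch; values illustrative)
config:
  pulumi:disable-default-providers:
    - "*"
  kubernetes:context: gke_my-project_us-central1_my-cluster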

@illegalnumbers
Author

What led to this initially was that the default provider wasn't being used properly, which was causing issues. So we changed it to the code above and used that as the provider for some CRDs. This resulted in Pulumi trying to replace literally everything in the stack, which was clearly wrong: the provider changed, but the stack and the context were exactly the same. So we went down a rabbit hole and wasted a lot of time and energy on this.

@mjeffryes
Copy link
Member

Hi @illegalnumbers, sorry for the trouble you've had here. I think what you're seeing has the same root cause as pulumi/pulumi-eks#1034. Any chance you upgraded kubectl since the last successful pulumi up?
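(One quick way to check, as a sketch; the cluster name and region are illustrative:)

# Show the kubectl client version in use on the runner.
kubectl version --client
# Regenerate the kubeconfig with the current gcloud tooling.
gcloud container clusters get-credentials my-cluster --region us-central1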

@mjeffryes added the awaiting-feedback (Blocked on input from the author) label and removed the needs-triage (Needs attention from the triage team) label Feb 6, 2024
@mjeffryes
Member

Actually, @EronWright corrected me; we think you're probably running into the bug that was fixed in #2771. Can you try upgrading your kubernetes plugin to the latest version and see if that resolves the problem?
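(A sketch of the upgrade, assuming the standard npm flow for a nodejs Pulumi project:)

# Update the provider SDK to the latest release; the matching resource
# plugin is fetched automatically on the next pulumi up.
npm install @pulumi/kubernetes@latest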

@EronWright closed this as not planned Feb 16, 2024