
GKE: Kubernetes 1.19 or later is required to use Ingress in networking.k8s.io/v1 #1904

Closed
yellowhat opened this issue Feb 10, 2022 · 33 comments
Labels
area/helm kind/bug Some behavior is incorrect or out of spec

Comments

@yellowhat

Hello!

  • Vote on this issue by adding a 👍 reaction
  • To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already)

Issue details

Hi,
I am trying to deploy the cloudbees-core helm chart using the 3.40.1+0dc0318c25a4 version on a GKE cluster (version 1.21):

core = kubernetes.helm.v3.Release(
    "core",
    args=kubernetes.helm.v3.ReleaseArgs(
        name="cloudbees-core",
        chart="cloudbees-core",
        version="3.40.1+0dc0318c25a4",
        namespace="cloudbees-core",
        repository_opts=kubernetes.helm.v3.RepositoryOptsArgs(
            repo="https://charts.cloudbees.com/public/cloudbees",
        ),
    ),
    opts=ResourceOptions(provider=k8s_provider),
)

But:

$ pulumi preview
Previewing update (dev):
     Type                              Name                      Plan       Info
 +   pulumi:pulumi:Stack               gcp-gke-dev               create     
 +   ├─ gcp:organizations:Project      project                   create     
 +   ├─ gcp:projects:Service           compute                   create     
 +   ├─ gcp:projects:Service           container                 create     
 +   ├─ gcp:compute:Network            vpc                       create     
 +   ├─ gcp:compute:Subnetwork         gke                       create     
 +   ├─ gcp:container:Cluster          cluster                   create     
 +   ├─ gcp:container:NodePool         pool-n2-4                 create     
 +   ├─ pulumi:providers:kubernetes    k8s                       create     
 +   ├─ kubernetes:core/v1:Namespace   core-ns                   create     
     └─ kubernetes:helm.sh/v3:Release  core                                 1 error
 
Diagnostics:
  kubernetes:helm.sh/v3:Release (core):
    error: failed to create chart from template: execution error at (cloudbees-core/templates/cjoc-ingress.yaml:2:4):
    
    ERROR: Kubernetes 1.19 or later is required to use Ingress in networking.k8s.io/v1 

I am not sure how to pass the Kubernetes/GKE version.
helm install ... works fine right after the GKE cluster is deployed, so Pulumi does not seem able to determine which Kubernetes version will be running.

Any ideas?

Thanks

@yellowhat yellowhat added the kind/bug Some behavior is incorrect or out of spec label Feb 10, 2022
@rawkode
Contributor

rawkode commented Feb 14, 2022

@yellowhat Can you share what k8s_provider is set to, please?

@yellowhat
Author

Sure:

def update_kubeconfig(args):
    """Update kubeconfig"""
    name, location, project = args
    cmd = f"""
    gcloud container clusters get-credentials \\
        {name} \\
        --zone {location} \\
        --project {project}
    """
    run(cmd, shell=True, check=True)


kubeconfig = Output.all(
    cluster.name,
    cluster.location,
    cluster.project,
).apply(update_kubeconfig)

k8s_provider = kubernetes.Provider(
    "k8s",
    opts=ResourceOptions(depends_on=[kubeconfig]),
)

@rawkode
Contributor

rawkode commented Feb 15, 2022

@yellowhat I don't believe this is doing what you think it's doing.

The gcloud container clusters get-credentials command adds the cluster information to your global ${HOME}/.kube/config file; it doesn't print or return the kubeconfig to be consumed.

This likely means that the provider you're creating is using your current local context, which probably points at an older version of Kubernetes.

Here's an example of getting a kubeconfig for a GKE cluster:

https://github.com/pulumi/examples/blob/fe1d2864fa23ba6f9115cdfa54dc7fad0d9f59c0/gcp-py-gke/__main__.py#L43
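Adapted from that example, here is a minimal sketch of rendering a kubeconfig string from the cluster's outputs instead of shelling out to gcloud (the field names in the wiring comment, such as `cluster.endpoint` and `cluster.master_auth`, are illustrative):

```python
def make_kubeconfig(name: str, endpoint: str, ca_cert: str,
                    project: str, zone: str) -> str:
    """Render a kubeconfig that points at the given GKE endpoint directly,
    rather than relying on whatever context ~/.kube/config currently has."""
    ctx = f"gke_{project}_{zone}_{name}"
    return f"""apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: {ca_cert}
    server: https://{endpoint}
  name: {ctx}
contexts:
- context:
    cluster: {ctx}
    user: {ctx}
  name: {ctx}
current-context: {ctx}
kind: Config
preferences: {{}}
users:
- name: {ctx}
  user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: gcloud
        expiry-key: '{{.credential.token_expiry}}'
        token-key: '{{.credential.access_token}}'
      name: gcp
"""

# In the Pulumi program this would be wired up roughly as (illustrative):
#   kubeconfig = Output.all(cluster.name, cluster.endpoint,
#                           cluster.master_auth.cluster_ca_certificate,
#                           cluster.project, cluster.location
#                          ).apply(lambda a: make_kubeconfig(*a))
#   k8s_provider = kubernetes.Provider("k8s", kubeconfig=kubeconfig)
cfg = make_kubeconfig("dev-cluster", "104.196.17.14", "xx", "proj", "us-east1-b")
```

The important difference from the `depends_on` approach is that the provider receives the kubeconfig content explicitly, so it talks to the new cluster regardless of what the local current-context is.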

@yellowhat
Author

Yes, I know that gcloud container clusters get-credentials writes to ${HOME}/.kube/config, and by default kubernetes.Provider will look at that file.
The idea is to allow pulumi and kubectl to look at the same file.

This is an example ~/.kube/config created by gcloud:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: xx
    server: https://104.196.17.14
  name: gke_us-east1-b_dev-cluster
contexts:
- context:
    cluster: gke_us-east1-b_dev-cluster
    user: gke_us-east1-b_dev-cluster
  name: gke_us-east1-b_dev-cluster
current-context: gke_us-east1-b_dev-cluster
kind: Config
preferences: {}
users:
- name: gke_us-east1-b_dev-cluster
  user:
    auth-provider:
      config:
        access-token: xxx
        cmd-args: config config-helper --format=json
        cmd-path: /usr/local/google-cloud-sdk/bin/gcloud
        expiry: "2022-02-15T11:52:00Z"
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp

It looks very similar to the one generated from here.

Could you explain how my local context could be tied to an older version of Kubernetes?

Thanks

@rawkode
Contributor

rawkode commented Feb 15, 2022

@yellowhat If you're happy to jump on a Zoom call, I'd be happy to help work out what is happening here.

Currently, I'm a little confused 😃

@yellowhat
Author

Sure, feel free to send me the invitation

@yellowhat
Author

yellowhat commented Feb 15, 2022

Sorry, fat fingers

@yellowhat yellowhat reopened this Feb 15, 2022
@rawkode
Contributor

rawkode commented Feb 15, 2022

@rawkode
Contributor

rawkode commented Feb 15, 2022

OK. I tried to build a reproducer with pulumi new kubernetes-python and helm create example

The new chart uses a version check to deploy the correct Ingress resource and works.

Having my new pulumi project use the cloudbees-core chart does trigger the same error though, so I believe the problem lies within the chart.

Currently looking for the source code to see if I can pin this down.

@yellowhat
Author

Great, if you can provide more information I can report it upstream.

Thanks

@rawkode
Contributor

rawkode commented Feb 15, 2022

@yellowhat OK. It's a bug with the cloudbees-core chart.

They have this code:

{{- if not (.Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" ) }}
{{ fail "\n\nERROR: Kubernetes 1.19 or later is required to use Ingress in networking.k8s.io/v1" }}
{{- end }}

This check can never pass successfully here. They need to verify the API group/version, not the specific resource:

{{- if not (.Capabilities.APIVersions.Has "networking.k8s.io/v1" ) }}
{{ fail "\n\nERROR: Kubernetes 1.19 or later is required to use Ingress in networking.k8s.io/v1" }}
{{- end }}

@yellowhat
Author

Thanks I will report it.

@yellowhat
Author

Is there a way to replicate the issue directly via helm?

@yellowhat
Author

Looking at https://helm.sh/docs/chart_template_guide/builtin_objects/:

Capabilities.APIVersions.Has $version indicates whether a version (e.g., batch/v1) or resource (e.g., apps/v1/Deployment) is available on the cluster.

@rawkode
Contributor

rawkode commented Feb 15, 2022

@yellowhat I'm still investigating why the capability checks have different resources, I'll get back to this issue as soon as I work this out.

@rawkode
Contributor

rawkode commented Feb 15, 2022

@yellowhat OK. I've worked this out. Helm Release via Pulumi uses "ClientMode" and doesn't fetch the registered resource list from the API server; it only has the registered API group/versions.

I'll update you when I've worked out the best next step.

@yellowhat
Author

Ok. Thanks

@rawkode rawkode self-assigned this Feb 15, 2022
@rawkode
Contributor

rawkode commented Feb 15, 2022

OK, @yellowhat. I've done all the digging I can do 😃

So the problem is that during a pulumi preview, we don't expect there to be access to the cluster, because a lot of teams actually provision the cluster in the same Pulumi program that they use to deploy to said cluster.

For you to make progress right now, you'd need to use pulumi up --skip-preview

We're currently discussing allowing the clientOnly option to be a CustomResourceOption that can be passed to helm.Release, to let you toggle that behaviour.

I'll open an issue for that now.

@rawkode
Contributor

rawkode commented Feb 15, 2022

@yellowhat I've opened #1908

Hopefully these responses help and you can make some progress with your stack.

Have a great week!

@yellowhat
Author

Perfect. Thanks, I will keep an eye on the discussion.

@rawkode
Contributor

rawkode commented Feb 15, 2022

@yellowhat You happy for this issue to be closed for now?

@yellowhat
Author

Hi,
unfortunately even with --skip-preview I get the same error:

$ pulumi up --yes --skip-preview
Updating (dev):
     Type                              Name                      Status         Info
 +   pulumi:pulumi:Stack               gcp-gke-dev               created        2 messages
 +   ├─ gcp:organizations:Project      project                   created        
...
 +   ├─ gcp:container:NodePool         pool-n2-4                 created        
 +   ├─ pulumi:providers:kubernetes    k8s                       created        
     └─ kubernetes:helm.sh/v3:Release  core                      **failed**     1 error
 
Diagnostics:
  kubernetes:helm.sh/v3:Release (core):
    error: failed to create chart from template: execution error at (cloudbees-core/templates/cjoc-ingress.yaml:2:4):
    
    ERROR: Kubernetes 1.19 or later is required to use Ingress in networking.k8s.io/v1

@rawkode
Contributor

rawkode commented Feb 16, 2022

Drats, it looks like our Check function renders the chart template in order to work out which resources will be created, and this rendering happens in ClientOnly mode.

I'll need to defer to @viveklak and @lblackstone, I'm not sure if it's possible to navigate around this atm.

@yellowhat
Author

Is there something else I can do to move this forward?

Thanks

@yellowhat
Author

Even using kubernetes.helm.v3.Chart instead of kubernetes.helm.v3.Release, I get the same error:

 ERROR: Kubernetes 1.19 or later is required to use Ingress in networking.k8s.io/v1
    error: an unhandled error occurred: Program exited with non-zero exit code: 1

@rawkode
Contributor

rawkode commented Mar 5, 2022

I'll check with the team on Monday. Apologies for the silence, we've had some people on vacation

@yellowhat
Author

Thanks for the reply

@yellowhat
Author

Is there something I can try in the meantime?

Thanks

@rawkode
Contributor

rawkode commented Mar 22, 2022

@yellowhat I don't believe there is; I think this is a problem that needs a fix on the Pulumi side, and I'm not sure what that fix looks like yet.

@yellowhat
Author

@rawkode I have noticed that if I remove the {{- include "ingress.check" . -}} line from the templates/cjoc-ingress.yaml file, it works as expected.
It seems the postrender option should be able to apply modifications dynamically, but I have no idea how it works.

I have tried:

  • postrender="/tmp/a.sh", where /tmp/a.sh is:

#!/bin/bash

# save incoming YAML to file
cat <&0 > /tmp/all.yaml

  • a Python function passed directly:

def post(*args):
    print(args)

_ = kubernetes.helm.v3.Release(
...
    postrender=post(),
...

But in both cases nothing is printed.

Could you point me to an example on how to use postrender?

Thanks
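For reference, Helm's post-renderer contract is an executable that receives the fully rendered manifests on stdin and must print the (possibly modified) manifests to stdout. Note that the error in this issue is raised while the templates are being rendered, so a post-renderer, which only runs after rendering succeeds, may never be reached here. A minimal sketch of the contract itself (paths and the matched line are illustrative):

```shell
# Hypothetical post-renderer: drop any line containing "ingress.check"
# from the rendered manifests before they are applied.
cat > /tmp/drop-check.sh <<'EOF'
#!/usr/bin/env bash
# Helm pipes the rendered manifests in on stdin; whatever this script
# prints to stdout is what gets applied.
sed '/ingress.check/d'
EOF
chmod +x /tmp/drop-check.sh

# Simulate what Helm does: pipe rendered YAML through the filter.
out=$(printf 'kind: Ingress\n# ingress.check marker\nspec: {}\n' | /tmp/drop-check.sh)
echo "$out"
```

A plain `print()` in a Python callback produces no effect because postrender here expects a path to an executable, not a function; the script must write the manifests back to stdout or Helm receives nothing to apply.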

@rawkode
Contributor

rawkode commented Apr 18, 2022

@yellowhat apologies for the delay, I was moving house. I've not used postrender, but I'll experiment with it and try to find you a solution.

@segal90

segal90 commented May 17, 2022

Hi @rawkode, we are also facing this issue. Have you managed to find a solution (with a different Helm chart, the root cause is the same)? Thanks

@EronWright
Contributor

For the Release resource, I believe this is fixed by #2672, because we no longer use templating during Check.

For the Chart resource, please take a look at #2593, which provides the ability to override the kubeVersion.
