GKE: Kubernetes 1.19 or later is required to use Ingress in networking.k8s.io/v1 #1904
@yellowhat Can you share what your code looks like?
Sure:

```python
def update_kubeconfig(args):
    """Update kubeconfig"""
    name, location, project = args
    cmd = f"""
    gcloud container clusters get-credentials \\
        {name} \\
        --zone {location} \\
        --project {project}
    """
    run(cmd, shell=True, check=True)

kubeconfig = Output.all(
    cluster.name,
    cluster.location,
    cluster.project,
).apply(update_kubeconfig)

k8s_provider = kubernetes.Provider(
    "k8s",
    opts=ResourceOptions(depends_on=[kubeconfig]),
)
```
@yellowhat I don't believe this is doing what you think it's doing. The provider isn't given a `kubeconfig`, which likely means the provider you're creating is using your current local context, and that is probably pointing at an older version of Kubernetes. Here's an example of getting a kubeconfig for a GKE cluster:
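A minimal sketch of building a GKE kubeconfig as a string inside the program, so it can be handed to the provider explicitly. The helper name and the exact auth stanza are assumptions, modeled on what `gcloud container clusters get-credentials` emits:

```python
# Hypothetical helper: build a GKE kubeconfig document from cluster outputs.
# In a real Pulumi program these values would come from
# Output.all(cluster.name, cluster.endpoint, ...).apply(...); it is a plain
# function here so the shape is easy to see.
def build_gke_kubeconfig(name: str, endpoint: str, ca_cert: str) -> str:
    context = f"gke_{name}"
    return f"""apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: {ca_cert}
    server: https://{endpoint}
  name: {context}
contexts:
- context:
    cluster: {context}
    user: {context}
  name: {context}
current-context: {context}
kind: Config
users:
- name: {context}
  user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: gcloud
        expiry-key: '{{.credential.token_expiry}}'
        token-key: '{{.credential.access_token}}'
      name: gcp
"""
```

The resulting string could then be passed as `kubernetes.Provider("k8s", kubeconfig=...)`, rather than relying on `depends_on`, so the provider never falls back to the local context.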
Yes, I know. This is an example:

```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: xx
    server: https://104.196.17.14
  name: gke_us-east1-b_dev-cluster
contexts:
- context:
    cluster: gke_us-east1-b_dev-cluster
    user: gke_us-east1-b_dev-cluster
  name: gke_us-east1-b_dev-cluster
current-context: gke_us-east1-b_dev-cluster
kind: Config
preferences: {}
users:
- name: gke_us-east1-b_dev-cluster
  user:
    auth-provider:
      config:
        access-token: xxx
        cmd-args: config config-helper --format=json
        cmd-path: /usr/local/google-cloud-sdk/bin/gcloud
        expiry: "2022-02-15T11:52:00Z"
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
```

It looks very similar to the one generated from here. Could you explain how my local context is tied to an older version of Kubernetes? Thanks
@yellowhat If you're up for a Zoom call, I'd be happy to help work out what is happening here. Currently, I'm a little confused 😃
Sure, feel free to send me the invitation |
Sorry, fat fingers |
OK. I tried to build a reproducer. The new chart uses a version check to deploy the correct Ingress resource and works; having my new Pulumi project use the cloudbees-core chart still fails. Currently looking through the source code to see if I can pin this down.
Great, if you can provide more information I can report it. Thanks
@yellowhat OK. It's a bug with the cloudbees-core chart. They have a capability check in their Ingress template which can never pass successfully: they need to verify the API namespace, not the resource.
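For illustration only, here is a toy Python model (not Helm's actual code) of the `.Capabilities.APIVersions.Has` lookup. It shows why a check for the full resource path can never pass in client mode, where only API group/versions are registered:

```python
class APIVersions:
    """Toy model of Helm's .Capabilities.APIVersions set.
    An assumption for illustration, not Helm's real implementation."""

    def __init__(self, entries):
        self.entries = set(entries)

    def has(self, key: str) -> bool:
        # Helm's Has accepts either "group/version" or "group/version/Kind".
        return key in self.entries

# In client mode (no API server contact), only group/versions are known --
# no per-resource "group/version/Kind" entries.
client_mode = APIVersions({"networking.k8s.io/v1"})

# Against a live >=1.19 API server, discovered resources are included too.
live_cluster = APIVersions({"networking.k8s.io/v1",
                            "networking.k8s.io/v1/Ingress"})

print(client_mode.has("networking.k8s.io/v1/Ingress"))   # False: chart's check fails
print(client_mode.has("networking.k8s.io/v1"))           # True: checking the namespace works
print(live_cluster.has("networking.k8s.io/v1/Ingress"))  # True only against a live cluster
```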
Thanks I will report it. |
Is there a way to replicate the issue directly via `helm`?
Looking at https://helm.sh/docs/chart_template_guide/builtin_objects/: the built-in `Capabilities` object provides `Capabilities.APIVersions.Has $version`, which indicates whether a version (e.g., `batch/v1`) or resource (e.g., `apps/v1/Deployment`) is available on the cluster.
@yellowhat I'm still investigating why the capability checks have different resources, I'll get back to this issue as soon as I work this out. |
@yellowhat OK. I've worked this out. Helm Release via Pulumi uses "ClientMode" and doesn't fetch the registered resource list from the API server; it only has the registered API namespaces. I'll update you when I've worked out the best next step.
Ok. Thanks |
OK, @yellowhat. I've done all the digging I can do 😃 The problem is that during a client-side render, Helm doesn't know the cluster's real capabilities, so the chart's check can never pass. For you to make progress right now, you'd need to work around that check yourself. We're currently discussing allowing these capabilities to be supplied to the provider. I'll open an issue for that now.
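As a point of comparison outside Pulumi, `helm template` itself exposes `--kube-version` and `--api-versions` flags to fake cluster capabilities during a client-side render. A small sketch (the wrapper function and chart reference are hypothetical, my own):

```python
import subprocess  # only needed if you actually execute the command


def render_chart(chart: str, kube_version: str, api_versions: list[str]) -> list[str]:
    """Build the `helm template` argv that fakes cluster capabilities
    during a client-side render. Returns the argv list, not yet executed."""
    cmd = ["helm", "template", chart, "--kube-version", kube_version]
    for av in api_versions:
        # --api-versions accepts "group/version" or "group/version/Kind"
        cmd += ["--api-versions", av]
    return cmd


cmd = render_chart("cloudbees/cloudbees-core", "1.21",
                   ["networking.k8s.io/v1/Ingress"])
print(" ".join(cmd))
# helm template cloudbees/cloudbees-core --kube-version 1.21 --api-versions networking.k8s.io/v1/Ingress
# To actually render: subprocess.run(cmd, check=True, capture_output=True)
```

With those flags, the chart's capability check passes under plain `helm`; the open question in this thread is that Pulumi's `Release` offers no equivalent knob.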
@yellowhat I've opened #1908. Hopefully these responses help and you can make some progress with your stack. Have a great week!
Perfect, thanks. I will keep an eye on the discussion.
@yellowhat You happy for this issue to be closed for now? |
Hi,

```console
$ pulumi up --yes --skip-preview
Updating (dev):

     Type                               Name         Status       Info
 +   pulumi:pulumi:Stack                gcp-gke-dev  created      2 messages
 +   ├─ gcp:organizations:Project       project      created
...
 +   ├─ gcp:container:NodePool          pool-n2-4    created
 +   ├─ pulumi:providers:kubernetes     k8s          created
 +   └─ kubernetes:helm.sh/v3:Release   core         **failed**   1 error

Diagnostics:
  kubernetes:helm.sh/v3:Release (core):
    error: failed to create chart from template: execution error at (cloudbees-core/templates/cjoc-ingress.yaml:2:4):
    ERROR: Kubernetes 1.19 or later is required to use Ingress in networking.k8s.io/v1
```
Drats, it looks like our suggested workaround doesn't help here. I'll need to defer to @viveklak and @lblackstone; I'm not sure if it's possible to navigate around this atm.
Is there something else I can do to move this forward? Thanks |
Even with that, I still get:

```console
ERROR: Kubernetes 1.19 or later is required to use Ingress in networking.k8s.io/v1
error: an unhandled error occurred: Program exited with non-zero exit code: 1
```
I'll check with the team on Monday. Apologies for the silence, we've had some people on vacation |
Thanks for the reply |
Is there something I can try in the meantime? Thanks |
@yellowhat I don't believe there is. I think this is a problem that needs a fix on the Pulumi side, and I'm not sure yet what that fix looks like.
@rawkode I have noticed that if I remove the version check, the chart deploys. I have tried:

```bash
#!/bin/bash
# save incoming YAML to file
cat <&0 > /tmp/all.yaml
```

and:

```python
def post(*args):
    print(args)

_ = kubernetes.helm.v3.Release(
    ...
    postrender=post(),
    ...
)
```

But in both cases nothing is outputted. Could you point me to an example of how to use `postrender`? Thanks
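Helm post-renderers are standalone executables: Helm pipes the fully rendered manifests to the process's stdin and applies whatever it writes to stdout. As far as I can tell, Pulumi's `Release` takes `postrender` as a path to such an executable rather than a Python callable, which would explain why `postrender=post()` produces nothing. A minimal pass-through sketch (the file name is an assumption):

```python
#!/usr/bin/env python3
# postrender.py -- sketch of a pass-through Helm post-renderer.
# Helm pipes the rendered manifests to stdin and applies what we print.
import sys


def postrender(manifests: str) -> str:
    # Keep a copy for debugging, then return the manifests unchanged.
    with open("/tmp/all.yaml", "w") as f:
        f.write(manifests)
    return manifests


if __name__ == "__main__" and not sys.stdin.isatty():
    sys.stdout.write(postrender(sys.stdin.read()))
```

The Release would then reference it as something like `postrender="./postrender.py"` (with the executable bit set); this is a sketch under the assumptions above, not verified against the chart in question.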
@yellowhat apologies for the delay, I was moving house. I've not used post-rendering, but I'll experiment with this and try to find you a solution.
Hi @rawkode, we are also facing this issue. Have you managed to find a solution (with a different Helm chart, but the root cause is the same)? Thanks
Hello!

Issue details

Hi,
I am trying to deploy the `cloudbees-core` Helm chart (version `3.40.1+0dc0318c25a4`) on a GKE cluster (version `1.21`), but it fails with the error above. I am not sure how to pass the Kubernetes/GKE version. `helm install ...` works right after deploying the GKE cluster, so Pulumi is not able to understand which Kubernetes version will be run. Any ideas?
Thanks