
Add containerd_http_proxy, containerd_https_proxy and containerd_no_proxy #106

Open
mcarvalhor opened this issue May 9, 2024 · 7 comments


@mcarvalhor
Member

Enhancement Proposal

Just like in MicroK8s, I believe the following settings would be very valuable:

  • containerd_http_proxy
  • containerd_https_proxy
  • containerd_no_proxy

For instance: Canonical K8s currently can't be deployed on Prodstack without going through a proxy.
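
For illustration, this is roughly how I'd expect the proposed options to be used once added to the charm (the proxy endpoint and the k8s application name are placeholders):

    # hypothetical usage of the proposed charm options; proxy address is a placeholder
    juju config k8s \
      containerd_http_proxy=http://squid.internal:3128 \
      containerd_https_proxy=http://squid.internal:3128 \
      containerd_no_proxy=10.0.0.0/8,127.0.0.1,localhost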

@addyess
Contributor

addyess commented May 9, 2024

You can use juju-http-proxy, juju-https-proxy, and juju-no-proxy from the Juju model config.

Does this provide what you need?
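
For example (the proxy endpoint below is a placeholder for your environment):

    # set on the model; placeholder proxy address
    juju model-config \
      juju-http-proxy=http://squid.internal:3128 \
      juju-https-proxy=http://squid.internal:3128 \
      juju-no-proxy=127.0.0.1,localhost,::1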

@mcarvalhor
Member Author

This doesn't work, as I am still unable to query the registry to download pod images:

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to get sandbox image "registry.k8s.io/pause:3.7": failed to pull image "registry.k8s.io/pause:3.7": failed to pull and unpack image "registry.k8s.io/pause:3.7": failed to resolve reference "registry.k8s.io/pause:3.7": failed to do request: Head "https://registry.k8s.io/v2/pause/manifests/3.7": dial tcp 34.96.108.209:443: i/o timeout

I guess these don't apply to containerd.

For reference, if I run this manually on the worker/unit, I am able to query:
https://pastebin.canonical.com/p/JddMGYkcxs/

So the proxy is allowing the connection, but containerd may not be using these proxy configs from the Juju model at all.
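
Something along these lines (proxy address is a placeholder) succeeds when run on the unit:

    # HEAD request to the registry through the proxy; placeholder proxy address
    curl -x http://squid.internal:3128 -I https://registry.k8s.io/v2/pause/manifests/3.7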

@addyess
Contributor

addyess commented May 10, 2024

I know you have to set the Juju proxy config prior to deployment. Changing it later doesn't trigger a config-changed hook, so the charm can't update the settings after install.

The reason I know this is that I regularly use k8s in a proxied environment.
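
So the sequence has to look roughly like this (model name and proxy values are placeholders):

    juju add-model my-k8s
    # the proxy keys must be in place before deploy; they are read at install time
    juju model-config \
      juju-http-proxy=http://squid.internal:3128 \
      juju-https-proxy=http://squid.internal:3128 \
      juju-no-proxy=127.0.0.1,localhost,::1
    juju deploy k8s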

@mcarvalhor
Member Author

Hi @addyess,

Looks like this works! Thanks! 😄
https://pastebin.canonical.com/p/mVNnNVRswB/

I can see that the cilium, coredns, and metrics-server pods in the kube-system namespace are not healthy, but I don't think this is related to the proxy anymore (so not related to this issue; I'm still investigating):
https://pastebin.canonical.com/p/G5rmXwQxCY/

Could we keep this issue open, though, so that these options can be made available on the charm directly?

We are working on a Terraform module to automate Canonical K8s cluster deployments (just like we did for MicroK8s), and since the model may have other applications that should not be proxied (like a Docker registry, HAProxy, subordinates, etc.), I believe having this config option on the charm is a cleaner approach.
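
To illustrate the scoping difference (the containerd_* option names are still the proposal here, not something the charm supports today):

    # model-level: every application in the model inherits the proxy,
    # including ones that shouldn't be proxied (registry, haproxy, subordinates)
    juju model-config juju-http-proxy=http://squid.internal:3128

    # charm-level (proposed): scoped to containerd on the k8s application only
    juju config k8s containerd_http_proxy=http://squid.internal:3128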

@mateoflorido
Member

Hi @mcarvalhor,

There's a workaround for deployments on OpenStack. It seems the fan interface collides with the Cilium VXLAN interface. Please try the following workaround: https://bugs.launchpad.net/charm-cilium/+bug/2016905/comments/1
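
For reference, a sketch of that kind of workaround (an assumption on the exact keys, not necessarily the verbatim steps from the linked comment) is to disable Juju's fan networking on the model before deploying:

    # assumption: disabling FAN networking avoids the interface collision
    juju model-config container-networking-method=local fan-config=""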

@mcarvalhor
Member Author

Hi @mateoflorido,

That works perfectly and now I have a working/healthy cluster. Thanks!

@pedrofragola

Hi @mateoflorido,

I also encountered this Cilium issue on OpenStack, and the workaround you mentioned resolved it for me. The link you provided relates to charm-cilium, but is there another bug we should track for this issue with the k8s snap + Cilium?
