
Support default/secure metrics-server installation #2

Open
timoreimann opened this issue Jul 11, 2019 · 17 comments
Labels
enhancement New feature or request

Comments

@timoreimann

As of today, installing metrics-server requires tweaking the configuration since the default is to reach out to nodes by DNS and use TLS. A fair number of users have asked to support the default setup including TLS, which is a highly reasonable request.

The issue was originally discussed in digitalocean/digitalocean-cloud-controller-manager#150. Several comments there describe how to run metrics-server in TLS-less mode as a workaround for now.
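For reference, the TLS-less workaround discussed there amounts to passing --kubelet-insecure-tls (along with the InternalIP address type) to the metrics-server container. A minimal sketch of the relevant Deployment args, assuming the upstream v0.3.x manifest layout (surrounding Deployment fields abbreviated):

```yaml
# Sketch only: metrics-server container args with kubelet TLS
# verification disabled. Flag names are upstream metrics-server flags;
# the image tag and field layout are assumptions based on v0.3.x.
containers:
  - name: metrics-server
    image: k8s.gcr.io/metrics-server-amd64:v0.3.6
    args:
      - --cert-dir=/tmp
      - --secure-port=4443
      # Workaround: skip verification of the kubelet serving certificate.
      - --kubelet-insecure-tls
      # Reach nodes by internal IP instead of (unresolvable) node DNS names.
      - --kubelet-preferred-address-types=InternalIP
```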

@timoreimann timoreimann added the enhancement New feature or request label Jul 11, 2019
@timoreimann timoreimann changed the title Support metrics-server installations with TLS enabled Support default metrics-server installation Jul 11, 2019
@timoreimann timoreimann changed the title Support default metrics-server installation Support default/secure metrics-server installation Jul 11, 2019
@timoreimann
Author

Required work items are referenced above.

@timoreimann
Author

timoreimann commented Sep 12, 2019

With secure TLS usage of the kubelet API now possible (see the update to #6), the --kubelet-insecure-tls parameter is not needed anymore.

At this point, users only need to specify --kubelet-preferred-address-types=InternalIP, which is the last item to tackle before the default metrics-server configuration works out of the box.

@mbrodala

mbrodala commented Jun 2, 2020

I just set up metrics-server with the components.yaml from the v0.3.6 release and made sure to inject the --kubelet-preferred-address-types=InternalIP flag as mentioned here (using kustomize and a JSON patch for this).
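In case it helps others, a sketch of what such a kustomization can look like. This assumes a reasonably recent kustomize (older versions use the patchesJson6902 field instead) and the args list layout of the v0.3.6 components.yaml, so the patch path may need adjusting:

```yaml
# kustomization.yaml sketch: appends the address-type flag to the
# metrics-server container args via a JSON 6902 patch.
resources:
  - https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
patches:
  - target:
      kind: Deployment
      name: metrics-server
      namespace: kube-system
    patch: |-
      - op: add
        path: /spec/template/spec/containers/0/args/-
        value: --kubelet-preferred-address-types=InternalIP
```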

The cluster is running on 1.17.5-do.0 currently.

Everything seems to be running just fine and I get no errors. With kubectl top node I see some metrics. However, I see no change in the DOKS dashboard as suggested in the official dashboard docs:

(screenshot: expected dashboard view with metrics, from the official docs)

This is my view instead:

(screenshot: actual dashboard view, metrics missing)

Am I missing something?

@timoreimann
Author

@mbrodala is this the dashboard we integrate in the DigitalOcean cloud control panel, or a separate deployment you manage on your own?

@mbrodala

mbrodala commented Jun 3, 2020

@timoreimann this is the original dashboard provided by DOKS.

@timoreimann
Author

@mbrodala this may indeed be an issue on our end. I filed an internal bug report so that we can look into the matter more closely.

Thanks for bringing this to our attention. I'm going to report back to this issue once we've identified and fixed the problem.

@TonyBogdanov

@timoreimann Any news on this?
I'm also facing the exact same issue: metrics-server works properly (with the mentioned modifications) and I can see the stats using the top command, but the (built-in) dashboard isn't showing them in the UI.

@timoreimann
Author

The dashboard metrics are served by a different, separate sidecar, which we have yet to integrate. I created #21 to track the effort.

@feluxe

feluxe commented Oct 31, 2020

I just installed metrics-server with doctl kubernetes cluster create --1-clicks="metrics-server" .... When I run kubectl top pods --all-namespaces I get:

W1031 04:06:49.790056 1123183 top_pod.go:265] Metrics not available for pod default/external-dns-68cf9b5c56-4bjqs, age: 40m32.790034709s
error: Metrics not available for pod default/external-dns-68cf9b5c56-8bjqs, age: 40m32.790034709s

The metrics pod logs something like this:

I1031 03:03:25.598399       1 manager.go:148] ScrapeMetrics: time: 5.606118ms, nodes: 1, pods: 0
E1031 03:03:26.647337       1 reststorage.go:160] unable to fetch pod metrics for pod cert-manager/cert-manager-cainjector-6d59c8d4f7-7l274: no metrics known for pod
E1031 03:03:26.647589       1 reststorage.go:160] unable to fetch pod metrics for pod kube-system/cilium-operator-6cb976fbcf-rc8lv: no metrics known for pod
E1031 03:03:26.647701       1 reststorage.go:160] unable to fetch pod metrics for pod kube-system/cilium-rl67k: no metrics known for pod
E1031 03:03:26.647793       1 reststorage.go:160] unable to fetch pod metrics for pod cert-manager/cert-manager-webhook-578954cdd-9vwph: no metrics known for pod
E1031 03:03:26.647843       1 reststorage.go:160] unable to fetch pod metrics for pod kube-system/kube-proxy-4f2vb: no metrics known for pod
E1031 03:03:26.647900       1 reststorage.go:160] unable to fetch pod metrics for pod kube-system/coredns-76bcfddf46-rkfvt: no metrics known for pod
E1031 03:03:26.647957       1 reststorage.go:160] unable to fetch pod metrics for pod kube-system/do-node-agent-wx74s: no metrics known for pod
E1031 03:03:26.648048       1 reststorage.go:160] unable to fetch pod metrics for pod kube-system/csi-do-node-skgr8: no metrics known for pod
E1031 03:03:26.648137       1 reststorage.go:160] unable to fetch pod metrics for pod default/external-dns-68cf9b5c56-8bjqs: no metrics known for pod
E1031 03:03:26.648196       1 reststorage.go:160] unable to fetch pod metrics for pod kube-system/cilium-operator-6cb976fbcf-z5vmk: no metrics known for pod
E1031 03:03:26.648288       1 reststorage.go:160] unable to fetch pod metrics for pod cert-manager/cert-manager-86548b886-j655n: no metrics known for pod
E1031 03:03:26.648387       1 reststorage.go:160] unable to fetch pod metrics for pod kube-system/coredns-76bcfddf46-lsvm5: no metrics known for pod
E1031 03:03:26.648433       1 reststorage.go:160] unable to fetch pod metrics for pod kube-system/metrics-server-5b8f47666-m6sn5: no metrics known for pod
E1031 03:03:26.648487       1 reststorage.go:160] unable to fetch pod metrics for pod kube-system/kube-state-metrics-6f4b6669f9-zhgw2: no metrics known for pod
E1031 03:03:26.648544       1 reststorage.go:160] unable to fetch pod metrics for pod ingress-nginx/ingress-nginx-controller-98cb87fb7-bfvvr: no metrics known for pod

@johnwook

johnwook commented Nov 3, 2020

@feluxe I have just the same issue as you.

@kyranb

kyranb commented Nov 4, 2020

@feluxe @johnwook I have the same issue too.

@timoreimann
Author

1.19 users are presumably affected by an incompatibility with Docker 18 as described in kubernetes/kubernetes#94281.

We plan to release a DOKS 1.19 update (ideally today) that is going to address the problem by moving to Docker 19.03.

@WyriHaximus

@timoreimann as a 1.19 user I can confirm I'm affected by that (or something else) and currently don't have pod metrics through metrics-server.

@WyriHaximus

FYI:
1.19.3-do.2 was just released and should fix the problem. Please report back if that's not the case.

Originally posted by @timoreimann in digitalocean/digitalocean-cloud-controller-manager#150 (comment)

Can confirm this is now fixed for me.

@feluxe

feluxe commented Nov 5, 2020

Same here. kubectl top ... works with the new update, but the stats still don't show within the dashboard web UI.

@timoreimann
Author

the stats still don't show within the dashboard web UI.

Unfortunately, that's unrelated to the latest release. It's still on the agenda to get it fixed as well.

@Nuxij

Nuxij commented Jul 19, 2022

Hi, this is still biting me on 1.22. I have to change the endpoints to InternalIP and enable the apiService.
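For anyone else landing here via the Helm route: with the upstream metrics-server Helm chart, those two tweaks roughly correspond to values like the following. This is a sketch only; verify the key names against the values.yaml of your chart version:

```yaml
# values.yaml sketch for the upstream metrics-server Helm chart.
args:
  # Reach kubelets by internal IP instead of node DNS names.
  - --kubelet-preferred-address-types=InternalIP
apiService:
  # Register the v1beta1.metrics.k8s.io APIService so the
  # aggregation layer routes kubectl top to metrics-server.
  create: true
```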
