kube-prometheus-stack - Grafana Errors #5262

Open
mikev1963 opened this issue Feb 1, 2025 · 0 comments
Labels
bug Something isn't working

Comments

@mikev1963

Describe the bug (a clear and concise description of what the bug is).

When I try to import a new dashboard, I get the following error:

502 Bad Gateway

Logs from Grafana:

```
logger=context userId=1 orgId=1 uname=admin t=2025-02-01T15:30:22.524729568Z level=error msg="Proxy request failed" err="remote error: tls: handshake failure"
logger=context userId=1 orgId=1 uname=admin t=2025-02-01T15:30:22.524945708Z level=error msg="Request Completed" method=GET path=/api/gnet/dashboards/66 status=502 remote_addr=127.0.0.1 time_ms=50 duration=50.813651ms size=0 referer=http://localhost:3001/dashboard/import handler=/api/gnet/* status_source=downstream
```
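Reading the second log line, `status_source=downstream` indicates the 502 is produced by the request Grafana proxies out through `/api/gnet/*` (the grafana.com dashboard catalog), not by Grafana itself; the first line shows that outbound connection failing its TLS handshake. As a small sketch, the relevant fields can be pulled out of the pasted log line with standard shell tools:

```shell
# Pasted "Request Completed" line from the Grafana log above
log='logger=context userId=1 orgId=1 uname=admin t=2025-02-01T15:30:22.524945708Z level=error msg="Request Completed" method=GET path=/api/gnet/dashboards/66 status=502 remote_addr=127.0.0.1 time_ms=50 duration=50.813651ms size=0 referer=http://localhost:3001/dashboard/import handler=/api/gnet/* status_source=downstream'

# Extract the request path and HTTP status from the key=value fields
path=$(printf '%s\n' "$log" | grep -o 'path=[^ ]*' | cut -d= -f2)
status=$(printf '%s\n' "$log" | grep -o 'status=[0-9]*' | cut -d= -f2)

echo "$path returned HTTP $status"
```

This confirms the failing request is the dashboard-import fetch (`/api/gnet/dashboards/66`), which points the investigation at outbound TLS from the Grafana pod rather than at the chart's in-cluster configuration.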

Here is my values.yaml file:

```yaml
fullnameOverride: prometheus

defaultRules:
  create: true
  rules:
    alertmanager: true
    etcd: true
    configReloaders: true
    general: true
    k8s: true
    kubeApiserverAvailability: true
    kubeApiserverBurnrate: true
    kubeApiserverHistogram: true
    kubeApiserverSlos: true
    kubelet: true
    kubeProxy: true
    kubePrometheusGeneral: true
    kubePrometheusNodeRecording: true
    kubernetesApps: true
    kubernetesResources: true
    kubernetesStorage: true
    kubernetesSystem: true
    kubeScheduler: true
    kubeStateMetrics: true
    network: true
    node: true
    nodeExporterAlerting: true
    nodeExporterRecording: true
    prometheus: true
    prometheusOperator: true

alertmanager:
  fullnameOverride: alertmanager
  enabled: true
  ingress:
    enabled: false

grafana:
  enabled: true
  fullnameOverride: grafana
  forceDeployDatasources: false
  forceDeployDashboards: false
  defaultDashboardsEnabled: true
  defaultDashboardsTimezone: utc
  serviceMonitor:
    enabled: true

kubeApiServer:
  enabled: true

kubelet:
  enabled: true
  serviceMonitor:
    metricRelabelings:
      - action: replace
        sourceLabels:
          - node
        targetLabel: instance

kubeControllerManager:
  enabled: true
  endpoints: # ips of servers
    - 192.168.100.210
    - 192.168.100.211
    - 192.168.100.212

coreDns:
  enabled: true

kubeDns:
  enabled: false

kubeEtcd:
  enabled: true
  endpoints: # ips of servers
    - 192.168.100.210
    - 192.168.100.211
    - 192.168.100.212
  service:
    enabled: true
    port: 2381
    targetPort: 2381
  serviceMonitor:
    insecureSkipVerify: true

kubeScheduler:
  enabled: true
  endpoints: # ips of servers
    - 192.168.100.210
    - 192.168.100.211
    - 192.168.100.212

kubeProxy:
  enabled: true
  endpoints: # ips of servers
    - 192.168.100.210
    - 192.168.100.211
    - 192.168.100.212

kubeStateMetrics:
  enabled: true

kube-state-metrics:
  fullnameOverride: kube-state-metrics
  selfMonitor:
    enabled: true
  prometheus:
    monitor:
      enabled: true
      relabelings:
        - action: replace
          regex: (.*)
          replacement: $1
          sourceLabels:
            - __meta_kubernetes_pod_node_name
          targetLabel: kubernetes_node

nodeExporter:
  enabled: true
  serviceMonitor:
    relabelings:
      - action: replace
        regex: (.*)
        replacement: $1
        sourceLabels:
          - __meta_kubernetes_pod_node_name
        targetLabel: kubernetes_node

prometheus-node-exporter:
  fullnameOverride: node-exporter
  podLabels:
    jobLabel: node-exporter
  extraArgs:
    - --collector.filesystem.mount-points-exclude=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/.+)($|/)
    - --collector.filesystem.fs-types-exclude=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
  service:
    portName: http-metrics
  prometheus:
    monitor:
      enabled: true
      relabelings:
        - action: replace
          regex: (.*)
          replacement: $1
          sourceLabels:
            - __meta_kubernetes_pod_node_name
          targetLabel: kubernetes_node
  resources:
    requests:
      memory: 512Mi
      cpu: 250m
    limits:
      memory: 2048Mi

prometheusOperator:
  enabled: true
  prometheusConfigReloader:
    resources:
      requests:
        cpu: 200m
        memory: 50Mi
      limits:
        memory: 100Mi

prometheus:
  enabled: true
  prometheusSpec:
    replicas: 3
    replicaExternalLabelName: "replica"
    ruleSelectorNilUsesHelmValues: false
    serviceMonitorSelectorNilUsesHelmValues: false
    podMonitorSelectorNilUsesHelmValues: false
    probeSelectorNilUsesHelmValues: false
```

Any help would be appreciated.

I am running a standard K3s cluster.

Mike


What's your helm version?

version.BuildInfo{Version:"v3.16.4", GitCommit:"7877b45b63f95635153b29a42c0c2f4273ec45ca", GitTreeState:"dirty", GoVersion:"go1.23.4"}

What's your kubectl version?

version.BuildInfo{Version:"v3.16.4", GitCommit:"7877b45b63f95635153b29a42c0c2f4273ec45ca", GitTreeState:"dirty", GoVersion:"go1.23.4"}

Which chart?

kube-prometheus-stack

What's the chart version?

68.4.3

What happened?

502 Bad Gateway

Logs from Grafana:

```
logger=context userId=1 orgId=1 uname=admin t=2025-02-01T15:30:22.524729568Z level=error msg="Proxy request failed" err="remote error: tls: handshake failure"
logger=context userId=1 orgId=1 uname=admin t=2025-02-01T15:30:22.524945708Z level=error msg="Request Completed" method=GET path=/api/gnet/dashboards/66 status=502 remote_addr=127.0.0.1 time_ms=50 duration=50.813651ms size=0 referer=http://localhost:3001/dashboard/import handler=/api/gnet/* status_source=downstream
```

What you expected to happen?

No response

How to reproduce it?

No response

Enter the changed values of values.yaml?

No response

Enter the command that you execute and failing/misfunctioning.

helm install prometheus-stack prometheus-community/kube-prometheus-stack --namespace monitoring -f values.yaml

Anything else we need to know?

No response

@mikev1963 mikev1963 added the bug Something isn't working label Feb 1, 2025