OCPBUGS-35095: drop redundant KSM selector from KubeCPUOvercommit #2422

Open · wants to merge 2 commits into base: master
4 changes: 2 additions & 2 deletions Documentation/deps-versions.md
@@ -1,7 +1,7 @@
| OCP Version | alertmanager | kubeRbacProxy | kubeStateMetrics | kubernetesMetricsServer | monitoringPlugin | nodeExporter | promLabelProxy | prometheus | prometheusOperator | thanos |
|--------------|----------------------------------------------------------------------------------|--------------------------------------------------------------------------|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------------|---------------------------------------------------------------------------|-----------------------------------------------------------------------|---------------------------------------------------------------------------|---------------------------------------------------------------------|------------------------------------------------------------------------------|-----------------------------------------------------------------|
| release-4.20 | [0.27.0](https://github.com/openshift/prometheus-alertmanager/blob/release-4.20) | [0.18.1](https://github.com/openshift/kube-rbac-proxy/blob/release-4.20) | [2.13.0](https://github.com/openshift/kube-state-metrics/blob/release-4.20) | [0.7.2](https://github.com/openshift/kubernetes-metrics-server/blob/release-4.20) | [1.0.0](https://github.com/openshift/monitoring-plugin/blob/release-4.20) | [1.8.2](https://github.com/openshift/node_exporter/blob/release-4.20) | [0.11.0](https://github.com/openshift/prom-label-proxy/blob/release-4.20) | [2.55.1](https://github.com/openshift/prometheus/blob/release-4.20) | [0.78.2](https://github.com/openshift/prometheus-operator/blob/release-4.20) | [0.36.1](https://github.com/openshift/thanos/blob/release-4.20) |
| release-4.19 | [0.27.0](https://github.com/openshift/prometheus-alertmanager/blob/release-4.19) | [0.18.1](https://github.com/openshift/kube-rbac-proxy/blob/release-4.19) | [2.13.0](https://github.com/openshift/kube-state-metrics/blob/release-4.19) | [0.7.2](https://github.com/openshift/kubernetes-metrics-server/blob/release-4.19) | [1.0.0](https://github.com/openshift/monitoring-plugin/blob/release-4.19) | [1.8.2](https://github.com/openshift/node_exporter/blob/release-4.19) | [0.11.0](https://github.com/openshift/prom-label-proxy/blob/release-4.19) | [2.55.1](https://github.com/openshift/prometheus/blob/release-4.19) | [0.78.2](https://github.com/openshift/prometheus-operator/blob/release-4.19) | [0.36.1](https://github.com/openshift/thanos/blob/release-4.19) |
| release-4.20 | [0.27.0](https://github.com/openshift/prometheus-alertmanager/blob/release-4.20) | [0.18.2](https://github.com/openshift/kube-rbac-proxy/blob/release-4.20) | [2.13.0](https://github.com/openshift/kube-state-metrics/blob/release-4.20) | [0.7.2](https://github.com/openshift/kubernetes-metrics-server/blob/release-4.20) | [1.0.0](https://github.com/openshift/monitoring-plugin/blob/release-4.20) | [1.8.2](https://github.com/openshift/node_exporter/blob/release-4.20) | [0.11.0](https://github.com/openshift/prom-label-proxy/blob/release-4.20) | [2.55.1](https://github.com/openshift/prometheus/blob/release-4.20) | [0.78.2](https://github.com/openshift/prometheus-operator/blob/release-4.20) | [0.36.1](https://github.com/openshift/thanos/blob/release-4.20) |
| release-4.19 | [0.27.0](https://github.com/openshift/prometheus-alertmanager/blob/release-4.19) | [0.18.2](https://github.com/openshift/kube-rbac-proxy/blob/release-4.19) | [2.13.0](https://github.com/openshift/kube-state-metrics/blob/release-4.19) | [0.7.2](https://github.com/openshift/kubernetes-metrics-server/blob/release-4.19) | [1.0.0](https://github.com/openshift/monitoring-plugin/blob/release-4.19) | [1.8.2](https://github.com/openshift/node_exporter/blob/release-4.19) | [0.11.0](https://github.com/openshift/prom-label-proxy/blob/release-4.19) | [2.55.1](https://github.com/openshift/prometheus/blob/release-4.19) | [0.78.2](https://github.com/openshift/prometheus-operator/blob/release-4.19) | [0.36.1](https://github.com/openshift/thanos/blob/release-4.19) |
| release-4.18 | [0.27.0](https://github.com/openshift/prometheus-alertmanager/blob/release-4.18) | [0.18.1](https://github.com/openshift/kube-rbac-proxy/blob/release-4.18) | [2.13.0](https://github.com/openshift/kube-state-metrics/blob/release-4.18) | [0.7.2](https://github.com/openshift/kubernetes-metrics-server/blob/release-4.18) | [1.0.0](https://github.com/openshift/monitoring-plugin/blob/release-4.18) | [1.8.2](https://github.com/openshift/node_exporter/blob/release-4.18) | [0.11.0](https://github.com/openshift/prom-label-proxy/blob/release-4.18) | [2.55.1](https://github.com/openshift/prometheus/blob/release-4.18) | [0.78.1](https://github.com/openshift/prometheus-operator/blob/release-4.18) | [0.36.1](https://github.com/openshift/thanos/blob/release-4.18) |
| release-4.17 | [0.27.0](https://github.com/openshift/prometheus-alertmanager/blob/release-4.17) | [0.17.1](https://github.com/openshift/kube-rbac-proxy/blob/release-4.17) | [2.13.0](https://github.com/openshift/kube-state-metrics/blob/release-4.17) | [0.7.1](https://github.com/openshift/kubernetes-metrics-server/blob/release-4.17) | [1.0.0](https://github.com/openshift/monitoring-plugin/blob/release-4.17) | [1.8.2](https://github.com/openshift/node_exporter/blob/release-4.17) | [0.11.0](https://github.com/openshift/prom-label-proxy/blob/release-4.17) | [2.53.1](https://github.com/openshift/prometheus/blob/release-4.17) | [0.75.2](https://github.com/openshift/prometheus-operator/blob/release-4.17) | [0.35.1](https://github.com/openshift/thanos/blob/release-4.17) |
| release-4.16 | [0.26.0](https://github.com/openshift/prometheus-alertmanager/blob/release-4.16) | [0.17.1](https://github.com/openshift/kube-rbac-proxy/blob/release-4.16) | [2.12.0](https://github.com/openshift/kube-state-metrics/blob/release-4.16) | [0.7.1](https://github.com/openshift/kubernetes-metrics-server/blob/release-4.16) | [1.0.0](https://github.com/openshift/monitoring-plugin/blob/release-4.16) | [1.8.0](https://github.com/openshift/node_exporter/blob/release-4.16) | [0.8.1](https://github.com/openshift/prom-label-proxy/blob/release-4.16) | [2.52.0](https://github.com/openshift/prometheus/blob/release-4.16) | [0.73.2](https://github.com/openshift/prometheus-operator/blob/release-4.16) | [0.35.0](https://github.com/openshift/thanos/blob/release-4.16) |
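A side note on the kube-rbac-proxy bump from 0.18.1 to 0.18.2 reflected in this table and in the asset manifests below: a rough spot-check such as the following could be run against the cluster Prometheus to confirm that no monitoring pod still reports the old image. This is only a sketch and not part of the PR; kube_pod_container_info is a standard kube-state-metrics metric, but on OpenShift its image label is usually a digest-pinned payload reference rather than the quay.io tag, so the regex below is an assumption that may need adjusting.

# Sketch: count monitoring pods whose containers still report a v0.18.1
# kube-rbac-proxy image (assumes a recognizable tag in the image label).
count by (namespace, pod) (
  kube_pod_container_info{namespace=~"openshift-(monitoring|user-workload-monitoring)", image=~".*kube-rbac-proxy.*v0\\.18\\.1.*"}
)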
6 changes: 3 additions & 3 deletions assets/alertmanager-user-workload/alertmanager.yaml
@@ -43,7 +43,7 @@ spec:
- --tls-private-key-file=/etc/tls/private/tls.key
- --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
- --config-file=/etc/kube-rbac-proxy/config.yaml
- image: quay.io/brancz/kube-rbac-proxy:v0.18.1
+ image: quay.io/brancz/kube-rbac-proxy:v0.18.2
name: alertmanager-proxy
ports:
- containerPort: 9095
@@ -72,7 +72,7 @@ spec:
- --tls-cert-file=/etc/tls/private/tls.crt
- --tls-private-key-file=/etc/tls/private/tls.key
- --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
- image: quay.io/brancz/kube-rbac-proxy:v0.18.1
+ image: quay.io/brancz/kube-rbac-proxy:v0.18.2
name: tenancy-proxy
ports:
- containerPort: 9092
@@ -101,7 +101,7 @@ spec:
- --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
- --client-ca-file=/etc/tls/client/client-ca.crt
- --allow-paths=/metrics
- image: quay.io/brancz/kube-rbac-proxy:v0.18.1
+ image: quay.io/brancz/kube-rbac-proxy:v0.18.2
name: kube-rbac-proxy-metric
ports:
- containerPort: 9097
6 changes: 3 additions & 3 deletions assets/alertmanager/alertmanager.yaml
@@ -44,7 +44,7 @@ spec:
- --tls-private-key-file=/etc/tls/private/tls.key
- --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
- --ignore-paths=/-/healthy,/-/ready
- image: quay.io/brancz/kube-rbac-proxy:v0.18.1
+ image: quay.io/brancz/kube-rbac-proxy:v0.18.2
name: kube-rbac-proxy-web
ports:
- containerPort: 9095
@@ -68,7 +68,7 @@ spec:
- --tls-cert-file=/etc/tls/private/tls.crt
- --tls-private-key-file=/etc/tls/private/tls.key
- --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
- image: quay.io/brancz/kube-rbac-proxy:v0.18.1
+ image: quay.io/brancz/kube-rbac-proxy:v0.18.2
name: kube-rbac-proxy
ports:
- containerPort: 9092
@@ -92,7 +92,7 @@ spec:
- --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
- --client-ca-file=/etc/tls/client/client-ca.crt
- --allow-paths=/metrics
- image: quay.io/brancz/kube-rbac-proxy:v0.18.1
+ image: quay.io/brancz/kube-rbac-proxy:v0.18.2
name: kube-rbac-proxy-metric
ports:
- containerPort: 9097
18 changes: 12 additions & 6 deletions assets/control-plane/prometheus-rule.yaml
@@ -94,7 +94,7 @@ spec:
summary: StatefulSet update has not been rolled out.
expr: |
(
- max without (revision) (
+ max by(namespace, statefulset, job, cluster) (
kube_statefulset_status_current_revision{namespace=~"(openshift-.*|kube-.*|default)",job="kube-state-metrics"}
unless
kube_statefulset_status_update_revision{namespace=~"(openshift-.*|kube-.*|default)",job="kube-state-metrics"}
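The change above swaps max without (revision) for an explicit max by (namespace, statefulset, job, cluster) in the StatefulSet rollout alert, so the result keeps a fixed label set instead of inheriting whatever extra labels the series happens to carry. A minimal illustration of the difference, using the same metric (a generic sketch, not taken from this repository):

# without: drop only the revision label; everything else on the series
# (e.g. instance, endpoint) is kept in the output.
max without (revision) (kube_statefulset_status_current_revision)
# by: keep exactly the listed labels, which makes the result predictable and
# safe to combine with other kube-state-metrics series.
max by (namespace, statefulset, job, cluster) (kube_statefulset_status_current_revision)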
@@ -146,10 +146,10 @@ spec:
severity: warning
- alert: KubeContainerWaiting
annotations:
- description: pod/{{ $labels.pod }} in namespace {{ $labels.namespace }} on container {{ $labels.container}} has been in waiting state for longer than 1 hour.
+ description: 'pod/{{ $labels.pod }} in namespace {{ $labels.namespace }} on container {{ $labels.container}} has been in waiting state for longer than 1 hour. (reason: "{{ $labels.reason }}").'
summary: Pod container waiting longer than 1 hour
expr: |
- sum by (namespace, pod, container, cluster) (kube_pod_container_status_waiting_reason{namespace=~"(openshift-.*|kube-.*|default)",job="kube-state-metrics"}) > 0
+ kube_pod_container_status_waiting_reason{reason!="CrashLoopBackOff", namespace=~"(openshift-.*|kube-.*|default)",job="kube-state-metrics"} > 0
for: 1h
labels:
severity: warning
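Two things change in KubeContainerWaiting here: the expression now matches the raw kube-state-metrics series instead of summing it away (so the reason label survives into the alert and its description), and CrashLoopBackOff is excluded, since that condition is typically covered by a separate crash-looping alert. An ad-hoc query along these lines shows what would currently match; it is illustrative only and mirrors, rather than defines, the rule:

# Count waiting containers per reason in platform namespaces, excluding
# CrashLoopBackOff, roughly what the updated alert selects.
sum by (reason) (
  kube_pod_container_status_waiting_reason{reason!="CrashLoopBackOff", namespace=~"(openshift-.*|kube-.*|default)", job="kube-state-metrics"}
)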
@@ -232,7 +232,7 @@ spec:
description: Cluster {{ $labels.cluster }} has overcommitted CPU resource requests for Pods by {{ $value }} CPU shares and cannot tolerate node failure.
summary: Cluster has overcommitted CPU resource requests.
expr: |
- sum(namespace_cpu:kube_pod_container_resource_requests:sum{job="kube-state-metrics",}) by (cluster) - (sum(kube_node_status_allocatable{job="kube-state-metrics",resource="cpu"}) by (cluster) - max(kube_node_status_allocatable{job="kube-state-metrics",resource="cpu"}) by (cluster)) > 0
+ sum(namespace_cpu:kube_pod_container_resource_requests:sum{}) by (cluster) - (sum(kube_node_status_allocatable{job="kube-state-metrics",resource="cpu"}) by (cluster) - max(kube_node_status_allocatable{job="kube-state-metrics",resource="cpu"}) by (cluster)) > 0
and
(sum(kube_node_status_allocatable{job="kube-state-metrics",resource="cpu"}) by (cluster) - max(kube_node_status_allocatable{job="kube-state-metrics",resource="cpu"}) by (cluster)) > 0
for: 10m
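This is the change referenced in the PR title. namespace_cpu:kube_pod_container_resource_requests:sum is a recorded series produced by the k8s.rules.container_cpu_requests group further down in this file; per the recording-rule naming convention, its aggregation drops the job label, so a job="kube-state-metrics" matcher on it appears to select nothing and would silently empty the first term of KubeCPUOvercommit. Dropping the matcher restores the intended behaviour. A quick way to sanity-check this against a live cluster is to compare the two counts below (a sketch, not part of the change):

# Query 1: how many recorded series exist at all.
count(namespace_cpu:kube_pod_container_resource_requests:sum)
# Query 2: the same count with the old matcher; if the recorded series carries
# no job label, this returns no data, which is why the matcher was dropped.
count(namespace_cpu:kube_pod_container_resource_requests:sum{job="kube-state-metrics"})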
@@ -336,7 +336,7 @@ spec:
description: The kubernetes apiserver has terminated {{ $value | humanizePercentage }} of its incoming requests.
summary: The kubernetes apiserver has terminated {{ $value | humanizePercentage }} of its incoming requests.
expr: |
- sum(rate(apiserver_request_terminations_total{job="apiserver"}[10m])) / ( sum(rate(apiserver_request_total{job="apiserver"}[10m])) + sum(rate(apiserver_request_terminations_total{job="apiserver"}[10m])) ) > 0.20
+ sum by(cluster) (rate(apiserver_request_terminations_total{job="apiserver"}[10m])) / ( sum by(cluster) (rate(apiserver_request_total{job="apiserver"}[10m])) + sum by(cluster) (rate(apiserver_request_terminations_total{job="apiserver"}[10m])) ) > 0.20

Contributor: Unrelated to this PR, but I'd recommend removing the KubeAPITerminatedRequests alerting rule, since we are not the owners of the API server. The same goes for KubeAPIDown.

Member (author): ACK, I'll open a new PR.

for: 5m
labels:
severity: warning
@@ -477,7 +477,7 @@ spec:
max by(cluster, namespace, pod, node) (kube_pod_info{node!=""})
)
record: node_namespace_pod_container:container_memory_swap
- - name: k8s.rules.container_resource
+ - name: k8s.rules.container_memory_requests
rules:
- expr: |
kube_pod_container_resource_requests{resource="memory",job="kube-state-metrics"} * on (namespace, pod, cluster)
@@ -496,6 +496,8 @@ spec:
)
)
record: namespace_memory:kube_pod_container_resource_requests:sum
+ - name: k8s.rules.container_cpu_requests
+ rules:
- expr: |
kube_pod_container_resource_requests{resource="cpu",job="kube-state-metrics"} * on (namespace, pod, cluster)
group_left() max by (namespace, pod, cluster) (
@@ -513,6 +515,8 @@ spec:
)
)
record: namespace_cpu:kube_pod_container_resource_requests:sum
+ - name: k8s.rules.container_memory_limits
+ rules:
- expr: |
kube_pod_container_resource_limits{resource="memory",job="kube-state-metrics"} * on (namespace, pod, cluster)
group_left() max by (namespace, pod, cluster) (
@@ -530,6 +534,8 @@ spec:
)
)
record: namespace_memory:kube_pod_container_resource_limits:sum
+ - name: k8s.rules.container_cpu_limits
+ rules:
- expr: |
kube_pod_container_resource_limits{resource="cpu",job="kube-state-metrics"} * on (namespace, pod, cluster)
group_left() max by (namespace, pod, cluster) (
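The former k8s.rules.container_resource group is split here into four per-metric groups (container_memory_requests, container_cpu_requests, container_memory_limits, container_cpu_limits); the record: names themselves are unchanged, so existing consumers keep working. As one illustrative consumer of the CPU-requests series (a sketch, not something this PR adds):

# Fraction of total allocatable CPU requested per namespace, built from the
# recorded series produced by the groups above.
sum by (namespace) (namespace_cpu:kube_pod_container_resource_requests:sum)
  / scalar(sum(kube_node_status_allocatable{job="kube-state-metrics", resource="cpu"}))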
4 changes: 2 additions & 2 deletions assets/kube-state-metrics/deployment.yaml
@@ -79,7 +79,7 @@ spec:
- --tls-private-key-file=/etc/tls/private/tls.key
- --client-ca-file=/etc/tls/client/client-ca.crt
- --config-file=/etc/kube-rbac-policy/config.yaml
- image: quay.io/brancz/kube-rbac-proxy:v0.18.1
+ image: quay.io/brancz/kube-rbac-proxy:v0.18.2
name: kube-rbac-proxy-main
ports:
- containerPort: 8443
@@ -108,7 +108,7 @@ spec:
- --tls-private-key-file=/etc/tls/private/tls.key
- --client-ca-file=/etc/tls/client/client-ca.crt
- --config-file=/etc/kube-rbac-policy/config.yaml
- image: quay.io/brancz/kube-rbac-proxy:v0.18.1
+ image: quay.io/brancz/kube-rbac-proxy:v0.18.2
name: kube-rbac-proxy-self
ports:
- containerPort: 9443
2 changes: 1 addition & 1 deletion assets/node-exporter/daemonset.yaml
@@ -93,7 +93,7 @@ spec:
valueFrom:
fieldRef:
fieldPath: status.podIP
- image: quay.io/brancz/kube-rbac-proxy:v0.18.1
+ image: quay.io/brancz/kube-rbac-proxy:v0.18.2
name: kube-rbac-proxy
ports:
- containerPort: 9100
4 changes: 2 additions & 2 deletions assets/openshift-state-metrics/deployment.yaml
@@ -34,7 +34,7 @@ spec:
- --tls-private-key-file=/etc/tls/private/tls.key
- --config-file=/etc/kube-rbac-policy/config.yaml
- --client-ca-file=/etc/tls/client/client-ca.crt
- image: quay.io/brancz/kube-rbac-proxy:v0.18.1
+ image: quay.io/brancz/kube-rbac-proxy:v0.18.2
name: kube-rbac-proxy-main
ports:
- containerPort: 8443
@@ -62,7 +62,7 @@ spec:
- --tls-private-key-file=/etc/tls/private/tls.key
- --config-file=/etc/kube-rbac-policy/config.yaml
- --client-ca-file=/etc/tls/client/client-ca.crt
- image: quay.io/brancz/kube-rbac-proxy:v0.18.1
+ image: quay.io/brancz/kube-rbac-proxy:v0.18.2
name: kube-rbac-proxy-self
ports:
- containerPort: 9443
4 changes: 2 additions & 2 deletions assets/prometheus-k8s/prometheus-rule.yaml
@@ -62,8 +62,8 @@ spec:
severity: warning
- alert: PrometheusErrorSendingAlertsToSomeAlertmanagers
annotations:
- description: '{{ printf "%.1f" $value }}% errors while sending alerts from Prometheus {{$labels.namespace}}/{{$labels.pod}} to Alertmanager {{$labels.alertmanager}}.'
- summary: Prometheus has encountered more than 1% errors sending alerts to a specific Alertmanager.
+ description: '{{ printf "%.1f" $value }}% of alerts sent by Prometheus {{$labels.namespace}}/{{$labels.pod}} to Alertmanager {{$labels.alertmanager}} were affected by errors.'
+ summary: More than 1% of alerts sent by Prometheus to a specific Alertmanager were affected by errors.
expr: |
(
rate(prometheus_notifications_errors_total{job=~"prometheus-k8s|prometheus-user-workload"}[5m])
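Only the annotation wording changes for this alert; the expression is untouched and the hunk shows just its first term. For context, the full rule in the upstream Prometheus mixin has roughly the following shape (reconstructed here as a sketch, so treat the exact structure and threshold as indicative rather than authoritative):

# Percentage of notifications to a given Alertmanager that failed over 5m.
(
  rate(prometheus_notifications_errors_total{job=~"prometheus-k8s|prometheus-user-workload"}[5m])
/
  rate(prometheus_notifications_sent_total{job=~"prometheus-k8s|prometheus-user-workload"}[5m])
)
* 100
> 1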
6 changes: 3 additions & 3 deletions assets/prometheus-k8s/prometheus.yaml
@@ -56,7 +56,7 @@ spec:
- --tls-private-key-file=/etc/tls/private/tls.key
- --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
- --ignore-paths=/-/healthy,/-/ready
- image: quay.io/brancz/kube-rbac-proxy:v0.18.1
+ image: quay.io/brancz/kube-rbac-proxy:v0.18.2
name: kube-rbac-proxy-web
ports:
- containerPort: 9091
@@ -85,7 +85,7 @@ spec:
- --tls-private-key-file=/etc/tls/private/tls.key
- --client-ca-file=/etc/tls/client/client-ca.crt
- --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
- image: quay.io/brancz/kube-rbac-proxy:v0.18.1
+ image: quay.io/brancz/kube-rbac-proxy:v0.18.2
name: kube-rbac-proxy
ports:
- containerPort: 9092
@@ -117,7 +117,7 @@ spec:
valueFrom:
fieldRef:
fieldPath: status.podIP
- image: quay.io/brancz/kube-rbac-proxy:v0.18.1
+ image: quay.io/brancz/kube-rbac-proxy:v0.18.2
name: kube-rbac-proxy-thanos
ports:
- containerPort: 10903
2 changes: 1 addition & 1 deletion assets/prometheus-operator-user-workload/deployment.yaml
@@ -69,7 +69,7 @@ spec:
- --tls-private-key-file=/etc/tls/private/tls.key
- --config-file=/etc/kube-rbac-policy/config.yaml
- --client-ca-file=/etc/tls/client/client-ca.crt
- image: quay.io/brancz/kube-rbac-proxy:v0.18.1
+ image: quay.io/brancz/kube-rbac-proxy:v0.18.2
name: kube-rbac-proxy
ports:
- containerPort: 8443
2 changes: 1 addition & 1 deletion assets/prometheus-operator/deployment.yaml
@@ -70,7 +70,7 @@ spec:
- --tls-private-key-file=/etc/tls/private/tls.key
- --client-ca-file=/etc/tls/client/client-ca.crt
- --config-file=/etc/kube-rbac-policy/config.yaml
- image: quay.io/brancz/kube-rbac-proxy:v0.18.1
+ image: quay.io/brancz/kube-rbac-proxy:v0.18.2
name: kube-rbac-proxy
ports:
- containerPort: 8443