No data shown in Grafana Dashboards #468

Open

Loahrs opened this issue Mar 7, 2024 · 9 comments


Loahrs commented Mar 7, 2024

I noticed that I don't see any data in my Grafana dashboards. I hoped this problem would be fixed after updating to the latest version of the chart (3.0.0 -> 3.3.0), but it has persisted ever since. All dashboards show no data.

I checked Grafana's settings and see that a Prometheus datasource is configured (http://pulsar-kube-prometheus-sta-prometheus.default:9090). If I click "Test" to test the connection, I receive "Successfully queried the Prometheus API".

After that I opened the Prometheus UI and checked the configuration under http://prometheus-address:9090/config. In it I see a bunch of jobs related to Pulsar:

job_name: podMonitor/default/pulsar-zookeeper/0
job_name: podMonitor/default/pulsar-proxy/0
job_name: podMonitor/default/pulsar-broker/0
job_name: podMonitor/default/pulsar-bookie/0

Looking at the Metrics Explorer, I can't see any Pulsar-related metrics.
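
As a sanity check, the PromQL query below should return one up series per discovered Pulsar target (the job regex matches the podMonitor job names above); an empty result means Prometheus isn't discovering the targets at all:

up{job=~"podMonitor/default/pulsar-.*"}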

Here is my values.yaml:

clusterName: cluster-a
namespace: pulsar
namespaceCreate: false
initialize: false

auth:
  authentication:
    enabled: true
    jwt:
      usingSecretKey: false
    provider: jwt
  authorization:
    enabled: true
  superUsers:
    broker: broker-admin
    client: admin
    proxy: proxy-admin

broker:
  configData:
    proxyRoles: proxy-admin

certs:
  internal_issuer:
    enabled: true
    type: selfsigning

components:
  pulsar_manager: false

tls:
  broker:
    enabled: true
  enabled: true
  proxy:
    enabled: true
  zookeeper:
    enabled: true

I installed the Pulsar Helm chart into the namespace "pulsar" and noticed that all Grafana-stack-related components were installed into the "default" namespace.

Could this be an issue?
I also enabled authentication/authorization, maybe the issue has to do with that?


lhotari commented Mar 27, 2024

This might be caused by the configured authentication. I guess the metrics endpoint currently requires a token.


lhotari commented Mar 27, 2024

For the broker, authenticateMetricsEndpoint defaults to false, so it might be something else.
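
For reference, a minimal sketch of pinning that setting explicitly in values.yaml, assuming the chart forwards broker.configData entries into broker.conf (the same mechanism as proxyRoles in the values.yaml above):

broker:
  configData:
    authenticateMetricsEndpoint: "false"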


lerodu commented Dec 28, 2024

I was having the same issue. I could resolve it by installing the Helm chart into the 'default' namespace rather than 'pulsar'.


lhotari commented Dec 28, 2024

> I was having the same issue. I could resolve it by installing the Helm chart into the 'default' namespace rather than 'pulsar'.

Thanks, @lerodu. Most likely this could be resolved by configuring kube-prometheus-stack.prometheus.prometheusSpec.podMonitorNamespaceSelector (docs) in values.yaml.

Something like this

kube-prometheus-stack:
  prometheus:
    prometheusSpec:
      podMonitorNamespaceSelector:
        matchLabels: {}
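
An empty matchLabels selector opens podMonitor discovery to all namespaces. To limit discovery to just the Pulsar namespace instead, a sketch using the automatic kubernetes.io/metadata.name label (set on every namespace in recent Kubernetes versions):

kube-prometheus-stack:
  prometheus:
    prometheusSpec:
      podMonitorNamespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: pulsar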


lhotari commented Dec 28, 2024

It might actually be related to this: https://github.com/prometheus-operator/kube-prometheus/blob/main/docs/customizations/monitoring-additional-namespaces.md#monitoring-additional-namespaces

> In order to monitor additional namespaces, the Prometheus server requires the appropriate Role and RoleBinding to be able to discover targets from that namespace. By default the Prometheus server is limited to the three namespaces it requires: default, kube-system and the namespace you configure the stack to run in via $.values.namespace.

Also mentioned at https://prometheus-operator.dev/kube-prometheus/kube/monitoring-other-namespaces/


lhotari commented Mar 5, 2025

It seems that this problem will be solved in Helm chart release 4.0.0, where #555 is also addressed.
If you'd like to use an existing Prometheus instance that isn't deployed with the Pulsar Helm chart, it will be necessary to configure RBAC so that Prometheus has sufficient access to the namespace where Pulsar is deployed.

Claude-generated RBAC looks something like this; however, it might not be correct. The official docs are at https://prometheus-operator.dev/kube-prometheus/kube/monitoring-other-namespaces/ .

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: prometheus-k8s
  namespace: foo  # Replace with your target namespace
rules:
# Core Kubernetes resources
- apiGroups:
  - ""
  resources:
  - services
  - endpoints
  - pods
  verbs:
  - get
  - list
  - watch

# Prometheus Operator CRDs
- apiGroups:
  - monitoring.coreos.com
  resources:
  - servicemonitors
  - podmonitors
  - prometheusrules
  - probes
  - alertmanagers
  - prometheuses
  - thanosrulers
  verbs:
  - get
  - list
  - watch

# For metric delegation and federation
- apiGroups:
  - monitoring.coreos.com
  resources:
  - servicemonitors/finalizers
  - podmonitors/finalizers
  - probes/finalizers
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: prometheus-k8s
  namespace: foo  # Replace with your target namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: prometheus-k8s
subjects:
- kind: ServiceAccount
  name: prometheus-kube-prometheus-stack-prometheus  # This name depends on your release name
  namespace: monitoring  # This should be the namespace where Prometheus is deployed

lhotari closed this as completed Mar 5, 2025

lhotari commented Mar 5, 2025

One detail is that Helm's --namespace/-n option should be used to set the namespace for the Pulsar deployment, including the kube-prometheus-stack deployment. The Pulsar Helm chart currently only supports a deployment where kube-prometheus-stack is deployed in the same namespace as Pulsar.
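
For illustration, a hedged sketch of such an install (release name and chart reference are assumptions; adjust to your setup):

helm install pulsar apache/pulsar --namespace pulsar --create-namespace --values values.yaml

With this, the bundled kube-prometheus-stack is deployed into the same pulsar namespace as the Pulsar components.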


lhotari commented Mar 7, 2025

Reopening this issue since the dashboards don't connect to the datasource, which causes a problem.

lhotari reopened this Mar 7, 2025