How do OpenTelemetry metrics get into the OpenSearch Dashboards Metrics view? #2172

Open
Ommkwn2001 opened this issue Sep 4, 2024 · 3 comments


Ommkwn2001 commented Sep 4, 2024

Describe the bug

I installed OpenSearch Dashboards, Data Prepper, and the OpenTelemetry Collector with Helm charts.

First I installed OpenSearch and OpenSearch Dashboards, and then, so that the index is generated automatically in OpenSearch, I installed Data Prepper with the following configuration in values.yaml.

Data Prepper configuration in values.yaml:

```yaml
  config:
    otel-metrics-pipeline:
      workers: 8
      delay: 3000
      source:
        otel_metrics_source:
          health_check_service: true
          ssl: false
      processor:
        - otel_metrics:
            calculate_histogram_buckets: true
            calculate_exponential_histogram_buckets: true
            exponential_histogram_max_allowed_scale: 10
            flatten_attributes: false
      sink:
        - opensearch:
            hosts: ["https://opensearch-cluster-master.default.svc.cluster.local:9200"]
            username: "admin"
            password: "TadhakDev01"
            insecure: true
            index_type: custom
            index: ss4o_metrics-otel-%{yyyy.MM.dd}
            bulk_size: 4
```

This automatically creates an index named "ss4o_metrics-otel-%{yyyy.MM.dd}" in OpenSearch, which is visible in OpenSearch Dashboards.
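For reference, a quick way to confirm that the daily index really exists (and to watch its size grow) is the _cat API; this is only a sketch, using the admin user from the sink configuration above with the password replaced by a placeholder:

```sh
# List the Data Prepper metric indices with their document counts and store size.
# -k skips certificate verification, matching the "insecure: true" sink setting above.
curl -k -u admin:<password> \
  "https://opensearch-cluster-master.default.svc.cluster.local:9200/_cat/indices/ss4o_metrics-otel-*?v"
```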

Then I installed the OpenTelemetry Collector with the following configuration in values.yaml.

OpenTelemetry Collector configuration in values.yaml:

```yaml
  config:
    exporters:
      otlp/data-prepper:
        endpoint: my-data-prepper-release.default.svc.cluster.local:21891
        tls:
          insecure: true

      debug: {}
    extensions:
      # The health_check extension is mandatory for this chart.
      # Without the health_check extension the collector will fail the readiness and liveliness probes.
      # The health_check extension can be modified, but should never be removed.
      health_check:
        endpoint: ${env:MY_POD_IP}:13133
    processors:
      batch: {}
      # Default memory limiter configuration for the collector based on k8s resource limits.
      memory_limiter:
        check_interval: 5s
        limit_mib: 512
        spike_limit_percentage: 25
    receivers:
      kubeletstats:
        insecure_skip_verify: true
        # collection_interval: 30s
        metrics:
          container.cpu.time:
            enabled: true
          container.cpu.utilization:
            enabled: true
          container.memory.available:
            enabled: true
          container.memory.usage:
            enabled: true
          k8s.node.cpu.time:
            enabled: true
          k8s.node.cpu.usage:
            enabled: true
          k8s.node.memory.available:
            enabled: true
          k8s.node.memory.usage:
            enabled: true
          k8s.pod.cpu.time:
            enabled: true
          k8s.pod.cpu.usage:
            enabled: true
          k8s.pod.memory.available:
            enabled: true
          k8s.pod.memory.usage:
            enabled: true

      k8s_cluster:
        node_conditions_to_report: [Ready, MemoryPressure]
        allocatable_types_to_report: [cpu, memory]
        metrics:
          k8s.container.cpu_limit:
            enabled: true
          k8s.container.cpu_request:
            enabled: true
          k8s.container.memory_limit:
            enabled: true
          k8s.container.memory_request:
            enabled: true
      zipkin:
        endpoint: ${env:MY_POD_IP}:9411
    service:
      telemetry:
        metrics:
          address: ${env:MY_POD_IP}:8888
      extensions:
        - health_check
      pipelines:
        logs:
          exporters:
            - debug
          processors:
            - memory_limiter
            - batch
          receivers:
            - otlp
        metrics:
          receivers: [hostmetrics]
          processors: []
          exporters: [otlp/data-prepper]
```
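As a side note, one way to sanity-check that the collector is actually sending metric points to Data Prepper is its own telemetry endpoint, configured above on port 8888. This is only a sketch; the deployment name opentelemetry-collector is an assumption and depends on the Helm release name:

```sh
# Forward the collector's internal telemetry port and look at its exporter counters.
# "deployment/opentelemetry-collector" is a placeholder for the actual Helm release name.
kubectl port-forward deployment/opentelemetry-collector 8888:8888 &
curl -s http://localhost:8888/metrics | grep otelcol_exporter_sent_metric_points
```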

When I install the OpenTelemetry Collector, the total size of the "ss4o_metrics-otel-%{yyyy.MM.dd}" index starts growing automatically.
This is the index image:
(screenshot: index)

All of the metrics show up in Discover in OpenSearch Dashboards.
This is the Discover image:
(screenshot: Discover)

But in the OpenSearch Dashboards Metrics view, no metrics are available.
This is the Metrics image:
(screenshot: Metrics)

Expected behavior
I want all of the indexed metrics to appear in the OpenSearch Dashboards Metrics view.

OpenSearch Version
Please list the version of OpenSearch being used.

Dashboards Version
OpenSearch Dashboards version : 2.16.0

Host/Environment (please complete the following information):

  • Browser: Firefox
Ommkwn2001 added the bug (Something isn't working) and untriaged labels on Sep 4, 2024
LDrago27 commented:

@opensearch-project/admin move it to observability-dashboards

@gaiksaya gaiksaya transferred this issue from opensearch-project/OpenSearch-Dashboards Sep 17, 2024
andrross (Member) commented:

[Catch All Triage - 1, 2, 3]


glelarge commented Oct 30, 2024

Hi, I hit the same issue: the index is displayed in the select box, but no metrics are displayed.

Cause

Checking the developer console reveals an HTTP 500 response with this message:

Fetch Document Names Error:Error: Fetch Otel Metrics Error:[illegal_argument_exception] Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [name] in order to load field data by uninverting the inverted index. Note that this can use significant memory.

(Screenshot from 2024-10-29 12-02-39)

Temp fix

Following this post, the mapping API helped to change the type of the name field to keyword, and now the metrics appear correctly:

(Screenshot from 2024-10-29 15-24-35)
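For anyone wanting to apply the same change, a minimal sketch of the mapping call; the index date and password are placeholders, and a field already mapped as text cannot be converted in place, so this has to hit an index before it receives data or be followed by a reindex:

```sh
# Map "name" explicitly as keyword so Dashboards can aggregate and sort on it.
# <date> and <password> are placeholders; a field already mapped as text cannot be
# changed in place, so the index may need to be recreated or reindexed first.
curl -k -u admin:<password> -X PUT \
  "https://opensearch-cluster-master.default.svc.cluster.local:9200/ss4o_metrics-otel-<date>/_mapping" \
  -H "Content-Type: application/json" \
  -d '{"properties": {"name": {"type": "keyword"}}}'
```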

Expected behavior

The mapping field type of name should be set to keyword.

I didn't find exactly what provides the default mapping used to create the index, and what actually creates this mapping on the index: Data Prepper, the dashboard, or the plugin...
Another curious thing is the mapping creation flow:

  • if OSD is started with nothing sending metrics, the index mapping is not created
  • the mapping seems to be created when metrics are received

So I'm asking whether the mapping relies only on the provided template or also on the received data.
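A rough way to check (just a sketch, with placeholder credentials) is to compare the mapping that actually landed on the indices with the installed index templates:

```sh
# Show how the "name" field is currently mapped in the daily metric indices.
curl -k -u admin:<password> \
  "https://opensearch-cluster-master.default.svc.cluster.local:9200/ss4o_metrics-otel-*/_mapping/field/name?pretty"

# List composable index templates whose patterns could match those indices.
curl -k -u admin:<password> \
  "https://opensearch-cluster-master.default.svc.cluster.local:9200/_index_template?pretty"
```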

Permanent workaround

To fix this error permanently, we have configured the Data Prepper deployment with a custom mapping downloaded directly from the OpenSearch catalog:
https://github.com/opensearch-project/opensearch-catalog/blob/main/schema/observability/metrics/metrics-1.0.0.mapping

I attached the YAML configuration for the Data Prepper metrics pipeline.
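The attached file is not reproduced here; the following is only a sketch of what such a pipeline can look like. The mount path of the mapping file and the template_type / template_file sink options are assumptions and should be checked against the Data Prepper version in use:

```yaml
otel-metrics-pipeline:
  workers: 8
  delay: 3000
  source:
    otel_metrics_source:
      health_check_service: true
      ssl: false
  processor:
    - otel_metrics:
        calculate_histogram_buckets: true
        calculate_exponential_histogram_buckets: true
        exponential_histogram_max_allowed_scale: 10
        flatten_attributes: false
  sink:
    - opensearch:
        hosts: ["https://opensearch-cluster-master.default.svc.cluster.local:9200"]
        username: "admin"
        password: "<password>"
        insecure: true
        index_type: custom
        index: ss4o_metrics-otel-%{yyyy.MM.dd}
        # Apply the ss4o metrics mapping from the opensearch-catalog repository so that
        # fields such as "name" are created as keyword instead of text.
        # The file has to be mounted into the Data Prepper pod, e.g. from a ConfigMap;
        # both the path and these two options are assumptions to verify for your version.
        template_type: index-template
        template_file: /usr/share/data-prepper/templates/metrics-1.0.0.mapping
        bulk_size: 4
```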

Next

That's not the end of it: only HISTOGRAM metrics are displayed, and I don't know why, so I created #2236.
