# CRD Changelog

This document explains major changes made in new CRD versions. It is intended to help users migrate and take advantage of the new features.

## OpenTelemetryCollector.opentelemetry.io/v1beta1

### Migration

There is no need for any immediate user action. The operator will continue to support existing v1alpha1 resources.

In addition, any newly applied v1alpha1 resource will be converted to v1beta1 and stored in the new API version.

Support for v1alpha1 is planned to be removed in a future operator version, so users should migrate promptly. To migrate fully to v1beta1:

1. Update any manifests you have stored outside the cluster, for example in your infrastructure git repository.
2. Apply them, so they're all stored as v1beta1.
3. Update the OpenTelemetryCollector CRD to only store v1beta1:

   ```bash
   kubectl patch customresourcedefinitions opentelemetrycollectors.opentelemetry.io \
     --subresource='status' \
     --type='merge' \
     -p '{"status":{"storedVersions":["v1beta1"]}}'
   ```

For a more thorough explanation of how and why this migration works, see the relevant Kubernetes documentation.
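
One way to verify the result is to check which versions the CRD now lists as stored:

```bash
# Should print ["v1beta1"] once the migration is complete
kubectl get customresourcedefinitions opentelemetrycollectors.opentelemetry.io \
  -o jsonpath='{.status.storedVersions}'
```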

### Operator Lifecycle Manager

If you're installing the opentelemetry-operator in OpenShift using OLM, be advised that only the `AllNamespaces` install mode is now supported, due to the conversion webhook between v1beta1 and v1alpha1. See the OLM docs and the OLM operator groups docs.
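
In practice, `AllNamespaces` install mode corresponds to an OperatorGroup that targets no specific namespaces, i.e. one that selects all of them. A minimal sketch, with illustrative names:

```yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: opentelemetry-operator-group        # illustrative name
  namespace: opentelemetry-operator-system  # illustrative namespace
spec: {}  # no targetNamespaces, so the group selects all namespaces (AllNamespaces mode)
```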

### Structured Configuration

The `config` field containing the Collector configuration is a string in v1alpha1. This has some downsides:

- It's easy to make YAML formatting errors in the content.
- The field can have a lot of content, and may not show useful diffs for changes.
- It's more difficult for the operator to reject invalid configurations at admission.

To solve these issues, we've changed the type of this field to a structure aligned with the OpenTelemetry Collector configuration format. For example:

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: simplest
spec:
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    processors:
      memory_limiter:
        check_interval: 1s
        limit_percentage: 75
        spike_limit_percentage: 15
      batch:
        send_batch_size: 10000
        timeout: 10s

    exporters:
      debug:

    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [memory_limiter, batch]
          exporters: [debug]
```

becomes:

```yaml
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: simplest
spec:
  config:
    receivers:
      otlp:
        protocols:
          grpc: {}
          http: {}
    processors:
      memory_limiter:
        check_interval: 1s
        limit_percentage: 75
        spike_limit_percentage: 15
      batch:
        send_batch_size: 10000
        timeout: 10s
    exporters:
      debug: {}
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [memory_limiter, batch]
          exporters: [debug]
```

> [!NOTE]
> Empty maps, like `debug:` in the configuration above, should have an explicit value of `{}` in the structured format.
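
Because the configuration is now a structured object rather than a single string, individual settings can also be addressed directly. As a sketch, using the `simplest` resource from the example above, a JSON merge patch could update just one processor setting:

```bash
# Change only the batch processor's timeout ("5s" is an example value);
# the merge patch leaves the rest of the structured config untouched.
kubectl patch opentelemetrycollector simplest --type='merge' \
  -p '{"spec":{"config":{"processors":{"batch":{"timeout":"5s"}}}}}'
```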

### Standard label selectors for Target Allocator

Configuring the target allocator to use Prometheus CRDs can involve setting label selectors for those CRDs. In the v1alpha1 Collector, these were simply maps of the required labels. To allow more complex label selection rules and to align with Kubernetes' recommended way of solving this kind of problem, we've switched to standard label selectors.

For example, in v1alpha1 we'd have:

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: simplest
spec:
  targetAllocator:
    prometheusCR:
      serviceMonitorSelector:
        key: value
```

And in v1beta1:

```yaml
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: simplest
spec:
  targetAllocator:
    prometheusCR:
      serviceMonitorSelector:
        matchLabels:
          key: value
```
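
Standard selectors also allow more expressive rules via `matchExpressions`. For example, a selector could match ServiceMonitors whose `team` label has one of several values (the label key and values here are purely illustrative):

```yaml
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: simplest
spec:
  targetAllocator:
    prometheusCR:
      serviceMonitorSelector:
        matchExpressions:
          # Hypothetical label key and values, shown only to illustrate the syntax
          - key: team
            operator: In
            values: [platform, observability]
```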

### Default Collector image

The OpenTelemetry Collector maintainers recently introduced a Collector distribution specifically aimed at Kubernetes workloads.

Our intent is to eventually use this distribution as our default collector image, as opposed to the core distribution we're currently using. After some debate, we've decided NOT to make this change in v1beta1, but rather roll it out more gradually, and with more warning to users. See this issue for more information.