
Annotations do not take effect when container injection is specified #1854

Closed
liililil opened this issue Jun 19, 2023 · 7 comments
Labels
area:auto-instrumentation Issues for auto-instrumentation


@liililil

Deployment add annotations:

instrumentation.opentelemetry.io/inject-java: app-instrumentation
instrumentation.opentelemetry.io/container-names: app

The deployment has many containers, and the “app” container’s position in the list is not fixed.
The first time the service is deployed, the opentelemetry-auto-instrumentation init container is missing. But after restarting the deployment, the opentelemetry-auto-instrumentation works normally.

Environment:
Kubernetes 1.20.11
opentelemetry-operator 0.79.0
opentelemetry-autoinstrumentation-java 1.24.0
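For anyone debugging the same symptom, one way to check whether the operator actually injected the init container (a sketch; the pod name and namespace are placeholders):

```shell
# List the init containers on one of the deployment's pods
# (<app-pod-name> and <namespace> are placeholders)
kubectl get pod <app-pod-name> -n <namespace> \
  -o jsonpath='{.spec.initContainers[*].name}'
# When injection succeeded, the output includes "opentelemetry-auto-instrumentation"
```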

@pavolloffay pavolloffay added the area:auto-instrumentation Issues for auto-instrumentation label Jun 19, 2023
@pavolloffay
Member

The first time the service is deployed, the opentelemetry-auto-instrumentation init container is missing.

I am not quite sure if I understand the issue.

Were the OTEL annotations present on the pod spec in the deployment when it was first created? Could you please paste the deployment YAML file here?

@liililil
Author

The first time the service is deployed, the opentelemetry-auto-instrumentation init container is missing.

I am not quite sure if I understand the issue.

Were the OTEL annotations present on the pod spec in the deployment when it was first created? Could you please paste the deployment YAML file here?

When the deployment was first created, the OTEL annotations were present on the pod, but they didn't seem to take effect: there was no "opentelemetry-auto-instrumentation" init container, and the pod events did not include "Created container opentelemetry-auto-instrumentation".
The original deployment file cannot be shared, but I have extracted the relevant structure as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  template:
    metadata:
      annotations:
        instrumentation.opentelemetry.io/inject-java: app-instrumentation
        instrumentation.opentelemetry.io/container-names: app
    spec:
      containers:
        - name: container
          image: container:latest
        - name: app
          image: app:latest

@TylerHelmuth
Member

@liililil do you have any other operators/mutators that could be interfering? We have seen issues in the past where other tools like Istio mess with our operator's ability to inject the initContainer (#1765).

@liililil
Author

@liililil do you have any other operators/mutators that could be interfering? We have seen issues in the past where other tools like Istio mess with our operator's ability to inject the initContainer (#1765).

Thank you for your reply! But in my experiment, nothing else was attached except the OTEL agent. I don't think that is the problem; I suspect instead that something was missing at first deployment that caused the injection to fail.

@dacamposol

@liililil the problem is that your MutatingWebhookConfiguration .failurePolicy is set to Ignore for Pods, which doesn't prevent them from being created before the admission webhook is ready.

Ideally, you could configure Pod creation to fail if the webhook isn't ready:

- admissionReviewVersions:
    - v1
  clientConfig:
    service:
      name: release-name-operator-webhook
      namespace: default
      path: /mutate-v1-pod
  failurePolicy: Fail
  name: mpod.kb.io
  rules:
    - apiGroups:
        - ""
      apiVersions:
        - v1
      operations:
        - CREATE
        - UPDATE
      resources:
        - pods

Now, take into account that the operator itself runs as a Pod, so you need to add a selector that determines which Pods this configuration applies to. Ideally, I would recommend deploying the OpenTelemetry Operator in a namespace like monitoring and excluding that namespace and kube-system from the mutating webhook:

...
namespaceSelector:
  matchExpressions:
  - key: kubernetes.io/metadata.name
    operator: NotIn
    values:
    - kube-system
    - monitoring # whichever namespace you've deployed the operator in
...

Basically, if you're using Helm, the values.yaml should look like this:

admissionWebhooks:
  pods:
    failurePolicy: Fail
  namespaceSelector:
    matchExpressions:
      - key: kubernetes.io/metadata.name
        operator: NotIn
        values:
          - kube-system
          - monitoring
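Assuming the official opentelemetry-operator Helm chart, applying the change would then look something like this (the release name is a placeholder):

```shell
# Apply the updated values to an existing operator release
helm upgrade <release-name> open-telemetry/opentelemetry-operator \
  --namespace monitoring -f values.yaml
```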

You can also add an objectSelector at the same level as the namespaceSelector if you want to be more specific, but keep in mind that the Chart currently applies the same objectSelector to all resources (Pods, OpenTelemetryCollector, Instrumentation...), so it could be problematic if used without care.
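For illustration, an objectSelector sketch (the label key and value are hypothetical, not part of the chart's defaults):

```yaml
objectSelector:
  matchExpressions:
    - key: app.kubernetes.io/inject   # hypothetical opt-in label
      operator: In
      values:
        - "true"
```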

I want to open a pull request to add customizable selectors per resource, but I don't have time right now 😅

@jaronoff97
Contributor

@TylerHelmuth it seems like this may be a helm chart issue?

@TylerHelmuth
Member

Based on @dacamposol's great investigation I believe the helm chart can already provide a solution.
