Describe the bug
We are using automatic instrumentation with the .NET dd-trace library in Kubernetes. A separate problem with our gRPC tracing (tracked as its own bug) causes the tracing library to write a large volume of log messages to disk, and our pods are eventually killed because they use hundreds of MiB of ephemeral storage.
The tracing log messages should not be written to a file. They should be written to stdout/stderr so they can be collected and managed automatically by Kubernetes and the Datadog Agent.
Are we missing a configuration option that would let us do that? If not, I suspect this issue affects many Datadog customers; they may simply not have noticed it yet because nothing in their setup happens to trigger a large volume of log messages.
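If I'm reading the tracer configuration docs correctly, the closest thing to a mitigation we have found is rate-limiting the tracer's own log output via its documented environment variables. A minimal sketch is below; the container name and values are placeholders, and note this does not redirect logs to stdout/stderr, it only limits how much is written to the file.

```yaml
# Sketch of a partial mitigation using documented dd-trace-dotnet settings.
# It only rate-limits the tracer's own log output; logs still go to a file.
# Container name and values are placeholders.
spec:
  containers:
    - name: my-dotnet-service            # hypothetical container name
      env:
        - name: DD_TRACE_DEBUG           # keep debug logging off to limit volume
          value: "false"
        - name: DD_TRACE_LOGGING_RATE    # seconds between identical log messages
          value: "300"
        - name: DD_TRACE_LOG_DIRECTORY   # where the tracer writes its log files
          value: /var/log/datadog/dotnet
```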
To Reproduce
Steps to reproduce the behavior:
1. Set up the tracing library with gRPC.
2. Encounter an issue with how traces are being used that makes the tracer log heavily (we are still investigating this separately).
3. Observe that the pod's ephemeral storage usage is very high.
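For concreteness, this is the kind of pod-level constraint that turns the logging behavior into evictions. The fragment below is illustrative only; names and sizes are placeholders, not our actual manifests.

```yaml
# Illustrative fragment of the kind of pod spec where this bites: with an
# ephemeral-storage limit in this range, the tracer's log files alone can
# push the pod over the limit and get it evicted.
spec:
  containers:
    - name: my-dotnet-service              # hypothetical container name
      image: registry.example.com/my-dotnet-service:latest
      resources:
        requests:
          ephemeral-storage: 100Mi
        limits:
          ephemeral-storage: 256Mi         # hundreds of MiB of tracer logs exceed this
```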
Expected behavior
Log messages should be written to stdout/stderr so they can be managed correctly.
Runtime environment (please complete the following information):
Instrumentation mode: automatic injection via Datadog admission webhook
Tracer version: reproduced with 2.46.0 and 2.49.0
OS: Alpine Linux
CLR: .NET 7.0
Additional context
Running in AWS EKS, with the latest Datadog Agent and Cluster Agent deployed.