OpenTelemetry flaky test #916

Open · flavio opened this issue Sep 20, 2024 · 4 comments · May be fixed by #1097

flavio commented Sep 20, 2024

The integration test covering OpenTelemetry has been flaky in some runs of the GitHub CI.

This is the error reported:

```
---- test_otel stdout ----
Cannot connect to Kubernetes cluster: Failed to infer configuration: failed to infer config: in-cluster: (failed to read an incluster environment variable: environment variable not found), kubeconfig: (failed to read kubeconfig from '"/home/runner/.kube/config"': No such file or directory (os error 2))
Cannot connect to Kubernetes, context aware policies will not work properly
thread 'test_otel' panicked at tests/integration_test.rs:798:5:
assertion failed: metrics["metrics"].as_array().unwrap().iter().any(|m|
        { m["name"] == "kubewarden_policy_evaluation_latency_milliseconds" })
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```
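
For context, the failing assertion boils down to a single check over the metrics JSON scraped from the OTEL collector. A minimal self-contained reconstruction follows; the `has_latency_metric` helper and the sample payload are illustrative, not the test's actual code, and it assumes the serde_json crate:

```rust
use serde_json::{json, Value};

// Illustrative helper mirroring the failing assertion: does any entry under
// `metrics["metrics"]` carry the latency histogram's name?
fn has_latency_metric(metrics: &Value) -> bool {
    metrics["metrics"]
        .as_array()
        .map(|entries| {
            entries
                .iter()
                .any(|m| m["name"] == "kubewarden_policy_evaluation_latency_milliseconds")
        })
        .unwrap_or(false)
}

fn main() {
    // A payload shaped like the one the test scrapes; only the counter is
    // present, so the check fails, matching the flaky runs.
    let metrics = json!({
        "metrics": [
            { "name": "kubewarden_policy_evaluations_total" }
        ]
    });
    assert!(!has_latency_metric(&metrics));
}
```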

flavio commented Sep 25, 2024

We have not been able to reproduce the flaky test, so we've introduced more debugging information to better understand what's going on.

Moving this out of the "Todo" column; we will wait for the failure to happen again.

flavio removed this from the 1.17 milestone · Sep 25, 2024

jvanz commented Sep 30, 2024

I can see this error when I run the tests locally:

```
test test_otel ... FAILED

failures:

---- test_otel stdout ----
thread 'test_otel' panicked at tests/integration_test.rs:815:5:
metrics_output_json: {
  "resourceMetrics": [
    {
      "resource": {
        "attributes": [
          {
            "key": "service.name",
            "value": {
              "stringValue": "unknown_service"
            }
          },
          {
            "key": "telemetry.sdk.name",
            "value": {
              "stringValue": "opentelemetry"
            }
          },
          {
            "key": "telemetry.sdk.version",
            "value": {
              "stringValue": "0.24.1"
            }
          },
          {
            "key": "telemetry.sdk.language",
            "value": {
              "stringValue": "rust"
            }
          }
        ]
      },
      "scopeMetrics": [
        {
          "metrics": [
            {
              "name": "kubewarden_policy_evaluations_total",
              "sum": {
                "aggregationTemporality": 2,
                "dataPoints": [
                  {
                    "asInt": "1",
                    "attributes": [
                      {
                        "key": "initialization_error",
                        "value": {
                          "stringValue": "bootstrap failure: wrong_url: Error while downloading policy 'wrong_url' from ghcr.io/kubewarden/tests/not_existing:v0.1.0: Fail to interact with OCI registry: Registry error: url https://ghcr.io/v2/kubewarden/tests/not_existing/manifests/v0.1.0, envelope: OCI API errors: [OCI API error: manifest unknown]"
                        }
                      },
                      {
                        "key": "policy_name",
                        "value": {
                          "stringValue": "wrong_url"
                        }
                      }
                    ],
                    "startTimeUnixNano": "1727700657372706354",
                    "timeUnixNano": "1727700657528442954"
                  }
                ],
                "isMonotonic": true
              }
            }
          ],
          "scope": {
            "name": "kubewarden"
          }
        }
      ]
    }
  ]
}
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace


failures:
    test_otel

test result: FAILED. 18 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 115.27s

error: test failed, to rerun pass `--test integration_test`
make: *** [Makefile:24: test] Error 101
```
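
Note that the dump above contains only `kubewarden_policy_evaluations_total`: the latency histogram had not been flushed yet when the test read the collector output, which points to a race between the periodic metrics exporter and the assertion. One possible mitigation is to poll for the metric instead of asserting on a single read. This is a sketch, not the project's actual fix; `METRICS_OUTPUT_PATH`, `read_metrics_output`, and the polling interval are assumptions:

```rust
use std::{thread, time::Duration};

use serde_json::Value;

// Assumed location of the collector's file-exporter output; the real test
// may obtain this differently.
const METRICS_OUTPUT_PATH: &str = "/tmp/metrics.json";

fn read_metrics_output() -> Option<Value> {
    let raw = std::fs::read_to_string(METRICS_OUTPUT_PATH).ok()?;
    // A partially written file fails to parse; treat that as "not yet".
    serde_json::from_str(&raw).ok()
}

// Poll until a metric with the given name shows up anywhere in the
// resourceMetrics/scopeMetrics/metrics hierarchy, instead of asserting once.
fn wait_for_metric(name: &str, attempts: u32) -> Option<Value> {
    for _ in 0..attempts {
        if let Some(json) = read_metrics_output() {
            let found = json["resourceMetrics"]
                .as_array()
                .into_iter()
                .flatten()
                .flat_map(|rm| rm["scopeMetrics"].as_array().into_iter().flatten())
                .flat_map(|sm| sm["metrics"].as_array().into_iter().flatten())
                .any(|m| m["name"] == name);
            if found {
                return Some(json);
            }
        }
        thread::sleep(Duration::from_secs(1));
    }
    None
}
```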

flavio moved this to Todo in Kubewarden · Feb 14, 2025
flavio added this to the 1.22 milestone · Feb 14, 2025

flavio commented Feb 14, 2025

This is getting annoying; let's try to sort this out in time for 1.22.


fabriziosestito commented Feb 17, 2025

Bumping the OTEL collector testcontainer to the latest version (0.119.0 at the time of writing) produces a different set of errors:

```
2025-02-17T17:10:08.612342Z ERROR opentelemetry_sdk:  name="BatchSpanProcessor.Flush.ExportError" reason="ExportFailed(Status { code: Unavailable, message: \", detailed error message: Connection reset by peer (os error 104)\" })" Failed during the export process
2025-02-17T17:10:08.613382Z ERROR opentelemetry_sdk:  name="BatchSpanProcessor.Flush.ExportError" reason="ExportFailed(Status { code: Unavailable, message: \", detailed error message: tls handshake eof\" })" Failed during the export process
2025-02-17T17:10:12.114413Z ERROR opentelemetry_sdk:  name="BatchSpanProcessor.Flush.ExportError" reason="ExportFailed(Status { code: Unavailable, message: \", detailed error message: tls handshake eof\" })" Failed during the export process
test test_otel has been running for over 60 seconds
2025-02-17T17:10:52.115248Z ERROR opentelemetry_sdk:  name="PeriodicReader.ExportFailed" Failed to export metrics reason="Metrics exporter otlp failed with the grpc server returns error (The service is currently unavailable): , detailed error message: tls handshake eof"
2025-02-17T17:11:52.115397Z ERROR opentelemetry_sdk:  name="PeriodicReader.ExportFailed" Failed to export metrics reason="Metrics exporter otlp failed with the grpc server returns error (The service is currently unavailable): , detailed error message: tls handshake eof"
test test_otel ... FAILED

failures:

---- test_otel stdout ----
thread 'test_otel' panicked at tests/integration_test.rs:938:14:
called `Result::unwrap()` on an `Err` value: Error("EOF while parsing a value", line: 1, column: 0)
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace


failures:
    test_otel

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 21 filtered out; finished in 137.64s

error: test failed, to rerun pass `--test integration_test`
```
We need to bump to this version before proceeding with the bug fix.
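
The `tls handshake eof` errors mean the TLS handshake is cut off before completing, i.e. the exporter and the collector disagree on TLS settings for the gRPC connection. For orientation, this is roughly where the client-side TLS knobs live; a minimal sketch assuming the opentelemetry-otlp tonic exporter built with its `tls` feature (the endpoint, CA path, and domain name are placeholders, not the test's actual values):

```rust
use opentelemetry_otlp::WithExportConfig;
use tonic::transport::{Certificate, ClientTlsConfig};

// Sketch of an OTLP/gRPC exporter builder pointed at a TLS-enabled
// collector. All concrete values below are placeholders.
fn build_exporter() -> opentelemetry_otlp::TonicExporterBuilder {
    let ca_pem = std::fs::read("certs/ca.pem").expect("read CA certificate"); // assumed path
    opentelemetry_otlp::new_exporter()
        .tonic()
        .with_endpoint("https://localhost:4317")
        .with_tls_config(
            ClientTlsConfig::new()
                .ca_certificate(Certificate::from_pem(ca_pem))
                .domain_name("localhost"),
        )
}
```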

jvanz self-assigned this · Feb 17, 2025
jvanz moved this from Todo to In Progress in Kubewarden · Feb 17, 2025
jvanz linked a pull request that will close this issue · Feb 19, 2025
Status: In Progress · 4 participants