
feat(protobuf): Test the protoc version #33636

Open · wants to merge 254 commits into base: nschweitzer/more_dependabot
Conversation

@chouetz (Member) commented Jan 31, 2025

What does this PR do?

  • Add a check of the protoc version and the related tools during the test run, to ensure we are using the correct tooling
  • Focus the check on protobuf files
  • Add tests

Motivation

Prevent a false positive when a go.mod file was modified elsewhere in the git stack (and picked up by `git status -suno`), and ensure the check runs with the pinned versions of the tooling (a sketch of such a check follows).
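
A minimal sketch of what such a pinned-version check can look like, in Python; the `EXPECTED_PROTOC_VERSION` constant and the function name are illustrative assumptions, not the PR's actual implementation:

```python
import re
import shutil
import subprocess

# Hypothetical pin; in the real repo the pinned version lives in build configuration.
EXPECTED_PROTOC_VERSION = "26.1"

def check_protoc_version() -> None:
    """Fail the test run early if protoc is missing or not the pinned version."""
    if shutil.which("protoc") is None:
        raise RuntimeError("protoc not found in PATH")
    # `protoc --version` prints a line like "libprotoc 26.1"
    out = subprocess.run(
        ["protoc", "--version"], capture_output=True, text=True, check=True
    ).stdout
    match = re.search(r"(\d+(?:\.\d+)+)", out)
    found = match.group(1) if match else "unknown"
    if found != EXPECTED_PROTOC_VERSION:
        raise RuntimeError(f"protoc {found} found, expected {EXPECTED_PROTOC_VERSION}")
```

Running an equivalent check before touching .proto files makes a stale or locally overridden protoc fail loudly instead of producing a spurious diff.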

Describe how you validated your changes

Possible Drawbacks / Trade-offs

Additional Notes

mftoure and others added 30 commits January 21, 2025 11:53
@chouetz chouetz requested review from a team as code owners January 31, 2025 18:08
@chouetz chouetz requested review from dustmop, ankitpatel96, mbakht, FlorianVeaux and knusbaum and removed request for a team January 31, 2025 18:09
@agent-platform-auto-pr (Contributor) commented:
[Fast Unit Tests Report]

On pipeline 54609358 (CI Visibility), the following jobs did not run any unit tests:

Jobs:
  • tests_deb-arm64-py3
  • tests_deb-x64-py3
  • tests_flavor_dogstatsd_deb-x64
  • tests_flavor_heroku_deb-x64
  • tests_flavor_iot_deb-x64
  • tests_rpm-arm64-py3
  • tests_rpm-x64-py3
  • tests_windows-x64

If you modified Go files and expected unit tests to run in these jobs, please double-check the job logs. If you think tests should have been executed, reach out to #agent-devx-help

@agent-platform-auto-pr (Contributor) commented:
Uncompressed package size comparison

Comparison with ancestor 77e59acdd78d420679362528a43594226bc7504d

Diff per package
| package | diff | status | size | ancestor | threshold |
| --- | --- | --- | --- | --- | --- |
| datadog-agent-amd64-deb | 0.00MB | | 912.86MB | 912.86MB | 0.50MB |
| datadog-agent-x86_64-rpm | 0.00MB | | 922.60MB | 922.60MB | 0.50MB |
| datadog-agent-x86_64-suse | 0.00MB | | 922.60MB | 922.60MB | 0.50MB |
| datadog-agent-arm64-deb | 0.00MB | | 900.07MB | 900.07MB | 0.50MB |
| datadog-agent-aarch64-rpm | 0.00MB | | 909.79MB | 909.79MB | 0.50MB |
| datadog-dogstatsd-amd64-deb | 0.00MB | | 59.02MB | 59.02MB | 0.50MB |
| datadog-dogstatsd-x86_64-rpm | 0.00MB | | 59.10MB | 59.10MB | 0.50MB |
| datadog-dogstatsd-x86_64-suse | 0.00MB | | 59.10MB | 59.10MB | 0.50MB |
| datadog-dogstatsd-arm64-deb | 0.00MB | | 56.50MB | 56.50MB | 0.50MB |
| datadog-heroku-agent-amd64-deb | 0.00MB | | 478.07MB | 478.07MB | 0.50MB |
| datadog-iot-agent-amd64-deb | 0.00MB | | 93.84MB | 93.84MB | 0.50MB |
| datadog-iot-agent-x86_64-rpm | 0.00MB | | 93.91MB | 93.91MB | 0.50MB |
| datadog-iot-agent-x86_64-suse | 0.00MB | | 93.91MB | 93.91MB | 0.50MB |
| datadog-iot-agent-arm64-deb | 0.00MB | | 89.89MB | 89.89MB | 0.50MB |
| datadog-iot-agent-aarch64-rpm | 0.00MB | | 89.96MB | 89.96MB | 0.50MB |

Decision

✅ Passed
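
For reference, the decision above amounts to checking each package's size delta against its per-package threshold; a minimal sketch (the function name is illustrative):

```python
def size_gate_passes(size_mb: float, ancestor_mb: float, threshold_mb: float = 0.50) -> bool:
    """A package passes when its uncompressed size grew by at most the threshold."""
    return (size_mb - ancestor_mb) <= threshold_mb

# Example row from the table above: datadog-agent-amd64-deb (diff 0.00MB <= 0.50MB).
assert size_gate_passes(912.86, 912.86)
```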

Regression Detector Results

Metrics dashboard
Target profiles
Run ID: 38bb3400-9ff9-4693-8e73-c5d688c28108

Baseline: 77e59ac
Comparison: 555c250
Diff

Optimization Goals: ✅ No significant changes detected

Fine details of change detection per experiment

| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
| --- | --- | --- | --- | --- | --- | --- |
| | quality_gate_logs | % cpu utilization | +1.63 | [-1.50, +4.75] | 1 | Logs |
| | quality_gate_idle | memory utilization | +0.25 | [+0.22, +0.28] | 1 | Logs, bounds checks dashboard |
| | uds_dogstatsd_to_api_cpu | % cpu utilization | +0.23 | [-0.72, +1.18] | 1 | Logs |
| | quality_gate_idle_all_features | memory utilization | +0.12 | [+0.05, +0.18] | 1 | Logs, bounds checks dashboard |
| | file_to_blackhole_0ms_latency_http1 | egress throughput | +0.07 | [-0.77, +0.92] | 1 | Logs |
| | tcp_dd_logs_filter_exclude | ingress throughput | +0.00 | [-0.02, +0.02] | 1 | Logs |
| | file_to_blackhole_300ms_latency | egress throughput | -0.00 | [-0.63, +0.63] | 1 | Logs |
| | file_to_blackhole_100ms_latency | egress throughput | -0.01 | [-0.79, +0.76] | 1 | Logs |
| | uds_dogstatsd_to_api | ingress throughput | -0.02 | [-0.29, +0.25] | 1 | Logs |
| | file_to_blackhole_0ms_latency_http2 | egress throughput | -0.03 | [-0.95, +0.88] | 1 | Logs |
| | file_to_blackhole_1000ms_latency_linear_load | egress throughput | -0.03 | [-0.49, +0.43] | 1 | Logs |
| | file_to_blackhole_0ms_latency | egress throughput | -0.15 | [-1.08, +0.78] | 1 | Logs |
| | file_to_blackhole_1000ms_latency | egress throughput | -0.19 | [-0.97, +0.60] | 1 | Logs |
| | file_to_blackhole_500ms_latency | egress throughput | -0.34 | [-1.12, +0.44] | 1 | Logs |
| | tcp_syslog_to_blackhole | ingress throughput | -0.51 | [-0.59, -0.44] | 1 | Logs |
| | file_tree | memory utilization | -0.63 | [-0.69, -0.56] | 1 | Logs |

Bounds Checks: ✅ Passed

| perf | experiment | bounds_check_name | replicates_passed | links |
| --- | --- | --- | --- | --- |
| | file_to_blackhole_0ms_latency | lost_bytes | 10/10 | |
| | file_to_blackhole_0ms_latency | memory_usage | 10/10 | |
| | file_to_blackhole_0ms_latency_http1 | lost_bytes | 10/10 | |
| | file_to_blackhole_0ms_latency_http1 | memory_usage | 10/10 | |
| | file_to_blackhole_0ms_latency_http2 | lost_bytes | 10/10 | |
| | file_to_blackhole_0ms_latency_http2 | memory_usage | 10/10 | |
| | file_to_blackhole_1000ms_latency | memory_usage | 10/10 | |
| | file_to_blackhole_1000ms_latency_linear_load | memory_usage | 10/10 | |
| | file_to_blackhole_100ms_latency | lost_bytes | 10/10 | |
| | file_to_blackhole_100ms_latency | memory_usage | 10/10 | |
| | file_to_blackhole_300ms_latency | lost_bytes | 10/10 | |
| | file_to_blackhole_300ms_latency | memory_usage | 10/10 | |
| | file_to_blackhole_500ms_latency | lost_bytes | 10/10 | |
| | file_to_blackhole_500ms_latency | memory_usage | 10/10 | |
| | quality_gate_idle | intake_connections | 10/10 | bounds checks dashboard |
| | quality_gate_idle | memory_usage | 10/10 | bounds checks dashboard |
| | quality_gate_idle_all_features | intake_connections | 10/10 | bounds checks dashboard |
| | quality_gate_idle_all_features | memory_usage | 10/10 | bounds checks dashboard |
| | quality_gate_logs | intake_connections | 10/10 | |
| | quality_gate_logs | lost_bytes | 10/10 | |
| | quality_gate_logs | memory_usage | 10/10 | |

Explanation

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we flag a change in performance as a "regression" (a change worth investigating further) only if all of the following criteria are true (a code sketch follows the list):

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".
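
Expressed as code, the three criteria reduce to a decision function along these lines; the dataclass and field names are illustrative, not the detector's actual API:

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    delta_mean_pct: float  # estimated Δ mean %
    ci_low: float          # lower bound of the 90.00% confidence interval
    ci_high: float         # upper bound of the 90.00% confidence interval
    erratic: bool          # whether the configuration marks it "erratic"

def is_regression(exp: Experiment, tolerance: float = 5.0) -> bool:
    """Flag an experiment as a regression only if all three criteria hold."""
    big_enough = abs(exp.delta_mean_pct) >= tolerance           # criterion 1
    ci_excludes_zero = not (exp.ci_low <= 0.0 <= exp.ci_high)   # criterion 2
    return big_enough and ci_excludes_zero and not exp.erratic  # criterion 3

# tcp_syslog_to_blackhole above: CI [-0.59, -0.44] excludes zero,
# but |Δ mean %| = 0.51 < 5.00, so it is not flagged as a regression.
assert not is_regression(Experiment(-0.51, -0.59, -0.44, erratic=False))
```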

CI Pass/Fail Decision

Passed. All Quality Gates passed (the per-gate aggregation is sketched after the list below).

  • quality_gate_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
  • quality_gate_idle_all_features, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_idle, bounds check intake_connections: 10/10 replicas passed. Gate passed.
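
As referenced above, the per-gate aggregation is simple: a gate passes only when every one of its bounds checks passes on all replicates. A sketch (names are illustrative):

```python
def gate_passes(checks: dict[str, tuple[int, int]]) -> bool:
    """A quality gate passes only if every bounds check passes on all replicates."""
    return all(passed == total for passed, total in checks.values())

# quality_gate_logs from the list above: every check at 10/10, so the gate passes.
assert gate_passes({
    "intake_connections": (10, 10),
    "memory_usage": (10, 10),
    "lost_bytes": (10, 10),
})
```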

Labels: changelog/no-changelog, qa/no-code-change (No code change in Agent code requiring validation)