
In-buffer batching with perf buffers for NPM #31402

Open
wants to merge 14 commits into base: main

Conversation

brycekahle
Member

@brycekahle brycekahle commented Nov 23, 2024

What does this PR do?

  • Reworks perf/ring buffer abstraction and usage to something less fragile.
  • Adds the option network_config.enable_kernel_batching for NPM. This enables the status quo custom batching.

Important

network_config.enable_kernel_batching is false by default, which means it must be enabled to restore the previous behavior.

  • Changes the data flow from perf/ring buffers to use callbacks, due to channel overhead.
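For reference, enabling the new option in the system-probe configuration might look like the following sketch; the key name comes from this PR, but the file location and nesting are assumptions:

```yaml
# system-probe.yaml (sketch; nesting assumed)
network_config:
  enable_kernel_batching: true   # restore the previous custom batching behavior
```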

Motivation

  • Decreased usage of eBPF stack space, allowing the size of the eBPF data structure to increase.
  • Runtime flexibility in how many events are batched in-buffer for perf buffers, allowing a tradeoff between buffer size and userspace CPU usage.

Describe how to test/QA your changes

  • Automated tests are passing
  • Manual performance testing on load-testing clusters is underway. I will update this PR with those results when I have them.
  • I will also deploy to a staging cluster before merge.

Possible Drawbacks / Trade-offs

  • Using ring buffers by default is discouraged for NPM/USM, because neither product benefits from the ordering guarantees of ring buffers, nor from the reserve helper call that avoids using stack space. Using in-buffer batching with perf buffers results in lower CPU usage. I recommend changing the default, but I have not done that here.

Note

The current (and unchanged) configuration defaults to using ring buffers, if available.

  • perf/ring buffer sizes need to be re-evaluated and probably increased, because the userspace buffer has been removed and in-buffer batching uses more of the buffer space before data is read.

Additional Notes

EBPF-481

  • USM batching was not modified because it has a different system. This can be reworked in a future PR.

@brycekahle brycekahle added changelog/no-changelog team/ebpf-platform qa/done QA done before merge and regressions are covered by tests labels Nov 23, 2024
@brycekahle brycekahle added this to the 7.61.0 milestone Nov 23, 2024
@brycekahle brycekahle requested review from a team as code owners November 23, 2024 00:05
@brycekahle brycekahle force-pushed the bryce.kahle/perf-buffer-npm-only branch from 2f1f991 to 99dae30 Compare November 23, 2024 00:10
@github-actions github-actions bot added component/system-probe long review PR is complex, plan time to review it labels Nov 23, 2024

cit-pr-commenter bot commented Nov 23, 2024

Go Package Import Differences

Baseline: be4b703
Comparison: 6d8e725

binary: system-probe, os: linux, arch: amd64 — change: +3, -0
  +github.com/DataDog/datadog-agent/pkg/ebpf/perf
  +github.com/DataDog/datadog-agent/pkg/util/encoding
  +github.com/DataDog/datadog-agent/pkg/util/slices
binary: system-probe, os: linux, arch: arm64 — change: +3, -0
  +github.com/DataDog/datadog-agent/pkg/ebpf/perf
  +github.com/DataDog/datadog-agent/pkg/util/encoding
  +github.com/DataDog/datadog-agent/pkg/util/slices

@agent-platform-auto-pr
Contributor

agent-platform-auto-pr bot commented Nov 23, 2024

Test changes on VM

Use this command from test-infra-definitions to manually test this PR's changes on a VM:

inv create-vm --pipeline-id=50238895 --os-family=ubuntu

Note: This applies to commit 6d8e725


cit-pr-commenter bot commented Nov 23, 2024

Regression Detector

Regression Detector Results

Metrics dashboard
Target profiles
Run ID: 1e935a25-8416-4957-9bdd-97ed14fdec7d

Baseline: be4b703
Comparison: 6d8e725
Diff

Optimization Goals: ❌ Significant changes detected

perf experiment goal Δ mean % Δ mean % CI trials links
basic_py_check % cpu utilization -8.70 [-12.32, -5.08] 1 Logs

Fine details of change detection per experiment

perf experiment goal Δ mean % Δ mean % CI trials links
quality_gate_logs % cpu utilization +1.53 [-1.47, +4.53] 1 Logs
tcp_syslog_to_blackhole ingress throughput +1.25 [+1.18, +1.32] 1 Logs
otel_to_otel_logs ingress throughput +0.66 [-0.02, +1.34] 1 Logs
pycheck_lots_of_tags % cpu utilization +0.62 [-2.89, +4.12] 1 Logs
file_to_blackhole_500ms_latency egress throughput +0.14 [-0.62, +0.91] 1 Logs
file_to_blackhole_100ms_latency egress throughput +0.05 [-0.70, +0.80] 1 Logs
file_to_blackhole_0ms_latency egress throughput +0.03 [-0.81, +0.87] 1 Logs
uds_dogstatsd_to_api_cpu % cpu utilization +0.02 [-0.71, +0.75] 1 Logs
file_to_blackhole_300ms_latency egress throughput +0.00 [-0.64, +0.64] 1 Logs
tcp_dd_logs_filter_exclude ingress throughput -0.00 [-0.01, +0.01] 1 Logs
uds_dogstatsd_to_api ingress throughput -0.00 [-0.12, +0.11] 1 Logs
file_to_blackhole_1000ms_latency egress throughput -0.08 [-0.84, +0.69] 1 Logs
quality_gate_idle memory utilization -0.19 [-0.24, -0.14] 1 Logs bounds checks dashboard
quality_gate_idle_all_features memory utilization -0.36 [-0.47, -0.26] 1 Logs bounds checks dashboard
file_to_blackhole_1000ms_latency_linear_load egress throughput -0.37 [-0.84, +0.11] 1 Logs
file_tree memory utilization -0.39 [-0.54, -0.24] 1 Logs
basic_py_check % cpu utilization -8.70 [-12.32, -5.08] 1 Logs

Bounds Checks: ❌ Failed

perf experiment bounds_check_name replicates_passed links
file_to_blackhole_300ms_latency lost_bytes 9/10
file_to_blackhole_0ms_latency lost_bytes 10/10
file_to_blackhole_0ms_latency memory_usage 10/10
file_to_blackhole_1000ms_latency memory_usage 10/10
file_to_blackhole_1000ms_latency_linear_load memory_usage 10/10
file_to_blackhole_100ms_latency lost_bytes 10/10
file_to_blackhole_100ms_latency memory_usage 10/10
file_to_blackhole_300ms_latency memory_usage 10/10
file_to_blackhole_500ms_latency lost_bytes 10/10
file_to_blackhole_500ms_latency memory_usage 10/10
quality_gate_idle memory_usage 10/10 bounds checks dashboard
quality_gate_idle_all_features memory_usage 10/10 bounds checks dashboard
quality_gate_logs lost_bytes 10/10
quality_gate_logs memory_usage 10/10

Explanation

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".

CI Pass/Fail Decision

Passed. All Quality Gates passed.

  • quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.

Contributor

@guyarb guyarb left a comment

The PR changes the classification code base, which is owned by USM.
Blocking the PR to ensure we can review it and verify there's no concern on our side.

A side note: this is another large PR. Please try to split it into smaller pieces.

pkg/util/encoding/binary.go (resolved)
pkg/util/slices/map.go (resolved)
@brycekahle
Member Author

The PR changes classification code base which is owned by USM

@guyarb do we need to update CODEOWNERS to reflect this?

@@ -36,7 +36,7 @@ BPF_PERF_EVENT_ARRAY_MAP(conn_close_event, __u32)
* or BPF_MAP_TYPE_PERCPU_ARRAY, but they are not available in
* some of the Kernels we support (4.4 ~ 4.6)
*/
-BPF_HASH_MAP(conn_close_batch, __u32, batch_t, 1024)
+BPF_HASH_MAP(conn_close_batch, __u32, batch_t, 1)
Contributor

nit: I think typically we set map sizes to 0 when we intend to overwrite them in userspace

Member Author

That is for maps that must be resized. This can remain at 1 if it is not being used, but it must be included because the code references it.

Contributor

makes sense 👍

Contributor

I think setting this to 0 is still a good safeguard. We can set this to 1 or ideally remove this from the map spec if not required, at load time.

Contributor

I didn't know we could remove maps from the spec at load time? If so it's likely a trivial difference in memory footprint but a good pattern nonetheless

Member Author

I think setting this to 0 is still a good safeguard

Safeguard against what? The default configuration all matches at the moment. Changing this to 0 means that you must resize the map even when using the default value for whether or not to do the custom batching.

ideally remove this from the map spec if not required, at load time

I don't think we can completely remove the map spec. This is because there is still code that references that map, even though it is protected by a branch that will never get taken.

Contributor

Safeguard against what?

Against loading the map with max entries set to 1, because the userspace forgot to resize it. This may happen during a refactor, when someone moves the code around. Having a default value of 0 forces the userspace to think about the correct value under all conditions.

Member Author

max entries set to 1 is the desired value when custom batching is disabled.

Member Author

If you forgot to resize, then the batch manager will fail loudly when it tries to set up the default map values.
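The failure mode being debated can be sketched as follows; the struct and function names are hypothetical illustrations of the safeguard, not cilium/ebpf's actual API or this PR's code:

```go
package main

import (
	"errors"
	"fmt"
)

// mapSpec is a hypothetical stand-in for an eBPF map spec.
type mapSpec struct {
	Name       string
	MaxEntries uint32
}

// resolveMaxEntries models the safeguard: a 0 default forces userspace to
// pick a value explicitly, while a non-zero default silently loads a
// possibly-too-small map if the resize step is forgotten.
func resolveMaxEntries(spec *mapSpec, override uint32) (uint32, error) {
	if override > 0 {
		return override, nil
	}
	if spec.MaxEntries == 0 {
		return 0, errors.New("map " + spec.Name + ": max entries must be set by userspace")
	}
	return spec.MaxEntries, nil
}

func main() {
	// Userspace "forgot" to resize: with a 0 default this fails loudly.
	_, err := resolveMaxEntries(&mapSpec{Name: "conn_close_batch"}, 0)
	fmt.Println(err != nil)
}
```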

@agent-platform-auto-pr
Contributor

agent-platform-auto-pr bot commented Nov 25, 2024

eBPF complexity changes

Summary result: ✅ - stable

  • Highest complexity change (%): +0.00%
  • Highest complexity change (abs.): +0 instructions
  • Programs that were above the 85.0% limit of instructions and are now below: 0
  • Programs that were below the 85.0% limit of instructions and are now above: 0
tracer details

tracer [programs with changes]

Program Avg. complexity Distro with highest complexity Distro with lowest complexity
kprobe__udp_destroy_sock 🟢 867.0 (-624.7, -41.88%) fedora_38/arm64: 🟢 1177.0 (-1230.0, -51.10%) amazon_5.4/arm64: 🟢 712.0 (-322.0, -31.14%)
kprobe__udpv6_destroy_sock 🟢 867.0 (-624.7, -41.88%) fedora_38/arm64: 🟢 1177.0 (-1230.0, -51.10%) amazon_5.4/arm64: 🟢 712.0 (-322.0, -31.14%)
kretprobe__tcp_close_clean_protocols 🟢 208.2 (-4.2, -1.98%) amazon_5.4/arm64: 🟢 214.0 (-3.0, -1.38%) debian_10/arm64: 🟢 197.0 (-2.0, -1.01%)

tracer [programs without changes]

Program Avg. complexity Distro with highest complexity Distro with lowest complexity
kprobe__tcp_connect ⚪ 457.3 (+0.0, +0.00%) fedora_38/arm64: ⚪ 538.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 417.0 (+0.0, +0.00%)
kprobe__tcp_done ⚪ 460.0 (+0.0, +0.00%) fedora_38/arm64: ⚪ 540.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 420.0 (+0.0, +0.00%)
kprobe__tcp_finish_connect ⚪ 630.7 (+0.0, +0.00%) fedora_38/arm64: ⚪ 732.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 580.0 (+0.0, +0.00%)
kprobe__tcp_read_sock ⚪ 23.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 23.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 23.0 (+0.0, +0.00%)
kprobe__tcp_recvmsg ⚪ 28.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 28.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 28.0 (+0.0, +0.00%)
kprobe__tcp_recvmsg__pre_4_1_0 ⚪ 28.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 28.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 28.0 (+0.0, +0.00%)
kprobe__tcp_recvmsg__pre_5_19_0 ⚪ 28.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 28.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 28.0 (+0.0, +0.00%)
kprobe__tcp_retransmit_skb ⚪ 33.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 33.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 33.0 (+0.0, +0.00%)
kprobe__tcp_sendmsg ⚪ 24.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 24.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 24.0 (+0.0, +0.00%)
kprobe__tcp_sendmsg__pre_4_1_0 ⚪ 24.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 24.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 24.0 (+0.0, +0.00%)
kprobe__tcp_sendpage ⚪ 24.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 24.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 24.0 (+0.0, +0.00%)
kprobe__udp_recvmsg ⚪ 28.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 28.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 28.0 (+0.0, +0.00%)
kprobe__udp_recvmsg_pre_4_1_0 ⚪ 30.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 30.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 30.0 (+0.0, +0.00%)
kprobe__udp_recvmsg_pre_4_7_0 ⚪ 30.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 30.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 30.0 (+0.0, +0.00%)
kprobe__udp_recvmsg_pre_5_19_0 ⚪ 28.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 28.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 28.0 (+0.0, +0.00%)
kprobe__udp_sendpage ⚪ 23.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 23.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 23.0 (+0.0, +0.00%)
kprobe__udpv6_recvmsg ⚪ 28.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 28.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 28.0 (+0.0, +0.00%)
kprobe__udpv6_recvmsg_pre_4_1_0 ⚪ 30.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 30.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 30.0 (+0.0, +0.00%)
kprobe__udpv6_recvmsg_pre_4_7_0 ⚪ 30.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 30.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 30.0 (+0.0, +0.00%)
kprobe__udpv6_recvmsg_pre_5_19_0 ⚪ 28.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 28.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 28.0 (+0.0, +0.00%)
kretprobe__inet6_bind ⚪ 205.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 205.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 205.0 (+0.0, +0.00%)
kretprobe__inet_bind ⚪ 205.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 205.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 205.0 (+0.0, +0.00%)
kretprobe__inet_csk_accept ⚪ 820.0 (+0.0, +0.00%) fedora_38/arm64: ⚪ 916.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 772.0 (+0.0, +0.00%)
kretprobe__ip6_make_skb ⚪ 1266.3 (+0.0, +0.00%) fedora_38/arm64: ⚪ 1698.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 927.0 (+0.0, +0.00%)
kretprobe__ip_make_skb ⚪ 830.7 (+0.0, +0.00%) fedora_38/arm64: ⚪ 986.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 753.0 (+0.0, +0.00%)
kretprobe__tcp_close_flush ⚪ 216.4 (+0.0, +0.00%) debian_10/arm64: ⚪ 217.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 216.0 (+0.0, +0.00%)
kretprobe__tcp_done_flush ⚪ 216.4 (+0.0, +0.00%) debian_10/arm64: ⚪ 217.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 216.0 (+0.0, +0.00%)
kretprobe__tcp_read_sock ⚪ 711.0 (+0.0, +0.00%) fedora_38/arm64: ⚪ 807.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 663.0 (+0.0, +0.00%)
kretprobe__tcp_recvmsg ⚪ 711.0 (+0.0, +0.00%) fedora_38/arm64: ⚪ 807.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 663.0 (+0.0, +0.00%)
kretprobe__tcp_retransmit_skb ⚪ 475.7 (+0.0, +0.00%) fedora_38/arm64: ⚪ 559.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 434.0 (+0.0, +0.00%)
kretprobe__tcp_sendmsg ⚪ 711.0 (+0.0, +0.00%) fedora_38/arm64: ⚪ 807.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 663.0 (+0.0, +0.00%)
kretprobe__tcp_sendpage ⚪ 711.0 (+0.0, +0.00%) fedora_38/arm64: ⚪ 807.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 663.0 (+0.0, +0.00%)
kretprobe__udp_destroy_sock ⚪ 217.4 (+0.0, +0.00%) debian_10/arm64: ⚪ 218.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 217.0 (+0.0, +0.00%)
kretprobe__udp_recvmsg ⚪ 9.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 9.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 9.0 (+0.0, +0.00%)
kretprobe__udp_recvmsg_pre_4_7_0 ⚪ 1121.3 (+0.0, +0.00%) fedora_38/arm64: ⚪ 1352.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 853.0 (+0.0, +0.00%)
kretprobe__udp_sendpage ⚪ 646.7 (+0.0, +0.00%) fedora_38/arm64: ⚪ 732.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 604.0 (+0.0, +0.00%)
kretprobe__udpv6_destroy_sock ⚪ 217.4 (+0.0, +0.00%) debian_10/arm64: ⚪ 218.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 217.0 (+0.0, +0.00%)
kretprobe__udpv6_recvmsg ⚪ 9.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 9.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 9.0 (+0.0, +0.00%)
kretprobe__udpv6_recvmsg_pre_4_7_0 ⚪ 1121.3 (+0.0, +0.00%) fedora_38/arm64: ⚪ 1352.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 853.0 (+0.0, +0.00%)
socket__classifier_dbs ⚪ 2442.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 2450.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 2427.0 (+0.0, +0.00%)
socket__classifier_entry ⚪ 2555.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 2634.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 2425.0 (+0.0, +0.00%)
socket__classifier_grpc ⚪ 8913.0 (+0.0, +0.00%) fedora_38/arm64: ⚪ 10046.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 6673.0 (+0.0, +0.00%)
socket__classifier_queues ⚪ 7035.7 (+0.0, +0.00%) centos_8/arm64: ⚪ 8353.0 (+0.0, +0.00%) amazon_5.4/arm64: ⚪ 4558.0 (+0.0, +0.00%)
tracepoint__net__net_dev_queue ⚪ 971.7 (+0.0, +0.00%) fedora_38/arm64: ⚪ 1183.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 800.0 (+0.0, +0.00%)
tracer_fentry details

tracer_fentry [programs with changes]

Program Avg. complexity Distro with highest complexity Distro with lowest complexity
udp_destroy_sock 🟢 967.0 (-776.0, -44.52%) fedora_38/arm64: 🟢 1202.0 (-1230.0, -50.58%) centos_8/arm64: 🟢 732.0 (-322.0, -30.55%)
udpv6_destroy_sock 🟢 967.0 (-776.0, -44.52%) fedora_38/arm64: 🟢 1202.0 (-1230.0, -50.58%) centos_8/arm64: 🟢 732.0 (-322.0, -30.55%)

tracer_fentry [programs without changes]

Program Avg. complexity Distro with highest complexity Distro with lowest complexity
tcp_close_exit ⚪ 230.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 230.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 230.0 (+0.0, +0.00%)
tcp_connect ⚪ 487.5 (+0.0, +0.00%) fedora_38/arm64: ⚪ 547.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 428.0 (+0.0, +0.00%)
tcp_finish_connect ⚪ 668.0 (+0.0, +0.00%) fedora_38/arm64: ⚪ 744.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 592.0 (+0.0, +0.00%)
tcp_recvmsg_exit ⚪ 804.0 (+0.0, +0.00%) fedora_38/arm64: ⚪ 804.0 (+0.0, +0.00%) fedora_38/arm64: ⚪ 804.0 (+0.0, +0.00%)
tcp_recvmsg_exit_pre_5_19_0 ⚪ 660.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 660.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 660.0 (+0.0, +0.00%)
tcp_retransmit_skb ⚪ 44.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 44.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 44.0 (+0.0, +0.00%)
tcp_retransmit_skb_exit ⚪ 508.5 (+0.0, +0.00%) fedora_38/arm64: ⚪ 571.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 446.0 (+0.0, +0.00%)
tcp_sendmsg_exit ⚪ 732.0 (+0.0, +0.00%) fedora_38/arm64: ⚪ 804.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 660.0 (+0.0, +0.00%)
tcp_sendpage_exit ⚪ 732.0 (+0.0, +0.00%) fedora_38/arm64: ⚪ 804.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 660.0 (+0.0, +0.00%)
udp_destroy_sock_exit ⚪ 230.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 230.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 230.0 (+0.0, +0.00%)
udp_recvmsg ⚪ 40.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 40.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 40.0 (+0.0, +0.00%)
udp_recvmsg_exit ⚪ 23.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 23.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 23.0 (+0.0, +0.00%)
udp_recvmsg_exit_pre_5_19_0 ⚪ 23.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 23.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 23.0 (+0.0, +0.00%)
udp_sendmsg_exit ⚪ 247.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 247.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 247.0 (+0.0, +0.00%)
udp_sendpage_exit ⚪ 640.0 (+0.0, +0.00%) fedora_38/arm64: ⚪ 701.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 579.0 (+0.0, +0.00%)
udpv6_destroy_sock_exit ⚪ 230.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 230.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 230.0 (+0.0, +0.00%)
udpv6_recvmsg ⚪ 40.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 40.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 40.0 (+0.0, +0.00%)
udpv6_recvmsg_exit ⚪ 23.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 23.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 23.0 (+0.0, +0.00%)
udpv6_recvmsg_exit_pre_5_19_0 ⚪ 23.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 23.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 23.0 (+0.0, +0.00%)
udpv6_sendmsg_exit ⚪ 247.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 247.0 (+0.0, +0.00%) centos_8/arm64: ⚪ 247.0 (+0.0, +0.00%)

This report was generated based on the complexity data for the current branch bryce.kahle/perf-buffer-npm-only (pipeline 50238895, commit 6d8e725) and the base branch main (commit be4b703). Objects without changes are not reported. Contact #ebpf-platform if you have any questions/feedback.

Table complexity legend: 🔵 - new; ⚪ - unchanged; 🟢 - reduced; 🔴 - increased

@@ -194,6 +194,7 @@ func InitSystemProbeConfig(cfg pkgconfigmodel.Config) {
cfg.BindEnv(join(netNS, "max_failed_connections_buffered"))
cfg.BindEnvAndSetDefault(join(spNS, "closed_connection_flush_threshold"), 0)
cfg.BindEnvAndSetDefault(join(spNS, "closed_channel_size"), 500)
cfg.BindEnvAndSetDefault(join(netNS, "closed_buffer_wakeup_count"), 5)
Contributor

What is the plan to migrate other perf-buffers to this technique?
Do we plan to create a different configuration per perf-buffer?
Maybe we should have a single configuration for all perf buffers, and allow different teams to create a dedicated configuration to override it

cfg.BindEnvAndSetDefault(join(spNS, "common_wakeup_count"), 5)
cfg.BindEnv(join(netNS, "closed_buffer_wakeup_count"))

in adjust_npm.go

	applyDefault(cfg, netNS("closed_buffer_wakeup_count"), cfg.GetInt(spNS("common_wakeup_count")))

Member Author

What is the plan to migrate other perf-buffers to this technique?

I was keeping each team to a separate PR. I wanted to consult first to ensure it was actually a feature they wanted.

Do we plan to create a different configuration per perf-buffer?

Yes, because how much you want to keep in the buffer before wakeup is a use-case-specific value.

Contributor

Should this be specified in terms of bytes or maybe percentages, so the code can calculate the appropriate count based on the size of the records?
For example, if we want a flush to happen when the perf buffer is at 25% capacity, then this config value can specify that (either as a percentage or in bytes), and the code can calculate the appropriate count based on the size of the perf buffer and the record items.

Member Author

@brycekahle brycekahle Dec 2, 2024

That sounds like something that could be addressed in a future PR by NPM folks. That is additional complexity that I don't think is necessary for this PR, which is trying to closely match the behavior from the custom batching.
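The percentage-based idea floated above can be sketched in Go. This is only an illustration of the arithmetic, not code from the PR; the function name and the fixed-record-size assumption are hypothetical:

```go
package main

import "fmt"

// wakeupCountFor computes how many fixed-size records fit into the target
// fraction of a per-CPU perf buffer, clamped to at least one record.
func wakeupCountFor(bufferBytes, recordBytes int, targetPct float64) int {
	n := int(float64(bufferBytes) * targetPct / 100.0 / float64(recordBytes))
	if n < 1 {
		n = 1
	}
	return n
}

func main() {
	// Flush when a 64 KiB buffer holding 512-byte records reaches 25% capacity.
	fmt.Println(wakeupCountFor(64*1024, 512, 25.0))
}
```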

pkg/ebpf/manager.go (resolved)
// callback with `nil` in the latter case. There is a workaround, but it requires specifying two type constraints.
// For sake of cleanliness, we resort to a runtime check here.
if _, ok := any(new(T)).(encoding.BinaryUnmarshaler); !ok {
panic("pointer type *T must implement encoding.BinaryUnmarshaler")
Contributor

@guyarb guyarb Nov 26, 2024

Why do we allow panics in the code?

An alternative can be

func BinaryUnmarshalCallback(newFn func() encoding.BinaryUnmarshaler, callback func(encoding.BinaryUnmarshaler, error)) func(buf []byte) {
	return func(buf []byte) {
		if len(buf) == 0 {
			callback(nil, nil)
			return
		}

		d := newFn()
		if err := d.UnmarshalBinary(buf); err != nil {
			// pass d here so callback can choose how to deal with the data
			callback(d, err)
			return
		}
		callback(d, nil)
	}
}

Member Author

@brycekahle brycekahle Nov 26, 2024

Why do we allow panics in the code?

Panics are perfectly acceptable for programmer error (similar to asserts in other languages). See out of bounds access for slices, etc.

I cannot use your alternative for several reasons:

  1. The reason listed in the comment in the code
  2. I don't want to force the caller to cast from encoding.BinaryUnmarshaler to their type on each usage.
  3. I want to ensure the callback uses a pointer

Contributor

Aren't you doing an extra allocation in new(T) here? The constraint(s) would get rid of that.

Member Author

A single allocation when setting up the callback, not on every call. Please see earlier comments why constraints will not work.
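To illustrate the shape being discussed, here is a minimal generic sketch with the runtime check from the quoted code. It is not the PR's exact implementation; the `event` type and the `newFn` parameter are hypothetical:

```go
package main

import (
	"encoding"
	"encoding/binary"
	"fmt"
)

// BinaryUnmarshalCallback adapts a raw-bytes callback to a typed one.
// Go generics cannot express "T where *T implements BinaryUnmarshaler"
// without a second type constraint, hence the runtime check at setup time.
func BinaryUnmarshalCallback[T any](newFn func() *T, cb func(*T, error)) func(buf []byte) {
	// Single allocation, performed once when the callback is constructed.
	if _, ok := any(new(T)).(encoding.BinaryUnmarshaler); !ok {
		panic("pointer type *T must implement encoding.BinaryUnmarshaler")
	}
	return func(buf []byte) {
		if len(buf) == 0 {
			cb(nil, nil) // nil is the flush-complete sentinel
			return
		}
		d := newFn()
		err := any(d).(encoding.BinaryUnmarshaler).UnmarshalBinary(buf)
		cb(d, err)
	}
}

// event is a hypothetical record type for the example.
type event struct{ id uint32 }

func (e *event) UnmarshalBinary(b []byte) error {
	if len(b) < 4 {
		return fmt.Errorf("short buffer")
	}
	e.id = binary.LittleEndian.Uint32(b)
	return nil
}

func main() {
	f := BinaryUnmarshalCallback(func() *event { return new(event) }, func(e *event, err error) {
		if e != nil && err == nil {
			fmt.Println(e.id)
		}
	})
	f([]byte{7, 0, 0, 0})
}
```

Callers keep their concrete `*event` type in the callback, which avoids the cast-per-use problem raised above.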

pkg/ebpf/perf/event.go (4 resolved threads)
Comment on lines 233 to 234
util.AddBoolConst(&mgrOpts, "ringbuffers_enabled", usingRingBuffers)
if features.HaveMapType(ebpf.RingBuf) != nil {
Contributor

Shouldn't we disable ringbuffers_enabled if features.HaveMapType(ebpf.RingBuf) != nil?

Member Author

@brycekahle brycekahle Nov 26, 2024

It is a bit roundabout, but usingRingBuffers will be false if ring buffers are not available.
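The roundabout logic reads as: ring buffers are used only when the kernel supports them and the configuration asks for them, so the constant passed to the eBPF program is consistent either way. A minimal sketch with hypothetical names:

```go
package main

import "fmt"

// ringBuffersEnabled mirrors the described behavior: usingRingBuffers is
// false whenever the kernel lacks BPF_MAP_TYPE_RINGBUF, regardless of config.
func ringBuffersEnabled(kernelSupportsRingBuf, configPrefersRingBuf bool) bool {
	return kernelSupportsRingBuf && configPrefersRingBuf
}

func main() {
	// Kernel without ring buffer support: the constant is false even if
	// the configuration prefers ring buffers.
	fmt.Println(ringBuffersEnabled(false, true))
}
```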

@guyarb
Contributor

guyarb commented Nov 26, 2024

The PR changes classification code base which is owned by USM

@guyarb do we need to update CODEOWNERS to reflect this?

We'll do that

@brycekahle brycekahle modified the milestones: 7.61.0, 7.62.0 Nov 26, 2024
@brycekahle brycekahle force-pushed the bryce.kahle/perf-buffer-npm-only branch from 26a60de to 47c9860 Compare November 26, 2024 19:20
pkg/config/setup/system_probe.go (resolved)
pkg/ebpf/manager.go (resolved)
pkg/ebpf/perf/event.go (2 resolved threads)
// remove any existing perf buffers from manager
mgr.PerfMaps = slices.DeleteFunc(mgr.PerfMaps, func(perfMap *manager.PerfMap) bool {
return perfMap.Name == e.opts.MapName
})
Contributor

A log here showing which of watermark or wakeup events will be applied would be useful.

}

closeConsumer := newTCPCloseConsumer(connCloseEventHandler, batchMgr)
tr.closeConsumer = newTCPCloseConsumer(flusher, connPool)
Contributor

@usamasaqib usamasaqib Nov 27, 2024

The callback infrastructure here is very difficult to follow. It took me a long time to follow what happens when a perf event is received and its callback is invoked.

It is unclear to me why the TcpCloseConsumer abstraction even exists, if it is just calling the functions of the parent tracer. Why can't we do the same thing inline, without all of these callbacks calling each other across abstraction boundaries?

Overall to understand how flushing is supposed to work, a developer has to follow, more or less, the following layers:
Tracer -> CloseConsumer -> EventHandler -> ebpf-manager.PerfMap -> ebpf.Reader -> ebpf.Poller

Moreover, the Tracer -> ... -> EventHandler layers have no clear boundaries with callbacks invoking multiple different methods across the layers. I think there is a strong need to simplify this part of the infrastructure.

val06
val06 previously requested changes Nov 27, 2024
Contributor

@val06 val06 left a comment

Reviewed the ebpf-platform files.
I strongly recommend omitting the NPM transition to the new functionality and keeping this PR scoped to introducing the new behavior.

To avoid breaking changes, I would change the new parameter's default value to true to keep the status quo, and then, if decided (see my inline CR comment about it), change it to false in a separate PR where you move NPM to use the new functionality.

pkg/config/setup/system_probe.go (resolved)
pkg/ebpf/manager.go (resolved)
pkg/ebpf/perf/event.go (multiple resolved threads)
// callback with `nil` in the latter case. There is a workaround, but it requires specifying two type constraints.
// For sake of cleanliness, we resort to a runtime check here.
if _, ok := any(new(T)).(encoding.BinaryUnmarshaler); !ok {
    panic("pointer type *T must implement encoding.BinaryUnmarshaler")
}
Contributor

Aren't you doing an extra allocation in new(T) here? The constraint(s) would get rid of that.

if b != nil {
connHasher.Hash(b)
}
closedCallback(b)
Contributor

Should we call this if b is nil?

Member Author

Yes, that is very important. That is the sentinel that indicates a flush is complete.

updateTCPStats(c, &rc.Tcp_stats)
c := p.connGetter.Get()
c.FromConn(rc)
p.ch.Hash(c)
Contributor

What do you think of adding a parameter to pass in the cookie hasher to FromConn? The cookie hashing should really be done in FromConn, but I don't see another way for it to use a hash object.

Member Author

It isn't in FromConn to allow for the BinaryUnmarshaler interface usage with ConnectionStats.

closedCount = 0
lostSamplesCount = 0
c.flushChannel <- request
c.flusher.Flush()
Contributor

Is Flush not synchronous?

Member Author

It causes the perf/ring reader to wake up, but it doesn't do the reading itself. That is still done by Read/ReadInto.

Resolved review thread: pkg/util/sync/pool.go
package slices

// Map returns a new slice with the result of applying fn to each element.
func Map[S ~[]E, E any, RE any](s S, fn func(E) RE) []RE {
Contributor

There is a Map implementation proposed in golang/go#61898 that may be more idiomatic.

Member Author

Those seem to use iterators, which require Go 1.23.

Contributor

There is a separate PR out to upgrade to go 1.23: #31022

Member Author

We can always change the implementation and usage once that is merged. I don't want to tie this PR to the Go upgrade for such a minor part of it.

@brycekahle force-pushed the bryce.kahle/perf-buffer-npm-only branch from d03c587 to 88565bb on December 2, 2024 21:43
func (c *tcpCloseConsumer) Callback(conn *network.ConnectionStats) {
// sentinel record post-flush
if conn == nil {
request := <-c.flushChannel
Contributor

Any chance this would block? Or will we only get a nil connection here if Flush() is called?

Contributor

If there is nothing to flush do we still get a sentinel value?

@brycekahle (Member Author) commented Dec 3, 2024

Yes, you will still get a sentinel value

@val06 val06 self-requested a review December 3, 2024 16:46
@val06 val06 dismissed their stale review December 3, 2024 16:46

removed to unblock

@val06 val06 removed their request for review December 3, 2024 16:47
Labels: changelog/no-changelog, component/system-probe, long review, qa/done, team/ebpf-platform
7 participants