In-buffer batching with perf buffers for NPM #31402
base: main
Conversation
Force-pushed from 2f1f991 to 99dae30
Go Package Import Differences
Baseline: be4b703

Test changes on VM
Use this command from test-infra-definitions to manually test this PR's changes on a VM: `inv create-vm --pipeline-id=50238895 --os-family=ubuntu`
Note: This applies to commit 6d8e725

Regression Detector
Regression Detector Results
Metrics dashboard
Baseline: be4b703
Optimization Goals: ❌ Significant changes detected
perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
---|---|---|---|---|---|---|
➖ | quality_gate_logs | % cpu utilization | +1.53 | [-1.47, +4.53] | 1 | Logs |
➖ | tcp_syslog_to_blackhole | ingress throughput | +1.25 | [+1.18, +1.32] | 1 | Logs |
➖ | otel_to_otel_logs | ingress throughput | +0.66 | [-0.02, +1.34] | 1 | Logs |
➖ | pycheck_lots_of_tags | % cpu utilization | +0.62 | [-2.89, +4.12] | 1 | Logs |
➖ | file_to_blackhole_500ms_latency | egress throughput | +0.14 | [-0.62, +0.91] | 1 | Logs |
➖ | file_to_blackhole_100ms_latency | egress throughput | +0.05 | [-0.70, +0.80] | 1 | Logs |
➖ | file_to_blackhole_0ms_latency | egress throughput | +0.03 | [-0.81, +0.87] | 1 | Logs |
➖ | uds_dogstatsd_to_api_cpu | % cpu utilization | +0.02 | [-0.71, +0.75] | 1 | Logs |
➖ | file_to_blackhole_300ms_latency | egress throughput | +0.00 | [-0.64, +0.64] | 1 | Logs |
➖ | tcp_dd_logs_filter_exclude | ingress throughput | -0.00 | [-0.01, +0.01] | 1 | Logs |
➖ | uds_dogstatsd_to_api | ingress throughput | -0.00 | [-0.12, +0.11] | 1 | Logs |
➖ | file_to_blackhole_1000ms_latency | egress throughput | -0.08 | [-0.84, +0.69] | 1 | Logs |
➖ | quality_gate_idle | memory utilization | -0.19 | [-0.24, -0.14] | 1 | Logs bounds checks dashboard |
➖ | quality_gate_idle_all_features | memory utilization | -0.36 | [-0.47, -0.26] | 1 | Logs bounds checks dashboard |
➖ | file_to_blackhole_1000ms_latency_linear_load | egress throughput | -0.37 | [-0.84, +0.11] | 1 | Logs |
➖ | file_tree | memory utilization | -0.39 | [-0.54, -0.24] | 1 | Logs |
✅ | basic_py_check | % cpu utilization | -8.70 | [-12.32, -5.08] | 1 | Logs |
Bounds Checks: ❌ Failed
perf | experiment | bounds_check_name | replicates_passed | links |
---|---|---|---|---|
❌ | file_to_blackhole_300ms_latency | lost_bytes | 9/10 | |
✅ | file_to_blackhole_0ms_latency | lost_bytes | 10/10 | |
✅ | file_to_blackhole_0ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_1000ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_1000ms_latency_linear_load | memory_usage | 10/10 | |
✅ | file_to_blackhole_100ms_latency | lost_bytes | 10/10 | |
✅ | file_to_blackhole_100ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_300ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_500ms_latency | lost_bytes | 10/10 | |
✅ | file_to_blackhole_500ms_latency | memory_usage | 10/10 | |
✅ | quality_gate_idle | memory_usage | 10/10 | bounds checks dashboard |
✅ | quality_gate_idle_all_features | memory_usage | 10/10 | bounds checks dashboard |
✅ | quality_gate_logs | lost_bytes | 10/10 | |
✅ | quality_gate_logs | memory_usage | 10/10 | |
Explanation
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
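For example, tcp_syslog_to_blackhole above shows Δ mean % = +1.25 with a CI of [+1.18, +1.32]: the interval excludes zero, but the magnitude is below the 5.00% tolerance, so it is not flagged as a regression.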
CI Pass/Fail Decision
✅ Passed. All Quality Gates passed.
- quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
The PR changes the classification code base, which is owned by USM. Blocking the PR to ensure we can review it and verify there's no concern on our side.
A side note: this is another large PR. Please try to split it into smaller pieces.
@guyarb do we need to update CODEOWNERS to reflect this?
@@ -36,7 +36,7 @@ BPF_PERF_EVENT_ARRAY_MAP(conn_close_event, __u32)
 * or BPF_MAP_TYPE_PERCPU_ARRAY, but they are not available in
 * some of the Kernels we support (4.4 ~ 4.6)
 */
-BPF_HASH_MAP(conn_close_batch, __u32, batch_t, 1024)
+BPF_HASH_MAP(conn_close_batch, __u32, batch_t, 1)
nit: I think typically we set map sizes to 0 when we intend to overwrite them in userspace.
That is for maps that must be resized. This can remain at 1 if it is not being used, but it must be included because the code references it.
makes sense 👍
I think setting this to 0 is still a good safeguard. At load time, we can set it to 1, or ideally remove it from the map spec if it's not required.
I didn't know we could remove maps from the spec at load time. If so, it's likely a trivial difference in memory footprint, but a good pattern nonetheless.
> I think setting this to 0 is still a good safeguard

Safeguard against what? The default configuration all matches at the moment. Changing this to 0 means that you must resize the map, even when using the default value for whether or not to do the custom batching.

> ideally remove this from the map spec if not required, at load time

I don't think we can completely remove the map spec. This is because there is still code that references that map, even though it is protected by a branch that will never get taken.
> Safeguard against what?

Against loading the map with max entries set to 1 because the userspace forgot to resize it. This may happen during a refactor, when someone moves the code around. Having a default value of 0 forces the userspace to think about the correct value under all conditions.
max entries set to 1 is the desired value when custom batching is disabled.
If you forget to resize, then the batch manager will fail loudly when it tries to set up the default map values.
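For illustration, here is a minimal sketch of the load-time resize pattern this thread is debating, using the cilium/ebpf API. The object file name, the config flag, and the 1024 size are assumptions for illustration, not the PR's actual wiring:

```go
import "github.com/cilium/ebpf"

// loadTracer sketches the userspace side of the max-entries discussion:
// resize conn_close_batch before load when batching is enabled, otherwise
// leave it at its compiled size.
func loadTracer(customBatchingEnabled bool) (*ebpf.Collection, error) {
	spec, err := ebpf.LoadCollectionSpec("tracer.o") // object name assumed
	if err != nil {
		return nil, err
	}
	if customBatchingEnabled {
		// With MaxEntries compiled as 0, this step would be mandatory,
		// which is the "safeguard" being debated above.
		spec.Maps["conn_close_batch"].MaxEntries = 1024 // illustrative size
	}
	// Otherwise MaxEntries stays at 1: the map still loads even though the
	// code paths referencing it are never taken.
	return ebpf.NewCollection(spec)
}
```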
eBPF complexity changes
Summary result: ✅ - stable
tracer details: tracer [programs with changes], tracer [programs without changes]
tracer_fentry details: tracer_fentry [programs with changes], tracer_fentry [programs without changes]
This report was generated based on the complexity data for the current branch bryce.kahle/perf-buffer-npm-only (pipeline 50238895, commit 6d8e725) and the base branch main (commit be4b703). Objects without changes are not reported. Contact #ebpf-platform if you have any questions/feedback.
Table complexity legend: 🔵 - new; ⚪ - unchanged; 🟢 - reduced; 🔴 - increased
@@ -194,6 +194,7 @@ func InitSystemProbeConfig(cfg pkgconfigmodel.Config) {
 cfg.BindEnv(join(netNS, "max_failed_connections_buffered"))
 cfg.BindEnvAndSetDefault(join(spNS, "closed_connection_flush_threshold"), 0)
 cfg.BindEnvAndSetDefault(join(spNS, "closed_channel_size"), 500)
+cfg.BindEnvAndSetDefault(join(netNS, "closed_buffer_wakeup_count"), 5)
What is the plan to migrate other perf buffers to this technique? Do we plan to create a different configuration per perf buffer? Maybe we should have a single configuration for all perf buffers, and allow different teams to create a dedicated configuration to override it:

cfg.BindEnvAndSetDefault(join(spNS, "common_wakeup_count"), 5)
cfg.BindEnv(join(netNS, "closed_buffer_wakeup_count"))

and in adjust_npm.go:

applyDefault(cfg, netNS("closed_buffer_wakeup_count"), cfg.GetInt(spNS("common_wakeup_count")))
> What is the plan to migrate other perf buffers to this technique?

I was keeping each team to a separate PR. I wanted to consult first to ensure it was actually a feature they wanted.

> Do we plan to create a different configuration per perf buffer?

Yes, because how much you want to keep in the buffer before wakeup is a use-case-specific value.
Should this be specified in terms of bytes or maybe percentages, so the code can calculate the appropriate count based on the size of the records? For example, if we want a flush to happen when the perf buffer is at 25% capacity, then this config value can specify that (either as a percentage or in bytes), and the code can calculate the appropriate count based on the size of the perf buffer and the record items.
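A rough Go sketch of the proposed calculation; the helper name and parameters are hypothetical, and the 8-byte perf_event_header size is the only kernel constant assumed:

```go
// wakeupCountForFill derives a wakeup_events count from a per-CPU perf
// buffer size, a record payload size, and a target fill fraction
// (e.g. 0.25 for a flush at 25% capacity).
func wakeupCountForFill(bufferBytes, recordBytes int, fillFraction float64) int {
	const perfHeaderBytes = 8 // sizeof(struct perf_event_header)
	perRecord := recordBytes + perfHeaderBytes
	n := int(float64(bufferBytes) * fillFraction / float64(perRecord))
	if n < 1 {
		n = 1 // always wake up for at least one record
	}
	return n
}
```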
That sounds like something that could be addressed in a future PR by NPM folks. That is additional complexity that I don't think is necessary for this PR, which is trying to closely match the behavior from the custom batching.
// callback with `nil` in the latter case. There is a workaround, but it requires specifying two type constraints.
// For sake of cleanliness, we resort to a runtime check here.
if _, ok := any(new(T)).(encoding.BinaryUnmarshaler); !ok {
	panic("pointer type *T must implement encoding.BinaryUnmarshaler")
Why do we allow panics in the code?
An alternative can be:

func BinaryUnmarshalCallback(newFn func() encoding.BinaryUnmarshaler, callback func(encoding.BinaryUnmarshaler, error)) func(buf []byte) {
	return func(buf []byte) {
		if len(buf) == 0 {
			callback(nil, nil)
			return
		}
		d := newFn()
		if err := d.UnmarshalBinary(buf); err != nil {
			// pass d here so callback can choose how to deal with the data
			callback(d, err)
			return
		}
		callback(d, nil)
	}
}
> Why do we allow panics in the code?

Panics are perfectly acceptable for programmer error (similar to asserts in other languages). See out-of-bounds access for slices, etc.
I cannot use your alternative for several reasons:
- The reason listed in the comment in the code.
- I don't want to force the caller to cast from `encoding.BinaryUnmarshaler` to their type on each usage.
- I want to ensure the callback uses a pointer.
Aren't you doing an extra allocation in `new(T)` here? The constraint(s) would get rid of that.
A single allocation when setting up the callback, not on every call. Please see the earlier comments for why constraints will not work.
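For context, a self-contained sketch of the generic shape being discussed, reconstructed from the excerpt above; treat the exact signature as an assumption rather than the PR's code:

```go
package events // package name assumed

import "encoding"

// BinaryUnmarshalCallback returns a callback for raw perf records. The
// runtime check stands in for a second type constraint; new(T) here is the
// single setup-time allocation mentioned above.
func BinaryUnmarshalCallback[T any](newFn func() *T, callback func(*T, error)) func(buf []byte) {
	if _, ok := any(new(T)).(encoding.BinaryUnmarshaler); !ok {
		panic("pointer type *T must implement encoding.BinaryUnmarshaler")
	}
	return func(buf []byte) {
		if len(buf) == 0 {
			callback(nil, nil) // sentinel: a flush completed
			return
		}
		d := newFn() // e.g. from an object pool, so the hot path avoids allocating
		if err := any(d).(encoding.BinaryUnmarshaler).UnmarshalBinary(buf); err != nil {
			callback(d, err) // pass d so the caller decides what to do with partial data
			return
		}
		callback(d, nil)
	}
}
```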
util.AddBoolConst(&mgrOpts, "ringbuffers_enabled", usingRingBuffers)
if features.HaveMapType(ebpf.RingBuf) != nil {
Shouldn't we disable `ringbuffers_enabled` if `features.HaveMapType(ebpf.RingBuf) != nil`?
It is a bit roundabout, but `usingRingBuffers` will be false if ring buffers are not available.
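In other words, something like the following fragment; `ringBuffersRequested` is a hypothetical stand-in for the configuration check:

```go
// usingRingBuffers is only true when ring buffers are both requested and
// supported by the kernel (HaveMapType returns nil on support), so the
// constant passed to the eBPF program is already correct.
usingRingBuffers := ringBuffersRequested && features.HaveMapType(ebpf.RingBuf) == nil
util.AddBoolConst(&mgrOpts, "ringbuffers_enabled", usingRingBuffers)
```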
We'll do that.
Force-pushed from 26a60de to 47c9860
// remove any existing perf buffers from manager
mgr.PerfMaps = slices.DeleteFunc(mgr.PerfMaps, func(perfMap *manager.PerfMap) bool {
	return perfMap.Name == e.opts.MapName
})
A log here showing which of the watermark or wakeup-events settings gets applied would be useful.
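Something along these lines, perhaps; the variable names are illustrative only:

```go
// Hypothetical log line for the suggestion above.
log.Debugf("perf map %q: watermark=%d bytes, wakeup_events=%d",
	e.opts.MapName, watermarkBytes, wakeupEvents)
```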
}

-closeConsumer := newTCPCloseConsumer(connCloseEventHandler, batchMgr)
+tr.closeConsumer = newTCPCloseConsumer(flusher, connPool)
The callback infrastructure here is very difficult to follow. It took me a long time to follow what happens when a perf event is received and its callback is invoked.
It is unclear to me why the `TcpCloseConsumer` abstraction even exists, if it is just calling the functions of the parent tracer. Why can't we do the same thing inline, without all of these callbacks calling each other across abstraction boundaries?
Overall, to understand how flushing is supposed to work, a developer has to follow, more or less, the following layers:
Tracer -> CloseConsumer -> EventHandler -> ebpf-manager.PerfMap -> ebpf.Reader -> ebpf.Poller
Moreover, the Tracer -> ... -> EventHandler layers have no clear boundaries, with callbacks invoking multiple different methods across the layers. I think there is a strong need to simplify this part of the infrastructure.
Reviewed the ebpf-platform files.
I strongly recommend omitting the NPM transition to the new functionality and keeping this PR scoped to introducing the new behavior.
To avoid breaking changes, I would change the new parameter's default value to true to keep the status quo, and then, if decided (see my inline CR comment about it), change it to false in a separate PR where you move NPM to use the new functionality.
if b != nil {
	connHasher.Hash(b)
}
closedCallback(b)
Should we call this if `b` is `nil`?
Yes, that is very important. That is the sentinel that indicates a flush is complete.
updateTCPStats(c, &rc.Tcp_stats)
c := p.connGetter.Get()
c.FromConn(rc)
p.ch.Hash(c)
What do you think of adding a parameter to pass in the cookie hasher to `FromConn`? The cookie hashing should really be done in `FromConn`, but I don't see another way for it to use a hash object.
It isn't in `FromConn` to allow for the `BinaryUnmarshaler` interface usage with `ConnectionStats`.
closedCount = 0
lostSamplesCount = 0
c.flushChannel <- request
c.flusher.Flush()
Is `Flush` not synchronous?
It causes the perf/ring reader to wake up, but it doesn't do the reading itself. That is still done by `Read`/`ReadInto`.
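A minimal sketch of that split, assuming cilium/ebpf's perf reader and its Flush/ErrFlushed semantics; `rdr` and `handle` are hypothetical:

```go
// The read loop owns all reading; Flush() elsewhere only wakes it up.
go func() {
	var rec perf.Record
	for {
		if err := rdr.ReadInto(&rec); err != nil {
			if errors.Is(err, perf.ErrFlushed) {
				handle(nil) // surface the flush as the nil sentinel record
				continue
			}
			if errors.Is(err, perf.ErrClosed) {
				return // reader shut down
			}
			continue
		}
		handle(rec.RawSample)
	}
}()
```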
package slices

// Map returns a new slice with the result of applying fn to each element.
func Map[S ~[]E, E any, RE any](s S, fn func(E) RE) []RE {
	out := make([]RE, 0, len(s))
	for _, e := range s {
		out = append(out, fn(e))
	}
	return out
}
There is a `Map` implementation proposed in golang/go#61898 that may be more idiomatic.
Those seem to use iterators, which require Go 1.23.
There is a separate PR out to upgrade to Go 1.23: #31022
We can always change the implementation and usage once that is merged. I don't want to tie this PR to the Go upgrade for such a minor part of it.
Force-pushed from d03c587 to 88565bb
func (c *tcpCloseConsumer) Callback(conn *network.ConnectionStats) {
	// sentinel record post-flush
	if conn == nil {
		request := <-c.flushChannel
Any chance this would block? Or will we only get a nil connection here if Flush() is called?
If there is nothing to flush do we still get a sentinel value?
Yes, you will still get a sentinel value.
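Putting the thread together, a hedged completion of the Callback excerpt above; everything past the excerpt (the request's Done method, the process call) is an assumed shape, not the PR's actual code:

```go
// A nil conn is the flush-complete sentinel and arrives even when nothing
// was flushed, so the channel receive below does not block forever.
func (c *tcpCloseConsumer) Callback(conn *network.ConnectionStats) {
	// sentinel record post-flush
	if conn == nil {
		request := <-c.flushChannel
		request.Done() // hypothetical: signal the flush requester
		return
	}
	c.process(conn) // hypothetical: normal closed-connection handling
}
```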
What does this PR do?
Adds `network_config.enable_kernel_batching` for NPM. This enables the status quo custom batching.

Important: `network_config.enable_kernel_batching` is false by default, which means it must be enabled to restore the previous behavior.

Motivation
Describe how to test/QA your changes
Possible Drawbacks / Trade-offs
Note
The current (and unchanged) configuration defaults to using ring buffers, if available.
Additional Notes
EBPF-481