chore(dev, releasing): Have Cargo.toml release profile match what releases are published with #20034

base: master
Conversation
…eleases are published with

Per #17342 (comment): Linux distros commonly rebuild Vector rather than pulling in prebuilt artifacts. They could set the same profile flags we do (or even flags deemed better suited), but I see value in having the `release` profile match how we build and distribute release versions of Vector, so it serves as the default for anyone else making release builds.

The original intent of having CI set different release flags than those in Cargo.toml was to allow faster local release builds when analyzing Vector performance. Now that custom profiles exist, which they didn't at the time, I added a custom profile, `dev-perf`, to be used for this purpose instead.

Ref: #17342 (comment)

Signed-off-by: Jesse Szwedko <[email protected]>
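A custom profile like the `dev-perf` one described above might look roughly like the following in Cargo.toml. This is a hedged sketch, not the PR's actual contents; the specific flag values are illustrative assumptions.

```toml
# Hypothetical sketch of a custom profile for fast local performance builds.
# The actual flags chosen in the PR may differ.
[profile.dev-perf]
inherits = "release"   # start from the release profile's settings
lto = false            # skip cross-crate LTO for faster link times
codegen-units = 16     # more codegen parallelism, faster compiles
debug = true           # keep debug symbols for profiling
```

It would be selected with `cargo build --profile dev-perf`.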
[profile.release]
debug = false # Do not include debug symbols in the executable.
This is the default so I dropped it.
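For context on why the line is redundant: Cargo's built-in `release` profile defaults (per the Cargo book) already set `debug = false`. A sketch of the relevant defaults:

```toml
# Cargo's built-in release profile defaults, per the Cargo book (illustrative).
[profile.release]
opt-level = 3
debug = false        # already the default, hence dropped here
lto = false          # crate-local thin LTO
codegen-units = 16
```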
Datadog Report

Branch report: ✅ 0 Failed, 7 Passed, 0 Skipped, 25.38s Wall Time
This makes sense to me, though I too would appreciate hearing from @tobz or @lukesteensen
Signed-off-by: Jesse Szwedko <[email protected]>

/ci-run-all

Signed-off-by: Jesse Szwedko <[email protected]>

/ci-run-all

Signed-off-by: Jesse Szwedko <[email protected]>

/ci-run-all

Signed-off-by: Jesse Szwedko <[email protected]>
Regression Detector Results

Run ID: bf565250-5af8-45d0-a893-0109b04eee89. Performance changes are noted in the perf column of each table.

Significant changes in experiment optimization goals (confidence level: 90.00%):

| perf | experiment | goal | Δ mean % | Δ mean % CI |
|---|---|---|---|---|
| ✅ | syslog_humio_logs | ingress throughput | +15.43 | [+15.30, +15.56] |
| ✅ | syslog_log2metric_tag_cardinality_limit_blackhole | ingress throughput | +15.11 | [+15.01, +15.22] |
| ✅ | syslog_log2metric_splunk_hec_metrics | ingress throughput | +15.01 | [+14.87, +15.15] |
| ✅ | syslog_regex_logs2metric_ddmetrics | ingress throughput | +14.74 | [+14.62, +14.86] |
| ✅ | syslog_loki | ingress throughput | +13.32 | [+13.25, +13.39] |
| ✅ | syslog_splunk_hec_logs | ingress throughput | +12.31 | [+12.21, +12.41] |
| ✅ | socket_to_socket_blackhole | ingress throughput | +11.25 | [+11.18, +11.33] |
| ✅ | syslog_log2metric_humio_metrics | ingress throughput | +9.61 | [+9.46, +9.76] |
| ✅ | datadog_agent_remap_blackhole_acks | ingress throughput | +9.29 | [+9.17, +9.41] |
| ✅ | otlp_http_to_blackhole | ingress throughput | +8.60 | [+8.46, +8.74] |
| ✅ | datadog_agent_remap_blackhole | ingress throughput | +8.18 | [+8.08, +8.29] |
| ✅ | datadog_agent_remap_datadog_logs_acks | ingress throughput | +7.08 | [+7.00, +7.17] |
| ✅ | otlp_grpc_to_blackhole | ingress throughput | +5.56 | [+5.47, +5.65] |
| ✅ | datadog_agent_remap_datadog_logs | ingress throughput | +5.54 | [+5.42, +5.65] |
| ✅ | splunk_hec_route_s3 | ingress throughput | +5.29 | [+4.79, +5.79] |
| ➖ | fluent_elasticsearch | ingress throughput | +3.72 | [+3.24, +4.20] |
| ➖ | http_text_to_http_json | ingress throughput | +3.19 | [+3.05, +3.34] |
| ➖ | http_to_http_acks | ingress throughput | +0.79 | [-0.53, +2.10] |
| ➖ | http_to_s3 | ingress throughput | +0.19 | [-0.08, +0.47] |
| ➖ | http_to_http_noack | ingress throughput | +0.13 | [+0.06, +0.21] |
| ➖ | http_to_http_json | ingress throughput | +0.02 | [-0.06, +0.10] |
| ➖ | enterprise_http_to_http | ingress throughput | +0.01 | [-0.05, +0.07] |
| ➖ | splunk_hec_indexer_ack_blackhole | ingress throughput | +0.00 | [-0.14, +0.14] |
| ➖ | splunk_hec_to_splunk_hec_logs_acks | ingress throughput | -0.00 | [-0.16, +0.15] |
| ➖ | splunk_hec_to_splunk_hec_logs_noack | ingress throughput | -0.01 | [-0.13, +0.11] |
| ➖ | file_to_blackhole | egress throughput | -0.01 | [-2.47, +2.45] |
| ➖ | http_elasticsearch | ingress throughput | -4.57 | [-4.63, -4.51] |
Explanation
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide that a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:

- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that, if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
Signed-off-by: Jesse Szwedko <[email protected]>
Signed-off-by: Jesse Szwedko <[email protected]>
Signed-off-by: Jesse Szwedko <[email protected]>
Regression Detector Results

Run ID: 1cf02af2-6d9b-41f3-ae63-07e5154a9a1e. Performance changes are noted in the perf column of each table.

Significant changes in experiment optimization goals (confidence level: 90.00%):

| perf | experiment | goal | Δ mean % | Δ mean % CI |
|---|---|---|---|---|
| ✅ | syslog_log2metric_tag_cardinality_limit_blackhole | ingress throughput | +14.22 | [+14.07, +14.37] |
| ✅ | syslog_log2metric_humio_metrics | ingress throughput | +13.15 | [+13.02, +13.29] |
| ✅ | syslog_log2metric_splunk_hec_metrics | ingress throughput | +12.71 | [+12.54, +12.87] |
| ✅ | syslog_loki | ingress throughput | +12.62 | [+12.51, +12.72] |
| ✅ | syslog_humio_logs | ingress throughput | +11.94 | [+11.82, +12.06] |
| ✅ | syslog_splunk_hec_logs | ingress throughput | +11.78 | [+11.70, +11.86] |
| ✅ | syslog_regex_logs2metric_ddmetrics | ingress throughput | +10.62 | [+10.46, +10.78] |
| ✅ | datadog_agent_remap_blackhole | ingress throughput | +9.82 | [+9.69, +9.95] |
| ✅ | splunk_hec_route_s3 | ingress throughput | +9.74 | [+9.23, +10.24] |
| ✅ | socket_to_socket_blackhole | ingress throughput | +7.10 | [+7.02, +7.18] |
| ✅ | fluent_elasticsearch | ingress throughput | +6.48 | [+5.99, +6.97] |
| ✅ | otlp_grpc_to_blackhole | ingress throughput | +6.42 | [+6.33, +6.51] |
| ✅ | datadog_agent_remap_blackhole_acks | ingress throughput | +6.34 | [+6.22, +6.46] |
| ✅ | datadog_agent_remap_datadog_logs | ingress throughput | +6.17 | [+6.06, +6.29] |
| ✅ | datadog_agent_remap_datadog_logs_acks | ingress throughput | +5.13 | [+5.04, +5.22] |
| ➖ | otlp_http_to_blackhole | ingress throughput | +4.40 | [+4.27, +4.53] |
| ➖ | http_text_to_http_json | ingress throughput | +3.92 | [+3.76, +4.09] |
| ➖ | file_to_blackhole | egress throughput | +0.89 | [-1.62, +3.40] |
| ➖ | http_to_http_noack | ingress throughput | +0.21 | [+0.12, +0.30] |
| ➖ | http_to_s3 | ingress throughput | +0.16 | [-0.12, +0.44] |
| ➖ | http_to_http_json | ingress throughput | +0.03 | [-0.05, +0.10] |
| ➖ | splunk_hec_to_splunk_hec_logs_acks | ingress throughput | -0.00 | [-0.15, +0.15] |
| ➖ | splunk_hec_indexer_ack_blackhole | ingress throughput | -0.01 | [-0.15, +0.13] |
| ➖ | splunk_hec_to_splunk_hec_logs_noack | ingress throughput | -0.05 | [-0.16, +0.07] |
| ➖ | enterprise_http_to_http | ingress throughput | -0.07 | [-0.14, -0.01] |
| ➖ | http_to_http_acks | ingress throughput | -0.21 | [-1.52, +1.11] |
| ➖ | http_elasticsearch | ingress throughput | -3.67 | [-3.76, -3.58] |
Explanation
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide that a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:

- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that, if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
Signed-off-by: Jesse Szwedko <[email protected]>
/ci-run-k8s
[profile.release]
debug = false # Do not include debug symbols in the executable.
lto = true
Is "fat" worthwhile vs "thin"? Technically there are a few variants to choose from:

- `false` (default) will opt out of LTO when `codegen-units = 1` or `opt-level = 0`; otherwise it does crate-local thin LTO (slightly different from `"thin"`).
- `"thin"` => Thin LTO
- `"fat"` / `true` => Full LTO (much slower, barely any notable gain)
- `"off"` => Disable LTO

In my testing of build size and time, I didn't see much value from building Vector with `"fat"` / `true`. It was notably slower than `false` / `"thin"` due to single-threaded vs multi-threaded CPU usage at link time.

From what I've read of those who have profiled thin vs fat/full LTO, there's barely a statistical difference in improvement to warrant the excess build time.
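The variants above map directly onto the `lto` key in Cargo.toml. A sketch of the choices (values per the Cargo book; the comments are my own summary, and the final selection here is illustrative, not this PR's):

```toml
[profile.release]
# Pick exactly one of the following `lto` values:
# lto = false    # default: crate-local thin LTO (skipped if codegen-units = 1 or opt-level = 0)
# lto = "thin"   # cross-crate Thin LTO: most of the benefit, parallel link step
# lto = true     # same as "fat": full LTO, single-threaded and slow at link time
# lto = "off"    # disable LTO entirely
lto = "thin"
```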
Good thought. Once this is merged, we can benchmark against `thin` and see what the difference looks like for `fat` vs. `thin`. I believe we saw a 5% difference in throughput for `fat` vs `false`.
I found this blogpost that explains the difference for `lto = false` (thin local LTO) vs `lto = "thin"`, where `false` does thin LTO over each crate's codegen units.

> I believe we saw a 5% difference in throughput for `fat` vs `false`.

That's interesting; it would be good to know how that compares to `fat` vs `thin` 👍

Was the `false` comparison done with `codegen-units = 1`? Because, as I mentioned earlier, that would opt out of LTO and be equivalent to `lto = "off"`.
References:

https://blog.llvm.org/2016/06/thinlto-scalable-and-incremental-lto.html

> ThinLTO already performs well compared to LTO, in many cases matching the performance improvement. In a few cases ThinLTO even outperforms full LTO, most likely because the higher scalability of ThinLTO allows using a more aggressive backend optimization pipeline (similar to that of a non-LTO build).

https://convolv.es/guides/lto/

> our recurring dataset shows a 0.23% run-time delta between LLVM’s basic and parallel modes

> LLVM’s implementation of parallel LTO achieves nearly all of the run-time performance improvement as seen with basic LTO:
> - basic LTO reached a 2.86% improvement over non-LTO
> - while parallel LTO achieved a 2.63% improvement over non-LTO.

Not sure how much it differs between GCC vs Clang as the linker driver. I think with Clang you need a version compatible with the Rust toolchain used (at least if doing cross-language LTO), which may not work as well with the CI image builds using `cross` Docker images (Ubuntu)? 🤷♂️ (This comment seems to claim a 20% runtime perf improvement by changing the linker from GCC to Clang.)
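For concreteness, cross-language (linker-plugin) LTO with Clang is typically wired up along these lines, per the rustc linker-plugin-LTO documentation. This is a hedged sketch, not part of this PR; the target triple is an example, and as noted above the Clang version must be compatible with the LLVM version used by the Rust toolchain.

```toml
# Hypothetical .cargo/config.toml sketch enabling linker-plugin LTO via Clang.
# Requires a clang whose LLVM matches the Rust toolchain's; not used by this PR.
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-Clinker-plugin-lto", "-Clink-arg=-fuse-ld=lld"]
```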
Signed-off-by: Jesse Szwedko <[email protected]>

/ci-run-k8s

Signed-off-by: Jesse Szwedko <[email protected]>

/ci-run-k8s

Signed-off-by: Jesse Szwedko <[email protected]>

/ci-run-k8s

Signed-off-by: Jesse Szwedko <[email protected]>

/ci-run-k8s
…ofile-default

Signed-off-by: Jesse Szwedko <[email protected]>
/ci-run-all
Datadog Report

Branch report: ✅ 0 Failed, 7 Passed, 0 Skipped, 25.51s Total Time