
chore(deps): Try LTO to see if it fixes linking issues on old OSes #17342

Closed · wants to merge 3 commits from jszwedko/try-lto-nix

Conversation

@jszwedko (Member) commented May 8, 2023

Seeing if this fixes nix compilation as suggested by nix-rust/nix#1972 (comment)

Signed-off-by: Jesse Szwedko <jesse.szwedko@datadoghq.com>
netlify bot commented May 8, 2023

Deploy Preview for vector-project ready!

Name Link
🔨 Latest commit 33338d2
🔍 Latest deploy log https://app.netlify.com/sites/vector-project/deploys/64594039a2eba60008b61586
😎 Deploy Preview https://deploy-preview-17342--vector-project.netlify.app

netlify bot commented May 8, 2023

Deploy Preview for vrl-playground canceled.

Name Link
🔨 Latest commit 33338d2
🔍 Latest deploy log https://app.netlify.com/sites/vrl-playground/deploys/64594039a8d32900081ffa90

@datadog-vectordotdev

Datadog Report

Branch report: jszwedko/try-lto-nix
Commit report: 78e77bd

vector: 0 Failed, 0 New Flaky, 5 Passed, 0 Skipped, 10.03s Wall Time

github-actions bot commented May 8, 2023

Regression Detector Results

Run ID: 7ee92b96-2091-4999-8a39-acd6823a5d8b
Baseline: bf8376c
Comparison: 33338d2
Total vector CPUs: 7

Explanation

A regression test is an integrated performance test for vector in a repeatable rig, with varying configuration for vector. What follows is a statistical summary of a brief vector run for each configuration across the SHAs given above. The goal of these tests is to determine quickly whether, and to what degree, vector performance is changed by a pull request.

Because a target's optimization goal performance in each experiment will vary somewhat each time it is run, we can only estimate mean differences in optimization goal relative to the baseline target. We express these differences as a percentage change relative to the baseline target, denoted "Δ mean %". These estimates are made to a precision that balances accuracy and cost control. We represent this precision as a 90.00% confidence interval denoted "Δ mean % CI": there is a 90.00% chance that the true value of "Δ mean %" is in that interval.

We decide whether a change in performance is a "regression" -- a change worth investigating further -- if both of the following two criteria are true:

  1. The estimated |Δ mean %| ≥ 5.00%. This criterion intends to answer the question "Does the estimated change in mean optimization goal performance have a meaningful impact on your customers?". We assume that when |Δ mean %| < 5.00%, the impact on your customers is not meaningful. We also assume that a performance change in optimization goal is worth investigating whether it is an increase or decrease, so long as the magnitude of the change is sufficiently large.

  2. Zero is not in the 90.00% confidence interval "Δ mean % CI" about "Δ mean %". This statement is equivalent to saying that there is at least a 90.00% chance that the mean difference in optimization goal is not zero. This criterion intends to answer the question, "Is there a statistically significant difference in mean optimization goal performance?". It also means there is no more than a 10.00% chance this criterion reports a statistically significant difference when the true difference in mean optimization goal is zero -- a "false positive". We assume you are willing to accept a 10.00% chance of inaccurately detecting a change in performance when no true difference exists.

The table below, if present, lists those experiments that have experienced a statistically significant change in mean optimization goal performance between baseline and comparison SHAs with 90.00% confidence OR have been detected as newly erratic. Negative values of "Δ mean %" mean that baseline is faster, whereas positive values of "Δ mean %" mean that comparison is faster. Results that do not exhibit more than a ±5.00% change in their mean optimization goal are discarded. An experiment is erratic if its coefficient of variation is greater than 0.1. The abbreviated table will be omitted if no interesting change is observed.
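As a concrete reading of these criteria: the top entry in the tables below, syslog_log2metric_splunk_hec_metrics, has an estimated Δ mean % of +28.37 with a 90.00% confidence interval of [+27.98, +28.77]; the magnitude exceeds 5.00% and the interval does not contain zero, so both criteria are met and the experiment is reported as a significant change.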

Changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%:

| experiment | goal | Δ mean % | confidence |
| --- | --- | --- | --- |
| syslog_log2metric_splunk_hec_metrics | ingress throughput | +28.37 | 100.00% |
| syslog_splunk_hec_logs | ingress throughput | +19.02 | 100.00% |
| syslog_loki | ingress throughput | +17.88 | 100.00% |
| syslog_humio_logs | ingress throughput | +16.30 | 100.00% |
| syslog_regex_logs2metric_ddmetrics | ingress throughput | +15.44 | 100.00% |
| datadog_agent_remap_blackhole_acks | ingress throughput | +14.56 | 100.00% |
| datadog_agent_remap_datadog_logs | ingress throughput | +12.91 | 100.00% |
| datadog_agent_remap_datadog_logs_acks | ingress throughput | +12.89 | 100.00% |
| datadog_agent_remap_blackhole | ingress throughput | +12.25 | 100.00% |
| otlp_http_to_blackhole | ingress throughput | +11.62 | 100.00% |
| otlp_grpc_to_blackhole | ingress throughput | +11.23 | 100.00% |
| http_text_to_http_json | ingress throughput | +8.11 | 100.00% |
| socket_to_socket_blackhole | ingress throughput | +7.70 | 100.00% |
| syslog_log2metric_humio_metrics | ingress throughput | +7.28 | 100.00% |
| splunk_hec_route_s3 | ingress throughput | +6.04 | 100.00% |
Fine details of change detection per experiment.
| experiment | goal | Δ mean % | Δ mean % CI | confidence |
| --- | --- | --- | --- | --- |
| syslog_log2metric_splunk_hec_metrics | ingress throughput | +28.37 | [+27.98, +28.77] | 100.00% |
| syslog_splunk_hec_logs | ingress throughput | +19.02 | [+18.96, +19.09] | 100.00% |
| syslog_loki | ingress throughput | +17.88 | [+17.81, +17.96] | 100.00% |
| syslog_humio_logs | ingress throughput | +16.30 | [+16.19, +16.41] | 100.00% |
| syslog_regex_logs2metric_ddmetrics | ingress throughput | +15.44 | [+15.13, +15.76] | 100.00% |
| datadog_agent_remap_blackhole_acks | ingress throughput | +14.56 | [+14.48, +14.64] | 100.00% |
| datadog_agent_remap_datadog_logs | ingress throughput | +12.91 | [+12.83, +13.00] | 100.00% |
| datadog_agent_remap_datadog_logs_acks | ingress throughput | +12.89 | [+12.78, +13.00] | 100.00% |
| datadog_agent_remap_blackhole | ingress throughput | +12.25 | [+12.09, +12.42] | 100.00% |
| otlp_http_to_blackhole | ingress throughput | +11.62 | [+11.44, +11.79] | 100.00% |
| otlp_grpc_to_blackhole | ingress throughput | +11.23 | [+11.11, +11.34] | 100.00% |
| http_text_to_http_json | ingress throughput | +8.11 | [+8.03, +8.18] | 100.00% |
| socket_to_socket_blackhole | ingress throughput | +7.70 | [+7.64, +7.75] | 100.00% |
| syslog_log2metric_humio_metrics | ingress throughput | +7.28 | [+7.16, +7.41] | 100.00% |
| splunk_hec_route_s3 | ingress throughput | +6.04 | [+5.90, +6.17] | 100.00% |
| http_to_http_json | ingress throughput | +0.52 | [+0.47, +0.57] | 100.00% |
| file_to_blackhole | ingress throughput | +0.04 | [-0.00, +0.09] | 75.03% |
| enterprise_http_to_http | ingress throughput | +0.02 | [-0.01, +0.05] | 54.31% |
| fluent_elasticsearch | ingress throughput | +0.00 | [-0.00, +0.00] | 38.47% |
| http_to_http_noack | ingress throughput | -0.00 | [-0.06, +0.06] | 0.82% |
| splunk_hec_to_splunk_hec_logs_noack | ingress throughput | -0.00 | [-0.05, +0.04] | 11.06% |
| splunk_hec_indexer_ack_blackhole | ingress throughput | -0.01 | [-0.05, +0.04] | 17.96% |
| splunk_hec_to_splunk_hec_logs_acks | ingress throughput | -0.01 | [-0.07, +0.04] | 22.66% |
| http_to_http_acks | ingress throughput | -0.04 | [-1.25, +1.18] | 3.02% |

Cargo.toml Outdated
@@ -35,11 +35,15 @@ path = "src/config/loading/secret_backend_example.rs"
test = false
bench = false

[profile.dev]
lto = true

A project Member commented on this diff:

For this test, setting lto = "thin" is faster to link.
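For reference, a minimal TOML sketch of the two settings being compared here, mirroring the diff above and this suggestion (which profile ultimately carries the setting is a separate question):

```toml
[profile.dev]
# As added in this PR: full ("fat") LTO across crates.
lto = true

# Alternative suggested in this review comment: thin LTO, which typically
# links noticeably faster while still performing cross-crate optimization.
# lto = "thin"
```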

@jszwedko (Member, Author) commented:

@zamazan4ik you might find this interesting too :) Turns out we weren't doing LTO on release assets.

@zamazan4ik (Contributor) commented:

Hah, I was sure it's enabled since Vector has https://github.com/vectordotdev/vector/blob/master/scripts/environment/release-flags.sh ... :)

@jszwedko (Member, Author) commented:

> Hah, I was sure it's enabled since Vector has https://github.com/vectordotdev/vector/blob/master/scripts/environment/release-flags.sh ... :)

Aha, actually you are right :D We just don't use those flags for the performance tests apparently.

Signed-off-by: Jesse Szwedko <jesse.szwedko@datadoghq.com>
@jpds (Contributor) commented Feb 7, 2024

You might want to add the other optimization options from: https://github.com/cloud-hypervisor/cloud-hypervisor/blob/main/Cargo.toml#L20-L24

Right now I'm seeing:

276M Jan 31 23:19 ../vector-non-lto*
146M Feb  7 18:42 target/release/vector* # with LTO
69M Feb  1 00:03 target/release/vector* # release profile options from c-h
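For context, a hedged sketch of the kind of size-oriented release-profile options being referred to; the specific values below are illustrative, not a quote of the linked cloud-hypervisor Cargo.toml, and opt-level = "s" is exactly the option debated in the following comments:

```toml
# Illustrative only: a size-focused release profile of the kind linked above.
[profile.release]
lto = true          # cross-crate link-time optimization
codegen-units = 1   # fewer codegen units: better optimization, slower builds
opt-level = "s"     # optimize for size rather than speed (see discussion below)
strip = true        # strip symbols from the produced binary
```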

@zamazan4ik (Contributor) commented:

How does enabling the opt-level = "s" option influence Vector performance? I think we cannot simply switch from opt-level = 3 to opt-level = "s" without benchmarking. AFAIK, for Vector right now performance is more important than the binary size. And do not forget, the actual release flags for Vector are described here - https://github.com/vectordotdev/vector/blob/master/scripts/environment/release-flags.sh ;)

@jpds (Contributor) commented Feb 9, 2024

> How does enabling the opt-level = "s" option influence Vector performance?

I had opt-level = "s" on one of my instances and redeployed without it, and apparently utilization has actually increased on average:

[Screenshot from 2024-02-09 20-01-08]

> AFAIK, for Vector right now performance is more important than the binary size

From the perspective of Linux distribution packages, binary size is much more important. These are the numbers for NixOS:

$ eza -lh /nix/store/*vector*/bin/vector
Permissions Size User Date Modified Name
.r-xr-xr-x  238M root  1 Jan  1970  /nix/store/1bsflz2c1kr9gmfkd7jmh6swh892mdy1-vector-0.34.1/bin/vector # Normal
.r-xr-xr-x   86M root  1 Jan  1970  /nix/store/3bjgilkwn2s82qry3lx0gigpp7wqg8kz-vector-0.34.1/bin/vector # LTO/stripped
.r-xr-xr-x  109M root  1 Jan  1970  /nix/store/3w2sqc4353j9zz5yzl5qjcz003jwx1cn-vector-0.34.1/bin/vector # LTO

Currently, for each Vector release/package update, the supporting infrastructure (which might not be cheap) has to push out a 238M binary package to every single deployed instance, rather than a much slimmer 86M one that does exactly the same thing. Sure, I could just change that at the NixOS package level, but it's better to get it upstreamed for every other distro out there too.

@jszwedko (Member, Author) commented Feb 9, 2024

Vector releases do enable LTO (per #17342 (comment) this is done by changing the flags in CI). The binaries we distribute are around ~120 MB. Certainly they could be smaller, but, as @zamazan4ik notes, Vector's focus is performance and so the compilation is optimized for that. I can see that Nix is sensitive to the package size though, maybe it'd make sense to use opt-level = "s" when compiling Vector for distribution on that platform?
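If a platform-specific tweak like that were wanted without patching the repository's Cargo.toml, one option (a sketch, assuming the packaging environment can supply its own Cargo configuration) is a downstream profile override:

```toml
# Hypothetical downstream override, e.g. in the packager's .cargo/config.toml,
# to favor binary size for that platform's builds.
[profile.release]
opt-level = "s"  # optimize for size
strip = true     # drop symbols to shrink the shipped binary further
```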

@jszwedko (Member, Author) commented Feb 9, 2024

I swear the binaries used to be smaller too, around 80 MB. I might bisect down to see if there were specific commits that bumped us up significantly or if it was a slow burn.

@jpds (Contributor) commented Mar 6, 2024

> Vector releases do enable LTO (per #17342 (comment) this is done by changing the flags in CI). The binaries we distribute are around ~120 MB.

The issue is that no downstream Linux distribution is going to use the prebuilt binaries. I could patch this in, but much easier if this just gets merged.

I pushed the same thing to another Rust project last month and it's even landed in the stable NixOS release as of yesterday:

.r-xr-xr-x 61M root  1 Jan  1970 /nix/store/98ka7zbb7x88vv23yic0h99nx1spv4s7-garage-0.9.0/bin/garage
.r-xr-xr-x 28M root  1 Jan  1970 /nix/store/dl68bfmrkbn98qlpa2i84j3qpxsixkzp-garage-0.9.2/bin/garage

jszwedko added a commit that referenced this pull request on Mar 7, 2024: "…eleases are published with"

Per #17342 (comment) Linux distros commonly
rebuild Vector rather than pulling in prebuilt artifacts. They could set the same profile flags we do (or
even flags that are deemed to be better suited) but I do see value in "just" having the `release`
profile match how we build and distribute release versions of Vector to serve as the default for
anyone else building release builds.

The original intent of having CI set different release flags than were in Cargo.toml was to have
faster local release builds when analyzing Vector performance. Now that custom profiles exist, which
didn't at the time, I added a custom profile, `dev-perf`, to be used for this purpose instead.

Ref: #17342 (comment)

Signed-off-by: Jesse Szwedko <jesse.szwedko@datadoghq.com>
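The commit above describes the purpose of the dev-perf profile but not its contents; as a rough illustration only (the profile name comes from the commit message, everything else is assumed), a Cargo custom profile for fast local performance builds could be declared like this:

```toml
# Illustrative sketch -- not the actual profile added by the referenced commit.
# Builds with release-like settings but skips the slow LTO link step so local
# performance-analysis builds stay quick.
[profile.dev-perf]
inherits = "release"
lto = false         # avoid the long LTO link times of the release profile
codegen-units = 16  # more parallel codegen for faster compiles
debug = true        # keep debug info for profiling
```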
@jszwedko (Member, Author) commented Mar 7, 2024

> > Vector releases do enable LTO (per #17342 (comment) this is done by changing the flags in CI). The binaries we distribute are around ~120 MB.
>
> The issue is that no downstream Linux distribution is going to use the prebuilt binaries. I could patch this in, but much easier if this just gets merged.
>
> I pushed the same thing to another Rust project last month and it's even landed in the stable NixOS release as of yesterday:
>
> .r-xr-xr-x 61M root  1 Jan  1970 /nix/store/98ka7zbb7x88vv23yic0h99nx1spv4s7-garage-0.9.0/bin/garage
> .r-xr-xr-x 28M root  1 Jan  1970 /nix/store/dl68bfmrkbn98qlpa2i84j3qpxsixkzp-garage-0.9.2/bin/garage

That's a good point :) I opened #20034. Let me know what you think of that.

jpds mentioned this pull request on Mar 12, 2024
@jszwedko (Member, Author) commented Oct 4, 2024

I can't remember which OSes I was targeting with this change. I'm guessing it was CentOS 7, which went EOL earlier this year.

jszwedko closed this on Oct 4, 2024
jszwedko deleted the jszwedko/try-lto-nix branch on October 4, 2024