Suboptimal performance of BinaryHeap::append #77433
The comment suggests that the rebuild strategy is more efficient if at least one of the following is true:
You can see how the slope of the append method is lower, so its advantage would show if you extended your benchmark by an order of magnitude. If you used a more expensive comparison function it would probably show earlier too.
To simulate more expensive comparisons I've made a benchmark where each comparison involves some global atomic operations. Below that I've included a benchmark with ten times the number of items in the heaps in total (an order of magnitude more than that makes the benchmark very slow). The trends remain similar. The original implementation (#32526 and #32987) seems to base its choice of heuristic mostly on the expectation that it is reasonable. However, I think reasoning from the worst-case scenario (O(n log(m))) is pessimistic for the average case.

Changed code for the atomic benchmark:

```rust
use std::cmp::Ordering;
use std::sync::atomic::{self, AtomicU64};

// From the rand crate, used to shuffle the benchmark data.
use rand::{seq::SliceRandom, thread_rng};

static C: AtomicU64 = AtomicU64::new(0);

// Burn some time on every comparison to simulate an expensive `Ord` impl.
fn expensive() {
    for _ in 0..10 {
        C.fetch_add(1, atomic::Ordering::Relaxed);
        C.fetch_sub(1, atomic::Ordering::Relaxed);
    }
}

struct T(u64);

impl PartialEq for T {
    fn eq(&self, other: &Self) -> bool {
        self.0.eq(&other.0)
    }
}

impl Eq for T {}

impl PartialOrd for T {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        expensive();
        self.0.partial_cmp(&other.0)
    }
}

impl Ord for T {
    fn cmp(&self, other: &Self) -> Ordering {
        expensive();
        self.0.cmp(&other.0)
    }
}

fn random_data(n: u64) -> Vec<T> {
    let mut data = Vec::from_iter((0..n).map(T));
    data.shuffle(&mut thread_rng());
    data
}
```
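A related, self-contained variant of this idea is to count comparisons rather than slow them down. The sketch below is my own illustration (the `Counted` type and counter are not from the thread's benchmark), using a global atomic in the same spirit:

```rust
use std::collections::BinaryHeap;
use std::sync::atomic::{AtomicU64, Ordering};

// Global comparison counter (illustrative, not the thread's benchmark).
static COMPARISONS: AtomicU64 = AtomicU64::new(0);

#[derive(PartialEq, Eq)]
struct Counted(u64);

impl PartialOrd for Counted {
    fn partial_cmp(&self, other: &Self) -> Option<std::cmp::Ordering> {
        Some(self.cmp(other))
    }
}

impl Ord for Counted {
    fn cmp(&self, other: &Self) -> std::cmp::Ordering {
        // Count every comparison the heap performs.
        COMPARISONS.fetch_add(1, Ordering::Relaxed);
        self.0.cmp(&other.0)
    }
}

fn main() {
    let mut a: BinaryHeap<Counted> = (0..64).map(Counted).collect();
    let mut b: BinaryHeap<Counted> = (64..128).map(Counted).collect();

    // Reset after construction, then measure the extend-based merge.
    COMPARISONS.store(0, Ordering::Relaxed);
    a.extend(b.drain());
    let n = COMPARISONS.load(Ordering::Relaxed);

    assert_eq!(a.len(), 128);
    assert!(n > 0 && n < 1000); // far fewer than a full pairwise compare
    println!("comparisons during merge: {}", n);
}
```

This measures work in comparisons instead of wall-clock time, which sidesteps machine-dependent constant factors.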
Ah, I missed that. That indeed limits the opportunities to realize its advantage.
The x-axis still says 50k though?
First we would need a benchmark demonstrating that advantage anyway.
Only for the first image; that one has the same number of elements as before but a more expensive comparison implementation. The x-axis of the second image goes up to 500k; that one has normal "cheap" integer comparison but a larger number of items in the heaps.
Certainly.
Naively, when I look at the graphs in the OP, what I see isn't that extend is always going to be better -- the slopes seem clearly different, though two graphs with different scales and extra lines make that comparison harder than it ought to be -- but that the current heuristic is doing a bad job of accounting for constant factors when picking the switchover point. Why isn't the answer here just to use a number other than two in the heuristic?

`2 * (len1 + len2) < len2 * log2_fast(len1)`
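The idea of tuning that constant can be sketched as a standalone function. This is my own illustration, not the std-internal code: `log2_fast` here approximates the fast-but-inexact log2 the heuristic uses, and `k` is the tunable constant (currently 2):

```rust
// Fast floor(log2(x)); assumes x >= 1 (underflows for x == 0).
fn log2_fast(x: usize) -> usize {
    (usize::BITS - x.leading_zeros() - 1) as usize
}

// True when the heuristic would pick the rebuild strategy over extend.
// `k` is the constant factor proposed for tuning above (std uses 2).
fn better_to_rebuild(len1: usize, len2: usize, k: usize) -> bool {
    k * (len1 + len2) < len2 * log2_fast(len1)
}

fn main() {
    // With k = 2, rebuild only wins when the appended heap is large
    // relative to the total.
    assert!(!better_to_rebuild(100_000, 10, 2));
    assert!(better_to_rebuild(100_000, 90_000, 2));
    // A larger k pushes the switchover point further out.
    assert!(!better_to_rebuild(100_000, 90_000, 8));
}
```

Raising `k` moves the crossover toward larger `len2`, which is exactly the knob being discussed.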
Here are the two lines from a different measurement of the same benchmark, plotted in the same graph. It has a weird jump between 25,000 and 27,500 (an artifact?), which is why I included the messier pictures initially.
The slopes are indeed very different, and extrapolating them would lead to a crossover point. However, that crossover point can never actually be reached, because you can't extrapolate beyond the rightmost data point. The benchmark is set up such that two heaps are merged, one containing x elements and the other 100,000 - x. The rightmost data point at x = 50,000 corresponds to merging two equally sized heaps. The data point that would be to the right of that (x = 52,500) would give the same result as the data point to the left (x = 47,500), because both involve merging a heap of 47,500 elements with one of 52,500 elements, and the heaps are swapped in one of them to make sure the smaller heap is appended to the larger one. In other words, beyond the rightmost data point the slopes change sign (become negative) and the data would mirror the data to the left of that point (which is why I didn't include it). The crossover point is thus only reachable (in this benchmark) if you removed the swap optimization (theoretically, I mean; practically that would worsen performance, of course).
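The swap optimization described above can be sketched as follows. This is an illustrative standalone version, not the std implementation (the function name `merge` is mine):

```rust
use std::collections::BinaryHeap;
use std::mem::swap;

// Always append the smaller heap onto the larger one, so merging
// (47,500; 52,500) costs the same as merging (52,500; 47,500).
fn merge<T: Ord>(this: &mut BinaryHeap<T>, other: &mut BinaryHeap<T>) {
    if this.len() < other.len() {
        swap(this, other);
    }
    this.extend(other.drain());
}

fn main() {
    let mut a = BinaryHeap::from(vec![1, 5, 9]);
    let mut b = BinaryHeap::from(vec![2, 3, 4, 7, 8]);
    merge(&mut a, &mut b);
    assert_eq!(a.len(), 8);
    assert_eq!(a.peek(), Some(&9));
    assert!(b.is_empty());
}
```

This symmetry is why the benchmark's data mirrors around x = 50,000.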
The TL;DR is that it appears the number would have to be such that the condition always evaluates to false. On another note, I've now also benchmarked merging very large heaps (100 million elements after merge).
Just tried the benchmark with the changes from PR #78857.
Thanks for the update to remind me about this 🙂 One thing that I'd missed last time: Do you want to wait on #78857 before looking at #77435 more?
I think for now it would be better to wait and/or work towards a better heuristic. PS/Disclaimer: I'm the author of #78857 and not a reviewer, so I may be a little biased; however, the benchmarks should speak for themselves. It would be nice if someone else could confirm my claims.
Nice performance improvements in #78857, just a pity it makes determining the switchover heuristic more difficult ;). I ran the benchmark again with #78857, and on my machine it also shows an improvement.
In order to determine the new switchover point, I ran some benchmarks for heaps of different lengths, keeping the total heap length (the heap length after the `append`) constant within each benchmark.

The blue line is the rebuild strategy, and purple the extend strategy. The green vertical line indicates the intersection of the fitted lines, i.e. where the switchover between strategies should ideally be. The red vertical line indicates the switchover point using the current heuristic. The fits for all the other total lengths can be found in the collapsible section at the end of this comment.

I've plotted the location of the ideal switchover point as a function of the total heap length (the current heuristic is not a smooth curve because it uses a fast but inexact log2). I think a couple of points are noteworthy:
I am not sure how best to model a heuristic for larger heaps, especially because the power-of-2 heaps show the switchover point dropping again for the two largest heaps. However, I think it should be fairly straightforward to improve the current heuristic by setting a lower bound on the switchover point:

```rust
if len1 + len2 < 4096 {
    2 * (len1 + len2) < len2 * log2_fast(len1)
} else {
    2 * (len1 + len2) < len2 * 11
}
```

This results in the green curve in the graph above. It is closer to the ideal switchover point in all cases, but for larger heaps it is still suboptimal. I'd love to hear ideas on how to improve the heuristic further for larger heaps.

Images of all fits: In all graphs, blue is the rebuild strategy and purple the extend strategy. The dots indicate raw data points and the line is a least-squares linear fit. The green vertical line indicates the intersection of the fitted lines, i.e. where the switchover between strategies should ideally be. The red vertical line indicates the current switchover point.
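To see what the proposed lower bound does in practice, here is a self-contained sketch (function names are illustrative, not std-internal) that finds, for a given total length, the smallest `len2` at which the piecewise rule would pick the rebuild strategy:

```rust
// Fast floor(log2(x)); assumes x >= 1.
fn log2_fast(x: usize) -> usize {
    (usize::BITS - x.leading_zeros() - 1) as usize
}

// The piecewise heuristic proposed above: log-based rule for small
// totals, fixed factor of 11 beyond a total length of 4096.
fn better_to_rebuild(len1: usize, len2: usize) -> bool {
    if len1 + len2 < 4096 {
        2 * (len1 + len2) < len2 * log2_fast(len1)
    } else {
        2 * (len1 + len2) < len2 * 11
    }
}

// Smallest len2 (with len1 = total - len2, len2 <= len1) where the
// heuristic switches to the rebuild strategy, if it ever does.
fn switchover(total: usize) -> Option<usize> {
    (1..=total / 2).find(|&len2| better_to_rebuild(total - len2, len2))
}

fn main() {
    // Beyond the 4096 cutoff the factor is fixed, so the switchover
    // sits near total * 2/11 independent of log2(len1).
    let n = 100_000;
    let s = switchover(n).unwrap();
    assert!(s > n / 6 && s < n / 5);
    println!("total = {}, switchover at len2 = {}", n, s);
}
```

This makes the lower bound concrete: for large totals the rebuild strategy is only chosen once the appended heap is roughly 2/11 of the total.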
Always use extend in BinaryHeap::append. This is faster, see rust-lang#77433. Fixes rust-lang#77433
This commit ports the change in the rebuild heuristic used by `BinaryHeap::append()` that was added in rust-lang/rust#77435: "Change BinaryHeap::append rebuild heuristic". See also the discussion in rust-lang/rust#77433: "Suboptimal performance of BinaryHeap::append" for more information on how the new heuristic was chosen. It also ports the new private method `.rebuild_tail()` now used by `std::collections::BinaryHeap::append()` from rust-lang/rust#78681: "Improve rebuilding behaviour of BinaryHeap::retain". Note that Rust 1.60.0 adds the clippy lint `manual_bits` which warns against code used here. We suppress the lint instead of following the upstream patch which now uses `usize::BITS`, since this was stabilized in Rust 1.53.0 and this crate's MSRV is currently 1.36.0.
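For reference, a fast floor-of-log2 along the lines referred to above can be written with `usize::BITS` (stable since Rust 1.53); the older form spelled out the bit width via `mem::size_of`, which is what clippy's `manual_bits` lint flags. This is a sketch of the idea, not the crate's exact code:

```rust
// Fast, inexact log2: returns floor(log2(x)). Assumes x >= 1;
// x == 0 would underflow.
fn log2_fast(x: usize) -> usize {
    (usize::BITS - x.leading_zeros() - 1) as usize
}

fn main() {
    assert_eq!(log2_fast(1), 0);
    assert_eq!(log2_fast(1024), 10);
    assert_eq!(log2_fast(1025), 10); // inexact: rounds down
}
```

The inexactness (rounding down between powers of two) is why the heuristic's switchover curve is not smooth.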
The current implementation of `BinaryHeap::append` uses a heuristic based on the worst-case number of comparisons to choose between two strategies:

1. `self.extend(other.drain())`, equivalent to pushing all elements of the other heap one by one;
2. appending the underlying vectors and rebuilding the heap.

I've done some benchmarking, and it turns out that method 1 (based on `extend`) is always faster for two heaps built from randomly shuffled data (on the computers I've tested it on, anyway). I've included images of the benchmarks below: first the current `append` strategy, then the `extend` strategy (method 1). In the benchmarks, two heaps are merged, one containing the number of elements on the x-axis, the other containing 100,000 minus that number (for the rightmost data points both heaps are equal in size). The red line (sometimes hiding behind the green one) is in both images the one corresponding to `std::collections::BinaryHeap`; the other lines are not relevant to this issue (they aren't standard library types).

From the jump in performance in the first graph, you can clearly see when the switch between the two approaches happens. Graphs are similar for smaller heaps; I didn't test larger ones. It's possible method 2 is faster under other circumstances (maybe if one of the heaps contains mostly very small/large elements?), especially if both heaps are (almost) equal in size. However, I think this benchmark is closer to the "average" real-world use case than such a case would be (I'd be happy to be proven wrong about this, though).
Simplification of the benchmark that only runs the parts relevant for this issue
Cargo.toml
benches/benchmark.rs