Tracking issue for HashMap::extract_if and HashSet::extract_if #59618
Related: #42849

This may be easy to implement since …

I have implemented this on a branch, but it will need to wait for a new release of the …
Add drain_filter method to HashMap and HashSet

Add `HashMap::drain_filter` and `HashSet::drain_filter`, implementing part of rust-lang/rfcs#2140. These new methods are unstable; the tracking issue is rust-lang#59618. The added iterators behave the same as `BTreeMap::drain_filter` and `BTreeSet::drain_filter`, except that their iteration order is arbitrary. The unit tests are adapted from `alloc::collections::btree`.

This branch rewrites `HashSet` to be a wrapper around `hashbrown::HashSet` rather than `std::collections::HashMap`. (Both are themselves wrappers around `hashbrown::HashMap`, so the in-memory representation is the same either way.) This lets `std` re-use more iterator code from `hashbrown`; without this change, we would need to duplicate much more code to implement `HashSet::drain_filter`.

This branch also updates the `hashbrown` crate to version 0.9.0. Aside from changes related to the `DrainFilter` iterators, this version only changes features that are not used in libstd or rustc. It also updates `indexmap` to version 1.6.0, whose only change is compatibility with `hashbrown` 0.9.0.
This is now implemented in nightly behind the unstable …
Any reason not to stabilize this, @mbrubeck?

There are some open questions about naming and API design; see the summary in #43244 (comment) for some details.

When do you estimate this will be stable? This is an amazing feature.

Hashbrown PR to remove the drain-on-drop behavior: rust-lang/hashbrown#374
I'd like to suggest an alternative API that is significantly more powerful than this one:

```rust
for entry in map.entries() {
    if entry.get().satisfies_condition() {
        entry.remove();
    }
}
```

It is much more powerful and ergonomic for any non-trivial use cases. As far as I can tell, these advantages fall into a few categories:
The obvious problem with this API is that …

A second, less obvious problem with this API is that it significantly restricts the space of possible implementations for the std …
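For comparison, the closest stable approximation of the cursor-style loop above is `HashMap::retain`: it removes entries in place during a single pass, but unlike the proposed entry API or `extract_if` it never yields ownership of the removed values. Here `is_small` is a hypothetical stand-in for the `satisfies_condition` predicate from the comment:

```rust
use std::collections::HashMap;

// Remove entries whose value satisfies a condition, without taking
// ownership of them. `retain` keeps entries for which the closure
// returns true, so the condition is negated.
fn remove_matching(map: &mut HashMap<String, i32>) {
    let is_small = |v: &i32| *v < 10; // stand-in predicate
    map.retain(|_key, value| !is_small(value));
}

fn main() {
    let mut map = HashMap::from([("a".to_string(), 5), ("b".to_string(), 50)]);
    remove_matching(&mut map);
    println!("remaining: {}", map.len());
}
```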
Any update here? This API is essential for my use case, where having ownership of the removed elements would save me from unnecessary clones / allocations.

Currently waiting for a hashbrown release so that …

@the8472 Nice. How often are these releases (and do they automatically land in std)? Is the vision ultimately to stabilize the …

The new version of …
Don't drain-on-drop in DrainFilter impls of various collections

This removes drain-on-drop behavior from various unstable DrainFilter impls (not yet for HashSet/Map), because that behavior [is problematic](rust-lang#43244 (comment)) (it can lead to panic-in-drop when user closures panic) and may become forbidden if [this draft RFC passes](rust-lang/rfcs#3288).

closes rust-lang#101122

[ACP](rust-lang/libs-team#136)

Affected tracking issues:
* rust-lang#43244
* rust-lang#70530
* rust-lang#59618

Related hashbrown update: rust-lang/hashbrown#374
Is this ready for the FCP?

On the companion … But that could be discussed as part of the FCP.

Would love to use this feature in stable!
The feature gate for the issue is `#![feature(hash_extract_if)]` (previously `hash_drain_filter`).

Currently only `Vec` and `LinkedList` have a `drain_filter` method, while other collections such as `HashMap` and `HashSet` do not. This means that removing items from a collection while getting ownership of those items is fairly unidiomatic and cumbersome.
For references, see rust-lang/rfcs#2140
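A sketch of the workaround the issue body alludes to: on stable Rust without `extract_if`, taking ownership of the removed entries means cloning the matching keys in a first pass and then removing the entries one at a time (the function and predicate here are illustrative):

```rust
use std::collections::HashMap;

// Two-pass removal with ownership: collect the matching keys (a clone
// is forced because the map is immutably borrowed during iteration),
// then remove each entry by key.
fn drain_long_values(map: &mut HashMap<String, String>) -> Vec<(String, String)> {
    let keys: Vec<String> = map
        .iter()
        .filter(|(_, v)| v.len() > 3)
        .map(|(k, _)| k.clone()) // extra clone that extract_if avoids
        .collect();
    keys.into_iter()
        .filter_map(|k| map.remove_entry(&k))
        .collect()
}

fn main() {
    let mut map = HashMap::from([
        ("a".to_string(), "hi".to_string()),
        ("b".to_string(), "hello".to_string()),
    ]);
    let removed = drain_long_values(&mut map);
    println!("removed: {}, remaining: {}", removed.len(), map.len());
}
```

`extract_if` collapses this into a single pass with no key clones, which is the motivation described above.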
Implementation History