Initial review of parking_lot submodule #1
Conversation
I should probably also note my overall impression of everything as well! I don't think I found any correctness issues in parking_lot, so it all looks quite solid on that front. From a maintainability perspective though I'm quite worried about a few critical parts:
@@ -100,6 +112,9 @@ cfg_if! {
    } else if #[cfg(all(feature = "i-am-libstd", not(target_arch = "wasm32")))] {
        compile_error!("Not allowed to fall back to generic spin lock based thread parker");
    } else {
        // REVIEW: this implementation isn't really appropriate for most use
        // cases other than "at least it compiles", and for libstd we'll want a
        // compile error if this is ever used.
I believe the above `compile_error!` does precisely what you want, except for allowing wasm to use the generic impl.
Exactly. When used inside std the above `compile_error` should make sure we never fall back to the generic one.
That would work, yes, but wasm should continue to not use the generic impl; instead it should panic if a request to wait is ever received, because the thread won't otherwise wake up.
So wasm should do what? Currently the wasm `ThreadParker` requires the `atomics` feature that is not available by default. Should we add a non-atomics wasm `ThreadParker`? How would we implement that? My understanding was that the `atomics` feature was needed in order to do any parking. Or do you mean adding a wasm thread parker that just panics? In other words, making the library compile, but not usable?
Well, this should compile on CI both with atomics and without atomics. The non-atomic version should just panic if it's ever asked to block, and the atomics version should do what it does today. It's difficult to run this, so it's fine to just check that it compiles. There's example CI configuration in wasm-bindgen for how to set up an atomics target.
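For concreteness, here is a rough sketch of what such a non-atomics wasm fallback could look like. This is not the actual module; the real parker interface has more methods, and the names below are purely illustrative.

```rust
// Hypothetical wasm parker for builds without the `atomics` target feature:
// enough to compile, but it panics if anything ever asks this thread to
// block, since a single-threaded wasm instance could never be woken again.
pub struct ThreadParker(());

impl ThreadParker {
    pub const fn new() -> Self {
        ThreadParker(())
    }

    pub fn park(&self) {
        panic!("cannot park a thread on wasm without the `atomics` target feature");
    }
}
```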
@@ -82,6 +94,8 @@ struct Bucket {
    mutex: WordLock,

    // Linked list of threads waiting on this bucket
    // REVIEW: it seems like these should be `UnsafeCell` because they're
    // modified from multiple threads.
Or `AtomicPtr`.
* Overall quite worried about atomic orderings. Everything should be `SeqCst`
  until proven otherwise, and even then needs quite a lot of proof (IMO) to not
  use `SeqCst`. Being fast-and-loose with orderings seems like we're trying to
  be too clever by half.
I disagree. In this case the orderings are very clear: `Acquire` when acquiring a lock and `Release` when releasing a lock. All other operations have no ordering requirements, and can use `Relaxed`.
Sorry, but this isn't gonna fly with me, at least. Atomic orderings are notoriously hard to get right, and I think it's just wrong to say "hey, it's a lock, so acquire/release, and everything that looks unrelated is relaxed".
I've personally got two requirements for atomic orderings:
- Either everything is `SeqCst`,
- Or there's documentation explaining at length why `SeqCst` isn't used. This involves concrete benchmarks showing the performance difference, pointers to other code in the world which is also thoroughly vetted and uses more relaxed atomics, etc.
There's basically no way that this can be maintained to the level of quality expected from the standard library if our attitude is "no documentation necessary, and if it looks like a lock use acquire/release, otherwise relax everything".
I agree that atomic algorithms need to be carefully documented to argue for their correctness. With `Relaxed`, I agree that it should explain why there are no ordering requirements, and why it is okay not to synchronize this access.
However, I disagree that using `SeqCst` makes the argument any simpler. `SeqCst` works fine as an "I don't want to think about this now, so let's use the least-likely-to-be-incorrect option". So it makes things simpler if you don't have a careful argument. But reasoning based on `SeqCst` basically requires exhaustively looking at all interleavings, and that is certainly not maintainable. Hence I find any remaining `SeqCst` to be a strong sign that something really strange is going on that needs sequential consistency (which is extremely rare).
After all, acquire/release operations are literally called that way because they are exactly what one needs to acquire/release a lock.
At the absolute bare minimum, anything not `SeqCst` needs to be documented, in my opinion. It's a good point, though, that I personally assume `SeqCst` isn't "I thought about this and need it" but rather "I didn't think about this, so this is the default I chose". I think it's safe to assume that, in the sense that if an ordering isn't documented and it's `SeqCst` we can assume it's just "didn't think about this, will relax later if necessary".
I am not personally comfortable enough with atomic orderings to sign off on this even if the non-`SeqCst` orderings are documented and remain. I'd want review from someone who is comfortable reviewing the orderings.
I can help check the orderings. And it seems we are in agreement that more docs are needed either way, that should hopefully make this easier. The docs should explain either why no synchronization is needed, or which kind of synchronization is happening. So e.g. the acquire load on a lock-acquire method should say that this synchronizes-with the release store of the lock-release operation of whoever previously held the lock, thus establishing the necessary synchronization between the previous thread that held the lock and the new thread that now got it.
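As a concrete illustration of the kind of comment being asked for, here is a minimal spin lock (not parking_lot's code) where each ordering choice is annotated with what it synchronizes with:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

pub struct RawSpinLock {
    locked: AtomicBool,
}

impl RawSpinLock {
    pub const fn new() -> Self {
        RawSpinLock { locked: AtomicBool::new(false) }
    }

    pub fn lock(&self) {
        // Acquire: synchronizes-with the Release store in `unlock` made by
        // whichever thread previously held the lock, so all of its writes
        // become visible to us once this CAS succeeds.
        while self
            .locked
            .compare_exchange_weak(false, true, Ordering::Acquire, Ordering::Relaxed)
            .is_err()
        {
            std::hint::spin_loop();
        }
    }

    pub fn unlock(&self) {
        // Release: publishes our writes to the next thread whose Acquire CAS
        // in `lock` observes this store.
        self.locked.store(false, Ordering::Release);
    }
}
```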
I disagree about `SeqCst` by default. If I see `SeqCst` my immediate reaction is "the programmer likely has no idea about atomics, and this code is full of bugs". Atomics, even when "sequentially consistent", guarantee much less than most programmers would expect, and actually needing the "sequential consistency" guarantee is very rare, to the point where I've never seen it being necessary (albeit there may be use cases where it may need to be used - if you have one, there had better be a comment explaining why you couldn't use `Acquire`/`Release`).
Additionally, using `SeqCst` everywhere makes code harder to read, as the necessary guarantees aren't visible from the source code.
`std` uses `SeqCst` pretty much everywhere in its `mpsc` implementation. The implementation is pretty much broken, as evidenced by issues like rust-lang/rust#39364.
FWIW, I agree with the sentiment that release-acquire should be the default and anything else should require extra comments -- but that is not a consensus in the Rust teams, see rust-lang/rfcs#2503 for some discussion.
mutex? I think so but it's worth considering. For example you may be able to
use `std::sync::Mutex` today to protect a global allocator, but with parking
lot you won't be able to. Similarly about thread local internals with libstd,
but I think it's just internal stuff to libstd.
You make a good point about the dependency on the global allocator. `park` may need to grow the hash table if there are many parked threads. I'm not really sure if there is a good way to resolve this issue; maybe we could force parking_lot to use the system allocator?
I don't think there's really a great answer to this either, I think it's just something we'll have to acknowledge as the libs team and be sure to indicate in release notes and/or changelogs. If it becomes a big issue we can try to use the system allocator or some other mechanism, but in general we don't have a great fix for this.
On Redox, for example, I believe that the Rust global allocator is the system allocator! So `parking_lot` won't work in that case. Furthermore, some code (such as OS kernels, filesystem drivers, etc.) must recover from allocation failure. That might be much more difficult if taking a lock requires an allocation.
Would it be possible to use an intrusive linked list instead? This assumes that a mutex cannot be moved while it is locked, which seems like a reasonable assumption ― it will be borrowed by its guard.
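To make the intrusive-list idea slightly more concrete, here is a very rough sketch (all names invented for illustration; this is not how parking_lot is organized today):

```rust
use std::ptr;

// Each waiting thread contributes a node that lives on its own stack, so
// enqueueing a waiter never touches the allocator.
struct Waiter {
    next: *const Waiter,
    // ... per-thread wakeup state would live here ...
}

struct WaitQueue {
    head: *const Waiter,
}

impl WaitQueue {
    const fn new() -> Self {
        WaitQueue { head: ptr::null() }
    }

    /// Push a stack-allocated node onto the queue.
    ///
    /// Safety (sketch): the caller must hold whatever lock protects this
    /// queue and must keep `node` alive until it is removed again, which is
    /// exactly the "a locked mutex can't move" assumption mentioned above.
    unsafe fn push(&mut self, node: *mut Waiter) {
        unsafe {
            (*node).next = self.head;
        }
        self.head = node;
    }
}
```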
literally everything using the interface agrees on what tokens to allocate?
Using pointers works, but it seems like it's worthwhile documenting that it's
basically very difficult to use `parking_lot_core` in a safe fashion unless
you're doing the exact same thing as everyone else.
`Mutex` uses tokens to pass information from the unparker to the thread being unparked. One token in particular (`TOKEN_HANDOFF`) indicates that ownership of the mutex has been directly transferred to the unparked thread, without unlocking it. If a `TOKEN_HANDOFF` is sent out of nowhere, it could result in a thread assuming that it owns the mutex when it really doesn't. This is the main reason why the `parking_lot_core` methods are unsafe.
Having that documented in the function as a concrete case of unsafety would be useful!
// That means that attempting to access the thread local here may acquire a
// lock which would in turn come back to access thread data?
//
// REVIEW: what are the error cases that accessing the thread local fails?
The error case for `try_with` is when a thread is currently in the middle of destroying its TLS. In such cases we fall back to a stack allocation.
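A sketch of that fallback, with `ThreadData` as a stand-in for the real per-thread parking state (the raw-pointer shape is only there so `f` is written out once, which comes up again further down):

```rust
use std::cell::Cell;

struct ThreadData {
    parked: Cell<bool>,
}

impl ThreadData {
    fn new() -> ThreadData {
        ThreadData { parked: Cell::new(false) }
    }
}

fn with_thread_data<T>(f: impl FnOnce(&ThreadData) -> T) -> T {
    thread_local!(static THREAD_DATA: ThreadData = ThreadData::new());
    // try_with only fails while this thread is tearing down its TLS; fall
    // back to a short-lived, stack-allocated ThreadData in that case so
    // parking still works during thread destruction.
    let mut stack_fallback = None;
    let ptr = THREAD_DATA
        .try_with(|d| d as *const ThreadData)
        .unwrap_or_else(|_| stack_fallback.get_or_insert_with(ThreadData::new) as *const ThreadData);
    unsafe { f(&*ptr) }
}
```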
Ok makes sense, can this be organized without `unsafe` though?
@@ -566,20 +603,30 @@ pub const DEFAULT_PARK_TOKEN: ParkToken = ParkToken(0);
/// you could otherwise interfere with the operation of other synchronization
/// primitives.
///
// REVIEW: if a random key is passed in, is that the only thing that happens?
// (messing with other primitives) If I'm the only user of `parking_lot` it's
// memory safe to pass in any value, right?
Primitives rely on not being messed with for soundness (see my comment earlier).
// addition with `unparked` above? This is only used in the case that
// `park_until` returns `false`, but doesn't that mean (given the
// platform-specific implementations) that the timeout elapsed so
// `timed_out` should be asserted to be true?
While the timeout may have elapsed, our thread is still in the hash table, so another thread may have unparked it after we timed out but before we called `lock_bucket_checked`. In this case we need to report an `Unparked` with the token that was given to us by the other thread.
Ah right yeah, can that be added to the comment?
@@ -27,6 +34,8 @@ type tv_nsec_t = libc::c_long;

// Helper type for putting a thread to sleep until some other thread wakes it up
pub struct ThreadParker {
    // REVIEW: is it really worth it to gate this entire module on the existence
    // of `AtomicI32` rather than using `AtomicUsize`?
Using `AtomicUsize` would be incorrect on 64-bit platforms.
Would it really? Sure, it's not 32 bits, it's actually 64, but why does that matter?
(my point is that, like elsewhere, you could use `AtomicI32` everywhere here, and on stable and/or older versions just have a typedef of `type AtomicI32 = AtomicUsize` or something like that)
// here instead of parking indefinitely? It's a bit of a moot
// point in the sense that indefinitely vs sleeping for years
// isn't really that different, but it's probably good to be
// consistent.
I disagree. I think that sleeping indefinitely is the better solution since it won't consume 100% CPU.
My point isn't about consuming 100% CPU; it's more that if you ask to sleep for 1000 years and the system can only sleep for 800 years, we sleep for 800 and then sleep for another 200.
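In other words, something like this loop (purely illustrative; the one-year cap below is made up):

```rust
use std::thread;
use std::time::{Duration, Instant};

// Clamp each individual sleep to what the platform can represent and loop
// until the requested deadline has actually passed.
fn sleep_until(deadline: Instant) {
    const MAX_PLATFORM_SLEEP: Duration = Duration::from_secs(365 * 24 * 60 * 60);
    loop {
        let now = Instant::now();
        if now >= deadline {
            return;
        }
        let remaining = deadline - now;
        thread::sleep(remaining.min(MAX_PLATFORM_SLEEP));
    }
}
```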
debug_assert!(
    *libc::__errno_location() == libc::EINTR
        || *libc::__errno_location() == libc::EAGAIN
        || (ts.is_some() && *libc::__errno_location() == libc::ETIMEDOUT)
);

// REVIEW: what's the platform compatibility of the futex
// syscall? Does it fit libstd's platform compatibility?
It has been available since Linux 2.6.0.
// it's either a `Cell` or an `UnsafeCell`?
//
// I think the tl;dr; is that one of these `Cell` should be an
// `UnsafeCell`.
`should_park` is only accessed while holding the mutex, so this should be fine I think? The general rule I've been going with is to put `Copy` types in `Cell` and the rest in `UnsafeCell`.
This bit us long ago when dealing with channels at some point, but if interior mutability is used in a threaded context then `UnsafeCell` should be used. Using `Cell` provides a false sense of security, since the contract is that a lock must be held before the `Cell` can be accessed, and that's very difficult to verify when reading unless there's an `unsafe` block warning "go look for the rationale for safety somewhere".
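A sketch of the shape being asked for (the field name is taken from the discussion; everything else is illustrative):

```rust
use std::cell::UnsafeCell;

struct ThreadData {
    should_park: UnsafeCell<bool>,
}

impl ThreadData {
    fn set_should_park(&self, value: bool) {
        // SAFETY: the bucket mutex guarding this ThreadData is held by the
        // caller, so no other thread can be reading or writing `should_park`
        // concurrently. This unsafe block is the "go look for the rationale"
        // marker being argued for above.
        unsafe { *self.should_park.get() = value }
    }
}
```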
// REVIEW: The first hit for this api on google is "Undocumented
// functions of NTDLL". The standard library has historically been
// conservative to not use undocumented platform features where
// possible, such as symbolication APIs for backtraces on OSX.
This is only used on older versions of Windows (XP, Vista, 7) which don't have `WaitOnAddress`.
Ok but this is still missing any documentation like:
- Why is it justified to use an undocumented API?
- What is the general shape of this module and the protocol used?
- Are there other systems doing something similar this is mirrored from? (etc etc)
@@ -47,6 +50,8 @@ impl WaitAddress {
    unsafe {
        // MSDN claims that WaitOnAddress and WakeByAddressSingle are
        // located in kernel32.dll, but they are lying...

        // REVIEW: how stable is this name to rely on it?
It is fixed in the Windows ABI at this point. Apps would fail to launch if they removed this DLL.
Can... that be backed up? Like point to some official documentation? Something like that? I can declare anything as stable as well but it doesn't mean much if it's wrong...
@@ -178,6 +183,9 @@ unsafe impl<R: RawMutex + Send, G: GetThreadId + Send, T: ?Sized + Send> Send
    for ReentrantMutex<R, G, T>
{
}
// REVIEW: this naively seems like it should require `T: Sync` as well because
// if you share `ReentrantMutex<R, G, T>` across threads then you also have
// access to `&T` across all threads. I may be getting this backwards though.
Nice catch! This seems to be a typo.
If we add that requirement then a `ReentrantMutex<RefCell<T>>` does not become `Sync` any more. And the documentation for `ReentrantMutex` explicitly states that one should use `RefCell` to get mutable access to the data in the mutex. The `is_mutex` test in this file breaks.
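A toy model of the trade-off, assuming the bound under discussion (this is not lock_api's real type, just an illustration of why `T: Send` rather than `T: Sync` keeps the `RefCell` pattern working):

```rust
use std::cell::RefCell;

struct ToyReentrantMutex<T> {
    data: T,
}

// SAFETY (sketch): the real lock guarantees at most one thread can reach
// `&T` at a time, so the data only ever moves between threads; that is what
// `T: Send` captures. Requiring `T: Sync` instead would reject the RefCell
// usage below, which the ReentrantMutex docs recommend for mutation.
unsafe impl<T: Send> Sync for ToyReentrantMutex<T> {}

fn assert_sync<T: Sync>(_: &T) {}

fn main() {
    let m = ToyReentrantMutex { data: RefCell::new(0u32) };
    assert_sync(&m); // OK with T: Send; would fail to compile if Sync required T: Sync
}
```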
@@ -360,6 +379,7 @@ impl Condvar {
}

// ... and re-lock it once we are done sleeping
// REVIEW: how is it possible that TOKEN_HANDOFF gets produced here?
It is produced from `RawMutex::unlock_slow`.
Can that be documented here and/or somewhere? Knowing that ahead of time would have made it much easier to read...
//
// REVIEW: the lock for the parking lot bucket is held here,
// right? That means it's ok for there to be parked threads but
// the park bit isn't actually set for the lock?
Correct
// REVIEW: the "i-am-libstd" feature would probably best be something like
// `#[cfg(rustbuild)]` or something like that printed out by the build script in
// rust-lang/rust.
Yes. Anything that makes `feature = "i-am-libstd"` a bit shorter would be really good.
@@ -794,6 +876,8 @@ pub unsafe fn unpark_all(key: usize, unpark_token: UnparkToken) -> usize {
    let mut link = &bucket.queue_head;
    let mut current = bucket.queue_head.get();
    let mut previous = ptr::null();

    // REVIEW: is this really worth the `#[cfg]`?
    #[cfg(not(feature = "i-am-libstd"))]
    let mut threads = SmallVec::<[_; 8]>::new();
What do you mean? We don't have access to the `smallvec` crate inside libstd, so we fall back to regular `Vec`. However, `smallvec` is `no_std`, so it can be made compatible with libstd.
Ah yeah, sorry, to expand a bit: my point is whether it's really worth it to use `SmallVec` here, or if `Vec` can just be used unconditionally.
// THREAD_DATA.with(f)
// }
//
// and that's it?
In an old PR I changed `with_thread_data` to call `f` in two places, a bit like above. But @Amanieu argued that it should only be called once, so it's not inlined twice into `with_thread_data`. See this comment: Amanieu#108 (comment)
I think I would personally prefer to err on the side of safety rather than code size for now and "hope LLVM takes care of it", and we can evaluate the code size impact and such later. It's unlikely to have any real effect, I'd imagine, since there's so much code in Rust in general that micro-optimizing any one case won't matter much.
I'm sorry I haven't been able to respond to a lot of the feedback here; this has been an unusually busy week for me. I will take tomorrow (Friday) and make sure I respond to all this feedback as well as Amanieu#140.
@@ -93,6 +93,8 @@
#![warn(rust_2018_idioms)]
#![cfg_attr(feature = "nightly", feature(const_fn))]

// REVIEW: would recommend unconditionally tagging this crate as `#![no_std]`
That results in this warning when building `std`:
warning: unused attribute
--> src/libstd/../parking_lot/lock_api/src/lib.rs:91:1
|
91 | #![no_std]
| ^^^^^^^^^^
|
= note: #[warn(unused_attributes)] on by default
Which is why I introduced the condition.
I think that's indicative of a different organization being needed in that case? For example `src/lib.rs` could be something like:
mod lock_api;
pub use lock_api::*;
and then `src/lock_api/mod.rs` is the entry point for the crate and libstd
#[cfg(not(feature = "i-am-libstd"))]
#[cfg(not(feature = "i-am-libstd"))] // REVIEW: why the `i-am-libstd` check here?
// REVIEW: I'm noticing that this is probably in a number of locations, but
// it's not clear why various methods are configured out in that build.
Because I can't figure out how to make the `defer!` macro available to this namespace without exporting it to all of libstd, which I want to avoid. Someone on IRC said it's not possible to refer to macros in your own crate by the full path.
AFAIK the `defer!` macro is just sort of like an anonymous destructor, so could that just be made into an API rather than a macro?
You can also scope a `macro_rules` macro to just a few modules if it's defined in the crate, by putting its definition before the modules that need it.
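For instance, a guard-object version of the same idea might look roughly like this (names illustrative, not the actual libstd or parking_lot code):

```rust
// A deferred closure that runs when the guard is dropped, even on unwind,
// which is essentially what an anonymous-destructor macro expands to.
struct Defer<F: FnOnce()>(Option<F>);

impl<F: FnOnce()> Drop for Defer<F> {
    fn drop(&mut self) {
        if let Some(f) = self.0.take() {
            f();
        }
    }
}

fn defer<F: FnOnce()>(f: F) -> Defer<F> {
    Defer(Some(f))
}

fn main() {
    let _cleanup = defer(|| println!("runs at scope exit"));
    println!("do the work while the guard is live");
}
```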
@@ -13,6 +13,9 @@
#![warn(rust_2018_idioms)]
#![cfg_attr(feature = "nightly", feature(asm))]

// REVIEW: would recommend unconditionally tagging this crate as `#![no_std]`
// and then manually handling `i-am-libstd`.
I don't understand what this comment is about. `parking_lot` is not `no_std` ever, only `lock_api` is. We can't make this `no_std`, can we? Then the `pub use std::*;` in `mod libstd` would not work.
Weird hacks like this `mod libstd` are artifacts of trying to work around `no_std` and such. The best way I've found to organize this is to unconditionally tag the crate `#![no_std]` and then, when necessary, explicitly import `std` and use it (aka when some feature is active or something like that).
I do not understand. The `mod libstd` hack is not to work around `no_std`; it's to work around `std` being available via different paths depending on `i-am-libstd`.
`#![no_std]` makes no sense in the `i-am-libstd` case, because then we're not even a crate, so we should not declare crate attributes like this?
And in the `not(i-am-libstd)` case it does not make sense either, from how I understand it, because the crate needs `std` for a number of types. How would I import types from `std` if I declare the crate `no_std`? Then `std` is not even available to the crate, no? I'm confused.
Hm, ok, let me put this another way. If this crate is not tagged `#![no_std]` then all of a sudden it's a constant maintenance headache: you have to remember that although you can use `std::...` you shouldn't. Tagging this crate unconditionally as `#![no_std]` solves this issue and means you're explicitly reaching for `std`, which clearly delineates areas which can't be compiled into the standard library.
Everything after that is basically just getting it set up to compile again. Sure, it doesn't naively work as-is with a one-line change and nothing else modified. If the organization of the library is tweaked a bit though, and more than one line is changed, this can work (and has for other crates like stdsimd).
If a crate is `#![no_std]` and it needs `std`, then you'll use `extern crate std` and then import from that.
I had no idea `extern crate std` worked in `no_std` crates. Now I understand :)
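So the crate-root layout being described would look roughly like this (the module contents are made up; only the `#![no_std]` plus conditional `extern crate std` shape matters):

```rust
#![no_std]

// Pull std in explicitly for the builds that actually want it, so every use
// of std elsewhere in the crate is a deliberate, greppable decision.
#[cfg(not(feature = "i-am-libstd"))]
extern crate std;

#[cfg(not(feature = "i-am-libstd"))]
mod uses_std {
    pub fn yield_now() {
        std::thread::yield_now();
    }
}
```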
# CI should help that import paths and such are always organized correctly.

# REVIEW: needs at least one builder for all of `thread_parker` implementations
# (at least building it, executing it would be best), missing ones are wasm/OSX
I added macOS testing here: Amanieu#147
But I'm not sure how we could test wasm. It requires the specially built libstd with atomics support, but we can't easily get that on Travis, can we?
There's not really a great way to test it via execution, since engines haven't fully caught up yet all over the place, but I'd recommend using `xargo` plus `RUSTFLAGS` to ensure that the support here at least builds and produces a binary.
@@ -158,6 +169,8 @@ impl<R: RawMutex, T> Mutex<R, T> {
}

impl<R: RawMutex, T: ?Sized> Mutex<R, T> {
    // REVIEW: it seems like this function should be `unsafe` with the
    // contract that the lock must be held when calling it
Being fixed over in Amanieu#148
@@ -62,6 +64,7 @@ impl<R: RawMutex, G: GetThreadId> RawReentrantMutex<R, G> {
        return false;
    }
    self.owner.store(id, Ordering::Relaxed);
    // REVIEW: add a debug assert for lock_count == 0?
Doing so results in a panic; it's not zero here all the time. The failing test is `is_mutex`. I have not yet checked if this is a bug or expected behavior, and I'm giving up for today.
Hm that naively sounds like it would cause the store of 1 below to perhaps overwrite something previous, so that sounds bad?
That's just because when we perform the last unlock we don't bother resetting `lock_count` to 0.
Can it be reset to zero so a debug assertion can be added? Surely that's not really performance sensitive...
Addressed in Amanieu#148
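To make the counter-reset idea concrete, here is a toy sketch (an `AtomicBool` stands in for the real `RawMutex`; the field names follow the discussion, but the details are assumed):

```rust
use std::cell::Cell;
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};

struct ToyReentrantMutex {
    locked: AtomicBool,
    owner: AtomicUsize,
    // Only touched while `locked` is held by the current thread.
    lock_count: Cell<usize>,
}

impl ToyReentrantMutex {
    fn lock(&self, id: usize) {
        if self.owner.load(Ordering::Relaxed) == id {
            // Re-entrant acquisition by the owning thread: bump the count.
            self.lock_count.set(self.lock_count.get() + 1);
            return;
        }
        while self.locked.swap(true, Ordering::Acquire) {
            std::hint::spin_loop();
        }
        self.owner.store(id, Ordering::Relaxed);
        // Because unlock() resets the count on the final release, this holds.
        debug_assert_eq!(self.lock_count.get(), 0);
        self.lock_count.set(1);
    }

    fn unlock(&self) {
        let count = self.lock_count.get() - 1;
        self.lock_count.set(count); // reaches 0 again on the final unlock
        if count == 0 {
            self.owner.store(0, Ordering::Relaxed);
            self.locked.store(false, Ordering::Release);
        }
    }
}
```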
// can't be fixed by other means then all unit tests in libstd should be
// extracted to standalone tests in `src/libstd/tests` in the same manner as
// `src/libcore/tests` and then unit tests for the libstd library should be
// disabled.
I'm not deeply familiar with what tests are present in `libstd` currently. But moving them out to a separate `tests/` makes them only able to access the public API of `libstd`, right? So any test that tests some internal function or other aspect will not work this way.
Simply not allowing testing anything internal in the entire standard library seems like a huge drawback to me, but as I said, I'm not familiar with the current tests and the approach they are usually written with.
That is correct, yes, we can't easily test private internal implementation details if they're moved out. That being said this is already the case for libcore, and there's really not that many private implementation details of libstd which aren't public in one way or another. Currently libcore gets around this by having unstable exports to test.
Like I'm sorry but a mangled constant with weird linkage is just a hard no for putting this in the standard library. This is basically guaranteed to cause us unending pain through LLVM updates, tiny linkage tweaks etc. This issue needs to be put to rest at the beginning one way or another which doesn't involve dipping so far into unstable features.
@alexcrichton @Amanieu any updates on this?
@Dylan-DPC I have been getting a few PRs merged. Regarding the larger changes in libstd, to move all tests out to a separate crate etc., I have no idea.
So is this effort on permanent hold?
It's been a few years since the last discussion, has there been any movement in this area? Is there a chance to do something by the 2024 edition?
Rust's standard library now has optimized implementations of locks on several platforms (Windows & Linux notably). So there is much less benefit to including parking_lot now.
I'm going to close this. The code in this PR is far behind the latest parking_lot.
@davids91 comments in a closed PR are generally not very useful. If you have a feature request for the rust standard library, please file an issue at https://github.com/rust-lang/rust/issues/
@RalfJung thank you! I did so already! :) rust-lang/libs-team#260 |
cc @Amanieu
I figure this is probably one of the better ways to organize this in terms of sending a PR that others can comment on as well. If others have thoughts of how to organize this I'm all for that!
I've left a lot of comments with "REVIEW:" prefixes below, and many of them are pretty straightforward to fix, but a good number are also a bit deeper and will likely require more discussion. Doing this all in one PR may not be the best, so I can also open issues on `parking_lot` itself if that's easier. In terms of procedure, this is all in @faern's fork currently since I'm reviewing what was integrated into libstd. Remaining modules for me to review are:
- `core/src/parking_lots.rs`
- `src/raw_rwlock.rs`