Massive slowdown caused by stacked borrows on code involving raw pointers + UnsafeCell #3735
I cannot reproduce this result. Can you post the version that uses `Box<AtomicPage>`?
```rust
#[test]
fn init_atomic() {
    let pages = [(); PAGES].map(|()| unsafe { Box::<AtomicPage>::new_zeroed().assume_init() });
    for i in 0..ITEMS {
        let page_idx = i / PAGE_SIZE;
        let inpage_idx = i % PAGE_SIZE;
        pages[page_idx][inpage_idx].store(u32::MAX, Release);
    }
}
```
Interestingly, keeping the intermediate `page` reference makes a difference:

```rust
#[test]
fn init_atomic() {
    let pages = [(); PAGES].map(|()| unsafe { Box::<AtomicPage>::new_zeroed().assume_init() });
    for i in 0..ITEMS {
        let page_idx = i / PAGE_SIZE;
        let inpage_idx = i % PAGE_SIZE;
        let page: &AtomicPage = &pages[page_idx];
        page[inpage_idx].store(u32::MAX, Release);
    }
}
```
There is an extremely subtle difference between those two programs. You're specifying the type to be `&AtomicPage`, which introduces an extra reborrow (see miri/src/borrow_tracker/stacked_borrows/mod.rs, lines 207 to 215 at 8f40dd2).
Huh. In fact, adding the type annotation at all, even if it's fully redundant, still produces an extra reborrow. That is new to me.
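As a standalone illustration of that claim (toy code, not taken from the issue), the pattern boils down to an annotated versus an un-annotated reference binding; per the comment above, Stacked Borrows sees one extra retag for the annotated form:

```rust
// Minimal sketch of the two bindings discussed above. The comment reports
// that the annotated form produces one more reborrow (retag) under Stacked
// Borrows than the un-annotated form, even though the annotation is redundant.
fn touch(xs: &[u8; 4]) {
    let a = &xs[0];        // un-annotated binding
    let b: &u8 = &xs[0];   // redundant annotation; reported to add an extra reborrow
    let _ = (*a, *b);
}

fn main() {
    touch(&[0; 4]);
}
```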
Ah of course, the primary difficulty in debugging this performance issue is that the GC is doing its absolute best to cover up the problem. Perhaps a better minimization is this:

```rust
use std::cell::UnsafeCell;

const ITEMS: usize = 1 << 12;

fn main() {
    let page = [const { UnsafeCell::new(0u8) }; 4096];
    for i in 0..ITEMS {
        let _page = &page;
    }
}
```

When run with the provenance GC disabled (e.g. `-Zmiri-provenance-gc=0`), this already exhibits the slowdown. This is very similar to our slice-get-unchecked benchmark (miri/bench-cargo-miri/slice-get-unchecked/src/main.rs, lines 5 to 12 at 8f40dd2).
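That benchmark is essentially a tight loop of unchecked slice indexing; the following is a paraphrase of its shape rather than the exact file contents:

```rust
// Rough paraphrase of the slice-get-unchecked benchmark referenced above,
// not the exact file: every iteration performs an unchecked index, which
// under Stacked Borrows means a raw-pointer access that must be checked
// against the borrow stack.
fn main() {
    let x = vec![0u8; 4096];
    for i in 0..x.len() {
        let _ = unsafe { *x.get_unchecked(i) };
    }
}
```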
But for this search, we don't have a stack cache, so the only mitigation of the true runtime explosion is the GC. So I have two observations: I wanted to print the stacks for this program, and I got this:

```rust
use std::cell::UnsafeCell;

extern "Rust" {
    fn miri_get_alloc_id(ptr: *const u8) -> u64;
    fn miri_print_borrow_state(alloc_id: u64, show_unnamed: bool);
}

fn get_alloc_id(ptr: *const u8) -> u64 {
    unsafe { miri_get_alloc_id(ptr) }
}

fn print_borrow_stacks(alloc_id: u64) {
    unsafe {
        miri_print_borrow_state(alloc_id, /* ignored: show_unnamed */ false)
    }
}

fn main() {
    let page = [const { UnsafeCell::new(0u8) }; 4096];
    let id = get_alloc_id(page.as_ptr().cast());
    print_borrow_stacks(id);
}
```
There is a different borrow stack for every byte, but they are all equal. I thought we had a mechanism to prevent this.
Yes, but that only kicks in when we iterate the stacks and notice that two adjacent stacks are equal. It also doesn't do arbitrarily much comparing and merging, to avoid unnecessary overhead. But here nothing at all seems to be merged, not even the first bytes:
That is indeed strange.
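To make that merging mechanism concrete, here is a conceptual sketch, not Miri's actual code; the representation of per-byte ranges and their stacks is invented purely for illustration:

```rust
// Conceptual sketch of the merging described above: while iterating the
// per-byte borrow stacks, runs of adjacent stacks that compare equal are
// collapsed into a single range, so only a bounded amount of comparing
// needs to happen per iteration.
fn merge_adjacent(stacks: Vec<(usize, Vec<u64>)>) -> Vec<(usize, Vec<u64>)> {
    // each entry is (length of the byte range, borrow stack for that range)
    let mut merged: Vec<(usize, Vec<u64>)> = Vec::new();
    for (len, stack) in stacks {
        match merged.last_mut() {
            // extend the previous range if its stack is identical
            Some((prev_len, prev_stack)) if *prev_stack == stack => *prev_len += len,
            _ => merged.push((len, stack)),
        }
    }
    merged
}

fn main() {
    // 4096 one-byte ranges with identical stacks collapse into a single range
    let stacks = vec![(1, vec![0, 1]); 4096];
    assert_eq!(merge_adjacent(stacks), vec![(4096, vec![0, 1])]);
}
```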
I was trying to minimize a massive slowdown observed in our code and got the following snippet (sketched below).

On my laptop this shows more than a 50x slowdown.

Replacing `[*mut AtomicPage; PAGES]` with `[Box<AtomicPage>; PAGES]` reduces the slowdown to less than 2x.
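To make the reported comparison concrete, here is a rough sketch of the raw-pointer variant described above. The constants, the `AtomicPage` definition, and the overall structure are assumptions pieced together from the other comments, not the reporter's actual snippet; only the `[*mut AtomicPage; PAGES]` versus `[Box<AtomicPage>; PAGES]` distinction is taken from the report.

```rust
use std::sync::atomic::{AtomicU32, Ordering::Release};

// Assumed constants; the real values from the report are not known.
const PAGE_SIZE: usize = 4096;
const PAGES: usize = 8;
const ITEMS: usize = PAGE_SIZE * PAGES;

type AtomicPage = [AtomicU32; PAGE_SIZE];

fn init_raw() {
    // Pages reached through raw pointers: the variant reported to be >50x slower.
    let pages: [*mut AtomicPage; PAGES] = [(); PAGES]
        .map(|()| Box::into_raw(unsafe { Box::<AtomicPage>::new_zeroed().assume_init() }));
    for i in 0..ITEMS {
        let page_idx = i / PAGE_SIZE;
        let inpage_idx = i % PAGE_SIZE;
        unsafe { (*pages[page_idx])[inpage_idx].store(u32::MAX, Release) };
    }
    // Free the pages again so the sketch does not leak under Miri.
    for p in pages {
        drop(unsafe { Box::from_raw(p) });
    }
}

fn main() {
    init_raw();
}
```

The boxed variant corresponds to the `init_atomic` test shown earlier in the thread.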