Span's From<&[T]> and From<&[T; N]> facilitate UB #33

Open
SFBdragon opened this issue May 5, 2024 · 11 comments

@SFBdragon (Owner) commented May 5, 2024

Writing through &x as *const _ as *mut _ is UB.

This means some of Talc's API surface is genuinely problematic, particularly the From implementations on Span for &[T] and &[T; N]. They are not UB in themselves (and technically Talc's docs on unsafety cover this), but they make UB needlessly easy to invoke.

Here, the .into() does the legwork of hiding the issue described above. Miri indeed takes issue with it.

fn main() {
    let mut arena = [0u8; 10000];

    unsafe {
        // The Span is derived from a shared reference via From<&[T]>...
        let mut talc = talc::Talc::new(talc::ClaimOnOom::new(arena.as_ref().into()));
        let x = talc.malloc(std::alloc::Layout::new::<[u64; 3]>()).unwrap();
        // ...so this write through the allocation is UB.
        *x.as_ptr().cast::<[u64; 3]>() = [1, 2, 3];
    }
}
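
For reference, a snippet like this can be checked with Miri (assuming the component is installed via rustup component add miri):

cargo +nightly miri run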

These From implementations should be removed. This is a breaking change, though, and will necessitate a major version bump.
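
For contrast, a sound variant of the same program can derive the Span from a raw pointer rather than from a shared reference; here's a sketch using the Span::from_array constructor that comes up later in this thread:

fn main() {
    let mut arena = [0u8; 10000];

    unsafe {
        // A raw pointer cast from &mut keeps write provenance, so the
        // allocator may hand out writable memory from this span.
        let span = talc::Span::from_array(&mut arena as *mut [u8; 10000]);
        let mut talc = talc::Talc::new(talc::ClaimOnOom::new(span));
        let x = talc.malloc(std::alloc::Layout::new::<[u64; 3]>()).unwrap();
        *x.as_ptr().cast::<[u64; 3]>() = [1, 2, 3];
    }
}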

@SFBdragon changed the title from "Span's From<&[T]> and From<&[T; N]> encourage UB" to "Span's From<&[T]> and From<&[T; N]> facilitate UB" on May 5, 2024
@SFBdragon (Owner)

These functions have been deprecated to the extent possible in v4.4.2. They will be removed in v5.

@orsinium

The README still shows the old way:

https://github.com/SFBdragon/talc?tab=readme-ov-file#setup

What's the new way to create a global allocator?

@SFBdragon (Owner)

Hey, thanks for reaching out. I naively hoped that wouldn't confuse too many people before 1.83 lands. I missed it when pushing out the update.

Here's the currently recommended way to do that exact thing: https://github.com/SFBdragon/talc/blob/master/stable_examples/examples/std_global_allocator.rs

// Assumes: static mut ARENA: [u8; 10000] = [0; 10000];
// plus the talc, spin, and core::ptr::addr_of imports.
#[global_allocator]
static ALLOCATOR: Talck<spin::Mutex<()>, ClaimOnOom> = Talc::new(unsafe {
    ClaimOnOom::new(Span::from_array(addr_of!(ARENA) as *mut [u8; 10000]))
}).lock();

Either ARENA should be large enough, or more memory should be claimed once the program starts running.
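
For the latter, here's a rough sketch of claiming more memory at runtime (the EXTRA arena and the claim_more name are illustrative; claim is the method Talc provides for handing it memory):

fn claim_more() {
    static mut EXTRA: [u8; 0x10000] = [0; 0x10000];
    unsafe {
        let span = Span::from_array(addr_of!(EXTRA) as *mut [u8; 0x10000]);
        // claim hands the span over to the allocator to allocate from;
        // it returns Err if the span is too small to be useful.
        ALLOCATOR.lock().claim(span).expect("failed to claim memory");
    }
}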

(Aiming to overhaul the READMEs and documentation situation before too long. Just finished exams so there'll be more movement over the next few weeks.)

@orsinium

Thank you! It works. Clippy suggested using .cast_mut() instead of as *mut ...:

Span::from_array(addr_of!(ARENA).cast_mut())

https://rust-lang.github.io/rust-clippy/master/index.html#ptr_cast_constness

@SFBdragon (Owner) commented Nov 19, 2024

No problem! Ah, that cast_mut is better. Thanks.


I got an email about a reply, but I can't see it anywhere. Maybe you intentionally deleted it, but I'll quickly reply anyway in case it's nonetheless useful:

> Thank you! It works but the produced wasm binary is 191 bytes bigger (after wasm-opt pass) 👀 Is there a way to use talc without a mutex? The environment where I run the wasm module is guaranteed to be single-threaded.
>
> Also, it's worth mentioning that spin is a separate crate that needs to be added. It took me a while to figure out that it doesn't come from use talc::*.

Instructions for the WASM stuff are here, but they document the old method as well.

https://github.com/SFBdragon/talc/blob/master/talc/README_WASM.md

If arena allocation is preferable to dynamic heap growth, then use:

#[global_allocator]
static ALLOCATOR: talc::Talck<talc::locking::AssumeUnlockable, talc::ClaimOnOom> = {
    static mut MEMORY: [u8; 0x1000000] = [0; 0x1000000];
    let span = talc::Span::from_array(std::ptr::addr_of!(MEMORY).cast_mut());
    talc::Talc::new(unsafe { talc::ClaimOnOom::new(span) }).lock()
};

I don't like talc::locking::AssumeUnlockable very much, as it violates unsafety encapsulation entirely, but it's probably what you'd want to use until v5, which should make this code cleaner either way.

@orsinium

Yeah, I removed my comment when I realized that the old code used the unlockable variant and the new one I copied from your example uses the lock, hence the size difference. I switched back to unlockable because Firefly Zero runs games in a single thread.

Later, I found the wasm usage instructions and switched to TalckWasm. It increased the binary size by 600+ bytes, but I decided to stick with it for now. I hoped it would solve the problem of the app allocating 18 pages (almost a megabyte) of linear memory by default, but it hasn't. It's probably a configuration somewhere in the Rust compiler; I've left that mystery unsolved for now.

If you're interested, this is the PR:

firefly-zero/firefly-launcher#9

@SFBdragon (Owner) commented Nov 21, 2024

Maybe try writing an allocator that forwards allocation requests to a real allocator (like talc) but does something else if it detects a massive allocation request. (Alternatively, make the wrapper allocator log allocations so you have a better idea of what to look for, using web_sys::console::log_* or something like that.)

// This is just a rough sketch

pub struct InspectorAlloc { pub talc: talc::TalckWasm }

#[global_allocator]
static MIDDLE_MAN_ALLOC: InspectorAlloc = InspectorAlloc {
    talc: unsafe { talc::TalckWasm::new_global() },
};

unsafe impl GlobalAlloc for InspectorAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        if layout.size() > 10 * 64 * 1024 {
            // panic with a backtrace, or log stuff like allocation sizes, or ???
        }
        self.talc.alloc(layout)
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) { self.talc.dealloc(ptr, layout); }
}

Might help narrow the issue down.

I keep writing types like this or modifying talc to do this, so maybe I should just write a crate that uses the tracing crate to handle this nicely. Or maybe something like that already exists.

@orsinium

Thank you! I gave it a spin and found out that the biggest single allocation that happens is just 2048 bytes. So, no, it's not an allocation. What's more, this memory is allocated not dynamically but statically: the wasm binary specifies that the memory must be at least 17 pages by default. So I think there might be a rust/llvm flag for that.

For posterity, this is the complete implementation of the allocator:

use core::alloc::{GlobalAlloc, Layout};

pub struct InspectorAlloc {
    pub talc: talc::TalckWasm,
}

#[global_allocator]
static ALLOCATOR: InspectorAlloc = InspectorAlloc {
    talc: unsafe { talc::TalckWasm::new_global() },
};

unsafe impl GlobalAlloc for InspectorAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        assert!(layout.size() < 2 * 1024, "{}", layout.size());
        self.talc.alloc(layout)
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        self.talc.dealloc(ptr, layout);
    }
}

@orsinium

I've calculated the total allocations: it dynamically allocates just over 7 KB. And the data section isn't that big. So it's certainly not a problem with talc.
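
For reference, a rough sketch of how that total can be tracked, as a variant of the inspector above (the atomic counter is illustrative):

use core::sync::atomic::{AtomicUsize, Ordering};

static TOTAL_ALLOCATED: AtomicUsize = AtomicUsize::new(0);

unsafe impl GlobalAlloc for InspectorAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        // Keep a running total of every byte requested.
        TOTAL_ALLOCATED.fetch_add(layout.size(), Ordering::Relaxed);
        self.talc.alloc(layout)
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        self.talc.dealloc(ptr, layout);
    }
}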

@bjorn3 commented Nov 21, 2024

Try passing something like -Clink-arg=-zstack-size=4096 to rustc to shrink the stack size to 4 KiB. By default rustc sets the stack size to 1 MiB (see https://github.com/rust-lang/rust/blob/717f5df2c308dfb4b7b8e6c002c11fe8269c4011/compiler/rustc_target/src/spec/base/wasm.rs#L14). There is a fair chance that 4 KiB is not enough stack space, but you can always choose a larger stack size if that is indeed the case.
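
One place to put that flag is a .cargo/config.toml entry; a sketch (the target triple here is an assumption, adjust it to the wasm target actually in use):

[target.wasm32-unknown-unknown]
rustflags = ["-C", "link-arg=-zstack-size=4096"]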
