Rollup of 6 pull requests #72898

Closed · wants to merge 15 commits
2 changes: 1 addition & 1 deletion README.md
@@ -23,7 +23,7 @@ or reading the [rustc dev guide][rustcguidebuild].

[rustcguidebuild]: https://rustc-dev-guide.rust-lang.org/building/how-to-build-and-run.html

### Building on *nix
### Building on a Unix-like system
1. Make sure you have installed the dependencies:

* `g++` 5.1 or later or `clang++` 3.5 or later
122 changes: 36 additions & 86 deletions src/liballoc/raw_vec.rs
@@ -9,7 +9,7 @@ use core::ptr::{NonNull, Unique};
use core::slice;

use crate::alloc::{
handle_alloc_error, AllocErr,
handle_alloc_error,
AllocInit::{self, *},
AllocRef, Global, Layout,
ReallocPlacement::{self, *},
@@ -211,13 +211,13 @@ impl<T, A: AllocRef> RawVec<T, A> {
}
}

/// Ensures that the buffer contains at least enough space to hold
/// `used_capacity + needed_extra_capacity` elements. If it doesn't already have
/// enough capacity, will reallocate enough space plus comfortable slack
/// space to get amortized `O(1)` behavior. Will limit this behavior
/// if it would needlessly cause itself to panic.
/// Ensures that the buffer contains at least enough space to hold `len +
/// additional` elements. If it doesn't already have enough capacity, will
/// reallocate enough space plus comfortable slack space to get amortized
/// `O(1)` behavior. Will limit this behavior if it would needlessly cause
/// itself to panic.
///
/// If `used_capacity` exceeds `self.capacity()`, this may fail to actually allocate
/// If `len` exceeds `self.capacity()`, this may fail to actually allocate
/// the requested space. This is not really unsafe, but the unsafe
/// code *you* write that relies on the behavior of this function may break.
///
@@ -263,64 +263,32 @@ impl<T, A: AllocRef> RawVec<T, A> {
/// # vector.push_all(&[1, 3, 5, 7, 9]);
/// # }
/// ```
pub fn reserve(&mut self, used_capacity: usize, needed_extra_capacity: usize) {
match self.try_reserve(used_capacity, needed_extra_capacity) {
pub fn reserve(&mut self, len: usize, additional: usize) {
match self.try_reserve(len, additional) {
Err(CapacityOverflow) => capacity_overflow(),
Err(AllocError { layout, .. }) => handle_alloc_error(layout),
Ok(()) => { /* yay */ }
}
}

/// The same as `reserve`, but returns on errors instead of panicking or aborting.
pub fn try_reserve(
&mut self,
used_capacity: usize,
needed_extra_capacity: usize,
) -> Result<(), TryReserveError> {
if self.needs_to_grow(used_capacity, needed_extra_capacity) {
self.grow_amortized(used_capacity, needed_extra_capacity, MayMove)
pub fn try_reserve(&mut self, len: usize, additional: usize) -> Result<(), TryReserveError> {
if self.needs_to_grow(len, additional) {
self.grow_amortized(len, additional)
} else {
Ok(())
}
}
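
For intuition about the renamed parameters, the same contract is visible through the public `Vec` API, which forwards its own length as `len` and the caller's argument as `additional`. A minimal, standalone sketch (ordinary stable `Vec` calls, not code from this PR; `RawVec` itself is internal to liballoc):

```rust
// Illustrative sketch of the `len`/`additional` contract via `Vec`.
fn main() {
    let mut v: Vec<u32> = Vec::with_capacity(4);
    v.extend_from_slice(&[1, 2, 3]);

    let len = v.len();   // elements already initialized
    let additional = 10; // elements the caller is about to write

    // Afterwards the buffer can hold at least `len + additional` elements,
    // and usually more, because growth is amortized.
    v.reserve(additional);
    assert!(v.capacity() >= len + additional);
}
```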

/// Attempts to ensure that the buffer contains at least enough space to hold
/// `used_capacity + needed_extra_capacity` elements. If it doesn't already have
/// enough capacity, will reallocate in place enough space plus comfortable slack
/// space to get amortized `O(1)` behavior. Will limit this behaviour
/// if it would needlessly cause itself to panic.
/// Ensures that the buffer contains at least enough space to hold `len +
/// additional` elements. If it doesn't already, will reallocate the
/// minimum possible amount of memory necessary. Generally this will be
/// exactly the amount of memory necessary, but in principle the allocator
/// is free to give back more than we asked for.
///
/// If `used_capacity` exceeds `self.capacity()`, this may fail to actually allocate
/// the requested space. This is not really unsafe, but the unsafe
/// code *you* write that relies on the behavior of this function may break.
///
/// Returns `true` if the reallocation attempt has succeeded.
///
/// # Panics
///
/// * Panics if the requested capacity exceeds `usize::MAX` bytes.
/// * Panics on 32-bit platforms if the requested capacity exceeds
/// `isize::MAX` bytes.
pub fn reserve_in_place(&mut self, used_capacity: usize, needed_extra_capacity: usize) -> bool {
// This is more readable than putting this in one line:
// `!self.needs_to_grow(...) || self.grow(...).is_ok()`
if self.needs_to_grow(used_capacity, needed_extra_capacity) {
self.grow_amortized(used_capacity, needed_extra_capacity, InPlace).is_ok()
} else {
true
}
}

/// Ensures that the buffer contains at least enough space to hold
/// `used_capacity + needed_extra_capacity` elements. If it doesn't already,
/// will reallocate the minimum possible amount of memory necessary.
/// Generally this will be exactly the amount of memory necessary,
/// but in principle the allocator is free to give back more than what
/// we asked for.
///
/// If `used_capacity` exceeds `self.capacity()`, this may fail to actually allocate
/// the requested space. This is not really unsafe, but the unsafe
/// code *you* write that relies on the behavior of this function may break.
/// If `len` exceeds `self.capacity()`, this may fail to actually allocate
/// the requested space. This is not really unsafe, but the unsafe code
/// *you* write that relies on the behavior of this function may break.
///
/// # Panics
///
@@ -331,8 +299,8 @@ impl<T, A: AllocRef> RawVec<T, A> {
/// # Aborts
///
/// Aborts on OOM.
pub fn reserve_exact(&mut self, used_capacity: usize, needed_extra_capacity: usize) {
match self.try_reserve_exact(used_capacity, needed_extra_capacity) {
pub fn reserve_exact(&mut self, len: usize, additional: usize) {
match self.try_reserve_exact(len, additional) {
Err(CapacityOverflow) => capacity_overflow(),
Err(AllocError { layout, .. }) => handle_alloc_error(layout),
Ok(()) => { /* yay */ }
@@ -342,14 +310,10 @@
/// The same as `reserve_exact`, but returns on errors instead of panicking or aborting.
pub fn try_reserve_exact(
&mut self,
used_capacity: usize,
needed_extra_capacity: usize,
len: usize,
additional: usize,
) -> Result<(), TryReserveError> {
if self.needs_to_grow(used_capacity, needed_extra_capacity) {
self.grow_exact(used_capacity, needed_extra_capacity)
} else {
Ok(())
}
if self.needs_to_grow(len, additional) { self.grow_exact(len, additional) } else { Ok(()) }
}
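
The fallible variants have public counterparts on `Vec` (stabilized on later toolchains than the one this PR targets), which makes the error-returning contract easy to demonstrate. A small, self-contained sketch under that assumption:

```rust
// Illustrative sketch of the fallible reserve contract via `Vec`.
use std::collections::TryReserveError;

fn append_all(dst: &mut Vec<u8>, src: &[u8]) -> Result<(), TryReserveError> {
    // `try_reserve_exact` asks for exactly `dst.len() + src.len()` slots;
    // the allocator is still free to hand back more than that.
    dst.try_reserve_exact(src.len())?;
    dst.extend_from_slice(src);
    Ok(())
}

fn main() {
    let mut buf = Vec::new();
    append_all(&mut buf, b"hello").unwrap();
    assert_eq!(&buf[..], &b"hello"[..]);
}
```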

/// Shrinks the allocation down to the specified amount. If the given amount
@@ -374,8 +338,8 @@ impl<T, A: AllocRef> RawVec<T, A> {
impl<T, A: AllocRef> RawVec<T, A> {
/// Returns if the buffer needs to grow to fulfill the needed extra capacity.
/// Mainly used to make inlining reserve-calls possible without inlining `grow`.
fn needs_to_grow(&self, used_capacity: usize, needed_extra_capacity: usize) -> bool {
needed_extra_capacity > self.capacity().wrapping_sub(used_capacity)
fn needs_to_grow(&self, len: usize, additional: usize) -> bool {
additional > self.capacity().wrapping_sub(len)
}
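
The `wrapping_sub` here is deliberate: when the caller respects the contract (`len <= capacity`) the subtraction is simply the remaining slack, and when it doesn't, the wrapped result is enormous, so the method reports that no growth is needed, which matches the "may fail to actually allocate" caveat in the docs above. A standalone sketch of that behavior:

```rust
// Standalone re-statement of the check (illustrative only).
fn needs_to_grow(capacity: usize, len: usize, additional: usize) -> bool {
    additional > capacity.wrapping_sub(len)
}

fn main() {
    assert!(!needs_to_grow(10, 4, 6)); // 6 slack slots, 6 requested: fits
    assert!(needs_to_grow(10, 4, 7));  // 7 requested: must grow
    // Contract violation (`len` > capacity): the subtraction wraps to a huge
    // value, so no growth is reported and nothing is allocated.
    assert!(!needs_to_grow(10, 20, 1));
}
```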

fn capacity_from_bytes(excess: usize) -> usize {
@@ -395,14 +359,9 @@ impl<T, A: AllocRef> RawVec<T, A> {
// so that all of the code that depends on `T` is within it, while as much
// of the code that doesn't depend on `T` as possible is in functions that
// are non-generic over `T`.
fn grow_amortized(
&mut self,
used_capacity: usize,
needed_extra_capacity: usize,
placement: ReallocPlacement,
) -> Result<(), TryReserveError> {
fn grow_amortized(&mut self, len: usize, additional: usize) -> Result<(), TryReserveError> {
// This is ensured by the calling contexts.
debug_assert!(needed_extra_capacity > 0);
debug_assert!(additional > 0);

if mem::size_of::<T>() == 0 {
// Since we return a capacity of `usize::MAX` when `elem_size` is
@@ -411,8 +370,7 @@ impl<T, A: AllocRef> RawVec<T, A> {
}

// Nothing we can really do about these checks, sadly.
let required_cap =
used_capacity.checked_add(needed_extra_capacity).ok_or(CapacityOverflow)?;
let required_cap = len.checked_add(additional).ok_or(CapacityOverflow)?;

// This guarantees exponential growth. The doubling cannot overflow
// because `cap <= isize::MAX` and the type of `cap` is `usize`.
@@ -437,30 +395,26 @@ impl<T, A: AllocRef> RawVec<T, A> {
let new_layout = Layout::array::<T>(cap);

// `finish_grow` is non-generic over `T`.
let memory = finish_grow(new_layout, placement, self.current_memory(), &mut self.alloc)?;
let memory = finish_grow(new_layout, self.current_memory(), &mut self.alloc)?;
self.set_memory(memory);
Ok(())
}
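
The policy described by the comments above is: compute `required_cap = len + additional` with the overflow checked, then take the larger of that and roughly double the current capacity. A standalone sketch of that shape (the small-size minimums and exact constants in the elided liballoc lines may differ):

```rust
use std::cmp;

// Illustrative amortized growth rule; not the elided liballoc code itself.
fn amortized_cap(current: usize, len: usize, additional: usize) -> Option<usize> {
    // `None` models the CapacityOverflow error path.
    let required = len.checked_add(additional)?;
    // The real code can double without overflow because `current <= isize::MAX`;
    // the sketch saturates so it stays safe for arbitrary inputs.
    let doubled = current.saturating_mul(2);
    Some(cmp::max(required, doubled))
}

fn main() {
    assert_eq!(amortized_cap(4, 4, 1), Some(8));        // doubling dominates
    assert_eq!(amortized_cap(4, 4, 20), Some(24));      // the request dominates
    assert_eq!(amortized_cap(0, usize::MAX, 1), None);  // overflow -> error
}
```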

// The constraints on this method are much the same as those on
// `grow_amortized`, but this method is usually instantiated less often so
// it's less critical.
fn grow_exact(
&mut self,
used_capacity: usize,
needed_extra_capacity: usize,
) -> Result<(), TryReserveError> {
fn grow_exact(&mut self, len: usize, additional: usize) -> Result<(), TryReserveError> {
if mem::size_of::<T>() == 0 {
// Since we return a capacity of `usize::MAX` when the type size is
// 0, getting to here necessarily means the `RawVec` is overfull.
return Err(CapacityOverflow);
}

let cap = used_capacity.checked_add(needed_extra_capacity).ok_or(CapacityOverflow)?;
let cap = len.checked_add(additional).ok_or(CapacityOverflow)?;
let new_layout = Layout::array::<T>(cap);

// `finish_grow` is non-generic over `T`.
let memory = finish_grow(new_layout, MayMove, self.current_memory(), &mut self.alloc)?;
let memory = finish_grow(new_layout, self.current_memory(), &mut self.alloc)?;
self.set_memory(memory);
Ok(())
}
@@ -494,7 +448,6 @@ impl<T, A: AllocRef> RawVec<T, A> {
// much smaller than the number of `T` types.)
fn finish_grow<A>(
new_layout: Result<Layout, LayoutErr>,
placement: ReallocPlacement,
current_memory: Option<(NonNull<u8>, Layout)>,
alloc: &mut A,
) -> Result<MemoryBlock, TryReserveError>
@@ -508,12 +461,9 @@ where

let memory = if let Some((ptr, old_layout)) = current_memory {
debug_assert_eq!(old_layout.align(), new_layout.align());
unsafe { alloc.grow(ptr, old_layout, new_layout.size(), placement, Uninitialized) }
unsafe { alloc.grow(ptr, old_layout, new_layout.size(), MayMove, Uninitialized) }
} else {
match placement {
MayMove => alloc.alloc(new_layout, Uninitialized),
InPlace => Err(AllocErr),
}
alloc.alloc(new_layout, Uninitialized)
}
.map_err(|_| AllocError { layout: new_layout, non_exhaustive: () })?;

8 changes: 4 additions & 4 deletions src/liballoc/vec.rs
@@ -2965,12 +2965,12 @@ impl<T> Drain<'_, T> {
}

/// Makes room for inserting more elements before the tail.
unsafe fn move_tail(&mut self, extra_capacity: usize) {
unsafe fn move_tail(&mut self, additional: usize) {
let vec = self.vec.as_mut();
let used_capacity = self.tail_start + self.tail_len;
vec.buf.reserve(used_capacity, extra_capacity);
let len = self.tail_start + self.tail_len;
vec.buf.reserve(len, additional);

let new_tail_start = self.tail_start + extra_capacity;
let new_tail_start = self.tail_start + additional;
let src = vec.as_ptr().add(self.tail_start);
let dst = vec.as_mut_ptr().add(new_tail_start);
ptr::copy(src, dst, self.tail_len);
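
To make the index arithmetic in `move_tail` concrete: with `tail_start = 3`, `tail_len = 2`, and `additional = 4`, the buffer must already hold `len = 5` elements, and the two tail elements move from indices 3..5 to 7..9, opening a four-element gap for the new items. A safe, illustrative simulation (the real code does this with raw pointers and uninitialized slots):

```rust
// Safe simulation of the gap-opening copy performed by `move_tail`.
fn main() {
    let tail_start = 3;
    let tail_len = 2;
    let additional = 4;

    // Five initialized elements; the drained hole would sit before index 3.
    let mut v = vec![1, 2, 3, 40, 50];
    let len = tail_start + tail_len;
    assert_eq!(v.len(), len);

    // Reserve room, then shift the tail right by `additional`.
    v.reserve(additional);
    v.resize(len + additional, 0); // placeholder slots for the gap
    let new_tail_start = tail_start + additional;
    v.copy_within(tail_start..tail_start + tail_len, new_tail_start);

    assert_eq!(&v[new_tail_start..new_tail_start + tail_len], &[40, 50]);
}
```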
86 changes: 39 additions & 47 deletions src/libarena/lib.rs
@@ -146,18 +146,18 @@ impl<T> TypedArena<T> {
}

#[inline]
fn can_allocate(&self, len: usize) -> bool {
let available_capacity_bytes = self.end.get() as usize - self.ptr.get() as usize;
let at_least_bytes = len.checked_mul(mem::size_of::<T>()).unwrap();
available_capacity_bytes >= at_least_bytes
fn can_allocate(&self, additional: usize) -> bool {
let available_bytes = self.end.get() as usize - self.ptr.get() as usize;
let additional_bytes = additional.checked_mul(mem::size_of::<T>()).unwrap();
available_bytes >= additional_bytes
}

/// Ensures there's enough space in the current chunk to fit `additional` objects.
#[inline]
fn ensure_capacity(&self, len: usize) {
if !self.can_allocate(len) {
self.grow(len);
debug_assert!(self.can_allocate(len));
fn ensure_capacity(&self, additional: usize) {
if !self.can_allocate(additional) {
self.grow(additional);
debug_assert!(self.can_allocate(additional));
}
}
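
The byte math above is straightforward: the free space in the current chunk is `end - ptr`, the request is `additional * size_of::<T>()`, and `checked_mul` turns a multiplication overflow into a panic rather than an undersized allocation. A standalone sketch with plain integers standing in for the chunk pointers:

```rust
use std::mem;

// Illustrative re-statement of `can_allocate`.
fn can_allocate<T>(ptr: usize, end: usize, additional: usize) -> bool {
    let available_bytes = end - ptr;
    let additional_bytes = additional.checked_mul(mem::size_of::<T>()).unwrap();
    available_bytes >= additional_bytes
}

fn main() {
    // 4096-byte chunk, 100 bytes already used: 3996 bytes (999 u32s) remain.
    assert!(can_allocate::<u32>(100, 4096, 999));
    assert!(!can_allocate::<u32>(100, 4096, 1000));
}
```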

@@ -214,36 +214,31 @@ impl<T> TypedArena<T> {
/// Grows the arena.
#[inline(never)]
#[cold]
fn grow(&self, n: usize) {
fn grow(&self, additional: usize) {
unsafe {
// We need the element size in to convert chunk sizes (ranging from
// We need the element size to convert chunk sizes (ranging from
// PAGE to HUGE_PAGE bytes) to element counts.
let elem_size = cmp::max(1, mem::size_of::<T>());
let mut chunks = self.chunks.borrow_mut();
let (chunk, mut new_capacity);
let mut new_cap;
if let Some(last_chunk) = chunks.last_mut() {
let used_bytes = self.ptr.get() as usize - last_chunk.start() as usize;
let currently_used_cap = used_bytes / mem::size_of::<T>();
last_chunk.entries = currently_used_cap;
if last_chunk.storage.reserve_in_place(currently_used_cap, n) {
self.end.set(last_chunk.end());
return;
} else {
// If the previous chunk's capacity is less than HUGE_PAGE
// bytes, then this chunk will be least double the previous
// chunk's size.
new_capacity = last_chunk.storage.capacity();
if new_capacity < HUGE_PAGE / elem_size {
new_capacity = new_capacity.checked_mul(2).unwrap();
}
last_chunk.entries = used_bytes / mem::size_of::<T>();

// If the previous chunk's capacity is less than HUGE_PAGE
// bytes, then this chunk will be at least double the previous
// chunk's size.
new_cap = last_chunk.storage.capacity();
if new_cap < HUGE_PAGE / elem_size {
new_cap = new_cap.checked_mul(2).unwrap();
}
} else {
new_capacity = PAGE / elem_size;
new_cap = PAGE / elem_size;
}
// Also ensure that this chunk can fit `n`.
new_capacity = cmp::max(n, new_capacity);
// Also ensure that this chunk can fit `additional`.
new_cap = cmp::max(additional, new_cap);

chunk = TypedArenaChunk::<T>::new(new_capacity);
let chunk = TypedArenaChunk::<T>::new(new_cap);
self.ptr.set(chunk.start());
self.end.set(chunk.end());
chunks.push(chunk);
@@ -347,31 +342,28 @@ impl DroplessArena {

#[inline(never)]
#[cold]
fn grow(&self, needed_bytes: usize) {
fn grow(&self, additional: usize) {
unsafe {
let mut chunks = self.chunks.borrow_mut();
let (chunk, mut new_capacity);
let mut new_cap;
if let Some(last_chunk) = chunks.last_mut() {
let used_bytes = self.ptr.get() as usize - last_chunk.start() as usize;
if last_chunk.storage.reserve_in_place(used_bytes, needed_bytes) {
self.end.set(last_chunk.end());
return;
} else {
// If the previous chunk's capacity is less than HUGE_PAGE
// bytes, then this chunk will be least double the previous
// chunk's size.
new_capacity = last_chunk.storage.capacity();
if new_capacity < HUGE_PAGE {
new_capacity = new_capacity.checked_mul(2).unwrap();
}
// There is no need to update `last_chunk.entries` because that
// field isn't used by `DroplessArena`.

// If the previous chunk's capacity is less than HUGE_PAGE
// bytes, then this chunk will be at least double the previous
// chunk's size.
new_cap = last_chunk.storage.capacity();
if new_cap < HUGE_PAGE {
new_cap = new_cap.checked_mul(2).unwrap();
}
} else {
new_capacity = PAGE;
new_cap = PAGE;
}
// Also ensure that this chunk can fit `needed_bytes`.
new_capacity = cmp::max(needed_bytes, new_capacity);
// Also ensure that this chunk can fit `additional`.
new_cap = cmp::max(additional, new_cap);

chunk = TypedArenaChunk::<u8>::new(new_capacity);
let chunk = TypedArenaChunk::<u8>::new(new_cap);
self.ptr.set(chunk.start());
self.end.set(chunk.end());
chunks.push(chunk);
@@ -386,7 +378,7 @@ impl DroplessArena {
self.align(align);

let future_end = intrinsics::arith_offset(self.ptr.get(), bytes as isize);
if (future_end as *mut u8) >= self.end.get() {
if (future_end as *mut u8) > self.end.get() {
self.grow(bytes);
}

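
With `reserve_in_place` gone, both arenas above use the same chunk-sizing rule: double the previous chunk until chunks reach `HUGE_PAGE` bytes, start from one `PAGE` for the first chunk, and always make room for the current request. (The separate `>=` to `>` change in `alloc_raw` lets an allocation exactly fill the current chunk without forcing a new one.) A standalone sketch of the sizing rule, with the constants as illustrative stand-ins for the ones defined in `src/libarena/lib.rs`:

```rust
// Illustrative chunk-sizing rule shared by TypedArena and DroplessArena.
const PAGE: usize = 4096;
const HUGE_PAGE: usize = 2 * 1024 * 1024;

fn next_chunk_cap(prev_cap: Option<usize>, elem_size: usize, additional: usize) -> usize {
    let elem_size = elem_size.max(1); // zero-sized types still need a count
    let mut new_cap = match prev_cap {
        // Keep doubling until a chunk would exceed HUGE_PAGE bytes.
        Some(cap) if cap < HUGE_PAGE / elem_size => cap.checked_mul(2).unwrap(),
        Some(cap) => cap,
        // The very first chunk holds one page worth of elements.
        None => PAGE / elem_size,
    };
    // Whatever the doubling says, the new chunk must fit this request.
    new_cap = new_cap.max(additional);
    new_cap
}

fn main() {
    assert_eq!(next_chunk_cap(None, 8, 1), 512);               // 4096 / 8
    assert_eq!(next_chunk_cap(Some(512), 8, 1), 1024);         // doubling
    assert_eq!(next_chunk_cap(Some(512), 8, 10_000), 10_000);  // big request wins
}
```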