RFL import: rust: upgrade to Rust 1.76.0
This is the next upgrade to the Rust toolchain, from 1.75.0 to 1.76.0
(i.e. the latest) [1].

See the upgrade policy [2] and the comments on the first upgrade in
commit 3ed03f4 ("rust: upgrade to Rust 1.68.2").

No unstable features that we use were stabilized in Rust 1.76.0.

The only unstable features allowed to be used outside the `kernel` crate
are still `new_uninit,offset_of`, though other code to be upstreamed
may increase the list.
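
For reference, this is roughly what those two gates provide (a standalone
sketch for a 1.76 nightly-features build, not code from the kernel tree):

    #![feature(new_uninit, offset_of)]

    use core::mem::MaybeUninit;

    #[repr(C)]
    struct Point {
        x: u64,
        y: u64,
    }

    fn main() {
        // `new_uninit`: heap-allocate without initializing, then write once.
        let slot: Box<MaybeUninit<Point>> = Box::new_uninit();
        let point: Box<Point> = Box::write(slot, Point { x: 1, y: 2 });
        assert_eq!(point.y, 2);

        // `offset_of`: compute a field offset as a constant.
        assert_eq!(core::mem::offset_of!(Point, y), 8);
    }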

Please see [3] for details.

`rustc` (and others) now warns when it cannot connect to the GNU Make
jobserver, thus mark those invocations as recursive where needed. Please
see the previous commit for details.
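
For context, the GNU Make mechanism involved is the `+` recipe prefix; a
minimal sketch (illustrative rule, not the kernel's actual one):

    # Prefixing the command with `+` marks it as recursive, so Make passes
    # the jobserver file descriptors through and rustc can coordinate its
    # parallelism with the rest of the build instead of warning.
    # (The recipe line must start with a tab.)
    %.o: %.rs
    	+$(RUSTC) --emit=obj -o $@ $<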

Rust 1.76.0 no longer emits the `.debug_pub{names,types}` sections
for DWARFv4 [4][5]. For instance, in the uncompressed debug info case,
this debug information took:

    samples/rust/rust_minimal.o   ~64 KiB (~18% of total object size)
    rust/kernel.o                 ~92 KiB (~15%)
    rust/core.o                  ~114 KiB ( ~5%)

In the compressed debug info (zlib) case:

    samples/rust/rust_minimal.o   ~11 KiB (~6%)
    rust/kernel.o                 ~17 KiB (~5%)
    rust/core.o                   ~21 KiB (~1.5%)
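
One way to reproduce such measurements (a sketch using binutils; object
paths as in the kernel tree, exact sizes vary by configuration):

    # Section sizes (hex) of the DWARFv4 name indexes in one object:
    readelf -SW rust/kernel.o | grep -E 'debug_pub(names|types)'
    # Total object size, for computing the percentages:
    stat -c %s rust/kernel.o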

In addition, the `rustc_codegen_gcc` backend no longer emits the
`.eh_frame` section when compiling under `-Cpanic=abort` [6], thus
removing the need for the patch in the CI to compile the kernel [7].
Moreover, it now emits the `.comment` section as well [6].
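
A quick check for both sections on an object produced by that backend
(sketch; the file name is illustrative):

    # `.eh_frame` should now be absent and `.comment` present:
    readelf -S demo.o | grep -E '\.(eh_frame|comment)'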

The vast majority of changes are due to our `alloc` fork being upgraded
at once.

There are two kinds of changes to be aware of: the ones coming from
upstream, which we should follow as closely as possible, and the updates
needed in our added fallible APIs to keep them matching the newer
infallible APIs coming from upstream.

Instead of taking a look at the diff of this patch, an alternative
approach is reviewing a diff of the changes between upstream `alloc` and
the kernel's. This makes it easy to inspect the kernel additions only,
especially to check whether the fallible methods we already have still
match the infallible ones in the new version coming from upstream.

Another approach is reviewing the changes to the kernel fork's additions
between the two versions. This is useful to spot potentially unintended
changes to our additions.

To apply these approaches, one may follow steps similar to the following
to generate a pair of patches that show the differences between upstream
Rust and the kernel (for the subset of `alloc` we use) before and after
applying this patch:

    # Get the difference with respect to the old version.
    git -C rust checkout $(linux/scripts/min-tool-version.sh rustc)
    git -C linux ls-tree -r --name-only HEAD -- rust/alloc |
        cut -d/ -f3- |
        grep -Fv README.md |
        xargs -IPATH cp rust/library/alloc/src/PATH linux/rust/alloc/PATH
    git -C linux diff --patch-with-stat --summary -R > old.patch
    git -C linux restore rust/alloc

    # Apply this patch.
    git -C linux am rust-upgrade.patch

    # Get the difference with respect to the new version.
    git -C rust checkout $(linux/scripts/min-tool-version.sh rustc)
    git -C linux ls-tree -r --name-only HEAD -- rust/alloc |
        cut -d/ -f3- |
        grep -Fv README.md |
        xargs -IPATH cp rust/library/alloc/src/PATH linux/rust/alloc/PATH
    git -C linux diff --patch-with-stat --summary -R > new.patch
    git -C linux restore rust/alloc

Now one may check the `new.patch` to take a look at the additions (first
approach) or at the difference between those two patches (second
approach). For the latter, a side-by-side tool is recommended.
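
For example, assuming plain `diff` (any side-by-side viewer works):

    diff -y --suppress-common-lines old.patch new.patch | less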

Link: https://github.com/rust-lang/rust/blob/stable/RELEASES.md#version-1760-2024-02-08 [1]
Link: https://rust-for-linux.com/rust-version-policy [2]
Link: https://github.com/Rust-for-Linux/linux/issues/2 [3]
Link: https://github.com/rust-lang/compiler-team/issues/688 [4]
Link: https://github.com/rust-lang/rust/pull/117962 [5]
Link: https://github.com/rust-lang/rust/pull/118068 [6]
Link: https://github.com/Rust-for-Linux/ci-rustc_codegen_gcc [7]
Tested-by: Boqun Feng <boqun.feng@gmail.com>
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Link: https://lore.kernel.org/r/20240217002638.57373-2-ojeda@kernel.org
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
ojeda authored and jannau committed May 22, 2024
1 parent f1e6a71 commit 6092708
Showing 9 changed files with 125 additions and 44 deletions.
2 changes: 1 addition & 1 deletion Documentation/process/changes.rst
@@ -31,7 +31,7 @@ you probably needn't concern yourself with pcmciautils.
 ====================== ===============  ========================================
 GNU C                  5.1              gcc --version
 Clang/LLVM (optional)  11.0.0           clang --version
-Rust (optional)        1.75.0           rustc --version
+Rust (optional)        1.76.0           rustc --version
 bindgen (optional)     0.65.1           bindgen --version
 GNU make               3.82             make --version
 bash                   4.2              bash --version
3 changes: 3 additions & 0 deletions rust/alloc/alloc.rs
@@ -426,12 +426,14 @@ pub mod __alloc_error_handler {
     }
 }
 
+#[cfg(not(no_global_oom_handling))]
 /// Specialize clones into pre-allocated, uninitialized memory.
 /// Used by `Box::clone` and `Rc`/`Arc::make_mut`.
 pub(crate) trait WriteCloneIntoRaw: Sized {
     unsafe fn write_clone_into_raw(&self, target: *mut Self);
 }
 
+#[cfg(not(no_global_oom_handling))]
 impl<T: Clone> WriteCloneIntoRaw for T {
     #[inline]
     default unsafe fn write_clone_into_raw(&self, target: *mut Self) {
@@ -441,6 +443,7 @@ impl<T: Clone> WriteCloneIntoRaw for T {
     }
 }
 
+#[cfg(not(no_global_oom_handling))]
 impl<T: Copy> WriteCloneIntoRaw for T {
     #[inline]
     unsafe fn write_clone_into_raw(&self, target: *mut Self) {
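
Conceptually, `WriteCloneIntoRaw` clones a value directly into
pre-allocated, uninitialized memory; a stable-Rust sketch of the `Clone`
case (without the `default` specialization machinery above):

    /// Clone `src` directly into the uninitialized slot `target`,
    /// avoiding a separate clone-then-move.
    ///
    /// SAFETY: the caller must ensure `target` is valid for writes
    /// and properly aligned.
    unsafe fn write_clone_into_raw<T: Clone>(src: &T, target: *mut T) {
        unsafe { target.write(src.clone()) };
    }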
14 changes: 11 additions & 3 deletions rust/alloc/boxed.rs
@@ -1042,10 +1042,18 @@ impl<T: ?Sized, A: Allocator> Box<T, A> {
     /// use std::ptr;
     ///
     /// let x = Box::new(String::from("Hello"));
-    /// let p = Box::into_raw(x);
+    /// let ptr = Box::into_raw(x);
     /// unsafe {
+    ///     ptr::drop_in_place(ptr);
+    ///     dealloc(ptr as *mut u8, Layout::new::<String>());
+    /// }
+    /// ```
+    /// Note: This is equivalent to the following:
+    /// ```
+    /// let x = Box::new(String::from("Hello"));
+    /// let ptr = Box::into_raw(x);
+    /// unsafe {
-    ///     ptr::drop_in_place(p);
-    ///     dealloc(p as *mut u8, Layout::new::<String>());
+    ///     drop(Box::from_raw(ptr));
     /// }
     /// ```
     ///
1 change: 1 addition & 0 deletions rust/alloc/collections/mod.rs
@@ -150,6 +150,7 @@ impl Display for TryReserveError {
 
 /// An intermediate trait for specialization of `Extend`.
 #[doc(hidden)]
+#[cfg(not(no_global_oom_handling))]
 trait SpecExtend<I: IntoIterator> {
     /// Extends `self` with the contents of the given iterator.
     fn spec_extend(&mut self, iter: I);
8 changes: 4 additions & 4 deletions rust/alloc/lib.rs
@@ -81,8 +81,8 @@
     not(no_sync),
     target_has_atomic = "ptr"
 ))]
-#![cfg_attr(not(bootstrap), doc(rust_logo))]
-#![cfg_attr(not(bootstrap), feature(rustdoc_internals))]
+#![doc(rust_logo)]
+#![feature(rustdoc_internals)]
 #![no_std]
 #![needs_allocator]
 // Lints:
@@ -144,7 +144,6 @@
 #![feature(maybe_uninit_uninit_array)]
 #![feature(maybe_uninit_uninit_array_transpose)]
 #![feature(pattern)]
-#![feature(ptr_addr_eq)]
 #![feature(ptr_internals)]
 #![feature(ptr_metadata)]
 #![feature(ptr_sub_ptr)]
@@ -159,6 +158,7 @@
 #![feature(std_internals)]
 #![feature(str_internals)]
 #![feature(strict_provenance)]
+#![feature(trusted_fused)]
 #![feature(trusted_len)]
 #![feature(trusted_random_access)]
 #![feature(try_trait_v2)]
@@ -278,7 +278,7 @@ pub(crate) mod test_helpers {
     /// seed not being the same for every RNG invocation too.
     pub(crate) fn test_rng() -> rand_xorshift::XorShiftRng {
         use std::hash::{BuildHasher, Hash, Hasher};
-        let mut hasher = std::collections::hash_map::RandomState::new().build_hasher();
+        let mut hasher = std::hash::RandomState::new().build_hasher();
         std::panic::Location::caller().hash(&mut hasher);
         let hc64 = hasher.finish();
         let seed_vec =
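
The `test_rng` hunk works because Rust 1.76 stabilized the
`std::hash::{DefaultHasher, RandomState}` re-exports, which previously
were only reachable through `std::collections::hash_map`; a sketch of
the new path:

    use std::hash::{BuildHasher, RandomState};

    fn main() {
        // Same hasher as before, shorter path as of 1.76.
        let hasher = RandomState::new().build_hasher();
        let _ = hasher;
    }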
58 changes: 42 additions & 16 deletions rust/alloc/raw_vec.rs
@@ -27,6 +27,16 @@ enum AllocInit {
     Zeroed,
 }
 
+#[repr(transparent)]
+#[cfg_attr(target_pointer_width = "16", rustc_layout_scalar_valid_range_end(0x7fff))]
+#[cfg_attr(target_pointer_width = "32", rustc_layout_scalar_valid_range_end(0x7fff_ffff))]
+#[cfg_attr(target_pointer_width = "64", rustc_layout_scalar_valid_range_end(0x7fff_ffff_ffff_ffff))]
+struct Cap(usize);
+
+impl Cap {
+    const ZERO: Cap = unsafe { Cap(0) };
+}
+
 /// A low-level utility for more ergonomically allocating, reallocating, and deallocating
 /// a buffer of memory on the heap without having to worry about all the corner cases
 /// involved. This type is excellent for building your own data structures like Vec and VecDeque.
@@ -52,7 +62,12 @@ enum AllocInit {
 #[allow(missing_debug_implementations)]
 pub(crate) struct RawVec<T, A: Allocator = Global> {
     ptr: Unique<T>,
-    cap: usize,
+    /// Never used for ZSTs; it's `capacity()`'s responsibility to return usize::MAX in that case.
+    ///
+    /// # Safety
+    ///
+    /// `cap` must be in the `0..=isize::MAX` range.
+    cap: Cap,
     alloc: A,
 }
 
@@ -121,7 +136,7 @@ impl<T, A: Allocator> RawVec<T, A> {
     /// the returned `RawVec`.
     pub const fn new_in(alloc: A) -> Self {
         // `cap: 0` means "unallocated". zero-sized types are ignored.
-        Self { ptr: Unique::dangling(), cap: 0, alloc }
+        Self { ptr: Unique::dangling(), cap: Cap::ZERO, alloc }
     }
 
     /// Like `with_capacity`, but parameterized over the choice of
@@ -203,7 +218,7 @@ impl<T, A: Allocator> RawVec<T, A> {
         // here should change to `ptr.len() / mem::size_of::<T>()`.
         Self {
             ptr: unsafe { Unique::new_unchecked(ptr.cast().as_ptr()) },
-            cap: capacity,
+            cap: unsafe { Cap(capacity) },
            alloc,
         }
     }
@@ -228,7 +243,7 @@ impl<T, A: Allocator> RawVec<T, A> {
         // here should change to `ptr.len() / mem::size_of::<T>()`.
         Ok(Self {
             ptr: unsafe { Unique::new_unchecked(ptr.cast().as_ptr()) },
-            cap: capacity,
+            cap: unsafe { Cap(capacity) },
             alloc,
         })
     }
@@ -240,12 +255,13 @@ impl<T, A: Allocator> RawVec<T, A> {
     /// The `ptr` must be allocated (via the given allocator `alloc`), and with the given
     /// `capacity`.
     /// The `capacity` cannot exceed `isize::MAX` for sized types. (only a concern on 32-bit
-    /// systems). ZST vectors may have a capacity up to `usize::MAX`.
+    /// systems). For ZSTs capacity is ignored.
     /// If the `ptr` and `capacity` come from a `RawVec` created via `alloc`, then this is
     /// guaranteed.
     #[inline]
     pub unsafe fn from_raw_parts_in(ptr: *mut T, capacity: usize, alloc: A) -> Self {
-        Self { ptr: unsafe { Unique::new_unchecked(ptr) }, cap: capacity, alloc }
+        let cap = if T::IS_ZST { Cap::ZERO } else { unsafe { Cap(capacity) } };
+        Self { ptr: unsafe { Unique::new_unchecked(ptr) }, cap, alloc }
     }
 
     /// Gets a raw pointer to the start of the allocation. Note that this is
@@ -261,7 +277,7 @@ impl<T, A: Allocator> RawVec<T, A> {
     /// This will always be `usize::MAX` if `T` is zero-sized.
     #[inline(always)]
     pub fn capacity(&self) -> usize {
-        if T::IS_ZST { usize::MAX } else { self.cap }
+        if T::IS_ZST { usize::MAX } else { self.cap.0 }
     }
 
     /// Returns a shared reference to the allocator backing this `RawVec`.
@@ -270,7 +286,7 @@ impl<T, A: Allocator> RawVec<T, A> {
     }
 
     fn current_memory(&self) -> Option<(NonNull<u8>, Layout)> {
-        if T::IS_ZST || self.cap == 0 {
+        if T::IS_ZST || self.cap.0 == 0 {
             None
         } else {
             // We could use Layout::array here which ensures the absence of isize and usize overflows
@@ -280,7 +296,7 @@ impl<T, A: Allocator> RawVec<T, A> {
             let _: () = const { assert!(mem::size_of::<T>() % mem::align_of::<T>() == 0) };
             unsafe {
                 let align = mem::align_of::<T>();
-                let size = mem::size_of::<T>().unchecked_mul(self.cap);
+                let size = mem::size_of::<T>().unchecked_mul(self.cap.0);
                 let layout = Layout::from_size_align_unchecked(size, align);
                 Some((self.ptr.cast().into(), layout))
             }
@@ -414,12 +430,15 @@ impl<T, A: Allocator> RawVec<T, A> {
         additional > self.capacity().wrapping_sub(len)
     }
 
-    fn set_ptr_and_cap(&mut self, ptr: NonNull<[u8]>, cap: usize) {
+    /// # Safety:
+    ///
+    /// `cap` must not exceed `isize::MAX`.
+    unsafe fn set_ptr_and_cap(&mut self, ptr: NonNull<[u8]>, cap: usize) {
         // Allocators currently return a `NonNull<[u8]>` whose length matches
         // the size requested. If that ever changes, the capacity here should
         // change to `ptr.len() / mem::size_of::<T>()`.
         self.ptr = unsafe { Unique::new_unchecked(ptr.cast().as_ptr()) };
-        self.cap = cap;
+        self.cap = unsafe { Cap(cap) };
     }
 
     // This method is usually instantiated many times. So we want it to be as
@@ -444,14 +463,15 @@ impl<T, A: Allocator> RawVec<T, A> {
 
         // This guarantees exponential growth. The doubling cannot overflow
         // because `cap <= isize::MAX` and the type of `cap` is `usize`.
-        let cap = cmp::max(self.cap * 2, required_cap);
+        let cap = cmp::max(self.cap.0 * 2, required_cap);
         let cap = cmp::max(Self::MIN_NON_ZERO_CAP, cap);
 
         let new_layout = Layout::array::<T>(cap);
 
         // `finish_grow` is non-generic over `T`.
         let ptr = finish_grow(new_layout, self.current_memory(), &mut self.alloc)?;
-        self.set_ptr_and_cap(ptr, cap);
+        // SAFETY: finish_grow would have resulted in a capacity overflow if we tried to allocate more than isize::MAX items
+        unsafe { self.set_ptr_and_cap(ptr, cap) };
        Ok(())
     }
 
@@ -470,7 +490,10 @@ impl<T, A: Allocator> RawVec<T, A> {
 
         // `finish_grow` is non-generic over `T`.
         let ptr = finish_grow(new_layout, self.current_memory(), &mut self.alloc)?;
-        self.set_ptr_and_cap(ptr, cap);
+        // SAFETY: finish_grow would have resulted in a capacity overflow if we tried to allocate more than isize::MAX items
+        unsafe {
+            self.set_ptr_and_cap(ptr, cap);
+        }
         Ok(())
     }
 
@@ -488,7 +511,7 @@ impl<T, A: Allocator> RawVec<T, A> {
         if cap == 0 {
             unsafe { self.alloc.deallocate(ptr, layout) };
             self.ptr = Unique::dangling();
-            self.cap = 0;
+            self.cap = Cap::ZERO;
         } else {
             let ptr = unsafe {
                 // `Layout::array` cannot overflow here because it would have
@@ -499,7 +522,10 @@ impl<T, A: Allocator> RawVec<T, A> {
                     .shrink(ptr, layout, new_layout)
                     .map_err(|_| AllocError { layout: new_layout, non_exhaustive: () })?
             };
-            self.set_ptr_and_cap(ptr, cap);
+            // SAFETY: if the allocation is valid, then the capacity is too
+            unsafe {
+                self.set_ptr_and_cap(ptr, cap);
+            }
         }
         Ok(())
     }
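
The new `Cap` newtype relies on the compiler-internal
`rustc_layout_scalar_valid_range_end` attribute to bake the
`0..=isize::MAX` invariant into the type's layout. Outside the standard
library, the same invariant can only be enforced dynamically; a
stable-Rust sketch:

    #[derive(Clone, Copy)]
    struct Cap(usize);

    impl Cap {
        const ZERO: Cap = Cap(0);

        // Enforce `cap <= isize::MAX` at construction time instead of
        // through the unstable layout attribute used by `alloc`.
        fn new(cap: usize) -> Option<Cap> {
            (cap <= isize::MAX as usize).then_some(Cap(cap))
        }
    }

    fn main() {
        assert!(Cap::new(isize::MAX as usize).is_some());
        assert!(Cap::new(usize::MAX).is_none());
        assert_eq!(Cap::ZERO.0, 0);
    }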
16 changes: 11 additions & 5 deletions rust/alloc/vec/into_iter.rs
@@ -9,7 +9,8 @@ use crate::raw_vec::RawVec;
 use core::array;
 use core::fmt;
 use core::iter::{
-    FusedIterator, InPlaceIterable, SourceIter, TrustedLen, TrustedRandomAccessNoCoerce,
+    FusedIterator, InPlaceIterable, SourceIter, TrustedFused, TrustedLen,
+    TrustedRandomAccessNoCoerce,
 };
 use core::marker::PhantomData;
 use core::mem::{self, ManuallyDrop, MaybeUninit, SizedTypeProperties};
@@ -287,9 +288,7 @@ impl<T, A: Allocator> Iterator for IntoIter<T, A> {
         // Also note the implementation of `Self: TrustedRandomAccess` requires
         // that `T: Copy` so reading elements from the buffer doesn't invalidate
         // them for `Drop`.
-        unsafe {
-            if T::IS_ZST { mem::zeroed() } else { ptr::read(self.ptr.add(i)) }
-        }
+        unsafe { if T::IS_ZST { mem::zeroed() } else { ptr::read(self.ptr.add(i)) } }
     }
 }
 
@@ -341,6 +340,10 @@ impl<T, A: Allocator> ExactSizeIterator for IntoIter<T, A> {
 #[stable(feature = "fused", since = "1.26.0")]
 impl<T, A: Allocator> FusedIterator for IntoIter<T, A> {}
 
+#[doc(hidden)]
+#[unstable(issue = "none", feature = "trusted_fused")]
+unsafe impl<T, A: Allocator> TrustedFused for IntoIter<T, A> {}
+
 #[unstable(feature = "trusted_len", issue = "37572")]
 unsafe impl<T, A: Allocator> TrustedLen for IntoIter<T, A> {}
 
@@ -425,7 +428,10 @@ unsafe impl<#[may_dangle] T, A: Allocator> Drop for IntoIter<T, A> {
 // also refer to the vec::in_place_collect module documentation to get an overview
 #[unstable(issue = "none", feature = "inplace_iteration")]
 #[doc(hidden)]
-unsafe impl<T, A: Allocator> InPlaceIterable for IntoIter<T, A> {}
+unsafe impl<T, A: Allocator> InPlaceIterable for IntoIter<T, A> {
+    const EXPAND_BY: Option<NonZeroUsize> = NonZeroUsize::new(1);
+    const MERGE_BY: Option<NonZeroUsize> = NonZeroUsize::new(1);
+}
 
 #[unstable(issue = "none", feature = "inplace_iteration")]
 #[doc(hidden)]
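
`TrustedFused` is a perma-unstable marker; the observable guarantee it
builds on is the `FusedIterator` contract that `IntoIter` already had,
namely that `next()` keeps returning `None` once exhausted:

    fn main() {
        let mut it = vec![1, 2].into_iter();
        assert_eq!(it.next(), Some(1));
        assert_eq!(it.next(), Some(2));
        assert_eq!(it.next(), None);
        // Fused: the iterator stays exhausted.
        assert_eq!(it.next(), None);
    }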
(Diffs for the remaining two changed files are not shown.)