
Commit

contracts for unsafe helpers and intrinsics
carolynzech committed Sep 19, 2024
1 parent 809ba76 commit 3de338c
Showing 1 changed file with 56 additions and 47 deletions.
103 changes: 56 additions & 47 deletions doc/src/challenges/0007-atomic-types.md
@@ -1,8 +1,8 @@
# Challenge 7: Safety of Methods for Atomic Types and `ReentrantLock`
# Challenge 7: Safety of Methods for Atomic Types & Atomic Intrinsics

- **Status:** Open
- **Tracking Issue:** [#83](https://github.com/model-checking/verify-rust-std/issues/83)
- **Start date:** *2024-09-09*
- **Start date:** *2024-09-19*
- **End date:** *2024-12-10*

-------------------
@@ -14,15 +14,14 @@
`core::sync::atomic` provides methods that operate on atomic types.
For example, `atomic_store(dst: *mut T, val: T, order: Ordering)` stores `val` at the memory location pointed to by `dst` according to the specified [atomic memory ordering](https://doc.rust-lang.org/std/sync/atomic/enum.Ordering.html).
Rust developers can use these methods to ensure that their concurrent code is thread-safe.
They can also leverage these types to implement other, more complex atomic data structures.
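As a brief illustration of the safe public surface these operations back (ordinary standard-library usage, not part of the challenge text itself):

```rust
use std::sync::atomic::{AtomicBool, AtomicI8, Ordering};

fn main() {
    // The safe, public methods guarantee data-race-free access to the value.
    let flag = AtomicBool::new(false);
    flag.store(true, Ordering::Release);
    assert!(flag.load(Ordering::Acquire));

    // Read-modify-write operations return the previous value and wrap on overflow.
    let counter = AtomicI8::new(0);
    let prev = counter.fetch_add(1, Ordering::SeqCst);
    assert_eq!(prev, 0);
    assert_eq!(counter.load(Ordering::SeqCst), 1);
}
```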

The goal of this challenge is first to verify that the methods for atomic types are safe, then to verify that the implementation of `ReentrantLock` (which uses atomic types) is safe.
The goal of this challenge is to verify that these methods (and the intrinsics they invoke) are safe.

### Success Criteria

#### Part 1: Atomic Types
#### Part 1: `from_ptr` Method for Atomic Types

First, verify that methods on atomic types (in `core::sync::atomic`) are safe.
First, verify that the unsafe `from_ptr` methods are safe.

Write safety contracts for each of the `from_ptr` methods:

@@ -39,8 +38,8 @@ Write safety contracts for each of the `from_ptr` methods:
- `AtomicI128::from_ptr`
- `AtomicU128::from_ptr`

Specifically, encode the conditions marked `#Safety` in the comments above the methods as preconditions.
Then, verify that the methods are safe for all possible values for the type that `ptr` points to.
Specifically, encode the conditions about `ptr`'s alignment and validity (marked `#Safety` in the comments above the methods) as preconditions.
Then, verify that the methods are safe for all possible values for the type that `ptr` points to, given that `ptr` satisfies those preconditions.

For example, `AtomicI8::from_ptr` is defined as:
```rust
pub const unsafe fn from_ptr<'a>(ptr: *mut i8) -> &'a AtomicI8
```

@@ -61,47 +60,58 @@ pub const unsafe fn from_ptr<'a>(ptr: *mut *mut T) -> &'a AtomicPtr<T>

As another example, `AtomicPtr::from_ptr` is defined as:
```rust
pub const unsafe fn from_ptr<'a>(ptr: *mut *mut T) -> &'a AtomicPtr<T>
```

The type pointed to is a `*mut T`.
Verify that for any arbitrary value for this inner pointer (i.e., any arbitrary memory address), the method is safe.
Verify that for any arbitrary value for this `*mut T` (i.e., any arbitrary memory address), the method is safe.
However, you need not verify the method for all possible instantiations of `T`.
It suffices to verify this method for `T` of byte sizes 0, 1, 2, 4, and at least one non-power of two.
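One possible shape for this part of the work is sketched below; the `kani::proof` and `kani::any` harness APIs are assumptions about the verification tooling, and the precondition quoted in the comment is one plausible encoding of the `# Safety` conditions rather than a prescribed one.

```rust
// Hypothetical contract on `from_ptr` (attribute and helper names assumed):
// #[kani::requires(!ptr.is_null() && ptr.is_aligned() && kani::mem::can_dereference(ptr))]

#[cfg(kani)]
#[kani::proof]
fn check_atomic_i8_from_ptr() {
    use core::sync::atomic::{AtomicI8, Ordering};

    // An arbitrary value of the pointed-to type (here, any `i8`).
    let expected: i8 = kani::any();
    let mut slot = expected;
    let ptr: *mut i8 = &mut slot;

    // `ptr` is non-null, properly aligned, and valid for reads and writes,
    // so the method's safety preconditions hold at this call site.
    let atomic = unsafe { AtomicI8::from_ptr(ptr) };
    assert_eq!(atomic.load(Ordering::Relaxed), expected);
}
```

An analogous harness for `AtomicPtr::from_ptr` would draw the inner `*mut T` from an arbitrary address and repeat the check for the byte sizes of `T` listed above.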

Then, write and verify safety contracts for the remaining methods that use `unsafe`:

| Method | Types |
| :--- | :--- |
| `from_mut` | `AtomicBool`, `AtomicPtr`, `AtomicI8`, `AtomicU8`, `AtomicI16`, `AtomicU16`, `AtomicI32`, `AtomicU32`, `AtomicI64`, `AtomicU64`, `AtomicI128`, `AtomicU128` |
| `get_mut_slice` | `AtomicBool`, `AtomicPtr`, `AtomicI8`, `AtomicU8`, `AtomicI16`, `AtomicU16`, `AtomicI32`, `AtomicU32`, `AtomicI64`, `AtomicU64`, `AtomicI128`, `AtomicU128` |
| `from_mut_slice` | `AtomicBool`, `AtomicPtr`, `AtomicI8`, `AtomicU8`, `AtomicI16`, `AtomicU16`, `AtomicI32`, `AtomicU32`, `AtomicI64`, `AtomicU64`, `AtomicI128`, `AtomicU128` |
| `load` | `AtomicBool`, `AtomicPtr`, `AtomicI8`, `AtomicU8`, `AtomicI16`, `AtomicU16`, `AtomicI32`, `AtomicU32`, `AtomicI64`, `AtomicU64`, `AtomicI128`, `AtomicU128` |
| `store` | `AtomicBool`, `AtomicPtr`, `AtomicI8`, `AtomicU8`, `AtomicI16`, `AtomicU16`, `AtomicI32`, `AtomicU32`, `AtomicI64`, `AtomicU64`, `AtomicI128`, `AtomicU128` |
| `swap` | `AtomicBool`, `AtomicPtr`, `AtomicI8`, `AtomicU8`, `AtomicI16`, `AtomicU16`, `AtomicI32`, `AtomicU32`, `AtomicI64`, `AtomicU64`, `AtomicI128`, `AtomicU128` |
| `compare_exchange` | `AtomicBool`, `AtomicPtr`, `AtomicI8`, `AtomicU8`, `AtomicI16`, `AtomicU16`, `AtomicI32`, `AtomicU32`, `AtomicI64`, `AtomicU64`, `AtomicI128`, `AtomicU128` |
| `compare_exchange_weak` | `AtomicBool`, `AtomicPtr`, `AtomicI8`, `AtomicU8`, `AtomicI16`, `AtomicU16`, `AtomicI32`, `AtomicU32`, `AtomicI64`, `AtomicU64`, `AtomicI128`, `AtomicU128` |
| `fetch_update` | `AtomicBool`, `AtomicPtr`, `AtomicI8`, `AtomicU8`, `AtomicI16`, `AtomicU16`, `AtomicI32`, `AtomicU32`, `AtomicI64`, `AtomicU64`, `AtomicI128`, `AtomicU128` |
| `fetch_and` | `AtomicBool`, `AtomicPtr`, `AtomicI8`, `AtomicU8`, `AtomicI16`, `AtomicU16`, `AtomicI32`, `AtomicU32`, `AtomicI64`, `AtomicU64`, `AtomicI128`, `AtomicU128` |
| `fetch_xor` | `AtomicBool`, `AtomicPtr`, `AtomicI8`, `AtomicU8`, `AtomicI16`, `AtomicU16`, `AtomicI32`, `AtomicU32`, `AtomicI64`, `AtomicU64`, `AtomicI128`, `AtomicU128` |
| `fetch_nand` | `AtomicBool`, `AtomicI8`, `AtomicU8`, `AtomicI16`, `AtomicU16`, `AtomicI32`, `AtomicU32`, `AtomicI64`, `AtomicU64`, `AtomicI128`, `AtomicU128` |
| `fetch_or` | `AtomicBool`, `AtomicI8`, `AtomicU8`, `AtomicI16`, `AtomicU16`, `AtomicI32`, `AtomicU32`, `AtomicI64`, `AtomicU64`, `AtomicI128`, `AtomicU128` |
| `fetch_add` | `AtomicI8`, `AtomicU8`, `AtomicI16`, `AtomicU16`, `AtomicI32`, `AtomicU32`, `AtomicI64`, `AtomicU64`, `AtomicI128`, `AtomicU128` |
| `fetch_sub` | `AtomicI8`, `AtomicU8`, `AtomicI16`, `AtomicU16`, `AtomicI32`, `AtomicU32`, `AtomicI64`, `AtomicU64`, `AtomicI128`, `AtomicU128` |
| `fetch_max` | `AtomicI8`, `AtomicU8`, `AtomicI16`, `AtomicU16`, `AtomicI32`, `AtomicU32`, `AtomicI64`, `AtomicU64`, `AtomicI128`, `AtomicU128` |
| `fetch_min` | `AtomicI8`, `AtomicU8`, `AtomicI16`, `AtomicU16`, `AtomicI32`, `AtomicU32`, `AtomicI64`, `AtomicU64`, `AtomicI128`, `AtomicU128` |
| `get_mut` | `AtomicBool` |
| `fetch_not` | `AtomicBool` |
| `fetch_ptr_add` | `AtomicPtr` |
| `fetch_ptr_sub` | `AtomicPtr` |
| `fetch_byte_add` | `AtomicPtr` |
| `fetch_byte_sub` | `AtomicPtr` |
| `fetch_or` | `AtomicPtr` |

You are not required to write correctness contracts for these functions (e.g., that `AtomicI8::fetch_sub` correctly subtracts the operands and stores the result), but it would be great to do so!
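As a sketch of what that optional correctness direction could look like for `AtomicI8::fetch_sub` (again assuming Kani-style harness APIs; a postcondition attribute on the method itself would be the more contract-like encoding):

```rust
#[cfg(kani)]
#[kani::proof]
fn check_atomic_i8_fetch_sub() {
    use core::sync::atomic::{AtomicI8, Ordering};

    let init: i8 = kani::any();
    let operand: i8 = kani::any();

    let a = AtomicI8::new(init);
    let prev = a.fetch_sub(operand, Ordering::SeqCst);

    // `fetch_sub` returns the previous value and wraps around on overflow.
    assert_eq!(prev, init);
    assert_eq!(a.load(Ordering::SeqCst), init.wrapping_sub(operand));
}
```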

#### Part 2: Reentrant Lock

Verify that the `ReentrantLock` implementation in `std::sync::reentrant_lock` is safe. In particular, verify:

* That all uses of the methods on atomic types uphold the safety contracts you wrote in Part 1, and
* That no other unsafe code in `ReentrantLock` causes undefined behavior or arithmetic overflow.
#### Part 2: Unsafe Atomic Functions

The atomic types expose safe methods for atomic operations.
These safe methods call unsafe private helper functions, which in turn invoke atomic intrinsics.
For example, `AtomicBool::store` is the (public) safe method that atomically updates the boolean's value.
This method invokes the unsafe function `atomic_store`, which in turn calls `intrinsics::atomic_store_relaxed`, `intrinsics::atomic_store_release`, or `intrinsics::atomic_store_seqcst`, depending on the provided ordering.
In this part, you will write safety contracts for each of the unsafe helper functions.
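Paraphrased, that chain looks roughly like the sketch below (a simplified rendering for illustration; the real helper lives inside `core`, is reachable from the outside only on nightly via `#![feature(core_intrinsics)]`, and differs in detail):

```rust
use core::intrinsics;
use core::sync::atomic::Ordering::{self, Relaxed, Release, SeqCst};

// Simplified stand-in for the private helper: dispatch on `order` to the
// per-ordering store intrinsic.
unsafe fn atomic_store<T: Copy>(dst: *mut T, val: T, order: Ordering) {
    // SAFETY: the caller must guarantee that `dst` is valid for writes,
    // properly aligned, and accessed only atomically by other threads.
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_store_relaxed(dst, val),
            Release => intrinsics::atomic_store_release(dst, val),
            SeqCst => intrinsics::atomic_store_seqcst(dst, val),
            // `Acquire` and `AcqRel` are rejected: there is no such thing as
            // an acquire (or acquire-release) store, so the real helper panics.
            _ => panic!("invalid memory ordering for a store"),
        }
    }
}
```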

Write and verify safety contracts for the unsafe functions:

- `atomic_store`
- `atomic_load`
- `atomic_swap`
- `atomic_add`
- `atomic_sub`
- `atomic_compare_exchange`
- `atomic_compare_exchange_weak`
- `atomic_and`
- `atomic_nand`
- `atomic_or`
- `atomic_xor`
- `atomic_max`
- `atomic_min`
- `atomic_umax`
- `atomic_umin`

Also write contracts enforcing that the functions are not invoked with an `order` that would cause them to panic.
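A hedged sketch of such a contract on `atomic_store` follows; the `kani::requires` attribute and `kani::mem::can_dereference` helper are assumptions about the tooling, and the exact predicate is illustrative rather than prescribed.

```rust
use core::sync::atomic::Ordering;

#[cfg_attr(kani, kani::requires(
    !dst.is_null()
    && dst.is_aligned()
    && kani::mem::can_dereference(dst)
    // Exclude the two orderings for which `atomic_store` panics.
    && matches!(order, Ordering::Relaxed | Ordering::Release | Ordering::SeqCst)
))]
unsafe fn atomic_store<T: Copy>(dst: *mut T, val: T, order: Ordering) {
    // The real body dispatches on `order` to the per-ordering store
    // intrinsics (see the earlier sketch); a plain write stands in here so
    // the example stays self-contained on stable Rust.
    let _ = order;
    unsafe { core::ptr::write(dst, val) }
}
```

A contract-checking harness (for example, one marked `#[kani::proof_for_contract(atomic_store)]`) could then exercise the function for the representative types and orderings.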

#### Part 3: Atomic Intrinsics

Write and verify safety contracts for the intrinsics invoked by the unsafe functions from Part 2 (in `core::intrinsics`); a sketch of one possible harness appears after the table:

| Operations | Functions |
|-----------------------|-------------|
| Store | `atomic_store_relaxed`, `atomic_store_release`, `atomic_store_seqcst` |
| Load | `atomic_load_relaxed`, `atomic_load_acquire`, `atomic_load_seqcst` |
| Swap | `atomic_xchg_relaxed`, `atomic_xchg_acquire`, `atomic_xchg_release`, `atomic_xchg_acqrel`, `atomic_xchg_seqcst` |
| Add | `atomic_xadd_relaxed`, `atomic_xadd_acquire`, `atomic_xadd_release`, `atomic_xadd_acqrel`, `atomic_xadd_seqcst` |
| Sub | `atomic_xsub_relaxed`, `atomic_xsub_acquire`, `atomic_xsub_release`, `atomic_xsub_acqrel`, `atomic_xsub_seqcst` |
| Compare Exchange | `atomic_cxchg_relaxed_relaxed`, `atomic_cxchg_relaxed_acquire`, `atomic_cxchg_relaxed_seqcst`, `atomic_cxchg_acquire_relaxed`, `atomic_cxchg_acquire_acquire`, `atomic_cxchg_acquire_seqcst`, `atomic_cxchg_release_relaxed`, `atomic_cxchg_release_acquire`, `atomic_cxchg_release_seqcst`, `atomic_cxchg_acqrel_relaxed`, `atomic_cxchg_acqrel_acquire`, `atomic_cxchg_acqrel_seqcst`, `atomic_cxchg_seqcst_relaxed`, `atomic_cxchg_seqcst_acquire`, `atomic_cxchg_seqcst_seqcst` |
| Compare Exchange Weak | `atomic_cxchgweak_relaxed_relaxed`, `atomic_cxchgweak_relaxed_acquire`, `atomic_cxchgweak_relaxed_seqcst`, `atomic_cxchgweak_acquire_relaxed`, `atomic_cxchgweak_acquire_acquire`, `atomic_cxchgweak_acquire_seqcst`, `atomic_cxchgweak_release_relaxed`, `atomic_cxchgweak_release_acquire`, `atomic_cxchgweak_release_seqcst`, `atomic_cxchgweak_acqrel_relaxed`, `atomic_cxchgweak_acqrel_acquire`, `atomic_cxchgweak_acqrel_seqcst`, `atomic_cxchgweak_seqcst_relaxed`, `atomic_cxchgweak_seqcst_acquire`, `atomic_cxchgweak_seqcst_seqcst` |
| And | `atomic_and_relaxed`, `atomic_and_acquire`, `atomic_and_release`, `atomic_and_acqrel`, `atomic_and_seqcst` |
| Nand | `atomic_nand_relaxed`, `atomic_nand_acquire`, `atomic_nand_release`, `atomic_nand_acqrel`, `atomic_nand_seqcst` |
| Or | `atomic_or_seqcst`, `atomic_or_acquire`, `atomic_or_release`, `atomic_or_acqrel`, `atomic_or_relaxed` |
| Xor | `atomic_xor_seqcst`, `atomic_xor_acquire`, `atomic_xor_release`, `atomic_xor_acqrel`, `atomic_xor_relaxed` |
| Max | `atomic_max_relaxed`, `atomic_max_acquire`, `atomic_max_release`, `atomic_max_acqrel`, `atomic_max_seqcst` |
| Min | `atomic_min_relaxed`, `atomic_min_acquire`, `atomic_min_release`, `atomic_min_acqrel`, `atomic_min_seqcst` |
| Umax | `atomic_umax_relaxed`, `atomic_umax_acquire`, `atomic_umax_release`, `atomic_umax_acqrel`, `atomic_umax_seqcst` |
| Umin | `atomic_umin_relaxed`, `atomic_umin_acquire`, `atomic_umin_release`, `atomic_umin_acqrel`, `atomic_umin_seqcst` |
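As in Part 2, one plausible starting point is a harness that exercises a single intrinsic under the conditions its safety contract would encode. The sketch below assumes a nightly toolchain with `#![feature(core_intrinsics)]`, the `atomic_store_relaxed(dst, val)` signature, and the Kani harness APIs shown.

```rust
#[cfg(kani)]
#[kani::proof]
fn check_atomic_store_relaxed_intrinsic() {
    let mut slot: u16 = kani::any();
    let val: u16 = kani::any();
    let dst: *mut u16 = &mut slot;

    // `dst` is non-null, aligned, and valid for writes -- exactly the
    // conditions a contract on the intrinsic would require.
    unsafe { core::intrinsics::atomic_store_relaxed(dst, val) };
    assert_eq!(unsafe { dst.read() }, val);
}
```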

## List of UBs

@@ -112,6 +122,5 @@ In addition to any properties called out as SAFETY comments in the source code,
* Reading from uninitialized memory.
* Invoking undefined behavior via compiler intrinsics.
* Producing an invalid value.
* Breaking the aliasing or conversion rules of `UnsafeCell` (defined [here](https://doc.rust-lang.org/std/cell/struct.UnsafeCell.html)).

Note: All solutions to verification challenges need to satisfy the criteria established in the [challenge book](../general-rules.md) in addition to the ones listed above.
