From 8b9600c29957234bf62782dec091c82583ac40f4 Mon Sep 17 00:00:00 2001
From: Nadeem Afana
Date: Sun, 12 Nov 2023 12:30:09 -0800
Subject: [PATCH] Adds Upend ad

---
 _posts/archived/2015-07-10-memory-barriers-in-dot-net.aspx.md | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/_posts/archived/2015-07-10-memory-barriers-in-dot-net.aspx.md b/_posts/archived/2015-07-10-memory-barriers-in-dot-net.aspx.md
index 914942b3c23..6316249b0f7 100644
--- a/_posts/archived/2015-07-10-memory-barriers-in-dot-net.aspx.md
+++ b/_posts/archived/2015-07-10-memory-barriers-in-dot-net.aspx.md
@@ -8,6 +8,10 @@ tags: [memory barrier, fence, intel, intel64, amd64, amd, reordering, processor]
 
 Nonblocking programming can provide performance benefits over locking, but getting it right is significantly harder and requires careful testing. When it comes to memory barriers, there are all sorts of confusion and misleading information. Because memory barriers can be counterintuitive, using them wrongly is easy, and applying them correctly requires a lot of effort and can cause headaches. The benefits memory barriers provide may not be worth it in most high-level applications. However, memory barriers are handy in performance-critical software.
 
+
+
+
+
 In the past, a system with a single processor executed concurrent threads using a trick called "timeslicing", where one thread runs while the others sleep. This means that all memory accesses made by one thread appear in an exact order to all the other running threads. This is called the sequential consistency model. Nowadays, multiprocessor systems are very common and concurrent threads truly run at the same time, so sequential consistency is not guaranteed because:
 
 1. The compiler or JIT might re-order the memory instructions for optimization.
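
To make the reordering concern above concrete, here is a minimal C# sketch (an illustrative example, not taken from the patched post; the class and member names are hypothetical). It shows the classic publish/consume pattern where, without barriers, the flag write or read could be reordered relative to the data access, so a reader might see `_ready == true` yet read a stale `_value`. Full fences via `Thread.MemoryBarrier()` are one way to constrain that ordering.

```csharp
using System.Threading;

// Minimal sketch (not from the original post): data published by one thread,
// consumed by another, ordered with full memory barriers.
class ReorderingExample
{
    private int _value;
    private bool _ready;

    // Writer thread: publish the data, then signal readiness.
    public void Publish()
    {
        _value = 42;
        // Prevents the write to _ready from becoming visible
        // before the write to _value.
        Thread.MemoryBarrier();
        _ready = true;
    }

    // Reader thread: only reads _value after observing the flag.
    public int? TryConsume()
    {
        if (_ready)
        {
            // Prevents the read of _value from being hoisted
            // above the read of _ready.
            Thread.MemoryBarrier();
            return _value; // guaranteed to observe 42 once _ready is seen as true
        }
        return null;
    }
}
```

The same publication pattern could also be expressed with `Volatile.Write`/`Volatile.Read` on the flag; the full fences here are simply the most explicit way to illustrate the ordering constraint the list is describing.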