While I’m familiar with concurrent programming concepts such as mutexes and semaphores, I have never understood how they are implemented at the assembly language level.
I imagine there being a set of memory “flags” saying:
- lock A is held by thread 1
- lock B is held by thread 3
- lock C is not held by any thread
- etc.
But how is access to these flags synchronized between threads? Something like this naive example would only create a race condition:
mov edx, [myThreadId]
wait:
cmp [lock], 0
jne wait
mov [lock], edx
; I wanted an exclusive lock but the above
; three instructions are not an atomic operation :(
(…and some spinning before giving up the thread's time slice – usually by calling into a kernel function that switches context.)
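The spin-then-yield pattern can be sketched in C with C11 atomics and POSIX sched_yield. All names here are mine, and the spin count of 1000 is an arbitrary illustration, not a tuned value:

```c
#include <stdatomic.h>
#include <sched.h>   /* sched_yield(), POSIX */

static atomic_int lock_word = 0;   /* 0 = free, 1 = held */

static void lock_acquire(void)
{
    int spins = 0;
    /* atomic_exchange stores 1 and returns the previous value,
     * so a return of 0 means the lock was free and is now ours. */
    while (atomic_exchange(&lock_word, 1) != 0) {
        if (++spins == 1000) {
            sched_yield();   /* give up the time slice; the kernel
                                switches to another runnable thread */
            spins = 0;
        }
    }
}

static void lock_release(void)
{
    atomic_store(&lock_word, 0);
}
```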
The atomic exchange instruction is xchg on x86/x64. So in a strict sense, a CAS is not needed for crafting a spinlock – some kind of atomicity is still required, though. In this case, it makes use of an atomic operation that can write a register to memory and return the previous contents of that memory slot in a single step.

(To clarify a bit more: the lock prefix asserts the #LOCK signal, which ensures that the current CPU has exclusive access to the memory. On today's CPUs it is not necessarily carried out this way, but the effect is the same. By using xchg we make sure that we will not get preempted somewhere between reading and writing, since instructions will not be interrupted half-way. So if we had an imaginary lock mov reg0, mem / lock mov mem, reg1 pair (which we don't), that would not quite be the same – it could be preempted just between the two movs.)

Spin loops also commonly issue pause instructions, which serve as hints that you're spinning – so that the core you are running on can do something useful during this wait.
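In C11 terms, this exchange-based acquire can be sketched as follows; try_lock and unlock are names I made up for the sketch:

```c
#include <stdatomic.h>

/* The C11 analogue of "xchg [lock], edx" with edx = 1:
 * store 1 into the lock word and get back whatever was there
 * before, as one indivisible step. On x86, xchg with a memory
 * operand is implicitly locked, so a compiler can emit it
 * directly. Returns nonzero if we acquired the lock
 * (i.e. the previous value was 0). */
static int try_lock(atomic_int *lock_word)
{
    return atomic_exchange(lock_word, 1) == 0;
}

static void unlock(atomic_int *lock_word)
{
    /* An atomic store of 0 releases the lock. */
    atomic_store(lock_word, 0);
}
```

Because the read of the old value and the write of the new one happen in one indivisible step, the check-then-set window from the question's example disappears.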