I have created a C++ class for a thread-safe integer which simply stores an integer privately and has public get and set functions which use a boost::mutex to ensure that only one change at a time can be applied to the integer.
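Roughly, the class looks like this (a reconstruction, not the exact code; std::mutex is shown so the sketch compiles standalone, where my real class uses boost::mutex):

```cpp
#include <mutex>

// Sketch of the class described above: every access takes the mutex.
// (The actual class uses boost::mutex; std::mutex is a stand-in here.)
class ThreadSafeInt
{
public:
    ThreadSafeInt() : value_(0) {}

    int get()
    {
        std::lock_guard<std::mutex> lock(mutex_);
        return value_;
    }

    void set(int v)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        value_ = v;
    }

private:
    std::mutex mutex_;
    int        value_;
};
```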
Is this the most efficient way to do it? I have been informed that mutexes are quite resource-intensive. The class is used a lot, very rapidly, so it could well be a bottleneck…
Googling "C++ thread-safe integer" returns unclear views and opinions on the thread safety of integer operations on different architectures.
Some say that a 32-bit int on a 32-bit arch is safe, but a 64-bit int on a 32-bit arch isn't, due to 'alignment'. Others say it is compiler/OS specific (which I don't doubt).
I am using Ubuntu 9.10 on 32-bit machines; some have dual cores, so threads may be executed simultaneously on different cores in some cases, and I am using GCC 4.4's g++ compiler.
Thanks in advance…
Please Note: The answer I have marked as ‘correct’ was most suitable for my problem – however there are some excellent points made in the other answers and they are all worth reading!
It’s not compiler and OS specific, it’s architecture specific. The compiler and OS come into it because they’re the tools you work through, but they’re not the ones setting the real rules. This is why the C++ standard won’t touch the issue.
I have never in my life heard of a 64-bit integer write, one that gets split into two 32-bit writes, being interrupted halfway through. (Yes, that's an invitation to others to post counterexamples.) Specifically, I have never heard of a CPU's load/store unit allowing a misaligned write to be interrupted; an interrupting source has to wait for the whole misaligned access to complete.
To have an interruptible load/store unit, its state would have to be saved to the stack… and the load/store unit is what saves the rest of the CPU's state to the stack. It would be hugely complicated, and bug-prone, to make the load/store unit interruptible… and all you would gain is one cycle less latency in responding to interrupts, which, at best, is measured in tens of cycles. Totally not worth it.
Back in 1997, a coworker and I wrote a C++ Queue template which was used in a multiprocessing system. (Each processor had its own OS running, and its own local memory, so these queues were only needed for memory shared between processors.) We worked out a way to make the queue change state with a single integer write, and treated this write as an atomic operation. Also, we required that each end of the queue (i.e. the read or write index) be owned by one and only one processor. Thirteen years later, the code is still running fine, and we even have a version that handles multiple readers.
Still, if you want to treat a 64-bit integer write as atomic, align the field on a 64-bit boundary. Why worry?
EDIT: For the case you mention in your comment, I’d need more information to be sure, so let me give an example of something that could be implemented without specialized synchronization code.
Suppose you have N writers and one reader. You want the writers to be able to signal events to the reader. The events themselves have no data; you just want an event count, really.
Declare a structure for the shared memory, shared between all writers and the reader:
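The original code isn't reproduced here; a minimal sketch of such a structure might look like this (EventFlagTable and MAX_WRITERS are names of my own invention):

```cpp
#include <stdint.h>

// One 32-bit event counter per writer. Each slot is naturally aligned,
// so a single write to it is atomic on the 32-bit targets discussed here.
enum { MAX_WRITERS = 8 };   // assumed fixed number of writers

struct EventFlagTable
{
    volatile uint32_t flag[MAX_WRITERS];
};
```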
(Make this a class or template or whatever as you see fit.)
Each writer needs to be told its index and given a pointer to this table:
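Something like this, say (again my own names, sketching the idea rather than the original code):

```cpp
#include <stdint.h>

enum { MAX_WRITERS = 8 };
struct EventFlagTable { volatile uint32_t flag[MAX_WRITERS]; };

// Each writer just remembers which slot is its own and where the table is.
struct Writer
{
    EventFlagTable* table;
    int             index;
};

void WriterInit(Writer* w, EventFlagTable* t, int index)
{
    w->table = t;
    w->index = index;
}
```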
When the writer wants to signal an event (or several), it updates its flag:
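For instance (a sketch under the same assumed layout):

```cpp
#include <stdint.h>

enum { MAX_WRITERS = 8 };
struct EventFlagTable { volatile uint32_t flag[MAX_WRITERS]; };
struct Writer { EventFlagTable* table; int index; };

// Signaling one or more events is a read-modify-write on the writer's
// own slot. Since no other writer touches this slot, the only thing that
// must be atomic is the final aligned 32-bit store that the reader sees.
void SignalEvents(Writer* w, uint32_t count)
{
    w->table->flag[w->index] += count;
}
```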
The reader keeps a local copy of all the flag values it has seen:
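A sketch of the reader side (hypothetical names, same assumed table):

```cpp
#include <stdint.h>

enum { MAX_WRITERS = 8 };
struct EventFlagTable { volatile uint32_t flag[MAX_WRITERS]; };

// The reader's private snapshot of the last flag value seen per writer.
struct Reader
{
    EventFlagTable* table;
    uint32_t        lastSeen[MAX_WRITERS];
};

void ReaderInit(Reader* r, EventFlagTable* t)
{
    r->table = t;
    for (int i = 0; i < MAX_WRITERS; ++i)
        r->lastSeen[i] = t->flag[i];    // start with no pending events
}
```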
To find out if any events have happened, it just looks for changed values:
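Roughly (each comparison is a single aligned 32-bit read of the shared slot):

```cpp
#include <stdint.h>

enum { MAX_WRITERS = 8 };
struct EventFlagTable { volatile uint32_t flag[MAX_WRITERS]; };
struct Reader { EventFlagTable* table; uint32_t lastSeen[MAX_WRITERS]; };

// True if any writer's flag differs from what the reader last saw.
bool AnyEvents(const Reader* r)
{
    for (int i = 0; i < MAX_WRITERS; ++i)
        if (r->table->flag[i] != r->lastSeen[i])
            return true;
    return false;
}
```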
If something happened, we can check each source and get the event count:
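For example (my naming; note that unsigned subtraction keeps the count correct even if the 32-bit counter wraps):

```cpp
#include <stdint.h>

enum { MAX_WRITERS = 8 };
struct EventFlagTable { volatile uint32_t flag[MAX_WRITERS]; };
struct Reader { EventFlagTable* table; uint32_t lastSeen[MAX_WRITERS]; };

// Returns how many new events writer i has signaled since last checked,
// and catches the reader's local copy up to the shared value.
uint32_t CountEvents(Reader* r, int i)
{
    uint32_t current   = r->table->flag[i];        // one atomic aligned read
    uint32_t newEvents = current - r->lastSeen[i]; // wraps correctly on overflow
    r->lastSeen[i] = current;
    return newEvents;
}
```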
Now the big gotcha in all this? It's nonblocking, which is to say that you can't make the Reader sleep until a Writer writes something. The Reader has to choose between sitting in a spin-loop waiting for AnyEvents() to return true, which minimizes latency, or sleeping a bit each time through, which saves CPU but could let a lot of events build up. So it's better than nothing, but it's not the solution to everything.
Using actual synchronization primitives, one would only need to wrap this code with a mutex and condition variable to make it properly blocking: the Reader would sleep until there was something to do. Since you used atomic operations with the flags, you could keep the amount of time the mutex is locked to a minimum: the Writer would only need to lock the mutex long enough to signal the condition, not to set the flag, and the Reader only needs to wait for the condition before calling AnyEvents() (basically, it's like the sleep-loop case above, but with a wait-for-condition instead of a sleep call).
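A sketch of that blocking variant, with C++11 std::mutex and std::condition_variable standing in for the Boost equivalents of the time (names and layout are my own, following the same assumed flag table):

```cpp
#include <stdint.h>
#include <mutex>
#include <condition_variable>

enum { MAX_WRITERS = 8 };

// Hypothetical blocking wrapper: the flags are still set with plain
// aligned stores; the mutex is held only to signal/await the condition.
struct BlockingEventTable
{
    volatile uint32_t       flag[MAX_WRITERS];
    uint32_t                lastSeen[MAX_WRITERS];
    std::mutex              m;
    std::condition_variable cv;

    BlockingEventTable()
    {
        for (int i = 0; i < MAX_WRITERS; ++i) {
            flag[i] = 0;
            lastSeen[i] = 0;
        }
    }

    bool AnyEvents() const
    {
        for (int i = 0; i < MAX_WRITERS; ++i)
            if (flag[i] != lastSeen[i])
                return true;
        return false;
    }

    // Writer: set the flag outside the lock; lock only to notify.
    void Signal(int writer, uint32_t count)
    {
        flag[writer] += count;              // atomic aligned store, as before
        std::lock_guard<std::mutex> lock(m);
        cv.notify_one();
    }

    // Reader: sleep until some writer has signaled.
    void WaitForEvents()
    {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [this] { return AnyEvents(); });
    }
};
```

If an event is already pending, the predicate form of wait() returns immediately, so the Reader never sleeps through a signal it hasn't consumed yet.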