If you make a game architecture that splits up system components (i.e. rendering, physics, logic, input, scripting, etc.) into different threads, how do you handle cases where real-time communication is necessary?
For example, if a script wants to draw a box on the screen, it would ideally do this when the rendering component issues a “FrameDrawn” event, so that the box is drawn on top of everything else at the end of each frame. How is this possible if the scripting component and the rendering component are on different threads from each other?
What most games will have is a massive pile of data and several threads referring to some subset of that data as needed. Communication between threads is rarely explicit; more commonly it is implicit, done through a change in the data made by one thread and noticed later by a second thread. The changes are protected by mutexes, semaphores, or other low-level synchronisation primitives.
For your drawing example, the scripting thread would change some data in the GUI component and the rendering thread, next time it renders the GUI, would see that new data and draw the box accordingly.
Bear in mind, however, that most game developers don’t thread things quite as much as in your example, partly because it’s difficult to do that effectively with a model that shares so much data and relies on low-level locking for correctness. Ideally more game developers would move towards a model that shares less data, but that is difficult given the soft-real-time presentation and response requirements.