It has been some time since I followed Go development, so I might be out of sync.
Go has the excellent adage to share memory by communicating instead of communicating by sharing memory. That is, a channel can be used as the mechanism to implement critical sections. For the case of a single writer and multiple readers, though, this is less efficient. The sync package provides the classical solution for that (sync.RWMutex).
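For illustration, here is a minimal sketch of that classical solution with sync.RWMutex; the World type and its fields are hypothetical, just to show the single-writer/many-readers pattern:

package main

import (
	"fmt"
	"sync"
)

// World holds shared state written by one goroutine and read by many.
type World struct {
	mu      sync.RWMutex
	players map[string]int
}

// Update is called only by the single writer goroutine.
func (w *World) Update(name string, hp int) {
	w.mu.Lock() // exclusive write lock
	defer w.mu.Unlock()
	w.players[name] = hp
}

// Lookup can be called concurrently by many reader goroutines,
// since RLock admits any number of simultaneous readers.
func (w *World) Lookup(name string) (int, bool) {
	w.mu.RLock()
	defer w.mu.RUnlock()
	hp, ok := w.players[name]
	return hp, ok
}

func main() {
	w := &World{players: make(map[string]int)}
	w.Update("alice", 100)
	hp, _ := w.Lookup("alice")
	fmt.Println(hp)
}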
For a considerable time, I have been playing around with a game server for an MMORPG, with the intent to support 10,000+ simultaneous players on the same server. I found Go and goroutines to be a very good tool for this, and had a thoroughly enjoyable experience. Even though GC is problematic for hard real time, tests indicate that the timing is good enough for my game. This is a typical example of one writer (a goroutine for every player) and many readers (goroutines for other players and processes), as well as several critical data sections.
As I had several critical data sections, I ran into some trouble with locks. If I wasn't careful, I could get a deadlock. One solution is to never hold more than one lock at a time, but that wasn't possible in my case. My solution was to always acquire locks in a fixed order, which prevents deadlocks. However, the game server was big and complex, and somewhat lacking in overall design (stemming from the classical growing pains). To cope with that, I used a naming convention on all functions, based on what locks they might acquire. An example is ProcSpawnMonsters_WLwWLuWLqWLmBlWLc, a function that can acquire six different locks in the worst case, using the suffixes "WLx" and "RLx" to indicate a write lock or read lock on critical section 'x'.
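To make the ordering discipline concrete, here is a minimal sketch; the two locks and the function are hypothetical, following the naming convention above (write locks on sections 'u' and 'm', always taken in that order):

package main

import "sync"

// Hypothetical locks for two critical sections, 'u' and 'm'.
var lockU, lockM sync.RWMutex

// spawnMonsters_WLuWLm takes write locks on 'u' and 'm'.
// Every function that needs both locks acquires them in the same
// global order (u before m), so two goroutines can never hold them
// in opposite orders and deadlock on each other.
func spawnMonsters_WLuWLm() {
	lockU.Lock()
	defer lockU.Unlock()
	lockM.Lock()
	defer lockM.Unlock()
	// ... mutate both critical sections ...
}

func main() {
	spawnMonsters_WLuWLm()
}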
So I ended up with a manual control system to prevent deadlocks. I suppose it would also be possible to add some kind of run-time check. However, I started to think that all of this could be supported by the language itself: the compiler would generate an error if a situation could arise that leads to a deadlock. I think the transition to massively parallel algorithms is accelerating, so such support would be nice.
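As a rough idea of what such a run-time check could look like (everything here is a hypothetical sketch, not an existing library): give every lock a rank, and panic if a goroutine tries to acquire a lock whose rank is not higher than the highest rank it already holds. Go has no goroutine-local storage, so the sketch passes the held rank explicitly; a real implementation would track it per goroutine.

package main

import "sync"

// OrderedMutex is a hypothetical mutex with a fixed rank. Locks must
// be acquired in strictly increasing rank order; violations panic at
// run time instead of deadlocking silently.
type OrderedMutex struct {
	mu   sync.Mutex
	rank int
}

// Lock takes the highest rank the calling goroutine currently holds.
func (m *OrderedMutex) Lock(highestHeld int) {
	if m.rank <= highestHeld {
		panic("lock order violation: possible deadlock")
	}
	m.mu.Lock()
}

func (m *OrderedMutex) Unlock() { m.mu.Unlock() }

func main() {
	users := &OrderedMutex{rank: 1}
	monsters := &OrderedMutex{rank: 2}

	users.Lock(0)    // nothing held yet
	monsters.Lock(1) // ok: rank 2 > 1
	monsters.Unlock()
	users.Unlock()

	monsters.Lock(0)
	// users.Lock(2) would panic here: rank 1 <= 2
	monsters.Unlock()
}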
Is this a feasible idea?
One trigger for the idea is how "const" works in C++. If you call a const member function, it can only call other const member functions, so you get compiler support that forces you to stay in line.
The language could then support
- assigning priorities to critical sections
- automatically acquiring and releasing locks
- denying access to data that is not properly locked
- using atomic operations instead of mutexes automatically, where possible (see the sketch below)
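As an illustration of that last point, here is what such a transformation could emit, written by hand today: a shared counter updated through sync/atomic instead of a mutex (the counter and the goroutine count are made up for the example):

package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

func main() {
	var online int64 // shared counter; no mutex needed

	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// Each add is indivisible, so concurrent increments
			// cannot be lost the way they could with plain online++.
			atomic.AddInt64(&online, 1)
		}()
	}
	wg.Wait()
	fmt.Println(atomic.LoadInt64(&online)) // prints 100
}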