That’s true. I should have mentioned that there are many test cases and several handlers involved in the real code. I tried to boil it down to a simple snippet reproducing the issue (https://sscce.org/), and some context got lost along the way. Let me explain a little more:
I am testing a binary (as part of integration tests) that accesses several endpoints on the (OIDC) server. The test cases define valid answers as well as invalid ones to check how the binary handles them. Usually an unexpected answer makes the binary exit, so not all handlers are hit in every test case. Also, the server must stay the same throughout all test cases, as they build on each other, so I can’t create a new test server per test case.
I thought about using a channel for each handler, but that would not have been reliable: since not every test case hits every handler, only the involved handlers would read from their channels, leaving the other channels un-drained.
The solution with sync.Map handles this nicely. The key identifies the handler, while the value is the response for the current test case. For each test case I write the responses to the map, but only the handlers that are actually called read from it and use the value as their response. Unread values are not a problem here, unlike un-drained channels.
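To make this concrete, here is a minimal sketch of the pattern, assuming Go’s net/http/httptest; the names (responses, handler, the /token and /userinfo endpoints, runBinaryAgainst) and the test cases are invented for illustration and are not from the real suite:

package oidctest

import (
	"io"
	"net/http"
	"net/http/httptest"
	"sync"
	"testing"
)

// responses maps a handler key to the body that handler should return
// for the current test case. Handlers that are never hit simply leave
// their entry unread; nothing blocks or leaks.
var responses sync.Map

// handler builds an http.HandlerFunc that looks up its canned response
// by key on every request.
func handler(key string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		v, ok := responses.Load(key)
		if !ok {
			http.Error(w, "no response configured for "+key, http.StatusInternalServerError)
			return
		}
		io.WriteString(w, v.(string))
	}
}

func TestBinary(t *testing.T) {
	mux := http.NewServeMux()
	mux.Handle("/token", handler("token"))
	mux.Handle("/userinfo", handler("userinfo"))

	// One server shared by all test cases, since they build on each other.
	srv := httptest.NewServer(mux)
	defer srv.Close()

	cases := []struct {
		name, token, userinfo string
	}{
		{"valid token", `{"access_token":"abc"}`, `{"sub":"alice"}`},
		{"broken token", `not json`, `{"sub":"alice"}`}, // binary exits before /userinfo
	}

	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			// Overwrite the canned responses; only the handlers that are
			// actually hit will read them, so stale entries are harmless.
			responses.Store("token", tc.token)
			responses.Store("userinfo", tc.userinfo)

			runBinaryAgainst(t, srv.URL)
		})
	}
}

// runBinaryAgainst stands in for executing the binary under test
// against the given base URL (exec.Command etc. in the real suite).
func runBinaryAgainst(t *testing.T, baseURL string) {
	t.Helper()
}

Stale entries from an earlier case are simply overwritten by Store on the next case, which is exactly why the unread values don’t matter.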
I guess sync.Map falls under the “Protect the racy data with a mutex or use atomics” category from your list.
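For comparison, the same idea expressed with a plain map behind a sync.Mutex would be a drop-in replacement for the sync.Map in the sketch above (again with hypothetical names):

var (
	mu     sync.Mutex
	canned = map[string]string{}
)

// setResponse stores the canned body for a handler key.
func setResponse(key, body string) {
	mu.Lock()
	defer mu.Unlock()
	canned[key] = body
}

// getResponse fetches the canned body for a handler key.
func getResponse(key string) (string, bool) {
	mu.Lock()
	defer mu.Unlock()
	body, ok := canned[key]
	return body, ok
}

sync.Map just saves that boilerplate; for a small, fixed set of keys written once per test case, either works.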
Thanks Jason