--
You received this message because you are subscribed to the Google Groups "golang-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to golang-dev+...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/golang-dev/SG2PR06MB4868E572C5DE1CBD89726C689F4D2%40SG2PR06MB4868.apcprd06.prod.outlook.com.
I'm fairly certain it is valid to have multiple readers and writers on a single channel.
Thanks everyone for the responses so far.
@Ian, the program implicit_sync.go is just a code snippet aimed at illustrating how the memory model can reject a valid-looking program. It is not clear to us what constitutes bad practice in it, or how one could argue that this communication pattern should never arise in any context.
Of course, as @Dan points out, this program has a “logical race” in the sense that there is a race condition in it (that is, the order of execution affects global program behaviour), but race conditions are natural and abundant in concurrent programming.
@Alan, by parallel composition we mean that the two programs execute on the same resources (channel/shared memory). One would still expect certain compositionality properties to hold in this setting under specific conditions. To phrase it differently (see the new version of the pdf document attached):
Fig. 2(a) shows an execution \sigma_1 that is race-free according to the memory model. We then modify \sigma_1 to obtain the execution \sigma_2 in Fig. 2(b) by adding a new thread (t3) that executes a channel-send event. We would expect this modification (the composition of two executions, the initial one and the one from t3) to preserve race-freedom, yet \sigma_2 is racy according to the memory model. Again, we do not expect all compositions to preserve race-freedom, but compositions of this particular shape should.
@Jorropo, we tried to communicate this message in https://github.com/golang/go/issues/69821, but the issue was closed. To us, these issues are important enough to merit a revision of the memory model, but of course, we do not make the call. If you agree, kindly open the issue, and we will follow the discussion.
From the java.util.concurrent documentation: actions in a thread prior to placing an object into a BlockingQueue happen-before actions subsequent to the access or removal of that element from the BlockingQueue in another thread.
The problem is not how to implement this. The problem is that the programs posted are race-free even though the Go memory model says they are racy. What I am suggesting is a change to the Go memory model, not a change to the implementation. The memory model is more restrictive than the implementation: it defines a condition as racy even though it is not.
Yes, but changing it to what you suggest:
>>> "actions that move an object to the
>>> front of a blocking queue happen before the removal of that element
>>> from the blocking queue". The java definition also misses the events
>>> that happen after the object is put onto the queue but before it is
>>> removed.
The queue/channel memory barrier occurs on the initial put and the get. It can't possibly include any writes that occur after the initial put into the queue, because "any events that happen after the object is put" are asynchronous with respect to the eventual read. For example:
1. T1 puts A in Q
2. T1 changes X to 100
3. T2 reads A from Q
4. T2 reads X
At step 3 there is no guarantee that step 2 has occurred, which means X could contain any value. Steps 2 and 4 are independent events at this point; there is no fence/synchronization point between them.
So even if the memory model were changed, you still couldn't reason about the value of X at step 4, which means this would be inherently racy anyway.
So changing the memory model in this way doesn't solve anything. Even if the read in step 3 performed some sort of global synchronization with a full memory flush across all cores, you still couldn't reason about the value of X, and the performance would be horrendous.
On Mon, Oct 28, 2024 at 5:34 PM robert engels <ren...@ix.netcom.com> wrote:
> At step 3 there is no guarantee that step 2 has occurred - which would mean that X could contain any value - steps 2 and 4 are independent events at this point - there is no fence/synchronization point.
The fix I am talking about would be more like:
1. T1 puts A in Q
2. T1 puts B in Q
3. T1 changes X to 100
4. T2 reads A from Q
5. T2 reads B from Q
At step 5, it is guaranteed that step 3 has occurred. But this is a race according to the current memory model, because the write event for step 5 is the send at step 2. The implementation, however, is such that the write event for step 5 is the read event at step 4.
On Oct 29, 2024, at 1:04 AM, Keith Randall <keith....@gmail.com> wrote:
Thank you everyone for sharing your insights. In our view, issues like the above are important enough to merit a revision of the memory model.