We’re using a typical producer-consumer pattern: goroutines send messages to a channel, and a worker processes them. A colleague asked me why we even bother with a buffered channel (say, size 1000) if we’re waiting for the result anyway.
I tried to explain it like this: there are two kinds of waiting.
“Bad” waiting – when a goroutine is blocked trying to send to a full channel:
requestChan <- req // goroutine just hangs here, blocking the system
“Good” waiting – when the send succeeds quickly, and you wait for the result afterwards:
requestChan <- req // quickly enqueued
result := <-resultChan // wait for result without holding up others
The point: a big buffer lets goroutines hand off tasks fast and free themselves for new work. Under burst load, this is crucial — it lets the system absorb spikes without slowing everything down.
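Here is a minimal, runnable sketch of what I mean (simplified for illustration: each request carries its own result channel, so the resultChan above becomes per-request):

package main

import "fmt"

type request struct {
	payload int
	result  chan int // the worker sends this request's answer here
}

func main() {
	requestChan := make(chan request, 1000) // big buffer: enqueueing stays fast under bursts

	// The single worker drains the queue.
	go func() {
		for req := range requestChan {
			req.result <- req.payload * 2 // do the work, send the result
		}
	}()

	done := make(chan struct{})
	for i := 0; i < 5; i++ {
		go func(n int) {
			req := request{payload: n, result: make(chan int, 1)}
			requestChan <- req        // "good" waiting: returns quickly while the buffer has room
			fmt.Println(<-req.result) // then wait for our own result
			done <- struct{}{}
		}(i)
	}
	for i := 0; i < 5; i++ {
		<-done
	}
}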
But here’s the twist: my colleague tested it with 2000 goroutines and got roughly the same processing time. His argument: “waiting to enqueue or dequeue seems to perform the same no matter how many goroutines are waiting.”
So my question is: does Go have any official docs that describe this idea? Effective Go shows semaphores, but it doesn’t really spell out this difference in blocking types.
Am I misunderstanding something, or is this just one of those “implicit Go concurrency truths” that everyone sort of knows but isn’t officially documented?
Hi Robert, Jason,
Thank you both for your detailed and thoughtful responses — they helped me see the problem more clearly. Let me share some more details about our specific case:
We have exactly one consumer (worker), and we can’t add more because the underlying resource can only be accessed by one process at a time (think of it as exclusive access to a single connection).
The worker operation is a TCP connection, which is usually fast, but the network can occasionally be unreliable and introduce delays.
We may have lots of producers, and each producer waits for a result after submitting a request.
Given these constraints, can an unbuffered channel have any advantage over a buffered one for our case?
My understanding is that producers will just end up blocking when the single worker can’t keep up; the only difference is whether the blocking happens at “enqueue time” (unbuffered channel) or later, once the buffer fills (buffered channel).
What’s your view — is there any benefit in using an unbuffered/buffered channel in this situation?
Thanks again for the guidance!
I don’t think this is correct. There is only a single select on the consumer side; the order of sends by the producers is already random, based on goroutine wakeup/scheduling.

On Sep 2, 2025, at 10:46, Jason E. Aten <j.e....@gmail.com> wrote:

Yes, but not in terms of performance. Using a buffered channel could provide more "fairness" in the sense of "first-come, first-served". If you depend on the (pseudo-randomized) select to decide which producer's job gets serviced next, you could increase your response latency by arbitrarily delaying an early job for a long time, while a late-arriving job can "jump the queue" and get serviced immediately. The buffered channel will preserve some of the arrival order, but only up to its length; after that the late arrivers will still be randomly serviced, due to the pseudo-random select mechanism. So if you demand true FIFO for all jobs, then you might well be better served by using a mutex and a slice to keep your jobs anyway, such that the limit on your queue is available memory rather than a fixed channel buffer size.
Of course if you over-run all of your memory, you are in trouble again. As usual, back-pressure is a really critical component of most designs. You usually want it.
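For concreteness, a rough sketch of that mutex-and-slice FIFO queue (all names invented; one possible shape, not a definitive implementation):

package main

import (
	"fmt"
	"sync"
)

// fifoQueue keeps jobs in strict arrival order; its only capacity
// limit is available memory, unlike a fixed-size channel buffer.
type fifoQueue struct {
	mu   sync.Mutex
	cond *sync.Cond
	jobs []int
}

func newFifoQueue() *fifoQueue {
	q := &fifoQueue{}
	q.cond = sync.NewCond(&q.mu)
	return q
}

func (q *fifoQueue) push(j int) {
	q.mu.Lock()
	q.jobs = append(q.jobs, j)
	q.mu.Unlock()
	q.cond.Signal() // wake the worker if it is waiting
}

func (q *fifoQueue) pop() int {
	q.mu.Lock()
	defer q.mu.Unlock()
	for len(q.jobs) == 0 {
		q.cond.Wait() // releases the lock while waiting
	}
	j := q.jobs[0]
	q.jobs = q.jobs[1:]
	return j
}

func main() {
	q := newFifoQueue()
	for i := 0; i < 3; i++ {
		q.push(i)
	}
	for i := 0; i < 3; i++ {
		fmt.Println(q.pop()) // 0, 1, 2: strict arrival order
	}
}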
I don't think channel receive order is random when the senders are blocked. Sending goroutines are queued in a linked list in FIFO order within the runtime's channel struct (hchan).
Yes, but without external synchronization you have no ordering on the senders, so which one actually blocks waiting for the receiver is random; further writers block and are added to the list, but there is no ordering guarantee among them.
As the OP described, the writer needs to wait for a response, so the “goroutine doesn't have to wait for another goroutine to schedule in order to pick up the next work in the queue” argument doesn’t apply.
So having unbuffered on both the request and response channels when you only have a single worker simplifies things - but if the requestor doesn’t read the response (or none is provided) you will have a deadlock.
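For illustration, a minimal sketch of that unbuffered setup with a single worker (names invented), with a comment marking where the deadlock would occur:

package main

import "fmt"

type request struct {
	payload int
	reply   chan int // unbuffered: the worker blocks on the send until someone reads
}

func main() {
	requests := make(chan request) // unbuffered request channel

	go func() { // the single worker
		for req := range requests {
			// This send blocks until the requestor receives. If the
			// requestor never reads, the worker (and the whole queue)
			// deadlocks right here.
			req.reply <- req.payload * 2
		}
	}()

	req := request{payload: 21, reply: make(chan int)}
	requests <- req          // blocks until the worker is ready: back-pressure at enqueue time
	fmt.Println(<-req.reply) // must read, or the worker hangs forever
}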
On Tuesday, September 2, 2025 at 7:09:02 PM UTC+1 David Finkel wrote:
> I don't think channel receive order is random when the senders are blocked. Sending goroutines are queued in a linked list in FIFO order within the runtime's channel struct (hchan).

Interesting--thanks for pointing this out, David. I think this is an implementation detail rather than a promise made by the language spec, no? i.e. something that is subject to change, since the specification does not guarantee it.
In other words: beginners, you should not depend on it. Arguably, since it's not in the spec, it ought to be randomized to prevent people from assuming it will always hold... since it could change in the future.
For replies I use the ticket + close-a-done-channel pattern, rather than a single reply channel. Thus the worker never has to block itself once it is done with the work, and multiple parties can observe that the job has finished if they have a pointer to it. See for example https://go.dev/play/p/ggviji5FfYz
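Roughly the shape of that pattern, as a condensed sketch (this is not the playground code; all names here are invented):

package main

import "fmt"

type ticket struct {
	payload int
	result  int
	done    chan struct{} // closed by the worker when result is ready
}

func main() {
	work := make(chan *ticket, 100)

	go func() { // the single worker
		for t := range work {
			t.result = t.payload * 2
			close(t.done) // closing never blocks, however many observers there are
		}
	}()

	t := &ticket{payload: 21, done: make(chan struct{})}
	work <- t
	<-t.done // any goroutine holding *ticket can wait like this
	fmt.Println(t.result) // safe to read: the close happens after the write
}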
David wrote:
> The typical way is to include a channel of capacity 1 in the "message" that's going to the worker.

I like this idea. But you have to guarantee that only one observer will ever check whether the job is done.
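A tiny sketch of why that guarantee matters (illustrative): the capacity-1 reply holds exactly one value, so exactly one receive can ever succeed:

package main

import "fmt"

func main() {
	reply := make(chan int, 1)
	reply <- 42          // the worker's send: never blocks, the buffer has room
	fmt.Println(<-reply) // the one observer receives the result
	// A second <-reply here would block forever: the single value is gone.
	// The close-a-done-channel pattern above lifts that restriction.
}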