If your asynchronous Go channel's receiver lags behind your sender, your
sender will eventually block. Most practical networks drop packets
instead. Your code will look very different depending on which of those
situations you need to code against.
Kai
Go channels can be asynchronous, but most of the time that's not what you want.
When communicating between goroutines running on the same machine, a
synchronous send/recv improves program flow. Synchronous channels have
a lot of advantages: they make program flow predictable and easier to
reason about.
Once you start distributing an application across a network you have
to start dealing with a whole new set of problems: node failures,
network issues, lost messages, timeouts, etc. If you had to deal with
all of that even while running on a single machine, you'd lose
performance. You only really need these kinds of things at the boundary
between processes.
You're not going to distribute a single goroutine to a different
machine; they are too lightweight for that. You'll have logical
groupings of goroutines (a process) on a machine with an
interface (netchan, IPC, RPC) to other logical groups of goroutines.
> The argument that Erlang folks use for their asynchronous sends is
> that it more closely resembles what's really going on in a computer
> network, and therefore is more natural to the problems that one can
> run into when writing a distributed application on a network.
Go is much more about concurrency than distributed parallel computing.
It's about how a program is designed rather than how it's executed and
synchronous channels more closely resemble what's really going on in a
computer that is switching between coroutines.
> I feel the netchan work Rob's been doing is pretty important stuff, so
> I would love to read some thoughts on people's feelings about the
> impact of the synchronous vs asynchronous send choice.
Netchans are nice, they can make the boundary for IPC look a lot more
like the internals, but I'd hate to have all channels behave like
that.
- jessta
--
=====================
http://jessta.id.au
That means you can't just perform a send and allow it to block until
the receiver is ready. You would have to create (or get a handle to)
the ack channel, as well as the sending channel, and perform a send
_and_ a receive. Much more bulky and unwieldy.
> That vs spawning a goroutine for every send seems about the same to me
> in terms of intrusiveness.
Jim's not suggesting spawning a goroutine for every send. He's
suggesting a goroutine that reads from one channel, continuously
writing to a buffer, and another that reads from that buffer and
continuously sends the data on another channel. It's only a few lines
to write, it gives you very explicit control over how the data is
buffered, and it doesn't bloat the sending side (it's still just a
send).
Andrew
What is an asynchronous channel?
Well, it's a buffered channel with a buffer of infinite size.
Since a channel of infinite size is impractical, you instead have to
pick a finite size for the buffer. This buffer should be big enough
that in most average runs of the program it will never be full when a
process wants to send to it. But obviously in some circumstances the
buffer will be full and the sending process has to do something with
the message. The best way to handle this is to have the sending
process block until the buffer has a slot available.
All async communication has a buffer limit.
Given this, Go's idiom of making async channels from goroutines (to
grow the buffer dynamically as needed) and small buffered channels
seems to model exactly what you want.
> The transitive properties of synchronized message passing are both
> great for understanding how a system works, and at the same time, a
> little bit of a nuisance if you're coming from asynchronous messaging
> background.
Something to remember about goroutines (when comparing them to Erlang
processes) is that they are cooperative, and a lot of the goroutines
you see in programs are designed to be in lockstep with each other;
they are often more about making the design of the program easier to
understand than about taking advantage of multiple processors. Go is a
language for concurrent programming, which doesn't always mean
parallel execution.
A great example of this is the prime sieve.
http://golang.org/doc/progs/sieve.go
once i've sent on a synchronous channel, i know that
the receiver must be in a particular state.
this makes them much easier to reason about.
for instance, consider the following code fragment,
where we're trying to execute one of two possible
requests and retrieve the result:
select {
case svc1 <- req1:
    <-req1.reply
case svc2 <- req2:
    <-req2.reply
}
if svc1 and svc2 are synchronous channels, then
we know that when we've sent a request that
the serving goroutine is actively working on that request
and will send a reply eventually.
if they are asynchronous, one of the sends will always
succeed (non-deterministically) and we've no way of knowing
whether the serving goroutine is actually yet aware
of the request - maybe it's exited and never will be.
this kind of property is very useful when creating relatively
closely coupled concurrent programs, but doesn't scale
well - if i interpose a goroutine, the channel becomes
asynchronous and you can't build synchronous channels from asynchronous
channels. that's just fine - it's easy to build asynchronous
channels if you need them, with whatever behaviour you
like when the queue is full.