--
Scott Lawrence
I'd be careful about making such sweeping generalizations.
Much better to measure and see what you find.
It might also vary from operating system to operating system.
> And there is a great difference of speed between pipes and sockets.
There's a great difference between shared memory and not.
The others are not nearly so clear cut. There's no inherent
reason one would perform better than another.
Russ
netchan over pipes would be nice, and easy. when i get my computing
world reassembled (don't ask) some netchan work will be high on my list.
-rob
Really? As I understand it, netchan will only work with TCP sockets -
unless the OS does some incredibly hacky stuff for the loopback
interface to avoid redundant chat, that should be way slower than
pipes. (I guess I'll have to look into this...)
> (don't ask)
I don't have to. I'll just go back to hating certain partitioning programs. ;-)
--
Scott Lawrence
Let us know what you find. Be sure to try multiple operating systems.
Russ
X86-64 LINUX
--------------------
For 4KB messages bounced back and forth 1000 times (measuring
latency), pipes clock in at 15ms and sockets at 25ms. Up the
iterations to 10000, and it's about 100ms for pipes and 250ms for
sockets.
For 40KB messages under the same scheme, the differences are more
pronounced; note that the times aren't simply multiplied by 10. At
1000 iterations: 28ms (pipes) and 180ms (sockets) (yes, I'm averaging
over multiple runs). At 10000 iterations: 280ms and about a second.
Now for large messages, sent back and forth 10 times (latency is no
longer a factor).
4MB: 30ms (pipes) and 100ms (sockets).
40MB: 293ms and 580ms.
qemu-system-x86_64 (w/ kvm) FreeBSD
------------------------------------------------------------
message-size  iterations  :  pipe ms  socket ms
4KB           1000        :  13       50
4KB           10000       :  131      500
40KB          1000        :  100      700
40KB          10000       :  300      1600
4MB           10          :  80       180
40MB          10          :  250      2200
Conclusion: we need channels over pipes ;-)
The program I used: http://gist.github.com/555626 - let me know if you
receive drastically different results (or a segfault - especially a
segfault). I can't figure out how to close the socket correctly (I've
never liked sockets), so you have to change the port number in between
runs. Feel free to fork and fix that. Note that I send everything in
4096-byte chunks - you may be able to improve performance by tweaking
that. Let us know.
I'll post more thorough data (more operating systems and some pretty
pictures) sometime.
On 8/26/10, Russ Cox <r...@golang.org> wrote:
> On Thu, Aug 26, 2010 at 22:28, Scott Lawrence <byt...@gmail.com> wrote:
>> On 8/26/10, Russ Cox <r...@golang.org> wrote:
>>> There's a great difference between shared memory and not.
>>> The others are not nearly so clear cut. There's no inherent
>>> reason one would perform better than another.
>>
>> Really? As I understand it, netchan will only work with TCP sockets -
>> unless the OS does some incredibly hacky stuff for the loopback
>> interface to avoid redundant chat, that should be way slower than
>> pipes. (I guess I'll have to look into this...)
>
> Let us know what you find. Be sure to try multiple operating systems.
>
> Russ
>
--
Scott Lawrence
On the other hand, this is a well-known pattern that has been used in Unix software for decades, so while caution is certainly necessary in getting the implementation right, it shouldn't be beyond the wit of many people here.
Now Channels over shared memory... well that would really be a challenge ;)
Ellie
Eleanor McHugh
Games With Brains
http://feyeleanor.tel
----
raise ArgumentError unless @reality.responds_to? :reason
Who would? But if speed of data blitting between processes is critical then shared memory may well be the right tool for the job despite its many potential pitfalls.
Actually, quite a few languages are garbage collected and compiled to
native code. Haskell, OCaml and the whole ML family come to mind
(since I've been immersed in functional languages for a while), but I
believe D also falls in this category. You may be right about most
recent languages running in a VM, interpreter or JIT, though.
--
David Roundy
> The question that comes to my mind in this discussion is: What about
> context switching? Every time you have to switch to another process
> there is a lot that happens on the hardware and operating system
> level, how can that be avoided? I suppose it depends in no small part
> on the OS design.
Having a garbage collector which runs on its own core but which shares
memory with other goroutines is not a full-on context switch. It only
requires changing registers, not the page maps. This kind of context
switch is not free, but the cost is more on the order of a couple of
function calls, plus the memory cache hit. No operating system calls
are involved.
Ian
There is a niche to be filled: a language that has the good,
avoids the bad, and is suitable to modern computing
infrastructure:
- comprehensible
- statically typed
- light on the page
- fast to work in
- scales well
- doesn't require tools, but supports them well
So no, making a garbage collector was not a design goal of the
language. Designing a garbage collector helps some of the design
goals, but it isn't one in and of itself. That's why the garbage
collector sits lower on the priority list than the compilers and bug
fixes, and it also explains why the garbage collection has been so
simple thus far.
-Skip
It depends on the HPC workload, but most often large-memory problems
involve... large arrays. In such a case, there would be very little
cache penalty, since the GC wouldn't need to scan any of the large
arrays (since Go is type safe, it can tell there aren't any pointers
hiding in there). Also, most reasonably optimized Go code won't do
any allocations in the core of the code, and thus won't require any GC.
The real problem with GC in HPC seems to me to be unpredictable
lifetimes. Many (most) expensive algorithms do no allocation (e.g.
all of LAPACK), so for them it's not a problem. But there are lots of
cheap O(N) operations where I would like the convenience of
automatically-tracked temporaries, and there GC seems likely to fail
miserably (as in, use too much memory).
That said, go is clearly not targeting the HPC market, and I doubt
it'll appeal much either with or without the overhead of garbage
collection.
David Roundy