--
You received this message because you are subscribed to the Google Groups "golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email to golang-nuts...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
TODO: It may be bumped to 4096 for the release.
Did it get bumped to 4096? This may be affecting benchmark code disproportionately, as it's not
uncommon to use a dummy non-pointer value in a benchmark.
--
// TODO(rsc): When garbage collector changes, revisit
// the allocations in this file that use unsafe.Pointer.
--
One more thing. A bug fix plus the introduction of atomic.Value makes it significantly faster under heavy load on multiple cores.
As always with performance, the full story is not well captured by a simple benchmark number.
-rob
Could someone educate me on what determines the optimal goroutine stack size? I understand that the larger the stack becomes, the more memory each goroutine uses, limiting the maximum number of goroutines. What's the pressure in the other direction? Likelihood of copying and moving the stack immediately? If so, in theory could that be improved with better static analysis?
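The growth cost is easy to provoke: a goroutine starts with a small stack, and a deep call chain forces the runtime to grow it (and, with contiguous stacks, copy it). A rough sketch, where the recursion depth and the 64-byte padding are arbitrary choices just to force growth:

```go
package main

import "fmt"

// deep recurses n frames; the 64-byte array parameter pads each frame
// so the goroutine's initially small stack must be grown (copied) as
// the call chain deepens.
func deep(n int, pad [64]byte) int {
	if n == 0 {
		return 0
	}
	pad[0] = byte(n) // keep the padding live in this frame
	return 1 + deep(n-1, pad)
}

func main() {
	done := make(chan int)
	go func() { done <- deep(10000, [64]byte{}) }()
	fmt.Println(<-done)
}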
gcc5 will likely support Go 1.4, but it is not released yet. GCC trunk currently supports Go 1.3.3, and GCC 4.9.1 ships with 1.2 (not 1.3). There is currently no version of gccgo that supports Go 1.4.
--
I understand more about what is happening here. The underlying cause is that the heap is a lot smaller in 1.4 than in 1.3; in this example, about 40% smaller. Live heap data went down from 300K to 180K. Because we GC when we reach 2x live data, a smaller heap means we GC more often.
About 10% of the heap reduction is due to more efficient encoding of type information in the heap. The other 30% is reduction in (and change in accounting for) stacks. We no longer account for stacks as part of the heap. Non-default-sized stack segments used to be counted as part of the heap.
GC still seems slower than I thought it would be, but this effect accounts for most of it. You can check yourself by adjusting GOGC to a larger value for 1.4, to match heap sizes with 1.3 (you can see the heap sizes using GODEBUG=gctrace=1).
This is an unfortunate side effect of improving our space efficiency. I'm not sure what can be done about it, other than to increase the default GOGC value. But that seems wrong also.
--
You received this message because you are subscribed to a topic in the Google Groups "golang-nuts" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/golang-nuts/7VAcfULjiB8/unsubscribe.
To unsubscribe from this group and all its topics, send an email to golang-nuts...@googlegroups.com.
Great analysis. It feels like each GC cycle has a fixed overhead, so more, smaller GCs add up to a larger total wall time. How odd.
I ran a benchmark of my library Gomail (it builds an email). Like the other benchmarks mentioned here, it shows a ~30% slowdown:

benchmark        old ns/op     new ns/op     delta
BenchmarkFull    143705        189744        +32.04%

benchmark        old allocs    new allocs    delta
BenchmarkFull    322           336           +4.35%

benchmark        old bytes     new bytes     delta
BenchmarkFull    38328         38287         -0.11%
--
Hi Go nuts,
We have just released go1.4beta1, a beta version of Go 1.4. It is cut from the default branch at a revision tagged as go1.4beta1.
Ints in interfaces allocate now, but they didn't before. Write barriers slow things down. The garbage collector is faster. The runtime has had some speedups. The performance effects of these confounding factors depend on the program. If you can isolate a single simple benchmark that illustrates a slowdown, we can investigate.
-rob
--
You received this message because you are subscribed to a topic in the Google Groups "golang-nuts" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/golang-nuts/7VAcfULjiB8/unsubscribe.
To unsubscribe from this group and all its topics, send an email to golang-nuts...@googlegroups.com.