Garbage Collection Slides and Transcript now available


r...@golang.org

Jul 17, 2018, 12:36:23 PM
to golang-dev
The transcript of the 2018 ISMM keynote about the Go garbage collector is now available at https://blog.golang.org/ismmkeynote

Abstract
The Go language features, goals, and use cases have forced us to rethink the entire garbage collection stack and have led us to a surprising place. The journey has been exhilarating. This talk describes our journey. It is a journey motivated by open source and Google's production demands. Included are side hikes into dead end box canyons where numbers guided us home. This talk will provide insight into the how and the why of our journey, where we are in 2018, and Go's preparation for the next part of the journey.

Feel free to respond to this post if you have comments or questions.

- Rick

Brendan Tracey

Jul 17, 2018, 2:01:38 PM
to Rick Hudson, golang-dev
Thanks for an excellent blog post.

A couple of comments:
1) For us "out of industry" folks, it would be nice to define SLO the first time you use it.

2) A billion nanoseconds is a second, not a millisecond. I'm not sure if that means the pauses were longer than you think, or if it should really be millions of nanoseconds.

3) The sentence "We weren't going to give up our latency, weren't not going to give up the fact that we don't move stuff." is too convoluted. I double-double-negate it out as "are going to start moving stuff", but then it talks about a non-moving collector.

4) I did not follow the transition between the failures and the future section. I buy your argument that more memory is cheaper than more cores, but I didn't follow what that means for the tradeoff. I haven't read through it multiple times, so it's possible I missed it, but I was expecting it to come back to "so this means X tradeoff in Go is good relative to Y". I assume it's related to the generational / heap discussion, but I think I'm missing something (total non-GC expert though, so I'm possibly outside the target audience).


Julien

Jul 17, 2018, 3:29:00 PM
to golang-dev

Awesome work! Was there a video from that keynote?

Julien,

Rick Hudson

Jul 17, 2018, 3:57:30 PM
to Brendan Tracey, golang-dev
Brendan,
Thanks, I'll clean up some of the language and my math. I'm hesitant to move far from the actual transcript, so I won't address point 4 in the blog.
- Rick


r...@golang.org

Jul 17, 2018, 4:00:17 PM
to golang-dev
There was no video.
- Rick

Michael Jones

Jul 17, 2018, 4:34:26 PM
to r...@golang.org, golang-dev
That's truly fantastic, Rick! Thank you for the clear exposition, and you and everyone else for the many iterations of inspiration and implementation that produced these results. 

--
Michael T. Jones
michae...@gmail.com

so.q...@gmail.com

Jul 18, 2018, 12:16:30 AM
to golang-dev
Reading this, could it be said that, for our own applications, passing structs to a function or method by value is generally more performant than passing by pointer?

Instead of passing a pointer to a function we would pass the actual value. Because we were passing a value, escape analysis would only have to do intraprocedural escape analysis and not interprocedural analysis.
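
Concretely, I have in mind something like this toy sketch (made-up names); running `go build -gcflags=-m` on it shows the compiler's escape-analysis decisions:

package main

import "fmt"

type point struct{ x, y float64 }

// scale receives and returns a copy, so the compiler only has to reason
// about this one function body (intraprocedural analysis) and nothing
// here needs to move to the heap.
func scale(p point, f float64) point {
	p.x *= f
	p.y *= f
	return p
}

func main() {
	p := scale(point{x: 1, y: 2}, 10)
	fmt.Println(p.x, p.y)
}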

r...@google.com

Jul 18, 2018, 10:35:44 AM
to golang-dev
It depends. I would do what makes the code more readable/debuggable/maintainable. If measurements reveal that the code is a hotspot, then consider whether the speedup justifies the change.

cherry

Jul 18, 2018, 11:18:58 AM
to so.q...@gmail.com, golang-dev
On Wed, Jul 18, 2018 at 12:16 AM <so.q...@gmail.com> wrote:
> Reading this, could it be said that, for our own applications, passing structs to a function or method by value is generally more performant than passing by pointer?
>
> Instead of passing a pointer to a function we would pass the actual value. Because we were passing a value, escape analysis would only have to do intraprocedural escape analysis and not interprocedural analysis.

To be clear, the compiler does have an interprocedural escape analysis, so passing a struct by pointer doesn't necessarily hurt performance. (But there are cases where the compiler is not able to statically prove that something doesn't escape.)
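
For instance, in a toy sketch like this (hypothetical names), the pointer argument is only read and never stored, so the analysis can prove it doesn't escape and the caller's value can stay on the stack (again, `go build -gcflags=-m` shows the decision):

package main

import "fmt"

type box struct {
	vals [4]int
}

// sum takes a pointer but never stores it anywhere that outlives the call,
// so escape analysis can prove the argument does not escape.
func sum(b *box) int {
	t := 0
	for _, v := range b.vals {
		t += v
	}
	return t
}

func main() {
	b := box{vals: [4]int{1, 2, 3, 4}}
	// Taking &b does not by itself force b onto the heap.
	fmt.Println(sum(&b))
}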



so.q...@gmail.com

Jul 18, 2018, 6:53:13 PM
to golang-dev
Passing by value tends to be safer in my experience.
I would think there is some penalty for large data bags, but I haven't noticed anything substantive in practice.

Is that partly because of what is described?

Henrik Johansson

Jul 19, 2018, 7:28:39 AM
to so.q...@gmail.com, r...@golang.org, golang-dev
Interesting about fragmentation not being a problem. +Rick Hudson, is there any data available, or can you perhaps elaborate on the conditions under which that holds?


Rick Hudson

Jul 19, 2018, 10:30:25 AM
to Henrik Johansson, so.q...@gmail.com, golang-dev
I would point to programs, including long-running servers, written in languages like C, C++, Rust, and so forth. Fragmentation hasn't been an issue since the advent of size-segregated heap implementations such as Intel's Scalable Malloc, Hoard, tcmalloc, and so on. Go continues to draw heavily on the learnings from these implementations.
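
If anyone wants to poke at this in their own programs, here is a rough sketch (not a rigorous benchmark) that compares live bytes with the bytes held by in-use spans via runtime.MemStats; the gap is roughly the memory dedicated to size classes but not currently in use:

package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Churn the heap with allocations of mixed sizes to exercise the
	// size-segregated allocator, dropping some references to leave holes.
	var keep [][]byte
	for i := 0; i < 1<<16; i++ {
		keep = append(keep, make([]byte, 16+i%1024))
		if i%3 == 0 {
			keep[len(keep)/2] = nil
		}
	}

	runtime.GC()

	var m runtime.MemStats
	runtime.ReadMemStats(&m)

	// HeapAlloc is bytes of live heap objects; HeapInuse is bytes in spans
	// that hold at least one live object.
	fmt.Printf("HeapAlloc: %d\n", m.HeapAlloc)
	fmt.Printf("HeapInuse: %d\n", m.HeapInuse)
	fmt.Printf("overhead:  %.1f%%\n",
		100*float64(m.HeapInuse-m.HeapAlloc)/float64(m.HeapInuse))

	runtime.KeepAlive(keep)
}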


Henrik Johansson

Jul 19, 2018, 3:32:16 PM
to Rick Hudson, golang-dev
Thanks Rick,

I really don't doubt it; it's just that it seems to be a common argument.

Awesome post, keep em coming!



Bryan C. Mills

Jul 19, 2018, 3:59:30 PM
to dahan...@gmail.com, r...@golang.org, golang-dev
The research on fragmentation is old, but the myth persists. 🙂
If you want something you can cite, The Memory Fragmentation Problem: Solved? (Johnstone & Wilson '98) is a good and relatively short read.

You can likely extrapolate that result to Go because Go is a value-oriented language: its allocation patterns and object lifetimes tend to be similar to those of C and C++.
(I would expect the result to differ for reference-heavy languages, where it is more common to allocate a large tree of objects of varying sizes and lifetimes and more common to use bump-pointer allocation instead of a size-segregated heap.)

Florian Weimer

Jul 20, 2018, 2:46:56 PM
to Bryan C. Mills, dahan...@gmail.com, r...@golang.org, golang-dev
On 07/19/2018 09:58 PM, 'Bryan C. Mills' via golang-dev wrote:
> The research on fragmentation is old, but the myth persists. 🙂
> If you want something you can cite, The Memory Fragmentation Problem:
> Solved? (Johnstone & Wilson '98) is a good and relatively short read.

That paper doesn't really explain the good performance of
size-segregated allocators such as tcmalloc. I don't think the version
of Doug Lea's malloc discussed in the paper performs any allocation
segregation at all (at least as relevant to the benchmarks). It's using
a single heap with per-object headers, and only the free lists are
separated by size.

We happen to have a dlmalloc-derived system allocator, and it seems to
have issues related to fragmentation, but we don't know yet to what
extent this is caused by performance hacks which cause the allocator to
postpone coalescing of neighboring free blocks (something the (current)
dlmalloc allocator is very good at).

Thanks,
Florian