How does Go GC set target heap size in Kubernetes?

Felix Böwing

Jan 17, 2025, 8:44:58 AM
to golang-nuts

We recently had a problem where the Go garbage collector ran hot on a Kubernetes pod, and we couldn't figure out why. The specs were as follows:

  • heap size as reported by the GC was roughly 3GB
  • GOGC percentage was at 100
  • GOMEMLIMIT was set well above the current heap size (6750MiB)
  • Kubernetes memory limit was 9GB
  • Kubernetes memory request was 6GB

These settings all looked good to me.

We found that the problem went away after raising the memory request from 6GB to 9GB.

And here comes the question: How does the Go GC even know about the requested memory? I haven't found anything in the docs or via Google. Could someone point us to the relevant section in the code?

It is nice that this behavior prevents avoidable OOMs, but it also puzzled us for a while.
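For reference, the relevant pacer inputs (GOGC, GOMEMLIMIT) and the current heap goal can be read at runtime via runtime/metrics; a minimal sketch, assuming Go 1.21+ for the GOGC and GOMEMLIMIT metric names:

package main

import (
	"fmt"
	"runtime/metrics"
)

func main() {
	samples := []metrics.Sample{
		{Name: "/gc/gogc:percent"},     // effective GOGC value
		{Name: "/gc/gomemlimit:bytes"}, // effective GOMEMLIMIT
		{Name: "/gc/heap/goal:bytes"},  // heap size the pacer is currently targeting
	}
	metrics.Read(samples)
	for _, s := range samples {
		fmt.Printf("%-24s %d\n", s.Name, s.Value.Uint64())
	}
}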

Michael Knyszek

Jan 17, 2025, 10:21:47 AM
to golang-nuts
The Go runtime isn't aware of whether it's running in a pod/container/cgroup/etc., at least not out of the box. Also, GOMEMLIMIT is (effectively) infinity by default.

Are you using something like https://github.com/KimMachineGun/automemlimit? That would also explain why you saw things change when you changed the memory request; possibly that number was being read by automemlimit. (Or it could be some other package dependency adjusting these parameters at init time. Or it could be something changing your program's environment variables.)
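Roughly speaking, packages like that read the container's memory limit from the cgroup filesystem and hand (some fraction of) it to debug.SetMemoryLimit. A simplified sketch of the idea for cgroup v2 only, not the actual automemlimit code:

package main

import (
	"os"
	"runtime/debug"
	"strconv"
	"strings"
)

func setMemLimitFromCgroup() {
	data, err := os.ReadFile("/sys/fs/cgroup/memory.max") // cgroup v2 memory limit
	if err != nil {
		return // not in a cgroup v2 environment; leave GOMEMLIMIT alone
	}
	s := strings.TrimSpace(string(data))
	if s == "max" {
		return // no limit configured
	}
	limit, err := strconv.ParseInt(s, 10, 64)
	if err != nil {
		return
	}
	// Leave some headroom (here 10%) for non-Go memory in the process.
	debug.SetMemoryLimit(limit * 9 / 10)
}

func main() {
	setMemLimitFromCgroup()
	// ... rest of the program
}

The real package is more careful (cgroup v1 support, configurable headroom, etc.), but the point is that the limit comes from cgroup files, not from anything Kubernetes-specific inside the runtime.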

Felix Böwing

Jan 17, 2025, 10:59:16 AM
to golang-nuts
Thanks for your fast reply. We are not using automemlimit.

But your comment gave me the idea to check the environment variables. Indeed, I found that we expose the requested memory amount to the pod as an environment variable. I then also found the function that calls debug.SetMemoryLimit() to set the limit automatically, overriding our GOMEMLIMIT env variable.

So the mystery is solved: we actively set this ourselves. We can close this thread :)
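For anyone finding this later, the code we found does something roughly along these lines (the environment variable name and the headroom factor below are illustrative, not our exact values):

package main

import (
	"os"
	"runtime/debug"
	"strconv"
)

func init() {
	// Hypothetical variable carrying the pod's memory request in bytes,
	// e.g. exposed via the Kubernetes downward API.
	v := os.Getenv("MEMORY_REQUEST_BYTES")
	if v == "" {
		return
	}
	req, err := strconv.ParseInt(v, 10, 64)
	if err != nil {
		return
	}
	// Overrides whatever GOMEMLIMIT was set to in the environment.
	debug.SetMemoryLimit(req * 9 / 10)
}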