Actually, the explanation is a little further down in the docs; specifically, this paragraph:
For floating-point values, width sets the minimum width of the field and precision sets the number of places after the decimal, if appropriate, except that for %g/%G precision sets the total number of significant digits. For example, given 12.345 the format %6.3f prints 12.345 while %.3g prints 12.3. The default precision for %e and %f is 6; for %g it is the smallest number of digits necessary to identify the value uniquely.
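As a quick sanity check of the 12.345 example from that paragraph, here's a minimal sketch (the variable name v is just for illustration):

package main

import "fmt"

func main() {
	v := 12.345
	fmt.Printf("%6.3f\n", v) // prints "12.345": minimum width 6, 3 digits after the decimal point
	fmt.Printf("%.3g\n", v)  // prints "12.3": 3 significant digits in total
}

The same rules explain what happens with a and b from the question: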
// These print the same
fmt.Printf("%f\n", a+b)
fmt.Printf("%.6f\n", a+b)
// and these print the same *for this particular number, not for all numbers*
fmt.Printf("%g\n", a+b)
fmt.Printf("%.15f\n", a+b)
So, again, the primary point is that for %g, the *default* precision is the smallest number of digits necessary to identify the value uniquely, *not* a pre-specified value. For 'a' alone (i.e. 23.61),
fmt.Printf("%f\n", a) prints '23.610000'
and
fmt.Printf("%g\n", a) prints '23.61'
So this is documented behavior; the bullet-point description of %g just lacks the precision caveat.