slices grow at 25% after 1024 but why 1024?


Miraddo

Sep 4, 2021, 1:14:30 PM
to golang-nuts
Hey Guys,

We know slices grow by doubling until size 1024; after that, the capacity grows by 25%. The question I had was: why after 1024 elements? Why didn't the developers choose another number, like 2048?
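To see the pattern directly, here is a minimal sketch; the exact capacities, especially past 1024, depend on the Go version and the element size, because the runtime also rounds each allocation up to a malloc size class:

    package main

    func main() {
        var s []int
        prev := -1
        for i := 0; i < 3000; i++ {
            s = append(s, i)
            if cap(s) != prev { // print only when the backing array was reallocated
                println(len(s), cap(s)) // roughly doubles up to 1024, then grows by ~25%
                prev = cap(s)
            }
        }
    }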

Thanks,
Milad

Rob Pike

Sep 4, 2021, 5:42:24 PM
to Miraddo, golang-nuts
What would you pick? You need to pick something.

It was just arbitrary, I'm sure. 1024 is a nice number, and it's larger than the length of many slices.

Sometimes a number is just a number.

-rob



tapi...@gmail.com

Sep 4, 2021, 11:39:30 PM
to golang-nuts
The problem with the current algorithm is that it is not monotonically increasing:

    x1 := make([]int, 897)
    x2 := make([]int, 1024)
    y := make([]int, 100)
    println(cap(append(x1, y...))) // 2048
    println(cap(append(x2, y...))) // 1280

And it is not symmetrical:

    x := make([]int, 98)
    y := make([]int, 666)
    println(cap(append(x, y...))) // 768
    println(cap(append(y, x...))) // 1360

And it depends on the capacity of the first argument,
which is sometimes logical but often not:

    x := make([]byte, 100, 500)
    y := make([]byte, 500)
    println(cap(append(x, y...))) // 1024
    println(cap(append(x[:len(x):len(x)], y...))) // 640

Kurtis Rader

Sep 4, 2021, 11:50:17 PM
to tapi...@gmail.com, golang-nuts
On Sat, Sep 4, 2021 at 8:39 PM tapi...@gmail.com <tapi...@gmail.com> wrote:
The problem with the current algorithm is that it is not monotonically increasing:

Please explain why that is a problem. Also, I think you are misusing "monotonically increasing". The behavior up to length 1024 is not "monotonically increasing". It is exponential growth.
 
    x1 := make([]int, 897)
    x2 := make([]int, 1024)
    y := make([]int, 100)
    println(cap(append(x1, y...))) // 2048
    println(cap(append(x2, y...))) // 1280

And it is not symmetrical:

Again, please explain why this is a problem.
 
    x := make([]int, 98)
    y := make([]int, 666)
    println(cap(append(x, y...))) // 768
    println(cap(append(y, x...))) // 1360

And it depends on the capacity of the first argument,
which is sometimes logical but often not:

    x := make([]byte, 100, 500)
    y := make([]byte, 500)
    println(cap(append(x, y...))) // 1024
    println(cap(append(x[:len(x):len(x)], y...))) // 640

On Saturday, September 4, 2021 at 1:14:30 PM UTC-4 Miraddo wrote:
Hey Guys,

We know slices grow by doubling until size 1024; after that, the capacity grows by 25%. The question I had was: why after 1024 elements? Why didn't the developers choose another number, like 2048?

Thanks,
Milad



--
Kurtis Rader
Caretaker of the exceptional canines Junior and Hank

tapi...@gmail.com

Sep 4, 2021, 11:57:10 PM
to golang-nuts
On Saturday, September 4, 2021 at 11:50:17 PM UTC-4 Kurtis Rader wrote:
On Sat, Sep 4, 2021 at 8:39 PM tapi...@gmail.com <tapi...@gmail.com> wrote:
The problem with the current algorithm is that it is not monotonically increasing:

Please explain why that is a problem. Also, I think you are misusing "monotonically increasing". The behavior up to length 1024 is not "monotonically increasing". It is exponential growth.

I mean over the whole length range. Why would you select [0, 1023]?
I never denied that it is exponential growth.
 
 
    x1 := make([]int, 897)
    x2 := make([]int, 1024)
    y := make([]int, 100)
    println(cap(append(x1, y...))) // 2048
    println(cap(append(x2, y...))) // 1280

And it is not symmetrical:

Again, please explain why this is a problem.

When I merge two slices, I expect the argument order not to matter.

Kurtis Rader

Sep 5, 2021, 12:12:46 AM
to tapi...@gmail.com, golang-nuts
On Sat, Sep 4, 2021 at 8:57 PM tapi...@gmail.com <tapi...@gmail.com> wrote:
On Saturday, September 4, 2021 at 11:50:17 PM UTC-4 Kurtis Rader wrote:
On Sat, Sep 4, 2021 at 8:39 PM tapi...@gmail.com <tapi...@gmail.com> wrote:
The problem with the current algorithm is that it is not monotonically increasing:

Please explain why that is a problem. Also, I think you are misusing "monotonically increasing". The behavior up to length 1024 is not "monotonically increasing". It is exponential growth.

I mean over the whole length range. Why would you select [0, 1023]?
I never denied that it is exponential growth.

Okay, I agree that technically your use of "monotonically increasing" does not preclude exponential growth. However, your contrasting the behavior around the pivot point of 1024 implies that exponential growth is excluded, since the growth is exponential on each side of the pivot point, yet you stated it was "monotonically increasing" on one side but not the other.
 
    x1 := make([]int, 897)
    x2 := make([]int, 1024)
    y := make([]int, 100)
    println(cap(append(x1, y...))) // 2048
    println(cap(append(x2, y...))) // 1280

And it is not symmetrical:

Again, please explain why this is a problem.

When I merge two slices, I expect the argument order not to matter.

Why? That is an undocumented implementation detail. Furthermore, the length of "x1" and "x2" at the time when they are appended to, in combination with the value being appended, are reasonable predictors of the capacity of the new slice. It is up to you to prove that a different heuristic performs better. 

Dan Kortschak

Sep 5, 2021, 12:16:48 AM
to golan...@googlegroups.com
This is what you're describing, right?
https://play.golang.org/p/RJbEkmFsPKM

The code that does this is here
https://github.com/golang/go/blob/9133245be7365c23fcd60e3bb60ebb614970cdab/src/runtime/slice.go#L183-L242
. Note that there are cap adjustments to optimise block sizes and
alignments. This explains why we see what we do.
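Roughly, the growth rule in that function at the time of this thread looks like the sketch below. This is a simplified paraphrase, not the actual runtime code: it leaves out the overflow checks and the rounding of the byte size up to a malloc size class, which is why the capacities observed in practice come out larger than the raw formula suggests.

    package main

    // oldCap is the capacity of the slice being appended to; needed is the
    // length required after the append (old length plus the number of new
    // elements). Simplified paraphrase of the runtime's growth heuristic.
    func nextCap(oldCap, needed int) int {
        newCap := oldCap
        doubled := oldCap * 2
        switch {
        case needed > doubled:
            newCap = needed // a big append jumps straight to the required length
        case oldCap < 1024:
            newCap = doubled // small slices double
        default:
            for newCap < needed {
                newCap += newCap / 4 // large slices grow by 25% until big enough
            }
        }
        return newCap // the runtime then rounds the byte size up to a size class
    }

    func main() {
        println(nextCap(897, 997))   // 1794, which rounds up to 2048 elements for []int
        println(nextCap(1024, 1124)) // 1280; 1280*8 bytes is already a size class, so it stays
    }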

tapi...@gmail.com

Sep 5, 2021, 12:35:01 AM
to golang-nuts
On Sunday, September 5, 2021 at 12:16:48 AM UTC-4 kortschak wrote:
This is what you're describing, right?
https://play.golang.org/p/RJbEkmFsPKM

The code that does this is here
https://github.com/golang/go/blob/9133245be7365c23fcd60e3bb60ebb614970cdab/src/runtime/slice.go#L183-L242
. Note that there are cap adjustments to optimise block sizes and
alignments. This explains why we see what we do.

Yes, this is the current algorithm. I don't think aligning to a nearby memory block size is a problem.
Personally, I think the slice growth algorithm should not depend on slice capacities.

tapi...@gmail.com

Sep 5, 2021, 12:38:17 AM
to golang-nuts
On Sunday, September 5, 2021 at 12:12:46 AM UTC-4 Kurtis Rader wrote:
On Sat, Sep 4, 2021 at 8:57 PM tapi...@gmail.com <tapi...@gmail.com> wrote:
On Saturday, September 4, 2021 at 11:50:17 PM UTC-4 Kurtis Rader wrote:
On Sat, Sep 4, 2021 at 8:39 PM tapi...@gmail.com <tapi...@gmail.com> wrote:
The problem with the current algorithm is that it is not monotonically increasing:

Please explain why that is a problem. Also, I think you are misusing "monotonically increasing". The behavior up to length 1024 is not "monotonically increasing". It is exponential growth.

I mean over the whole length range. Why would you select [0, 1023]?
I never denied that it is exponential growth.

Okay, I agree that technically your use of "monotonically increasing" does not preclude exponential growth. However, your contrasting the behavior around the pivot point of 1024 implies that exponential growth is excluded, since the growth is exponential on each side of the pivot point, yet you stated it was "monotonically increasing" on one side but not the other.

My opinions include:
1. There should not be a dip anywhere in the whole length range.
2. The increase should depend only on the sum of the lengths of the two arguments.
 
 
    x1 := make([]int, 897)
    x2 := make([]int, 1024)
    y := make([]int, 100)
    println(cap(append(x1, y...))) // 2048
    println(cap(append(x2, y...))) // 1280

And it is not symmetrical:

Again, please explain why this is a problem.

When I merge two slices, I expect the argument order not to matter.

Why? That is an undocumented implementation detail. Furthermore, the length of "x1" and "x2" at the time when they are appended to, in combination with the value being appended, are reasonable predictors of the capacity of the new slice. It is up to you to prove that a different heuristic performs better. 

Is there any evidence to prove the current algorithm is better?

Kurtis Rader

Sep 5, 2021, 12:52:06 AM
to tapi...@gmail.com, golang-nuts
On Sat, Sep 4, 2021 at 9:38 PM tapi...@gmail.com <tapi...@gmail.com> wrote:
Why? That is an undocumented implementation detail. Furthermore, the length of "x1" and "x2" at the time when they are appended to, in combination with the value being appended, are reasonable predictors of the capacity of the new slice. It is up to you to prove that a different heuristic performs better. 

Is there any evidence to prove the current algorithm is better?

It is your responsibility to show that a different heuristic (not "algorithm") is better.

tapi...@gmail.com

Sep 5, 2021, 1:01:38 AM
to golang-nuts
On Sunday, September 5, 2021 at 12:52:06 AM UTC-4 Kurtis Rader wrote:
On Sat, Sep 4, 2021 at 9:38 PM tapi...@gmail.com <tapi...@gmail.com> wrote:
Why? That is an undocumented implementation detail. Furthermore, the length of "x1" and "x2" at the time when they are appended to, in combination with the value being appended, are reasonable predictors of the capacity of the new slice. It is up to you to prove that a different heuristic performs better. 

Is there any evidence to prove the current algorithm is better?

It is your responsibility to show that a different heuristic (not "algorithm") is better.

I have shown some examples above, whose outputs would be more in line with expectations
if the heuristic depended only on the sum of the lengths of the two arguments.

Clarification: I don't think the exponential growth algorithm is bad.
I just think it should make the capacities increase monotonically.

Axel Wagner

Sep 5, 2021, 2:16:25 AM
to Miraddo, golang-nuts
On Sat, Sep 4, 2021 at 7:15 PM Miraddo <mposh...@gmail.com> wrote:
Hey Guys,

We know slices grow by doubling until size 1024; after that, the capacity grows by 25%. The question I had was: why after 1024 elements? Why didn't the developers choose another number, like 2048?

I suspect: No reason. Someone probably ran some benchmarks and 1024 seemed like a decent enough number. Maybe not even that.

There is no algorithmic reason to switch - as long as the growth is geometric (i.e. the factor between growth-stages is a constant), you get the desired amortized constant cost of append. Whether that factor is 2 or 1.25 doesn't matter. It's just a bit of an optimization to make it 2 for small values - it doesn't terribly matter what's "small" for this heuristic.
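A quick way to convince yourself of the amortized-constant claim is to count how many element copies happen while appending n elements one at a time under a geometric growth factor. A rough simulation (a toy model ignoring size-class rounding, not the runtime's actual code):

    package main

    import "fmt"

    // copiesPerAppend simulates appending n elements one at a time with a
    // geometric growth factor g and reports the average number of element
    // copies per append. The total is on the order of n/(g-1), i.e. a
    // constant number of copies per element for any fixed g > 1.
    func copiesPerAppend(n int, g float64) float64 {
        capacity, length, copied := 1, 0, 0
        for i := 0; i < n; i++ {
            if length == capacity {
                copied += length                        // growing moves every existing element
                capacity = int(float64(capacity)*g) + 1 // +1 keeps tiny capacities growing
            }
            length++
        }
        return float64(copied) / float64(n)
    }

    func main() {
        fmt.Println(copiesPerAppend(1000000, 2))    // about 1 copy per element
        fmt.Println(copiesPerAppend(1000000, 1.25)) // about 4 to 5 copies per element
    }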
 

Thanks,
Milad


Axel Wagner

Sep 5, 2021, 2:26:23 AM
to tapi...@gmail.com, golang-nuts
On Sun, Sep 5, 2021 at 7:02 AM tapi...@gmail.com <tapi...@gmail.com> wrote:


On Sunday, September 5, 2021 at 12:52:06 AM UTC-4 Kurtis Rader wrote:
On Sat, Sep 4, 2021 at 9:38 PM tapi...@gmail.com <tapi...@gmail.com> wrote:
Why? That is an undocumented implementation detail. Furthermore, the length of "x1" and "x2" at the time when they are appended to, in combination with the value being appended, are reasonable predictors of the capacity of the new slice. It is up to you to prove that a different heuristic performs better. 

Is there any evidence to prove the current algorithm is better?

There are two ways in which the current algorithm may be "better" than the alternative:
• If the alternative is to always double, then the current algorithm saves significant memory for large slices, which is better.
• The alternative is to use 1.25 starting at a different threshold (you can't *always* use 1.25, as that would fail for n<4). Finding evidence of whether this is better is easy: change the code, run the benchmarks, see how they change.
 
It is your responsibility to show that a different heuristic (not "algorithm") is better.

I have shown some examples above, whose outputs would be more in line with expectations
if the heuristic depended only on the sum of the lengths of the two arguments.

Okay. Your expectation is wrong. What's the problem with that?

I have never cared about how much `append` grows a slice. Not even once. Even if I had cared to make a prediction and that prediction turned out wrong, it would never have mattered. Why do you think it does?
 

Clarification: I don't think the exponential growth algorithm is bad.
I just think it should make the capacities increase monotonically.
 


Dan Kortschak

Sep 5, 2021, 4:00:56 AM
to golan...@googlegroups.com
On Sun, 2021-09-05 at 08:15 +0200, 'Axel Wagner' via golang-nuts wrote:
> Whether that factor is 2 or 1.25 doesn't matter. It's just a bit of
> an optimization to make it 2 for small values - it doesn't terribly
> matter what's "small" for this heuristic.

There is a reason for choosing a factor less than 2; released blocks
during reallocation are able to be reused in later allocations (for the
same slice when grown — something that cannot happen for the case of 2
or greater). Interestingly, Go uses the 1.25× factor while others
appear to use values in the range of 1.5× to less than 2×, with a claim
that phi is the optimum.

There is no discussion in the introduction of append for the choice of
the growth factor: https://codereview.appspot.com/2757042


tapi...@gmail.com

Sep 5, 2021, 6:29:55 AM
to golang-nuts
On Sunday, September 5, 2021 at 2:26:23 AM UTC-4 axel.wa...@googlemail.com wrote:
On Sun, Sep 5, 2021 at 7:02 AM tapi...@gmail.com <tapi...@gmail.com> wrote:


On Sunday, September 5, 2021 at 12:52:06 AM UTC-4 Kurtis Rader wrote:
On Sat, Sep 4, 2021 at 9:38 PM tapi...@gmail.com <tapi...@gmail.com> wrote:
Why? That is an undocumented implementation detail. Furthermore, the length of "x1" and "x2" at the time when they are appended to, in combination with the value being appended, are reasonable predictors of the capacity of the new slice. It is up to you to prove that a different heuristic performs better. 

Is there any evidence to prove the current algorithm is better?

There are two ways in which the current algorithm may be "better" than the alternative:
• If the alternative is to always double, then the current algorithm saves significant memory for large slices, which is better.
• The alternative is to use 1.25 starting at a different threshold (you can't *always* use 1.25, as that would fail for n<4). Finding evidence of whether this is better is easy: change the code, run the benchmarks, see how they change.
 
It is your responsibility to show that a different heuristic (not "algorithm") is better.

I have shown some examples above, whose outputs would be more in line with expectations
if the heuristic depended only on the sum of the lengths of the two arguments.

Okay. Your expectation is wrong. What's the problem with that?

I have never cared about how much `append` grows a slice. Not even once. Even if I had cared to make a prediction and that prediction turned out wrong, it would never have mattered. Why do you think it does?

Expectations are neither wrong nor right; they are like personal tastes.
Personally, I think monotonic increase is good taste.

Brian Candler

Sep 5, 2021, 6:51:13 AM
to golang-nuts
I'm not sure you're clear about what "monotonically increasing" means.

Are you saying that there are some cases where append() results in the allocated size of a slice *shrinking*?  If so, please demonstrate.

Dan Kortschak

Sep 5, 2021, 7:02:43 AM
to golan...@googlegroups.com
I think he means that the cap of the appended slice is not a
monotonically increasing function of the cap of the input slice.

https://play.golang.org/p/RJbEkmFsPKM


Axel Wagner

Sep 5, 2021, 7:10:57 AM
to Dan Kortschak, golang-nuts
On Sun, Sep 5, 2021 at 10:00 AM 'Dan Kortschak' via golang-nuts <golan...@googlegroups.com> wrote:
There is a reason for choosing a factor less than 2; released blocks
during reallocation are able to be reused in later allocations (for the
same slice when grown — something that cannot happen for the case of 2
or greater).

This sounds interesting, but I don't understand it. Would you be willing to expand on this?
 
Interestingly, Go uses the 1.25× factor while others
appear to use values in the range of 1.5× to less than 2×, with a claim
that phi is the optimum.

There is no discussion in the introduction of append for the choice of
the growth factor: https://codereview.appspot.com/2757042



Dan Kortschak

Sep 5, 2021, 7:34:18 AM
to golan...@googlegroups.com
On Sun, 2021-09-05 at 13:09 +0200, 'Axel Wagner' via golang-nuts wrote:
> This sounds interesting, but I don't understand it. Would you be
> willing to expand on this?

It's a consequence of the sum of successive powers of two. To have
reached an allocation of 2^n slots, assuming a doubling n times, you must
have deallocated and moved 2^0 + 2^1 + ... + 2^(n-1) = 2^n - 1 slots in
total. This is less than the amount of space that you are using now,
by one slot, so by definition you will not be able to use any of that
space. Note that here "slots" is any arbitrary number of contiguous
bytes.

This is pessimistic and only deals with allocations associated with a
single growing array, discounting the possibility of recovering from
unused allocations elsewhere.
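To make the arithmetic concrete, here is a small sketch of that idealized setting: all previously freed blocks are assumed to sit contiguously in front of the current one and nothing else allocates in between (a toy model, not how the runtime actually manages memory). It reports the first growth step at which the accumulated freed space could hold the next allocation, which never happens for a factor of 2 and happens within a couple of steps for factors like 1.5:

    package main

    import "fmt"

    // firstReusableStep grows a block by factor g repeatedly and returns the
    // first step at which the space freed by earlier reallocations could hold
    // the new block, or -1 if that never happens within the given steps.
    func firstReusableStep(g float64, steps int) int {
        size, freed := 1.0, 0.0
        for i := 1; i <= steps; i++ {
            freed += size      // the old block is released...
            size *= g          // ...and a larger one is allocated
            if freed >= size { // the freed region could now satisfy the new request
                return i
            }
        }
        return -1
    }

    func main() {
        fmt.Println(firstReusableStep(2.0, 48)) // -1: the freed 2^0+...+2^(n-1) slots are always one short
        fmt.Println(firstReusableStep(1.5, 48)) // 2: reuse becomes possible at the second grow
    }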


Brian Candler

Sep 5, 2021, 7:35:03 AM
to golang-nuts
Ah, I see. In particular, if you have a slice whose len and cap are both 1000 and you append 100 elements, the new cap is 2048; but if len and cap are both 1100, then the new cap is 1408.

I don't see that as a problem in any shape or form.  All that's required is:
(1) New cap is at least as large as new required len
(2) Some extra slack on cap to allow for some future growth.  Even that requirement is a heuristic, on the grounds that "if you've already appended to a slice at least once, then you're likely to append to it again" (but you may not).

I agree with what's been said already.  If the OP wants to propose and implement a specific new heuristic, and can prove via benchmarking that it doesn't degrade performance overall, then let them do so.

Axel Wagner

Sep 5, 2021, 8:17:07 AM
to Dan Kortschak, golang-nuts
On Sun, Sep 5, 2021 at 1:34 PM 'Dan Kortschak' via golang-nuts <golan...@googlegroups.com> wrote:
On Sun, 2021-09-05 at 13:09 +0200, 'Axel Wagner' via golang-nuts wrote:
> This sounds interesting, but I don't understand it. Would you be
> willing to expand on this?

It's a consequence of the sum of successive powers of two. To have
reached an allocation of 2^n slots, assuming a doubling n times, you must
have deallocated and moved 2^0 + 2^1 + ... + 2^(n-1) = 2^n - 1 slots in
total. This is less than the amount of space that you are using now,
by one slot, so by definition you will not be able to use any of that
space.

Nifty. Good to learn new things :) There is a discussion here, which justifies the use of φ, somewhat convincingly:

Does Go use this? Naively, the way `append` works we can't re-use the storage anyway (as it might be aliased by a different slice), so ISTM doing this would require at least cooperation from the compiler to do alias analysis and make sure we are not keeping the slice around elsewhere. It *seems* like a sort of optimization Go doesn't do, but you would know better, I think.
 
Note that here "slots" is any arbitrary number of contiguous
bytes.

This is pessimistic and only deals with allocations associated with a
single growing array, discounting the possibility of recovering from
unused allocations elsewhere.



jake...@gmail.com

Sep 5, 2021, 9:59:29 AM
to golang-nuts
Like Brian, I think part of the problem is possibly a miscommunication based on "monotonically increasing". The term means that each point is greater than, or equal to, the previous one. https://en.wikipedia.org/wiki/Monotonic_function. In other words, it never decreases. In the example given (https://play.golang.org/p/RJbEkmFsPKM), the capacities are "monotonically increasing", as no number in the second column is smaller than the one before it.

Arnaud Delobelle

Sep 5, 2021, 10:16:08 AM
to jake...@gmail.com, golang-nuts


On Sun, 5 Sep 2021, 14:59 jake...@gmail.com, <jake...@gmail.com> wrote:
[...]
In the example given  (https://play.golang.org/p/RJbEkmFsPKM), the capacities are "monotonically increasing", as no number in the second column is smaller than the one before it.
 
Nitpick: that is not true.  I copy-pasted the output of the playground below.

0 8
100 208
200 416
300 640
400 896
500 1024
600 1280
700 1408
800 1792
900 2048
1000 2048
1100 1408 <-- at this point the new cap is less than for the previous row
1200 1536
1300 1792
1400 1792
1500 2048
1600 2048
1700 2304
1800 2304
1900 2688

Regards,

Arnaud

On Sunday, September 5, 2021 at 7:02:43 AM UTC-4 kortschak wrote:
On Sun, 2021-09-05 at 03:51 -0700, Brian Candler wrote:
> I'm not sure you're clear about what "monotonically increasing"
> means.
>
> Are you saying that there are some cases where append() results in
> the allocated size of a slice *shrinking*? If so, please
> demonstrate.

I think he means that the cap of the appended slice is not a
monotonically increasing function of the cap of the input slice.

https://play.golang.org/p/RJbEkmFsPKM



Jesper Louis Andersen

Sep 5, 2021, 11:29:49 AM
to jake...@gmail.com, golang-nuts
On Sun, Sep 5, 2021 at 3:59 PM jake...@gmail.com <jake...@gmail.com> wrote:
Like Brian, I think part of he problem is possibly a miscommunication based on "monotonically increasing". The term means that each point is greater than, or equal to, the previous one. https://en.wikipedia.org/wiki/Monotonic_function. In other words, it never decreases. In the example given  (https://play.golang.org/p/RJbEkmFsPKM), the capacities are "monotonically increasing", as no number in the second column is smaller than the one before it.


If you take `f` to be the function from the array size to the resulting capacity, it's not monotonic, because:

987 < 1024

but we have

f(987) = 2048 and
f(1024) = 1280

That is f(987) !< f(1024)

breaking monotonicity. It also breaks in the playground at the 1000 and 1100 inputs.

Is it important in most programs? I don't think so. And for those programs in which it is important, chances are there are other problems you are facing way before this one rears its head. At least that would be my expectation.


jake...@gmail.com

Sep 5, 2021, 11:59:01 AM
to golang-nuts
You are 100% correct. I missed that value. Oops.

Keith Randall

Sep 6, 2021, 9:16:58 PM
to golang-nuts
I don't think this is an important thing to fix, but I agree it is a bit odd. If there's a simple way to restore monotonicity we'll consider it.

Arnaud Delobelle

Sep 6, 2021, 10:30:52 PM
to Keith Randall, golang-nuts
If the growing function is currently

f(x) = 2x         for x < 1024
f(x) = x + x/4 otherwise

(Which I haven't checked), couldn't a simple way be to use e.g.

f(x) = 2x                    for x < 1024
f(x) = x + x/4 + 768 otherwise

Then

f(1023) = 2046
f(1024) = 2048

So the function is monotonic (and kind of "smooth")
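A quick sanity check of that suggestion (before the size-class rounding the runtime applies afterwards, so real capacities would differ):

    package main

    import "fmt"

    // grow is the growth function proposed above, before size-class rounding.
    func grow(x int) int {
        if x < 1024 {
            return 2 * x
        }
        return x + x/4 + 768
    }

    func main() {
        // the jump across the 1024 boundary is now smooth and non-decreasing
        for x := 1020; x <= 1028; x++ {
            fmt.Println(x, grow(x)) // ..., 1023 2046, 1024 2048, 1025 2049, ...
        }
    }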

Keith Randall

Sep 7, 2021, 1:02:58 PM
to Arnaud Delobelle, golang-nuts
Sounds good. CL up for review at https://go-review.googlesource.com/c/go/+/347917

Arnaud Delobelle

Sep 8, 2021, 4:34:50 PM
to Keith Randall, golang-nuts
Nice!

Arnaud