> The part of this balance that I don't understand very well is the
> costs. As I said in
https://blog.golang.org/toward-go2:
>
>> Convincing others that a problem is significant is an essential
>> step. When a problem appears insignificant, almost every solution
>> will seem too expensive. But for a significant problem, there are
>> usually many solutions of reasonable cost. When we disagree about
>> whether to adopt a particular solution, we're often actually
>> disagreeing about the significance of the problem being solved.
>
> I don't understand the development models where changing major
> versions so often makes sense. It seems to me that probably such
> churn is a mistake. But I would like to understand these situations
> better.
>
> If you have experience with a project that bumps major version
> number frequently, can you please explain to me why that's essential
> to the way the project operates and why that's not impacting users?
It's common in organizations that primarily deliver an executable
product (not a library) and whose components are developed by
different, independent teams. The producers and consumers of these
components belong to the same organization. The components may or
may not be exposed publicly, but even when they are, the development
focus is on supporting the other internal teams in pursuit of a
specific common goal, not on supporting third-party consumers.
> If you have experience with a project that bumps major version
> number frequently, can you please explain to me why that's essential
> to the way the project operates and why that's not impacting users?
It is impacting users, but it's impacting mostly (or solely) internal
users, and the cost of this is offset by whatever is gained by the
change in the first place.
> For example, if you bump the major version when you delete a function,
> why not just mark the function deprecated, implement it in terms
> of whatever new functionality users should reach for instead, and
> leave it in?
That's possible, and probably the right choice when you are developing
a library that is supposed to be used by arbitrary people for
arbitrary reasons. However, in the case described above, libraries
are used only (or mostly) internally, and the expectation is that
all code will migrate to the new API, so eventually no consumers
will be left on the old API. In that case, maintaining the old API
is a cost that provides no benefit.
Someone might ask: "if there's no desire to maintain old APIs
anymore, why not update both the producers and the consumers
simultaneously?"
The answer is pretty obvious: it's the same reason we got type
aliases. Different teams need to move forward at their own pace.
They might not have the right, or the desire, to commit to each
other's source trees.
There's another, more subtle reason why organizations might choose
to use versioned libraries internally. In big projects it's normal
for different teams to work on mutually dependent, separate modules
that, as before, are more or less independently developed. Go
packages can't form import cycles, so such organizations need to
create a third, common package that both teams import and both
have to modify as time passes. This requires collaboration between
the teams, because now they both work on the same code, and it
creates friction when their different schedules don't line up for
this kind of modification.
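A minimal sketch of the common-package pattern, with hypothetical
names. In real code, Logger would live in a shared package that
both teams import, and Server and ConsoleLogger would live in each
team's own package; everything is collapsed into one file here only
so the sketch is self-contained:

```go
package main

import "fmt"

// Logger is the shared contract. In practice it would live in a
// common package imported by both teams, breaking the would-be
// import cycle between them.
type Logger interface {
	Log(msg string)
}

// Server is team A's component. It depends only on the shared
// Logger interface, never on team B's package directly.
type Server struct {
	log Logger
}

func (s *Server) Handle(req string) {
	s.log.Log("handling " + req)
}

// ConsoleLogger is team B's component, implementing the shared
// interface without importing team A's package.
type ConsoleLogger struct{}

func (ConsoleLogger) Log(msg string) { fmt.Println(msg) }

func main() {
	s := &Server{log: ConsoleLogger{}}
	s.Handle("request-1")
}
```

Both teams still have to agree on changes to Logger, which is
exactly the friction described above.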
With versioned Go modules, this problem is much alleviated. Modules
can be mutually dependent. There is still a need for intermediary
packages, but there is less need for coordination when making
changes to them. Versioning allows a "two-phase commit" style of
development: each team makes the changes it deems necessary, with
the understanding that the other team will pick them up, preferably
as soon as possible, but ultimately at its own pace.
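For illustration, the consumer side of that two-phase flow might
look like the following go.mod fragment (the module paths and
version numbers are made up):

```
module example.com/teamb/consumer

go 1.21

// Phase one: team A has already published v1.8.0 of its module.
// Phase two: team B stays pinned to v1.7.0 until it is ready,
// then bumps this requirement on its own schedule.
require example.com/teama/api v1.7.0
```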
--
Aram Hăvărneanu