Thanks for taking the time to look at it. It is a very good question; just like with generics, different people might have different expectations from a decimal type.
These are what I consider to be the pros of the multi-precision approach taken in godec:
1. Exact arithmetic: my impression has been that many people who want to use a decimal type do so because they are concerned about the exactness of the result (like the OP), and it is often a greater concern than efficiency. (Those for whom the performance of decimal calculations is really critical likely use a platform that has hardware support for it anyway.) To this end, godec never rounds numbers implicitly and never overflows, and this implies multi-precision. (Technically, the scale field is fixed-size and can overflow, but I can't think of a situation where that possibility would be a practical concern. I did consider adding panics for scale overflow but decided that even that would be overly defensive.)
2. Interoperability: it could be used for exchanging precise decimal data among various components. E.g. MySQL supports up to 65 decimal digits (216-bit precision), PostgreSQL up to 147455 digits (up to 489835-bit precision), and Oracle up to 38 digits (128-bit precision). It would be desirable for drivers to be able to accept/return precise values as a decimal, but none of these ranges is covered by decimal128 (the Scan part of the sketch after this list shows what I mean). Actually, this thread got me to reconsider the pros and cons of having decimal support in an external package vs the stdlib, and I've come to think that this is a strong argument for having a decimal type in the stdlib: using external packages in individual applications is usually not an issue, but adding external dependencies to libraries that are used in several projects (such as database drivers) is often problematic / undesirable.
3. Code reuse: using the big.Int implementation for a large part of the arithmetic greatly reduces the maintenance burden (the sketch below shows roughly what I mean by this layering).
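To make these points a bit more concrete, here is a rough, hypothetical sketch (this is not godec's actual API, just the shape of the idea): a decimal held as a big.Int coefficient plus a scale, with exact addition and a database/sql Scanner that accepts the textual NUMERIC/DECIMAL values drivers typically hand back.

package decimal

import (
	"fmt"
	"math/big"
	"strings"
)

// Dec is a hypothetical arbitrary-precision decimal: value = coef * 10^(-scale).
// The coefficient reuses big.Int, so it never overflows and never rounds.
type Dec struct {
	coef  big.Int
	scale int32
}

var ten = big.NewInt(10)

// rescaled returns a copy of d's coefficient brought up to the target scale.
func (d *Dec) rescaled(target int32) *big.Int {
	c := new(big.Int).Set(&d.coef)
	for s := d.scale; s < target; s++ {
		c.Mul(c, ten)
	}
	return c
}

// Add sets d = a + b exactly; the result scale is max(a.scale, b.scale).
func (d *Dec) Add(a, b *Dec) *Dec {
	scale := a.scale
	if b.scale > scale {
		scale = b.scale
	}
	d.coef.Add(a.rescaled(scale), b.rescaled(scale))
	d.scale = scale
	return d
}

// Scan implements database/sql.Scanner, so a driver can return an exact
// NUMERIC/DECIMAL column value regardless of its precision.
func (d *Dec) Scan(src any) error {
	var s string
	switch v := src.(type) {
	case string:
		s = v
	case []byte:
		s = string(v)
	default:
		return fmt.Errorf("decimal: cannot scan %T", src)
	}
	intPart, fracPart, _ := strings.Cut(s, ".")
	if _, ok := d.coef.SetString(intPart+fracPart, 10); !ok {
		return fmt.Errorf("decimal: invalid number %q", s)
	}
	d.scale = int32(len(fracPart))
	return nil
}

With something along these lines, rows.Scan(&price) would preserve all digits of, say, a MySQL DECIMAL(65,30) column, which is more than decimal128 can represent.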
The only con I am aware of is that it is potentially less efficient than a fixed-precision representation. Although even with fixed precision, considering interoperability, I'd probably prefer/recommend at least 256 bits, and then, assuming that small values are the common case, I'm not sure whether operations on fixed-precision (200+ bit) values would actually be faster than on multi-precision (but most often single-word) values. (I guess the only way to know for sure is to implement and benchmark both; a rough skeleton of what I'd measure follows below.)
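For what it's worth, the kind of micro-benchmark I have in mind would look roughly like the following (again hypothetical, and it only measures big.Int on single-word vs ~256-bit operands; a proper comparison would also need an actual fixed-precision implementation on the other side).

package decimal

import (
	"math/big"
	"testing"
)

// BenchmarkAddSmall measures big.Int addition when both operands fit in a
// single word, which I'd expect to be the common case for decimal values.
func BenchmarkAddSmall(b *testing.B) {
	x := big.NewInt(12345)
	y := big.NewInt(67890)
	z := new(big.Int)
	for i := 0; i < b.N; i++ {
		z.Add(x, y)
	}
}

// BenchmarkAdd256 measures big.Int addition on ~256-bit operands, roughly the
// width a fixed-precision decimal chosen for interoperability would use.
func BenchmarkAdd256(b *testing.B) {
	x := new(big.Int).Lsh(big.NewInt(1), 255)
	y := new(big.Int).Lsh(big.NewInt(1), 254)
	z := new(big.Int)
	for i := 0; i < b.N; i++ {
		z.Add(x, y)
	}
}

Running it with go test -bench=. would at least show how much headroom the single-word fast path of big.Int leaves before a dedicated fixed-precision type could pay off.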
Based on these pros and cons, I consider it to be a reasonable compromise, but at the same time I understand that it may or may not be what others are looking for, or what the Go team would consider suitable for the stdlib.
Of course, it does not need to be one or the other; there could be separate types for fixed and multi-precision, or maybe even a single type that handles both representations internally, but I don't think that would be worth the added complexity.
Peter