
How convergent was the general use of binary floating point?


Russell Wallace

Dec 3, 2022, 10:01:46 PM
Some historical events and trends are contingent. For example, the rise and dominance of the x86 architecture could easily have been otherwise, had the IBM PC team chosen a different chip such as the Motorola 68000. Others are convergent; somewhat different historical circumstances would still have converged on the same outcome. For example, given that the industry settled on the 8-bit byte, nothing short of Moore's law being interrupted by a global cataclysm would have prevented the existence of 64-bit computers in 2022.

I'm trying to figure out which category floating point as we know it falls into.

By this, I do not mean the specifics of the IEEE 754 standard for binary floating point. I hold the fairly conventional view that 754 is mostly pretty decent, certainly an improvement on the previous state of affairs; there are a few things I would like to fix, but in the grand scheme of things, that's not terribly important. 754 does a decent job of serving its intended users, e.g. physical scientists running simulations in Fortran (which general, fuzzy category of people I will refer to here as 'scientists' for short). Let's take as a starting point that IEEE in the late seventies and early eighties agreed on the standard that it did in our timeline.

But the phrase 'its intended users' in the above paragraph is doing a lot of work. The remarkable thing about the history of numerical computation in our world is not that binary floating point exists in its current form, or that it is used by scientists, but that it is also used by everyone else! It's not ideal for accountants, who would have preferred decimal floating point, where the rounding errors match what they would get on paper. It's not ideal for gamers, who would have preferred not to waste electricity on denormals, switchable rounding modes and signaling NaNs. It is, for other reasons, not ideal for machine learning (though that is getting specialized hardware nowadays). But everyone ended up using the system designed for scientists. Why?

(Not that I'm complaining about the state of affairs. A case could be made that, because our species underinvests in public goods like scientific research, it is a good thing that our industry has, even if by accident, ended up on a path where gamers subsidize the development of hardware that can also be used by scientists. In this discussion, I am not taking a position on whether the result is good or bad, only asking whether it is convergent.)

Roughly speaking, the sequence of events was that Intel developed a floating-point coprocessor (the 8087), everyone programmed on the basis of 'use IEEE floating point so the code will run faster if an FPU is present', then with the 486, Intel started integrating the FPU into the CPU.

What exactly did people buy 8087s for, in the early eighties? Of course there were some number of scientists buying them to run simulations and suchlike in Fortran, the main intended purpose of binary floating point. But they were a relatively small group to begin with, and then the only ones who would settle for running their simulations on an 8087 (in the low tens of kiloflops) were those who couldn't get time on a bigger machine. As far as I can tell, most of the 8087s were purchased for one of two things:

Spreadsheets. VisiCalc had used decimal floating point, which is what you preferably want in a spreadsheet; not only does it do arithmetic in a way that better suits accountants, but at that tech level, it would be faster than binary, because the majority of numbers encountered in spreadsheets are simple when expressed in decimal. It's faster to multiply arbitrary 64-bit numbers in binary, but spreadsheets don't typically work with arbitrary numbers, and at that tech level, it's faster to multiply 1.23 by 4.56 in decimal, because the multiply loop can exit early. But for whatever reason, Lotus decided to use binary for 1-2-3, and that meant your spreadsheets would recalculate faster if you bought an 8087.

CAD.

Point of departure: Lotus decides their spreadsheet should use decimal, like VisiCalc. Autodesk notices this, decides the 8087 won't be a big hit, and puts in the programming effort needed to make AutoCAD work with fixed point instead.

On this timeline, far fewer people have a reason to buy an 8087. Intel keeps making them for the relatively small handful of customers, but floating-point coprocessors remain a niche product. C and Pascal compiler vendors notice this, and don't bother supporting the 8087, instead shipping a different binary floating point format that's optimized for software implementation. Noting that this is still rather slow, Borland takes the step of providing language-extension support for fixed point, and other compiler vendors follow suit.

At the end of the eighties, Intel doesn't bother wasting silicon integrating an FPU into the 486, preferring to spend the transistors making integers, fixed point and decimal run faster. In the nineties, as floating point has not become fast by default, 3D game programmers don't start using it. They stick with fixed point, diverting the positive feedback loop that happened OTL from floating to fixed point. GPUs do likewise, as do machine learning researchers when that becomes important.

Scientists find themselves under pressure to find a way to get by with cheap commodity hardware, and Fortran compiler vendors respond by adding support for fixed point. Which is /not/ quite an adequate substitute. When you're writing computational fluid dynamics code, you would much prefer not to have to try to figure out in advance what the dynamic range will be; that's why floating point was invented in the first place! But sometimes 'cheap commodity off-the-shelf' beats 'ideally designed for my purposes'.

Is that a realistic alternate history or, given that point of departure, would numerical computation have still converged on the general use of binary floating point? If the latter, why and how? Note that I'm not making any claims about one form of numerical computation being better than another /in general/ (as opposed to for specific purposes). I'm trying to evaluate a conjecture about how the dynamics of the market would bring about a result.

MitchAlsup

Dec 3, 2022, 10:16:43 PM
On Saturday, December 3, 2022 at 9:01:46 PM UTC-6, Russell Wallace wrote:
> Some historical events and trends are contingent. For example, the rise and dominance of the x86 architecture could easily have been otherwise, had the IBM PC team chosen a different chip such as the Motorola 68000. Others are convergent; somewhat different historical circumstances would still have converged on the same outcome. For example, given that the industry settled on the 8-bit byte, nothing short of Moore's law being interrupted by a global cataclysm would have prevented the existence of 64-bit computers in 2022.
>
> I'm trying to figure out which category floating point as we know it falls into.
>
> By this, I do not mean the specifics of the IEEE 754 standard for binary floating point. I hold the fairly conventional view that 754 is mostly pretty decent, certainly an improvement on the previous state of affairs; there are a few things I would like to fix, but in the grand scheme of things, that's not terribly important. 754 does a decent job of serving its intended users, e.g. physical scientists running simulations in Fortran (which general, fuzzy category of people I will refer to here as 'scientists' for short). Let's take as a starting point that IEEE in the late seventies and early eighties agreed on the standard that it did in our timeline.
>
> But the phrase 'its intended users' in the above paragraph is doing a lot of work. The remarkable thing about the history of numerical computation in our world is not that binary floating point exists in its current form, or that it is used by scientists, but that it is also used by everyone else! It's not ideal for accountants, who would have preferred decimal floating point, where the rounding errors match what they would get on paper. It's not ideal for gamers, who would have preferred not to waste electricity on denormals, switchable rounding modes and signaling NaNs. It is for other reasons, not ideal for machine learning (though that is getting specialized hardware nowadays). But everyone ended up using the system designed for scientists. Why?
>
> (Not that I'm complaining about the state of affairs. A case could be made that, because our species underinvests in public goods like scientific research, it is a good thing that our industry has, even if by accident, ended up on a path where gamers subsidize the development of hardware that can also be used by scientists. In this discussion, I am not taking a position on whether the result is good or bad, only asking whether it is convergent.)
>
> Roughly speaking, the sequence of events was that Intel developed a floating-point coprocessor (the 8087), everyone programmed on the basis of 'use IEEE floating point so the code will run faster if an FPU is present', then with the 486, Intel started integrating the FPU into the CPU.
>
> What exactly did people buy 8087s for, in the early eighties? Of course there were some number of scientists buying them to run simulations and suchlike in Fortran, the main intended purpose of binary floating point. But they were a relatively small group to begin with, and then the only ones who would settle for running their simulations on an 8087 (in the low tens of kiloflops) were those who couldn't get time on a bigger machine. As far as I can tell, most of the 8087s were purchased for one of two things:
>
> Spreadsheets. VisiCalc had used decimal floating point, which is what you preferably want in a spreadsheet; not only does it do arithmetic in a way that better suits accountants, but at that tech level, it would be faster than binary, because the majority of numbers encountered in spreadsheets, are simple when expressed in decimal. It's faster to multiply arbitrary 64-bit numbers in binary, but spreadsheets don't typically work with arbitrary numbers, and at that tech level, it's faster to multiply 1.23 by 4.56 in decimal, because the multiply loop can exit early. But for whatever reason, Lotus decided to use binary for 1-2-3, and that meant your spreadsheets would recalculate faster if you bought an 8087.
>
> CAD.
>
> Point of departure: Lotus decides their spreadsheet should use decimal, like VisiCalc. Autodesk notices this, decides the 8087 won't be a big hit, and puts in the programming effort needed to make AutoCAD work with fixed point instead.
>
> On this timeline, far fewer people have a reason to buy an 8087. Intel keeps making them for the relatively small handful of customers, but floating-point coprocessors remain a niche product. C and Pascal compiler vendors notice this, and don't bother supporting the 8087, instead shipping a different binary floating point format that's optimized for software implementation. Noting that this is still rather slow, Borland takes the step of providing language-extension support for fixed point, and other compiler vendors follow suit.
>
<
By the mid 1980s the RISC evolution was well underway, MIPS R2000 could do FADD in 2 cycles, FMUL in
3 (or was it 4) cycles, LDs were 2 cycles, and almost everything else was 1 cycle. This is where your alternate
history falls apart. The RISC camp delivered FP, and x86 had to follow suit and go from a separate optional
chip (386+387) to integrated and pipelined (486).
<
> At the end of the eighties, Intel doesn't bother wasting silicon integrating an FPU into the 486, preferring to spend the transistors making integers, fixed point and decimal run faster. In the nineties, as floating point has not become fast by default, 3D game programmers don't start using it. They stick with fixed point, diverting the positive feedback loop that happened OTL from floating to fixed point. GPUs do likewise, as do machine learning researchers when that becomes important.
>
> Scientists find themselves under pressure to find a way to get by with cheap commodity hardware, and Fortran compiler vendors respond by adding support for fixed point. Which is /not/ quite an adequate substitute. When you're writing computational fluid dynamics code, you would much prefer not to have to try to figure out in advance what the dynamic range will be; that's why floating point was invented in the first place! But sometimes 'cheap commodity off-the-shelf' beats 'ideally designed for my purposes'.
<
Unfortunately, RISC guys could not get the cost structure of workstations down to the cost structure of
PCs. Once x86 became pipelined, then SuperScalar, the cubic dollars available to fund design teams
were easily able to outrun the design teams of smaller companies (RISC guys) and by Pentium Pro
their (RISC) days had been numbered, counted, and scheduled.

John Levine

Dec 3, 2022, 11:59:58 PM
According to Russell Wallace <russell...@gmail.com>:
>But the phrase 'its intended users' in the above paragraph is doing a lot of work. The remarkable thing about the
>history of numerical computation in our world is not that binary floating point exists in its current form, or that
>it is used by scientists, but that it is also used by everyone else! It's not ideal for accountants, who would have
>preferred decimal floating point, where the rounding errors match what they would get on paper. It's not ideal for
>gamers, who would have preferred not to waste electricity on denormals, switchable rounding modes and signaling
>NaNs. It is for other reasons, not ideal for machine learning (though that is getting specialized hardware
>nowadays). But everyone ended up using the system designed for scientists. Why?

Because binary floating point gets the job done. The first hardware
floating point on the IBM 704 in 1954 looks a lot like floating point
today. There was a sign bit, an 8 bit exponent stored in excess 128,
and a usually normalized 27 bit fraction. On the 704 they knew that
all normalized floats had a 1 in the high bit (the manual says so)
but it took two decades to notice that you could avoid storing it and
gain an extra bit of precision.
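
For concreteness, a small C sketch of decoding a word in that 704-style
layout (illustrative only; fields as described above, with the top
fraction bit actually stored):

  #include <math.h>
  #include <stdint.h>

  /* 36-bit word: 1 sign bit, 8-bit exponent in excess-128, 27-bit
     fraction, normalized into [1/2, 1) with its top bit stored. */
  static double decode_704(uint64_t word36)
  {
      int      sign = (int)((word36 >> 35) & 1);
      int      exp  = (int)((word36 >> 27) & 0xFF) - 128;  /* excess-128 */
      uint32_t frac = (uint32_t)(word36 & 0x7FFFFFFu);     /* 27 bits    */
      double   mag  = ldexp((double)frac, exp - 27);       /* frac * 2^(exp-27) */
      return sign ? -mag : mag;
  }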

A few decimal machines in the 1950s had decimal FP, and the IBM 360
had famously botched hex FP, but other than that, it was all binary
all the way.

I believe that some early IEEE FP implementations trapped when the
result would be denormal and did it in software, so I don't think it
was ever a performance issue for normal results. Denormals definitely
help to avoid surprising precision errors.

>Spreadsheets. VisiCalc had used decimal floating point, ...
> But for whatever reason, Lotus decided to use binary for 1-2-3, and that meant your spreadsheets would
>recalculate faster if you bought an 8087.

Visicalc used sort of a virtual machine to make it easy to port to all
of the different micros before the IBM PC, and I expect it was easier
to do decimal than to try and deal with all of the quirks of the
underlying machines' arithmetic. Lotus made a deliberate choice to
target only the IBM PC and take full advantage of all of its quirks,
such as using all of the keys on the keyboard. The people who started
Lotus knew the authors of Visicalc and I'm sure were aware of the
decimal vs binary issue.

Nobody actually wants decimal floating point; rather, they want scaled
decimal with predictable precision, which is why DFP has all of that
quantum stuff.

Back in the 80s I worked on Javelin, a modeling package that sort of
competed with 1-2-3. I wrote the financial functions for bond price
and yield calculations, which are defined to do decimal rounding. It
was a minor pain to implement using 8087 floating point but it wasn't
all that hard. So long as you are careful about rounding to avoid
nonsense like 10 * 0.1 = 0.9999999997, few people could tell what
the internal radix was, and binary is a lot faster to implement.
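
For illustration, a small C sketch of that kind of decimal rounding on
top of binary floating point (a sketch only, not the actual Javelin code):

  #include <math.h>
  #include <stdio.h>

  /* Round a binary double to 2 decimal places before display or
     comparison, so artifacts like 10 * 0.1 = 0.9999999997 never reach
     the user. (Illustrative; real financial code is more careful near
     ties and with negative values.) */
  static double round_cents(double x)
  {
      return floor(x * 100.0 + 0.5) / 100.0;
  }

  int main(void)
  {
      double sum = 0.0;
      for (int i = 0; i < 10; i++)
          sum += 0.1;
      printf("%.17g\n", sum);             /* shows the binary artifact  */
      printf("%.2f\n", round_cents(sum)); /* 1.00 after decimal rounding */
      return 0;
  }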

--
Regards,
John Levine, jo...@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly

robf...@gmail.com

Dec 4, 2022, 1:02:23 AM
My guess is probably fairly convergent. I think it would be hard to beat for performance and power. Not necessarily IEEE format, but any other binary format as well. What if trinary numbers had become common? I seem to recall a computer system built around trinary values instead of binary ones.

I am reminded of the dinosaurs that converged on the same forms in different periods of time. Even though the forms converge it does not mean that there are not other forms in existence.

Stephen Fuld

Dec 4, 2022, 1:09:55 AM
On 12/3/2022 8:59 PM, John Levine wrote:
> According to Russell Wallace <russell...@gmail.com>:
>> But the phrase 'its intended users' in the above paragraph is doing a lot of work. The remarkable thing about the
>> history of numerical computation in our world is not that binary floating point exists in its current form, or that
>> it is used by scientists, but that it is also used by everyone else! It's not ideal for accountants, who would have
>> preferred decimal floating point, where the rounding errors match what they would get on paper. It's not ideal for
>> gamers, who would have preferred not to waste electricity on denormals, switchable rounding modes and signaling
>> NaNs. It is for other reasons, not ideal for machine learning (though that is getting specialized hardware
>> nowadays). But everyone ended up using the system designed for scientists. Why?
>
> Because binary floating point gets the job done. The first hardware
> floating point on the IBM 704 in 1954 looks a lot like floating point
> today. There was a sign bit, an 8 bit exponent stored in excess 128,
> and a usually normalized 27 bit fraction. On the 704 they knew that
> all normalized floats had a 1 in the high bit (the manual says so)
> but it took two decades to notice that you could not store it and get
> an extra precision bit.
>
> A few decimal machines in the 1950s had decimal FP, and the IBM 360
> had famously botched hex FP, but other than that, it was all binary
> all the way.

A minor quibble, which you actually address below. As you said there,
contrary to what the OP said, accountants didn't want DFP. They wanted
scaled decimal fixed point with the compiler taking care of scaling
automatically.

In the 1970s, I worked at the United States Social Security
Administration, which, from a business/accounting point of view, looks a
lot like a large insurance company. All the main programs that "ran the
business" used COMP-3, packed decimal for all the calculations. I
believe that many/most large companies did similar. So it wasn't
"binary all the way", but, at least for business applications, it was
primarily scaled decimal.
- Stephen Fuld
(e-mail address disguised to prevent spam)

BGB

Dec 4, 2022, 4:20:51 AM
In a general sense, I suspect floating-point was likely inevitable.

As for what parts could have varied:
Location of the sign and/or exponent;
Sign/Magnitude and/or Twos Complement;
Handling of values near zero;
Number of exponent bits for larger types;
...

For example, one could have used Twos Complement allowing a signed
integer compare to be used unmodified for floating-point compare.
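
For contrast, IEEE-style sign/magnitude bit patterns need a fix-up before
integer compare hardware orders them correctly; a sketch of the usual
mapping (NaN and signed-zero corner cases ignored):

  #include <stdint.h>
  #include <string.h>

  /* Map a Binary32 bit pattern to an unsigned integer whose ordering
     matches the float ordering: negatives get all bits flipped,
     positives get the sign bit set. A two's-complement float format
     would not need this step. */
  static uint32_t orderable_bits(float f)
  {
      uint32_t u;
      memcpy(&u, &f, sizeof u);
      return (u & 0x80000000u) ? ~u : (u | 0x80000000u);
  }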

For many purposes, subnormal numbers do not matter.
* DAZ / FTZ is "usually good enough";
* Could have maybe defined that "zero does not exist per-se".
** Zero would not be strictly zero, merely the smallest possible value.
** OTOH: ((0+0)!=0) would be "kinda stupid".
** IMHO: Zero does at least mostly justify the cost of its existence.


The relative value of giving special semantics (in hardware) to the
Inf/NaN cases could be debated. Though, arguably, they do have meaning
as "something has gone wrong in the math" signals, which would not
necessarily have been preserved with clamping.

One could have maybe also folded the Inf/NaN cases into 0, say:
Exp=0, all bits zero, Zero
Exp=0, high bits of mantissa are not zero, Inf or NaN.
With the largest exponent as the upper end of the normal range.

On cheap hardware, calculations with these as inputs could simply
have them decay to zeroes.
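
As a sketch, one possible reading of that encoding (purely hypothetical,
not IEEE 754; for simplicity any nonzero mantissa with exponent 0 lands
in the special bucket here):

  #include <stdint.h>

  enum hyp_class { HYP_ZERO, HYP_SPECIAL, HYP_NORMAL };

  /* Exponent field 0 doubles as the special space: all-zero magnitude
     bits mean zero, anything else with exponent 0 is the Inf/NaN bucket;
     the largest exponent is just the top of the normal range. */
  static enum hyp_class classify_hyp32(uint32_t bits)
  {
      uint32_t exp  = (bits >> 23) & 0xFFu;
      uint32_t mant = bits & 0x7FFFFFu;
      if (exp != 0)  return HYP_NORMAL;
      if (mant == 0) return HYP_ZERO;
      return HYP_SPECIAL;
  }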



One could similarly make an argument that the idea of user-specified
rounding modes (or, for that matter, any rounding other than "truncate
towards zero") need not exist.


For many purposes, more than a small number of sub-ULP bits does not matter.
* One may observe that one gets "basically similar" results with 2 or 4
sub-ULP bits as one would with more.


The main exception is things like multiply-accumulate and catastrophic
cancellation, but this almost makes more sense as a special-case
operator (since "A*B+C" may generate different results with FMAC than as
FMUL+FADD).

One could make a case for having a hardware FMAC that always produces
the "double rounded result" (say, being cheaper for hardware and
consistent with the FMUL+FADD sequence).

In this case, the non-double-rounded version could be provided as a
function in "math.h" or similar.


But, I guess one could raise the issue of, say:
If floating point math had been defined as strictly DAZ+FTZ,
Truncate-Only, defining the existence of exactly 4 sub-ULP bits for
Double ops, ...

Would anyone have really noticed?... ( Apart from maybe it being
slightly easier to get bit-identical results between implementations,
since the bar would have been set a little lower. )

It is sort of like, despite typically having less precision, it is
easier to get bit-identical results with fixed-point calculations than
with floating point.


It would be sort of like, say, if people agreed to do math with PI
defined as 201/64 (3.140625) or maybe 3217/1024 (3.1416015625) under the
rationale that, while not exactly the "true to life" values, its
relative imprecision can also give it more robustness in the face of
intermediate calculations.

...


Anton Ertl

Dec 4, 2022, 5:20:58 AM
Russell Wallace <russell...@gmail.com> writes:
[Binary FP]
>It's not ideal for accountants, who would have preferred decimal floating
>point, where the rounding errors match what they would get on paper.

This statement is contradictory. What they do on paper is decimal
fixed point, so true decimal FP would produce different rounding
errors. That's why what was standardized as "decimal FP" actually is
not really a floating-point number representation: in the usual case
the point does not float. Still, the lack of hardware support
(outside IBM) and the lack of improved software libraries for decimal
FP speaks against this stuff being a success.

>Spreadsheets. VisiCalc had used decimal floating point, which is what
>you preferably want in a spreadsheet; not only does it do arithmetic in
>a way that better suits accountants, but at that tech level, it would be
>faster than binary, because the majority of numbers encountered in
>spreadsheets, are simple when expressed in decimal. It's faster to
>multiply arbitrary 64-bit numbers in binary, but spreadsheets don't
>typically work with arbitrary numbers, and at that tech level, it's
>faster to multiply 1.23 by 4.56 in decimal, because the multiply loop
>can exit early.

On hardware with a multiplier loop like the 386 and 486 the binary
multiplication time was data-dependent, i.e., it exits early on some
values. What should the supposed advantage of decimal be?

On hardware with a full-blown multiplier like the 88100 integer
multiply takes a few cycles independent of the data values. What
should the advantage of decimal be there?

>Point of departure: Lotus decides their spreadsheet should use decimal,
>like VisiCalc.

Some competitor uses binary FP, and most users decide that they like
speed more than decimal rounding. Even the few who would like
commercial rounding rules would have failed to specify the rounding
rules to the spreadsheet program; such specifications would just have
been too complex to make and verify; this would have been a job for a
programmer, not the majority of 1-2-3 customers.

OTOH, having decimal fixed point as optional feature might have
supported marketing even if the users would not have used it
successfully in practice. It works for IBM, after all. Still, 1-2-3
sold well without such a feature (right?); and if a competitor
introduced such a feature, it did not help them enough to beat 1-2-3
or force Lotus to add that feature to 1-2-3.

Looking at the Wikipedia page of 1-2-3 it is interesting to see what
features were added over time.

>On this timeline, far fewer people have a reason to buy an 8087. Intel
>keeps making them for the relatively small handful of customers, but
>floating-point coprocessors remain a niche product. C and Pascal
>compiler vendors notice this, and don't bother supporting the 8087,
>instead shipping a different binary floating point format that's
>optimized for software implementation. Noting that this is still rather
>slow, Borland takes the step of providing language-extension support
>for fixed point, and other compiler vendors follow suit.

No way. Fixed point lost to floating point everywhere where the CPU
was big enough to support floating-point hardware; this started in the
1950s with the IBM 704, repeated itself on the minis, and repeated
itself again on the micros. The programming ease of floating-point
won out over fixed point every time. Even without hardware
floating-point was very popular (e.g., in MS Basic on all the micros).

>Scientists find themselves under pressure to find a way to get by with
>cheap commodity hardware,

Actually the fact that floating point won in scientific computing,
where the software crisis either has not happened or was much less
forceful, and it happened more than a decade before anyone even coined
the term "software crisis", shows how important the programming
advantage of floating-point is. Customers of scientific computers are
usually happy to save money on the hardware at the cost of additional
software expense (because they still save more on the hardware than
they have to pay for the additional programming effort; i.e., no
software crisis), but not in the case of floating-point vs. fixed
point.

And if floating point wins in scientific computing, it is all the more
important in areas where the software crisis is relevant.

>and Fortran compiler vendors respond by adding support for fixed point.

Ada has support for fixed point (including IIRC decimal fixed point).
Yet commercial users failed to flock to Ada. It seems that the desire
of commercial users for implementing their rounding rule is very
limited.

>Which is /not/ quite an adequate substitute. When you're writing
>computational fluid dynamics code, you would much prefer not to have to
>try to figure out in advance what the dynamic range will be; that's why
>floating point was invented in the first place! But sometimes 'cheap
>commodity off-the-shelf' beats 'ideally designed for my purposes'.

In this case, floating point beat fixed point, every time, as soon as
hardware was big enough to support hardware floating-point.

- anton
--
'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
Mitch Alsup, <c17fcd89-f024-40e7...@googlegroups.com>

Thomas Koenig

Dec 4, 2022, 7:42:49 AM
John Levine <jo...@taugh.com> schrieb:

> A few decimal machines in the 1950s had decimal FP, and the IBM 360
> had famously botched hex FP, but other than that, it was all binary
> all the way.

I believe Data General actually used IBM's floating point format.
Whether it was to save a few gates, or because they thought that
the market leader in computers could not possibly be wrong,
I don't know.

Russell Wallace

Dec 4, 2022, 7:57:03 AM
On Sunday, December 4, 2022 at 10:20:58 AM UTC, Anton Ertl wrote:
> This statement is contradictory. What they do on paper is decimal
> fixed point, so true decimal FP would produce different rounding
> errors. That's why what was standardized as "decimal FP" actually is
> not really a floating-point number representation: in the usual case
> the point does not float.

Right, sort of. A fixed program, e.g. a payroll program in COBOL, typically uses actual fixed point, which is most efficiently represented as scaled binary integers, i.e. just represent the number of cents as a plain integer. What I'm referring to here as decimal floating point is what you would normally do in a spreadsheet, where the point floats only when necessary. I'm agnostic about whether 'floating point' is the best term for that; it is not the same thing as fixed point. (Unlike the payroll program, VisiCalc can happily take very small numbers without reprogramming.)
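
A trivial sketch of the scaled-binary-integer style for the payroll case
(cents in a plain integer; the values are just illustrative):

  #include <inttypes.h>
  #include <stdio.h>

  int main(void)
  {
      int64_t price_cents = 1999;             /* 19.99 stored as 1999 cents */
      int64_t total_cents = 3 * price_cents;  /* exact: 5997                */
      printf("%" PRId64 ".%02" PRId64 "\n",
             total_cents / 100, total_cents % 100);  /* prints 59.97 */
      return 0;
  }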

> On hardware with a multiplier loop like the 386 and 486 the binary
> multiplication time was data-dependent, i.e., it exits early on some
> values. What should the supposed advantage of decimal be?

Two advantages:

Values that allow early exit in decimal occur much more often in spreadsheets than values that allow early exit in binary; see my example of 1.23 x 4.56.

It's faster to update the display when the values are in decimal.

> On hardware with a full-blown multiplier like the 88100 integer
> multiply takes a few cycles independent of the data values. What
> should the advantage of decimal be there?

Now the faster-calculation advantage of decimal is gone, though by this stage both are probably fast enough. Decimal still has better rounding and faster display update.

> Even the few who would like
> commercial rounding rules would have failed to specify the rounding
> rules to the spreadsheet program; such specifications would just have
> been too complex to make and verify; this would have been a job for a
> programmer, not the majority of 1-2-3 customers.

Is it hard to specify rounding rules to a spreadsheet? It seems to me that in most cases, it could be done by selecting from a shortlist of options.

> Actually the fact that floating point won in scientific computing,
> where the software crisis either has not happened or was much less
> forceful, and it happened more than a decade before anyone even coined
> the term "software crisis", shows how important the programming
> advantage of floating-point is. Customers of scientific computers are
> usually happy to save money on the hardware at the cost of additional
> software expense (because they still save more on the hardware than
> they have to pay for the additional programming effort; i.e., no
> software crisis), but not in the case of floating-point vs. fixed
> point.
>
> And if floating point wins in scientific computing, it is all the more
> important in areas where the software crisis is relevant.

Yeah, this is a good point.

Quadibloc

Dec 4, 2022, 8:12:33 AM
On Saturday, December 3, 2022 at 8:01:46 PM UTC-7, Russell Wallace wrote:

> Point of departure: Lotus decides their spreadsheet should use decimal, like VisiCalc.
> Autodesk notices this, decides the 8087 won't be a big hit, and puts in the programming
> effort needed to make AutoCAD work with fixed point instead.

If those things happened, indeed we might never have seen the chips by Weitek and
others which were intended as faster alternatives to the 8087. Those were often
directly pitched at Autodesk users.

But because binary floating-point was nearly universal in mainframes, ultimately
floating point would still have made it into microprocessors. Maybe by the time of the
Pentium II instead of the 486, but it is too obvious a feature to be forever abandoned
for fixed point simply because of a few key applications.

John Savard

John Dallman

Dec 4, 2022, 9:33:38 AM
In article <tmi4k6$chju$2...@newsreader4.netcologne.de>,
Quite a few architectures of the 1960s and 1970s used IBM hex FP.
<https://en.wikipedia.org/wiki/IBM_hexadecimal_floating-point>

The early ones may have just assumed it was good, but the later ones seem
to have regarded it as a necessary form of compatibility. There were
considerable markets for minicomputers to pre-process data for feeding to
IBM mainframes, and the ICL mainframes were designed with ambitions to
replace IBM ones, at least in the UK.

There are still a few standards that require use of hex FP data, such as
the US FDA's submission process for new drugs. IBM mainframes didn't have
binary floating-point until 1998.

John

Scott Lurndal

Dec 4, 2022, 9:48:25 AM
John Levine <jo...@taugh.com> writes:

>A few decimal machines in the 1950s had decimal FP, and the IBM 360
>had famously botched hex FP, but other than that, it was all binary
>all the way.

And at least one line of decimal machines with decimal FP was
still active in 2010 (Unisys, née Burroughs), although I will
admit that very few customers actually used it, as it wasn't
particularly interesting to the banking, insurance and other
bookkeeping applications.

Anton Ertl

Dec 4, 2022, 1:46:30 PM
Russell Wallace <russell...@gmail.com> writes:
>On Sunday, December 4, 2022 at 10:20:58 AM UTC, Anton Ertl wrote:
>> On hardware with a multiplier loop like the 386 and 486 the binary
>> multiplication time was data-dependent, i.e., it exits early on some
>> values. What should the supposed advantage of decimal be?
>
>Two advantages:
>
>Values that allow early exit in decimal, occur much more often in
>spreadsheets than values that allow early exit in binary; see my example
>of 1.23 x 4.56.

If these are represented as the binary integers 123 (7 bits) and 456
(9 bits), it seems to me that the multiplier can early-out at least as
early as if they are represented as BCD numbers (both 12 bits).
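
A sketch of the kind of early-out such a multiplier loop gets
(shift-and-add, illustrative of the idea; the 386/486 microcode loop
differs in detail):

  #include <stdint.h>

  /* Shift-and-add multiply with early exit: small operands such as 123
     and 456 leave the loop after only a handful of iterations. */
  static uint64_t mul_early_out(uint64_t a, uint64_t b)
  {
      uint64_t acc = 0;
      while (b != 0) {        /* exits as soon as the multiplier is exhausted */
          if (b & 1)
              acc += a;
          a <<= 1;
          b >>= 1;
      }
      return acc;
  }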

>It's faster to update the display when the values are in decimal.

Display update is so slow that the conversion cost is minor change.

>> Even the few who would like
>> commercial rounding rules would have failed to specify the rounding
>> rules to the spreadsheet program; such specifications would just have
>> been too complex to make and verify; this would have been a job for a
>> programmer, not the majority of 1-2-3 customers.
>
>Is it hard to specify rounding rules to a spreadsheet? It seems to me
>that in most cases, it could be done by selecting from a shortlist of
>options.

I found it hard to specify the output format, but then I rarely use
spreadsheets. Have you seen a spreadsheet that lets you specify
rounding rules? We have enough CPU power, we have decimal FP
libraries, but does Excel or LibreOffice Calc etc. make any use of
them? Not to my knowledge.

The usual idea of spreadsheet usage seems to be that you type in some
numbers, you type in some formulas (already pretty advanced stuff for
the user who is not used to programming), and the spreadsheet does the
rest. You change some numbers, and look what you get. Now,
formatting: the user needs to abstract from one concrete result to a
general result, that's really getting into the deep. Rounding rules
are apparently too far out to even hide in a corner of the present-day
spreadsheets.

John Levine

Dec 4, 2022, 2:11:58 PM
According to BGB <cr8...@gmail.com>:
>In a general sense, I suspect floating-point was likely inevitable.
>
>As for what parts could have varied:
> Location of the sign and/or exponent;
> Sign/Magnitude and/or Twos Complement;
> Handling of values near zero;
> Number of exponent bits for larger types;

I think I've seen variations in all of those.

>For example, one could have used Twos Complement allowing a signed
>integer compare to be used unmodified for floating-point compare.

Yup, the PDP-6/10 did that.

>For many purposes, subnormal numbers do not matter.

I believe that the argument is that if you don't know a lot about
numerical analysis, your intuitions about when they don't matter
are likely to be wrong. Denormals make it more likely that
naive code will get an accurate answer.
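
A small example of the property gradual underflow preserves for such
code; with flush-to-zero, the subtraction below would give 0 even though
x != y:

  #include <float.h>
  #include <stdio.h>

  int main(void)
  {
      double x = 1.5 * DBL_MIN;
      double y = DBL_MIN;
      double d = x - y;               /* 0.5 * DBL_MIN: subnormal, not zero */
      printf("%g %d\n", d, d != 0.0); /* nonzero only because denormals exist */
      return 0;
  }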

>The relative value of giving special semantics (in hardware) to the
>Inf/NaN cases could be debated. Though, arguably, they do have meaning
>as "something has gone wrong in the math" signals, which would not
>necessarily have been preserved with clamping.

Same point, they tell the naive programmer that the code didn't
work. As some wag pointed out a long time ago, if you don't care
if the results are right, I can make the program as fast as you want.

John Levine

Dec 4, 2022, 2:36:57 PM
According to Stephen Fuld <sf...@alumni.cmu.edu.invalid>:
>> A few decimal machines in the 1950s had decimal FP, and the IBM 360
>> had famously botched hex FP, but other than that, it was all binary
>> all the way.
>
>A minor quibble, which you actually address below. As you said there,
>contrary to what the OP said, accountants didn't want DFP. They wanted
>scaled decimal fixed point with the compiler taking care of scaling
>automatically.

Sorry I wasn't clear. I meant that floating point was all binary. The
IBM 1620 was an odd little decimal scientific computer from the late
1950s much beloved by its users. It had a floating point option which
was decimal because the whole machine was. Someone else mentioned a
Unisys legacy architecture with decimal FP that survived into the
2000s, I expect emulated on mass market chips. But they were dead
ends. IBM's replacement for the 1620 was the 16 bit binary 1130
with FP in software.

It's quite striking how similar IEEE floating point is to the 704's FP
from the 1950s. Once you have the idea, you converge pretty fast.

By the way, when I said it took two decades to invent the hidden bit,
I was wrong. Wikipedia says that the electromechanical Zuse Z1 in 1938
had floating point with a hidden bit. IBM licensed Zuse's patents
after the war so they were aware of his work.

Russell Wallace

Dec 4, 2022, 2:45:01 PM
On Sunday, December 4, 2022 at 6:46:30 PM UTC, Anton Ertl wrote:
> If these are represented as the binary integers 123 (7 bits) and 456
> (9 bits), it seems to me that the multiplier can early-out at least as
> early as if they are represented as BCD numbers (both 12 bits).

Right, scaled integers are the other good way to represent decimal numbers. (But this is a very different solution from binary floating point.)

> Display update is so slow that the conversion cost is minor change.

You're probably assuming a bitmap display with overlapping windows and scalable fonts? Once you're at that tech level, the arithmetic cost (in whichever format) is also minor change. But Lotus 1-2-3 ran on a character cell display, where updating each character was just a store to a single memory location.

> I found it hard to specify the output format, but then I rarely use
> spreadsheets. Have you seen a spreadsheet that lets you specify
> rounding rules? We have enough CPU power, we have decimal FP
> libraries, but does Excel or LibreOffice Calc etc. make any use of
> them? Not to my knowledge.

Right. I conjecture that's because when Lotus chose binary floating point, it diverted spreadsheets down a path away from where you could usefully specify rounding rules. And that in an alternate timeline where spreadsheets still used decimal (whether with BCD or scaled integers) letting you specify rounding rules would be a natural thing to do.

MitchAlsup

Dec 4, 2022, 3:00:12 PM
On Sunday, December 4, 2022 at 1:11:58 PM UTC-6, John Levine wrote:
> According to BGB <cr8...@gmail.com>:
> >In a general sense, I suspect floating-point was likely inevitable.
> >
> >As for what parts could have varied:
> > Location of the sign and/or exponent;
> > Sign/Magnitude and/or Twos Complement;
> > Handling of values near zero;
> > Number of exponent bits for larger types;
> I think I've seen variations in all of those.
> >For example, one could have used Twos Complement allowing a signed
> >integer compare to be used unmodified for floating-point compare.
> Yup, the PDP-6/10 did that.
> >For many purposes, subnormal numbers do not matter.
> I believe that the argument is that if you don't know a lot about
> numerical analysis, your intuitions about when they don't matter
> are likely to be wrong. Denormals make it more likely that
> naive code will get an accurate answer.
<
Can we restate that to:: ...naïve code will get less inaccurate answers.
{denormals have already lost precision, just not a whole fraction's worth.}

Scott Lurndal

Dec 4, 2022, 3:42:15 PM
John Levine <jo...@taugh.com> writes:
>According to Stephen Fuld <sf...@alumni.cmu.edu.invalid>:
>>> A few decimal machines in the 1950s had decimal FP, and the IBM 360
>>> had famously botched hex FP, but other than that, it was all binary
>>> all the way.
>>
>>A minor quibble, which you actually address below. As you said there,
>>contrary to what the OP said, accountants didn't want DFP. They wanted
>>scaled decimal fixed point with the compiler taking care of scaling
>>automatically.
>
>Sorry I wasn't clear. I meant that floating point was all binary. The
>IBM 1620 was an odd little decimal scientific computer from the late
>1950s much beloved by its users. It had a floating point option which
>was decimal because the whole machine was. Someone else mentioned a
>Unisys legacy architecture with decimal FP that survived into the
>2000s, I expect emulated on mass market chips.

No, the system in question was twenty-five years old at the
time of replacement. They (a city in southern California)
replaced it with 26 Windows servers.

The other legacy stack-architecture Burroughs mainframe line
is still sold, as you note, in emulation on 64-bit Intel/AMD processors.

Peter Lund

Dec 4, 2022, 4:38:14 PM
On Sunday, December 4, 2022 at 11:20:58 AM UTC+1, Anton Ertl wrote:

> No way. Fixed point lost to floating point everywhere where the CPU
> was big enough to support floating-point hardware; this started in the
> 1950s with the IBM 704, repeated itself on the minis, and repeated
> itself again on the micros. The programming ease of floating-point
> won out over fixed point every time. Even without hardware
> floating-point was very popular (e.g., in MS Basic on all the micros).

Z3 had floating-point (22-bit binary add/sub/mul/div/sqrt). It also had +/- inf and undefined.

1941!

-Peter

MitchAlsup

Dec 4, 2022, 5:09:33 PM
On Sunday, December 4, 2022 at 4:20:58 AM UTC-6, Anton Ertl wrote:
> Russell Wallace <russell...@gmail.com> writes:
>
> No way. Fixed point lost to floating point everywhere where the CPU
> was big enough to support floating-point hardware; this started in the
> 1950s with the IBM 704, repeated itself on the minis, and repeated
> itself again on the micros. The programming ease of floating-point
> won out over fixed point every time. Even without hardware
> floating-point was very popular (e.g., in MS Basic on all the micros).
<
Accurate but overstated.
<
Floating point is an ideal representation for calculations where the exponent
varies at least as much as the fraction--that is scientific and numerical codes.
Almost nothing here is known to more than 10 digits of precision anyway, and
there is no closed form solution to many of the codes, either.
<
Floating point was never in any real competition with fixed point--which is the
ideal representation for money--and for calculations where one must not lose
precision (due to rounding).

BGB

Dec 4, 2022, 5:10:57 PM
On 12/4/2022 1:11 PM, John Levine wrote:
> According to BGB <cr8...@gmail.com>:
>> In a general sense, I suspect floating-point was likely inevitable.
>>
>> As for what parts could have varied:
>> Location of the sign and/or exponent;
>> Sign/Magnitude and/or Twos Complement;
>> Handling of values near zero;
>> Number of exponent bits for larger types;
>
> I think I've seen variations in all of those.
>
>> For example, one could have used Twos Complement allowing a signed
>> integer compare to be used unmodified for floating-point compare.
>
> Yup, the PDP-6/10 did that.
>

Yep.


Have noted that things can get "fun" for microfloat formats, since (for
8 bit formats) one is operating on the bare edge of what is "sufficient".


So, things like whether or not there is a sign bit, exact size of the
exponent and mantissa, bias values, etc, will tend to vary by a
significant amount (even within the same domain, noting for example,
that both Mu-Law and A-Law exist, etc).


>> For many purposes, subnormal numbers do not matter.
>
> I believe that the argument is that if you don't know a lot about
> numerical analysis, your intuitions about when they don't matter
> are likely to be wrong. Denormals make it more likely that
> naive code will get an accurate answer.
>

Possible, though apart from edge cases (such as dividing by very small
numbers, or multiplying by very large numbers), any effect denormals
would have had on the result is likely insignificant.

In the case where one divides by a "would have been" denormal, turning
the result into NaN or Inf (as-if it were 0) almost makes more sense.


For Binary32 or Binary64, the dynamic range is large enough that they
are unlikely to matter.


For Binary16, it does mean that the smallest possible normal-range value
is 0.00006103515625, which "could matter", but people presumably aren't
going to be using this for things where "precision actually matters".


For some of my neural-net training experiments, things did tend to start
running into the dynamic-range limits of Binary16, which I ended up
hacking around.

Ironically, adding a case to detect intermediate values going outside of
a reasonable range and then decaying any weights which fed into this
value, seemed to cause it to converge a little more quickly.


>> The relative value of giving special semantics (in hardware) to the
>> Inf/NaN cases could be debated. Though, arguably, they do have meaning
>> as "something has gone wrong in the math" signals, which would not
>> necessarily have been preserved with clamping.
>
> Same point, they tell the naive programmer that the code didn't
> work. As some wag pointed out a long time ago, if you don't care
> if the results are right, I can make the program as fast as you want.
>

Granted.

Keeping Inf and NaN semantics as diagnostic signals does at least make
sense. In my case, I have mostly kept Inf and NaN, as they seem able to
justify their existence.



There are also some limits though to how much can be gained by cheap
approximations.

For example, one could try to define a Binary32 reciprocal as:
recip=0x7F000000-divisor;
Or, FDIV as:
quot=0x3F800000+(dividend-divisor);

Does not take long to realize that, for most uses, this is not sufficient.

For things like pixel color calculations, this sorta thing may be
"mostly passable" though (or can be "made sufficient" with 1 to 3
Newton-Raphson stages).


For things like approximate bounding-sphere checks, it may be sufficient
to define square root as:
sqrta=0x1FC00000+(val>>1);

As can be noted, some of my neural-net stuff was also essentially using
these sort of definitions for the operators.
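
Spelled out in C, roughly (a sketch of the idea, not the actual hardware;
the magic constants are the ones given above):

  #include <stdint.h>
  #include <string.h>

  static float approx_recip(float x, int nr_steps)
  {
      uint32_t u;
      memcpy(&u, &x, sizeof u);
      u = 0x7F000000u - u;            /* recip = 0x7F000000 - divisor */
      float y;
      memcpy(&y, &u, sizeof y);
      while (nr_steps-- > 0)
          y = y * (2.0f - x * y);     /* each Newton-Raphson step sharpens it */
      return y;
  }

  static float approx_sqrt(float x)
  {
      uint32_t u;
      memcpy(&u, &x, sizeof u);
      u = 0x1FC00000u + (u >> 1);     /* sqrta = 0x1FC00000 + (val >> 1) */
      float y;
      memcpy(&y, &u, sizeof y);
      return y;
  }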


For an integer equivalent, had also noted that there were ways to
approximate distances, say:
  dx=ax-bx; dy=ay-by;
  dist=sqrt(dx*dx+dy*dy);
Being approximated as, say:
  dx=abs(ax-bx); dy=abs(ay-by);
  if(dx>=dy)
    dist=dx+(dy>>1);
  else
    dist=dy+(dx>>1);

Had encountered this nifty trick in the ROTT engine, but had ended up
borrowing it for my BGBTech3 engine experiment.



Another recent experiment was getting the low-precision FPU in my case
up to (roughly) full Binary32 precision, however it still needs a little
more testing.

Lots of corner cutting in this case, as the low-precision FPU was
more built to be "cheap" than good (even vs the main FPU, which was
already DAZ+FTZ).


Say, main FPU:
FADD: 64-bit mantissa, 12-bit exponent, ~ 9 sub-ULP bits.
FMUL: 54-bit mantissa, 12-bit exponent;
6x DSP48 "triangle" multiplier,
Plus a few LUT-based mini-multipliers for the low-order bits.
Full precision would need 9 DSP48s (with some mini-multipliers),
or 16 DSP48s (no mini multipliers).
Both operators have a 6-cycle latency;
Main FPU supports traditional rounding modes.


Low Precision FPU (original):
FADD: 16-bit mantissa, 9-bit exponent (truncated by 7 bits).
FMUL: 16-bit mantissa, 9-bit exponent (truncated by 7 bits);
1x DSP48 multiplier.
These operators had a 3-cycle latency.

The 16 bit mantissa was used as it maps exactly to the DSP48 in this
case. But, for Binary32, would truncate the mantissa.

In this version, FADD/FSUB was using One's Complement math for the
mantissa, as for truncated Binary32 this was tending to be "closer to
correct" than the result of using Two's Complement.



Low Precision FPU (with FP32 extension):
FADD: 28-bit mantissa, 9-bit exponent, ~ 2 sub-ULP bits (1).
FMUL: 26-bit mantissa, 9-bit exponent;
1x DSP48 multiplier, plus two 9-bit LUT based multipliers (2).
These operators still have a 3-cycle latency;
Effectively hard-wired as Truncate rounding.


The added precision makes it sufficient to use in a few cases where the
original low-precision FPU was not sufficient. It can now entirely take
over for Binary32 SIMD ops.

It is also possible to route some Binary64 ops through it as well,
albeit with the same effective dynamic range and precision as Binary32
(but, 3 cycle rather than 6 cycle).


1: s.eeeeeeee.fffffffffffffffffffffff,
Maps mantissa as:
001fffffffffffffffffffffff00
With the high-bits allowing for both sign and carry to a larger
exponent. Sub-ULP bits allowing for carry into the ULP.

In this version, FADD/FSUB uses Two's Complement for the mantissa.


2: DSP48 does high order bits, with the two 9 bit multipliers producing
the rest. The results from the two 9-bit multipliers can be added to the
low-order bits from the DSP48.

Say, mantissa maps as:
01fffffffffffffffffffffff0
With the top 18 bits each fed into the DSP48, producing a 36 bit result.

The 9 bit multipliers deal with multiplying the high bits from one input
against the low bits from the other (the low-bits that were out of reach
of the DSP48), with these results being added together and then added to
the appropriate place within the 36-bit initial result.

In this case, the 9b multipliers were built from 3x3->6 multipliers, eg:
      0  1  2  3  4  5  6  7
  0  00 00 00 00 00 00 00 00
  1  00 01 02 03 04 05 06 07
  2  00 02 04 06 10 12 14 16
  3  00 03 06 11 14 17 22 25
  4  00 04 10 14 20 24 30 34
  5  00 05 12 17 24 31 36 43
  6  00 06 14 22 30 36 44 52
  7  00 07 16 25 34 43 52 61
Where values here are in Base-8.

In this case, the 3x3 multipliers used because these can fit more easily
into a LUT6 (so likely more efficient than either working on 2 or 4 bit
values in this case).

Needed to build the multipliers manually as otherwise Vivado seemed to
see the 9-bit multiplies and tried using DSP48s for these as well, but
in this case didn't want to burn 8 additional DSPs on the SIMD unit.


However, as noted since the "low*middle" and "low*low" results are
effectively not calculated, whatever they might have contributed to the
probability of "correctly rounded ULP" is effectively lost.


But this becomes less a question of numerical precision per se and more
one of the statistical probability of the results being different, or
of this probability having a visible or meaningful effect on the result.

Apart from a few edge cases, the potential contribution from these
low-order results in the final result effectively approaches zero.

At least, in the absence of hardware support for a Single-Rounded FMAC
operator.

...


Russell Wallace

Dec 4, 2022, 6:45:19 PM
On Sunday, December 4, 2022 at 6:46:30 PM UTC, Anton Ertl wrote:
> Have you seen a spreadsheet that lets you specify
> rounding rules? We have enough CPU power, we have decimal FP
> libraries, but does Excel or LibreOffice Calc etc. make any use of
> them? Not to my knowledge.

To expand a little on my last reply,

This change is definitely not going to happen today. Recently, scientists had to rename genes because Microsoft wouldn't fix the frigging CSV import in Excel. It's clearly a long way past the point where significantly changing the arithmetic model would have been on the cards.

That's one reason I'm casting this as alternate history: to understand the underlying dynamics, separate from the question of what is still open to change in the organizational politics of 2022.

Anton Ertl

Dec 5, 2022, 4:14:08 AM
MitchAlsup <Mitch...@aol.com> writes:
>Floating point is an ideal representation for calculations where the exponent
>varies at least as much as the fraction--that is scientific and numerical codes.
>Almost nothing here is known to more than 10 digits of precision anyway, and
>there is no closed form solution to many of the codes, either.
><
>Floating point was never in any real competition with fixed point--which is the
>ideal representation for money--and calculations where one must not lose
>precision (do to rounding).

Reality check:

1) Spreadsheets use floating point, and people use spreadsheets for
computing stuff about money. I expect that spreadsheet companies
considered adding fixed point, but their market research told them
that they would not gain a competitive advantage.

2) Likewise, from what I hear about a big Forth application that also
deals with money, they use floating point for dealing with money.
That's despite fixed-point ideology and, to a certain degree, support
being strong in Forth. The FP representation used is the 80-bit 8087
FP numbers; when the Forth system they used was upgraded to AMD64, the
Forth system switched to 64-bit SSE2 FP numbers, but the developers of
that application complained, because they wanted the 80-bit numbers.

3) Every month I get an invoice for EUR 38.24 that is made up of the
sum of EUR 37 and EUR 1.25. I expect that the company that sends out
the invoice sends out thousands of invoices with that error every
month, so they lose tens of euros every month. It's apparently not
important enough to invest money for fixing this. And I can
understand this. A proper fix would require switching to fixed point,
which would cost maybe EUR 100000 of development time. And some small
correction of the existing FP computation might be cheaper in
development, but might lead to thousands of justified complaints about
erring in the other direction. Given that 1.25 and 38.25 are
representable in floating point without rounding error, it may be that
they just subtract EUR 0.01 from every invoice to get rid of rounding
errors in the other direction, and reduce the number of complaints.

For you and me it seems obvious that one should use fixed point for
money, just like it seemed obvious to von Neumann that one should use
fixed point for physics computation. But actual programmers and users
dealing with money work with floating-point, just as actual programmers
doing physics computations work with floating point.

So, in reality, floating point used to be competition for fixed point,
and floating point has won.

Anton Ertl

Dec 5, 2022, 4:28:55 AM
Russell Wallace <russell...@gmail.com> writes:
>On Sunday, December 4, 2022 at 6:46:30 PM UTC, Anton Ertl wrote:
>> If these are represented as the binary integers 123 (7 bits) and 456
>> (9 bits), it seems to me that the multiplier can early-out at least as
>> early as if they are represented as BCD numbers (both 12 bits).
>
>Right, scaled integers are the other good way to represent decimal
>numbers. (But this is a very different solution from binary floating
>point.)
>
>> Display update is so slow that the conversion cost is minor change.
>
>You're probably assuming a bitmap display with overlapping windows and
>scalable fonts? Once you're at that tech level, the arithmetic cost (in
>whichever format) is also minor change. But Lotus 1-2-3 ran on a
>character cell display, where updating each character was just a store
>to a single memory location.

It was a "graphics" card memory location, and CPU access there tends
to be slower than main memory. Probably more importantly in the early
generations, there's an overhead in determining the memory location
that you have to write to, and if you have to write at all (i.e., if
the cell is on-screen of off-screen).

>> We have enough CPU power, we have decimal FP
>> libraries, but does Excel or LibreOffice Calc etc. make any use of
>> them? Not to my knowledge.
>
>Right. I conjecture that's because when Lotus chose binary floating
>point, it diverted spreadsheets down a path away from where you could
>usefully specify rounding rules.

If financial rounding rules were a real requirement for spreadsheet
users, one of the spreadsheet programs would have added fixed point or
decimal floating point by now, and a rounding rule feature like you
have in mind. The fact that this has not happened indicates that
binary floating-point is good enough.

Anton Ertl

unread,
Dec 5, 2022, 4:44:09 AM12/5/22
to
Russell Wallace <russell...@gmail.com> writes:
>On Sunday, December 4, 2022 at 6:46:30 PM UTC, Anton Ertl wrote:
>> Have you seen a spreadsheet that lets you specify
>> rounding rules? We have enough CPU power, we have decimal FP
>> libraries, but does Excel or LibreOffice Calc etc. make any use of
>> them? Not to my knowledge.
>
>To expand a little on my last reply,
>
>This change is definitely not going to happen today. Recently, scientists
>had to rename genes because Microsoft wouldn't fix the frigging CSV import
>in Excel.

If the users fail to switch to better software, the fix is obviously
not important enough for them, so why should it be important for
Microsoft?

>It's clearly a long way past the point where significantly changing
>the arithmetic model would have been on the cards.

Same thing here: If it was really important to users, a competitor
could make inroads by offering fixed point and a rounding-rule
specification feature. The prospect of gaining users would induce some
competitor to add the feature, and eventually Microsoft would add it,
too. But apparently all spreadsheet makers think that this feature is
not important for the users. And they are probably right.

Stephen Fuld

unread,
Dec 5, 2022, 4:48:13 AM12/5/22
to
On 12/5/2022 12:39 AM, Anton Ertl wrote:
> MitchAlsup <Mitch...@aol.com> writes:
>> Floating point is an ideal representation for calculations where the exponent
>> varies at least as much as the fraction--that is scientific and numerical codes.
>> Almost nothing here is known to more than 10 digits of precision anyway, and
>> there is no closed form solution to many of the codes, either.
>> <
>> Floating point was never in any real competition with fixed point--which is the
>> ideal representation for money--and calculations where one must not lose
>> precision (due to rounding).
>
> Reality check:
>
> 1) Spreadsheets use floating point, and people use spreadsheets for
> computing stuff about money. I expect that spreadsheet companies
> considered adding fixed point, but their market research told them
> that they would not gain a competitive advantage.
>
> 2) Likewise, from what I hear about a big Forth application that also
> deals with money, they use floating point for dealing with money.

Question for you. Does Forth's support for fixed point automatically
handle scaling, printing, etc. of non-integer fixed-point numbers? When
people moved away from COBOL (which does all of that) to things like
C (which doesn't), they were confronted with a choice: either code all
that stuff on their own, or use binary floating point, which does handle
that stuff but has rounding issues, etc. I expect many customers just
didn't want the mess of handling all of that themselves and were willing
to accept the binary floating point issues. If the more popular
languages that replaced COBOL had good support for fixed decimal, I
suspect that might have won out over binary floating point (especially
if there was some hardware support). By the time decimal floating point
came out, it was too late, and most people just didn't want to spend the
money to convert their existing programs to it.

Anton Ertl

unread,
Dec 5, 2022, 8:36:46 AM12/5/22
to
Stephen Fuld <sf...@alumni.cmu.edu.invalid> writes:
>> 2) Likewise, from what I hear about a big Forth application that also
>> deals with money, they use floating point for dealing with money.
>
>Question for you. Does Forth support of fixed point automatically
>handle scaling, printing, etc. of non integer fixed point numbers?

Nothing is automatic in Forth. Out of the box you get support for
double-cell numbers (128 bits with 64-bit cells (aka machine words)),
integer formatting in arbitrary ways, and you get some scaling support
(useful for implementing fixed-point multiplication). If you really
want to run with fixed-point numbers, you then write your own words
(functions) for dealing with them.

>If the more popular
>languages that replaced COBOL had good support for fixed decimal, I
>suspect that might have won out over binary floating point (especially
>if there was some hardware support).

The main successor for COBOL seems to be Java. Searching for "Java
fixed point" leads me to <https://github.com/tools4j/decimal4j>. I
find having to write

Decimal2f interest = principal.multiplyBy(rate.multiplyExact(time));

cumbersome, but it might be just to the taste of COBOL programmers.

Anyway, if programmers really wanted fixed point for the jobs where
COBOL used to be used, Ada, C++ and Java would have allowed to make
convenient fixed-point libraries (or actual built-in support for fixed
point in the case of Ada). Apparently they did not care enough.
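
For completeness, Java's standard library also ships decimal arithmetic
with explicit rounding modes in java.math.BigDecimal, without any
third-party dependency. A hedged sketch of the same kind of interest
calculation (the numbers are invented, and this is not the decimal4j API
quoted above):

import java.math.BigDecimal;
import java.math.RoundingMode;

public class InterestSketch {
    public static void main(String[] args) {
        BigDecimal principal = new BigDecimal("1000.00");
        BigDecimal rate      = new BigDecimal("0.035");   // 3.5% per period
        BigDecimal time      = new BigDecimal("2");       // two periods

        // Decimal multiplication is exact; only the final result is
        // rounded to cents, here with banker's rounding.
        BigDecimal interest = principal.multiply(rate.multiply(time))
                                       .setScale(2, RoundingMode.HALF_EVEN);
        System.out.println(interest);                     // prints 70.00
    }
}

It is no less cumbersome than the decimal4j line, which rather supports
the point: the capability has been there all along; what is missing is
the convenience (and, apparently, the demand).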

Bill Findlay

unread,
Dec 5, 2022, 10:50:23 AM12/5/22
to
On 5 Dec 2022, Anton Ertl wrote
(in article<2022Dec...@mips.complang.tuwien.ac.at>):
>
> Anyway, if programmers really wanted fixed point for the jobs where
> COBOL used to be used, Ada, C++ and Java would have allowed to make
> convenient fixed-point libraries (or actual built-in support for fixed
> point in the case of Ada). Apparently they did not care enough.
You mean something like:

package Interfaces.COBOL

and

type Money is delta 0.01 digits 15;

?
--
Bill Findlay

MitchAlsup

unread,
Dec 5, 2022, 11:42:10 AM12/5/22
to
On Monday, December 5, 2022 at 3:14:08 AM UTC-6, Anton Ertl wrote:
> MitchAlsup <Mitch...@aol.com> writes:
> >Floating point is an ideal representation for calculations where the exponent
> >varies at least as much as the fraction--that is scientific and numerical codes.
> >Almost nothing here is known to more than 10 digits of precision anyway, and
> >there is no closed form solution to many of the codes, either.
> ><
> >Floating point was never in any real competition with fixed point--which is the
> >ideal representation for money--and calculations where one must not lose
> >precision (due to rounding).
> Reality check:
>
> 1) Spreadsheets use floating point, and people use spreadsheets for
> computing stuff about money. I expect that spreadsheet companies
> considered adding fixed point, but their market research told them
> that they would not gain a competitive advantage.
<
The people using spreadsheets to calculate monetary stuff do not
have the budget-size to put FP64 anomalies in any jeopardy.

Stephen Fuld

unread,
Dec 5, 2022, 12:18:43 PM12/5/22
to
On 12/5/2022 5:21 AM, Anton Ertl wrote:
> Stephen Fuld <sf...@alumni.cmu.edu.invalid> writes:
>>> 2) Likewise, from what I hear about a big Forth application that also
>>> deals with money, they use floating point for dealing with money.
>>
>> Question for you. Does Forth support of fixed point automatically
>> handle scaling, printing, etc. of non integer fixed point numbers?
>
> Nothing is automatic in Forth. Out of the box you get support for
> double-cell numbers (128 bits with 64-bit cells (aka machine words)),
> integer formatting in arbitrary ways, and you get some scaling support
> (useful for implementing fixed-point multiplication). If you really
> want to run with fixed-point numbers, you then write your own words
> (functions) for dealing with them.
>
>> If the more popular
>> languages that replaced COBOL had good support for fixed decimal, I
>> suspect that might have won out over binary floating point (especially
>> if there was some hardware support).
>
> The main successor for COBOL seems to be Java.

Now it is. But I think C was the direct successor, with Java coming on
later, even replacing C.



Searching for "Java
> fixed point" leads me to <https://github.com/tools4j/decimal4j>. I
> find having to write
>
> Decimal2f interest = principal.multiplyBy(rate.multiplyExact(time));
>
> cumbersome, but it might be just to the taste of COBOL programmers.

:-)

> Anyway, if programmers really wanted fixed point for the jobs where
> COBOL used to be used, Ada, C++ and Java would have allowed to make
> convenient fixed-point libraries (or actual built-in support for fixed
> point in the case of Ada). Apparently they did not care enough.

In the 1980s, when microprocessors were becoming popular, C was the
language of choice. The COBOL implementations on PCs were not great,
C++ was too new and different, and Java wasn't even conceived of. By
the time those choices became available, the die had already been cast.

John Levine

unread,
Dec 5, 2022, 12:20:15 PM12/5/22
to
According to MitchAlsup <Mitch...@aol.com>:
>> 1) Spreadsheets use floating point, and people use spreadsheets for
>> computing stuff about money. I expect that spreadsheet companies
>> considered adding fixed point, but their market research told them
>> that they would not gain a competitive advantage.
><
>The people using spreadsheets to calculate monetary stuff do not
>have the budget-size to put FP64 anomalies in any jeopardy.

Oh, wow, is that wrong. I can promise you that people on Wall
Street work out multi-billion dollar deals in Excel all the time.

There are a few financial formulas that have to be done in decimal to
get the right answer. So spreadsheets have functions to do that. See
the YIELDDISC() and PRICE() functions in Excel. Anything to do with
bonds and dates also has to deal with synthetic calendars that go back
to the era when this stuff was all done with desk calculators so they
pretended years had 360 days to make the arithmetic easier.

There are also a few functions that are numerically quite challenging.
The IRR (Internal Rate of Return) for N payments is the zero of an Nth
degree polynomial. Those don't have closed-form solutions for N
above 4, so we cheat by using the knowledge that any interesting
answer is likely to be in the [-1, +1] range and doing numerical
iterations until it converges or doesn't.
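
For concreteness, here is a minimal sketch of that kind of iteration
(illustrative Java; the class and method names are made up, and this is
not what any spreadsheet actually ships). It bisects on the bracket
just mentioned, assuming the net present value changes sign inside it:

public class IrrSketch {
    // Net present value of cash flows cf[0..n-1] at periodic rate r.
    static double npv(double[] cf, double r) {
        double v = 0.0;
        for (int t = 0; t < cf.length; t++) v += cf[t] / Math.pow(1.0 + r, t);
        return v;
    }

    // Bisection on (-0.99, 1.0); assumes npv() changes sign in that bracket.
    static double irr(double[] cf) {
        double lo = -0.99, hi = 1.0;
        for (int i = 0; i < 200; i++) {
            double mid = 0.5 * (lo + hi);
            if (npv(cf, lo) * npv(cf, mid) <= 0.0) hi = mid; else lo = mid;
        }
        return 0.5 * (lo + hi);
    }

    public static void main(String[] args) {
        double[] cf = { -1000, 300, 420, 680 };   // invest 1000, three payments back
        System.out.printf("IRR ~ %.4f%n", irr(cf));
    }
}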

Stephen Fuld

unread,
Dec 5, 2022, 12:32:22 PM12/5/22
to
On 12/5/2022 12:39 AM, Anton Ertl wrote:

snip


> Reality check:
>
> 1) Spreadsheets use floating point, and people use spreadsheets for
> computing stuff about money. I expect that spreadsheet companies
> considered adding fixed point, but their market research told them
> that they would not gain a competitive advantage.

For those who don't know, Javelin was a really nice financially
oriented, sort of spreadsheet interface program, running on PCs in the
1980s.

https://en.wikipedia.org/wiki/Javelin_Software

John Levine was one of its authors, so perhaps he could enlighten us as
to what it used internally, and why they made the choices that they did,
i.e. a true reality check.

Anton Ertl

unread,
Dec 5, 2022, 1:12:34 PM12/5/22
to
John Levine <jo...@taugh.com> writes:
>There are a few financial formulas that have to be done in decimal to
>get the right answer. So spreadsheets have functions to do that. See
>the YIELDDISC() and PRICE() functions in Excel.

Looking at

https://support.microsoft.com/en-us/office/yielddisc-function-a9dbdbae-7dae-46de-b995-615faffaaed7
https://support.microsoft.com/en-us/office/price-function-3ea9deac-8dfa-436f-a7c8-17ea02c21b0a

I don't see any mention of decimal arithmetic. And given the nature
of the formulas involved, I don't see any advantage to using decimal.
If I want more precision, I would just use higher-precision binary FP.

Scott Lurndal

unread,
Dec 5, 2022, 1:13:34 PM12/5/22
to
Stephen Fuld <sf...@alumni.cmu.edu.invalid> writes:
>On 12/5/2022 5:21 AM, Anton Ertl wrote:
>> Stephen Fuld <sf...@alumni.cmu.edu.invalid> writes:
>>>> 2) Likewise, from what I hear about a big Forth application that also
>>>> deals with money, they use floating point for dealing with money.
>>>
>>> Question for you. Does Forth support of fixed point automatically
>>> handle scaling, printing, etc. of non integer fixed point numbers?
>>
>> Nothing is automatic in Forth. Out of the box you get support for
>> double-cell numbers (128 bits with 64-bit cells (aka machine words)),
>> integer formatting in arbitrary ways, and you get some scaling support
>> (useful for implementing fixed-point multiplication). If you really
>> want to run with fixed-point numbers, you then write your own words
>> (functions) for dealing with them.
>>
>>> If the more popular
>>> languages that replaced COBOL had good support for fixed decimal, I
>>> suspect that might have won out over binary floating point (especially
>>> if there was some hardware support).
>>
>> The main successor for COBOL seems to be Java.
>
>Now it is. But I think C was the direct successor, with Java coming on
>later, even replacing C.

Significant amounts of COBOL code are still in production
on Z-series, Unisys ClearPath systems and commodity servers.

I doubt that any significant amount of COBOL code has
been ported to C, and as COBOL compilers are available for most
modern systems (windows, linux, et alia) it's likely that COBOL
shops continue to use COBOL. SAP, for instance, uses ABAP;
which seems similar to other business 4GLs like LINC et alia.

While Oracle is written in C, it is more of a data movement
engine than a calculation engine.

Anton Ertl

unread,
Dec 5, 2022, 1:15:12 PM12/5/22
to
Bill Findlay <findl...@blueyonder.co.uk> writes:
>On 5 Dec 2022, Anton Ertl wrote
>(in article<2022Dec...@mips.complang.tuwien.ac.at>):
>>
>> Anyway, if programmers really wanted fixed point for the jobs where
>> COBOL used to be used, Ada, C++ and Java would have allowed to make
>> convenient fixed-point libraries (or actual built-in support for fixed
>> point in the case of Ada). Apparently they did not care enough.
>You mean something like:
...
>type Money is delta 0.01 digits 15;

I have read about Ada in the 1980s, so my memory is dim, but this
rings a bell. "Interfaces.COBOL" doesn't.

Stephen Fuld

unread,
Dec 5, 2022, 1:45:28 PM12/5/22
to
On 12/5/2022 10:13 AM, Scott Lurndal wrote:
> Stephen Fuld <sf...@alumni.cmu.edu.invalid> writes:
>> On 12/5/2022 5:21 AM, Anton Ertl wrote:
>>> Stephen Fuld <sf...@alumni.cmu.edu.invalid> writes:
>>>>> 2) Likewise, from what I hear about a big Forth application that also
>>>>> deals with money, they use floating point for dealing with money.
>>>>
>>>> Question for you. Does Forth support of fixed point automatically
>>>> handle scaling, printing, etc. of non integer fixed point numbers?
>>>
>>> Nothing is automatic in Forth. Out of the box you get support for
>>> double-cell numbers (128 bits with 64-bit cells (aka machine words)),
>>> integer formatting in arbitrary ways, and you get some scaling support
>>> (useful for implementing fixed-point multiplication). If you really
>>> want to run with fixed-point numbers, you then write your own words
>>> (functions) for dealing with them.
>>>
>>>> If the more popular
>>>> languages that replaced COBOL had good support for fixed decimal, I
>>>> suspect that might have won out over binary floating point (especially
>>>> if there was some hardware support).
>>>
>>> The main successor for COBOL seems to be Java.
>>
>> Now it is. But I think C was the direct successor, with Java coming on
>> later, even replacing C.
>
> Signficant amounts of of production COBOL code is still in production
> on Z-series, Unisys Clearpath systems and commodity servers.

Of course. Lots of sites still use mainframes for various good reasons.
But there are very few "new name" mainframe customers. I was referring
to those companies that moved off of mainframes, and to those
applications that were built by (probably newer) companies who never
used mainframes.

> I doubt that any significant amount of COBOL code has
> been ported to C,

Probably true.

> and as COBOL compilers are available for most
> modern systems (windows, linux, et alia)

Now they are. But back in say the 1980s, when micros started their
ascendance, they either weren't or weren't very good. I know of at
least one accounting software system that was available on many
minicomputers and written in C.


> it's likely that COBOL
> shops continue to use COBOL.

I suspect it is a mix. For modifications to existing systems, and some
smaller new systems, I agree. But for new, large, from-scratch
systems, they probably use something else.


> SAP, for instance, uses ABAP;
> which seems similar to other business 4GLs like LINC et alia.
>
> While Oracle is written in C, it is more of a data movement
> engine than a calculation engine.

But the Oracle applications that do accounting like functions are
written in Java, though that could be related to their acquisition of Sun.

https://en.wikipedia.org/wiki/Oracle_Cloud_Enterprise_Resource_Planning

Bill Findlay

unread,
Dec 5, 2022, 2:46:49 PM12/5/22
to
On 5 Dec 2022, Anton Ertl wrote
(in article<2022Dec...@mips.complang.tuwien.ac.at>):

> Bill Findlay<findl...@blueyonder.co.uk> writes:
> > On 5 Dec 2022, Anton Ertl wrote
> > (in article<2022Dec...@mips.complang.tuwien.ac.at>):
> > >
> > > Anyway, if programmers really wanted fixed point for the jobs where
> > > COBOL used to be used, Ada, C++ and Java would have allowed to make
> > > convenient fixed-point libraries (or actual built-in support for fixed
> > > point in the case of Ada). Apparently they did not care enough.
> > You mean something like:
> ...
> > type Money is delta 0.01 digits 15;

> I have read about Ada in the 1980s, so my memory is dim, but this
> rings a bell. "Interfaces.COBOL" doesn't.
What you read about Ada nearly half a century ago is irrelevant now.

--
Bill Findlay

John Levine

unread,
Dec 5, 2022, 3:00:35 PM12/5/22
to
It appears that Anton Ertl <an...@mips.complang.tuwien.ac.at> said:
>John Levine <jo...@taugh.com> writes:
>>There are a few financial formulas that have to be done in decimal to
>>get the right answer. So spreadsheets have functions to do that. See
>>the YIELDDISC() and PRICE() functions in Excel.
>
>Looking at
>
>https://support.microsoft.com/en-us/office/yielddisc-function-a9dbdbae-7dae-46de-b995-615faffaaed7
>https://support.microsoft.com/en-us/office/price-function-3ea9deac-8dfa-436f-a7c8-17ea02c21b0a
>
>I don't see any mention of decimal arithmetic. And given the nature
>of the formulas involved, I don't see any advantage to using decimal.

The formulas are defined to do decimal rounding. It was nearly 40 years
ago that I had to implement them and the details are kind of vague, but
a little googlage finds this discussion of financial rounding.

https://math.libretexts.org/Bookshelves/Applied_Mathematics/Business_Math_(Olivier)/16%3A_Appendices/16.03%3A_Rounding_Rules

>If I want more precision, I would just use higher-precision binary FP.

Well, sure. It's not all that hard to do decimal rounding with binary FP
but you have to be careful to do it correctly and consider the edge cases.
It took me a couple of tries to get it right.
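
To illustrate the sort of edge case involved (a sketch in Java, not the
code I actually wrote back then): rounding 1.005 to two decimals by
scaling in binary FP gives the wrong answer, because the double nearest
to 1.005 is slightly below it, while going through the shortest decimal
string first gives the intended result.

import java.math.BigDecimal;
import java.math.RoundingMode;

public class DecimalRoundingSketch {
    // Naive: scale, round, rescale entirely in binary FP.
    static double naiveRound2(double x) {
        return Math.round(x * 100.0) / 100.0;
    }

    // Safer: recover the shortest decimal representation of the double,
    // then apply an explicit decimal rounding rule.
    static BigDecimal round2(double x) {
        return BigDecimal.valueOf(x).setScale(2, RoundingMode.HALF_UP);
    }

    public static void main(String[] args) {
        System.out.println(naiveRound2(1.005));   // 1.0  (1.005*100 is 100.49999999999999)
        System.out.println(round2(1.005));        // 1.01 (valueOf() sees the string "1.005")
    }
}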

John Levine

unread,
Dec 5, 2022, 3:03:27 PM12/5/22
to
According to Stephen Fuld <sf...@alumni.cmu.edu.invalid>:
>For those who don't know, Javelin was a really nice financially
>oriented, sort of spreadsheet interface program, running on PCs in the
>1980s.
>
>https://en.wikipedia.org/wiki/Javelin_Software
>
>John Levine was one of its authors, so perhaps he could enlighten us as
>to what it used internally, and why they made the choices that they did,
>i.e. a true reality check.

We used 8087 floating point, in hardware if available, otherwise
simulated. I don't recall that we ever considered anything else.
Some of the other programmers were familiar with the internals
of Visicalc.

As I've said in other messages, some financial functions need decimal
rounding, so I implemented decimal rounding using IEEE FP.

BGB

unread,
Dec 5, 2022, 3:22:45 PM12/5/22
to
On 12/5/2022 3:15 AM, Anton Ertl wrote:
> Russell Wallace <russell...@gmail.com> writes:
>> On Sunday, December 4, 2022 at 6:46:30 PM UTC, Anton Ertl wrote:
>>> If these are represented as the binary integers 123 (7 bits) and 456
>>> (9 bits), it seems to me that the multiplier can early-out at least as
>>> early as if they are represented as BCD numbers (both 12 bits).
>>
>> Right, scaled integers are the other good way to represent decimal
>> numbers. (But this is a very different solution from binary floating
>> point.)
>>
>>> Display update is so slow that the conversion cost is minor change.
>>
>> You're probably assuming a bitmap display with overlapping windows and
>> scalable fonts? Once you're at that tech level, the arithmetic cost (in
>> whichever format) is also minor change. But Lotus 1-2-3 ran on a
>> character cell display, where updating each character was just a store
>> to a single memory location.
>
> It was a "graphics" card memory location, and CPU access there tends
> to be slower than main memory. Probably more importantly in the early
> generations, there's an overhead in determining the memory location
> that you have to write to, and if you have to write at all (i.e., if
> the cell is on-screen of off-screen).
>

I think the (near universal?) approach is typically to draw things to an
off-screen buffer, and then copy this buffer over to the display's
framebuffer under some criteria.

Granted, the DOS versions of some of the games I had ported to my ISA
did seem to draw directly to the VGA memory (for 320x200, often making
use of the card having 128K of VRAM as a way of implementing
double-buffering).



In my case, I am mostly using a "draw to offscreen buffer and then
transform the buffer during copy".


Mostly, this is so programs can use RGB555 linear framebuffers
independent of what the hardware is using in that mode, eg:
256-bit blocks containing 4x4 RGB555 pixels;
256-bit blocks containing 4x 4x4x2 color cell;
256-bit blocks containing 4x 4x8x1 color cell;
...

Where, in the color-cell modes, the high 2 bits of the first 2 32-bit
sub-words are used to encode which block format is being used, say:
0.x: Text and Hardware-Sprite Cells (64b cells).
1.x: Low-Res Graphical Cells (4x4x1 and 4x4x2) (64b cells).
2.x: Mostly cell formats for color-cell mode with 128b cells.
3.x: Mostly cell formats for color-cell mode with 256b cells.
Where:
3.0: 4x4 16x RGB555
( Can function like a hi-color raster mode. )
3.1: 8x16 as 4x 4x8x1
3.2: 4x8 as 8x 2x2x2 (8x RGB555 + 8x Y-delta)
(Looks like a 4:2:0 Chroma Subsampled Block)
3.3: 8x8 as 4x 4x4x2 (Two RGB555 endpoints per 4x4 block)
(Comparable to using DXT1 to encode the display memory)

When copying to display memory, the contents of the framebuffer would be
transformed into the appropriate format for the display mode.

So, say, text-mode is 80x25 cells (640x200 72Hz nominal):
Can also function as a 320x200 graphics mode.
There is a graphical mode that is 80x50 cells (640x400 72Hz nominal):
Can function as 320x200 hi-color, 640x400 graphics.
There is a mode with 100x38 cells (800x600 36Hz, nominal, 1):
Can function as 400x152, 800x300, or 800x600.

For 800x600 mode, one would mostly use 8x16 cells or similar.

1: Untested on a real CRT monitor, seems to be below the minimum refresh
for multisync monitors, but seems to work on an older LCD monitor.


For trying to run in a GUI like mode (in 640x400 or 800x600), running
the color-cell encoder during screen refresh seems like a big
performance bottleneck (the encoder doesn't work all that quickly with
encoding a 512K or 1MB framebuffer).


I had heard somewhere that apparently things like the SuperFX chip
included a hardware color-cell encoder (for the SNES); may need to look
into something like this at some point.


It does seem potentially faster in some cases (such as video playback)
to skip the "decode to linear RGB555, re-encode as color cell" step and
instead transcode directly from the codec into a color-cell format
(likely using UTX2 as the go-between, which can then be transformed
more efficiently into the on-screen format).

A program could possibly also have faster refresh if it were to operate
more directly in terms of color-cells.


For 320x200 high-color mode, things are a little faster mostly as the
repacking is more efficient.

...


Part of the reason there are multiple cell sizes is that I was
originally intending the screen to be 80x25 with 64b cells (with 16K of
VRAM). I ended up later going both to bigger cell formats and to a
larger VRAM.

It is possible to display Doom or similar in 16K or 32K of VRAM in these
modes, though using 128K of VRAM and hi-color looks a lot better...

Granted, most mainline display hardware uses "actual" framebuffer graphics.

Though, I guess historically, there was the Amiga that apparently did
graphics via a user-configurable bit-plane scheme.


My scheme is a bit different from the C64/NES/SNES in that the
color-cell bits are typically held directly in the color-cell, rather
than via a character ROM or SRAM (though, a character SRAM does still
exist in my case, and may be used for text mode; or could be used in a
similar way to on a NES or SNES).

There was a "thus far unused" hardware sprite feature, which would
mostly consist of off-screen character cells that are drawn at
dynamically modifiable locations on screen (basically like a character
cell that also encodes an X/Y location and similar).

Well, along with an option to scroll the screen memory and/or set it to
be larger than the visible area (say, a sliding 40x25 window into an
80x50 space).

But, currently only supports a single layer. Something like multiple
independent layers and transparency between layers, would require a
redesign of the graphics device, and is unlikely to be useful short of
one wanting to try to use it like it were an SNES or something (and
programs that draw via copying over a framebuffer wouldn't really have
any use for any of this).

...


>>> We have enough CPU power, we have decimal FP
>>> libraries, but does Excel or LibreOffice Calc etc. make any use of
>>> them? Not to my knowledge.
>>
>> Right. I conjecture that's because when Lotus chose binary floating
>> point, it diverted spreadsheets down a path away from where you could
>> usefully specify rounding rules.
>
> If financial rounding rules were a real requirement for spreadsheet
> users, one of the spreadsheet programs would have added fixed point or
> decimal floating point by now, and a rounding rule feature like you
> have in mind. The fact that this has not happened indicates that
> binary floating-point is good enough.
>

There is also the issue that people keep using spreadsheets for stuff,
and spreadsheets like to try to interpret random bits of text as if they
were dates or similar (such as pretty much any text string where the
first 3 letters happen to match the 3 letter prefix of a month name, ...).

Apparently this is a significant issue for some fields, but they keep on
trying to stick their stuff in spreadsheets.

It would arguably be better in this case if it only matched a certain
set of specified date formats, say:
YYYY-MM-DD, MM/DD/YYYY, ...

MitchAlsup

unread,
Dec 5, 2022, 3:36:49 PM12/5/22
to
On Monday, December 5, 2022 at 2:22:45 PM UTC-6, BGB wrote:
> On 12/5/2022 3:15 AM, Anton Ertl wrote:
<merciful snip>
> > If financial rounding rules were a real requirement for spreadsheet
> > users, one of the spreadsheet programs would have added fixed point or
> > decimal floating point by now, and a rounding rule feature like you
> > have in mind. The fact that this has not happened indicates that
> > binary floating-point is good enough.
> >
> There is also the issue that people keep using spreadsheets for stuff,
> and spreadsheets like to try to interpret random bits of text as if they
> were dates or similar (such as pretty much any text string where the
> first 3 letters happen to match the 3 letter prefix of a month name, ...).
>
> Apparently this is a significant issue for some fields, but they keep on
> trying to stick their stuff in spreadsheets.
<
Just yesterday, it was announced that some genetic research is "being
hampered" because eXcel will not allow 'them' to use names of codon-
strings as entries (i.e., names) in the eXcel spreadsheet(s).
>
> Would arguably be better in this case if it would only match patterns
> that match a certain set of specified date formats, say:
> YYYY-MM-DD, MM/DD/YYYY, ...
<
I really hate it when eXcel converts "1/4" into Jan-4-2022 when I literally
wanted the string 1/4 {as in DIV4} in the cell. The <ahem> prescribed
way of doing this is to reformat the cell from general to text and then
being careful to put something textual in the cell before putting 1/4
in the cell.
<
I also hate it that eXcel uses a negative sign as if it were a =- assignment.

Stephen Fuld

unread,
Dec 5, 2022, 3:47:01 PM12/5/22
to
On 12/5/2022 12:36 PM, MitchAlsup wrote:

snip

> I really hate it when eXcel converts "1/4" into Jan-4-2022 when I literally
> wanted the string 1/4 {as in DIV4} in the cell. The <ahem> prescribed
> way of doing this it to reformat the cell from general to text and then
> being careful to put something textural in the cell before putting 1/4
> in the cell.

I just tried on Excel 2021 and if you put an equals sign before the
1/4, the cell says .25

so =1/4 will get you what you want. Note this is the same mechanism to
force a cell to be numeric in other contexts.

Terje Mathisen

unread,
Dec 5, 2022, 4:42:50 PM12/5/22
to
Anton Ertl wrote:
> Russell Wallace <russell...@gmail.com> writes:
>> On Sunday, December 4, 2022 at 6:46:30 PM UTC, Anton Ertl wrote:
>>> If these are represented as the binary integers 123 (7 bits) and 456
>>> (9 bits), it seems to me that the multiplier can early-out at least as
>>> early as if they are represented as BCD numbers (both 12 bits).
>>
>> Right, scaled integers are the other good way to represent decimal
>> numbers. (But this is a very different solution from binary floating
>> point.)
>>
>>> Display update is so slow that the conversion cost is minor change.
>>
>> You're probably assuming a bitmap display with overlapping windows and
>> scalable fonts? Once you're at that tech level, the arithmetic cost (in
>> whichever format) is also minor change. But Lotus 1-2-3 ran on a
>> character cell display, where updating each character was just a store
>> to a single memory location.
>
> It was a "graphics" card memory location, and CPU access there tends
> to be slower than main memory. Probably more importantly in the early
> generations, there's an overhead in determining the memory location
> that you have to write to, and if you have to write at all (i.e., if
> the cell is on-screen of off-screen).

I discovered this by measuring it around 1983/84, which is when I
switched my own programs to use a memory block as a shadow frame buffer,
only copying any updates to the real screen, and only when I had a few
spare milliseconds to do so. I.e. on a quickly scrolling screen I might
not update every single line.
>
>>> We have enough CPU power, we have decimal FP
>>> libraries, but does Excel or LibreOffice Calc etc. make any use of
>>> them? Not to my knowledge.
>>
>> Right. I conjecture that's because when Lotus chose binary floating
>> point, it diverted spreadsheets down a path away from where you could
>> usefully specify rounding rules.
>
> If financial rounding rules were a real requirement for spreadsheet
> users, one of the spreadsheet programs would have added fixed point or
> decimal floating point by now, and a rounding rule feature like you
> have in mind. The fact that this has not happened indicates that
> binary floating-point is good enough.

Right!

Terje

--
- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"

Terje Mathisen

unread,
Dec 5, 2022, 4:51:19 PM12/5/22
to
John Levine wrote:
> According to Stephen Fuld <sf...@alumni.cmu.edu.invalid>:
>> For those who don't know, Javelin was a really nice financially
>> oriented, sort of spreadsheet interface program, running on PCs in the
>> 1980s.
>>
>> https://en.wikipedia.org/wiki/Javelin_Software
>>
>> John Levine was one of its authors, so perhaps he could enlighten us as
>> to what it used internally, and why they made the choices that they did,
>> i.e. a true reality check.
>
> We used 8087 floating point, in hardware if available otherwise
> simulated. I don't recall that we ever consdered anything else.
> Some of the other programmers were familiar with the internals
> of Visicalc.
>
> As I've said in other messages, some financial functions need decimal
> rounding, so I implemented decimal rounding using IEEE FP.
>

I did the same for the Turbo Pascal code I wrote for my father-in-law's
gift card business: All double prec FP implementing Norwegian financial
rounding rules.

It only took a few tries to get it correct across all edge cases, mostly
adding a suitable value (like 0.5000001?) and then truncating to
integer, AFAIR. I had sufficient precision and low enough total amounts
that this would always work.
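
Roughly what that looks like as code, as a sketch only (the original was
Turbo Pascal; this is Java for illustration, and the 1e-9 fudge factor
is invented, not the actual constant I used):

public class OreRounding {
    // Scale to hundredths, add an offset, truncate. The extra 1e-9 nudges
    // values such as x.xx5, which binary FP stores just below the true
    // value, up over the halfway mark. Safe only while the amounts stay
    // small relative to the fudge factor.
    static double roundToHundredths(double amount) {
        return Math.floor(amount * 100.0 + 0.5 + 1e-9) / 100.0;
    }

    public static void main(String[] args) {
        System.out.println(roundToHundredths(1.005));   // 1.01
        System.out.println(roundToHundredths(1.004));   // 1.0
    }
}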

Anton Ertl

unread,
Dec 5, 2022, 5:31:27 PM12/5/22
to
Bill Findlay <findl...@blueyonder.co.uk> writes:
>What you read about Ada nearly half a century ago is irrelevant now.

Have newer versions removed fixed point?

Bill Findlay

unread,
Dec 5, 2022, 6:45:04 PM12/5/22
to
On 5 Dec 2022, Anton Ertl wrote
(in article<2022Dec...@mips.complang.tuwien.ac.at>):

> Bill Findlay<findl...@blueyonder.co.uk> writes:
> > What you read about Ada nearly half a century ago is irrelevant now.
>
> Have newer versions removed fixed point?

"newer" (i.e. more than quarter-century old) Ada added
decimal fixed point. Binary fixed point was always there.

--
Bill Findlay


MitchAlsup

unread,
Dec 5, 2022, 7:39:23 PM12/5/22
to
On Monday, December 5, 2022 at 2:47:01 PM UTC-6, Stephen Fuld wrote:
> On 12/5/2022 12:36 PM, MitchAlsup wrote:
>
> snip
> > I really hate it when eXcel converts "1/4" into Jan-4-2022 when I literally
> > wanted the string 1/4 {as in DIV4} in the cell. The <ahem> prescribed
> > way of doing this it to reformat the cell from general to text and then
> > being careful to put something textural in the cell before putting 1/4
> > in the cell.
> I just tried on Excel 2021 and if you put and equals sign before the
> 1/4, the cell says .25
<
BUT I DO NOT WANT 0.25;
I WANT THE 3 CHARACTERS "1/4" in the cell.
I DO NOT WANT THE VALUE OF 1 DIVIDED BY THE VALUE 4
I WANT THE 3 CHARACTERS "1/4" in the cell.
>
> so =1/4 will get you what you want. Note this is the same mechanism to
> force a cell to be numeric in other contexts.
<
See above.

Stephen Fuld

unread,
Dec 5, 2022, 9:05:18 PM12/5/22
to
On 12/5/2022 4:39 PM, MitchAlsup wrote:
> On Monday, December 5, 2022 at 2:47:01 PM UTC-6, Stephen Fuld wrote:
>> On 12/5/2022 12:36 PM, MitchAlsup wrote:
>>
>> snip
>>> I really hate it when eXcel converts "1/4" into Jan-4-2022 when I literally
>>> wanted the string 1/4 {as in DIV4} in the cell. The <ahem> prescribed
>>> way of doing this it to reformat the cell from general to text and then
>>> being careful to put something textural in the cell before putting 1/4
>>> in the cell.
>> I just tried on Excel 2021 and if you put and equals sign before the
>> 1/4, the cell says .25
> <
> BUT I DO NOT WANT 0.25;
> I WANT THE 3 CHARACTERS "1/4" in the cell.
> I DO NOT WANT THE VALUE OF 1 DIVIDED BY THE VALUE 4
> I WANT THE 3 CHARACTERS "1/4" in the cell.

Oh, sorry I misunderstood. It makes the spacing a little funny, but if
you put a space before the 1/4, I believe you will get that.

Stephen Fuld

unread,
Dec 6, 2022, 1:18:01 AM12/6/22
to
I wasn't happy with that answer due to the extra space. Another
solution is to enter .25 in the cell, then change the cell's format to
"Fraction". There are also functions that allow you to do this
entirely within the data entry in the cell, but this takes more
keystrokes, and probably destroys your ability to use the value in the
cell as a numeric value, should that be useful.

Benny Lyne Amorsen

unread,
Dec 6, 2022, 7:21:41 AM12/6/22
to
MitchAlsup <Mitch...@aol.com> writes:

> I really hate it when eXcel converts "1/4" into Jan-4-2022 when I literally
> wanted the string 1/4 {as in DIV4} in the cell. The <ahem> prescribed
> way of doing this it to reformat the cell from general to text and then
> being careful to put something textural in the cell before putting 1/4
> in the cell.

I was going to recommend LibreOffice Calc, because typing 1/4 there gets
you ¼ (Unicode 1/4 symbol).

For kicks, I then tried 1/3... And got March 1st 2022.

Spreadsheets are a cursed technology.


/Benny

Bernd Linsel

unread,
Dec 6, 2022, 11:43:03 AM12/6/22
to
In Excel, you can type a ' (apostrophe, U+0027) at the start of a cell that
is to be formatted as text (and not interpreted as a date or a fraction
or...).
The apostrophe will be hidden and the cell contents won't be altered in
any way.

E.g. type "3/4", and Excel makes March 4th of current year out of it;
type "'3/4" and Excel stores the literal text 3/4.

Unfortunately, this does not work when importing text or CSV files.

MitchAlsup

unread,
Dec 6, 2022, 12:14:21 PM12/6/22
to
At least you tried......!!
>
>
> /Benny

John Levine

unread,
Dec 6, 2022, 1:59:36 PM12/6/22
to
According to MitchAlsup <Mitch...@aol.com>:
>BUT I DO NOT WANT 0.25;
>I WANT THE 3 CHARACTERS "1/4" in the cell.
>I DO NOT WANT THE VALUE OF 1 DIVIDED BY THE VALUE 4
>I WANT THE 3 CHARACTERS "1/4" in the cell.

Oh, then type "1/4"

With the quotes.

Excel does some dumb things but it does know what quotes mean.

Tim Rentsch

unread,
Dec 15, 2022, 9:52:01 AM12/15/22
to
Terje Mathisen <terje.m...@tmsw.no> writes:

> Anton Ertl wrote:
>
>> Russell Wallace <russell...@gmail.com> writes:
>>
>>> On Sunday, December 4, 2022 at 6:46:30 PM UTC, Anton Ertl wrote:
>>>
>>>> If these are represented as the binary integers 123 (7 bits) and
>>>> 456 (9 bits), it seems to me that the multiplier can early-out at
>>>> least as early as if they are represented as BCD numbers (both 12
>>>> bits).
>>>
>>> Right, scaled integers are the other good way to represent
>>> decimal numbers. (But this is a very different solution from
>>> binary floating point.)
>>>
>>>> Display update is so slow that the conversion cost is minor change.
>>>
>>> You're probably assuming a bitmap display with overlapping
>>> windows and scalable fonts? Once you're at that tech level, the
>>> arithmetic cost (in whichever format) is also minor change. But
>>> Lotus 1-2-3 ran on a character cell display, where updating each
>>> character was just a store to a single memory location.
>>
>> It was a "graphics" card memory location, and CPU access there
>> tends to be slower than main memory. Probably more importantly
>> in the early generations, there's an overhead in determining the
>> memory location that you have to write to, and if you have to
>> write at all (i.e., if the cell is on-screen of off-screen).
>
> I discovered this by measuring it around 1983/84, which is when I
> switched my own programs to use a memory block as a shadow frame
> buffer, only copying any updates to the real screen, and only when
> I had a few spare milliseconds to do so. I.e. on a quickly
> scrolling screen I might not update every single line.

It's sad that we seem to have lost the advances in computer
graphics that were already known 50 years ago. The Alto, even
though it was only a 16-bit computer running at 6 MHz, could
easily do smooth scrolling over its entire display (606 x 808
pixels, IIRC). It did this by using a display list of regions
(horizontal strips of varying widths) rather than a simple frame
buffer. Because the display list was an actual linked list, it
was easy to prepare a new list and simply link it in at the
appropriate moment. Temporal fidelity was achieved by switching
in a new display list at a time when a vertical retrace was
occurring; no glitches, ever.

It would be nice if modern machines and operating systems could
provide a comparable high-performance graphics capability, with
the same level of temporal fidelity, as the Alto did almost 50
years ago. Alas, the scourge of frame buffers and rudimentarily
designed windowing systems has put us in a world where, to use a
phrase, glitches happen.

Terje Mathisen

unread,
Dec 15, 2022, 10:24:48 AM12/15/22
to
This was of course the HW way to do it back when keeping a megapixel
screen updated at 60 Hz was _hard_.

In the code I mentioned above (a terminal emulator in fact, which needed
to do all the windowed scrolling operations supported by
VT52/VT100/VT220/NORD/etc terminals) I did use a linked-list memory store
for the frame buffer, with separate dirty flags for each element. Back
in those days all frame buffer writes had to be synced to either the
horizontal (very short!) or the vertical retrace, but even the vertical
retrace didn't last long enough to update everything in one go, AFAIR, so
my full-screen update code would use both to get done ASAP.
>
> It would be nice if modern machines and operating systems could
> provide a comparable high-performance graphics capability, with
> the same level of temporal fidelity, as the Alto did almost 50
> years ago. Alas, the scourge of frame buffers and rudimentally
> designed windowing systems has put us in a world where, to use a
> phrase, glitches happen.
>
For most stuff it doesn't matter, and when it does matter, then we just
throw a full GPU at it. :-)
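
For what it's worth, the shadow-buffer idea itself fits in a few lines.
A stripped-down sketch (plain Java, with an ordinary array standing in
for the slow video memory; obviously not the original terminal-emulator
code):

public class ShadowBuffer {
    static final int ROWS = 25, COLS = 80;
    final char[] shadow = new char[ROWS * COLS];   // cheap main-memory copy
    final char[] screen = new char[ROWS * COLS];   // stands in for video memory

    // The application only ever writes here; this is fast.
    void put(int row, int col, char c) {
        shadow[row * COLS + col] = c;
    }

    // Called when there is spare time (e.g. during retrace): write out only
    // the cells that actually changed since the last flush.
    void flush() {
        for (int i = 0; i < shadow.length; i++) {
            if (screen[i] != shadow[i]) {
                screen[i] = shadow[i];             // the only "expensive" writes
            }
        }
    }
}

Per-element dirty flags, as described above, do the same job without the
comparison pass.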

BGB

unread,
Dec 15, 2022, 3:36:13 PM12/15/22
to
So, I am guessing no "blit" operation to try to copy a "DIB Bitmap
Object" (and/or XImage / XPutImage) to the window 30 or 60 times per
second or so?...


Well, a little worse in my case, since the "GUI mode" first requires
copying over the framebuffer pixels from the DIB Object, and then
running a color-cell encoder over said pixels (to convert to the display
memory format, mostly because there is not enough VRAM for a traditional
raster-oriented framebuffer).

Kinda hard to make this part work all that quickly at 640x400 on a 50
MHz CPU...


Some DOS-era software seemed to have two 8-bpp framebuffers, at A0000
and B0000 (or A000:0000 / B000:0000), drawing to these buffers, and then
twiddling some hardware bits to flip which buffer was being displayed.

Wouldn't work so well in my case, as even if the hardware were using an
8bpp linear framebuffer, the display interface doesn't allow byte-level
access to VRAM (the VRAM is accessed in terms of 64-bit units).


Well, apart from some software which had used the screen in modes where
it was split into 4 bit-planes (also sort of a thing that VGA did...).
Have to basically fully emulate this stuff in software (well, and also
don't have enough VRAM for 640x480 16-color either; but there is
640x400, and a 640x480/800x600 4-color mode, *).

*: Where one has multiple choices of "4 amazing colors" (on screen at
any given time):
Black/Cyan/Magenta/White
Black/Green/Red/Yellow
Black/Cyan/Red/White
Black/DarkGray/LightGray/White


A case could maybe be made for adding a "4 shades of olive" mode (say,
for a similar look to the original GameBoy or similar). Or maybe add a
mode register to allow 4 arbitrary RGB555 colors to be used in the
4-color palette.

Though, this could also be faked easily enough in the color-cell mode
(and 160x144 is a lot closer to 320x200 than to 640x480 or 800x600).


...

MitchAlsup

unread,
Dec 15, 2022, 3:53:02 PM12/15/22
to
Acccckkkkk::
<
I remember those days..........

BGB

unread,
Dec 15, 2022, 5:37:41 PM12/15/22
to
Yeah. It is a trick:

Try to do full-color graphics on screen, though there is only
enough memory for 2 bits per pixel and a pair of colors for every 4x4
pixel block...


Except in 640x480 or 800x600, where there is no longer enough memory for
the per-block colors (so, it is effectively limited to a 2bpp pseudo
bitmapped mode).


Well, unless one wants to do graphics in what is essentially a text mode
(using character glyph style color-cell).

Can theoretically go into what is essentially a 100x75 text mode (with
800x600 screen resolution), and then draw graphics more like on a
Commodore 64 (say, using a "Font RAM").

So, one has multiple choices here (say, the 4-color mode allows setting
every pixel independently, which is not actually possible in the other
mode; not quite bitmapped, and limited to 2 colors per 8x8 pixel block).


At 320x200, it can draw Doom in full quality, but doesn't have enough
video memory (nor the needed hardware support) for double-buffering the
display.

Mostly works for now...


Technically has more VRAM than the CGA, NES, or SNES.

But, less VRAM than the original VGA (hence its inability to do 640x480
16-color or similar). Roughly comparable to the EGA on this front.


Technically also an unusually slow refresh rate at 800x600, as the
(standard) 75Hz refresh would likely require effectively driving the
screen-refresh logic and similar internally at a higher clock frequency.

Though 36 Hz is below the rated minimum refresh rate for most multisync
monitors, the LCD monitors and similar that I have seem to
accept it (and it's not really worth it to try to get hold of a CRT
monitor to test with).

...
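
For readers who have not run into color-cell coding before, here is a
very rough sketch of the idea behind the 4x4 two-endpoint cells
described above (illustrative Java; the class name, the luma weights and
the exact bit layout are all made up, and the real hardware format
differs in detail):

public class ColorCellSketch {
    // Cheap brightness estimate for a 0xRRGGBB pixel.
    static int luma(int rgb) {
        return ((rgb >> 16 & 255) * 3 + (rgb >> 8 & 255) * 6 + (rgb & 255)) / 10;
    }

    // Keep the top 5 bits of each channel.
    static int toRgb555(int rgb) {
        return ((rgb >> 19 & 31) << 10) | ((rgb >> 11 & 31) << 5) | (rgb >> 3 & 31);
    }

    // Encode one 4x4 block (16 pixels as 0xRRGGBB ints) into two RGB555
    // endpoints plus a 32-bit field of 2-bit per-pixel indices.
    static int[] encodeBlock(int[] px) {
        int lo = 0, hi = 0;
        for (int i = 1; i < 16; i++) {            // darkest and brightest pixel
            if (luma(px[i]) < luma(px[lo])) lo = i;
            if (luma(px[i]) > luma(px[hi])) hi = i;
        }
        int ylo = luma(px[lo]), yhi = luma(px[hi]), indices = 0;
        for (int i = 0; i < 16; i++) {            // index = position between endpoints
            int idx = (yhi == ylo) ? 0 : (luma(px[i]) - ylo) * 3 / (yhi - ylo);
            indices |= idx << (2 * i);
        }
        return new int[] { toRgb555(px[lo]), toRgb555(px[hi]), indices };
    }
}

The decode side just interpolates the two endpoints into four colors and
picks one per pixel, which is why it is cheap enough to do during scanout.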

Stephen Fuld

unread,
Dec 16, 2022, 2:00:37 PM12/16/22
to
On 12/5/2022 5:21 AM, Anton Ertl wrote:
> Stephen Fuld <sf...@alumni.cmu.edu.invalid> writes:

snip

>> If the more popular
>> languages that replaced COBOL had good support for fixed decimal, I
>> suspect that might have won out over binary floating point (especially
>> if there was some hardware support).
>
> The main successor for COBOL seems to be Java.

That is true now, but by the time of Java's ascendancy, the die had
already been cast. The immediate successor to COBOL as the primary
language for business applications was C. IBM tried to push PL/1, but
wasn't successful for a variety of reasons. Later, C++ tried, and now,
as you say, Java is the winner. But once C became popular and it didn't
support decimal arithmetic directly, programmers just got used to that,
and, for the most part, accepted it, so future "business oriented"
languages didn't support it.



> Searching for "Java
> fixed point" leads me to <https://github.com/tools4j/decimal4j>. I
> find having to write
>
> Decimal2f interest = principal.multiplyBy(rate.multiplyExact(time));
>
> cumbersome, but it might be just to the taste of COBOL programmers.

:-) Of course, in COBOL, there was no "extra code" to specify decimal
arithmetic.


> Anyway, if programmers really wanted fixed point for the jobs where
> COBOL used to be used, Ada, C++ and Java would have allowed to make
> convenient fixed-point libraries (or actual built-in support for fixed
> point in the case of Ada). Apparently they did not care enough.

Generally agreed. Specifically, Ada wasn't intended for business
applications, and, of course, C++ built on C's heritage and had enough
problems of its own without introducing another feature. As I said, by
the time Java became available, everyone was used to no decimal support.

Tim Rentsch

unread,
Dec 17, 2022, 10:13:20 AM12/17/22
to
> traditional raster-oriented framebuffer). [...]

My comment is about a general lack, not about the detailed
specifics.

If you want to know about the specifics, I suggest you look for
information about the design of the Alto (it should be easy to
find by doing some web searches). For one thing, the Alto didn't
have a framebuffer; the display was refreshed directly out of
main memory. There was no capability for color, which would have
been far too expensive at the time the Alto was designed (early
1970s). As it was, approximately half the total memory available
was (typically) used to hold the bits of the display image. There
are lots of other points of interest.

Tim Rentsch

unread,
Dec 17, 2022, 11:32:14 AM12/17/22
to
If the frame buffer is represented as a linked list, why not
just make up a new linked list and change the top-level link
during the vertical retrace? Surely changing the one link
could be done in the time it takes to do a vertical retrace.
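
Schematically, it is a single reference swap. A sketch (in Java; the
Strip record and the method names are invented and do not correspond to
the Alto's actual data structures):

import java.util.concurrent.atomic.AtomicReference;

public class DisplayListSketch {
    // One horizontal strip of the screen: where it starts, how tall it is,
    // and where its bits live; strips are chained into a display list.
    static final class Strip {
        final int top, height;
        final int[] bits;        // stand-in for the strip's bitmap memory
        final Strip next;
        Strip(int top, int height, int[] bits, Strip next) {
            this.top = top; this.height = height; this.bits = bits; this.next = next;
        }
    }

    // The refresh logic always walks whatever list this currently points to.
    final AtomicReference<Strip> displayList = new AtomicReference<>();

    // Build the next frame's list off to the side, then publish it with one
    // reference swap at vertical-retrace time: no tearing, no glitches.
    void onVerticalRetrace(Strip newList) {
        displayList.set(newList);
    }
}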

>> It would be nice if modern machines and operating systems could
>> provide a comparable high-performance graphics capability, with
>> the same level of temporal fidelity, as the Alto did almost 50
>> years ago. Alas, the scourge of frame buffers and rudimentally
>> designed windowing systems has put us in a world where, to use a
>> phrase, glitches happen.
>
> For most stuff it doesn't matter,

I find this attitude reprehensible, especially coming from
someone as capable as you are. There is no reason in this day
and age to tolerate shitty software. Certainly there are things
about Apple that I don't like, but when it comes to software
quality developers should strive to be more like Apple and less
like Microsoft.

> and when it does matter, then we just throw a full GPU at
> at. :-)

This one too. It's ridiculous to pay $1000 for a solution when
there is another way of solving the same problem that costs less
than $10. Equally ridiculous is needing GB/sec of bandwidth in
cases where MB/sec would do. Or web pages that load multiple
megabytes to display only a few thousand bytes of content. These
excesses, and others like them, must be fought at every level.

MitchAlsup

unread,
Dec 17, 2022, 12:08:13 PM12/17/22
to
a) I applaud the suggestion that no one should accept shitty SW
b) There is the package known as Windows that fits your description
....of shitty
c) every web browser shovels shit at you in the form of advertising.
d) even adblockers are getting to the point of selectively shoveling
....you shit
So, while the premise is good, there is no escape from shit in SW.
<
> Certainly there are things
> about Apple that I don't like, but when it comes to software
> quality developers should strive to be more like Apple and less
> like Microsoft.
> > and when it does matter, then we just throw a full GPU at
> > at. :-)
> This one too. It's ridiculous to pay $1000 for a solution when
> there is another way of solving the same problem that costs less
> than $10. Equally ridiculous is needing GB/sec of bandwidth in
> cases where MB/sec would do. Or web pages that load multiple
> megabytes to display only a few thousand bytes of content. These
> excesses, and others like them, must be fought at every level.
<
I think I would settle for delivering the page's content first, and
shoveling the advertisements later.
<
But then again, you can't buy a PC that does not already come
with a GPU built in (at a cost so low you don't see it).

Terje Mathisen

unread,
Dec 19, 2022, 9:09:33 AM12/19/22
to
Tim, we are really in violent agreement, it is just that either I am
older than you, or I have given up fixing this before you did.
>
>> and when it does matter, then we just throw a full GPU at
>> at. :-)
>
> This one too. It's ridiculous to pay $1000 for a solution when
> there is another way of solving the same problem that costs less
> than $10. Equally ridiculous is needing GB/sec of bandwidth in
> cases where MB/sec would do. Or web pages that load multiple
> megabytes to display only a few thousand bytes of content. These
> excesses, and others like them, must be fought at every level.
>
Having written some of the most performance-critical code in use today,
I strongly agree. However, I have found that I need to pick my battles
to the places where it still really matters.
:-(

Thomas Koenig

unread,
Dec 19, 2022, 12:06:04 PM12/19/22
to
Stephen Fuld <sf...@alumni.cmu.edu.invalid> schrieb:
> On 12/5/2022 5:21 AM, Anton Ertl wrote:
>> Stephen Fuld <sf...@alumni.cmu.edu.invalid> writes:
>
> snip
>
>>> If the more popular
>>> languages that replaced COBOL had good support for fixed decimal, I
>>> suspect that might have won out over binary floating point (especially
>>> if there was some hardware support).
>>
>> The main successor for COBOL seems to be Java.
>
> That is true now, but by the time of Java's ascendancy, the die had
> already been cast. The immediate successor to COBOL as the primary
> language for business applications was C.

Really? Do you mean hand-crafted business applications (which were
later often replaced by SAP), or as the language for implementing
things like spreadsheets?

C was certainly in wide use among computer scientists, especially
in the UNIX field, and engineers also used it a lot after FORTRAN
had fallen behind too much, but for business?

Scott Lurndal

unread,
Dec 19, 2022, 1:29:27 PM12/19/22
to
Thomas Koenig <tko...@netcologne.de> writes:
>Stephen Fuld <sf...@alumni.cmu.edu.invalid> schrieb:
>> On 12/5/2022 5:21 AM, Anton Ertl wrote:
>>> Stephen Fuld <sf...@alumni.cmu.edu.invalid> writes:
>>
>> snip
>>
>>>> If the more popular
>>>> languages that replaced COBOL had good support for fixed decimal, I
>>>> suspect that might have won out over binary floating point (especially
>>>> if there was some hardware support).
>>>
>>> The main successor for COBOL seems to be Java.
>>
>> That is true now, but by the time of Java's ascendancy, the die had
>> already been cast. The immediate successor to COBOL as the primary
>> language for business applications was C.
>
>Really? Do you mean hand-crafted business applications (which were
>later often replaced by SAP), or as the language for implementing
>things like spreadsheets?

Not really. The successor to COBOL is COBOL, or 4GL's like SAP.

Stephen Fuld

unread,
Dec 19, 2022, 2:46:45 PM12/19/22
to
On 12/19/2022 9:06 AM, Thomas Koenig wrote:
> Stephen Fuld <sf...@alumni.cmu.edu.invalid> schrieb:
>> On 12/5/2022 5:21 AM, Anton Ertl wrote:
>>> Stephen Fuld <sf...@alumni.cmu.edu.invalid> writes:
>>
>> snip
>>
>>>> If the more popular
>>>> languages that replaced COBOL had good support for fixed decimal, I
>>>> suspect that might have won out over binary floating point (especially
>>>> if there was some hardware support).
>>>
>>> The main successor for COBOL seems to be Java.
>>
>> That is true now, but by the time of Java's ascendancy, the die had
>> already been cast. The immediate successor to COBOL as the primary
>> language for business applications was C.
>
> Really? Do you mean hand-crafted business applications (which were
> later often replaced by SAP), or as the language for implementing
> things like spreadsheets?

Some of each, and as the language for implementing things like
commercial accounting packages for GL/AP/AR, etc., particularly on
mini-computers and even micros where COBOL compilers were either not
available, or of terrible quality, or perhaps just too much to implement
reasonably on the smaller hardware.

> C was certainly in wide use among computer scientists, especially
> in the UNIX field, and engineers also used it a lot after FORTRAN
> had fallen behind too much, but for business?

Not so much on mainframes, but as smaller companies realized they could
use a lower cost mini for their business needs.

Thomas Koenig

unread,
Dec 19, 2022, 5:45:35 PM12/19/22
to
Stephen Fuld <sf...@alumni.cmu.edu.invalid> schrieb:
> On 12/19/2022 9:06 AM, Thomas Koenig wrote:
>> Stephen Fuld <sf...@alumni.cmu.edu.invalid> schrieb:
>>> On 12/5/2022 5:21 AM, Anton Ertl wrote:
>>>> Stephen Fuld <sf...@alumni.cmu.edu.invalid> writes:
>>>
>>> snip
>>>
>>>>> If the more popular
>>>>> languages that replaced COBOL had good support for fixed decimal, I
>>>>> suspect that might have won out over binary floating point (especially
>>>>> if there was some hardware support).
>>>>
>>>> The main successor for COBOL seems to be Java.
>>>
>>> That is true now, but by the time of Java's ascendancy, the die had
>>> already been cast. The immediate successor to COBOL as the primary
>>> language for business applications was C.
>>
>> Really? Do you mean hand-crafted business applications (which were
>> later often replaced by SAP), or as the language for implementing
>> things like spreadsheets?
>
> Some of each, and as the language for implementing things like
> commercial accounting packages for GL/AP/AR, etc., particularly on
> mini-computers and even micros where COBOL compilers were either not
> available, or of terrible quality, or perhaps just to much to implement
> reasonably on the smaller hardware.

C was mostly coupled to UNIX in the first years of its existence,
I believe, and UNIX was very much in use at universities, less
so elsewhere.

The minicomputers running business applications were, of course,
the midrange IBM systems (certainly not C, rather RPG or Cobol),
Data General machines, DEC (PDP-11, VAX), Prime. I think these
mostly had FORTRAN, Cobol, Algol, Pascal, BASIC etc, from the 1970s
and early 1980s.

C compilers for non-UNIX systems started appearing in the 1980s,
I believe. The first C compiler for PCs was Lattice C, released
in 1982.

>> C was certainly in wide use among computer scientists, especially
>> in the UNIX field, and engineers also used it a lot after FORTRAN
>> had fallen behind too much, but for business?
>
> Not so much on mainframes, but as smaller companies realized they could
> use a lower cost mini for their business needs.

Sure, but programming it in C? Just what importance did UNIX
systems have in business in the 1970s and early 1980s, before
the PC really took off?

Anton Ertl

unread,
Dec 20, 2022, 5:09:31 AM12/20/22
to
Thomas Koenig <tko...@netcologne.de> writes:
>Just what importance did UNIX
>systems have in business in the 1970s and early 1980s, before
>the PC really took off?

In 1991 I worked in a small company that did business applications;
the customers got a Unix server and a bunch of terminals and the
software from this company. The software was written in C and used
IIRC Oracle as database. However, this was not replacing Cobol
software, but instead mostly non-computerized processes. Replacing
the terminals with PCs would have been more expensive and would not
have added value; replacing the Unix box with a Windows 3.1 system
would probably also have been a bad idea, because Windows 3.1 is not a
multi-user system.

Some years later (after I had left) they were doing Java.

Michael S

unread,
Dec 20, 2022, 6:09:15 AM12/20/22
to
On Tuesday, December 20, 2022 at 12:09:31 PM UTC+2, Anton Ertl wrote:
> Thomas Koenig <tko...@netcologne.de> writes:
> >Just what importance did UNIX
> >systems have in business in the 1970s and early 1980s, before
> >the PC really took off?
> In 1991 I worked in a small company that did business applications;
> the customers got a Unix server and a bunch of terminals and the
> software from this company. The software was written in C and used
> IIRC Oracle as database. However, this was not replacing Cobol
> software, but instead mostly non-computerized processes. Replacing
> the terminals with PCs would have been more expensive and would not
> have added value;

What sort of terminals was cheaper in 1991 than 386SX PC with VGA and 640 KB RAM?
I remember IBM's lower-end X-terminals from this era. Their displays were hard to
read even for my young eyes back then. And still I am not sure that they were cheap.
Maybe a little cheaper than a PC like the above + Ethernet NIC. Or maybe not cheaper.

> replacing the Unix box with a Windows 3.1 system
> would probably also have been a bad idea, because Windows 3.1 is not a
> multi-user system.
>

You don't need multi-user OS for server side of client-server app.

> Some years later (after I had left) they were doing Java.

Likely, for medium+ firms rather than for small-to-medium.

Anton Ertl

unread,
Dec 20, 2022, 6:48:09 AM12/20/22
to
Michael S <already...@yahoo.com> writes:
>What sort of terminals was cheaper in 1991 than 386SX PC with VGA and 640 KB RAM?

Something VT220 or VT320 compatible. I don't remember the exact
price, but the terminals cost around ATS 10K (~EUR 700). A 486 PC I
bought in 1993 cost ATS 40K, with a hard disk costing ATS 5K. In 1991
I don't think you could get a PC with screen, keyboard and HDD for ATS
10K. However, 10base2 cabling is cheaper than a RS232 star topology.

>I remember IBM's lower-end X-terminals from this era.

X-Terminals are a different kettle of fish.

>Their displays were hard to
>read even for my young eyes of then.

I used to work on an NCD X-Terminal, which had nice screens for the
time; while my PC from 1993 had a 14" colour screen with IIRC up to
1024x768 resolution, the NCD X-Terminals from around the same time had
a 19" B&W screen with 1280x1024 resolution.

>And still I am not sure that they were cheap.

Of course they were not. IBM does not sell cheap equipment; at least
not cheap for their customers.

>> replacing the Unix box with a Windows 3.1 system
>> would probably also have been a bad idea, because Windows 3.1 is not a
>> multi-user system.
>>
>
>You don't need multi-user OS for server side of client-server app.

That would mean replacing the terminals with more versatile clients
(e.g., PCs), more expensive.

>> Some years later (after I had left) they were doing Java.
>
>Likely, for medium+ firms rather than for small-to-medium.

I don't know who the customers were in the Java times. My impression
from the time when I worked for them was that a project they worked a
lot on used about 20 terminals (which was more than usual), and that
most of the employees of the company would be using this system (so
they had a head count of maybe 30 people).

My guess is that they kept their customers, and switched the
technology to using Java (probably also networked PCs at that time;
terminals were no longer competitive, but I have no information from
the company about that).

Stephen Fuld

unread,
Dec 20, 2022, 11:14:41 AM12/20/22
to
On 12/20/2022 3:09 AM, Michael S wrote:
> On Tuesday, December 20, 2022 at 12:09:31 PM UTC+2, Anton Ertl wrote:
>> Thomas Koenig <tko...@netcologne.de> writes:
>>> Just what importance did UNIX
>>> systems have in business in the 1970s and early 1980s, before
>>> the PC really took off?
>> In 1991 I worked in a small company that did business applications;
>> the customers got a Unix server and a bunch of terminals and the
>> software from this company. The software was written in C and used
>> IIRC Oracle as database. However, this was not replacing Cobol
>> software, but instead mostly non-computerized processes. Replacing
>> the terminals with PCs would have been more expensive and would not
>> have added value;
>
> What sort of terminals was cheaper in 1991 than 386SX PC with VGA and 640 KB RAM?

I think there were still what we used to call "glass teletypes"
available. These were basically a keyboard and a character oriented
screen, an RS232 serial port and whatever minimal hardware logic to
connect them all, certainly a lot less than a 386SX. No user
programmability. IIRC, they were in the hundreds of US dollars.

Stephen Fuld

unread,
Dec 20, 2022, 1:49:41 PM12/20/22
to
On 12/20/2022 2:00 AM, Anton Ertl wrote:
> Thomas Koenig <tko...@netcologne.de> writes:
>> Just what importance did UNIX
>> systems have in business in the 1970s and early 1980s, before
>> the PC really took off?
>
> In 1991 I worked in a small company that did business applications;
> the customers got a Unix server and a bunch of terminals and the
> software from this company. The software was written in C and used
> IIRC Oracle as database. However, this was not replacing Cobol
> software, but instead mostly non-computerized processes. Replacing
> the terminals with PCs would have been more expensive and would not
> have added value; replacing the Unix box with a Windows 3.1 system
> would probably also have been a bad idea, because Windows 3.1 is not a
> multi-user system.
>
> Some years later (after I had left) they were doing Java.

Yes. That is an excellent example of one of the kind of things I was
talking about.

BGB

unread,
Dec 21, 2022, 12:08:26 AM12/21/22
to
There is still a lot of stuff around that assumes VT102 behavior, ASCII
+ ANSI escape sequences, ... Also some amount of "Sixel" graphics (for
sending graphics over an ASCII terminal link).


I would assume at one point, these sorts of terminals were relatively
popular.

Albeit, I never encountered any of these personally...

...

Tim Rentsch

unread,
Dec 21, 2022, 5:27:43 AM12/21/22
to
MitchAlsup <Mitch...@aol.com> writes:

> On December 17, 2022 at 10:32:14 AM UTC-6, Tim Rentsch wrote:
>
>> Terje Mathisen <terje.m...@tmsw.no> writes:
>>
>>> Tim Rentsch wrote:
>>>
>>>> Terje Mathisen <terje.m...@tmsw.no> writes:

[.. speed of graphics operations ..]
Much of the blame for current attitudes towards software falls on
Microsoft, even before MS Windows. If there were such a thing
as a negative Nobel Prize, Bill Gates deserves one. Or more than
one.

> c) every web browser shovels shit at you in the form of advertising.
> d) even adblockers are getting to the point of selectively shoveling
> ....you shit
> So, while the premises good, there is no escape from shit in SW.

I don't like advertising any better than you do, but that's a
problem with content, not software.


>> Certainly there are things about Apple that I don't like, but when
>> it comes to software quality developers should strive to be more
>> like Apple and less like Microsoft.
>>
>>> and when it does matter, then we just throw a full GPU at
>>> at. :-)
>>
>> This one too. It's ridiculous to pay $1000 for a solution when
>> there is another way of solving the same problem that costs less
>> than $10. Equally ridiculous is needing GB/sec of bandwidth in
>> cases where MB/sec would do. Or web pages that load multiple
>> megabytes to display only a few thousand bytes of content. These
>> excesses, and others like them, must be fought at every level.
>
> I think I would settle for delivering the pages content first, and
> shoveling the advertisements later.

Try the add-on Reader View. I started using it recently and it is
fantastic. Pages don't load any faster (?) but the result is
pleasantly stripped of almost all extraneous junk.

> But then again, you can't buy a PC that does not already come
> with a GPU built in (at a cost so low you don't see it).

Terje said full GPU, not any run-of-the-mill GPU.

In any case I don't see that GPUs solve the problem. I use
so-called "smooth scrolling" in my browser, and it's really lame.
During scrolling the text is both jerky and blurry. I don't know
the details but I suspect the interface to the graphics systems
simply doesn't support the necessary operations. Inexcusable.

Terje Mathisen

unread,
Dec 21, 2022, 9:49:32 AM12/21/22
to
Our original mountain cabin in Telemark was paid for with my
VT100/52/220/etc terminal emulator for the PC. For a number of years it
was probably #4 on the list of best-selling Norwegian software (measured
in # of licenses). I.e. in the 1980'ies and early 90'ies there was a
large market for such SW. :-)

One of the big issues with VT100 emulation was that you needed to be
bug-for-bug compatible, esp in the face of non-conforming escape
sequences. To make it fast enough while still being able to parse
partially received sequences I re-invented co-routines, so that the
emulation part was a separate deeply nested set of parsers for each
possible type of sequence. These would just call the "get next char"
routine which would yield when the buffer was empty.
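
A minimal sketch of that co-routine structure, using a Python generator in
place of the original hand-rolled co-routines (the emulator itself was of
course not written in Python, and real VT100 parsing handles far more cases):
the parser asks for the next character and simply suspends when the buffer
runs dry, so a partially received escape sequence can be resumed later.

def vt_parser():
    """Generator-based parser; feed it characters one at a time with send()."""
    while True:
        ch = yield                       # suspend until the next character arrives
        if ch != "\x1b":                 # ordinary character: "display" it
            print(f"text: {ch!r}")
            continue
        ch = yield                       # character after ESC
        if ch != "[":                    # not a CSI sequence; ignored in this sketch
            print(f"escape: ESC {ch!r}")
            continue
        params = ""
        while True:                      # accumulate parameter bytes up to the final byte
            ch = yield
            if ch.isdigit() or ch == ";":
                params += ch
            else:
                print(f"CSI sequence: params={params!r} final={ch!r}")
                break

parser = vt_parser()
next(parser)                             # prime the generator
for chunk in ("Hi\x1b[3", "1;1mX"):      # a sequence split across two received chunks
    for c in chunk:
        parser.send(c)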

John Levine

unread,
Dec 21, 2022, 4:00:25 PM12/21/22
to
According to Anton Ertl <an...@mips.complang.tuwien.ac.at>:
>Michael S <already...@yahoo.com> writes:
>>What sort of terminals was cheaper in 1991 than 386SX PC with VGA and 640 KB RAM?
>
>Something VT220 or VT320 compatible. I don't remember the exact
>price, but the terminals cost around ATS 10K (~EUR 700). ...

Wikipedia says a DEC VT320 cost $495 in the late 1980s. You couldn't
get a computer for anything close to that.

>>> replacing the Unix box with a Windows 3.1 system
>>> would probably also have been a bad idea, because Windows 3.1 is not a
>>> multi-user system.
>>
>>You don't need multi-user OS for server side of client-server app.
>
>That would mean replacing the terminals with more versatile clients
>(e.g., PCs), more expensive. ...

In the 1980s I was using a shared database running on a DOS PC with other
DOS PCs on a thin coax Ethernet as the clients. It worked fine but as you
say, dumb terminals would have been cheaper but we'd have needed a more
capable server.

MitchAlsup

unread,
Dec 21, 2022, 4:49:24 PM12/21/22
to
On Wednesday, December 21, 2022 at 3:00:25 PM UTC-6, John Levine wrote:
> According to Anton Ertl <an...@mips.complang.tuwien.ac.at>:
> >Michael S <already...@yahoo.com> writes:
> >>What sort of terminals was cheaper in 1991 than 386SX PC with VGA and 640 KB RAM?
> >
> >Something VT220 or VT320 compatible. I don't remember the exact
> >price, but the terminals cost around ATS 10K (~EUR 700). ...
>
> Wikipedia says a DEC VT320 cost $495 in the late 1980s. You couldn't
> get a computer for anything close to that.
<
My first PC, in 1990, was a 33 MHz 486 with 4MB memory and a 100MB disk
that cost me $600±. A week after buying it I went back and paid $125 for
16 more MB of memory. A few months later I put in a 1GB disk at $300,
and a color printer $350.

Anton Ertl

unread,
Dec 21, 2022, 6:04:33 PM12/21/22
to
MitchAlsup <Mitch...@aol.com> writes:
>On Wednesday, December 21, 2022 at 3:00:25 PM UTC-6, John Levine wrote:
>> According to Anton Ertl <an...@mips.complang.tuwien.ac.at>:
>> >Michael S <already...@yahoo.com> writes:
>> >>What sort of terminals was cheaper in 1991 than 386SX PC with VGA and 640 KB RAM?
>> >
>> >Something VT220 or VT320 compatible. I don't remember the exact
>> >price, but the terminals cost around ATS 10K (~EUR 700). ...
>>
>> Wikipedia says a DEC VT320 cost $495 in the late 1980s. You couldn't
>> get a computer for anything close to that.
><
>My first PC, in 1990, was a 33 MHz 486 with 4MB memory and a 100MB disk
>that cost me $600±. A week after buying it I went back and paid $125 for
>16 more MB of memory. A few months later I put in a 1GB disk at $300,
>and a color printer $350.

I find that hard to believe. In 1993 I bought a 486/66 with 16MB
memory, a 340MB disk, a 14" screen and a keyboard for ATS 40,000 (~USD
3300). The disk proved too small, so I bought a 540MB disk for ATS
4800 (~USD 400) shortly after (in 1994). The Pentium was already out
at the time, so the 486/66 was no longer a high-end model, whereas a
486/33 was the fastest PC you could get in 1990 (the 486/33 only
appeared in May 1990).

The Amiga 1200 with a 14MHz 68EC020, 2MB RAM, and without HDD was
introduced for $599 in 1992.

I also think that hard disks were advancing fast at the time, so a
cheaper and bigger hard disk four years earlier does not look
plausible.
<https://web.archive.org/web/20140728221058/http://ns1758.ca/winch/winchest.html>
shows $9/MB in September 1990 and $0.95/MB in September 1994, and a
Seagate 1GB drive in January 1995 costing $849, and a 520MB HDD
costing $380 in April 1995.

Anne & Lynn Wheeler

unread,
Dec 21, 2022, 7:02:28 PM12/21/22
to
an...@mips.complang.tuwien.ac.at (Anton Ertl) writes:
> I find that hard to believe. In 1993 I bought a 486/66 with 16MB
> memory, a 340MB disk, a 14" screen and a keyboard for ATS 40,000 (~USD
> 3300). The disk proved too small, so I bought a 540MB disk for ATS
> 4800 (~USD 400) shortly after (in 1994). The Pentium was already out
> at the time, so the 486/66 was no longer a high-end model, whereas a
> 486/33 was the fastest PC you could get in 1990 (the 486/33 only
> appeared in May 1990).
>
> The Amiga 1200 with a 14MHz 68EC020, 2MB RAM, and without HDD was
> introduced for $599 in 1992.

from a recent archived afc post
http://www.garlic.com/~lynn/2022h.html#38 Christmas 1989

I was posting SJMN sunday adverts on internal IBM forums showing prices
significantly cheaper than IBM Boca/PS2 predictions. Then the head of Boca
contracted with Dataquest (since bought by Gartner) to do study of
future of PC ... including several hr video taped round table of silicon
valley experts. The responsible person at Dataquest I had known for a
number of years and asked me to be one of the experts ... and promised
to garble my identity so Boca wouldn't recognize me as an IBM employee.

note fall 88, clone makers on the other side of the pacific, had
built up large inventory of 286 machines for the xmas season ... and
then Intel announced the 386sx (386sx consolidated lots of chips needed
for 286 build) and the bottom dropped out of the 286 market/prices.

...

also reposted in archived long-winded facebook thread which also wanders
into the IBM communication group responsible for the downfall of IBM
http://www.garlic.com/~lynn/2002h.html#104 IBM 360

Not long before leaving IBM, Boca had contracted with Dataquest (since
acquired by Gartner) to do detailed study of PC business and its
future ... including a video taped round table discussion of silicon
valley experts. I had known the person running the Dataquest study for
years and was asked to be one of the silicon valley experts (they
promised to obfuscate my vitals so Boca wouldn't recognize me as IBM
employee ... and I cleared it with my immediate management). I had
also been posting SJMN sunday adverts of quantity one PC prices to IBM
forums for a number of years (trying to show how out of touch with
reality, Boca forecasts were).

...

from long ago archived afc post (with some of the earlier SJMN sunday
adverts)
http://www.garlic.com/~lynn/2001n.html#79 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)

from other PC posts; 30 years of personal computer market share figures
https://arstechnica.com/features/2005/12/total-share/
and has graph of personal computer sales 1975-1980
http://arstechnica.com/articles/culture/total-share/3/
and graph from 1980 to 1984 ... with the only serious competitor to PC in number of sales was commodore 64
http://arstechnica.com/articles/culture/total-share/4/
and then from 1984 to 1987 the ibm pc (and clones) starting to completely swamp
http://arstechnica.com/articles/culture/total-share/5/
market 1990-1994
https://arstechnica.com/features/2005/12/total-share/7/
2001-2004
https://arstechnica.com/features/2005/12/total-share/9/

--
virtualization experience starting Jan1968, online at home since Mar1970

Thomas Koenig

unread,
Dec 22, 2022, 1:48:00 AM12/22/22
to
Stephen Fuld <sf...@alumni.cmu.edu.invalid> schrieb:
I worked on these for a bit. My first contact with UNIX was on
an institute server (HP 3000?) with HP terminals in the late 1980s.

I tried out a "hello world" in Fortran, just using cat, and
then tried to compile it using the Fortran compiler whose name I
knew, fc.

That turned out badly because fc was also a built-in function in
the shell, to edit the history (still is, in bash). That dropped
me into vi, and I didn't even know what that was, let alone how
to get out of it again...

I am typing this on vi, so I managed to get out of it eventually.

Scott Lurndal

unread,
Dec 22, 2022, 10:58:45 AM12/22/22
to
Thomas Koenig <tko...@netcologne.de> writes:
>Stephen Fuld <sf...@alumni.cmu.edu.invalid> schrieb:
>> On 12/20/2022 3:09 AM, Michael S wrote:
>>> On Tuesday, December 20, 2022 at 12:09:31 PM UTC+2, Anton Ertl wrote:
>>>> Thomas Koenig <tko...@netcologne.de> writes:
>>>>> Just what importance did UNIX
>>>>> systems have in business in the 1970s and early 1980s, before
>>>>> the PC really took off?
>>>> In 1991 I worked in a small company that did business applications;
>>>> the customers got a Unix server and a bunch of terminals and the
>>>> software from this company. The software was written in C and used
>>>> IIRC Oracle as database. However, this was not replacing Cobol
>>>> software, but instead mostly non-computerized processes. Replacing
>>>> the terminals with PCs would have been more expensive and would not
>>>> have added value;
>>>
>>> What sort of terminals was cheaper in 1991 than 386SX PC with VGA and 640 KB RAM?
>>
>> I think there were still what we used to call "glass teletypes"
>> available. These were basically a keyboard and a character oriented
>> screen, an RS232 serial port and whatever minimal hardware logic to
>> connect them all, certainly a lot less than a 386SX. No user
>> programmability. IIRC, they were in the hundreds of US dollars.
>
>I worked on these for a bit. My first contact with UNIX was on
>an institute server (HP 3000?) with HP terminals in the late 1980s.

The HP 3000 operating system was called MPE. It never ran Unix.
(The HP-3000 was a stack architecture machine designed in part
by ex Burroughs engineers).

You're likely thinking about the HP-300.

HP had been designing the snakes architecture (which became
the HP Precision Architecture (PA-RISC, HP-9000) in the late 1980s.

The HP series 300 was a FOCUS[*] based Unix system in the 1980s, which
is likely the system you were using.

[*] First commercial single-chip 32-bit microprocessor on the market.

Anton Ertl

unread,
Dec 22, 2022, 11:44:36 AM12/22/22
to
sc...@slp53.sl.home (Scott Lurndal) writes:
>Thomas Koenig <tko...@netcologne.de> writes:
>>My first contact with UNIX was on
>>an institute server (HP 3000?) with HP terminals in the late 1980s.
>
>The HP 3000 operating system was called MPE. It never ran Unix.
>(The HP-3000 was a stack architecture machine designed in part
>by ex Burroughs engineers).

Later HP 3000 machines used PA-RISC, the HP 3000/900 series.

>You're likely thinking about the HP-300.

You may be thinking of HP 9000/300. HP 9000 were the HP/UX systems.

>HP had been designing the snakes architecture (which became
>the HP Precision Architecture (PA-RISC, HP-9000) in the late 1980s.

|The architecture was introduced on 26 February 1986, when the HP 3000
|Series 930 and HP 9000 Model 840 computers were launched

<https://en.wikipedia.org/wiki/PA-RISC>

>The HP series 300 was a FOCUS[*] based Unix system in the 1980s, which
>is likely the system you were using.

Among the HP 9000, series 200, 300, 400 are based on 68k, 500 is based
on HP's FOCUS (32-bit stack architecture), and 800 and 700 are based
on PA-RISC. Later they switched to a different naming scheme.

I consider it more likely that he worked with a 68k or PA-RISC
system. I was an intern at HP's Vienna subsidiary (responsible for
much of eastern Europe) in 1988 and 1989, and they had a number of HP
9000/300 and HP 9000/800 systems around, but I did not see a HP
9000/500. If they had sold a lot of them, I would expect that they
have one around for support reasons, even if they have stopped selling
them by that time.

Tim Rentsch

unread,
Dec 24, 2022, 2:37:11 AM12/24/22
to
Terje Mathisen <terje.m...@tmsw.no> writes:

> Tim Rentsch wrote:

>> [complaints about crummy software]

> Tim, we are really in violent agreement, it is just that either I
> am older than you, or I have given up fixing this before you did.

Nolo contendere.

>> [complaints about wasteful solutions]
>
> Having written some of the most performance-critical code in use
> today, I strongly agree. However, I have found that I need to
> pick my battles to the places where it still really matters. :-(

Just be thankful my complaints here are limited to the issues I
think are *really* important. :)

Quadibloc

unread,
Jan 2, 2023, 7:02:02 PM1/2/23
to
On Monday, December 19, 2022 at 3:45:35 PM UTC-7, Thomas Koenig wrote:
> Stephen Fuld <sf...@alumni.cmu.edu.invalid> schrieb:

> > Not so much on mainframes, but as smaller companies realized they could
> > use a lower cost mini for their business needs.

> Sure, but programming it in C? Just what importance did UNIX
> systems have in business in the 1970s and early 1980s, before
> the PC really took off?

Not much, directly. But before the IBM PC, businesses tended
to use microcomputers running CP/M. C was as important
in developing programs to run on CP/M as it was in developing
programs to run on MS-DOS, even if the business users themselves,
if _they_ programmed their computers, would use one or another
dialect of BASIC.

John Savard

Anton Ertl

unread,
Jan 3, 2023, 3:41:52 AM1/3/23
to
Quadibloc <jsa...@ecn.ab.ca> writes:
>But before the IBM PC, businesses tended
>to use microcomputers running CP/M. C was as important
>in developing programs to run on CP/M as it was in developing
>programs to run on MS-DOS

In a way that is true: On both platforms the relevance of C was
approximately 0. More relevant languages for CP/M and MS-DOS were
assembly language, Pascal and PL/M. C became prominent on PCs late in
the 1980s, shortly before Windows took off. E.g., if you look at
<https://en.wikipedia.org/wiki/WordPerfect#History>, it says that they
wanted to program WordPerfect for the IBM PC in C, but no C compilers
were available, so they wrote it in assembly language. C was
eventually used for WordPerfect 5.1, released in 1989.

John Dallman

unread,
Jan 3, 2023, 5:06:14 AM1/3/23
to
In article <2023Jan...@mips.complang.tuwien.ac.at>,
an...@mips.complang.tuwien.ac.at (Anton Ertl) wrote:

> C became prominent on PCs late in the 1980s, shortly before
> Windows took off.

The job I was in must have been early: after writing and maintaining a
CAD system on the Apple II in assembler, we started on a PC version in C
in 1984. My next job, which I started in 1987, was with a company that
had been using C on PCs since at least 1985.

John

Anton Ertl

unread,
Jan 3, 2023, 7:26:43 AM1/3/23
to
Looking at word processors and spreadsheets, it seems that companies
wanted to use C quite a bit earlier, but the products written in
assembly language were more successful. But maybe things were
different in other areas.

Anyway, wrt spreadsheets, the most successful spreadsheet for the IBM
PC and compatibles on DOS was Lotus 1-2-3, written in assembly
language, which won out over Microsoft Multiplan, written in C
(compiled to virtual-machine rather than native code); it also won out
over a 1-2-3 clone written in C called "The Twin". When 1-2-3 was
rewritten in C, it became too large for the smaller machines, and they
had to maintain the assembly-language version in parallel; there may
also have been the second systems effect at work; after all, Microsoft
managed to let Multiplan run on small machines like the C64.

BTW, the reason why Multiplan was written in this (apparently
MS-internal) C was to support porting to the many different platforms
of the early 1980s. Most of these platforms eventually died out, with
the exception of the IBM PC compatible line and the MacIntosh line;
interestingly, the OSs on these platforms changed over time.

John Dallman

unread,
Jan 3, 2023, 11:15:26 AM1/3/23
to
In article <2023Jan...@mips.complang.tuwien.ac.at>,
an...@mips.complang.tuwien.ac.at (Anton Ertl) wrote:

> Looking at word processors and spreadsheets, it seems that companies
> wanted to use C quite a bit earlier, but the products written in
> assembly language were more successful. But maybe things were
> different in other areas.

> When 1-2-3 was rewritten in C, it became too large for the smaller
> machines, and they had to maintain the assembly-language version in
> parallel; there may also have been the second systems effect at work;
> after all, Microsoft managed to let Multiplan run on small machines
> like the C64.

Both of the products I worked on in that era demanded 640KB, and weren't
interested in supporting smaller machines. A possible advantage of being
in the UK was that there were few genuine IBM PCs or XTs around that
customers had paid huge sums for; clones were the usual fare.

> BTW, the reason why Multiplan was written in this (apparently
> MS-internal) C was to support porting to the many different
> platforms of the early 1980s.

So that was why Multiplan, early versions of Word, and so on, were so
slow. MS didn't really get anywhere with Office until it was on faster
Windows machines, and they accepted rampant piracy to build market share.


> Most of these platforms eventually died out, with the exception of
> the IBM PC compatible line and the MacIntosh line; interestingly,
> the OSs on these platforms changed over time.

Yup. Trying to port software for any other OS to classic MacOS was very
time-consuming and frustrating. You had to call yield() frequently, but
not so frequently as to waste time, since yield() was not quick. If you
hadn't designed this in originally, it was hard to get right. In contrast,
porting to current macOS is relatively easy.
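
For illustration, a minimal sketch of that porting pattern, with a placeholder
yield_to_os() standing in for the real Toolbox call (WaitNextEvent on classic
MacOS) and made-up numbers: the long-running loop hands control back at a
bounded rate, because yielding on every iteration was itself too expensive.

import time

def yield_to_os():
    """Placeholder for the cooperative-multitasking yield; the real call
    let other applications run for a while."""
    time.sleep(0)

def long_computation(n_items, yield_every_ms=50):
    last_yield = time.monotonic()
    total = 0
    for i in range(n_items):
        total += i * i                               # the actual work
        if (time.monotonic() - last_yield) * 1000 >= yield_every_ms:
            yield_to_os()                            # too seldom: the whole UI freezes
            last_yield = time.monotonic()            # too often: yield overhead dominates
    return total

print(long_computation(2_000_000))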

John

John Levine

unread,
Jan 3, 2023, 1:11:43 PM1/3/23
to
According to Anton Ertl <an...@mips.complang.tuwien.ac.at>:
>In a way that is true: On both platforms the relevance of C was
>approximately 0. More relevant languages for CP/M and MS-DOS were
>assembly language, Pascal and PL/M. C became prominent on PCs late in
>the 1980s, shortly before Windows took off.

I was one of the authors of Javelin, a nice DOS time-series modelling
package that was unfortunately mis-sold as a spreadsheet. We started
work on it in 1984 and wrote it in C with small bits of assembler for
stuff you can't say in C. I don't recall there being any discussion of
using anything else. It helped that we used Wizard C, a nice little C
compiler from a one-man company that happened to be nearby, and he was
very good at shipping fixes when we sent him bug reports. He later
sold it to Borland who wrapped an IDE around it and sold it as Turbo
C.

I think that you could run Javelin in 512K with very little space for
your model. We used the Phoenix linker, which had an overlay scheme
essentially identical to the one OS/360 used 20 years earlier. One of
my jobs was to do the digital origami to fold the code into overlays
that worked (you can't call into a segment that will be loaded on top
of the caller) and didn't thrash too badly when running off floppies.
We also supported bank switched expanded memory, and one day loaded up
a PC with a then-astonishing 8 megabytes of RAM to make sure we could
use it all.
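
A minimal sketch of the constraint being juggled there, under the usual
overlay-tree model (the segment names and call list below are made up, and
this is not the Phoenix linker's actual input format): two segments can
coexist in memory only if one is an ancestor of the other, so a call is
unsafe when loading the callee would overlay the segment holding the caller.

parent = {
    "ROOT": None,          # hypothetical overlay tree: siblings share a region
    "INPUT": "ROOT",
    "CALC": "ROOT",
    "REPORT": "ROOT",
    "CALC_FAST": "CALC",
    "CALC_SLOW": "CALC",
}

def ancestors(seg):
    """Segments on the path from seg up to the root (including seg itself)."""
    result = set()
    while seg is not None:
        result.add(seg)
        seg = parent[seg]
    return result

def call_is_safe(caller, callee):
    """Safe only if caller and callee lie on one root-to-leaf path, i.e.
    loading the callee cannot overlay the segment containing the caller."""
    return caller in ancestors(callee) or callee in ancestors(caller)

for caller, callee in [("ROOT", "CALC"), ("CALC", "CALC_FAST"), ("CALC_FAST", "REPORT")]:
    verdict = "ok" if call_is_safe(caller, callee) else "VIOLATION: callee would overlay caller"
    print(f"{caller} -> {callee}: {verdict}")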

--
Regards,
John Levine, jo...@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly

John Dallman

unread,
Jan 3, 2023, 2:51:28 PM1/3/23
to
In article <tp1r4r$j0e$1...@gal.iecc.com>, jo...@taugh.com (John Levine)
wrote:

> It helped that we used Wizard C, a nice little C compiler from a
> one-man company that happened to be nearby ... We used the Phoenix
> linker, which had an overlay scheme essentially identical to the
> one OS/360 used 20 years earlier.

That's the same set of tools my 1984-87 job used. They were still using
them when I returned in '89 and continued to do so until they switched to
Watcom with no overlays and a 32-bit DOS extender.

My '87-88 job started out using Aztec C, which was well-named: it felt a
lot like using tools chipped out of obsidian. Later we switched to
Microsoft, which by this time was reasonably workable.

John

Thomas Koenig

unread,
Jan 4, 2023, 1:37:01 AM1/4/23
to
John Dallman <j...@cix.co.uk> schrieb:

> Yup. Trying to port software for any other OS to classic MacOS was very
> time-consuming and frustrating. You had to call yield() frequently, but
> not so frequently as to waste time, since yield() was not quick. If you
> hadn't designed this in originally, it was hard to get right. In contrast,
> porting to current macOS is relatively easy.

And the Mac enthusiasts were telling you that this was a good thing,
because preemptive multitasking was bad in some way. I forget
what the argument was, but it was mooted by Apple introducing
preemptive multitasking.

I once wrote a scientific program on a Mac (which then froze).
After being told that I was then a programmer, and that I should
have done what you described above in whatever language I used at
the time, I never did so again.

A bit like people arguing in favor of Windows 8 tiles, which
(on a desktop) mostly take up loads of space and do nothing to
help the users. They also had the people arguing vehemently
in favor of them - if Microsoft decrees it, it cannot be bad,
correct?

John Dallman

unread,
Jan 4, 2023, 3:45:02 AM1/4/23
to
In article <tp36qb$21eco$1...@newsreader4.netcologne.de>,
tko...@netcologne.de (Thomas Koenig) wrote:

> And the Mac enthusiasts were telling you that this was a good thing,
> because preemptive multitasking was bad in some way.

Never had anyone try that argument on me. Our only Mac enthusiast was a
developer, and after some pain, had accepted that cooperative
multitasking and processor-intensive computation were a bad combination.

John

BGB

unread,
Jan 4, 2023, 12:35:57 PM1/4/23
to
On 1/4/2023 12:36 AM, Thomas Koenig wrote:
> John Dallman <j...@cix.co.uk> schrieb:
>
>> Yup. Trying to port software for any other OS to classic MacOS was very
>> time-consuming and frustrating. You had to call yield() frequently, but
>> not so frequently as to waste time, since yield() was not quick. If you
>> hadn't designed this in originally, it was hard to get right. In contrast,
>> porting to current macOS is relatively easy.
>
> And the Mac enthusiasts were telling you that this was a good thing,
> because preemptive multitasking was bad in some way. I forget
> what the argument was, but it was mooted by Apple introducing
> preemptive multitasking.
>

Preemptive multitasking is pros/cons.
Good for a general user OS, not as good for real-time systems.

Sort of issues also when one takes OS's not meant for real-time, and
sort of fudges them into a real-time use-case (such as a CNC controller).

Granted, yes, a background task monopolizing the CPU for an extended
period of time would also be bad.


Also, things like automatic lock-screen (if one doesn't mess with the
mouse/keyboard often enough), or automatic pop-up notifications, are
"not ideal" as far as a CNC controller goes (say, when keyboard inputs
and having input focus on said CNC program are the only way to E-Stop
the thing).

Well, and despite the temptation, one thing that is "generally a bad
idea" is trying to use a web-browser while trying to mill something
(something like FireFox being able to disrupt things enough to crash the machine).



> I once wrote a scientific program on a Mac (which then froze).
> After being told that I was then a programmer, and that I should
> have done what you described above in whatever language I used at
> the time, I never did so again.
>
> A bit like people arguing in favor of Windows 8 tiles, which
> (on a desktop) mostly take up loads of space and do nothing to
> help the users. They also had the people arguing vehemently
> in favor of them - if Microsoft decrees it, it cannot be bad,
> correct?

It was a stupid design choice for a desktop.
There was seemingly a period where both MS and also Ubuntu and similar,
seemingly thought that desktop PCs were tablets and phones, and so tried
redesigning the UI around this.


OTOH, I have also found a 4K monitor to be pros/cons:
One either uses UI scaling, which defeats the point;
One deals with the slight awkwardness of using a 4K display at 1x scaling.

I have mostly gotten used to it.


I am also starting to have an opinion that for "general
viewing/interaction", it seems like 144 ppi or so may make sense as a
practical limit for a pixel-oriented display.

So, for example, a cell-phone doesn't get much benefit from a screen
much over 1440x720 or so (or a tablet much past 1920x1080).


For a PC monitor, 2160p seems to already be in diminishing returns
territory.
Short of "actually using UI scaling", going much higher resolution than
this would likely be "basically unusable". And, if one needs to use
scaling, what is the point?...
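
For reference, the arithmetic behind those ppi figures is just diagonal
pixels over diagonal inches; the screen sizes below are assumptions picked
for illustration, not anything specified above.

import math

def ppi(width_px, height_px, diagonal_inches):
    return math.hypot(width_px, height_px) / diagonal_inches

for label, w, h, diag in [
    ('phone 1440x720 @ 5.5"', 1440, 720, 5.5),
    ('tablet 1920x1080 @ 10"', 1920, 1080, 10.0),
    ('monitor 2560x1440 @ 27"', 2560, 1440, 27.0),
    ('monitor 3840x2160 @ 27"', 3840, 2160, 27.0),
]:
    print(f"{label}: {ppi(w, h, diag):.0f} ppi")
# roughly 293, 220, 109, and 163 ppi respectively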

I don't necessarily think "make pixels too small to be seen" is
necessarily a worthwhile goal as far as UI design goes.

Burning 48x64 pixels or something on each font character for the user to
be able to read them is not an effective use of resources IMO.

Say, if your font glyph needs to be larger than 8x12 pixels or so to be
readable, there is little benefit to going to a higher resolution.



But, as-is, it does mean that I can tile 5x 640x480 windows side-to-side
across the screen, so it is useful for running simulations.


Still does not solve the issue of losing windows behind each other though.


Would also be nice if MS were to realize they had basically already gotten
the UI right somewhere around Windows 2000, and most stuff they have
been doing since then has been counter-productive in terms of usability.

...


Niklas Holsti

unread,
Jan 4, 2023, 12:57:11 PM1/4/23
to
On 2023-01-04 19:35, BGB wrote:
> On 1/4/2023 12:36 AM, Thomas Koenig wrote:
>> John Dallman <j...@cix.co.uk> schrieb:
>>
>>> Yup. Trying to port software for any other OS to classic MacOS was very
>>> time-consuming and frustrating. You had to call yield() frequently, but
>>> not so frequently as to waste time, since yield() was not quick. If you
>>> hadn't designed this in originally, it was hard to get right. In
>>> contrast,
>>> porting to current macOS is relatively easy.
>>
>> And the Mac enthusiasts were telling you that this was a good thing,
>> because preemptive multitasking was bad in some way. I forget
>> what the argument was, but it was mooted by Apple introducing
>> preemptive multitasking.
>>
>
> Preemptive multitasking is pros/cons.
> Good for a general user OS,


Yes.


> not as good for real-time systems.


I disagree, as would many others. Depends somewhat on the criticality of
the system (requirements to limit complexity).

But I agree, of course, that there are OS's that do preemptive
multitasking and _still_ are not suitable for real-time systems.

Scott Lurndal

unread,
Jan 4, 2023, 1:12:42 PM1/4/23
to
BGB <cr8...@gmail.com> writes:
>On 1/4/2023 12:36 AM, Thomas Koenig wrote:
>> John Dallman <j...@cix.co.uk> schrieb:
>>
>>> Yup. Trying to port software for any other OS to classic MacOS was very
>>> time-consuming and frustrating. You had to call yield() frequently, but
>>> not so frequently as to waste time, since yield() was not quick. If you
>>> hadn't designed this in originally, it was hard to get right. In contrast,
>>> porting to current macOS is relatively easy.
>>
>> And the Mac enthusiasts were telling you that this was a good thing,
>> because preemptive multitasking was bad in some way. I forget
>> what the argument was, but it was mooted by Apple introducing
>> preemptive multitasking.
>>
>
>Preemptive multitasking is pros/cons.
>Good for a general user OS, not as good for real-time systems.

I would hope you can support that statement, unless you are
specifically limiting yourself to obsolete single-user operating systems.

The only thing that pre-emptive multitasking is useful for is,
well, nothing.



>
>Would also be nice if MS were to realize they had basically already gotten
>the UI right somewhere around Windows 2000, and most stuff they have
>been doing since then has been counter-productive in terms of usability.

Windows has never had a good UI, or good anything.

MitchAlsup

unread,
Jan 4, 2023, 1:55:23 PM1/4/23
to
On Wednesday, January 4, 2023 at 11:35:57 AM UTC-6, BGB wrote:
> On 1/4/2023 12:36 AM, Thomas Koenig wrote:
> > John Dallman <j...@cix.co.uk> schrieb:
> >
> >> Yup. Trying to port software for any other OS to classic MacOS was very
> >> time-consuming and frustrating. You had to call yield() frequently, but
> >> not so frequently as to waste time, since yield() was not quick. If you
> >> hadn't designed this in originally, it was hard to get right. In contrast,
> >> porting to current macOS is relatively easy.
> >
> > And the Mac enthusiasts were telling you that this was a good thing,
> > because preemptive multitasking was bad in some way. I forget
> > what the argument was, but it was mooted by Apple introducing
> > preemptive multitasking.
> >
> Preemptive multitasking is pros/cons.
> Good for a general user OS, not as good for real-time systems.
<
I think it depends. The only downside of multitasking in a real time
environment is the cache hierarchy is stale when a task begins
running.
<
Nothing in multitasking prevents fast context switches, of the
highest priority threads, to cores with which they have affinity,
to run at their specified priority (and sometimes higher).
<
Especially if the above paragraph can be made manifest in 10-ish
cycles.
<
The only thing that remains are the core caches and MMU caches
catching up so the RT thread runs as fast as practicable.

MitchAlsup

unread,
Jan 4, 2023, 1:56:39 PM1/4/23
to
On Wednesday, January 4, 2023 at 12:12:42 PM UTC-6, Scott Lurndal wrote:
> BGB <cr8...@gmail.com> writes:
> >On 1/4/2023 12:36 AM, Thomas Koenig wrote:
> >> John Dallman <j...@cix.co.uk> schrieb:
> >>
> >>> Yup. Trying to port software for any other OS to classic MacOS was very
> >>> time-consuming and frustrating. You had to call yield() frequently, but
> >>> not so frequently as to waste time, since yield() was not quick. If you
> >>> hadn't designed this in originally, it was hard to get right. In contrast,
> >>> porting to current macOS is relatively easy.
> >>
> >> And the Mac enthusiasts were telling you that this was a good thing,
> >> because preemptive multitasking was bad in some way. I forget
> >> what the argument was, but it was mooted by Apple introducing
> >> preemptive multitasking.
> >>
> >
> >Preemptive multitasking is pros/cons.
> >Good for a general user OS, not as good for real-time systems.
> I would hope you can support that statement, unless you are
> specifically limiting yourself to obsolete single-user operating systems.
>
> The only thing that pre-emptive multitasking is useful for is,
> well, nothing.
<
How else does one run 1,000 different tasks on a handful of cores
efficiently, securely, and at low overhead ?

David Brown

unread,
Jan 4, 2023, 2:19:22 PM1/4/23
to
On 04/01/2023 18:35, BGB wrote:
> On 1/4/2023 12:36 AM, Thomas Koenig wrote:
>> John Dallman <j...@cix.co.uk> schrieb:
>>
>>> Yup. Trying to port software for any other OS to classic MacOS was very
>>> time-consuming and frustrating. You had to call yield() frequently, but
>>> not so frequently as to waste time, since yield() was not quick. If you
>>> hadn't designed this in originally, it was hard to get right. In
>>> contrast,
>>> porting to current macOS is relatively easy.
>>
>> And the Mac enthusiasts were telling you that this was a good thing,
>> because preemptive multitasking was bad in some way. I forget
>> what the argument was, but it was mooted by Apple introducing
>> preemptive multitasking.
>>
>
> Preemptive multitasking is pros/cons.
> Good for a general user OS, not as good for real-time systems.
>
> Sort of issues also when one takes OS's not meant for real-time, and
> sort of fudges them into a real-time use-case (such as a CNC controller).
>
> Granted, yes, a background task monopolizing the CPU for an extended
> period of time would also be bad.
>

I wonder if you are mixing "pre-emptive" with "time-slicing for tasks of
the same priority". All RTOS's - which are designed for real-time tasks
(though you can have a real-time system without an OS at all) - are
pre-emptive. At any time outside short critical regions used to
implement primitives like low-level mutexes, a running task can be
pre-empted by another task of higher priority that becomes runnable. If
higher priority tasks, perhaps triggered by an interrupt, cannot
pre-empt running lower priority tasks then you do not have a real-time
system.

On the other hand, if you have lots of tasks of the same priority, all
runnable, then there are definitely pros and cons as to whether you
allow time-slicing pre-emption. It can be more efficient, and make
inter-process communication easier and cache misses lower if tasks are
only paused when they are ready for it (with a "yield()" call). But
then you need to write the tasks in a cooperative manner.
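
As a toy illustration of that distinction (not modelled on any particular
RTOS; the task names and numbers are made up), here is a fixed-priority
scheduler in which the highest-priority runnable task always gets the CPU:
a lower-priority task loses it as soon as something more urgent arrives,
while tasks of equal priority would need a separate time-slicing policy.

import heapq

def schedule(arrivals, horizon):
    """arrivals: (time, name, priority, work units); lower priority number = more urgent."""
    arrivals = sorted(arrivals)
    ready = []                                    # heap of (priority, name, remaining work)
    trace = []
    for t in range(horizon):
        while arrivals and arrivals[0][0] == t:   # tasks becoming runnable at time t
            _, name, prio, work = arrivals.pop(0)
            heapq.heappush(ready, (prio, name, work))
        if ready:
            prio, name, work = heapq.heappop(ready)   # highest-priority runnable task wins
            trace.append(name)
            if work > 1:
                heapq.heappush(ready, (prio, name, work - 1))
        else:
            trace.append("idle")
    return trace

# a low-priority logger is pre-empted when a high-priority control task wakes at t=2
print(schedule([(0, "logger", 5, 6), (2, "motor_ctl", 0, 2)], 10))
# ['logger', 'logger', 'motor_ctl', 'motor_ctl', 'logger', 'logger', 'logger', 'logger', 'idle', 'idle']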

>>
>> A bit like people arguing in favor of Windows 8 tiles, which
>> (on a desktop) mostly take up loads of space and do nothing to
>> help the users.  They also had the people arguing vehemently
>> in favor of them - if Microsoft decrees it, it cannot be bad,
>> correct?
>
> It was a stupid design choice for a desktop.
> There was seemingly a period where both MS and also Ubuntu and similar,
> seemingly thought that desktop PCs were tablets and phones, and so tried
> redesigning the UI around this.
>

Ubuntu has done a lot to damage the reputation of Linux on desktops by
convincing people that they make the "standard" desktop Linux for
"ordinary" users, while giving them a design that tries to outdo
Microsoft in hideousness and inefficiency.

>
> Would also be nice if MS were to realize they had basically already gotten
> the UI right somewhere around Windows 2000, and most stuff they have
> been doing since then has been counter-productive in terms of usability.
>

A desktop should have the tools you need to run your applications, and
provide features such as printing, settings, menus, task switching,
virtual desktops, task control, notifications (of useful things), file
management, and perhaps little applets for things like cpu usage. Apart
from that, it should stay out of the way. I don't want adverts mixed
with my "start menu", or weather maps, or menus that re-arrange
themselves all the time. Apart from a lack of virtual desktops, I agree
that W2K pretty much covered everything. My Windows machine is Win7,
which is not bad once some of the silly settings are fixed. And my
impression of Win11 is that it is has fixed many of the worst mistakes
of Win8.

On Linux, I use Mate (based on Gnome 2). Fancy desktops like KDE are
great for impressing others, but if you actually use your computer for
work, browsing, games, videos, or pretty much anything else, you are
using applications and the desktop should be hidden.




Scott Lurndal

unread,
Jan 4, 2023, 2:59:34 PM1/4/23
to
MitchAlsup <Mitch...@aol.com> writes:
>On Wednesday, January 4, 2023 at 12:12:42 PM UTC-6, Scott Lurndal wrote:
>> BGB <cr8...@gmail.com> writes:
>> >On 1/4/2023 12:36 AM, Thomas Koenig wrote:
>> >> John Dallman <j...@cix.co.uk> schrieb:
>> >>
>> >>> Yup. Trying to port software for any other OS to classic MacOS was very
>> >>> time-consuming and frustrating. You had to call yield() frequently, but
>> >>> not so frequently as to waste time, since yield() was not quick. If you
>> >>> hadn't designed this in originally, it was hard to get right. In contrast,
>> >>> porting to current macOS is relatively easy.
>> >>
>> >> And the Mac enthusiasts were telling you that this was a good thing,
>> >> because preemptive multitasking was bad in some way. I forget
>> >> what the argument was, but it was mooted by Apple introducing
>> >> preemptive multitasking.
>> >>
>> >
>> >Preemptive multitasking is pros/cons.
>> >Good for a general user OS, not as good for real-time systems.
>> I would hope you can support that statement, unless you are
>> specifically limiting yourself to obsolete single-user operating systems.
>>
>> The only thing that pre-emptive multitasking is useful for is,
>> well, nothing.
><
>How else does one run 1,000 different tasks on a handful of cores
>efficiently, securely, and at low overhead ?

My error; somehow I read it as cooperative multitasking rather than
pre-emptive.

BGB

unread,
Jan 4, 2023, 3:18:01 PM1/4/23
to
On 1/4/2023 11:57 AM, Niklas Holsti wrote:
> On 2023-01-04 19:35, BGB wrote:
>> On 1/4/2023 12:36 AM, Thomas Koenig wrote:
>>> John Dallman <j...@cix.co.uk> schrieb:
>>>
>>>> Yup. Trying to port software for any other OS to classic MacOS was very
>>>> time-consuming and frustrating. You had to call yield() frequently, but
>>>> not so frequently as to waste time, since yield() was not quick. If you
>>>> hadn't designed this in originally, it was hard to get right. In
>>>> contrast,
>>>> porting to current macOS is relatively easy.
>>>
>>> And the Mac enthusiasts were telling you that this was a good thing,
>>> because preemptive multitasking was bad in some way. I forget
>>> what the argument was, but it was mooted by Apple introducing
>>> preemptive multitasking.
>>>
>>
>> Preemptive multitasking is pros/cons.
>> Good for a general user OS,
>
>
> Yes.
>
>
>> not as good for real-time systems.
>
>
> I disagree, as would many others. Depends somewhat on the criticality of
> the system (requirements to limit complexity).
>

Say...

Pulsing IO pins at 10s of kHz with regular timing, or else a motor loses
position, causing a rapidly spinning cutting tool to not be in the
correct place in relation to a big chunk of steel or similar.

If the position is slightly off, the "tolerances" or similar are messed
up. If it goes off by a larger amount, a "catastrophic failure" can
result (such as the cutting tool exploding).


Say, the cutting tool can happily take off ~ 0.010" in a cut, but if you
suddenly try to take off 0.050" or more in a cut, the tool explodes...

Because, well, steel is kinda hard...

Aluminum is at least a little more forgiving in this sense.
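
Rough arithmetic for why scheduling jitter is fatal for software step
generation; the step rate, steps/mm, and stall length below are illustrative
assumptions, not numbers from any machine mentioned here.

step_rate_hz = 20_000      # assumed software step rate ("10s of kHz")
steps_per_mm = 400         # assumed axis scaling (screw pitch x microstepping)
stall_ms = 5               # a plausible scheduling hiccup on a desktop OS

missed = step_rate_hz * stall_ms / 1000
print(f"{stall_ms} ms stall at {step_rate_hz} Hz = {missed:.0f} step periods, "
      f"~{missed / steps_per_mm:.2f} mm of commanded motion either lost or "
      f"crammed into a catch-up burst the motor may not follow")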



> But I agree, of course, that there are OS's that do preemptive
> multitasking and _still_are not suitable for real-time systems.
>

I was thinking of the whole thing of "try to use Windows or Linux with a
parallel port to drive a CNC machine's stepper drivers".

This sort of setup is "not ideal"...

Like, "try to use a web browser while machining something and your CNC
machine starts jerking and losing its position" levels of bad...

This would possibly be "slightly less bad" with servos, but still not ideal.


Partly, it is because a lot of the PCs which have the needed parallel
port are also typically limited in other ways, say:
Single core 1.67 GHz CPU, 32-bit only;
2GB of RAM;
...

Because, say, most systems with 64-bit capable CPUs, multi-core CPUs,
able to use more RAM, ..., almost invariably lack a dedicated parallel port.

Or, rarely, one can find something with a 64-bit capable dual-core which
still has a parallel port...



Luckily at least, a lot of the commercially made CNC conversion kits
come with controller boxes which seem to handle the G-Code interpreter
internally (for one machine, the parallel cable plugged into this box
rather than directly into the PC; but another machine uses the parallel
port on the PC to directly run the steppers).

Some other kits have the motor drivers directly integrated into the
control box (but the controller box is a lot bigger in this case).


But, then one does have the tradeoff of being limited to the
capabilities of closed-source tools, and needing to contact the CNC kit
company to redo the registration key every time they change something
non-trivial in the PC that is running the software (them using a copy
protection scheme that changes the machine hash pretty much whenever
anything non-trivial changes on the PC in question).


Though, at least, the software from the other company is happy enough as
long as it sees it is connected to that company's control box (seemingly
using the presence of the control box as its DRM scheme).

Either way, the controller boxes are "not exactly cheap".

...


This does sort of create a desire though for "some other option"...


Many people doing "full custom" machines typically also seem to write
their own CNC controller software (often using a microcontroller as the
go-between).

I had before tried using a RasPi (for a few experimental machines), but
task-scheduling within Linux interfered with my ability to get the
desired level of timing stability.


One can use an FPGA or microcontroller, but (if plugging into a PC) one is
mostly limited to options which have an FTDI chip or similar (the other
option being to use an RS232 serial port, which is at least still a
little easier to find on a PC than a parallel port).

Mostly, this matters in that it means one needs something like a
LaunchPad or Arduino or something, rather than being able to use a bare
AVR8 or MSP430 in a DIP socket (since these natively lack any way to
communicate over USB, and would usually do so via an FTDI chip
or similar on the board; with the microcontroller itself using an RS232
interface or similar to talk to the FTDI chip).

But, yeah, something like the RasPi Pico could also fall into this camp
(and is arguably a bit more capable than a lot of the AVR based Arduino
devices).

...

