
Why Bloat Is Still Software’s Biggest Vulnerability


Fred Bloggs

Feb 10, 2024, 11:10:18 AM
Another failure of 'let the market decide.'

https://spectrum.ieee.org/lean-software-development

John Larkin

Feb 10, 2024, 12:07:37 PM
Complexity is a game that some people enjoy.

And some people like simplicity. Their stuff works better.

Fred Bloggs

Feb 10, 2024, 1:11:08 PM
On Saturday, February 10, 2024 at 12:07:37 PM UTC-5, John Larkin wrote:
> On Sat, 10 Feb 2024 08:10:13 -0800 (PST), Fred Bloggs
> <bloggs.fred...@gmail.com> wrote:
>
> >Another failure of 'let the market decide.'
> >
> >https://spectrum.ieee.org/lean-software-development
> Complexity is a game that some people enjoy.

In the case of software it looks like the bloat arises from all the development resources available, things like libraries of software, enabling businesses to create products they couldn't have begun to build otherwise. Engineering is full of, I don't know what you'd call them, tools where the mental midget who couldn't write the simplest program can now specify something graphically and the development environment inserts the code. It may be much more high-level by now. They claim the rush to completion justifies it. But isn't it the case that a moron who can't do anything in any amount of time is always in a rush when asked to do the slightest little thing?

Anthony William Sloman

Feb 10, 2024, 10:29:45 PM
In the limited number of cases where it is complex enough to work at all. The simplest mechanism that actually works is always intellectually satisfying, but there's no guarantee that it works all that well.
Breaking up a complex problem into simpler sub-problems and solving each one of them separately tends to be a safer approach, and leads to more easily comprehensible circuits.

John Larkin prefers to avoid the complexities of transformer design. This isn't a virtue.

--
Bill Sloman, Sydney

Jan Panteltje

Feb 11, 2024, 1:43:41 AM
On a sunny day (Sat, 10 Feb 2024 09:06:03 -0800) it happened John Larkin
<j...@997PotHill.com> wrote in <g4bfsidsbmg316tog...@4ax.com>:
It is cool coding in asm without using external libraries.
I can do anything I like in KILOBYTES:
https://panteltje.nl/panteltje/pic/scope_pic/index.html
nice to do Fourier transform in a few bytes... sine lookup table
has a Usenet compatible output, use fixed size font:
https://panteltje.nl/panteltje/pic/scope_pic/screen_dump2.txt
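
The trick is just a multiply-accumulate against the table.
A toy C illustration, typed up for this post (NOT the real PIC code,
sizes and names made up):

#include <math.h>
#include <stdio.h>

signed char sine[256];   /* a few hundred bytes of ROM */

int main(void)
{
    int n, k = 5;
    long long re = 0, im = 0;
    short x[64];

    for (n = 0; n < 256; n++)   /* on the PIC this table is precomputed */
        sine[n] = (signed char)(127.0 * sin(2 * 3.14159265358979 * n / 256));

    for (n = 0; n < 64; n++)    /* toy input: a tone sitting in bin 5 */
        x[n] = (short)(100 * sin(2 * 3.14159265358979 * 5 * n / 64));

    for (n = 0; n < 64; n++)    /* one DFT bin, integer MAC only */
    {
        unsigned char phase = (unsigned char)(k * n * 4);  /* 4 = 256/64 */
        re += x[n] * sine[(unsigned char)(phase + 64)];    /* cos = shifted sin */
        im -= x[n] * sine[phase];
    }
    printf("bin %d magnitude^2 = %lld\n", k, re * re + im * im);
    return 0;
}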

Most web things I have coded in a few lines of C,
started on a browser too, but that is a moving target.. takes too much time.
Also wrote this Newsreader I am posting this with, it runs on a Raspberry Pi4
raspberrypi: ~ # whereis NewsFleX
NewsFleX: /usr/local/bin/NewsFleX
raspberrypi: ~ # lb /usr/local/bin/NewsFleX
-rwxr-xr-x 1 root root 383796 Mar 13 2023 /usr/local/bin/NewsFleX*

'lb' is short for ls -rtl --color=none
383,796 bytes
So < 400 kB
Linked in is libforms for the GUI.
Old version for x86 here:
https://panteltje.nl/panteltje/newsflex/index.html
libforms however changed, so unless you use a very old version of that it won't work.

I have dropped that xforms lib too and still have a GUI...
https://panteltje.nl/pub/boats_and_planes.gif
runs 24/7
-rwxr-xr-x 1 root root 329604 Feb 7 2021 xgpspc
329,604 bytes
monitors planes and boat traffic, does navigation, auto-pilot, what not.
latest version even has a fire solution.. for defence of course
Only uses these libs, from the Makefile:
$(COMPILER) -o xgpspc $(XGPSPC) -lm -lpthread -lXaw -ljpeg
libmath, libjpeg and libXaw (for the display).

Simplicity, or simple city or whatever it was
of course gcc as compiler.
Or gpasm for the PIC asm code.

I think the ever more bloat comes from trying to sell ever more,
a capitalist trick to suck you for money.
More bloat causes need for ever more powerful hardware.
So bloat writers get shares in hardware manufacturers and get rich.
Microsore or whatever is a big example.

Cursitor Doom

Feb 11, 2024, 4:44:29 AM
On Sun, 11 Feb 2024 06:43:31 GMT, Jan Panteltje <al...@comet.invalid>
wrote:
That's all very impressive, Jan, but if you were *truly* a hardcore
programmer, you'd be using machine code. ;-)
More seriously, bloat enables coders to hide back doors much more
effectively. They'd never get away with that kind of subterfuge with
ASM.

Bill Sloman

Feb 11, 2024, 5:51:03 AM
On 11/02/2024 8:44 pm, Cursitor Doom wrote:
> On Sun, 11 Feb 2024 06:43:31 GMT, Jan Panteltje <al...@comet.invalid> wrote:
>> On a sunny day (Sat, 10 Feb 2024 09:06:03 -0800) it happened John Larkin <j...@997PotHill.com> wrote in <g4bfsidsbmg316tog...@4ax.com>:
>>> On Sat, 10 Feb 2024 08:10:13 -0800 (PST), Fred Bloggs <bloggs.fred...@gmail.com> wrote:

<snip>

> That's all very impressive, Jan, but if you were *truly* a hardcore
> programmer, you'd be using machine code. ;-)

Nobody writes machine code. Assembler has a one-to-one relationship with
machine code, but it is easier to write and read.

> More seriously, bloat enables coders to hide back doors much more
> effectively. They'd never get away with that kind of subterfuge with
> ASM.

Of course they would. Have you ever tried to make sense of poorly
documented and commented assembly code?

And it is possible to make machine code self-modifying - at least on
some machines - which offers even more opportunity to put in back doors
(and take them away again after you've exploited them).

--
Bill Sloman, Sydney

Jan Panteltje

Feb 11, 2024, 6:26:21 AM
On a sunny day (Sun, 11 Feb 2024 09:44:22 +0000) it happened Cursitor Doom
<c...@notformail.com> wrote in <nh5hsit657809ebhc...@4ax.com>:
I have used machine code in the long ago past.
Here is a nice Z80 disassembler I wrote:
https://panteltje.nl/panteltje/z80/index.html
from emails I know people still use it.


>More seriously, bloat enables coders to hide back doors much more
>effectively. They'd never get away with that kind of subterfuge with
>ASM.

Yes, all those libraries.. I follow the news and sometimes things are loaded
that have backdoors.

But asm: long ago I was involved with card hacking,
where things are read-only, and the problem was how to list the code of a PIC micro
(in those days in the smart cards for encrypted TV channels).
It is not always easy to list that code to get the secret algo they use to
encrypt TV transmissions.
I stopped when some EU politician got upset.. some persisted and got sentenced....
But that is how I learned about PICs and got interested in crypto.

Wanderer

Feb 11, 2024, 10:47:17 AM
On Sun, 11 Feb 2024 06:43:31 GMT, Jan Panteltje wrote:

>It is cool coding in asm without using external libraries.
>I can do anything I like in KILOBYTES:


Back in the 20th century, I knew how to program in C. I
knew what the assembly code would look like after I compiled it.

This is C++.


https://en.cppreference.com/w/cpp/links/libs


Now I program in Python. I really don't know how to program
in Python. I'm googlesmart. I google what I want to do,
download the appropriate library and follow the documentation.
I don't know if there is something malicious in there. That's
why I really hate every little stupid program and app that
thinks it needs to auto-update and needs admin approval to
install and screw with the operating system. If there is
a portable option, I get that and I keep old versions until
they break.

Jan Panteltje

Feb 11, 2024, 12:13:47 PM
On a sunny day (Sun, 11 Feb 2024 10:47:05) it happened
Wanderer<do...@emailme.com> wrote in <980...@dontemail.com>:
I do not speak phyton...
No need...
Cplushplush is a crime against humanity, operator overloading etc.
If I see some open source C++ code I like, then I usually recode it in C,
makes it simpler much of the time, did that with some Arduino code.

Sometimes you really need libraries,
I just came across this voice to text program for the Raspberry Pi last week:
https://www.tomshardware.com/raspberry-pi/raspberry-pi-project-lets-you-generate-ai-art-for-your-tv-using-voice-commands
leads to
https://www.hackster.io/petewarden/recognizing-speech-with-a-raspberry-pi-50b0e6
seems to be 1 GB size, for voice recognition you need a lot..
Have not tried or downloaded it yet.

tomshardware.com often has Raspberry Pi projects (all the way down on their main page).

All that said, I run Firefox browser on a Raspberry Pi4 8 GB..
I think it forwards everything I do to anybody ;-) ;-)

I have disabled WiFi and Bluetooth now in the startup file.
But still use a wireless keyboard....
So there is room for improvement as far as security goes.
Am using a Huawei USB stick for 4G internet access that works everywhere in the country or even Europe here.
But of course one could log / decode the RF...

see-eye-aaa must know everything about me by now...
May send them in a loop!
If they were not there yet....

Cursitor Doom

Feb 11, 2024, 12:56:36 PM
On Sun, 11 Feb 2024 11:26:15 GMT, Jan Panteltje <al...@comet.invalid> wrote:
Many thanks for that well thought-out and well-reasoned response, Jan.
Nice to hear from someone who knows what they're talking about instead
of some half-baked garbage from a moron like Bill Sloman who wouldn't
even be able to set up something as elementary as an Antikythera
orrery. ;-)

Cursitor Doom

Feb 11, 2024, 1:05:04 PM
On Sun, 11 Feb 2024 10:47:05, Wanderer<do...@emailme.com> wrote:

>On Sun, 11 Feb 2024 06:43:31 GMT, Jan Panteltje wrote:
>
>>It is cool coding in asm without using external libraries.
>>I can do anything I like in KILOBYTES:
>
>
>Back in the 20th century, I knew how to program in C. I
>knew what the assembly code would look like after I compiled it.
>
>This is C++.
>
>
>https://en.cppreference.com/w/cpp/links/libs

I never got on with C++. C has a certain elegance to it that I very
much like and I've never moved on from it. In fact I'm such a purist,
I stay faithful to the K&R variant. They tell me it's limiting to do
that, but it does *everything* I need to do so why go further? I find
the simplicity and lack of unnecessary bloat very appealing. I'd
probably still be coding in ASM if C hadn't come along. For me at
least, K&R C is perfection.


>Now I program in Python. I really don't know how to program
>in Python. I'm googlesmart. I google what I want to do,
>download the appropriate library and follow the documentation.
>I don't know if there is something malicious in there. That's
>why I really hate every little stupid program and app that
>thinks it needs to auto-update and needs admin approval to
>install and screw with the operating system. If there is
>a portable option, I get that and I keep old versions until
>they break.

Very wise. I like your style, Wanderer!

Jeroen Belleman

Feb 11, 2024, 3:04:26 PM
Now that you mention it: That piece of hardware was actually pretty
sophisticated, and I think that even today, only a few people would
have been able to use it to good effect.

There is a series of videos by someone who built a replica, and he
explains its workings at some length. Search for "clickspring
antikythera" on youtube. I found it fascinating, and also somewhat
humbling to realize that my knowledge of our solar system is nothing
compared to what was encoded in this mechanism.

Of course, these days software does it better.

Jeroen Belleman

Don Y

Feb 11, 2024, 3:43:42 PM
On 2/11/2024 10:47 AM, Wanderer wrote:
> On Sun, 11 Feb 2024 06:43:31 GMT, Jan Panteltje wrote:
>
>> It is cool coding in asm without using external libraries.
>> I can do anything I like in KILOBYTES:
>
> Back in the 20th century, I knew how to program in C. I
> knew what the assembly code would look like after I compiled it.

A minor point; you THOUGHT you knew what the ASM would look like...
you knew what the processor should be *doing*.

Newer compilers are often considerably smarter than the
programmers using them. They will rearrange code (where
dependencies allow it) to avoid pipeline stalls. Or,
realign structures to avoid misaligned memory accesses.
Or even eliminate calls to functions that it can inline
more efficiently. Or, avoid generating code that will
tickle a "bug" in the targeted processor's HARDWARE!
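
A trivial illustration of the alignment point (the sizes in the
comments assume a typical 64-bit ABI, so treat them as an assumption,
not a guarantee):

#include <cstdio>

struct Padded   { char a; double b; char c; };   // commonly 24 bytes
struct Repacked { double b; char a; char c; };   // commonly 16 bytes

int main() {
    // the compiler inserts padding so 'b' lands on its natural
    // boundary; reordering the members is left to the programmer
    printf("%zu vs %zu\n", sizeof(Padded), sizeof(Repacked));
}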

But, the point of knowing what the processor is expected
to be doing is important.
C++ (and other OOPS) adds complexity to help the programmer
manage the complexity in his program/solution.

A big (huge!) part of software development is modeling
the application and its domain. A good model makes the
implementation intuitive... it just *fits*. This is
important because code is meant to be READ, not WRITTEN;
if the next guy (that you likely have never met and with
unknown capabilities) can't understand what you've written,
expecting him to make fixes or enhancements is a fool's hope.

E.g., my system is entirely object *based* despite not
being written in an OO language (think about the difference).
It makes sense for a developer (or user) to think of verbs
and nouns -- OPEN the GARAGE DOOR.

In a procedural language, you would have a plethora of
"routines" cluttering up the namespace: open_garage_door(),
close_garage_door(), open_front_door(), open_side_door(),
open_car_door(), open_access_panel_to_furnace(), etc.

They would all share some common characteristics -- yet,
each would have to REMEMBER to include those in its
implementation. (e.g., do you have to UNLOCK the door
before you can open it? Even if this only applies to
SOME doors, having it present in a base class reminds
you that you have to address that -- instead of waiting
for the issue to manifest as a bug!)

In my world, I can "move" an object to a different
"backing server" (the active piece of code that handles
requests to operate on objects of a particular type).
Or, even to a different backing server on another
processor elsewhere in the network (and, moving the
server -- which, of course, is also an object! -- there
to be waiting for the object to arrive!)

I.e., there are verbs (methods) that apply to all objects.
Defining the system as object based reminds me that I have to
address each of these verbs for EVERY object type.

It also makes it easier for me to address OPENing *any*
door, regardless of the actual "type" of door -- because
anything that derives from "Door" has an "open" method.
I don't have to say:

case (door_type) {
    GARAGE  => open_garage_door();
    HOUSE   => open_front_door();
    FURNACE => open_access_panel_to_furnace();
    ...
}

[What happens when there's a new door type that THIS code
doesn't explicitly recognize?? DOGGIE => open_doggie_door()]

> Now I program in Python. I really don't know how to program
> in Python. I'm googlesmart. I google what I want to do,
> download the appropriate library and follow the documentation.

That's part of the bloat problem (and the decline of software
quality, in general). It's *programming* not software engineering.
ANYONE can program... all you have to do is throw keystrokes at
it until it APPEARS to work! You don't need to understand
the hardware, the operating system, the libraries, etc.

Another part of the problem is fat interfaces; too many BUILT ways
to solve the same problem. And, nothing that enforces your choice
of solutions. The fact that these "mechanisms" are so poorly
characterized means you are free to IMAGINE how it will work IN
YOUR CASE instead of having a contract that you can both rely on
("you will use me in this way and I will provide this result").

Imagine semiconductors being as loosely characterized: a diode
allows for current to flow one way (how MUCH current? what is the
drop across the junction? how much power can the packaged device
dissipate? at what reverse voltage will it breakdown?).

How is this code supposed to work:
memcpy(LAST_LOGICAL_MEMORY_ADDRESS-VALUE, some_address, VALUE+7)
Or this?
memcpy(some_address, LAST_LOGICAL_MEMORY_ADDRESS-VALUE, VALUE+7)

How *will* it work on a 68K? 80386? ARMv8?

On bare metal? Under a toy OS? Under a "real" OS?

Operator overloading is a HUGE win, esp for arithmetic operators.
I can say:
temp = A.x * (B.y - C.y)
+ B.x * (C.y - A.y)
+ C.x * (A.y - B.y);
area = temp/2;
and:
- have a greater chance of getting it right
- have a greater chance of The Next Guy recognizing what I've done!
when all of those operators have been properly overloaded for:
Point A,B,C;
data -- which, today, I may have decided have components that are
Q24.8 but, tomorrow, I may decide should be Q40.24!

And, instead of just declaring A, B and C as simple structs made of
integers, a constructor for each must be (silently) invoked... in
case there are any niggling details involved.

[did you miss the implied casts to "temp" and "area"'s respective
data types? What if I want to change those types? how much of
THIS code will change???]

Imagine coding that in a procedural language and The Poor Bloke
who has to read what you've written!

coord_size_t t1 = coordinate_sub(B.y, C.y);
coord_size_t t2 = coordinate_mul(A.x, t1);
...

But, there are costs to this, imposed by the language. I run
similar operations MILLIONS of times each second in my gesture
recognizer... overhead has a cost! :<
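
For the skeptics, a minimal sketch (hypothetical names; "coord" is a
plain double here, standing in for the Q24.8 class you'd write):

#include <iostream>

using coord = double;   // later: a fixed-point class carrying its own
                        // operator*, operator- and operator/ overloads

struct Point { coord x, y; };

coord triangle_area(Point A, Point B, Point C) {
    coord temp = A.x * (B.y - C.y)
               + B.x * (C.y - A.y)
               + C.x * (A.y - B.y);
    return temp / 2;    // signed area; negative if wound clockwise
}

int main() {
    std::cout << triangle_area({0,0}, {4,0}, {0,3}) << "\n";   // prints 6
}

The formula reads the same tomorrow when coord changes underneath it;
only the overloads move.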

> I don't know if there is something malicious in there. That's
> why I really hate every little stupid program and app that
> thinks it needs to auto-update and needs admin approval to
> install and screw with the operating system. If there is
> a portable option, I get that and I keep old versions until
> they break.

Portable just bundles the dependencies into the executable.
So, you end up with larger binaries -- that *can't* be
upgraded (someone has to build you a new portable version
with whichever dependencies -- or application -- updated).

Much of the reason for "bloat" can be attributed to (imagined?)
user demands. The Microsoft Mentality has users looking for
something to click on when they are performing a task.
E.g., spell check a document (no, it must do this WHILE they
are typing cuz they LOVE being distracted by decorated text
alerting them to the POSSIBILITY of a misspelling!).

The UNIX Mentality had smarter users who knew how to plumb
applications together to get a desired result.

E.g., to look for duplicate files on a machine, you
could recursively parse the hierarchy (or the portion of
interest) with "find <hierarchy> -name '*' -print" -- to
ignore all of the ".<whatever>" files. And, while
doing so, compute the MD5 hash of the file. Storing
these as (hash, pathname) in a "flat database" (i.e., FILE!),
you could then sort on the hash and let uniq spot the repeats.

In the Microsoft world, you need an app -- with its own
GUI! -- to do this.

Much, also, is a result of programmers being unable to
grasp (grok) all of an application's detail. So, they
may implement the same functionality many different
times (ways?) to address the same problem in different
places. Why do Windows apps all report the sizes of
files differently? Why is a "0 byte" file shown as "1K"?
Is that MB or MiB? (and why do you have to keep explaining
it to people?)

Try unpacking a deep file hierarchy from an archive
to a point deep in the windows filesystem. Why can't
it create:
/some/deep/point/in/the/windows/file/system/archive/with/a/long/path?
Yet, I can unpack the archive to C:\ and then *move* it to that
point in the filesystem!

If a program surprises the user with its behavior, is that
a bug?

In embedded devices, much bloat is due to folks overprovisioning
their solutions: "Let's use Linux!" Really? That's like
taking a drive to the beach in the "semi" tractor trailer (lorry)!

[Any idea how many lines of code -- i.e., latent bugs -- that
"component" brings into your design? Do you ANNUALLY send a
salary-scale contribution to the community that you are hoping
will fix problems that your customers uncover?? Or, are you
just a leech?]

Then, to justify the bloat that they've just built into their
product, they create reasons to use the extra features that
it makes available!

"We can create a folder for each DRIVER. And, under those, store
the prefered seat position in a file called seat_position. Likewise,
the mirror settings in mirror_settings. And, favorite radio
stations in a Radio subfolder with a file for each preset:
1, 2, 3, 4..."

So, they have now BURDENED their design with all of that "extra
stuff" that they didn't THINK they needed in the original
design document. Plus, imagined uses for it. And, of course,
the extra code that using it requires! (being able to search
for each file in the hierarchy, give the driver a NAME,
parse the contents of each file to ensure it is appropriate
for the type of data EXPECTED within, report errors to the
user when it is NOT, etc.)

And, they now have dependencies on a bunch of code that they
likely are not competent to maintain (show of hands: how many
kernel hackers??) let alone fully understand.

How big should the ARP cache be? What sort of admission policy
should it use? How to ensure your DNS isn't poisoned? What
if the filesystem gets corrupted (power outage during a write)?
Where do you keep the defaults to reinitialize all of those
corrupted files?

How do you FORMALLY test that design? Esp the parts that
you likely don't understand -- do you know where to probe
for weaknesses? And, their consequences??

"as simple as can be -- and no simpler"

Can you keep the details of your design IN YOUR SINGLE CRANIUM
in enough detail that you can explain how it works to a new
hire (and respond to detailed questions)?

We have a saying, here:
Enjoy your visit! Take someone BACK home with you!
You should have the same attitude when revisiting your code;
what can I remove or how can I refactor this to make it
SIMPLER?

Don Y

Feb 11, 2024, 3:56:45 PM
On 2/11/2024 10:47 AM, Wanderer wrote:
> Back in the 20th century, I knew how to program in C. I
> knew what the assembly code would look like after I compiled it.
>
> This is C++.
>
> https://en.cppreference.com/w/cpp/links/libs

You can also look at different "strains" of C++
(e.g., EC++) to avoid some of the cruft/overhead

[And, there are other OO languages that have
friendlier characteristics]

John Larkin

Feb 11, 2024, 6:10:26 PM
https://en.wikipedia.org/wiki/List_of_programming_languages

Applications are boring. It's much more fun to invent programming
languages.

What's telling is that new programming languages are popular and older
ones aren't.

https://stackoverflow.blog/2017/10/31/disliked-programming-languages/

In other words, programming languages are fads.

Don Y

Feb 11, 2024, 7:06:38 PM
On 2/11/2024 1:43 PM, Don Y wrote:
> How is this code supposed to work:
>     memcpy(LAST_LOGICAL_MEMORY_ADDRESS-VALUE, some_address, VALUE+7)
> Or this?
>     memcpy(some_address, LAST_LOGICAL_MEMORY_ADDRESS-VALUE, VALUE+7)
>
> How *will* it work on a 68K?  80386?  ARMv8?
>
> On bare metal?  Under a toy OS?  Under a "real" OS?

This is actually a delightful example of how poorly characterized
software components are (and why you should have your own sources
for everything that you use in a design!).

Taking just the first memcpy(3c) example...

*ASSUME*[1] that data is copied at "from" to "to" in ascending
sequential order, starting at "from". Eventually, "to" will
hit the LAST_LOGICAL_MEMORY_ADDRESS. Then, *pass* it. Will
this wrap around to "0x0"?

What if the LAST_LOGICAL_MEMORY_ADDRESS isn't the largest
representable as an integer? E.g., a device that has a limit
on "code space" that is smaller than the total address space?

What if "to" is "from+1"? E.g., will ABCDEFGHIJ end up
as AABCDEFGHIJ? Or, AAAAAAAAAA? Or, ABCDABCDAB? Or...

What if "to" is "from"? Will *any* reads (or writes) be
performed?

What if you naively use this to initialize a (memory-mapped)
I/O device? Will the "registers" in the device be accessed
in ascending, sequential order? Or, could they be
accessed in the order B A D C E F H J I? Is there anything
that ensures a location is only updated *exactly* once?
Is A A A A A A A A A A A A B C D E F G H I J possible?

What if the OS SIGSEGV's (as expected)? How much of this
"work" will have been done? Can you just abort the balance
of the operation on the assumption that "the first part"
did what you wanted it to do??

Defend each answer! :> Now, pick a different library
implementation or a different processor and make the same
claims... (and that's a trivial STANDARD LIBRARY function!)

-----------
[1] there are no guarantees that this assumption is correct
or any BETTER than any of the others that follow!
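
The one overlap case the standard DOES pin down is memmove(3c): it must
behave as if the source were first copied to a scratch buffer. A minimal
sketch of the "to is from+1" question, using only the standard library:

#include <cstdio>
#include <cstring>

int main() {
    char buf[] = "ABCDEFGHIJ";
    // With memcpy, overlapping regions are UNDEFINED: you might get
    // AABCDEFGHI, or AAAAAAAAAA, or something else entirely, depending
    // on how the library implementation walks the buffers.
    // memmove is contractually required to give exactly one answer:
    std::memmove(buf + 1, buf, 9);
    std::puts(buf);    // guaranteed: AABCDEFGHI
}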


Anthony William Sloman

Feb 11, 2024, 7:53:06 PM
It's not all that well thought out. That isn't Jan's strong point - or Cursitor Doom's either.

> Nice to hear from someone who knows what they're talking about instead
> of some half-baked garbage from a moron like Bill Sloman who wouldn't
> even be able to set up something as elementary as an Antikythera
> orrery. ;-)

There's nothing elementary about the Antikythera orrery - which is more a calculating engine than an orrery anyway.

https://en.wikipedia.org/wiki/Orrery

https://en.wikipedia.org/wiki/Antikythera_mechanism

The mechanism seems to encode the arc along which the user would have to stand to observe exactly the predicted planetary movements, which is a point that doesn't seem to have made it into the Wikipedia write-up.

--
Bill Sloman, Sydney

Dan Purgert

Feb 13, 2024, 7:01:06 AM
On 2024-02-11, Don Y wrote:
> [...]
> E.g., my system is entirely object *based* despite not
> being written in an OO language (think about the difference).
> It makes sense for a developer (or user) to think of verbs
> and nouns -- OPEN the GARAGE DOOR.

You open the garage door. It's dark inside.


( sorry, had to :) )

--
|_|O|_|
|_|_|O| Github: https://github.com/dpurgert
|O|O|O| PGP: DDAB 23FB 19FA 7D85 1CC1 E067 6D65 70E5 4CE7 2860

Don Y

Feb 13, 2024, 1:25:09 PM
On 2/13/2024 5:00 AM, Dan Purgert wrote:
> On 2024-02-11, Don Y wrote:
>> [...]
>> E.g., my system is entirely object *based* despite not
>> being written in an OO language (think about the difference).
>> It makes sense for a developer (or user) to think of verbs
>> and nouns -- OPEN the GARAGE DOOR.
>
> You open the garage door. It's dark inside.

You cast a spell of continual light -- then, extinguish it
lest your companions ALSO notice the giant spider (McLaren
765LT) hiding therein!

Slipping inside, you drive off, crushing the Sorcerer's foot
in the process. Incensed, he throws a fireball in
your direction but, alas, too slow to catch up with the 750
horses under the hood...

alb...@spenarnc.xs4all.nl

Feb 14, 2024, 8:05:41 AM
In article <uqa8qg$ui04$1...@dont-email.me>,
Bill Sloman <bill....@ieee.org> wrote:
>On 11/02/2024 8:44 pm, Cursitor Doom wrote:
>> On Sun, 11 Feb 2024 06:43:31 GMT, Jan Panteltje <al...@comet.invalid> wrote:
>>> On a sunny day (Sat, 10 Feb 2024 09:06:03 -0800) it happened John Larkin <j...@997PotHill.com> wrote in
><g4bfsidsbmg316tog...@4ax.com>:
>>>> On Sat, 10 Feb 2024 08:10:13 -0800 (PST), Fred Bloggs <bloggs.fred...@gmail.com> wrote:
>
><snip>
>
>> That's all very impressive, Jan, but if you were *truly* a hardcore
>> programmer, you'd be using machine code. ;-)
>
>Nobody writes machine code. Assembler has a one-to-one relationship with
>machine code, but it is easier to write and read.

Nobody, huh? Smith does. He has written a compiler in hex code using only
a hex-to-bin converter.
https://dacvs.neocities.org/SF/
The takeaway is that it is easier than you expect.

>
>> More seriously, bloat enables coders to hide back doors much more
>> effectively. They'd never get away with that kind of subterfuge with
>> ASM.
>
>Of course they would. Have you ever tried to make sense of poorly
>documented and commented assembly code?
>
>And it is possible to make machine code self-modifying - at least on
>some machines - which offers even more opportunity to put in back doors
>(and take them away again after you've exploited them).
You would have to silence hysterical virus detectors before you could do that.
>
>--
>Bill Sloman, Sydney
>

Groetjes Albert
--
Don't praise the day before the evening. One swallow doesn't make spring.
You must not say "hey" before you have crossed the bridge. Don't sell the
hide of the bear until you shot it. Better one bird in the hand than ten in
the air. First gain is a cat purring. - the Wise from Antrim -

alb...@spenarnc.xs4all.nl

Feb 14, 2024, 8:09:57 AM
Totally agree. I'm waiting for the day someone manages to subvert one
of the mainstream browsers with a backdoor via the obligatory
daily updates.

Anthony William Sloman

Feb 14, 2024, 8:40:51 AM
On Thursday, February 15, 2024 at 12:05:41 AM UTC+11, alb...@spenarnc.xs4all.nl wrote:
> In article <uqa8qg$ui04$1...@dont-email.me>,
> Bill Sloman <bill....@ieee.org> wrote:
> >On 11/02/2024 8:44 pm, Cursitor Doom wrote:
> >> On Sun, 11 Feb 2024 06:43:31 GMT, Jan Panteltje <al...@comet.invalid> wrote:
> >>> On a sunny day (Sat, 10 Feb 2024 09:06:03 -0800) it happened John Larkin <j...@997PotHill.com> wrote in
> ><g4bfsidsbmg316tog...@4ax.com>:
> >>>> On Sat, 10 Feb 2024 08:10:13 -0800 (PST), Fred Bloggs <bloggs.fred...@gmail.com> wrote:
> >
> ><snip>
> >
> >> That's all very impressive, Jan, but if you were *truly* a hardcore
> >> programmer, you'd be using machine code. ;-)
> >
> >Nobody writes machine code. Assembler has a one-to-one relationship with
> >machine code, but it is easier to write and read.
>
> Nobody, huh? Smith does. He has written a compiler in hex code using only
> a hex-to-bin converter.
> https://dacvs.neocities.org/SF/
> The takeaway is that it is easier than you expect.

Nobody sane. If you go to the trouble of memorising the hex codes it is obviously possible, but why bother?

> >> More seriously, bloat enables coders to hide back doors much more
> >> effectively. They'd never get away with that kind of subterfuge with
> >> ASM.
> >
> >Of course they would. Have you ever tried to make sense of poorly
> >documented and commented assembly code?
> >
> >And it is possible to make machine code self-modifying - at least on
> >some machines - which offers even more opportunity to put in back doors
> >(and take them away again after you've exploited them).
> You would have to silence hysterical virus detectors before you could do that.

Or get around them some other way. When I wrote self-modifying code it was for a PDP-8, and long before the days of virus detectors.

--
Bill Sloman, Sydney

Don Y

Feb 14, 2024, 1:09:29 PM
On 2/14/2024 6:05 AM, alb...@spenarnc.xs4all.nl wrote:
> Nobody, huh? Smith does. He has written a compiler in hex code using only
> a hex-to-bin converter.
> https://dacvs.neocities.org/SF/
> The takeaway is that it is easier than you expect.

One writes code to be *read*. Just because you CAN do something
doesn't mean you SHOULD. People spend inane
amounts of time arranging dominos... just to knock them over
(what's the point in that?)

A kid I attended school with built his own little computer (pre-CP/M),
wrote a monitor in machine code that he then burned into ROM.
Used that to write an assembler. Then an OS, etc. Interesting
"hobby" and worthwhile only if your time has no value.

I had a job where we had a cheap, *live* system monitor that would
let us watch variables and patch code while the system was running.
But, the UI was limited to a six digit *numeric* display -- which
means "split octal" (0xFFFF is 377377) instead of hexadecimal -- and
keypad. So, you had to memorize opcodes in octal and convert
all arguments to that prior to use/recognition.

"Walking" (ADDRESS++) through the code required you to recognize
opcodes and recall how many bytes followed before the next opcode
would be encountered. Or *if* it would be encountered (as absolute
and relative jumps/calls could interrupt the sequential flow).

Having that *live* ability to interact with the system was a huge
asset (at a time when ICE was uncommon -- and expensive!) and was
present in every product that we released (so, you could carry a
tiny piece of hardware to a site and interact with the system).
You could twiddle data and code and watch how the system reacted
without having to go back to the development environment and
turn the crank for a "what if".

But, the requirement to "hand disassemble/assemble" was just ridiculous!
(why not the same hardware interface augmented with some code to make
the UX less risky? Why not tied into the symbol table of the
running executable so you KNEW what you were seeing and tweaking?)

Prior to that, I'd written machine code (again in octal) for the Nova.
Data entry via the 16 toggle switches on the front panel. Data
readout via the 16 indicator lamps associated with them.

Again, a convenient capability (when access to an assembler/compiler
wasn't possible in the field... "I need to throw together a little
routine to exercise some particular bit of hardware so I can
'scope the hardware") but annoyingly complex and not a very portable
skillset.

Dan Green

Feb 14, 2024, 1:28:38 PM
I write in machine code sometimes when it's the best approach. On the
comp.lang.c newsgroup, we've had a *lot* of entries for the
'obfuscated C contest' over the years and a sub-set of us decided it
would be a hoot to have an obfuscated machine code contest as well.
Personally I found it really, really enjoyable (I was in the minority
as we never had another one, though).

Cursitor Doom

Feb 14, 2024, 1:34:34 PM
On Tue, 13 Feb 2024 12:00:59 -0000 (UTC), Dan Purgert <d...@djph.net>
wrote:

>On 2024-02-11, Don Y wrote:
>> [...]
>> E.g., my system is entirely object *based* despite not
>> being written in an OO language (think about the difference).
>> It makes sense for a developer (or user) to think of verbs
>> and nouns -- OPEN the GARAGE DOOR.
>
>You open the garage door. It's dark inside.
>
>
>( sorry, had to :) )

In German they would say "can you the garage door open make?"

Kind of makes more sense to a computer (or that gnome in the Star Wars
films).

Don Y

Feb 14, 2024, 1:57:26 PM
On 2/14/2024 11:28 AM, Dan Green wrote:
>> Again, a convenient capability (when access to an assembler/compiler
>> wasn't possible in the field... "I need to throw together a little
>> routine to exercise some particular bit of hardware so I can
>> 'scope the hardware) but annoyingly complex and not a very portable
>> skillset.
>
> I write in machine code sometimes when it's the best approach. On the
> comp.lang.c newsgroup, we've had a *lot* of entries for the
> 'obfuscated C contest' over the years and a sub-set of us decided it
> would be a hoot to have an obfuscated machine code contest as well.
> Personally I found it really, really enjoyable (I was in the minority
> as we never had another one, though).

Core Wars, anyone? :>

But, nowadays, you are -- most often -- interfacing to a system that
was written in a HLL. So, knowing where you are in the algorithm isn't
as easily discerned as the compiler could have moved code around, elided
things it thought superfluous, etc. If your goal is to get productive
work done, you'd likely want more assurances that your code would
be doing what you intended (and, sooner or later, you're going to
have to "write it for real").

The boot ROM (bipolar) on the Reading Machine was, IIRC, just sixteen
16-bit words. So, you didn't have the luxury of being able to
write what you *wanted* but, instead, had to settle for what would
*fit*. E.g., I think the software loaded to some random address
and then immediately copied itself to the correct address. The
"random address" happened to be the opcode for one of the other
instructions in the ROM.

In my early designs, we relied heavily on self-modifying code to
provide features at low cost. E.g., to disable a task, you
would change the opcode of the CALL to that task to another
"benign" opcode for another 3-byte instruction (cuz CALL was a
3-byte instruction). This allowed you to preserve the entry
point to the task (the target of the CALL) while disabling
its execution (by converting the instruction to something
that effectively ignored the "address" argument)
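
In today's terms (a toy C++ illustration with a made-up byte array; the
original patched the actual code memory, of course):

#include <cstdint>
#include <cstdio>

// On a Z80, CALL addr is 3 bytes: 0xCD, lo, hi. LD BC,nn is ALSO a
// 3-byte instruction (0x01, lo, hi) that ignores its "address", so
// swapping one opcode byte disables the task while preserving the
// entry point in the two operand bytes.
uint8_t code_image[] = { 0xCD, 0x34, 0x12 };    // CALL 0x1234 (the task)

void set_task_enabled(bool on) {
    code_image[0] = on ? 0xCD : 0x01;           // CALL <-> LD BC,nn
}

int main() {
    set_task_enabled(false);
    printf("opcode %02X, target still %02X%02X\n",
           code_image[0], code_image[2], code_image[1]);
}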

When I worked on i4004's, we each found utility in carrying
a cheat sheet of opcodes in our wallets -- a "pocket assembler".
The development tools were slow and klunky (and only one set
shared among us all!) so it was effective to patch binary images,
if you could fit your change into the space you were overwriting.

This was an efficient use of time when your access to the
"real" tools was limited to one or two turns of the crank
in an 8 hour day! While a colleague had *his* turn, you
could load the image from whichever 1702 was to be modified
into the programmer and manually patch the bytes. Then, write
the new image into a new 1702, plug it into the prototype
and see how your proposed change would perform. Then, mark up
the (ASM) listing to incorporate a cleaner patch next time
you had access to the development system.

Nowadays, everyone effectively has access to BETTER tools,
simulators, etc. so dealing with real hardware is more of
a nuisance...

Don Y

Feb 14, 2024, 1:59:14 PM
On 2/14/2024 6:09 AM, alb...@spenarnc.xs4all.nl wrote:
>> I don't know if there is something malicious in there. That's
>> why I really hate every little stupid program and app that
>> thinks it needs to auto-update and needs admin approval to
>> install and screw with the operating system. If there is
>> a portable option, I get that and I keep old versions until
>> they break.
>
> Totally agree. I'm waiting till one managed to subvert one
> of the mainstream browsers with a backdoor via the obligatory
> daily updates.

How do you know that what you've "frozen" hasn't already been compromised?
A latent virus can wreak havoc on your system *years* after infection.

Wanderer

Feb 15, 2024, 12:02:56 PM

On Wed, 14 Feb 2024 11:58:59 -0700, Don Y wrote:

>How do you know that what you've "frozen" hasn't already been compromised?
>A latent virus can wreak havoc on your system *years* after infection.

I update the anti-virus, spyware and malware programs.
I got fan-made Kerbal Space Program mods that want access
to the internet to check for updates. Or you download some
freeware to open some .crap extension that some fool used on
a file and this app wants admin privileges so it can integrate
into the operating system and become a default program. When I was in
college I had a programming professor who taught Pascal and his
big thing was 'scoping'. Every procedure should be self-contained
and have a simple, defined connection to the global program. Now every
program wants to pepper my computer with DLLs that the programmer
picked up from a package he got from who knows where.

You're right. I don't know if my current system is infected but I know
the odds of getting infected don't get better with more interactions and
more partners.

Don Y

Feb 15, 2024, 4:00:01 PM
On 2/15/2024 12:02 PM, Wanderer wrote:
> On Wed, 14 Feb 2024 11:58:59 -0700 Don Y wrote
>
>> How do you know that what you've "frozen" hasn't already been compromised?
>> A latent virus can wreak havoc on your system *years* after infection.
>
> I update the anti-virus, spyware and malware programs.

But, if the particular "infestation" hasn't "manifested", in
public, previously, it is possible that the AV tools don't
KNOW about it -- yet.

I don't run AV on this machine (which is the only out-facing
machine here). I keep nothing on it (just a browser and mail
client) so can easily reconstruct it, if ever hosed.

Every 6 months, I pull the disk and set it on a shelf. I
install a fresh disk with the original machine image on it.
And, run a virus scan on the disk pulled 6 months *earlier*
(recycling that media once it has been verified as clean).

> I got fan-made kerbal space program mods that want to access
> to the internet to check for updates. Or your download some
> freeware to open some .crap extension that some fool used on
> a file and this app wants admin priviledges so it can integrate
> into the operation and become a default program. When I was in
> college I had programming Professor who taught Pascal and his
> big thing was 'scoping'. Every procedure should be self-contained
> and have simple defined connection to the global program.

Yes. "Compartmentalization". Most applications fail on this
score (whether for desktop or for an embedded instrument/appliance)
because they reside in a single process container -- any thread
can dick with any other thread's data.

Ideally, you want to partition the problem into small units
that have *slim* interfaces (minimize information sharing);
this leads to a more robust solution, and a more *performant*
one, once you put up barriers between those units (processes).

> Now every
> program wants to pepper my computer with DLLs that the programmer
> picked up from a package he got from who knows where.

Yup. People have no idea what they are "baking into" their "programs".
So many dweebs thinking "let's use Linux" (as the basis for a product)
without any understanding of what that entails, what risks it presents
and how to *maintain* it ("We'll just ask on some forum and hope
someone takes an interest in solving OUR CUSTOMERS' PROBLEMS").

My current system enforces *every* contract so a program can't
access anything that it hasn't *declared* a need to use (data
as well as code). This exposes all of the interconnects so
a user (customer) can decide if he wants to install that app
("Why does this app need to access...?") or if he wants to
further thwart any abuses he might suspect of the app ("Yeah,
you can access my address book! But, the version that YOU see
won't have any names in it!")

> You're right. I don't know if my current system is infected but I know
> the odds of getting infected don't get better with more interactions and
> more partners.

I think you have to just think about what you want *from* your machine
and use that as a filter when deciding if a new app/upgrade should be
installed, or not.

I keep a set of laptops for "play"; install an app on them to see if
it adds any value (or, to just get some temporary use from it) and
then reinstall the disk image (3 minutes). I used to use disposable
VMs for this (make a clean copy of a VM; install app; play; flush
dirty VM) but laptops are quicker than spinning up all the disks
on my ESXi server!

I avoid updates/upgrades, in general, as they tend to just replace
one set of "bad behaviors" with another. *AND*, take time for me
to identify those! Much easier to live with the problems I already
know about than to keep swapping them for a new set!

[It would be nice if each update came in two flavors: ONLY bug fixes
and bug fixes PLUS changes]

Don Y

Feb 16, 2024, 4:13:58 PM
On 2/15/2024 3:13 AM, Peter wrote:
> Great to see you, Don, after all these years - 2006!!

Hey there, Mr "Pool" :>

I trust all is well, remodel long completed, kids now grown
(which of them was first to make you "Gramps"? and wasn't your
youngest looking for his pilot's license?), thus PBfH having
less of an impact on your life, etc.

> I had a customer many years ago who did write a ton of code in hex. To
> enable modifications they had a bit of space after each function, so
> edits to a function did not need shifting everything after it :)

But what was their *reason* for this? I had an employer (he *had* been
an engineer and deluded himself into thinking he could still *do*
engineering) who was stuck in the past -- as if the tools and
techniques he had used were still relevant, even a few years later!

When it took hours to assemble, link, burn images, it made sense to
have mechanisms to support minor tweaks to the code (overwriting
instructions with NOPs and filling in a "0xFF" postamble with new
code). But, nowadays, make world on even large projects is just
a coffee break -- and, you can dump your code into RAM to watch
it run (assuming you have to run on a target and not in a
simulator).

[Nowadays, I netboot images just for the savings that one step
makes possible!]

alb...@spenarnc.xs4all.nl

Feb 17, 2024, 5:49:40 AM
In article <uqivkh$2ontb$2...@dont-email.me>,
Don Y <blocked...@foo.invalid> wrote:
>On 2/14/2024 6:05 AM, alb...@spenarnc.xs4all.nl wrote:
>> Nobody, huh? Smith does. He has written a compiler in hex code using only
>> a hex-to-bin converter.
>> https://dacvs.neocities.org/SF/
>> The takeaway is that it is easier than you expect.
>
>One writes code to be *read*. Just because you CAN do something
>doesn't mean you SHOULD do that something. People spend inane
>amounts of time arranging dominos... just to knock them over
>(what's the point in that?)

This project is meant to be read. You can't be serious suggesting
that this is a tool to be used.
If you spend the time looking at the code, you'd discover that it
is quite educational, and it would make you wonder where the software
bloat comes from.
>

<SNIP>

RichD

Feb 17, 2024, 3:18:21 PM
On February 11, Don Y wrote:
> Newer compilers are often considerably smarter than the
> programmers using them. They will rearrange code (where
> dependencies allow it) to avoid pipeline stalls. Or,
> realign structures to avoid misaligned memory accesses.
> Or even eliminate calls to functions that it can inline
> more efficiently.

Indeed. This was the motivation, and result, of the original RISC
architecture. Revolutionary, in its day, as processors were
becoming astonishingly complex, with trig functions wired into
a single machine op code!

And still today, those ideas are misunderstood, as many engineers
say "yeah, RISC is faster because they have fewer instructions,
it's obvious." (also cheaper, in cost/benefit)

Given these considerations, does anybody write assembly code for
modern RISC processors?

--
Rich

John Larkin

Feb 17, 2024, 5:01:21 PM
On Sun, 11 Feb 2024 13:43:33 -0700, Don Y
<blocked...@foo.invalid> wrote:

>On 2/11/2024 10:47 AM, Wanderer wrote:
>> On Sun, 11 Feb 2024 06:43:31 GMT, Jan Panteltje wrote:
>>
>>> It is cool coding in asm without using external libraries.
>>> I can do anything I like in KILOBYTES:
>>
>> Back in the 20th century, I knew how to program in C. I
>> knew what the assembly code would look like after I compiled it.
>
>A minor point; you THOUGHT you knew what the ASM would look like...
>you knew what the processor should be *doing*.
>
>Newer compilers are often considerably smarter than the
>programmers using them. They will rearrange code (where
>dependencies allow it) to avoid pipeline stalls. Or,
>realign structures to avoid misaligned memory accesses.
>Or even eliminate calls to functions that it can inline
>more efficiently.


Or it may skip doing things that it thinks are unnecessary. As FPGA
compilers will do.

One trick is to do stuff so complex that the compiler optimizer gives
up. Or use terms that it can't know at compile time.
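
A minimal sketch of the second trick (names made up): a volatile
qualifier tells the compiler the value can't be known at compile time,
so the loop below can't be folded away into a constant.

volatile int iterations = 1000;    // compiler must assume this changes

int busy_work(void) {
    int acc = 0;
    for (int i = 0; i < iterations; ++i)    // re-read every pass
        acc += i;
    return acc;
}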


Lasse Langwadt Christensen

Feb 17, 2024, 6:22:49 PM
If you need that, you are doing something wrong.

John Larkin

Feb 17, 2024, 6:41:26 PM
I'm doing stuff that works.

Don Y

Feb 17, 2024, 7:16:33 PM
On 2/17/2024 3:49 AM, alb...@spenarnc.xs4all.nl wrote:
> In article <uqivkh$2ontb$2...@dont-email.me>,
> Don Y <blocked...@foo.invalid> wrote:
>> On 2/14/2024 6:05 AM, alb...@spenarnc.xs4all.nl wrote:
>>> Nobody, huh? Smith does. He has written a compiler in hex code using only
>>> a hex-to-bin converter.
>>> https://dacvs.neocities.org/SF/
>>> The takeaway is that it is easier than you expect.
>>
>> One writes code to be *read*. Just because you CAN do something
>> doesn't mean you SHOULD. People spend inane
>> amounts of time arranging dominos... just to knock them over
>> (what's the point in that?)
>
> This project is meant to be read. You can't be serious suggesting
> that this is a tool to be used.

Why write code that isn't going to be used?

> If you spend the time looking at the code, you'd discover that it
> is quite educational, and make you wonder where the software bloat
> comes from.

Why does your PHONE have a clock in it? AND a calendar?
Don't you remember today's date? Don't you wear a wristwatch?
Why does your PC have these data, too? Don't you have a
*phone* (er, wristwatch and a memory for dates)?

Which of these has the CORRECT information? How do you know?

[I have an "atomic clock" that is supposed to keep correct
time/date. Except, twice a year, it diddles the time for
DST start end. *But*, we don't observe DST! So, while
all of the DUMB clocks in the house correctly track the REAL
time all year round, this one has to be "fixed", twice
a year, to get it to IGNORE the DST changes! So, twice
a year I look at the clock and wonder why *my* notion of
the current time differs from its -- and then tell
it to move me to the next time zone, east or west, as
appropriate]

Why do I *need* an icon IN my word processor to invoke the
spell-checker? Thesaurus? Grammar check? Reading complexity
assessment? How does having them accessible *in* the word
processor help me if I am writing an email -- in an entirely
different "text editor" that mimics much of the functionality
of the word processor?

Does each possible application that allows for text entry
merit the availability of these tools? What if I wanted
to send an SMS?

Are you just LAZY and can't invoke each of these as STAND-ALONE
tools, as needed?

So, EVERY app gets burdened with new implementations of
tools that should be separate entities. To economize on
resources AND ensure for more consistent results!

[Windows lists files in "user-friendly order" -- which differs
from ALPHABETICAL order! Other applications opt for a more
traditional LRAlpha. Comparing file lists between the two
just ADDS to confusion (loss of productivity).]

Here is a trivial, ubiquitous algorithm written in machine code:

DD217A00DD7E01FE323005DD3401185FDD360100DD7E02FE3C3005DD3402184F
DD360200DD7E03FE3C3005DD3403183FDD360300DD7E04FE183005DD3404182F
DD360400216F000600DD4E0609DD7E05BE3005DD34051817DD360500DD7E06FE0C3005DD34061807
0C3805DD34061807DD360600DD34071F1F1C1F1E1F1E1F1F1E1F1E1F

No comments -- machines don't read comments! And, machines don't
need the code to be formatted to show instruction breaks -- it's
just a large array of bytes!

Of course, you would have to sort out where the code resides in
memory as all of the transfers of control *could* use absolute
addresses (I deliberately chose a processor that offers relative
addressing as a native addressing mode AND USED THAT to make the
binary less obscure).

Likewise, all absolute DATA addressing would require knowing WHAT
was being referenced, esp if the data are not as interrelated as
in this example.

Machine code *teaches* nothing -- beyond serving as a historical curio.


A bit easier to understand in ASM:

LD IX,TIMING

LD A,(IX+JIFFY)
CP A,JPS
JR NC,NXTSEC

INC (IX+JIFFY)
JR DONE

NXTSEC:
LD (IX+JIFFY),0

LD A,(IX+SECOND)
CP A,SPM
JR NC,NXTMIN

INC (IX+SECOND)
JR DONE

NXTMIN:
LD (IX+SECOND),0

LD A,(IX+MINUTE)
CP A,MPH
JR NC,NXTHR

INC (IX+MINUTE)
JR DONE

NXTHR:
LD (IX+MINUTE),0

LD A,(IX+HOUR)
CP A,HPD
JR NC,NXTDAY

INC (IX+HOUR)
JR DONE

NXTDAY:
LD (IX+HOUR),0

LD HL,DAYMON
LD B,0
LD C,(IX+MONTH)
ADD HL,BC

LD A,(IX+DAY)
CP A,(HL)
JR NC,NXTMON

INC (IX+DAY)
JR DONE

NXTMON:
LD (IX+DAY),0

LD A,(IX+MONTH)
CP A,MPY
JR NC,NXTYR

INC (IX+MONTH)
JR DONE

NXTYR:
LD (IX+MONTH),0

INC (IX+YEAR)

DONE:

six-significant-character labels (external linkage) not being unusual,
historically. (In practice, one would use a pointer to walk through the
list of counters instead of accessing them directly.)

In this form, the structure and intent are clearer. And, it uses exactly
the same run-time resources as the machine language variant, before!


This is how people THINK about the algorithm (assuming 0-based ordinals):

ASSERT( ( (jiffy >= 0) && (jiffy <= JIFFIES_PER_SECOND) ) )
if (!(jiffy >= JIFFIES_PER_SECOND-1)) {
    jiffy++;
} else {
    jiffy = 0;
    ASSERT( ( (second >= 0) && (second <= SECONDS_PER_MINUTE) ) )
    if (!(second >= SECONDS_PER_MINUTE-1)) {
        second++;
    } else {
        second = 0;
        ASSERT( ( (minute >= 0) && (minute <= MINUTES_PER_HOUR) ) )
        if (!(minute >= MINUTES_PER_HOUR-1)) {
            minute++;
        } else {
            minute = 0;
            ASSERT( ( (hour >= 0) && (hour <= HOURS_PER_DAY) ) )
            if (!(hour >= HOURS_PER_DAY-1)) {
                hour++;
            } else {
                hour = 0;
                ASSERT( ( (day >= 0) && (day <= DAYS_PER_MONTH[month]) ) )
                if (!(day >= DAYS_PER_MONTH[month]-1)) {
                    day++;
                } else {
                    if ( (0 == year % 4)
                      && ((0 != year % 100) || (0 == year % 400)) ) {
                        day++;
                    } else {
                        day = 0;
                        ASSERT( ( (month >= 0)
                               && (month <= MONTHS_PER_YEAR) ) )
                        if (!(month >= MONTHS_PER_YEAR-1)) {
                            month++;
                        } else {
                            month = 0;
                            year++;
                        }
                    }
                }
            }
        }
    }
}

Use of *longer* symbolic names makes the app easier to understand -- and
recognize! Even in the absence of explanatory comments (Joe Average likely
doesn't understand that 1900 and 2100 would NOT be leap years as the only
"century mark" in their lifetime -- 2000 -- was).

Note that the inclusion of the leap year test is intuitive -- yet absent in
the ASM and machine language versions. It would *likely* occur to Joe
Average if asked to explain how "time" is tracked.

And, I can add contractual declarations in a HLL that improve the quality
of the code, its readability AND detect *some* types of data corruption
at run time (as most folks code in a single process container so no
protection from *themselves* or other threads, co-executing, there) Note
that people *do* think of these constraints, even if on a purely intuitive
level. Expressing them explicitly makes this more formal (and allows the
compiler, runtime and future developers to be aware of these "intuitions")

But only a newbie would code it like that! THIS form is fraught with the
potential for syntax and logic errors. Quick: which closing brace goes
with which -- if the indentation (which may be incorrect as the compiler
doesn't enforce indentation rules!) wasn't there to "help"? (Have *I*
botched it??)

A smarter way of expressing the algorithm:

{
    while (FOREVER) {
        ASSERT( ( (jiffy >= 0) && (jiffy <= JIFFIES_PER_SECOND) ) )
        if (!(jiffy >= JIFFIES_PER_SECOND-1)) {
            jiffy++;
            break;
        }

        jiffy = 0;
        ASSERT( ( (second >= 0) && (second <= SECONDS_PER_MINUTE) ) )
        if (!(second >= SECONDS_PER_MINUTE-1)) {
            second++;
            break;
        }

        second = 0;
        ASSERT( ( (minute >= 0) && (minute <= MINUTES_PER_HOUR) ) )
        if (!(minute >= MINUTES_PER_HOUR-1)) {
            minute++;
            break;
        }

        minute = 0;
        ASSERT( ( (hour >= 0) && (hour <= HOURS_PER_DAY) ) )
        if (!(hour >= HOURS_PER_DAY-1)) {
            hour++;
            break;
        }

        hour = 0;
        ASSERT( ( (day >= 0) && (day <= DAYS_PER_MONTH[month]) ) )
        if (!(day >= DAYS_PER_MONTH[month]-1)) {
            day++;
            break;
        }

        if ( (0 == year % 4) && ((0 != year % 100) || (0 == year % 400)) ) {
            day++;
            break;
        }

        day = 0;
        ASSERT( ( (month >= 0) && (month <= MONTHS_PER_YEAR) ) )
        if (!(month >= MONTHS_PER_YEAR-1)) {
            month++;
            break;
        }

        month = 0;
        ASSERT( (year >= 0) )
        year++;
        break;
    }

    ASSERT( ( (jiffy >= 0) && (jiffy <= JIFFIES_PER_SECOND) ) )
    ASSERT( ( (second >= 0) && (second <= SECONDS_PER_MINUTE) ) )
    ASSERT( ( (minute >= 0) && (minute <= MINUTES_PER_HOUR) ) )
    ASSERT( ( (hour >= 0) && (hour <= HOURS_PER_DAY) ) )
    ASSERT( ( (day >= 0) && (day <= DAYS_PER_MONTH[month]) ) )
    ASSERT( ( (month >= 0) && (month <= MONTHS_PER_YEAR) ) )
    ASSERT( (year >= 0) )

    return;
}

This leads to a more structured way of recognizing and exploiting
the similarities in the code:

index = 0;
while (EOF != divisors[index]) {
    if (counters[index] < divisors[index]-1) {
        counters[index]++;
        break;
    } else {
        counters[index] = 0;
        index++;
    }
}
if (EOF == divisors[index])     /* ran past the last divisor: bump the year */
    counters[index]++;


const int
divisors[] = {
    JIFFIES_PER_SECOND,
    SECONDS_PER_MINUTE,
    MINUTES_PER_HOUR,
    HOURS_PER_DAY,
    DAYS_PER_MONTH,
    MONTHS_PER_YEAR,
    EOF
};

int
counters[] = {
    0,   // jiffy
    0,   // second
    0,   // minute
    0,   // hour
    0,   // day
    0,   // month
    0,   // year
};

STATIC_ASSERT( ( sizeof(counters)/sizeof(counters[0]) )
            == ( sizeof(divisors)/sizeof(divisors[0]) ) );

but, this requires a "fixup" routine to address the leap-year handling
(which happens at 00:00 on 29 Feb when we reset the date to 1 Mar in
NON-leap-years). It, also, makes it harder to add the rest of the
contractual constraints!

Should we use signed/unsigned chars instead of ints to save a few bytes
of DATA *and* TEXT? How hard would it be to make that change, here -- vs.
in a machine language implementation?

Which of all of these implementations will be easiest to *accurately* modify
to handle Daylight Savings Time? Not just knowing *how* to apply the time
shift but *when* (date AND time-of-day) to apply it?? Or, to generalize
timezone handling (Newfoundland, anyone?)

How does a person set the time, in Boston, to 2024 Mar 10 02:45? And,
what point in time does 2024 Nov 3 01:45 reference? What time is it in
Chicago, then?? Or, the date to 5 October 1582?

Of course, all of this data would have to be wrapped in a mutex to
ensure some other actor doesn't see partial updates (like a transient
Mar 1 00:00 while the time was being rolled over from Mar 31 to Apr 1
at midnight!) Or, better, provide an accessor instead of exposing the
raw data!
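
A minimal sketch of such an accessor, assuming POSIX threads (the struct
and the names are invented for illustration):

    #include <pthread.h>

    static pthread_mutex_t clock_lock = PTHREAD_MUTEX_INITIALIZER;

    struct timestamp { int jiffy, second, minute, hour, day, month, year; };
    static struct timestamp now;    /* only ever touched under clock_lock */

    /* callers get a self-consistent copy -- never a half-updated time */
    struct timestamp
    get_time(void)
    {
        struct timestamp copy;

        pthread_mutex_lock(&clock_lock);
        copy = now;
        pthread_mutex_unlock(&clock_lock);

        return copy;
    }

(The incrementing side takes the same lock around the whole cascade.)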


An OOPS approach would make each of these "counters" as *objects* and then
just increment one, see if it signals a wrap-around and, if so, increment
the next more significant one, etc. The "day" counter would have friends
in the month and year counters so it could decide whether 28 becomes 29,
or not.
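
In C, that cascade might be sketched as (all names invented here; the
"friends" are reached through the limit_of hook):

    struct counter {
        int             value;
        int             limit;                        /* wrap point */
        struct counter *next;                         /* more-significant unit */
        int           (*limit_of)(struct counter *);  /* NULL, or a hook that
                                                         consults month/year */
    };

    void
    tick(struct counter *c)
    {
        int limit = c->limit_of ? c->limit_of(c) : c->limit;

        if (++c->value < limit)
            return;                 /* no wrap-around */

        c->value = 0;
        if (c->next)
            tick(c->next);          /* carry into the next counter */
    }

The "day" counter supplies a limit_of that asks its month and year
friends whether 28 becomes 29, or not.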

And, all of those annoying details would be hidden from the person
READING the code; jiffies become seconds, seconds become minutes,
minutes become hours, etc. Do we care if the hours are presented
as [0..23]? Or, [1..24]? Or, [1..12][AP]?


Adding textual names would bury those *in* their respective definitions
instead of further polluting the namespace:

const char *
month_names[] = {
    "January",
    "February",
    ...
    "November",
    "December",
};
STATIC_ASSERT( ( sizeof(month_names)/sizeof(month_names[0]) )
            == MONTHS_PER_YEAR );

And, what if you had to I18N these?

const char *
month_names_french[] = {
    "Janvier",
    "Février",
    ...
    "Novembre",
    "Décembre",
};
STATIC_ASSERT( ( sizeof(month_names_french)/sizeof(month_names_french[0]) )
            == MONTHS_PER_YEAR );

const char *
month_names_italian[] = {
    "Gennaio",
    "Febbraio",
    ...
    "Novembre",
    "Dicembre",
};
STATIC_ASSERT( ( sizeof(month_names_italian)/sizeof(month_names_italian[0]) )
            == MONTHS_PER_YEAR );

And, then likely have to add abbreviations for each:

const char *
month_abbrevs_french[] = {
    "Janv.",
    "Février",
    ...
    "Nov.",
    "Déc.",
};
STATIC_ASSERT( ( sizeof(month_abbrevs_french)/sizeof(month_abbrevs_french[0]) )
            == MONTHS_PER_YEAR );

[Oh, my! They don't all fit in a 3+1 character representation!! So much for
fixed-width date fields... would you have noticed that and coded to accommodate
it? Or, would you blindly copy the abbreviation into a char[3] because
that's how you *think* of month abbreviations???]


You'd likely add a tool to determine day-of-week from date. And,
I18N that, as well. Along with their abbreviations.

And, as this is a utility that many apps would likely avail themselves
of, you'd probably wrap it in a library -- that other apps could
link into their own executables! But, gee, what if I don't really
want/need support for Tamil? Or, Marathi? Why does my app have
to bear that cost?


How much of this is bloat? Feeping Creaturism? Essential functionality?
MARKETABLE functionality? E.g., I like being able to let my PC tell
me what day of the week it is, as I don't have a normal exogenous
synchronizer in my lifestyle! Should the PC assume that functionality?
What's wrong with a WRISTWATCH? Why does a cell phone provide that?
Surely, I could be REQUIRED to use something other than the PC (or
cell phone) for timekeeping, right (in the interest of minimizing bloat!)?


The *justifiable* reasons for "code growth" (which differs from "bloat")
are to add functionality, structure and readability to the codebase.
These improve the quality of the code as well as its accuracy, reliability
and maintainability.


My first commercial product had 12KB of code and 256 bytes of RAM. And,
took 3 engineers to develop it. Entirely in ASM. Largely because there
were insufficient resources (memory, MIPS, real-time and tools) to solve the
problem as tasked. Yet this was a huge improvement on the i4004 design
that preceded it -- both from the development point of view and the UX!

I suspect I could recreate the entire codebase in a HLL in a few weekends.
I wouldn't have to write -- and PROFILE (to verify real-time performance!)
and debug -- a floating point package; I could just write expressions in
infix notation. I wouldn't have to consider which values were represented
in binary vs. BCD.
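
For instance -- a made-up expression, purely to show the contrast -- a
single HLL line like:

    reading = (raw - offset) * scale / 65536.0;

stands in for a long chain of calls into a hand-rolled floating point
package, with explicit loads, stores, normalizations and format
conversions at every step... each of which had to be written, profiled
and debugged.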

I likely wouldn't have to share *bits* in a byte as flags for the code
to govern its execution -- just write what you *want* the code to do and
let the compiler sort out the most efficient way of doing so! And, I
wouldn't have to read the code of my fellow developers to see if THEY
had decided to use a previously unused bit for THEIR routines; I'd declare
MY data PRIVATE to my modules and count on the compiler to enforce that
barrier -- yet still allow each of them to reuse names that I had
"already" used -- privately!! (i, j, x, y, index, count, etc.)

I could stub the code to emit key values for me to verify its operation
"on the bench" as well as in the actual implementation. I could sprinkle
invariants through the code to catch *my* errors.

Don Y

unread,
Feb 17, 2024, 7:29:27 PMFeb 17
to
On 2/17/2024 1:18 PM, RichD wrote:
> On February 11, Don Y wrote:
>> Newer compilers are often considerably smarter than the
>> programmers using them. They will rearrange code (where
>> dependencies allow it) to avoid pipeline stalls. Or,
>> realign structures to avoid misaligned memory accesses.
>> Or even eliminate calls to functions that it can inline
>> more efficiently.
>
> Indeed. This was the motivation, and result, of the original RISC
> architecture. Revolutionary, in its day, as processors were
> becoming astonishingly complex, with trig functions wired into
> a single machine op code!

Compilers can also have broader knowledge of YOUR code than
you do -- or WANT to! So, can deduce relationships that
may not have been obvious to you when writing the code.

Why EVER have "Code not reached"? Did you make some bad
assumptions about the data and control flow that led you to THINK
that code would be executed?
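
For example (a hypothetical snippet), clang's -Wunreachable-code will
flag:

    if (count < 0)
        return -1;
    else
        return 0;
    count++;    /* "code will never be executed" -- so why write it? */

Either the increment is dead weight, or your mental model of the
control flow is wrong. Both are worth knowing about.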

And, simplify things that you would foolishly try to
"hand optimize". E.g., writing:
    foo <<= 1;
can make you THINK you are being clever. But, it causes a
cognitive hiccup in the reader's mind if the intent is:
    foo *= 2;
or, more explicitly:
    foo = foo * 2;
A good compiler will generate the same code for each
case (assuming integer data types) so why force the
reader to perform that extra step in recognition?

And, eliminate some common syntax "errors" that aren't
actually prohibited by the language:
    x = foo & bar;
(are you sure you don't mean "foo && bar"?)
    x = y = z;
(are you sure you don't mean "x = (y == z)"?)

Your goal should always be to make it MORE work for you to do something
wrong than to do it *right*!
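
The compilers will happily do much of that policing -- e.g. (a
representative invocation; the file name is made up):

    cc -Wall -Wextra -Werror timekeeper.c

gcc's -Wall includes -Wparentheses, which flags an assignment used as a
truth value -- the classic "if (x = y)" slip -- unless you add the extra
parentheses that say "yes, I really meant that".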

> And still today, those ideas are misunderstood, as many engineers
> say "yeah, RISC is faster because they have fewer instructions,
> it's obvious." (also cheaper, in cost/benefit)

Not *fewer* instructions but, rather, SIMPLER instructions.
Less silicon.

> Given these considerations, does anybody write assembly code for
> modern RISC processors?

ASM or machine code? The two differ by the presence of a compiler
in the former that is absent in the latter.


Anthony William Sloman

unread,
Feb 17, 2024, 11:11:25 PMFeb 17
to
On Sunday, February 18, 2024 at 11:29:27 AM UTC+11, Don Y wrote:
> On 2/17/2024 1:18 PM, RichD wrote:
> > On February 11, Don Y wrote:

<snip>

> > Given these considerations, does anybody write assembly code for
> > modern RISC processors?
>
> ASM or machine code? The two differ by the presence of a compiler
> in the former that is absent in the latter.

Technically speaking, the software that converts assembly code into machine code is an assembler, not a compiler.

There's a one-to-one relationship between assembler mnemonics and numbers that constitute the machine code. Compilers can generate strings of machine code from a higher-level language command, so it is a real distinction. I wasn't conscious of this when I first learned Fortran coding, but I was taught how the Fortran compiler expanded single lines of code into strings of assembler when I did a year's course on "Theory of Computation" the following year (1966).

--
Bill Sloman. Sydney

Cursitor Doom

unread,
Feb 18, 2024, 5:13:40 AMFeb 18
to
On Sat, 17 Feb 2024 17:16:14 -0700, Don Y
<blocked...@foo.invalid> wrote:

>On 2/17/2024 3:49 AM, alb...@spenarnc.xs4all.nl wrote:
>> In article <uqivkh$2ontb$2...@dont-email.me>,
>> Don Y <blocked...@foo.invalid> wrote:
>>> On 2/14/2024 6:05 AM, alb...@spenarnc.xs4all.nl wrote:
>>>> Nobody hu? Smith does. Written a compiler in hex code using only
>>>> a hex to bin converter.
>>>> https://dacvs.neocities.org/SF/
>>>> The take away is, it is easier than you expect.
>>>
>>> One writes code to be *read*. Just because you CAN do something
>>> doesn't mean you SHOULD do that something. People spend inane
>>> amounts of time arranging dominos... just to knock them over
>>> (what's the point in that?)
>>
>> This project is meant to be read. You can't be serious suggesting
>> that this is a tool to be used.
>
>Why write code that isn't going to be used?

One perfectly good reason is to make life hard for the
reverse-engineers. Build in a pile of redundant code, only release
executables, never the source, and thereby improve your software
sales.

>> If you spend the time looking at the code, you'd discover that it
>> is quite educational, and make you wonder where the software bloat
>> comes from.
>
>Why does your PHONE have a clock in it? AND a calendar?
>Don't you remember today's date? Don't you wear a wristwatch?
>Why does your PC have these data, too? Don't you have a
>*phone* (er, wristwatch and a memory for dates)?
>
>Which of these has the CORRECT information? How do you know?
>
>[I have an "atomic clock" that is supposed to keep correct
>time/date. Except, twice a year, it diddles the time for
>DST start end. *But*, we don't observe DST! So, while
>all of the DUMB clocks in the house correctly track the REAL
>time all year round, this one has to be "fixed", twice
>a year, to get it to IGNORE the DST changes! So, twice
>a year I look at the clock and wonder why *my* notion of
>the current time differs from its -- and then tell
>it to move me to the next time zone, east or west, as
>appropriate]

Atomic clocks don't do that. Sounds like you have one of those
radio-controlled jobs that works off a time signal.

>Why do I *need* an icon IN my word processor to invoke the
>spell-checker? Thesaurus? Grammar check? Reading complexity
>assessment? How does having them accessible *in* the word
>processor help me if I am writing an email -- in an entirely
>different "text editor" that mimics much of the functionality
>of the word processor?
>
>Does each possible application that allows for text entry
>merit the availability of these tools? What if I wanted
>to send an SMS?
>
>Are you just LAZY and can't invoke each of these as STAND-ALONE
>tools, as needed?
>
>So, EVERY app gets burdened with new implementations of
>tools that should be separate entities. To economize on
>resources AND ensure for more consistent results!
>
>[Windows lists files in "user-friendly order" -- which differs
>from ALPHABETICAL order! Other applications opt for a more
>traditional LRAlpha. Comparing file lists between the two
>just ADDS to confusion (loss of productivity).]
>
>Here is a trivial, ubiquitous, algorithm written in machine code:
>
>DD217A00DD7E01FE323805DD3401185FDD360100DD7E02FE3C3805DD3402184F
>DD360200DD7E03FE3C3805DD3403183FDD360300DD7E04FE183805DD3404182F
>DD360400216F000600DD4E0609DD7E05BE3805DD34051817DD360500DD7E06FE
>0C3805DD34061807DD360600DD34071F1F1C1F1E1F1E1F1F1E1F1E1F

That's hex, not MC.

>No comments -- machines don't read comments! And, machines don't
>need the code to be formatted to show instruction breaks -- it's
>just a large array of bytes!

.... and this is why obfuscated language competitions are such fun.
Except obfuscated Perl, of course - as there's no point! :-)

Lasse Langwadt Christensen

unread,
Feb 18, 2024, 5:27:00 AMFeb 18
to
On Sunday, February 18, 2024 at 05:11:25 UTC+1, Anthony William Sloman wrote:
> On Sunday, February 18, 2024 at 11:29:27 AM UTC+11, Don Y wrote:
> > On 2/17/2024 1:18 PM, RichD wrote:
> > > On February 11, Don Y wrote:
> <snip>
> > > Given these considerations, does anybody write assembly code for
> > > modern RISC processors?
> >
> > ASM or machine code? The two differ by the presence of a compiler
> > in the former that is absent in the latter.
> Technically speaking, the software that converts assembly code into machine code is an assembler, not a compiler.
>
> There's a one-to-one relationship between assembler mnemonics and numbers that constitute the machine code.

usually but not always

Don Y

unread,
Feb 18, 2024, 7:01:04 AMFeb 18
to
Optimizing assemblers have been around since the 70's. The most common
optimization is for branch encoding on processors that support different
branch styles (e.g., long, absolute, relative, etc.). Most often, the
"best" (shortest/fastest) opcode has limitations on how "far" it can
reach. You don't want to have to PICK a particular opcode
based on the layout of the code, *now*... and, after inserting/moving
a few instructions, have to go back to choose a *different* opcode
because the target is now beyond the reach of the opcode you
previously chose!

The UNIX assembler has done this... forever. You can find tools that
will do this on VAX, 68K, 6809, Z80, SPARC, etc.

It's an NP-complete problem, so computing an *optimal* solution isn't
practical. However, the alternative -- MANUALLY trying to determine the
distance between branch opcode and targeted location -- is silly to
expect programmers to do with any degree of success (can you remember
how many bytes *each* instruction between "here" and "there" occupy?
Do you WANT to????)

[If the target of a branch lies in another module, then the linkage editor
has to sort out the required displacement.]
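
On the Z80 (the CPU behind the hex dump upthread), the trade looks
like this (a sketch):

    JR   done       ; 2 bytes -- but "done" must lie within -128..+127
    JP   done       ; 3 bytes -- reaches anywhere in the 64K space

A relaxing assembler starts every branch in the short form and grows
only those that turn out to be out of reach -- which can, in turn, push
OTHER branches out of reach. Hence the hard problem.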

Other tools are smart enough to know that, for example, a "register load
of #0" is equivalent to XORing the register with itself, subtracting itself
from itself, etc. Or, may have an alternate form for "small" constants
than "larger" constants. The cost (time and space) of each approach
vary so deciding how you want the code to be optimized is important
(if at all).
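
The Z80 again -- with the caveat that the swap is only safe where the
flag side-effects don't matter:

    LD   A,0        ; 2 bytes, 7 T-states; flags untouched
    XOR  A          ; 1 byte,  4 T-states; same A -- but flags ARE altered

so a tool making this substitution has to prove that nothing downstream
consumes those flags.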

Is subtracting 1 faster/smaller than decrementing a register?

Is shifting a register left cheaper than adding a register to itself?

Considering that most programmers (in ASM) use symbols instead of absolute
values, it's not uncommon for you to see code like:
LD A,SOME_VALUE
and, discover (at compile time), that SOME_VALUE is actually 0. Or,
some very small (e.g., 4 bit) integer. So, taking the instruction as
written, literally, is more costly than it needs to be! Should the
developer HAVE to know all of this? (Why?)

Should the developer have to know how to keep the pipeline from stalling?
Why not let the assembler reorder opcodes to ensure it doesn't?!

The assembler has to guarantee that the same *functionality* is provided
as defined in the ASM source; if it can do this more effectively than
the nominal instruction sequence presented by the programmer, more power
to it! (If you don't want it to dick with your code, then disable the
optimizations!)

Consider accessing the Nth element (where N is a variable) in an array
of structures.

STRUCT_SIZE equ <something>

LD A,N
LD B,#STRUCT_SIZE
MUL
LDA B,X (assumes product < 256)

Imagine if STRUCT_SIZE was 2. Or 4. Or 16. Or *1*!

Or, 3, 5, 9, etc.

This could easily be reduced in time and/or space! The developer
wouldn't want to have to go through and examine every symbolic
reference to see how the associated code MIGHT be tweaked given
the *current* numeric bindings (i.e., the STRUCT_SIZE may change
as the code evolves so this could potentially need to be revisited
each time the code is compiled!) -- the *tool* should do this,
instead!
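
E.g., were STRUCT_SIZE to resolve to 2, the whole multiply could
collapse to (same pseudo-asm as above; a sketch):

    LD   A,N
    ADD  A,A        ; A = 2*N -- no MUL, no B register tied up
    LDA  A,X        ; accumulator-offset access, as before

and a STRUCT_SIZE of 1 would let even the ADD disappear.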

Nowadays, we rely on HLLs to do the optimization *before* generating
their ASM "output". But, who (what!) optimizes ASM itself?

Anthony William Sloman

unread,
Feb 18, 2024, 11:04:18 AMFeb 18
to
On Sunday, February 18, 2024 at 9:13:40 PM UTC+11, Cursitor Doom wrote:
> On Sat, 17 Feb 2024 17:16:14 -0700, Don Y <blocked...@foo.invalid> wrote:
> >On 2/17/2024 3:49 AM, alb...@spenarnc.xs4all.nl wrote:
> >> In article <uqivkh$2ontb$2...@dont-email.me>,
> >> Don Y <blocked...@foo.invalid> wrote:
> >>> On 2/14/2024 6:05 AM, alb...@spenarnc.xs4all.nl wrote:

> >[I have an "atomic clock" that is supposed to keep correct
> >time/date. Except, twice a year, it diddles the time for
> >DST start end. *But*, we don't observe DST! So, while
> >all of the DUMB clocks in the house correctly track the REAL
> >time all year round, this one has to be "fixed", twice
> >a year, to get it to IGNORE the DST changes! So, twice
> >a year I look at the clock and wonder why *my* notion of
> >the current time differs from its -- and then tell
> >it to move me to the next time zone, east or west, as
> >appropriate]
>
> Atomic clocks don't do that. Sounds like you have one of those
> radio-controlled jobs that works off a time signal.

And the time signal is usually local.

Our English radio-controlled clock didn't work in the Netherlands, and we had to buy a Dutch/German version that relied on the German broad-cast timing signal.
There's no Australian equivalent, but my mobile phone gets its - very accurate - clock signal from the much more local near-by cell tower.

<snip>

> >Here is a trivial, ubiquitous, algorithm written in machine code:
> >
> >DD217A00DD7E01FE323805DD3401185FDD360100DD7E02FE3C3805DD3402184F
> >DD360200DD7E03FE3C3805DD3403183FDD360300DD7E04FE183805DD3404182F
> >DD360400216F000600DD4E0609DD7E05BE3805DD34051817DD360500DD7E06FE
> >0C3805DD34061807DD360600DD34071F1F1C1F1E1F1E1F1F1E1F1E1F
>
> That's hex, not MC.
>
> >No comments -- machines don't read comments! And, machines don't
> >need the code to be formatted to show instruction breaks -- it's
> >just a large array of bytes!

The PDP-8 used 12-bit words, which don't divide into 8-bit bytes at all. "Byte" used to mean just the machine's memory cell size; if you specifically meant eight bits, you would say "octets" -- but the language changes with time.

<snip>

--
Bill Sloman, Sydney

RichD

unread,
Feb 18, 2024, 2:42:54 PMFeb 18
to
On February 18, Lasse Langwadt Christensen wrote:
>>>> Given these considerations, does anybody write assembly code for
>>>> modern RISC processors?
>
>>> ASM or machine code? The two differ by the presence of a compiler
>>> in the former that is absent in the latter.
>
>> Technically speaking, the software that converts assembly code into machine code is an assembler,
>> not a compiler.
>> There's a one-to-one relationship between assembler mnemonics and numbers that constitute the machine code.
>
> usually but not always

?
Exceptions?

--
Rich

Don Y

unread,
Feb 18, 2024, 3:58:02 PMFeb 18
to
<https://ftp.gnu.org/old-gnu/Manuals/gas-2.9.1/html_chapter/as_23.html#SEC258>
this behavior is inherited from the early days of as(1).

<http://www.bitsavers.org/pdf/sun/sunos/4.1/800-3807-10A_Sun-3_Assembly_Language_Reference_Manual_199003.pdf>
section 6.2
(30+ years ago)

<https://www.roug.org/retrocomputing/os/flex/ASM09-6809-assembler.pdf>
and SWTPC wasn't a software trailblazer (40+ years ago):

Optimize Assembly. The "F" option will cause the
assembler to suppress any optimization of object code.
Forward references will be assembled using the least
restrictive addressing modes. This option will force the
assembler to complete in two passes, but object code may be
considerably larger than required. This option is especially
useful while debugging a program which will later be
optimized. Note that the "R" option takes priority over this




Lasse Langwadt Christensen

unread,
Feb 18, 2024, 4:50:01 PMFeb 18
to
For example, LDR on an ARM: it loads a constant into a register.
Depending on the value of the constant, the assembler decides if it is
possible to do it directly, with the constant embedded in the
instruction, or if it needs to do a load from a pool of constants
somewhere in memory.
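
The classic case is the "LDR rX, =value" pseudo-instruction:

    LDR  r0, =1           ; assembler emits a MOV r0, #1 (immediate fits)
    LDR  r0, =0x12345678  ; assembler emits a PC-relative load from a
                          ; literal pool it plants nearby

One mnemonic, two quite different encodings -- chosen for you.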
