On Mar 26, 5:12 pm, Mark Thorson <nos...@sonic.net> wrote:
> Joe keane wrote:
> > PDP-11:
> > 000000 -> hlt
> > VAX:
> > 00 -> hlt
> > 6502:
> > 00 -> brk
> Did you want to extend this list, or were you
> making some comment?
It looks like Alpha used the zero primary
opcode for CALL_PAL.
PA-RISC seems to have used the zero primary opcode
for "system_op", with the all-zeros encoding being a
break (its 5-bit and 13-bit immediate fields zero).
MIPS uses the zero primary opcode plus 0b001101 in
bits 0 through 5 (with a 20-bit arbitrary "code"
value) for break, but uses all zeros as the
preferred NOP (shift left logical using zero
registers), with other special shift amounts encoding
superscalar NOP (how quaint! :-) and Execution
Hazard Barrier. I.e., MIPS can slide into
"executable data" through preceding zero-initialized
memory.
Power makes the all-zero instruction permanently
illegal: "An instruction consisting entirely of
binary 0s is illegal, and is guaranteed to be
illegal in all future versions of this
architecture."
OpenRISC has the zero primary opcode be a jump,
so executing an all-zero instruction will stall
the processor (the jump-with-offset-zero
instruction jumps to itself).
> I suppose when people were programming without
> any software tools, making zero = halt may have
> been useful for catching a jump into memory
> that had been initialized to zero. I'm not sure
> it has any value these days.
Making the all-zero word either a perpetually
undefined instruction or an explicit trap, syscall,
or the like is still useful. It catches not only
attempts to execute zeroed memory but also the
common case
of a zero data value. Mitch Alsup has discussed
here his ISA which maintains many more common data
patterns as undefined instructions to help protect
against executing data.
Using such for a valid instruction avoids wasting
opcode space, but introduces the possibility of
executing privileged code on behalf of an
application when the application is misbehaving.
A debug break might be a little safer than a
break used for a syscall.
A wait-for-interrupt (halt) instruction might be
somewhat safe, but slightly less desirable when
I/O or other processors might overwrite memory
that would be helpful in determining the cause of
failure; such would also waste execution resources.
Using such for a SCHEDULE instruction (where all
zeros might be a terminate thread instruction)
might be interesting but would seem to make
detecting a failure and determining its cause even
more difficult (register values would be lost and
the detection of the error could be greatly
delayed).
I would think such would be unnecessary given the
ability to exclude execute permission from a page;
but I can sort of understand using such for
defense in depth (somewhat like having the low
address area being reserved to catch stray
references relative to a zero null-pointer).
> When I worked at Weitek, there was a custom of
> initializing memory to (hexadecimal) DEAD.
> That distinguished initialized but unused memory
> nicely in the various debugging tools we used.
I thought 0xdeadbeef was traditional (for 32-bit
values).