
Data-Oriented Design and Avoiding the C++ Object-Oriented Programming Zimmer Frame


Mr Flibble

Aug 27, 2018, 8:46:19 PM
Hi.

Just published an article on C++ and data-oriented design:

https://leighjohnston.wordpress.com/2018/08/27/data-oriented-design-and-avoiding-the-c-object-oriented-programming-zimmer-frame/

"Object-oriented programming, OOP, or more importantly object-oriented
design (OOD) has been the workhorse of software engineering for the past
three decades but times are changing: data-oriented design is the new old
kid on the block."

/Flibble

--
"Suppose it’s all true, and you walk up to the pearly gates, and are
confronted by God," Bryne asked on his show The Meaning of Life. "What
will Stephen Fry say to him, her, or it?"
"I’d say, bone cancer in children? What’s that about?" Fry replied.
"How dare you? How dare you create a world to which there is such misery
that is not our fault. It’s not right, it’s utterly, utterly evil."
"Why should I respect a capricious, mean-minded, stupid God who creates a
world that is so full of injustice and pain. That’s what I would say."

Melzzzzz

Aug 28, 2018, 12:16:11 AM
On 2018-08-28, Mr Flibble <flibbleREM...@i42.co.uk> wrote:
> Hi.
>
> Just published on article on C++ and data-oriented design:
>
> https://leighjohnston.wordpress.com/2018/08/27/data-oriented-design-and-avoiding-the-c-object-oriented-programming-zimmer-frame/
>
> "Object-oriented programming, OOP, or more importantly object-oriented
> design (OOD) has been the workhorse of software engineering for the past
> three decades but times are changing: data-oriented design is the new old
> kid on the block."

I really don't get this. If I need something to be polymorphic, it is; if
I don't, it's not. And nowadays computers are not just orders of
magnitude faster than the ZX Spectrum ;)

>
> /Flibble
>


--
press any key to continue or any other to quit...

Mr Flibble

Aug 28, 2018, 10:49:53 AM
I think you will find that 3.5GHz is three orders of magnitude faster than
3.5MHz.

Scott Lurndal

Aug 28, 2018, 11:13:09 AM
Mr Flibble <flibbleREM...@i42.co.uk> writes:
>On 28/08/2018 05:16, Melzzzzz wrote:
>> On 2018-08-28, Mr Flibble <flibbleREM...@i42.co.uk> wrote:
>>> Hi.
>>>
>>> Just published on article on C++ and data-oriented design:
>>>
>>> https://leighjohnston.wordpress.com/2018/08/27/data-oriented-design-and-avoiding-the-c-object-oriented-programming-zimmer-frame/
>>>
>>> "Object-oriented programming, OOP, or more importantly object-oriented
>>> design (OOD) has been the workhorse of software engineering for the past
>>> three decades but times are changing: data-oriented design is the new old
>>> kid on the block."
>>
>> I really don't get this. If I need something to be polymorphic, it is, if
>> I don't, it's not. And nowadays computers are not just orders of
>> magnitude faster then zx spectrum ;)
>
>I think you will find that 3.5GHz is three orders of magnitude faster than
>3.5MHz.

It's a lot faster than that, due to fast multi-level caches, multiple pipelines
and out-of-order execution. That 1985 processor had slow memory, no cache
and an IPC probably less than 0.1 (as compared with a modern processor which
pushes an IPC close to 2.0).
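
Putting very rough numbers on it, using only the figures above (an
illustration, not a measurement):

   1985:  3.5 MHz x 0.1 IPC ~ 0.35 million instructions per second
   today: 3.5 GHz x 2.0 IPC ~ 7,000 million instructions per second

That is a ratio of roughly 20,000, i.e. over four orders of magnitude per
core, before counting SIMD or multiple cores.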

bol...@cylonhq.com

Aug 28, 2018, 11:29:17 AM
On Tue, 28 Aug 2018 01:46:04 +0100
Mr Flibble <flibbleREM...@i42.co.uk> wrote:
>Hi.
>
>Just published on article on C++ and data-oriented design:
>
>https://leighjohnston.wordpress.com/2018/08/27/data-oriented-design-and-avoiding-the-c-object-oriented-programming-zimmer-frame/
>
>"Object-oriented programming, OOP, or more importantly object-oriented
>design (OOD) has been the workhorse of software engineering for the past
>three decades but times are changing: data-oriented design is the new old
>kid on the block."

If you don't need OO and purely care about speed then why bother with C++ at
all? Just use C and assembler. You don't need a Sprites class, nor do you
need a vector of sprite bodies; just use a C array or one of the *alloc*()
functions, or sbrk() if you want to do your own low-level memory management
(assuming you're using Linux, not the toy OS from MS).

As for your intrusive_sort - your swapper seems inordinately complex with
little explanation as to what index() or reverse_index() actually do. I can't
see many takers frankly.

Mr Flibble

Aug 28, 2018, 12:13:23 PM
Yes indeed, and even if we take that and multiple cores into account, it
still doesn't invalidate my assertion that machines of today are orders of
magnitude faster than the machines of the 1980s, which is why I don't
understand Melzzzzz's point. Perhaps Melzzzzz doesn't understand the term
"orders of magnitude".

Mr Flibble

Aug 28, 2018, 12:18:59 PM
The troll returns. You can't see many takers because you literally have
no clue as to how to use C++ properly as evidenced by your previous posts
to this group. I wasn't suggesting that you should stop using OOP
altogether, but that it is just one tool in the toolbox, and data-oriented
design is making a comeback due to the nature of modern hardware. You obviously
didn't even attempt to understand my article at all. Troll: be gone!

Rick C. Hodgin

Aug 28, 2018, 4:28:07 PM
On Monday, August 27, 2018 at 8:46:19 PM UTC-4, Mr Flibble wrote:
> Hi.
>
> Just published on article on C++ and data-oriented design:
>
> https://leighjohnston.wordpress.com/2018/08/27/data-oriented-design-and-avoiding-the-c-object-oriented-programming-zimmer-frame/
>
> "Object-oriented programming, OOP, or more importantly object-oriented
> design (OOD) has been the workhorse of software engineering for the past
> three decades but times are changing: data-oriented design is the new old
> kid on the block."

My philosophy for the past 8+ years has been lower-level than C++,
and has been that we have to look at data as data and not as the
limitations of compute abilities exposed by the underlying hardware.

We need to view data in constructs which have solidly defined rules
by their data needs, and not by the machine needs which compute them
under the hood.

This necessarily turns a C or C++ compiler into a tool which addresses
those fundamental constraints, putting the correct computability of
something ahead of its mere computability, regardless of the machine's
underlying capabilities.

--
Rick C. Hodgin

A retort to Stephen Fry:

["Suppose it’s all true, and you walk up to the
pearly gates, and are confronted by God," Bryne
asked on his show The Meaning of Life. "What will
Stephen Fry say to him, her, or it?"]

God has identified Himself as male. He calls the church His bride,
which hints of our future with Him, married to Him, possessing this
entire universe as our own, as we will receive what He has as ours,
and He will receive what we have as His, just as marriages here on
Earth are patterned after. This gives us a future.

["I’d say, bone cancer in children? What’s that about?"
Fry replied.]

God has taught us the cost of sin, and the nature of our free will.
He has given us information about the power of our choices, that
our own decisions can affect eternity for us, and others. He has
revealed just how costly sin is, that it will completely subdue a
man for all eternity. He has revealed His love to us in that even
though we are all sinners, He still makes a way out for us, and He
makes it available to us for free.

["How dare you? How dare you create a world to which
there is such misery that is not our fault.]

It /IS/ our fault. We disobeyed God's instruction, and He warned us
in advance of the consequences. Adam would've been perfect in his
understanding, and he would've known what God had meant. Adam made
a choice to rebel against God, and it cost us everything we see in
this world that is not right, and utterly, utterly evil.

[It’s not right, it’s utterly, utterly evil."]

It is not right, and it is utterly, utterly evil that such a thing
exists. But it is /our own/ doing, and God has made a completely
and totally free way to escape it. And even with that free way
being given to people, taught on nearly every street corner, men
and women like me proclaiming it even in the places people who do
not go to church would hear ... yet will so many still ignore it,
pass it by, think it's garbage, until the very day they are taken
hold of and manhandled physically, cast into the lake of fire for
all eternity.

Then the true cost of sin will be known ... upon every soul who
was not saved by their own rejection of God ... again.

["Why should I respect a capricious, mean-minded,
stupid God who creates a world that is so full of
injustice and pain. That’s what I would say."]

God gives us the calling, power, ability, and choice, to not follow
after that world so full of injustice and pain. But, because of
our own sin, we do not follow after God's path here in this world.
We choose to go the way of hate and meanness. We call people names
and use profanity and insult attempts to bring things into a right
order by presenting them as something that should be subject to God.

No, Stephen Fry, God has not created this evil you see here. Man
has. He has taken hold of sin with both hands, wrapped it around
his body, and put up a perimeter of defense around himself such that
anyone attempting to cross that border and strip him of his sin will
be shot on sight multiple times until they are dead, dead, dead!

That is the cost of sin in this world, Mr. Fry. And that is why God
created a special place to contain it. No one from there can exit,
and no one from here can enter, save through the judgment on that
great and final day for so many.

I'm sorry, Mr. Fry, but you do not understand as you should, which
is why you are confused. If you do not straighten that out, it will
cost you your eternal soul, which is a price more than you are able
or willing to pay. It's why you need Jesus to pay that price for you.

Mr Flibble

Aug 28, 2018, 4:42:28 PM
On 28/08/2018 21:27, Rick C. Hodgin wrote:
> On Monday, August 27, 2018 at 8:46:19 PM UTC-4, Mr Flibble wrote:
>> Hi.
>>
>> Just published on article on C++ and data-oriented design:
>>
>> https://leighjohnston.wordpress.com/2018/08/27/data-oriented-design-and-avoiding-the-c-object-oriented-programming-zimmer-frame/
>>
>> "Object-oriented programming, OOP, or more importantly object-oriented
>> design (OOD) has been the workhorse of software engineering for the past
>> three decades but times are changing: data-oriented design is the new old
>> kid on the block."
>
> My philosophy for the past 8+ years has been lower-level than C++,
> and has been that we have to look at data as data and not as the
> limitations of compute abilities exposed by the underlying hardware.
>
> We need to view data in constructs which have solidly defined rules
> by their data needs, and not by the machine needs which compute them
> under the hood.
>
> This necessarily translates the needs of a C or C++ compiler into a
> tool which addresses those fundamental constraints, and ahead of the
> mere computability of something, but rather the correct computability
> of something regardless of the machines underlying capabilities.

And fossils are an invention of Satan yes?

/Flibble

--
"Suppose it’s all true, and you walk up to the pearly gates, and are
confronted by God," Bryne asked on his show The Meaning of Life. "What
will Stephen Fry say to him, her, or it?"
"I’d say, bone cancer in children? What’s that about?" Fry replied.
"How dare you? How dare you create a world to which there is such misery
that is not our fault. It’s not right, it’s utterly, utterly evil."

Chris M. Thomasson

Aug 28, 2018, 4:47:21 PM
On 8/27/2018 5:46 PM, Mr Flibble wrote:
> Hi.
>
> Just published on article on C++ and data-oriented design:
>
> https://leighjohnston.wordpress.com/2018/08/27/data-oriented-design-and-avoiding-the-c-object-oriented-programming-zimmer-frame/
>
>
> "Object-oriented programming, OOP, or more importantly object-oriented
> design (OOD) has been the workhorse of software engineering for the past
> three decades but times are changing: data-oriented design is the new
> old kid on the block."

It is good practice to allocate all of the sprites' _data_ in contiguous
memory. Way more efficient than allocating each one individually. Still
need to take a closer look at intrusive_sort. :^)
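
A minimal sketch of what I mean (the Sprite fields are made up for
illustration; the point is one contiguous buffer instead of one heap
allocation per sprite):

#include <vector>

struct Sprite { float x, y, dx, dy; };     // hypothetical hot data only

std::vector<Sprite> sprites(10000);        // one contiguous allocation

void update(std::vector<Sprite>& s, float dt)
{
    for (auto& sp : s)                     // linear walk, cache-friendly
    {
        sp.x += sp.dx * dt;
        sp.y += sp.dy * dt;
    }
}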

Rick C. Hodgin

Aug 28, 2018, 5:02:11 PM
On Tuesday, August 28, 2018 at 4:42:28 PM UTC-4, Mr Flibble wrote:
> And fossils are an invention of Satan yes?

I'll answer your questions.

You've never listened to a single reply of mine, have you, Leigh?
Because I've answered this one before (more than once IIRC).

No, fossils are not an invention of Satan; only the explanation that they
are the result of evolution laid down over millions and billions of years
is his.

Fossils exist properly as they are, buried in rock layers all
over the world, and they have a true scientific explanation ...
but you'll never hear it until God draws you from within, and
that will never happen until you begin to seek the truth and
don't just assert things you think are correct.

--
Rick C. Hodgin

Vir Campestris

Aug 28, 2018, 6:03:01 PM
On 28/08/2018 16:12, Scott Lurndal wrote:
> It's a lot faster than that, due to fast multi-level caches, multiple pipelines
> and out-of-order execution. That 1985 processor had slow memory, no cache
> and an IPC probably less than 0.1 (as compared with a modern processor which
> pushes an IPC close to 2.0).

That 1985 processor was a Z80. I can't quote you figures, but for an
8085 the IPC was around 2/3, and the speed was similar to a Z80. Many
instructions took 4 clocks, quite a lot took an extra 3, and not many took
any more than that.

8 bit operands and no HW divide would also slow you down a bit!

Andy

David Brown

Aug 29, 2018, 2:15:01 AM
On 29/08/18 00:02, Vir Campestris wrote:
> On 28/08/2018 16:12, Scott Lurndal wrote:
>> It's a lot faster than that, due to fast multi-level caches, multiple pipelines
>> and out-of-order execution. That 1985 processor had slow memory, no cache
>> and an IPC probably less than 0.1 (as compared with a modern processor which
>> pushes an IPC close to 2.0).
>
> That 1985 processor was a Z80. I can't quote you figures, but for an
> 8085 the IPC was around 2/3, and the speed was similar to a Z80. Many
> instructions took 4 clocks, quite a lot an extra 3, and not many any
> more than that.

4 clocks per instruction means an IPC of 0.25, not "around 2/3". And if
my memory serves, that was the minimum cycle count for an instruction on
the Z80A. Unlike some processors of the time (like the 6502), there was
no pipelining in the Z80A. The 4 clock instructions were for short
register-to-register instructions (it had quite a few registers), but
many instructions used prefixes, and memory reads and writes quickly
added to the times. On the other hand, it did handle a fair number of
16-bit instructions and had some powerful addressing modes, which was a
boost compared to other 8-bit devices of the era.
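
To put some rough numbers on it (from memory of the Z80 manuals, so treat
them as approximate): a register-to-register ADD A,r took 4 T-states, which
is 0.25 instructions per clock at best, while an indexed load such as
LD A,(IX+d) took 19 T-states, around 0.05. So an average IPC of roughly 0.1
for real code is quite plausible.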

>
> 8 bit operands and no HW divide would also slow you down a bit!
>

HW divide is often overrated - division is rarely used, and in many
older designs the hardware division is slow and very costly in real
estate. (On one sub-family of the 68k, the hardware division
instruction was dropped when it was discovered that software routines
were faster!).

HW multiply is another matter - and the Z80A did not have hardware
multiplication.


Rosario19

Aug 29, 2018, 3:18:09 AM
On Tue, 28 Aug 2018 15:49:33 +0100, Mr Flibble wrote:

>"I’d say, bone cancer in children?

because people use carcinogenic substances, and above all the more
dangerous radiation: alpha, beta, and gamma

in 90% of cases I think it is a human choice, under human
responsibility

>What’s that about?" Fry replied.
>"How dare you? How dare you create a world to which there is such misery
>that is not our fault. It’s not right, it’s utterly, utterly evil."
>"Why should I respect a capricious, mean-minded, stupid God who creates a
>world that is so full of injustice and pain. That’s what I would say."

It's all a test; there is something to prove.
The time here is little compared to Eternity.

bol...@cylonhq.com

Aug 29, 2018, 4:25:17 AM
On Tue, 28 Aug 2018 17:18:42 +0100
Mr Flibble <flibbleREM...@i42.co.uk> wrote:
>On 28/08/2018 16:29, bol...@cylonHQ.com wrote:
>> As for your intrusive_sort - your swapper seems inordinately complex with
>> little explanation as to what index() or reverse_index() actually do. I can't
>> see many takers frankly.
>
>The troll returns. You can't see many takers because you literally have
>no clue as to how to use C++ properly as evidenced by your previous posts

And it would seem you have no clue about human nature. For most sorting
functions users already have to write the copy constructor and assignment
operator along with the comparator; now with your sort they have
to write the swapper too! What's left - the core sorting algorithm. Big deal,
they might as well write that themselves as well! A quicksort or shell sort
is all of 20 lines of code max, especially if you're not even going to bother
to explain exactly what 2 of the functions in your swapper actually do.
"Returns a sparse array" is not documentation.

>to this group. I wasn't suggesting that you should stop using OOP all
>together but that it is just one tool in toolbox and data-oriented design
>is making a comeback due to the nature of modern hardware. You obviously

Data-oriented design never went away among people doing to-the-metal coding.
There's a good reason the cores of most OSes and device drivers are written
in C and assembler, not C++.


bol...@cylonhq.com

Aug 29, 2018, 4:32:25 AM
Sounds like they got the intern to write the microcode there. Software should
never be faster than hardware to do the same thing on the same CPU.

>HW multiply is another matter - and the Z80A did not have hardware
>multiplication.

Well, to be pedantic it had both hardware and software division so long as
you only wanted to do so in multiples of 2 :)

David Brown

Aug 29, 2018, 4:52:21 AM
On 29/08/18 10:32, bol...@cylonHQ.com wrote:
> On Wed, 29 Aug 2018 08:14:46 +0200
> David Brown <david...@hesbynett.no> wrote:
>> On 29/08/18 00:02, Vir Campestris wrote:
>> estate. (On one sub-family of the 68k, the hardware division
>> instruction was dropped when it was discovered that software routines
>> were faster!).
>
> Sounds like they got the intern to write the microcode there. Software should
> never be faster than hardware to do the same thing on the same CPU.
>

That sounds like you don't understand the nature of processor design.

It is unusual to have a situation like this, but it happens. The basic
order of events goes like this:

1. You have a design where everything takes quite a number of clock
cycles. (This was from the beginning of the 1980's, remember.)

2. Doing division in software is very slow, so you implement a hardware
divider. This is also slow - a fast divider would take inordinate
amount of space in an era where 40K transistor counts was big. But it
is a lot faster than using software routines.

3. New family members take advantage of newer technologies and larger
transistor counts - pipelining, wider buses and ALUs, caches and
buffers, all conspire to give you a tenfold or more increase in IPC
counts for the important common instructions. The division hardware
might get a revision improving its speed by 2 or 3 - but a fast divider
is still too big to justify.

4. At some point, the software is faster than the hardware divider. So
the hardware divider is dropped.

5. Later, it becomes practical to have a hardware divider again -
transistors are smaller, newer algorithms are available, and you can
make a hardware divider that is a good deal faster than the software.
For many big processors, you therefore have a fast hardware divider (but
still much slower than most operations). For the 68k descendants,
low-power and low-cost outweighed the benefits of a hardware divider
that was rarely used in real software.

You see the same thing in other complex functions. Big processors used
to have all sorts of transcendental floating point functions in hardware
- now they are often done in software because that gives a better
cost-benefit ratio.


>> HW multiply is another matter - and the Z80A did not have hardware
>> multiplication.
>
> Well, to be pedantic it had both hardware and software division so long as
> you only wanted to do so in multiples of 2 :)
>

That is not what anyone means by hardware multiplier.


bol...@cylonhq.com

Aug 29, 2018, 5:03:51 AM
On Wed, 29 Aug 2018 10:52:08 +0200
David Brown <david...@hesbynett.no> wrote:
>On 29/08/18 10:32, bol...@cylonHQ.com wrote:
>> On Wed, 29 Aug 2018 08:14:46 +0200
>> David Brown <david...@hesbynett.no> wrote:
>>> On 29/08/18 00:02, Vir Campestris wrote:
>>> estate. (On one sub-family of the 68k, the hardware division
>>> instruction was dropped when it was discovered that software routines
>>> were faster!).
>>
>> Sounds like they got the intern to write the microcode there. Software should
>> never be faster than hardware to do the same thing on the same CPU.
>>
>
>That sounds like you don't understand the nature of processor design.

I think you missed the point that anything that can be done in software can
also be done in microcode on the die itself, otherwise you'd be claiming that
software can execute magic hardware functions that the hardware itself can't
access! Ergo, whoever wrote the microcode for the division royally fucked up.

>>> HW multiply is another matter - and the Z80A did not have hardware
>>> multiplication.
>>
>> Well, to be pedantic it had both hardware and software division so long as
>> you only wanted to do so in multiples of 2 :)
>>
>
>That is not what anyone means by hardware multiplier.

*sigh* Yes, I know, hence the smiley. Did it need to be signposted?

But non-rotational bit shifting is multiplying/dividing by 2 and was often
used as a shortcut in assembler.

David Brown

Aug 29, 2018, 5:20:39 AM
On 29/08/18 11:03, bol...@cylonHQ.com wrote:
> On Wed, 29 Aug 2018 10:52:08 +0200
> David Brown <david...@hesbynett.no> wrote:
>> On 29/08/18 10:32, bol...@cylonHQ.com wrote:
>>> On Wed, 29 Aug 2018 08:14:46 +0200
>>> David Brown <david...@hesbynett.no> wrote:
>>>> On 29/08/18 00:02, Vir Campestris wrote:
>>>> estate. (On one sub-family of the 68k, the hardware division
>>>> instruction was dropped when it was discovered that software routines
>>>> were faster!).
>>>
>>> Sounds like they got the intern to write the microcode there. Software should
>>> never be faster than hardware to do the same thing on the same CPU.
>>>
>>
>> That sounds like you don't understand the nature of processor design.
>
> I think you missed the point that anything that can be done in software can
> also be done in microcode on the die itself, otherwise you'd be claiming that
> software can execute magic hardware functions that the hardware itself can't
> access! Ergo, whoever wrote the microcode for the division royally fucked up.

Again, it sounds like you don't understand the nature of processor design.

Modern processors do not use microcode for most instructions - many do
not have microcode at all. It is a /long/ time since cpu designs used
microcode for basic register-ALU-register instructions.

Software has access to features that the hardware blocks do not - unless
the hardware replicates such features. It can use the registers, the
ALU, loop accelerators, caches, multiple execution units, register
renames, and all the other smart hardware that makes cpus fast today. A
hardware block cannot access any of these - because they are in use by
the rest of the processor in parallel, and because the hardware
interlocks and multiplexers needed to allow their usage would greatly
affect the performance for all other code.


>
>>>> HW multiply is another matter - and the Z80A did not have hardware
>>>> multiplication.
>>>
>>> Well, to be pedantic it had both hardware and software division so long as
>>> you only wanted to do so in multiples of 2 :)
>>>
>>
>> That is not what anyone means by hardware multiplier.
>
> *sigh* Yes, I know, hence the smiley. Did it need to be signposted?

Adding a smiley does not make an incorrect statement correct. If it had
been remotely funny, interesting, observant or novel, it would have
been fine.

>
> But non rotational bit shifting is multiplying/dividing by 2 and was often used
> as a short cut in assembler.
>

Yes, I think everyone already knows that.

bol...@cylonhq.com

Aug 29, 2018, 5:43:46 AM
On Wed, 29 Aug 2018 11:20:26 +0200
David Brown <david...@hesbynett.no> wrote:
>On 29/08/18 11:03, bol...@cylonHQ.com wrote:
>> I think you missed the point that anything that can be done in software can
>> also be done in microcode on the die itself, otherwise you'd be claiming that
>> software can execute magic hardware functions that the hardware itself can't
>> access! Ergo, whoever wrote the microcode for the division royally fucked up.
>
>
>Again, it sounds like you don't understand the nature of processor design.
>
>Modern processors do not use microcode for most instructions - many do

Either you're stupid or you're just being an ass for the sake of arguing.
Call it microcode, call it micro-ops, it's the same thing. What would you
call the risc type instructions an x86 instruction gets converted into in
intel processors for example?

>Software has access to features that the hardware blocks do not - unless
>the hardware replicates such features. It can use the registers, the
>ALU, loop accelerators, caches, multiple execution units, register
>renames, and all the other smart hardware that makes cpus fast today. A
>hardware block cannot access any of these - because they are in use by
>the rest of the processor in parallel, and because the hardware
>interlocks and multiplexers needed to allow their usage would greatly
>affect the performance for all other code.

Well that told me, clearly it's impossible to implement fast division in
hardware then!

>>> That is not what anyone means by hardware multiplier.
>>
>> *sigh* Yes, I know, hence the smiley. Did it need to be signposted?
>
>Adding a smiley does not make an incorrect statement correct. If it had

Except it wasn't incorrect was it.

>been remotely funny, interesting, observant or novel, it would have
>beenfine.

Do yourself a favour and pull that rod out of your backside. Unless you're
just another aspie robot who doesn't get tongue in cheek.

>> But non rotational bit shifting is multiplying/dividing by 2 and was often used
>> as a short cut in assembler.
>>
>
>Yes, I think everyone already knows that.

You just said it was incorrect, do try and make your mind up. Get back to
me when you've managed it.

Juha Nieminen

Aug 29, 2018, 6:16:46 AM
bol...@cylonhq.com wrote:
> I think you missed the point that anything that can be done in software can
> also be done in microcode on the die itself, otherwise you'd be claiming that
> software can execute magic hardware functions that the hardware itself can't
> access!

Hardware doesn't necessarily always implement the theoretically fastest
implementation of complex operations.

For example, multiplication (integer or floating point) can be done in one
single clock cycle, but that requires a very large amount of chip space
(because it requires a staggering amount of transistors). Making a
compromise where multiplication takes 2 or 3 clock cycles reduces this
physical chip area requirement dramatically.

> Ergo, whoever wrote the microcode for the division royally fucked up.

There may be other reasons why hardware implementation of something might
be slower than an alternative software implementation.

One reason might be accuracy, or the exact type of operations that need
to be performed in certain exceptional situations. Floating point
calculations can be an example of this (where eg. IEEE standard-compliant
calculations might require more complex operations than would be necessary
for the application at hand).

Another good example is the RDRAND opcode in newer Intel processors.
Depending on the processor, it can be 20 times slower than Mersenne
Twister. On the other hand, there are reasons why it's slower.

Juha Nieminen

Aug 29, 2018, 6:18:05 AM
bol...@cylonhq.com wrote:
> Well that told me, clearly its impossible to implement fast division in
> hardware then!

Is it possible to implement fast *accurate* division in hardware?
Is the software implementation giving the exact same result in every
possible situation?

David Brown

Aug 29, 2018, 6:51:33 AM
On 29/08/18 11:43, bol...@cylonHQ.com wrote:
> On Wed, 29 Aug 2018 11:20:26 +0200
> David Brown <david...@hesbynett.no> wrote:
>> On 29/08/18 11:03, bol...@cylonHQ.com wrote:
>>> I think you missed the point that anything that can be done in software can
>>> also be done in microcode on the die itself, otherwise you'd be claiming that
>>> software can execute magic hardware functions that the hardware itself can't
>>> access! Ergo, whoever wrote the microcode for the division royally fucked up.
>>
>>
>> Again, it sounds like you don't understand the nature of processor design.
>>
>> Modern processors do not use microcode for most instructions - many do
>
> Either you're stupid or you're just being an ass for the sake of arguing.
> Call it microcode, call it microops, its the same thing. What would you
> call the risc type instructions an x86 instruction gets converted into in
> intel processors for example?
>

Micro-ops. They are completely, totally and utterly different from
microcode.

It's fine that you don't know about this sort of thing. Few people do -
the details of cpu architecture are irrelevant to most C++ programmers.
If you want to know more about this, I am happy to explain what
microcode and micro-ops are. But please stop making wild assertions.

>> Software has access to features that the hardware blocks do not - unless
>> the hardware replicates such features. It can use the registers, the
>> ALU, loop accelerators, caches, multiple execution units, register
>> renames, and all the other smart hardware that makes cpus fast today. A
>> hardware block cannot access any of these - because they are in use by
>> the rest of the processor in parallel, and because the hardware
>> interlocks and multiplexers needed to allow their usage would greatly
>> affect the performance for all other code.
>
> Well that told me, clearly its impossible to implement fast division in
> hardware then!

No, clearly it /is/ possible to implement fast division in hardware.
But it is not necessarily cost-effective to do so. There is a vast
difference between what is possible, and what is practical or sensible.

>
>>>> That is not what anyone means by hardware multiplier.
>>>
>>> *sigh* Yes, I know, hence the smiley. Did it need to be signposted?
>>
>> Adding a smiley does not make an incorrect statement correct. If it had
>
> Except it wasn't incorrect was it.

Yes, it was. No matter what numbers I might want to multiply by or
divide by, the Z80A did not have hardware multiplication or hardware
division. Saying it can multiply and divide "in multiples of 2" does
not change that.

(It's not clear what you mean by "in multiples of 2". Perhaps you meant
"by powers of 2". It would still be questionable how much hardware
support the Z80A has for them, since it could only shift and rotate one
step at a time in a single instruction.)

>
>> been remotely funny, interesting, observant or novel, it would have
>> beenfine.
>
> Do yourself a favour and pull that rod out of your backside. Unless you're
> just another aspie robot who doesn't get tongue in cheek.

Tongue in cheek is fine. But don't expect people to be particularly
impressed by your insults.

>
>>> But non rotational bit shifting is multiplying/dividing by 2 and was often used
>>> as a short cut in assembler.
>>>
>>
>> Yes, I think everyone already knows that.
>
> You just said it was incorrect, do try and make your mind up. Get back to
> me when you've managed it.
>

You use bit-shifts for multiplying or dividing by 2 (being particularly
careful with signs). That does not make a bit-shifter a multiplier or
divider.
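
A small illustration of the sign caveat (this assumes the usual arithmetic
right shift for signed values, which is implementation-defined in C++ before
C++20):

#include <iostream>

int main()
{
    int a = 6, b = -7;
    std::cout << (a << 1) << ' ' << (a * 2) << '\n';  // 12 12: shift matches multiply
    std::cout << (b / 2)  << ' ' << (b >> 1) << '\n'; // -3 -4: division truncates toward
                                                      // zero, the shift rounds toward
                                                      // minus infinity
}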


bol...@cylonhq.com

Aug 29, 2018, 7:03:36 AM
On Wed, 29 Aug 2018 12:51:20 +0200
David Brown <david...@hesbynett.no> wrote:
>On 29/08/18 11:43, bol...@cylonHQ.com wrote:
>> Either you're stupid or you're just being an ass for the sake of arguing.
>> Call it microcode, call it microops, its the same thing. What would you
>> call the risc type instructions an x86 instruction gets converted into in
>> intel processors for example?
>>
>
>Micro-ops. They are completely, totally and utterly different from
>microcode.

No, they're really not. An assembler instruction is broken down into lower
level instructions that are directly interpreted by the hardware in both
cases. The only difference is micro ops are a slightly higher level than
microcode but the paradigm is exactly the same.

>>> been remotely funny, interesting, observant or novel, it would have
>>> beenfine.
>>
>> Do yourself a favour and pull that rod out of your backside. Unless you're
>> just another aspie robot who doesn't get tongue in cheek.
>
>Tongue in cheek is fine. But don't expect people to be particularly
>impressed by your insults.

Can dish it out but can't take it? The usual story on usenet.

>> You just said it was incorrect, do try and make your mind up. Get back to
>> me when you've managed it.
>>
>
>You use bit-shifts for multiplying or dividing by 2 (being particularly
>careful with signs). That does not make a bit-shifter a multiplier or
>divider.

There is no difference between multiplying by 2 and shifting 1 bit to the
left or dividing by 2 and shifting 1 bit to the right, other than some
processor specific carry or overflow flag settings afterwards. Argue the toss
all you want, it's not up for debate.

David Brown

Aug 29, 2018, 7:04:31 AM
Yes.

There are many methods, with different balances between speed (both
latency and throughput), die space, power requirements, and complexity.
A single-precision floating point divider can be faster than a
double-precision divider, but is less accurate in absolute terms -
however, it will still be accurate for the resolution of the numbers
provided.

And yes, these will give exactly the same results as a matching software
implementation in every possible situation. For integer division, it's
easy. For floating point, with rounding, it gets a bit more complicated
- but it is all precisely specified in the IEEE standards (for floating
point hardware following those standards - as most do).

There are other floating point operations and combinations of operations
that are much worse. These can need quite complicated software
libraries to get exactly matching results - libraries optimised for this
matching rather than for speed. gcc (and probably other compilers) use
such libraries so that they can do compile-time calculations that are
bit-perfect results of the equivalent run-time calculations, even when
the compiler and the target are different processors.

David Brown

Aug 29, 2018, 7:08:49 AM
On 29/08/18 12:16, Juha Nieminen wrote:
> bol...@cylonhq.com wrote:
>> I think you missed the point that anything that can be done in software can
>> also be done in microcode on the die itself, otherwise you'd be claiming that
>> software can execute magic hardware functions that the hardware itself can't
>> access!
>
> Hardware doesn't necessarily always implement the theoretically fastest
> implementation of complex operations.
>

No, indeed.

> For example, multiplication (integer or floating point) can be done in one
> single clock cycle, but that requires a very large amount of chip space
> (because it requires a staggering amount of transistors). Making a
> compromise where multiplication takes 2 or 3 clock cycles reduces this
> physical chip area requirement exponentially.

Exactly.

>
>> Ergo, whoever wrote the microcode for the division royally fucked up.
>
> There may be other reasons why hardware implementation of something might
> be slower than an alternative software implementation.
>
> One reason might be accuracy, or the exact type of operations that need
> to be performed in certain exceptional situations. Floating point
> calculations can be an example of this (where eg. IEEE standard-compliant
> calculations might require more complex operations than would be necessary
> for the application at hand).

It is not uncommon, especially in processors with smaller dies, to have
a hardware implementation for normal finite floating point operations,
but throw a trap to software emulation for NaNs, denormals, or other
more unusual values. That gives a good balance between speed for common
operations without undue costs for rare ones.

>
> Another good example is the RDRAND opcode in newer Intel processors.
> Depending on the processor, it can be 20 times slower than Mersenne
> Twister. On the other hand, there are reasons why it's slower.
>

Yes - the opcode /looks/ like it gives more "random" numbers than a
pseudo-random generator, but really it feeds out a sequence the NSA can
predict...

(No, I don't believe that.)

bol...@cylonhq.com

Aug 29, 2018, 7:25:41 AM
Surely it can't be too hard to implement a noise-based, truly random number
generator on a CPU by now?

David Brown

Aug 29, 2018, 7:35:33 AM
On 29/08/18 13:03, bol...@cylonHQ.com wrote:
> On Wed, 29 Aug 2018 12:51:20 +0200
> David Brown <david...@hesbynett.no> wrote:
>> On 29/08/18 11:43, bol...@cylonHQ.com wrote:
>>> Either you're stupid or you're just being an ass for the sake of arguing.
>>> Call it microcode, call it microops, its the same thing. What would you
>>> call the risc type instructions an x86 instruction gets converted into in
>>> intel processors for example?
>>>
>>
>> Micro-ops. They are completely, totally and utterly different from
>> microcode.
>
> No, they're really not. A assembler instruction is broken down into lower
> level instructions that are directly interpreted by the hardware in both
> cases. The only difference is micro ops are a slightly higher level than
> microcode but the paradigm is exactly the same.

Micro-ops are used in processors with complex CISC ISAs with variable-length
instructions. The instructions in an ISA like the x86 are inherently
complicated, mixing address calculations, loads, operations, stores, and
register updates in the same instruction. This is painful to deal with
in a cpu with pipelining, multiple execution units, speculative
execution, etc. So the early stages of the instruction decode break
down an instruction from something equivalent to:

a += x[i++ + 4]

into

r0 = i          // load the index i
r1 = r0 + 4     // add the constant offset
r2 = x[r1]      // load the array element
r3 = a          // load the accumulator a
r4 = r3 + r2    // do the addition
a = r4          // store the result
r5 = r0 + 1     // post-increment the index
i = r5          // write the incremented index back

Each of these is a RISC-style instruction with a single function, and
will be encoded in a straight-forward easy-to-parse format, so that the
rest of the processor looks like a RISC cpu. The details of the
micro-op "instruction set" are often independent of cpu implementation
details such as the number of execution units.


Microcode instructions are much lower level. They are not used in
modern processors (except, possibly, modern implementations of some old
designs). In a fully microcoded cpu, the source instructions are used
to call routines in microcode, stored in a ROM. The microcode
instructions themselves are very wide - they can be hundreds of bits
wide - with a very direct connection to the exact hardware. These
directly control things like the register-to-ALU multiplexers, latch
enables, gate inputs, and other details. Microcode is impractical for
normal operations on processors with multiple execution units.


Complicated modern processors - whether they are RISC ISA or use
micro-ops to have a RISC core - can have a kind of microcode. This is
basically routines stored in ROM (or flash or ram) for handling rare,
complex operations - or sometimes operations that have a single ISA
instruction but require multiple cycles (like a push/pop multiple
register operation). The instructions in this "microcode" are of the
same form as normal RISC (or micro-op) instructions, perhaps slightly
extended with access to a few internal registers.

>
>>>> been remotely funny, interesting, observant or novel, it would have
>>>> beenfine.
>>>
>>> Do yourself a favour and pull that rod out of your backside. Unless you're
>>> just another aspie robot who doesn't get tongue in cheek.
>>
>> Tongue in cheek is fine. But don't expect people to be particularly
>> impressed by your insults.
>
> Can dish it out but can't take it? The usual story on usenet.

I haven't insulted you, unless you count quoting your own words back at you.

>
>>> You just said it was incorrect, do try and make your mind up. Get back to
>>> me when you've managed it.
>>>
>>
>> You use bit-shifts for multiplying or dividing by 2 (being particularly
>> careful with signs). That does not make a bit-shifter a multiplier or
>> divider.
>
> There is no difference between multiplying by 2 and shifting 1 bit to the
> left or dividing by 2 and shifting 1 bit to the right, other than some
> processor specific carry or overflow flag settings afterwards. Argue the toss
> all you want, its not up for debate.
>

No one is debating that. But suggesting that having a bit shift means
you have a multiplier and a divider is not up for debate either - it was
nonsense when you said it, and nonsense it remains. Smiley or no smiley.


David Brown

Aug 29, 2018, 7:46:45 AM
"True" random number generators have been designed using many different
principles. I think thermal noise over a reverse biased diode is one
common method. Another is the interaction of unsynchronised oscillators.

These can be good sources of entropy, but are not necessarily a good
source of random numbers. Random numbers need a known distribution -
typically, you want to start with a nice uniform distribution and then
shape it according to application needs. You also need your random data
at a fast enough rate, again according to application need. A typical
method of smoothing or "whitening" your entropy source is to use it to
seed a pseudo-random generator - such as a Mersenne twister.

When I say opcodes like RDRAND "look" more random, what I mean is that
people often think such hardware sources are somehow more "random" than
a purely software solution. In real usage, however, when people want
"random" numbers they usually want one or both of two things - an
unpredictable sequence (i.e., after rolling 2, 3, then 4, you can't tell
what the next roll will be, nor can you guess what the roll before was),
and a smooth distribution (i.e., rolling 1, 1, 1 should be as common as
rolling 5, 2, 4). You can get the unpredictable sequences quite happily
with a good pseudo-random generator regularly re-seeded from network
traffic timings or other entropy sources - and these are often more
uniformly distributed than hardware sources.
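
In standard C++ that whitening pattern looks roughly like this
(std::random_device is the implementation's entropy source; it may or may
not be backed by something like RDRAND):

#include <random>
#include <iostream>

int main()
{
    std::random_device entropy;            // non-deterministic source (implementation-defined)
    std::mt19937 prng(entropy());          // seed a Mersenne Twister from real entropy
    std::uniform_int_distribution<int> die(1, 6);

    for (int i = 0; i < 10; ++i)
        std::cout << die(prng) << ' ';     // smooth, unpredictable-looking output
    std::cout << '\n';
}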


Scott Lurndal

Aug 29, 2018, 9:11:12 AM
bol...@cylonHQ.com writes:
>On Wed, 29 Aug 2018 10:52:08 +0200
>David Brown <david...@hesbynett.no> wrote:
>>On 29/08/18 10:32, bol...@cylonHQ.com wrote:
>>> On Wed, 29 Aug 2018 08:14:46 +0200
>>> David Brown <david...@hesbynett.no> wrote:
>>>> On 29/08/18 00:02, Vir Campestris wrote:
>>>> estate. (On one sub-family of the 68k, the hardware division
>>>> instruction was dropped when it was discovered that software routines
>>>> were faster!).
>>>
>>> Sounds like they got the intern to write the microcode there. Software should
>>> never be faster than hardware to do the same thing on the same CPU.
>>>
>>
>>That sounds like you don't understand the nature of processor design.
>
>I think you missed the point that anything that can be done in software can
>also be done in microcode on the die itself, otherwise you'd be claiming that
>software can execute magic hardware functions that the hardware itself can't
>access! Ergo, whoever wrote the microcode for the division royally fucked up.

What modern processor is microcoded? X86/AMD64 has a very small bit of microcode to
handle certain non-performance related management functions, but the math instructions are
all implemented in gates.

Our 64-core ARM64 processor has no microcode.

Scott Lurndal

Aug 29, 2018, 9:13:01 AM
bol...@cylonHQ.com writes:
>On Wed, 29 Aug 2018 11:20:26 +0200
>David Brown <david...@hesbynett.no> wrote:
>>On 29/08/18 11:03, bol...@cylonHQ.com wrote:
>>> I think you missed the point that anything that can be done in software can
>>> also be done in microcode on the die itself, otherwise you'd be claiming that
>>> software can execute magic hardware functions that the hardware itself can't
>>> access! Ergo, whoever wrote the microcode for the division royally fucked up.
>>
>>
>>Again, it sounds like you don't understand the nature of processor design.
>>
>>Modern processors do not use microcode for most instructions - many do
>
>Either you're stupid or you're just being an ass for the sake of arguing.
>Call it microcode, call it microops, its the same thing. What would you
>call the risc type instructions an x86 instruction gets converted into in
>intel processors for example?

Instruction fission is in no way microcode, nor is instruction
fusion (i.e. combining adjacent instructions, e.g. a test and a conditional
branch, in the fetch stage).


Scott Lurndal

Aug 29, 2018, 9:15:12 AM
Juha Nieminen <nos...@thanks.invalid> writes:
>bol...@cylonhq.com wrote:
>> Well that told me, clearly its impossible to implement fast division in
>> hardware then!
>
>Is it possible to implement fast *accurate* division in hardware?

Of course it is. It's been possible for a couple of decades, with
very low latency (< 5 cycles for many processors).

Scott Lurndal

Aug 29, 2018, 9:20:14 AM
bol...@cylonHQ.com writes:
>On Wed, 29 Aug 2018 12:51:20 +0200
>David Brown <david...@hesbynett.no> wrote:
>>On 29/08/18 11:43, bol...@cylonHQ.com wrote:
>>> Either you're stupid or you're just being an ass for the sake of arguing.
>>> Call it microcode, call it microops, its the same thing. What would you
>>> call the risc type instructions an x86 instruction gets converted into in
>>> intel processors for example?
>>>
>>
>>Micro-ops. They are completely, totally and utterly different from
>>microcode.
>
>No, they're really not. A assembler instruction is broken down into lower
>level instructions that are directly interpreted by the hardware in both
>cases. The only difference is micro ops are a slightly higher level than
>microcode but the paradigm is exactly the same.

That's a layman's description suitable for laymen. It's not what
actually happens in the processor, however.

The processor, when it fetches certain instructions, will either
pass it directly to the execution pipeline engines (subject to
dependency analysis), or will fission it into multiple operations
that can be executed in parallel by multiple engines in the pipeline,
or will fuse multiple instructions into a single operation that can
be executed by one of the pipeline engines. None of this is controlled
by any form of programmable microcode - it's implemented directly
in gates.

Juha Nieminen

Aug 29, 2018, 9:37:21 AM
David Brown <david...@hesbynett.no> wrote:
> When I say opcodes like RDRAND "look" more random, what I mean is that
> people often think such hardware sources are somehow more "random" than
> a purely software solution.

I suppose that it comes down to whether the stream of random numbers
is deterministic (and completely predictable given the initial conditions),
or whether the numbers are impossible to predict, no matter what
information you have.

Any cryptographically strong PRNG is completely indistinguishable from
a "true" source of randomness, using almost any form of measurement you
may conjure. (Given two very large streams of numbers produced by
both methods, it's impossible to tell for certain which one was
generated with a software PRNG and which one is from a "true" source
of randomness.)

However, as said, I suppose people instinctively object to the notion
that PRNGs always produce the same results when the initial conditions
are the same, and thus think of it as "less random".

David Brown

Aug 29, 2018, 9:45:17 AM
On 29/08/18 15:37, Juha Nieminen wrote:
> David Brown <david...@hesbynett.no> wrote:
>> When I say opcodes like RDRAND "look" more random, what I mean is that
>> people often think such hardware sources are somehow more "random" than
>> a purely software solution.
>
> I suppose that it comes down to whether the stream of random numbers
> is deterministic (and completely predictable given the initial conditions),
> or whether they numbers are impossible to predict, no matter what
> information you have.

Yes, that is a difference. Pseudo-random generators are deterministic,
but (if they are good algorithms and wide enough numbers) unpredictable
unless you know the seed numbers. True random sources can't be
predicted at all. But as long as the seed numbers are kept safe (or
change with real entropy), there is no way to distinguish the two.

>
> Any cryptographically strong PRNG is completely indistinguishable from
> a "true" source of randomness, using almost any form of measurement you
> may conjure. (Given two very large streams of numbers produced by
> both methods, it's impossible to tell for certain which one was
> generated with a software PRNG and which one is from a "true" source
> of randomness.)

Exactly.

>
> However, as said, I suppose people instinctively object to the notion
> that PRNGs always produce the same results when the initial conditions
> are the same, and thus think of it as "less random".
>

Yes.

(There are other newsgroups where there are people vastly more versed in
randomness and cryptography than me, if you want to know more or discuss
more.)


bol...@cylonhq.com

Aug 29, 2018, 9:49:48 AM
On Wed, 29 Aug 2018 13:35:20 +0200
David Brown <david...@hesbynett.no> wrote:
>On 29/08/18 13:03, bol...@cylonHQ.com wrote:
>> Can dish it out but can't take it? The usual story on usenet.
>
>I haven't insulted you, unless you count quoting your own words back at you.

Being patronising is being insulting however much you'd like to pretend
otherwise.

>> There is no difference between multiplying by 2 and shifting 1 bit to the
>> left or dividing by 2 and shifting 1 bit to the right, other than some
>> processor specific carry or overflow flag settings afterwards. Argue the toss
>> all you want, its not up for debate.
>>
>
>No one is debating that. But suggesting that having a bit shift means
>you have a multiplier and a divider is not up for debate either - it was
>nonsense when you said it, and nonsense it remains. Smiley or no smiley.

Ok, I think we've now established you really don't understand what tongue in
cheek actually means. Don't worry about it.

bol...@cylonhq.com

Aug 29, 2018, 9:58:32 AM
On Wed, 29 Aug 2018 13:37:07 -0000 (UTC)
Juha Nieminen <nos...@thanks.invalid> wrote:
>However, as said, I suppose people instinctively object to the notion
>that PRNGs always produce the same results when the initial conditions
>are the same, and thus think of it as "less random".

That's because they're not random, they're chaotic - two entirely different
things. A chaotic system given the EXACT same starting parameters WILL produce
exactly the same outcome, though a tiny change in those parameters (different
seed) will produce an entirely different result. With a truly random system
start parameters are irrelevant, the sequence cannot be force repeated.
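
That determinism is easy to demonstrate (a tiny sketch; std::random_device
is assumed here to be a genuine entropy source, which is
implementation-defined):

#include <random>
#include <iostream>

int main()
{
    std::mt19937 a(12345), b(12345);       // identical seeds
    std::cout << (a() == b()) << '\n';     // prints 1: the same sequence every time

    std::random_device c, d;               // non-deterministic sources
    std::cout << (c() == d()) << '\n';     // almost certainly prints 0
}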

bol...@cylonhq.com

Aug 29, 2018, 9:59:34 AM
The processor in question was a 68K. ITYF it was microcoded.

David Brown

Aug 29, 2018, 11:18:49 AM
On 29/08/18 15:49, bol...@cylonHQ.com wrote:
> On Wed, 29 Aug 2018 13:35:20 +0200
> David Brown <david...@hesbynett.no> wrote:
>> On 29/08/18 13:03, bol...@cylonHQ.com wrote:
>>> Can dish it out but can't take it? The usual story on usenet.
>>
>> I haven't insulted you, unless you count quoting your own words back at you.
>
> Being patronising is being insulting however much you'd like to pretend
> otherwise.

If you think I have been patronising, then I agree that can be
insulting. I suppose the comment about your lack of knowledge about cpu
design can be taken as patronising. On the other hand, you /did/ put
yourself in a hole, and then kept digging.

>
>>> There is no difference between multiplying by 2 and shifting 1 bit to the
>>> left or dividing by 2 and shifting 1 bit to the right, other than some
>>> processor specific carry or overflow flag settings afterwards. Argue the toss
>>> all you want, its not up for debate.
>>>
>>
>> No one is debating that. But suggesting that having a bit shift means
>> you have a multiplier and a divider is not up for debate either - it was
>> nonsense when you said it, and nonsense it remains. Smiley or no smiley.
>
> Ok, I think we've now established you really don't understand what tongue in
> cheek actually means. Don't worry about.
>

Very drool.

David Brown

Aug 29, 2018, 11:24:12 AM
It was in the 68k family - which were originally microcoded. Later
members, including the derivative Coldfire, had steadily less microcode.
Division was one of relatively few instructions that retained microcode
until it was dropped as a hardware instruction in Coldfire cores (at
least the ones I used). Microcode is a slow technique from a bygone era
in cpu design - it exists in modern designs only where it is very
convenient to have a single instruction at the ISA level (usually due to
backwards compatibility), the instruction is complex, its speed is
irrelevant, and microcoding of some sort can save significant die area.


bol...@cylonhq.com

Aug 29, 2018, 11:53:12 AM
On Wed, 29 Aug 2018 17:18:33 +0200
David Brown <david...@hesbynett.no> wrote:
>On 29/08/18 15:49, bol...@cylonHQ.com wrote:
>> On Wed, 29 Aug 2018 13:35:20 +0200
>> David Brown <david...@hesbynett.no> wrote:
>>> On 29/08/18 13:03, bol...@cylonHQ.com wrote:
>>>> Can dish it out but can't take it? The usual story on usenet.
>>>
>>> I haven't insulted you, unless you count quoting your own words back at you.
>>
>> Being patronising is being insulting however much you'd like to pretend
>> otherwise.
>
>If you think I have been patronising, then I agree that can be
>insulting. I suppose the comment about your lack of knowledge about cpu
>design can be taken as patronising. On the other hand, you /did/ put
>yourself in hole, and then kept digging.

No, you decided to argue the toss over semantics in order - presumably - to
try and win the point. There is little qualitative difference between microcode
and microops however you wish to spin it.

>> Ok, I think we've now established you really don't understand what tongue in
>> cheek actually means. Don't worry about.
>>
>
>Very drool.

I assume that was supposed to be amusing and not simply a typo. I'd give it
another go tbh.

David Brown

Aug 29, 2018, 11:59:46 AM
On 29/08/18 17:53, bol...@cylonHQ.com wrote:
> On Wed, 29 Aug 2018 17:18:33 +0200
> David Brown <david...@hesbynett.no> wrote:
>> On 29/08/18 15:49, bol...@cylonHQ.com wrote:
>>> On Wed, 29 Aug 2018 13:35:20 +0200
>>> David Brown <david...@hesbynett.no> wrote:
>>>> On 29/08/18 13:03, bol...@cylonHQ.com wrote:
>>>>> Can dish it out but can't take it? The usual story on usenet.
>>>>
>>>> I haven't insulted you, unless you count quoting your own words back at you.
>>>
>>> Being patronising is being insulting however much you'd like to pretend
>>> otherwise.
>>
>> If you think I have been patronising, then I agree that can be
>> insulting. I suppose the comment about your lack of knowledge about cpu
>> design can be taken as patronising. On the other hand, you /did/ put
>> yourself in hole, and then kept digging.
>
> No, you decided to argue the toss over semantics in order - presumably - to
> try and win the point. There is little qualitative difference between microcode
> and micro-ops, however you wish to spin it.
>

There is a huge difference. Read my explanation in another post. If
you would rather post wildly wrong statements and have them left
uncorrected, then I think you will be disappointed here - people in this
group try to correct others. Most of us like it that way.


<snipping the rest>

Mr Flibble

unread,
Aug 29, 2018, 2:57:11 PM8/29/18
to
On 29/08/2018 09:25, bol...@cylonHQ.com wrote:
> On Tue, 28 Aug 2018 17:18:42 +0100
> Mr Flibble <flibbleREM...@i42.co.uk> wrote:
>> On 28/08/2018 16:29, bol...@cylonHQ.com wrote:
>>> As for your intrusive_sort - your swapper seems inordinately complex with
>>> little explanation as to what index() or reverse_index() actually do. I can't
>>
>>> see many takers frankly.
>>
>> The troll returns. You can't see many takers because you literally have
>> no clue as to how to use C++ properly as evidenced by your previous posts
>
> And it would seem you have no clue about human nature. For most sorting
> functions users already have to write the copy constructor and assignment
> operator function along with the comparator; now with your sort they have
> to write the swapper too! What's left - the core sorting algorithm. Big deal,
> they might as well write that themselves as well! A quicksort and shell sort
> are all of 20 lines of code max, especially if you're not even going to bother
> to explain exactly what 2 of the functions in your swapper actually do.
> "Returns a sparse array" is not documentation.

It is clear that you totally missed the point my article was making; if
you hadn't, you would have understood the need for my intrusive_sort
algorithm. I suggest you re-read my article, this time with your brain
engaged, and you will see that what the "swapper" actually does is clearly
documented, and that the swapper is not actually part of my intrusive_sort
algorithm: it is just an example swapper.

>
>> to this group. I wasn't suggesting that you should stop using OOP all
>> together but that it is just one tool in toolbox and data-oriented design
>> is making a comeback due to the nature of modern hardware. You obviously
>
> Data-oriented design never went away for people doing to-the-metal coding.
> There's a good reason the cores of most OSes and device drivers are written in C
> and assembler, not C++.

Data-oriented design was usurped by OOA/D/P in the mainstream application
space; OS kernels and device drivers are niche.

/Flibble
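
For readers who have not seen the layouts being contrasted, a minimal C++ sketch (hypothetical types, not taken from the linked article) of the array-of-structs arrangement typical of object-oriented designs against the struct-of-arrays arrangement that data-oriented design favours on cache-driven hardware:

#include <cstddef>
#include <vector>

// Array-of-structs: every field of an object sits together, so a pass that
// only needs positions still drags velocities and materials through the cache.
struct Particle
{
    float x, y, z;
    float vx, vy, vz;
    int material;
};

// Struct-of-arrays: each field lives in its own contiguous array, so a pass
// over positions touches only position data - friendlier to caches and to
// vectorisation.
struct Particles
{
    std::vector<float> x, y, z;
    std::vector<float> vx, vy, vz;
    std::vector<int> material;
};

void integrate(Particles& p, float dt)
{
    for (std::size_t i = 0; i < p.x.size(); ++i)
    {
        p.x[i] += p.vx[i] * dt;
        p.y[i] += p.vy[i] * dt;
        p.z[i] += p.vz[i] * dt;
    }
}

int main()
{
    Particles p;
    p.x = {0.0f}; p.y = {0.0f}; p.z = {0.0f};
    p.vx = {1.0f}; p.vy = {2.0f}; p.vz = {3.0f};
    p.material = {0};
    integrate(p, 0.5f); // p.x[0] == 0.5f, p.y[0] == 1.0f, p.z[0] == 1.5f
}

Neither layout is right in the abstract; which one wins depends on what the hot loops actually touch.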


bol...@cylonhq.com

unread,
Aug 30, 2018, 5:13:27 AM8/30/18
to
On Wed, 29 Aug 2018 19:56:55 +0100
Mr Flibble <flibbleREM...@i42.co.uk> wrote:
>On 29/08/2018 09:25, bol...@cylonHQ.com wrote:
>> to write the swapper too! What's left - the core sorting algorithm. Big deal,
>> they might as well write that themselves as well! A quicksort and shell sort
>> are all of 20 lines of code max, especially if you're not even going to bother
>
>> to explain exactly what 2 of the functions in your swapper actually do.
>> "Returns a sparse array" is not documentation.
>
>It is clear that you totally missed the point my article was making; if
>you hadn't you would have understood the need for my intrusive_sort
>algorithm. I suggest you re-read my article but this time with your brain
>engaged and you will see that what the "swapper" actually does is clearly
>documented and that the swapper is not actually part of my intrusive_sort
>algorithm: it is just an example swapper.

I think you're missing the point that no one is going to bother to use a
sorting library where they have to do most of the work. It rather defeats
the point of using a library in the first place.

>>> to this group. I wasn't suggesting that you should stop using OOP all
>>> together but that it is just one tool in toolbox and data-oriented design
>>> is making a comeback due to the nature of modern hardware. You obviously
>>
>> Data-oriented design never went away for people doing to-the-metal coding.
>> There's a good reason the cores of most OSes and device drivers are written in C
>> and assembler, not C++.
>
>Data-oriented design was usurped by OOA/D/P in the mainstream application
>space; OS kernels and device drivers are niche.

Up to a point, though with Python being very popular these days I often find
that code written in it is more data-oriented than OO, no doubt because it
makes manipulating complex data structures very easy without needing to use its
mediocre OO subsystem. Plus there's still boatloads of C, COBOL and other
procedural language code knocking about.


Mr Flibble

unread,
Aug 30, 2018, 4:14:21 PM8/30/18
to
On 30/08/2018 10:13, bol...@cylonHQ.com wrote:
> On Wed, 29 Aug 2018 19:56:55 +0100
> Mr Flibble <flibbleREM...@i42.co.uk> wrote:
>> On 29/08/2018 09:25, bol...@cylonHQ.com wrote:
>>> to write the swapper too! What's left - the core sorting algorithm. Big deal,
>>> they might as well write that themselves as well! A quicksort and shell sort
>>> are all of 20 lines of code max, especially if you're not even going to bother
>>
>>> to explain exactly what 2 of the functions in your swapper actually do.
>>> "Returns a sparse array" is not documentation.
>>
>> It is clear that you totally missed the point my article was making; if
>> you hadn't you would have understood the need for my intrusive_sort
>> algorithm. I suggest you re-read my article but this time with your brain
>> engaged and you will see that what the "swapper" actually does is clearly
>> documented and that the swapper is not actually part of my intrusive_sort
>> algorithm: it is just an example swapper.
>
> I think you're missing the point that no one is going to bother to use a
> sorting library where they have to do most of the work. It rather defeats
> the point of using a library in the first place.

You obviously didn't re-read my article with your brain engaged like I
suggested. What do you mean by "most of the work"? All the user has to do
is write a swapper, which is trivial.
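
For anyone following along without reading the article, a generic sketch of the idea being argued over - a sort that never swaps elements itself, but hands every exchange to a caller-supplied swapper so that external state can be kept in step. The names here are hypothetical and are not the article's intrusive_sort interface:

#include <cstddef>
#include <iostream>
#include <utility>
#include <vector>

// Selection sort that delegates every exchange to the caller's swapper,
// which can update any external bookkeeping that tracks where elements live.
template <typename T, typename Less, typename Swapper>
void swapper_sort(std::vector<T>& v, Less less, Swapper swap_elements)
{
    for (std::size_t i = 0; i + 1 < v.size(); ++i)
    {
        std::size_t best = i;
        for (std::size_t j = i + 1; j < v.size(); ++j)
            if (less(v[j], v[best]))
                best = j;
        if (best != i)
            swap_elements(i, best);
    }
}

int main()
{
    std::vector<int> keys = { 3, 1, 2 };
    std::vector<char> tags = { 'c', 'a', 'b' }; // parallel data to keep in sync

    swapper_sort(keys,
        [](int a, int b) { return a < b; },
        [&](std::size_t i, std::size_t j)
        {
            std::swap(keys[i], keys[j]);
            std::swap(tags[i], tags[j]); // external state stays consistent
        });

    for (std::size_t i = 0; i < keys.size(); ++i)
        std::cout << keys[i] << tags[i] << ' '; // prints: 1a 2b 3c
    std::cout << '\n';
}

The swapper in this sketch is three lines; what the caller buys with it is the ability to keep the parallel tags array valid without a post-sort fix-up pass.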

Vir Campestris

unread,
Aug 30, 2018, 4:37:57 PM8/30/18
to
On 29/08/2018 07:14, David Brown wrote:
> 4 clocks per instruction means an IPC of 0.25, not "around 2/3".

Brainfart. The thing was IIRC clocked at 4MHz, and so the memory cycle
rate was ~1MHz.

Sorry. Next time I'll look it up, not rely on decades-old memory.
(Except my manuals are in store until next month.)

Andy
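
For reference, the arithmetic behind the correction: IPC is the reciprocal of clocks per instruction, so 4 clocks per instruction gives 1/4 = 0.25; and a 4MHz clock spending roughly 4 clocks per memory access works out at about a 1MHz memory-cycle rate, which is the figure recalled above.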

David Brown

unread,
Aug 31, 2018, 2:16:53 AM8/31/18
to
For most of us, the Spectrum and the Z80A were a /long/ time ago. The
brain remembers many pointless and useless details, but rarely the ones
we actually want for such "I'm a bigger nerd than you" competitions.

(And wasn't it 3.5 MHz, or perhaps 3.75 MHz, rather than 4 MHz? Will
that show I remember better than you - or will I have egg on my face?
It's a race to Wikipedia!)


But I think we can all agree that a modern x86 cpu is orders of
magnitude faster than a Z80A :-)