
Are there any asm-instructions to support OOP


Christian Hanné

Aug 25, 2020, 12:38:21 PM
Is there any CPU architecture with CPU instructions to support
object protection levels like private, protected, public (and
package in Java)?
I think that would be very cool, since you could establish security
mechanisms on top of that.

Christian Hanné

Aug 25, 2020, 1:07:54 PM
Imagine that methods are dynamically dispatched according to their
signature hash through special CPU instructions. That would be rather
fast, and you could have a dispatching mechanism for component-based
software with security checking.
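
One way to picture the idea in software: a minimal C++ sketch where an
ordinary hash table stands in for the imagined CPU instruction. All
names here are hypothetical, and a real design would differ.

#include <cstddef>
#include <functional>
#include <iostream>
#include <string>
#include <unordered_map>

// Hypothetical model: methods are registered under a hash of their
// signature, and calls are routed through a lookup keyed by that hash.
// Hardware support would presumably fold the table lookup (and perhaps
// a privilege check) into a dedicated instruction.
class SignatureDispatcher {
public:
    using Method = std::function<void(void* self)>;

    void register_method(const std::string& signature, Method m) {
        table_[std::hash<std::string>{}(signature)] = std::move(m);
    }

    void dispatch(const std::string& signature, void* self) const {
        auto it = table_.find(std::hash<std::string>{}(signature));
        if (it != table_.end())
            it->second(self);   // a real system would trap on a miss
    }

private:
    std::unordered_map<std::size_t, Method> table_;
};

int main() {
    SignatureDispatcher d;
    d.register_method("void draw()", [](void*) { std::cout << "draw\n"; });
    d.dispatch("void draw()", nullptr);   // routed purely by signature hash
}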

Alf P. Steinbach

Aug 25, 2020, 1:14:52 PM
As I recall IBM has at least one mainframe with hardware Java support.

Wikipedia lists, at <url: https://en.wikipedia.org/wiki/Java_processor>:


• picoJava was the first attempt by Sun Microsystems to build a Java
processor
• aJ102 and aJ200 from aJile Systems, Inc. Available on boards from
Systronix
• Cjip from Imsys Technologies. Available on boards and with wireless
radios from AVIDwireless
• Komodo is a multithreaded Java microcontroller for research on
real-time scheduling
• FemtoJava is a research project to build an application specific Java
processor
• ARM926EJ-S is an ARM processor able to run Java bytecode, this
technology being named Jazelle
• Java Optimized Processor for FPGAs. A PhD thesis is available
• SHAP bytecode processor from the TU Dresden
• <url:
https://www.sciencedirect.com/science/article/abs/pii/S0141933105000967?via%3Dihub>
provides hardware support for object-oriented functions
• ObjectCore is a multicore Java processor designed by Vivaja Technologies.
• Java Offload Engine (JOE) is a high performance Java co-processor from
Temple Computing Labs LLP.


Curiously, the Wikipedia article doesn't mention the IBM machine or machines.


- Alf

Stefan Monnier

Aug 25, 2020, 1:18:32 PM
> Is there any CPU architecture with CPU instructions to support
> object protection levels like private, protected, public (and
> package in Java)?

The details of what is allowed and what is not in languages offering
those kinds of "protection levels" tend to be ever so slightly different
in each language, so it would probably be hard to provide hardware that
can efficiently reflect faithfully all those slight variations.

Furthermore, AFAIK Java doesn't implement those checks at run-time but
at load-time instead (while typechecking the JVM byte-code), because it
would be costly to make those checks at run-time.

Maybe dedicated hardware could perform such checks more cheaply than the
JVM's runtime on stock hardware, but IIUC the cost doesn't come just
from a few extra instructions executed during the check but also from
the extra information that needs to be preserved, propagated, and
accessed in order to perform the checks, and dedicated hardware would
still have to pay that extra cost.

And the result would still be slower than performing those checks at load-time.

So I suspect that the market for such a thing is too small to justify
the investment.


Stefan

Jorgen Grahn

Aug 25, 2020, 2:39:34 PM
["Followup-To:" header set to comp.lang.c++.]

On Tue, 2020-08-25, Christian Hanné wrote:
> Is there any CPU architecture with CPU instructions to support
> object protection levels like private, protected, public (and
> package in Java)?

That would be pointless, because private/protected/public are checked
at compile time.

> I think that would be very cool since you could establish security
> -mechanisms on top of that.

They aren't security mechanisms. (Not in C++ anyway.)

/Jorgen

--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .

Melzzzzz

Aug 25, 2020, 3:05:42 PM
OOP is out of fashion...


--
current job title: senior software engineer
skills: c++,c,rust,go,nim,haskell...

press any key to continue or any other to quit...
There is nothing I enjoy as much as my status of INVALID -- Zli Zec
We are all witnesses - about 3 years of intensive propaganda is enough to drive a nation mad -- Zli Zec
The Wild West actually didn't have that much violence, precisely because
everyone was armed. -- Mladen Gogala

Ivan Godard

Aug 25, 2020, 4:14:50 PM
Yes. Research "capability machines".

Öö Tiib

Aug 25, 2020, 5:09:46 PM
That protection (at least in C++) is a compile-time phenomenon.
It is not meant to be for security. As it is compile-time only,
there are no traces whatsoever of it in the generated machine
code.

Protection is useful only for programmers (and teams) who want
to avoid accidentally harming the maintainability of a class, or
even breaking the invariants of object instances, by access that
bypasses encapsulation.

For programmers who want to bypass encapsulation levels
occasionally (for example for automated debugging or
unit-testing something) there are ways to do it with totally
well-defined code in C++.

For programmers who do not care about encapsulation at all,
there is the possibility of making everything public.

For programmers who want some kind of actual security, there
are thick books and entire newsgroups dedicated to that topic
alone.


Rick C. Hodgin

Aug 25, 2020, 5:12:02 PM
On 8/25/20 3:05 PM, Melzzzzz wrote:
> On 2020-08-25, Christian Hanné <the....@gmail.com> wrote:
>> Is there any CPU architecture with CPU instructions to support
>> object protection levels like private, protected, public (and
>> package in Java)?
>> I think that would be very cool, since you could establish security
>> mechanisms on top of that.
>
> OOP is out of fashion...

OOP still has its place, especially in business applications in
languages like C#, and even BASIC variants and Visual FoxPro.

To the OP: the goal of CPU architecture enhancements, like Jazelle in
ARM, is to enhance compute abilities, not to enforce syntax or data
access protocols. The syntax tests for private, protected, public,
etc., are compile-time considerations which determine whether or not
a particular data access is legal via syntax constraints, not hardware
constraints. If you dip down into low-level debugging, you can figure
out a way to access almost anything you want. It's just that the
language itself won't do that for you via its syntax.

Compilers do their job. Hardware does its job. The two are closely
coupled in many ways, but they are also quite disparate and un-coupled
in others.

https://en.wikipedia.org/wiki/Jazelle

--
Rick C. Hodgin

Juha Nieminen

Aug 26, 2020, 2:20:02 AM
In comp.lang.c++ Rick C. Hodgin <rick.c...@gmail.com> wrote:
>> OOP is out of fashion...
>
> OOP still has its place, especially in business applications in
> languages like C#, and even BASIC variants and Visual FoxPro.

I can't even imagine how a GUI library could be implemented without OOP...

Maybe one could do it in some other way that kind of mimics what OOP provides
to such a library but without using OOP per se, although it's still hard to
imagine how you would do that. (Perhaps some kind of templated code that
generates code for every single possible type of GUI element? Even then it's
hard to come up with an implementation that's "not OOP", given that
you ostensibly need data containers that can manage objects of different
types, and to perform the same operations on all the objects even though
they are of different types...)

Perhaps the most prominent industry where OOP is falling out of fashion is
the game engine one. OOP seemed like the natural approach, for pretty much
the same reason as with GUI libraries. However, OOP just kills low-level
performance because it interacts poorly with cache coherence, memory
locality, the CPU's pipeline, and the compiler's ability to autovectorize
the code. (Changing the codebase of a game engine from an object-oriented
design to a data-oriented design can increase efficiency by even an order
of magnitude, for the mere reason that the latter behaves much better with
respect to the CPU's cache and pipelines, as well as helping the compiler
to better optimize the code eg. with autovectorization.)

Bonita Montero

Aug 26, 2020, 2:25:45 AM
> ... However, OOP just kills low-level
> performance because it interacts poorly with cache coherence, memory
> locality, the CPU's pipeline, and the compiler's ability to autovectorize
> the code. ...

That's stupid stuff.

David Brown

Aug 26, 2020, 4:31:40 AM
No, he is correct.

But it is not just OOP that makes the difference here - it is structured
data in the first place. OOP just makes it even worse performance-wise
(while making it neater in the source code).

You might logically have a structure for a bullet in a game that holds x
and y coordinates, age, type, size, colour, speeds, explosiveness, and a
dozen other attributes. With OOP, some of these will be inherited from
a more generic "flying thing" object, and other types will inherit from
the bullet - everything is neat, maintainable, expandable in the code.
Let's guess 100 bytes of data for a "bullet" object.

But when you want to do a time-step, you do "pos_x += speed_x *
time_step" for every object in your list. Let's say you have 10,000
objects in your list (games can easily have vast numbers of objects at a
time). For each object, you will need to pull in at least one, but
sometimes two, 64-byte cache lines - varying as the alignment changes
throughout the array. And for each updated position variable, you have
a 64-byte dirty cache line that needs to be written out again. 640 KB of
memory - thrashing your L1 cache and making a solid dent in your L2
cache. Almost all of the cache space used, and memory bandwidth, is
wasted. And of course there can be no vectorisation.

In game programming, you prefer to have an array of pos_x, an array of
speed_x, and so on - the object's data is all split up. Now you are
running through two arrays, of 40 KB each (for 4-byte float data) - an
order of magnitude more efficient. Every byte read is a byte used, as
is every byte written. Vectorisation will let you handle perhaps 8
objects every clock cycle. It is a completely different world in
performance terms.

For almost any programming where run-time speed is important, it comes
down to caches.
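
As a deliberately simplified sketch of the two layouts, in C++ - the
field names and the ~100-byte object size are illustrative, not taken
from any real engine:

#include <cstddef>
#include <vector>

// Array-of-structures: updating the position drags a whole ~100-byte
// object through the cache just to touch two floats.
struct Bullet {
    float pos_x, pos_y, speed_x, speed_y;
    char other_attributes[84];   // age, type, colour, ... (illustrative)
};

void update_aos(std::vector<Bullet>& bullets, float time_step) {
    for (auto& b : bullets)
        b.pos_x += b.speed_x * time_step;   // 1-2 cache lines per object
}

// Structure-of-arrays: the same update streams through two dense float
// arrays, so every byte fetched is used and the loop vectorises easily.
struct Bullets {
    std::vector<float> pos_x;
    std::vector<float> speed_x;
    // the remaining attributes live in arrays of their own
};

void update_soa(Bullets& bullets, float time_step) {
    for (std::size_t i = 0; i < bullets.pos_x.size(); ++i)
        bullets.pos_x[i] += bullets.speed_x[i] * time_step;
}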



Bonita Montero

Aug 26, 2020, 4:36:21 AM
> No, he is correct.

No, that's wrong. Whether you have structures and associated functions,
or methods bound to the namespace of the structures, doesn't make a
difference.

> But when you want to do a time-step, you do "pos_x += speed_x *
> time_step" for every object in your list. Let's say you have 10,000
> objects in your list (games can easily have vast numbers of objects at a
> time). For each object, you will need to pull in at least one, but
> sometimes two, 64-byte cache lines - varying as the alignment changes
> throughout the array. And for each updated position variable, you have
> a 64-byte dirty cache line that needs to be written out again. 640 KB of
> memory - thrashing your L1 cache and making a solid dent in your L2
> cache. Almost all of the cache space used, and memory bandwidth, is
> wasted. And of course there can be no vectorisation.

LOL, what stupid dreaming.

bol...@nuttyella.co.uk

Aug 26, 2020, 5:19:37 AM
On Wed, 26 Aug 2020 10:31:23 +0200
David Brown <david...@hesbynett.no> wrote:
>In game programming, you prefer to have an array of pos_x, an array of
>speed_x, and so on - the object's data is all split up. Now you are

I find it hard to believe that in any significant game that may have a
codebase of millions of lines of code and thousands of different game object
types, they revert to BASIC style programming with object values being
stored in arrays indexed by some object id. Perhaps it does speed things up
but the spaghetti code and potential design problems and bugs it would create
would far outweigh the gains IMO.

Paavo Helde

Aug 26, 2020, 6:46:29 AM
"object values being stored in arrays indexed by some object id" is a
common data structure in programming; it's called "table". From what I
have heard there are some big name vendors earning great bucks on them.

A table can be represented either row-wise and column-wise in memory. If
one frequently needs to perform operations on whole columns, then
storing columns compactly in memory means huge performance wins.

bol...@nuttyella.co.uk

Aug 26, 2020, 6:51:56 AM
On Wed, 26 Aug 2020 13:46:13 +0300
Paavo Helde <ees...@osa.pri.ee> wrote:
>26.08.2020 12:19 bol...@nuttyella.co.uk kirjutas:
>> On Wed, 26 Aug 2020 10:31:23 +0200
>> David Brown <david...@hesbynett.no> wrote:
>>> In game programming, you prefer to have an array of pos_x, an array of
>>> speed_x, and so on - the object's data is all split up. Now you are
>>
>> I find it hard to believe that in any significant game that may have a
>> codebase of millions of lines of code and thousands of different game object
>> types, they revert to BASIC style programming with object values being
>> stored in arrays indexed by some object id. Perhaps it does speed things up
>> but the spaghetti code and potential design problems and bugs it would create
>> would far outweigh the gains IMO.
>>
>
>"object values being stored in arrays indexed by some object id" is a
>common data structure in programming; it's called "table". From what I
>have heard there are some big name vendors earning great bucks on them.

I'm not talking about tables/maps/hashes, since if you use a complex object
like that you might as well just use objects directly, as there's a lot going
on underneath. I suspect what he was talking about is C-style arrays indexed
by a sequentially created integer object id that identifies an object.

Jorgen Grahn

Aug 26, 2020, 7:06:46 AM
["Followup-To:" header set to comp.lang.c++.]

Surely it would affect only parts of the code, and surely you can
build reasonable abstractions on top of it, in C++ anyway.

(Maybe OOP in the sense "lots of inheritance everywhere" would be hard
to get, though. If anyone wants that.)

Paavo Helde

Aug 26, 2020, 7:37:05 AM
OOP with lots of inheritance might imply dynamic allocation in common
usage scenarios, and dynamic allocation might be slow, for several reasons.

But it all depends on how OOP is used. E.g., in image processing, if a
pixel is an OOP object allocated dynamically then there is no hope for
any decent performance. OTOH, if an image containing millions of pixels
is an OOP object and allocated dynamically, there is no problem.

David Brown

Aug 26, 2020, 8:13:47 AM
On 26/08/2020 13:06, Jorgen Grahn wrote:
> ["Followup-To:" header set to comp.lang.c++.]
>
> On Wed, 2020-08-26, bol...@nuttyella.co.uk wrote:
>> On Wed, 26 Aug 2020 10:31:23 +0200
>> David Brown <david...@hesbynett.no> wrote:
>>> In game programming, you prefer to have an array of pos_x, an array of
>>> speed_x, and so on - the object's data is all split up. Now you are
>>
>> I find it hard to believe that in any significant game that may have a
>> codebase of millions of lines of code and thousands of different game object
>> types, they revert to BASIC style programming with object values being
>> stored in arrays indexed by some object id. Perhaps it does speed things up
>> but the spaghetti code and potential design problems and bugs it would create
>> would far outweigh the gains IMO.
>
> Surely it would affect only parts of the code, and surely you can
> build reasonable abstractions on top of it, in C++ anyway.
>

Of course (to both points).

Some references:

<https://gameprogrammingpatterns.com/data-locality.html>

<https://jacksondunstan.com/articles/3860>

David Brown

Aug 26, 2020, 8:53:05 AM
No one suggested spaghetti code or "BASIC style" programming. You would
use C++ to get nice code - wrapping access to private static arrays (or
more likely, arrays of SIMD vectors) in abstractions that give you neat
source code that compiles to optimised SIMD object code with run-time
adaption to different processors.

C++ has the tools you need to write this kind of code in a (relatively)
clear and maintainable way, with a separation of the abstract interface
used in code and the highly optimised implementation code. Using an
obvious OOP hierarchy with data held in classes will not cut it.

And the speed gains can easily be an order of magnitude - it /is/ worth
doing. In the opinion of real games developers.

bol...@nuttyella.co.uk

Aug 26, 2020, 11:27:59 AM
On Wed, 26 Aug 2020 14:52:46 +0200
David Brown <david...@hesbynett.no> wrote:
>On 26/08/2020 11:19, bol...@nuttyella.co.uk wrote:
>> On Wed, 26 Aug 2020 10:31:23 +0200
>> David Brown <david...@hesbynett.no> wrote:
>>> In game programming, you prefer to have an array of pos_x, an array of
>>> speed_x, and so on - the object's data is all split up. Now you are
>>
>> I find it hard to believe that in any significant game that may have a
>> codebase of millions of lines of code and thousands of different game object
>> types, they revert to BASIC style programming with object values being
>> stored in arrays indexed by some object id. Perhaps it does speed things up
>> but the spaghetti code and potential design problems and bugs it would create
>> would far outweigh the gains IMO.
>>
>
>No one suggested spaghetti code or "BASIC style" programming. You would
>use C++ to get nice code - wrapping access to private static arrays (or
>more likely, arrays of SIMD vectors) in abstractions that give you neat
>source code that compiles to optimised SIMD object code with run-time
>adaption to different processors.

Don't you simply get overhead with the abstracted calls instead?

>C++ has the tools you need to write this kind of code in a (relatively)
>clear and maintainable way, with a separation of the abstract interface
>used in code and the highly optimised implementation code. Using an
>obvious OOP hierarchy with data held in classes will not cut it.

Just sounds like old style bottom up programming with some lipstick on top.
If you can't use much in the way of OOP you might as well just use C.

>And the speed gains can easily be an order of magnitude - it /is/ worth
>doing. In the opinion of real games developers.

I'll take your word for it.

David Brown

Aug 26, 2020, 11:44:37 AM
On 26/08/2020 17:27, bol...@nuttyella.co.uk wrote:
> On Wed, 26 Aug 2020 14:52:46 +0200
> David Brown <david...@hesbynett.no> wrote:
>> On 26/08/2020 11:19, bol...@nuttyella.co.uk wrote:
>>> On Wed, 26 Aug 2020 10:31:23 +0200
>>> David Brown <david...@hesbynett.no> wrote:
>>>> In game programming, you prefer to have an array of pos_x, an array of
>>>> speed_x, and so on - the object's data is all split up. Now you are
>>>
>>> I find it hard to believe that in any significant game that may have a
>>> codebase of millions of lines of code and thousands of different game object
>>> types, they revert to BASIC style programming with object values being
>>> stored in arrays indexed by some object id. Perhaps it does speed things up
>>> but the spaghetti code and potential design problems and bugs it would create
>>> would far outweigh the gains IMO.
>>>
>>
>> No one suggested spaghetti code or "BASIC style" programming. You would
>> use C++ to get nice code - wrapping access to private static arrays (or
>> more likely, arrays of SIMD vectors) in abstractions that give you neat
>> source code that compiles to optimised SIMD object code with run-time
>> adaption to different processors.
>
> Don't you simply get overhead with the abstracted calls instead?

No. Your abstractions are not calls, just wrappers so that code using
the data doesn't have to be concerned about the details of the
implementation of the data storage. (It might /look/ like function
calls, templates, references, etc., but done properly, it will all
disappear in inlining and optimisation.)

Also, your really critical code probably accesses the data using the
low-level vector information.
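
A minimal sketch of the kind of wrapper meant here, assuming
structure-of-arrays storage with a proxy type for element access (all
names are made up, not from any real engine). Because everything is
inline, a decent compiler reduces it to plain array indexing:

#include <cstddef>
#include <vector>

class Particles {
public:
    struct Ref {  // proxy: behaves like a reference to one particle
        Particles& p;
        std::size_t i;
        float& pos_x() { return p.pos_x_[i]; }
        float& speed_x() { return p.speed_x_[i]; }
    };

    explicit Particles(std::size_t n) : pos_x_(n), speed_x_(n) {}

    Ref operator[](std::size_t i) { return Ref{*this, i}; }
    std::size_t size() const { return pos_x_.size(); }

private:
    std::vector<float> pos_x_, speed_x_;  // contiguous per-field storage
};

void step(Particles& ps, float dt) {
    for (std::size_t i = 0; i < ps.size(); ++i) {
        auto p = ps[i];                  // looks like object access...
        p.pos_x() += p.speed_x() * dt;   // ...compiles to contiguous loads
    }
}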

>
>> C++ has the tools you need to write this kind of code in a (relatively)
>> clear and maintainable way, with a separation of the abstract interface
>> used in code and the highly optimised implementation code. Using an
>> obvious OOP hierarchy with data held in classes will not cut it.
>
> Just sounds like old style bottom up programming with some lipstick on top.
> If you can't use much in the way of OOP you might as well just use C.
>

No. C++ supports OOP - but doing OOP is certainly not the only reason
to choose C++ over C.

>> And the speed gains can easily be an order of magnitude - it /is/ worth
>> doing. In the opinion of real games developers.
>
> I'll take your word for it.
>

Marvellous!

You can also look at these (due to the follow-ups, I posted these in the
C++ group but not the comp.arch group in a previous post).


<https://gameprogrammingpatterns.com/data-locality.html>

<https://jacksondunstan.com/articles/3860>

Chris Vine

Aug 26, 2020, 12:51:07 PM
On Wed, 26 Aug 2020 06:19:50 +0000 (UTC)
Juha Nieminen <nos...@thanks.invalid> wrote:
> In comp.lang.c++ Rick C. Hodgin <rick.c...@gmail.com> wrote:
> >> OOP is out of fashion...
> >
> > OOP still has its place, especially in business applications in
> > languages like C#, and even BASIC variants and Visual FoxPro.
>
> I can't even imagine how a GUI library could be implemented without OOP...

That depends on whether by 'OOP' you mean OOP implemented by C++'s
notion of inheritance sub-typing. Every serious language has to
have some means of properly structuring code, and the ability to build
code up from the bottom step by step.

Rust has structs and traits. The MLs have module signatures, module
inclusions and functors, and some of them also implement sub-type
polymorphism via polymorphic variants. And of course C does not have
inheritance as such (although it can be emulated).

Lynn McGuire

Aug 26, 2020, 1:47:00 PM
I've seen a GUI library implemented in C before. Lots and lots of void *.

Lynn

Juha Nieminen

Aug 26, 2020, 2:02:11 PM
In comp.lang.c++ Bonita Montero <Bonita....@gmail.com> wrote:
>> No, he is correct.
>
> No, that's wrong. Whether you have structures and associated functions,
> or methods bound to the namespace of the structures, doesn't make a
> difference.

I don't think you have any idea what either of us is talking about.

Do you even know how CPU caches work?

Juha Nieminen

Aug 26, 2020, 2:14:21 PM
In comp.lang.c++ Jorgen Grahn <grahn...@snipabacken.se> wrote:
> (Maybe OOP in the sense "lots of inheritance everywhere" would be hard
> to get, though. If anyone wants that.)

Inheritance and dynamic binding (ie. virtual functions) are not the
only performance-killers.

One of the most fundamental core building blocks of OOP, and its
precursor, modular programming, ie. the fact that you gather all
the data related to one object into the same place, the same
data structure, is one of the major performance killers in modern
CPUs. This even if you manage objects by value putting them in arrays.
(While putting them by-value into arrays does help quite some, it's
still not optimal.)

Quite often, especially in games, you only read and/or write a
particular or a couple of particular member variables of each object
(like for example the coordinate variables of the object), and you
do this for a huge bunch of objects (in modern games we may be talking
about thousands, maybe even tens of thousands of objects, on each
frame, at least 60 times per second, in modern PCs even more, like
up to 144 times per second and even higher.)

Even if your objects are in an array, what happens is that you will
be jumping in this array in big steps, because each object will have
a big bunch of other ancillary member variables in between the ones
you are interested in. This kills cache locality. The most optimal
way would be to traverse all the consecutive bytes of an array,
because this minimizes the amount of cache misses. However, OOP
kills this because now you will be jumping in big steps.

And that's assuming your objects are in an array in the first place.
If you just dynamically allocated every object individually, it will
be much worse. Now they will be all over the place, at random
locations in memory.

The other reason why OOP is a performance killer is that function
calls inhibit compiler optimizations. And we are not necessarily
even talking about virtual function calls. Just regular function
calls (assuming the functions aren't inlined).

A function call in itself isn't very inefficient at runtime because
of branch prediction. The CPU will start reading instructions from
the location of the jump, so there's very little penalty. Depending
on the function call the stack might not even be involved (if all
the parameters can be passed in CPU registers.)

However, what may well kill performance is that a function call
(that can't be inlined) will completely inhibit the compiler's
ability to optimize the code, eg. if you are calling the function
in a loop (which is often the case when you are updating a bunch
of objects).

If the code directly sees the data in an array and performs no
function calls, the compiler may well be able to auto-vectorize
the operations you are doing to the data.
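
A small illustration of that point, with made-up names: the first loop
calls through an opaque function boundary, which blocks vectorisation;
the second operates on the arrays directly and is a textbook
autovectorisation candidate.

#include <cstddef>

// Imagine this is defined in another translation unit, so the compiler
// cannot see into it or inline it.
void update_one(float* x, const float* v, std::size_t i, float dt);

void update_opaque(float* x, const float* v, std::size_t n, float dt) {
    for (std::size_t i = 0; i < n; ++i)
        update_one(x, v, i, dt);   // the call boundary hides everything
}

void update_direct(float* x, const float* v, std::size_t n, float dt) {
    for (std::size_t i = 0; i < n; ++i)
        x[i] += v[i] * dt;         // typically compiled to SIMD
}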

Juha Nieminen

Aug 26, 2020, 2:19:16 PM
In comp.lang.c++ bol...@nuttyella.co.uk wrote:
> I find it hard to believe that in any significant game that may have a
> codebase of millions of lines of code and thousands of different game object
> types, they revert to BASIC style programming with object values being
> stored in arrays indexed by some object id. Perhaps it does speed things up
> but the spaghetti code and potential design problems and bugs it would create
> would far outweigh the gains IMO.

You may find it hard to believe, but that's what all the major game
engines have been doing for the past few years: Moving away from OOP
and towards data-oriented design.

The whole idea of "data-oriented design" is to make your code as
efficient as possible at the low level (ie. in terms of the CPU cache,
the CPU pipeline and the compiler autovectorization) without your code
becoming spaghettified. It's definitely not as easy and nice as OOP,
but it's not impossible either. And it may result in *significant*
improvements in terms of speed.

Given that video games are becoming bigger and bigger, and more and
more complex by the year, and given that not everything can be offloaded
to the GPU, and given that display framerates are also increasing by
the year (displays are already surpassing 144 Hz refresh rates), being
able to process data as efficiently as possible with the CPU can be
a huge advantage.

Juha Nieminen

Aug 26, 2020, 2:24:16 PM
In comp.lang.c++ Lynn McGuire <lynnmc...@gmail.com> wrote:
> Seen a GUI library implemented before in C. Lots and lots of void *.

It's still OOP even if it has to be "manually simulated".
I'm more talking about program design rather than whether the
language has direct support for OOP.

A GUI typically needs some kind of base code that handles all
screen elements (so that it can eg. draw them and do all other
kinds of stuff to them). Since not all elements are equal, the
base code needs to still be able to handle all of them, no matter
how different they may be. Be it a button, a checkbox, an image,
a text label, a textfield, a window, a menu, a menu element...
the base code still needs to be able to eg. know its dimensions
and positions (and a myriad of other common characteristics)
and to be able to draw them somehow. It also needs to know which
elements are inside which other elements, and so on.

I can't even imagine how this could be achieved with anything
other than OOP (even if the OOP has to be "simulated" in a
language that doesn't have explicit support).
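
A minimal sketch of that "manual simulation": a C-style widget record
whose behaviour lives in a table of function pointers, i.e. a
hand-rolled vtable. All names are illustrative, not from any real
library.

#include <cstdio>

struct Widget;

struct WidgetOps {
    void (*draw)(const Widget*);  // per-type behaviour table
};

struct Widget {
    const WidgetOps* ops;  // the hand-rolled "vtable" pointer
    int x, y, w, h;        // common state every element shares
};

static void draw_button(const Widget* w) {
    std::printf("button at (%d,%d)\n", w->x, w->y);
}

static const WidgetOps button_ops = { draw_button };

static void draw_all(Widget* const* widgets, int n) {
    for (int i = 0; i < n; ++i)
        widgets[i]->ops->draw(widgets[i]);  // dynamic dispatch, by hand
}

int main() {
    Widget b{ &button_ops, 10, 20, 80, 24 };
    Widget* all[] = { &b };
    draw_all(all, 1);
}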

Ben Bacarisse

Aug 26, 2020, 2:37:02 PM
Juha Nieminen <nos...@thanks.invalid> writes:

> In comp.lang.c++ Jorgen Grahn <grahn...@snipabacken.se> wrote:
>> (Maybe OOP in the sense "lots of inheritance everywhere" would be hard
>> to get, though. If anyone wants that.)
>
> Inheritance and dynamic binding (ie. virtual functions) are not the
> only performance-killers.
>
> One of the most fundamental core building blocks of OOP, and its
> precursor, modular programming, ie. the fact that you gather all
> the data related to one object into the same place, the same
> data structure, is one of the major performance killers in modern
> CPUs. This even if you manage objects by value putting them in arrays.
> (While putting them by-value into arrays does help quite some, it's
> still not optimal.)

A minor point of little practical value is that the collecting together
is only in the source code. One could imagine an OO language that could
arrange for vectors of objects to have certain properties arranged in
contiguous storage. It would probably be complex to manage, but it
might be possible.

> Quite often, especially in games, you only read and/or write a
> particular or a couple of particular member variables of each object
> (like for example the coordinate variables of the object), and you
> do this for a huge bunch of objects (in modern games we may be talking
> about thousands, maybe even tens of thousands of objects, on each
> frame, at least 60 times per second, in modern PCs even more, like
> up to 144 times per second and even higher.)
>
> Even if your objects are in an array, what happens is that you will
> be jumping in this array in big steps, because each object will have
> a big bunch of other ancillary member variables in between the ones
> you are interested in. This kills cache locality. The most optimal
> way would be to traverse all the consecutive bytes of an array,
> because this minimizes the amount of cache misses. However, OOP
> kills this because now you will be jumping in big steps.
>
> And that's assuming your objects are in an array in the first place.
> If you just dynamically allocated every object individually, it will
> be much worse. Now they will be all over the place, at random
> locations in memory.

Right. But this is in part bad design. I blame the rather simplistic
view of OO that gets pushed by online tutorials and so on. There is no
reason to consider position to be an intrinsic property of an object.
Object locations could be stored contiguously in an instance of a
LocationArray class and linked (via pointers or indexes) or otherwise
associated with the object or objects that have those locations.
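
A minimal sketch of that arrangement in C++, using index-based links;
the class shapes here are guesses for illustration, not a definitive
design:

#include <cstddef>
#include <string>
#include <vector>

struct Location { float x, y; };

// Positions live contiguously here, so bulk sweeps stay cache-friendly.
class LocationArray {
public:
    std::size_t add(Location loc) {
        data_.push_back(loc);
        return data_.size() - 1;
    }
    Location& operator[](std::size_t i) { return data_[i]; }
    std::vector<Location>& raw() { return data_; }  // for bulk updates

private:
    std::vector<Location> data_;
};

struct GameObject {
    std::string name;         // other per-object state stays in the object
    std::size_t location_id;  // association by index, not by embedding
};

void move_everything(LocationArray& locs, float dx) {
    for (auto& loc : locs.raw())  // contiguous sweep, no object hopping
        loc.x += dx;
}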

--
Ben.

David Brown

Aug 26, 2020, 5:38:15 PM
On 26/08/2020 20:36, Ben Bacarisse wrote:
> Juha Nieminen <nos...@thanks.invalid> writes:
>
>> In comp.lang.c++ Jorgen Grahn <grahn...@snipabacken.se> wrote:
>>> (Maybe OOP in the sense "lots of inheritance everywhere" would be hard
>>> to get, though. If anyone wants that.)
>>
>> Inheritance and dynamic binding (ie. virtual functions) are not the
>> only performance-killers.
>>
>> One of the most fundamental core building blocks of OOP, and its
>> precursor, modular programming, ie. the fact that you gather all
>> the data related to one object into the same place, the same
>> data structure, is one of the major performance killers in modern
>> CPUs. This even if you manage objects by value putting them in arrays.
>> (While putting them by-value into arrays does help quite some, it's
>> still not optimal.)
>
> A minor point of little practical value is that the collecting together
> is only in the source code. One could imagine an OO language that could
> arrange for vectors of objects to have certain properties arranged in
> contiguous storage. It would probably be complex to manage, but it
> might be possible.
>

I find it hard to imagine how this might work, at least for a language
to automate the process - how will it know which properties should be
put in contiguous arrays, and what should be kept in individual objects?

With C++ (or any other language that lets you have templates, classes,
references, and similar abstraction features), the low-level programmer
has to make a "vector-of-items" class that is specialised for the type
of the items and has contiguous blocks for the required item properties,
instead of using something like std::vector<item>.

C++ has the machinery you need here - but of course it is possible that
a more specialised language (or a future version of C++) could have
features that make it easier.

>> Quite often, especially in games, you only read and/or write a
>> particular or a couple of particular member variables of each object
>> (like for example the coordinate variables of the object), and you
>> do this for a huge bunch of objects (in modern games we may be talking
>> about thousands, maybe even tens of thousands of objects, on each
>> frame, at least 60 times per second, in modern PCs even more, like
>> up to 144 times per second and even higher.)
>>
>> Even if your objects are in an array, what happens is that you will
>> be jumping in this array in big steps, because each object will have
>> a big bunch of other ancillary member variables in between the ones
>> you are interested in. This kills cache locality. The most optimal
>> way would be to traverse all the consecutive bytes of an array,
>> because this minimizes the amount of cache misses. However, OOP
>> kills this because now you will be jumping in big steps.
>>
>> And that's assuming your objects are in an array in the first place.
>> If you just dynamically allocated every object individually, it will
>> be much worse. Now they will be all over the place, at random
>> locations in memory.
>
> Right. But this is in part bad design. I blame the rather simplistic
> view of OO that gets pushed by online tutorials and so on. There is no
> reason to consider position to be an intrinsic property of an object.
> Object locations could be stored contiguously in an instance of a
> LocationArray class and linked (via pointers or indexes) or otherwise
> associated with the object or objects that have those locations.
>

I am not sure that you can call this "bad design". Some design
decisions are good from one viewpoint, and bad from another. Putting
location in the object could be good for modularisation and
encapsulation, which could make the coding simpler and clearer -
everything you might want to know about the object is there in one
place. But it might not make sense from other viewpoints.

The lack of single right answers and consideration for multiple
viewpoints is part of what makes programming fun!

Bonita Montero

Aug 26, 2020, 8:43:32 PM
>> No, that's wrong. Whether you have structures and associated functions,
>> or methods bound to the namespace of the structures, doesn't make a
>> difference.

> I don't think you have any idea what either of us is talking about.

Yes, I know.
And I've designed cache-aware and cache-oblivious OOP-algorithms.

Juha Nieminen

Aug 27, 2020, 3:24:00 AM
Ben Bacarisse <ben.u...@bsb.me.uk> wrote:
> Right. But this is in part bad design. I blame the rather simplistic
> view of OO that gets pushed by online tutorials and so on. There is no
> reason to consider position to be an intrinsic property of an object.
> Object locations could be stored contiguously in an instance of a
> LocationArray class and linked (via pointers or indexes) or otherwise
> associated with the object or objects that have those locations.

I don't think it's a "simplistic view of OO". It's the standard view that
has always existed, since the very beginning of OOP.

The way that object-oriented programming (and modular programming) works
is rather logical and practical: Every object has an internal state
(usually in the form of member variables) and some member functions.
You can easily handle such objects, such as passing them around, copying
them, querying or modifying their state, have objects manage other
objects, and so on. When coupled with the concept of a public/private
interface division, it makes even large programs manageable, maintainable
and the code reusable.

Back when OOP was first developed, in the '70s and '80s, this design didn't
really have any sort of negative impact on performance. After all, there
were no caches, no pipelines, no fancy-pansy SIMD. Every machine code
instruction typically took the same amount of clock cycles regardless of
anything. How your data was arranged in memory had pretty much zero impact
on the efficiency of the program. Conditionals and function calls always
took the same number of clock cycles and thus were inconsequential.
(You could make the code faster by reducing the amount of conditionals,
but not because they were conditionals, but because they were just extra
instructions, like everything else.)

However, from the late '90s onward this has been less and less the
case. The introduction of CPU pipelines saw a drastic increase in machine
code throughput (in the beginning with instructions that typically took
at least 3 clock cycles being reduced to taking just 1 clock cycle).
On the flipside, this introduced the possibility of the pipeline getting
invalidated (usually because of a conditional jump, sometimes because
of other reasons). As time passed pipelines became more and more
complicated, and longer and longer, and consequently the cost of a
pipeline invalidation became larger and larger in terms of clock cycle
penalties. Optimal code saw incredible IPS numbers, but conversely
suboptimal code that constantly caused pipeline invalidations would suffer
tremendously.

Likewise the introduction of memory caches brought great improvements to
execution speed, as often-used data from RAM was much quicker to access.
But, of course, for a program to take advantage of this it would need to
cause as few cache misses as possible.

Nowadays SIMD has become quite a thing in modern CPUs, and compilers are
becoming better and better at optimizing code to use it. However, for
optimal results the code needs to be written in a certain way that allows
the compiler to optimize it. If you write it in the "wrong" way, the
compiler won't be able to take much advantage of SIMD.

The problem with OOP is that, while really logical, practical and quite
awesome from a programmer's point of view, making it much easier to manage
very large programs, it was never designed to be optimal for modern CPUs.
A typical OOP program will cause lots of cache misses, lots of pipeline
invalidations, and typically be hard for the compiler to optimize for
SIMD.

In order to circumvent that problem, if one still wants to keep the code
more or less object-oriented, one needs to resort to certain design decisions
that are not traditional (and often not the best, from an object-oriented
design point of view), such as objects no longer having their own internal
state, separate from all the other objects, but instead grouping the states
of all objects in arrays; or not having distinct objects for certain things
at all, and instead using data arrays for those things; likewise eschewing
member functions for certain things, and instead accessing the state
data arrays directly (something that bypasses the abstraction principles
of OOP).

Many an expert efficiency-conscious programmer will often choose a mixed
approach: Use "pure" OOP for things that don't require efficiency, and
use more of a Data-Oriented Design for things that do. (Also, encapsulating
the DOD style arrays inside classes, to make them "more OO".)

Jorgen Grahn

Aug 27, 2020, 3:49:12 AM
On Wed, 2020-08-26, Juha Nieminen wrote:
> In comp.lang.c++ Jorgen Grahn <grahn...@snipabacken.se> wrote:
>> (Maybe OOP in the sense "lots of inheritance everywhere" would be hard
>> to get, though. If anyone wants that.)
>
> Inheritance and dynamic binding (ie. virtual functions) are not the
> only performance-killers.

The others were discussed in the part you snipped (although not in a
lot of detail) and upthread in general.

> One of the most fundamental core building blocks of OOP, and its
> precursor, modular programming, ie. the fact that you gather all
> the data related to one object into the same place, the same
> data structure, is one of the major performance killers in modern
> CPUs.

[snip long and useful text]

Terje Mathisen

Aug 27, 2020, 4:01:36 AM
_Anything_ that can give you a 10% speedup is worth it in games
programming, at least doing it to the point where you measure a real
increase in average/worst case frame rate.

Terje

--
- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"

Chris M. Thomasson

Aug 27, 2020, 4:06:38 AM
Big time agreed.

bol...@nuttyella.co.uk

Aug 27, 2020, 4:08:15 AM
On Wed, 26 Aug 2020 17:44:20 +0200
David Brown <david...@hesbynett.no> wrote:
>On 26/08/2020 17:27, bol...@nuttyella.co.uk wrote:
>> Just sounds like old style bottom up programming with some lipstick on top.
>> If you can't use much in the way of OOP you might as well just use C.
>>
>
>No. C++ supports OOP - but doing OOP is certainly not the only reason
>to choose C++ over C.

It's the main reason. Without any kind of objects - meaning no STL either - all
you're really left with that means a damn is exceptions (of limited usefulness if
you can only throw POD types), generics, lambdas and overloading. Whether that's
enough to make it worthwhile I guess depends on your use case.

bol...@nuttyella.co.uk

Aug 27, 2020, 4:10:42 AM
On Wed, 26 Aug 2020 18:19:00 +0000 (UTC)
Juha Nieminen <nos...@thanks.invalid> wrote:
>Given that video games are becoming bigger and bigger, and more and
>more complex by the year, and given that not everything can be offloaded
>to the GPU, and given that display framerates are also increasing by
>the year (displays are already surpassing 144 Hz refresh rates), being
>able to process data as efficiently as possible with the CPU can be
>a huge advantage.

Given that the human eye generally only notices flicker below about 30 Hz, any
greater refresh rate is simply game-developer willy-waving.

Ben Bacarisse

Aug 27, 2020, 6:40:45 AM
Juha Nieminen <nos...@thanks.invalid> writes:

> Ben Bacarisse <ben.u...@bsb.me.uk> wrote:
>> Right. But this is in part bad design. I blame the rather simplistic
>> view of OO that gets pushed by online tutorials and so on. There is no
>> reason to consider position to be an intrinsic property of an object.
>> Object locations could be stored contiguously in an instance of a
>> LocationArray class and linked (vie pointers or indexes) or otherwise
>> associated with the object or objects that have those locations.
>
> I don't think it's a "simplistic view of OO". It's the standard view that
> has always existed, since the very beginning of OOP.
>
> The way that object-oriented programming (and modular programming) works
> is rather logical and practical: Every object has an internal state
> (usually in the form of member variables) and some member functions.
> You can easily handle such objects, such as passing them around, copying
> them, querying or modifying their state, have objects manage other
> objects, and so on. When coupled with the concept of a public/private
> interface division, it makes even large programs manageable, maintainable
> and the code reusable.

Of course, but what properties belong to what objects is a matter for
careful thought. We should not assume the position of something in a
game will obviously be a property of the thing. Maybe there is a
"scene" object that associates a vector of objects with a vector of
positions. That might complicate other parts of the program, so this
organisation can't be assumed to be the way to do it either.

My talking about a "simplistic view of OO" was just my experience of
students who tend to put everything they can think of that relates to a
single object into that object. That's not always the right way.

<many thoughtful comments cut -- I just don't have much to add to them>
--
Ben.

Terje Mathisen

Aug 27, 2020, 6:54:31 AM
First of all, bragging rights do matter, and there is at least some
evidence of top e-sports players doing better with a real 60 Hz vs 30 Hz
update rate.

More importantly at this point is the minimum frame rate: Can your game
engine stumble just at the point where maximum "stuff" is happening
on-screen at the same time?

bol...@nuttyella.co.uk

Aug 27, 2020, 9:59:56 AM
On Thu, 27 Aug 2020 11:40:26 +0100
Ben Bacarisse <ben.u...@bsb.me.uk> wrote:
>Juha Nieminen <nos...@thanks.invalid> writes:
>> The way that object-oriented programming (and modular programming) works
>> is rather logical and practical: Every object has an internal state
>> (usually in the form of member variables) and some member functions.
>> You can easily handle such objects, such as passing them around, copying
>> them, querying or modifying their state, have objects manage other
>> objects, and so on. When coupled with the concept of a public/private
>> interface division, it makes even large programs manageable, maintainable
>> and the code reusable.
>
>Of course, but what properties belong to what objects is a matter for
>careful thought. We should not assume the position of something in a
>game will obviously be a property of the thing. Maybe there is a
>"scene" object that associates a vector of objects with a vector of
>positions. That might complicate other parts of the program, so this
>organisation can't be assumed to be the way to do it either.

Any data that is unique to an object should be internal to that object unless
there's a *really* good reason for that not to happen.

bol...@nuttyella.co.uk

Aug 27, 2020, 10:02:56 AM
On Thu, 27 Aug 2020 12:54:20 +0200
Terje Mathisen <terje.m...@tmsw.no> wrote:
>bol...@nuttyella.co.uk wrote:
>> On Wed, 26 Aug 2020 18:19:00 +0000 (UTC)
>> Juha Nieminen <nos...@thanks.invalid> wrote:
>>> Given that video games are becoming bigger and bigger, and more and
>>> more complex by the year, and given that not everything can be offloaded
>>> to the GPU, and given that display framerates are also increasing by
>>> the year (displays are already surpassing 144 Hz refresh rates), being
>>> able to process data as efficiently as possible with the CPU can be
>>> a huge advantage.
>>
>> Given that the human eye generally only notices flicker below about 30 Hz, any
>> greater refresh rate is simply game-developer willy-waving.
>>
>First of all, bragging rights do matter, and there is at least some
>evidence of top e-sports players doing better with a real 60 Hz vs 30 Hz
>update rate.

Or it could just be that more expensive monitors that can support higher
refresh rates also simply provide a better picture - e.g. fewer aliasing
artifacts, no tearing, etc.

Though tbh, any activity that involves a bunch of pasty-faced, out-of-condition
kids sitting in armchairs for hours and calls itself a sport without any hint
of irony is hard to take seriously.

Stefan Monnier

Aug 27, 2020, 11:10:00 AM
> It's the main reason. Without any kind of objects - meaning no STL either - all
> you're really left with that means a damn is exceptions (of limited usefulness if
> you can only throw POD types), generics, lambdas and overloading.

Hmm... remove variable assignments while you're at it, and then give it
a sane syntax and call it Haskell.

> Whether thats enough to make it worthwhile I guess depends on your
> use case.

It's actually a pretty nice language if you ask me ;-)


Stefan

Juha Nieminen

Aug 27, 2020, 12:28:54 PM
In comp.lang.c++ bol...@nuttyella.co.uk wrote:
> Given that the human eye generally only notices flicker below about 30 Hz, any
> greater refresh rate is simply game-developer willy-waving.

The "humans can't distingish anything above 24/25/30/whatever Hz
refresh rate" is one the most widespread and most persistent
misconceptions in modern times.

It comes from cinema having standardized 24 frames per second for
a very long time, and later TV standardizing 25 (PAL) and 30 (NTSC)
frames per second. People don't understand why these framerates were
chosen and have this completely wrong misconception about them.

In cinematography 24 frames per second was chosen (more or less by
trial and error) as *the absolute minimum* framerate that's *not
distracting* to the average person. Film was extremely expensive
especially during the first decades of cinematography, so they wanted
a framerate as low as possible (because it saves film) that still
"works" in the sense that people don't get too bothered by it, and
it's not distracting.

This doesn't mean people cannot see the difference between 24 Hz and
something higher. They *most definitely* can. Pretty much *all* people
can. Why do you think that when The Hobbit used 48 frames per second
a huge bunch of people complained about it looking somehow "unnatural"?
*Because they could see the difference.* It doesn't really matter why
they felt it looked unnatural, the mere fact that they noticed is
definitive proof that people *clearly see the difference* between
24 Hz and 48 Hz.

Of course there are countless double-blind tests where people are
tested whether they can see a difference between 60 Hz and 120 Hz,
and they can (especially those who have played games with the latter
refresh rate a lot). And it's not like these people get it like 75%
right or something. They get it 100% right, no matter how many tests
are performed. So yes, even when we go as high as beyond 60 Hz,
people *can still see the difference*.

Thinking that "people can't see the difference between 30 Hz and
60 Hz" is like thinking that people can't see the difference
between a candle light and full sunlight, just because people can
read a book with both.

Anyway, the fact remains that modern games need to be able to
calculate a crapload of things at a very minimum 60 times per
second (which is about 16 milliseconds per frame), preferably
144 times per second and beyond.

mac

Aug 27, 2020, 12:53:30 PM
Christian Hanné <the....@gmail.com> wrote:
> Is there any CPU architecture with CPU instructions to support
> object protection levels like private, protected, public (and
> package in Java)?
> I think that would be very cool, since you could establish security

In prehistoric times (1980s) Intel promoted “the Silicon Operating System”,
because that stuff is too hard to do in software. As recent history shows,
it’s even harder in hardware.

Alf P. Steinbach

Aug 27, 2020, 1:44:16 PM
I agree wholeheartedly with what I perceive you to mean, but did you get this
backwards?

It seems to me that people must have reacted to the 24 Hz Hobbit and
found the 48 Hz Hobbit more acceptable, yes?

Personally I find 23.9 Hz OK for very slow-moving stuff, but especially
horizontal panning and horizontal running are very annoyingly jerky at
that frame rate -- which is all I have with my "private copy" films (I
generally download and show my old mother one movie each Sunday, at
first because in these rural parts of Norway it would be risky to use a
streaming service, but now also simply because of better quality, such as
better subtitles and no ads or warnings etc. on downloaded movies).


> Of course there are countless double-blind tests where people are
> tested whether they can see a difference between 60 Hz and 120 Hz,
> and they can, especially those who have played games with the latter
> refresh rate a lot). And it's not like these people get it like 75%
> right or something. They get it 100% right, no matter how many tests
> are performed. So yes, even when we go as high as beyond 60 Hz,
> people *can still see the difference*.
>
> Thinking that "people can't see the difference between 30 Hz and
> 60 Hz" is like thinking that people can't see the difference
> between a candle light and full sunlight, just because people can
> read a book with both.
>
> Anyway, the fact remains that modern games need to be able to
> calculate a crapload of things at a very minimum 60 times per
> second (which is about 16 milliseconds per frame), preferably
> 144 times per second and beyond.

- Alf

David Brown

Aug 27, 2020, 2:00:49 PM
That would be logical - but people are not logical. People complained
about the /higher/ rate, and felt it was "unnatural". What they really
meant was it is different from what they were used to. You get this
effect in many areas - the most obvious case being in audio hifi. When
CDs came out, people complained they sounded "artificial" compared to
records, when they were actually more accurate. People who use valve
amplifiers feel transistor amplifiers are "unnatural" because the
transistor amplifiers lack the second harmonic distortion that they have
become accustomed to. And so on.

>
> Personally I find 23.9 Hz OK for very slow moving stuff, but especially
> horizontal panning and horizontal running is very annoyingly jerky at
> that frame rate -- which is all I have with my "private copy" films (I
> generally download and show my old mother one movie each Sunday, at
> first because in these rural parts of Norway it would be risky to use
> streaming service, but now also simply because of better quality such as
> better subtitles and no ads or warnings etc. on downloaded movies).
>

I've just read in the news that a popular illegal movie copying group
and website has just been caught, with a Norwegian ringleader. It
wasn't you (or your mother), was it? :-)

Juha Nieminen

Aug 27, 2020, 2:16:21 PM
In comp.lang.c++ Alf P. Steinbach <alf.p.stein...@gmail.com> wrote:
> I agree wholeheartedly with what I perceive you to mean, but did you get this
> backwards?
>
> It seems to me that people must have reacted to the 24 Hz Hobbit and
> found the 48 Hz Hobbit more acceptable, yes?

No, people complained about the 48 Hz version.

When you are not used to it, it looks too smooth, too "clean", too much
like TV rather than a movie. If you have watched 24 Hz movies your entire
life, it looks really unnatural, a bit uncanny.

You get used to it very fast, though. By the end of the movie it stops
being distracting and bothering.

daniel...@gmail.com

Aug 27, 2020, 2:57:14 PM
On Thursday, August 27, 2020 at 2:00:49 PM UTC-4, David Brown wrote:

> You get this
> effect in many areas - the most obvious case being in audio hifi. When
> CDs came out, people complained they sounded "artificial" compared to
> records, when they were actually more accurate.

Not so. Vinyl allows for the playback of frequencies over 20 kHz;
16/44.1 CD does not.

Daniel

bol...@nuttyella.co.uk

Aug 28, 2020, 5:09:13 AM
On Thu, 27 Aug 2020 16:28:40 +0000 (UTC)
Juha Nieminen <nos...@thanks.invalid> wrote:
>In comp.lang.c++ bol...@nuttyella.co.uk wrote:
>> Given that the human eye generally only notices flicker below about 30 Hz, any
>> greater refresh rate is simply game-developer willy-waving.
>
>The "humans can't distingish anything above 24/25/30/whatever Hz
>refresh rate" is one the most widespread and most persistent
>misconceptions in modern times.
>
>It comes from cinema having standardized 24 frames per second for
>a very long time, and later TV standardizing 25 (PAL) and 30 (NTSC)
>frames per second. People don't understand why these framerates were
>chosen and have this completely wrong misconception about them.

Perhaps you never used CRT monitors, but the only time you would notice
any flicker was out of the corner of your eye - which is ironically more
sensitive to light - when a bright image was showing and the refresh rate
was at one of its lower settings. Otherwise forget it. Ditto CRT TVs. With LCD
screens the picture never goes "off" in between frames as it did with CRTs,
since an LCD simply displays its picture buffer until it is updated with the
next frame, so flicker is even less noticeable.

>This doesn't mean people cannot see the difference between 24 Hz and
>something higher. They *most definitely* can. Pretty much *all* people
>can. Why do you think that when The Hobbit used 48 frames per second
>a huge bunch of people complained about it looking somehow "unnatural"?

A huge bunch? I've never heard about it.

>Of course there are countless double-blind tests where people are
>tested whether they can see a difference between 60 Hz and 120 Hz,

And no doubt countless ones where no one noticed any difference.

>calculate a crapload of things at a very minimum 60 times per
>second (which is about 16 milliseconds per frame), preferably
>144 times per second and beyond.

Someone's drunk the games fps kool aid.

David Brown

unread,
Aug 28, 2020, 5:14:05 AM8/28/20
to
That's a reasonable rule of thumb.

Performance of a game is a /really/ good reason for that not to happen.

Separation of logical aspects, as Ben describes, can also be a /really/
good reason. Sometimes "position" is best considered to be an aspect of
the object and ideally it should be stored in that object. In other
cases, it is an aspect of the "scene" rather than the object. I think
all Ben is saying is that programmers should beware of trying to put
everything connected with an object into members of that class, instead
of thinking more carefully about where the data fits most logically.
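
A minimal sketch of what that separation can look like in C++ (the names
here are mine and purely illustrative, not anything from Ben's posts):
positions live in one contiguous array owned by the scene, and each
object keeps only an index into it, so a hot loop over positions touches
nothing else of the objects:

#include <cstddef>
#include <vector>

struct Position { float x, y, z; };
struct Velocity { float x, y, z; };

// The scene, not the object, owns the position data.
class Scene {
public:
    std::size_t add(Position p) {
        positions_.push_back(p);
        return positions_.size() - 1;
    }
    Position& position(std::size_t i) { return positions_[i]; }

    // Hot loop: walks one contiguous array, which is cache-friendly.
    // Assumes vel has the same length as positions_.
    void integrate(const std::vector<Velocity>& vel, float dt) {
        for (std::size_t i = 0; i < positions_.size(); ++i) {
            positions_[i].x += vel[i].x * dt;
            positions_[i].y += vel[i].y * dt;
            positions_[i].z += vel[i].z * dt;
        }
    }
private:
    std::vector<Position> positions_;
};

// The object holds only the index; its own members are whatever
// logically belongs to the object itself.
class GameObject {
public:
    explicit GameObject(std::size_t pos_index) : pos_index_(pos_index) {}
    std::size_t position_index() const { return pos_index_; }
private:
    std::size_t pos_index_;
};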

bol...@nuttyella.co.uk

unread,
Aug 28, 2020, 5:34:18 AM8/28/20
to
That's fine, but if you take something like position out of the class for
performance reasons, then presumably something like drawing the object should
also be moved out of the class, and then the calculation algorithms, and so on.
And in the end you end up with nothing left inside the object and what is
essentially a procedural program.


David Brown

unread,
Aug 28, 2020, 5:54:01 AM8/28/20
to
That is true, in a /very/ limited sense.

First, there is a limit to how high a frequency people can hear. The
limit varies from person to person, with age, and with gender. There
are also theoretical complex effects possible due to harmonics even when
these are themselves above human audio range.

The standard limit used is 20 kHz. Very few people can hear above that
- and they are almost all kids. (The record is something like 25 or 26
kHz.) If you limit the sample to people who are interested enough in
audio hifi, and have the means to buy high end equipment, then
statistically speaking the limit is more like 16 kHz.

In experiments - done by qualified researchers rather than high-end
audio salesmen - no measurable difference has been detected by listeners
when harmonics higher than 20 kHz are included with sounds.

Thus we can conclude that for the end result, 20 kHz is a perfectly good
cut-off point - anything higher is pointless.

That said, there is of course a point in using higher frequency sampling
in intermediary processing - it lets you use much cleaner filters, and
reduces noise when converting sample rates.

A record player amplifier, however, does not need filters - there is no
benefit in having greater than 20 kHz from the platter. Earlier and
cheaper CD players had poor filters, and bad frequency response between,
say, 16 kHz and 22 kHz. Now they all use digital filters and give flat
response up to 20 kHz and a sharp cut-off. Good quality record players
could therefore give better high frequency (but less than 20 kHz) sound
than early CD players.
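
To put rough numbers on the filter problem (a back-of-the-envelope
illustration of my own): CD audio is sampled at 44.1 kHz, so the Nyquist
limit is 22.05 kHz. An analogue reconstruction filter therefore has to
pass 20 kHz essentially flat yet attenuate strongly by 22.05 kHz - a
transition band only about 2 kHz wide - which is why the early analogue
"brick wall" filters misbehaved near the top of the band. Oversampling
digital filters sidestep this by pushing the images far above the audio
band, where a gentle analogue filter is enough.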


Secondly, vinyl only supports frequencies above 20 kHz if the master
source has frequencies above that limit. In virtually all cases, they
do not - any higher frequencies left over from high sample rate digital
processing are removed before cutting the vinyl. And for older analogue
mastering, the tapes did not support higher frequencies.


Thirdly, records wear - you will only get 20 kHz frequencies for the
first 5 to 10 playbacks of a record (depending on the record player and
record type).


Fourthly, and perhaps most importantly, high frequency response is only
one small part of the quality and accuracy of the sound reproduction.
CD playback is more accurate in many aspects - in pretty much any way
you try to measure the accuracy of the copy.


None of this detracts from the fact that some people genuinely prefer
the sound of vinyl. But that is a psychological effect - they prefer
the /imperfections/ and they like the noise, distortion, and other
aspects. That is fine, of course - it's just like preferring a painting
to a photograph. And that was my point - people can prefer the lower
quality audio or film, and find the higher quality version to be
"unnatural", in contrast to the actual precision of the reproduction.

David Brown

unread,
Aug 28, 2020, 6:05:26 AM8/28/20
to
On 28/08/2020 11:09, bol...@nuttyella.co.uk wrote:
> On Thu, 27 Aug 2020 16:28:40 +0000 (UTC)
> Juha Nieminen <nos...@thanks.invalid> wrote:
>> In comp.lang.c++ bol...@nuttyella.co.uk wrote:
>>> Given the human eye generally only notices flicker below about 30 Hz any
>>> greater refresh rate is simply game developer willy waving.
>>
>> The "humans can't distingish anything above 24/25/30/whatever Hz
>> refresh rate" is one the most widespread and most persistent
>> misconceptions in modern times.
>>
>> It comes from cinema having standardized 24 frames per second for
>> a very long time, and later TV standardizing 25 (PAL) and 30 (NTSC)
>> frames per second. People don't understand why these framerates were
>> chosen and have this completely wrong misconception about them.
>
> Perhaps you never used CRT monitors, but the only time you would notice
> any flicker was out the corner of your eye - which is ironically more
> sensitive to light - when a bright image was showing and the refresh rate
> was at one of its lower settings. Otherwise forget it. Ditto CRT TVs. With LCD
> screens the picture never goes "off" inbetween frames as it did with CRTs as
> its simply displays its picture buffer until updated with the next frame so
> flicker is even less noticable.

CRT monitors had varying refresh rates - 50 Hz would be the absolute
minimum useable. (Note that TV uses interlacing to give 50/60 Hz
refresh rates even though the frame rate is 25/30 Hz.) 72 Hz was, IIRC,
the standard minimum for being "flicker-free".

You are right that with LCDs and other technologies that do not have
"refresh", you don't get flicker no matter what the frame rate. But
again, something like 20 Hz is the minimum for motion to appear somewhat
smooth, and most people are easily capable of seeing improvements up to
about 50 or 60 Hz. Serious gamers or others who regularly use high
speed systems can notice the difference of higher rates.

(It is not "ironic" that your peripheral vision is more sensitive to low
light levels and faster movement than the centre part of the eye - our
eyes and vision processing have evolved that way due to the significant
benefits of that arrangement.)

>
>> This doesn't mean people cannot see the difference between 24 Hz and
>> something higher. They *most definitely* can. Pretty much *all* people
>> can. Why do you think that when The Hobbit used 48 frames per second
>> a huge bunch of people complained about it looking somehow "unnatural"?
>
> A huge bunch? I've never heard about it.
>
>> Of course there are countless double-blind tests where people are
>> tested whether they can see a difference between 60 Hz and 120 Hz,
>
> And no doubt countless ones where no one noticed any difference.
>
>> calculate a crapload of things at a very minimum 60 times per
>> second (which is about 16 milliseconds per frame), preferably
>> 144 times per second and beyond.
>
> Someone's drunk the games fps kool aid.
>

Someone thinks their own opinions and experiences are the only ones that
matter, and doesn't believe anyone else.

David Brown

unread,
Aug 28, 2020, 8:23:07 AM8/28/20
to
As has been explained several times, in some cases - like games
programming - performance is so important that this is an acceptable
cost. Key parts of games and game engines are written in /assembly/ -
losing a bit of encapsulation is a minor cost in comparison to that.

bol...@nuttyella.co.uk

unread,
Aug 28, 2020, 8:39:52 AM8/28/20
to
On Fri, 28 Aug 2020 11:53:50 +0200
David Brown <david...@hesbynett.no> wrote:
>On 27/08/2020 20:57, daniel...@gmail.com wrote:
>First, there is a limit to how high a frequency people can here. The
>limit varies from person to person, over age, and with gender. There

Indeed. I remember being able to clearly hear the scan flyback noise
generated by TVs when I was a kid, which for PAL was 15.6 kHz, but that ability
vanished in my early 30s, which given the number of metal concerts I'd been
to by then was a testament to my ears :)

>None of this detracts from the fact that some people genuinely prefer
>the sound of vinyl. But that is a psychological effect - they prefer
>the /imperfections/ and they like the noise, distortion, and other
>aspects. That is fine, of course - it's just like preferring a painting
>to a photograph. And that was my point - people can prefer the lower
>quality audio or film, and find the higher quality version to be
>"unnatural", in contrast to the actual precision of the reproduction.

The other downsides of vinyl are that the dynamic range has to be compressed,
particularly at the low end, as otherwise one groove would overlap another;
and also, IIRC, there is some phasing that simply can't be reproduced, as the
needle can't physically move in two different directions at the same time.

bol...@nuttyella.co.uk

unread,
Aug 28, 2020, 8:43:02 AM8/28/20
to
Well, one can only really go by one's own experiences. But having seen all the
nonsense from the audiophool world that is provably BS, it wouldn't surprise
me if gamers subconsciously project something similar onto fps and monitors.

Alf P. Steinbach

unread,
Aug 28, 2020, 9:44:24 AM8/28/20
to
On 27.08.2020 20:00, David Brown wrote:
> On 27/08/2020 19:44, Alf P. Steinbach wrote:
>> [snip]
>> Personally I find 23.9 Hz OK for very slow moving stuff, but especially
>> horizontal panning and horizontal running is very annoyingly jerky at
>> that frame rate -- which is all I have with my "private copy" films (I
>> generally download and show my old mother one movie each Sunday, at
>> first because in these rural parts of Norway it would be risky to use
>> streaming service, but now also simply because of better quality such as
>> better subtitles and no ads or warnings etc. on downloaded movies).
>>
> [snip]
> I've just read on the news that a popular illegal movie copying group
> and website has just been caught, with a Norwegian ringleader. It
> wasn't you (or your mother), was it? :-)

No, that was about people who tricked the movie companies into giving or
selling them movies at a very early point in the movie life cycle, then
cracked the protection and placed the movies on the public net.

That's like direct sabotage of the companies' business model, using
dishonest means, so I think they probably deserved getting caught.

However, the (especially US) entertainment industry's fight against
sharing of movies, series and music, is interestingly deeply irrational.
Nearly all the illegal copying that detracts from their sales income
happens in China and maybe Asia in general, but they don't address that.
And they don't do anything about the issues that cause people like me to
use "private" shared copies rather than buying, such as quality
(especially subtitles), availability (e.g. I could not find Dr. Zhivago
available to pay for, with reasonable quality), having control (e.g. I
remember I had to delete some scenes from the Mr. Robot series before my
old mother could view it, and I believe that would be impossible with
streaming). In short they don't do anything that would help increase
sales income. Instead they shell out a lot of money to greedy lawyers,
who invariably choose the most spectacular, least-work actions,
such as going after poor Indian mothers and so on, which does not help
the companies they serve at all: it destroys their reputation. Mystery.

- Alf (off topic mode)

Torbjorn Lindgren

unread,
Aug 28, 2020, 10:25:19 AM8/28/20
to
<bol...@nuttyella.co.uk> wrote:
>On Thu, 27 Aug 2020 16:28:40 +0000 (UTC)
>With LCD screens the picture never goes "off" inbetween frames as it
>did with CRTs as its simply displays its picture buffer until updated
>with the next frame so flicker is even less noticable.

Well, most LCDs use PWM to control backlight intensity, and a
surprising number use a frequency low enough that it causes visible
artifacts at lower light intensities, especially when you move your
eyes quickly; the eye is much more sensitive to flicker when either
the eye or the scene is moving.

I'd say somewhere around a PWM frequency of 200 Hz or lower there is
serious danger of this, while > 1000 Hz ought to be safe. CCFLs usually
run at much higher frequencies (10+ kHz) and also have afterglow, which
reduces the issue, but some LED backlights definitely run way too slow
(and have no afterglow to reduce it). The article below argues for
2000+ Hz for LED backlight PWM, and there's no real reason not to just
run the PWM at 10+ kHz.
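
As a rough sanity check on those figures (my arithmetic): at 200 Hz the
PWM period is 5 ms, so during a quick eye movement the successive light
pulses land far enough apart on the retina to show up as a trail of
distinct ghost images; at 10 kHz the period is 0.1 ms and the pulses
simply merge.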

Hence "flicker free backlight" as a term.

Also, high end screens with LED backlight often *DO* have an entirely
dark period between each frame; this reduces max brightness, but they
can use higher power LEDs so there's no actual net loss (same heat
output), and due to how the eye works this increases the perceived
sharpness.

I've seen some manufacturer advertise that their screens do multiple
on/off periods per frame, I'm not sure how common that is.

IIRC VR glasses often do this, and it does seem to help there, so
there's probably a benefit for normal screens too, but it's probably
much smaller - VR glasses are pretty much the worst case scenario.

This article is a bit old since it predates higher frequency screens
and most backlight strobing but covers most of the basics:
https://www.tftcentral.co.uk/articles/pulse_width_modulation.htm


>>This doesn't mean people cannot see the difference between 24 Hz and
>>something higher. They *most definitely* can. Pretty much *all* people
>>can. Why do you think that when The Hobbit used 48 frames per second
>>a huge bunch of people complained about it looking somehow "unnatural"?
>
>A huge bunch? I've never heard about it.

There was definitely a BIG brouhaha about it during the launch.

Öö Tiib

unread,
Aug 28, 2020, 10:27:50 AM8/28/20
to
I think Ben was talking about something else. I took it to be about
carefully considering the layout of data.

It is worth noting that (regardless of whether it is a game or any
other processing-heavy software) less than 5% of its code base
affects performance in any noticeable manner. Also, the performance
problems are mostly caused by inefficient or non-scalable algorithms,
so low-level tweaks can benefit only a quarter of that 5%.

So when our code base is a million lines of code with 2000 classes,
I expect fewer than 100 of those to affect performance in any
noteworthy manner, and fewer than 25 to benefit from any low-level
tweaking. The other 1900 do not affect performance at all, and even
the remaining 75 can (and should) follow whatever programming
paradigms the project has chosen to the letter, just a bit less
naively.

Stefan Monnier

unread,
Aug 28, 2020, 10:34:51 AM8/28/20
to
> again, something like 20 Hz is the minimum for motion to appear somewhat
> smooth, and most people are easily capable of seeing improvements up to
> about 50 or 60 Hz.

I wonder how this limit changes with motion blur: on TV I tend to notice
when the individual images are "perfectly" crisp (as is typically the
case in sports on TV where it's recorded at higher rates to be able to
replay in slow motion IIUC).

> Serious gamers or others who regularly use high speed systems can
> notice the difference of higher rates.

I know computer-graphics-generated movies go through extra trouble to
add motion blur. Do games do the same? If not, then could that explain
the "need" for higher refresh rates?


Stefan

bol...@nuttyella.co.uk

unread,
Aug 28, 2020, 10:59:17 AM8/28/20
to
On Fri, 28 Aug 2020 14:25:09 -0000 (UTC)
Torbjorn Lindgren <t...@none.invalid> wrote:
><bol...@nuttyella.co.uk> wrote:
>>On Thu, 27 Aug 2020 16:28:40 +0000 (UTC)
>>With LCD screens the picture never goes "off" inbetween frames as it
>>did with CRTs as its simply displays its picture buffer until updated
>>with the next frame so flicker is even less noticable.
>
>Well, most LCD's use PWM to control backlight intensity and a
>surprising number use frequency low enough that it causes visible
>artifacts at lower light intensities, especially when you move your
>eyes quickly, the eye is much more sensitive to flicker when either
>the eye or the scene is moving.

Yes, for apparent-brightness reasons a lot of LED systems, including
car headlights, use PWM instead of just a steady DC supply.

>I'd say somewhere around a PWM frequency of 200Hz or lower there is

Can't say I've ever noticed. Perhaps I just have bad eyes.

bol...@nuttyella.co.uk

unread,
Aug 28, 2020, 11:02:52 AM8/28/20
to
On Fri, 28 Aug 2020 10:34:17 -0400
Stefan Monnier <mon...@iro.umontreal.ca> wrote:
>> again, something like 20 Hz is the minimum for motion to appear somewhat
>> smooth, and most people are easily capable of seeing improvements up to
>> about 50 or 60 Hz.
>
>I wonder how this limit changes with motion blur: on TV I tend to notice
>when the individual images are "perfectly" crisp (as is typically the
>case in sports on TV where it's recorded at higher rates to be able to
>replay in slow motion IIUC).

A current problem with video streams is the compression algorithm not being
able to keep up with fast motion or panning and introducing a noticeable
jerkiness to the output. This only seems to happen on HD, so possibly it's an
artifact of H.264, because SD streams just used to break up into blocks.



David Brown

unread,
Aug 28, 2020, 11:17:30 AM8/28/20
to
Audiophiles and gamers are very different here.

For gamers, there is a certain prestige in having the biggest screen or
the fastest computer. But for the most part, they are interested in
what works - what gives them a better competitive edge and higher
scores. This is especially true at the top level. Remember, these
are people that make their living from gaming - from winning
competitions, from youtube channels, and that sort of thing. They will
not consider buying a screen with a faster refresh rate if spending the
same money on two slower screens will let them have a marginally better
chance in a competition. Of course big numbers, and imagined benefits,
will have some influence - but the primary motivation is better results,
and the results are /measurable/.

In the high-end audio world, the results are not measurable in any way.
Unscrupulous suppliers will promote figures, but they have no
meaningful value. And more naïve customers will be fooled by these
figures and pseudo-technical drivel. But for more honest suppliers and
more knowledgeable customers, it is a matter of what sound the customer
likes, and what total impression they like. Some people /like/ the
feeling of a big, solid box producing their sound. That's fine - it is
all part of the experience. Having your dinner from a silver plate with
a string quartet in the corner of the room does not change the chemical
composition of the food, but it changes the experience of eating it. As
long as hi-end audio suppliers and customers are honest about this, and
don't pretend the sound is quantitatively "better", there's nothing
wrong or "phoolish" about it.

(In the audio world, there is also a small but not insignificant section
that exists primarily for money laundering. The process goes like this.
A drug baron in, for example, Mexico, sends packets of drugs into the
USA by plane drop. The American distributor picks it up, and sells it.
He takes the supplier's part of that money and uses it to buy
ridiculously over-priced speaker cables (or whatever) from a particular
brand and a particular reseller. This brand is owned by the drug baron,
who produces the cables in Mexico for a tiny fraction of the resale
value. The result is white-washing of the drug money, which goes safely
back to the drug baron. Occasionally, some over-rich numpty buys one of
the cables thinking they are so expensive that they must be great -
that's just a bonus.)


We can agree that we can only really judge from our own experiences. So
when you don't have any related experience, you should refrain from
judging. You clearly know little of either the high-end gaming world or
the high-end audio world.

David Brown

unread,
Aug 28, 2020, 11:20:54 AM8/28/20
to
Computer game graphics and movies have very different needs - one is
interactive, and the other passive. For films, you want smooth motion
and that includes motion blur - the aim is to show the feature to the
viewers with minimal effort for the viewer's visual cortex. For games,
the viewer needs precision, and therefore crisper and faster images.
This means it is more work for their brain in viewing and processing the
images - high-end gaming is actually quite hard work.


Stephen Fuld

unread,
Aug 28, 2020, 11:32:46 AM8/28/20
to
On 8/28/2020 8:17 AM, David Brown wrote:

snip

> (In the audio world, there is also a small but not insignificant section
> that exists primarily for money laundering. The process goes like this.
> A drug baron in, for example, Mexico, sends packets of drugs into the
> USA by plane drop. The American distributor picks it up, and sells it.
> He takes the supplier's part of that money and uses it to buy
> ridiculously over-priced speaker cables (or whatever) from a particular
> brand and a particular reseller. This brand is owned by the drug baron,
> who produces the cables in Mexico for a tiny fraction of the resale
> value. The result is white-washing of the drug money, which goes safely
> back to the drug baron. Occasionally, some over-rich numpty buys one of
> the cables thinking they are so expensive that they must be great -
> that's just a bonus.)

I had never heard of this. Ingenious, but since presumably the
distributor pays for the cables in cash, doesn't the cable manufacturer
have the same problem of either making large cash deposits to his bank,
or transferring them across borders, as the distributor would have?


--
- Stephen Fuld
(e-mail address disguised to prevent spam)

Juha Nieminen

unread,
Aug 28, 2020, 12:20:52 PM8/28/20
to
In comp.lang.c++ bol...@nuttyella.co.uk wrote:
> Perhaps you never used CRT monitors, but the only time you would notice
> any flicker

Who's talking about flicker? Nobody has mentioned anything about flicker.

The framerate is distinguished by how jittery the motion is. Next time
you are watching a movie, pay attention to when the camera eg. pans
slowly horizontally, and notice how the movement is done in quite
noticeable little jumps, rather than being completely smooth.

Compare a 60 Hz movie with horizontal panning with a 30 Hz version
and you'll notice the difference.

>>This doesn't mean people cannot see the difference between 24 Hz and
>>something higher. They *most definitely* can. Pretty much *all* people
>>can. Why do you think that when The Hobbit used 48 frames per second
>>a huge bunch of people complained about it looking somehow "unnatural"?
>
> A huge bunch? I've never heard about it.

And since you have personally never heard about it, it never happened.
Of course.

>>Of course there are countless double-blind tests where people are
>>tested whether they can see a difference between 60 Hz and 120 Hz,
>
> And no doubt countless ones where no one noticed any difference.

In your opinion if there's even one test where the participants don't
notice any difference, that completely invalidates the tests where
the participants can tell with 100% accuracy which display is
showing 60 Hz and which one is showing 120 Hz?

>>calculate a crapload of things at a very minimum 60 times per
>>second (which is about 16 milliseconds per frame), preferably
>>144 times per second and beyond.
>
> Someone's drunk the games fps kool aid.

Great counter-argument to what I said.

Juha Nieminen

unread,
Aug 28, 2020, 12:22:55 PM8/28/20
to
In comp.lang.c++ Stefan Monnier <mon...@iro.umontreal.ca> wrote:
>> again, something like 20 Hz is the minimum for motion to appear somewhat
>> smooth, and most people are easily capable of seeing improvements up to
>> about 50 or 60 Hz.
>
> I wonder how this limit changes with motion blur: on TV I tend to notice
> when the individual images are "perfectly" crisp (as is typically the
> case in sports on TV where it's recorded at higher rates to be able to
> replay in slow motion IIUC).

Motion blur might help a bit, especially if you aren't specifically paying
attention, but you can still see jittery motion, especially when eg. the
camera is slowly panning horizontally. If you pay close attention, you
notice how the motion is jumpy rather than completely smooth.

The more you start seeing it, the more you'll be unable to unsee it.

Juha Nieminen

unread,
Aug 28, 2020, 12:23:58 PM8/28/20
to
In comp.lang.c++ bol...@nuttyella.co.uk wrote:
> A current problem with video streams is the compression algorithm not being
> able to keep up with fast motion or panning and introducing a noticeable
> jerkiness to the output. This only seems to happen on HD, so possibly it's an
> artifact of H.264, because SD streams just used to break up into blocks.

You are making stuff up as you go, aren't you?

What will you conjure up next?

Jean-Marc Bourguet

unread,
Aug 28, 2020, 3:04:22 PM8/28/20
to
bol...@nuttyella.co.uk writes:

> On Wed, 26 Aug 2020 17:44:20 +0200
> David Brown <david...@hesbynett.no> wrote:
>>On 26/08/2020 17:27, bol...@nuttyella.co.uk wrote:
>>> Just sounds like old style bottom up programming with some lipstick on top.
>>> If you can't use much in the way of OOP you might as well just use C.
>>>
>>
>>No. C++ supports OOP - but doing OOP is certainly not the only reason
>>to choose C++ over C.
>
> It's the main reason. Without any kind of objects - meaning no STL either

The STL is strongly generic, value-oriented and not at all object-oriented;
virtual functions -- which are *the* feature for object-oriented programming
in C++ -- are rare in the SL and totally absent from the subset called the
STL (they are present in the IOStream and Locale domain, which is not at all
in the spirit of the rest of the SL).
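
A toy illustration of that (my own example, not anything from the
standard): dispatch in the STL is static, resolved at compile time
through templates over value types, with no virtual call anywhere:

#include <algorithm>
#include <cstddef>
#include <vector>

// Value-oriented and generic: the element type and the predicate are
// compile-time parameters of std::count_if; dispatch is static.
std::ptrdiff_t count_even(const std::vector<int>& v)
{
    return std::count_if(v.begin(), v.end(),
                         [](int x) { return x % 2 == 0; });
}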

> - all you're really left with that means a damn is exceptions (limited
> usefulness if you can only throw POD types), generics, lambdas and
> overloading. Whether thats enough to make it worthwhile I guess depends
> on your use case.

That's for sure where most of the last 20 years' evolution of C++ has
taken place. And I said "most" to be prudent: I'm unable to think of one
improvement related to object-oriented programming.

Yours,

--
Jean-Marc

daniel...@gmail.com

unread,
Aug 28, 2020, 8:28:37 PM8/28/20
to
On Friday, August 28, 2020 at 3:04:22 PM UTC-4, Jean-Marc Bourguet wrote:
> That's for sure the place where most of last 20 years evolution of C++ is
> taking place. And I said most to be prudent, I'm unable to think of one
> improvement related to object oriented programming.
>
Well, there's the final specifier and the override specifier introduced with C++11.

And modern compilers do a very good job of devirtualizing when possible. In a
significant number of cases there is no cost to a polymorphic structure versus
the alternatives.

And then there's std::pmr::polymorphic_allocator, which makes using
stateful allocators much simpler.
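
A small sketch of the first two points (my example, not Daniel's):
'override' catches signature mistakes at compile time, and 'final' tells
the compiler that no further override can exist, so calls through the
final type are candidates for devirtualization:

struct Shape {
    virtual ~Shape() = default;
    virtual double area() const = 0;
};

struct Circle final : Shape {        // nothing can derive from Circle
    explicit Circle(double r) : r_(r) {}
    double area() const override     // 'override' checks the signature
    { return 3.141592653589793 * r_ * r_; }
private:
    double r_;
};

double twice_area(const Circle& c)
{
    // The static type is final, so the compiler may call Circle::area
    // directly (or inline it) instead of going through the vtable.
    return 2.0 * c.area();
}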

Daniel


bol...@nuttyella.co.uk

unread,
Aug 29, 2020, 10:37:22 AM8/29/20
to
On Fri, 28 Aug 2020 17:17:16 +0200
David Brown <david...@hesbynett.no> wrote:
>long as hi-end audio suppliers and customers are honest about this, and
>don't pretend the sound is quantitatively "better", there's nothing
>wrong or "phoolish" about it.

Depends how much money they are spending. Anyone who spends five-figure sums
on, say, an amplifier because it has "warmth" from a bunch of valves and
large heat sinks IMO is a fool. But then I tend to think the same about guys
who blow a fortune on a Lambo or Ferrari, so each to their own.

>(In the audio world, there is also a small but not insignificant section
>that exists primarily for money laundering. The process goes like this.
> A drug baron in, for example, Mexico, sends packets of drugs into the
>USA by plane drop. The American distributor picks it up, and sells it.

I'll take your word for this, I've never heard about it. Certainly a clever
way to do it.

>judging. You clearly know little of the high-end gaming world, and the
>high-end audio world, and know little of either.

You're right, I know little about the high end gaming world, but you're
wrong about the audio side. I've been into audio since I was a teenager,
not only hifi but music synthesizers too, so I have a pretty good idea
what's what.

bol...@nuttyella.co.uk

unread,
Aug 29, 2020, 10:45:43 AM8/29/20
to
LOL :) Says the man who posted this with a straight face:

"Next time you are watching a movie, pay attention to when the camera eg.
pans slowly horizontally, and notice how the movement is done in quite
noticeable little jumps, rather than being completely smooth."

Err no, that's nothing to do with the frame rate, you div. A constant frame
rate doesn't produce noticeably jerky motion unless the film is from the 1920s.
The jerkiness is down to the digital codec, and if you'd ever watched 24 fps
films filmed and replayed on 35mm you'd understand this, but I guess you're too
young. Google "H264 jerky" if you don't believe me. Or don't, I couldn't
care less.

David Brown

unread,
Aug 29, 2020, 1:32:05 PM8/29/20
to
I have only heard of this in America, where large bundles of cash
apparently don't cause the same concern to the authorities as they would
in many places in Europe. Anyway, it would be the drug reseller in the
USA who has the bundles of cash, and spends it in the hi-fi shop. The
hi-fi shop is a legitimate business, and has no (apparent) direct connection
to the criminals. As far as the shop is concerned - and as far as the
shop's bank is concerned - they buy ridiculously expensive cables from a
supplier in Mexico. They add a markup, and sell them to customers in
the USA - who are legally entitled to pay by cash if they want. When
the shop buys cables from the manufacturer in Mexico, it can use normal
international bank transfers, because all the money is now "clean".


Juha Nieminen

unread,
Aug 29, 2020, 2:49:08 PM8/29/20
to
In comp.lang.c++ bol...@nuttyella.co.uk wrote:
> LOL :) Says the man who posted this with a straight face:
>
> "Next time you are watching a movie, pay attention to when the camera eg.
> pans slowly horizontally, and notice how the movement is done in quite
> noticeable little jumps, rather than being completely smooth."
>
> Err no, that's nothing to do with the frame rate, you div.

What do you mean? Of course it has everything to do with framerate.

Suppose you only show every 24th frame of the movie at 1-second intervals.
Which means the movie is now 1 Hz. Rather obviously you will see huge
jumps every second, when the camera pans horizontally.

The same goes for 2 Hz, just twice as often, and 4 Hz, and 10 Hz etc.

At 24 Hz it starts being a lot less noticeable, but if you pay close
attention to it, you can still see the jumps, how the movement is not
completely smooth, but makes these little jumps at very frequent
intervals.

Compare that to eg, the same movie being filmed and shown at 60 Hz.
Now it's practically impossible to see any jumps and it will look
extremely smooth.

Just because you don't believe and/or understand it doesn't mean
it's not true. Just try it for yourself.

> Or don't , I couldn't care less.

Rather obviously you care a lot, given that you can't stop responding.

bol...@nuttyella.co.uk

unread,
Aug 30, 2020, 12:04:18 PM8/30/20
to
On Sat, 29 Aug 2020 18:48:53 +0000 (UTC)
Juha Nieminen <nos...@thanks.invalid> wrote:
>In comp.lang.c++ bol...@nuttyella.co.uk wrote:
>> LOL :) Says the man who posted this with a straight face:
>>
>> "Next time you are watching a movie, pay attention to when the camera eg.
>> pans slowly horizontally, and notice how the movement is done in quite
>> noticeable little jumps, rather than being completely smooth."
>>
>> Err no, that's nothing to do with the frame rate, you div.
>
>What do you mean? Of course it has everything to do with framerate.
>
>Suppose you only show every 24th frame of the movie at 1-second intervals.
>Which means the movie is now 1 Hz. Rather obviously you will see huge
>jumps every second, when the camera pans horizontally.

That's not the kind of jerkiness I was talking about, which if you had half
a working braincell you'd have understood. When an HD video codec can't keep up
with movement, particularly panning, you often see very noticeable jerking that
is unconnected to, and much lower than, the framerate. Presumably it's playing
catch-up every time a full frame is sent in the stream rather than a difference
frame.

Stephen Fuld

unread,
Aug 30, 2020, 12:56:18 PM8/30/20
to
On 8/29/2020 10:31 AM, David Brown wrote:
> On 28/08/2020 17:32, Stephen Fuld wrote:
>> On 8/28/2020 8:17 AM, David Brown wrote:
>>
>> snip
>>
>>> (In the audio world, there is also a small but not insignificant section
>>> that exists primarily for money laundering.  The process goes like this.
>>>   A drug baron in, for example, Mexico, sends packets of drugs into the
>>> USA by plane drop.  The American distributor picks it up, and sells it.
>>>   He takes the supplier's part of that money and uses it to buy
>>> ridiculously over-priced speaker cables (or whatever) from a particular
>>> brand and a particular reseller.  This brand is owned by the drug baron,
>>> who produces the cables in Mexico for a tiny fraction of the resale
>>> value.  The result is white-washing of the drug money, which goes safely
>>> back to the drug baron.  Occasionally, some over-rich numpty buys one of
>>> the cables thinking they are so expensive that they must be great -
>>> that's just a bonus.)
>>
>> I had never heard of this.  Ingenious, but since presumably the
>> distributor pays for the cables in cash, doesn't the cable manufacturer
>> have the same problem of either making large cash deposits to his bank,
>> or transferring them across borders as the the distributor would have?
>>
>
> I have only heard of this in America, where large bundles of cash
> apparently don't cause the same concern to the authorities as they would
> in many places in Europe.

I did a little research. No, the authorities in America do
care, but there is apparently a loophole.

https://money.usnews.com/banking/articles/if-you-deposit-a-lot-of-cash-does-your-bank-report-it-to-the-government

If an individual deposits $10,000 or more, or even makes a series of
deposits totaling that much during a reasonable amount of time, the bank
does report it to the IRS (part of the US Treasury Department). This
even applies to things like money orders, etc.

But the "loop hole" is that this isn't done for business. The reason
for that is to not cause problems for those small businesses such as
food trucks, small restaurants, barber shops, hair and nail salons, etc.
that routinely accept cash for small retail services, and might easily
want to deposit that much cash over a modest amount of time. In those
cases, the business itself is required by law to report the deposits.
The problem occurs, of course, if the business breaks the law and
doesn't report it. And a "business", whose real purpose is breaking the
law anyway, wouldn't care about breaking another one. :-(

I suppose there are ways to fix this, but apparently none have been
tried, or if tried haven't worked. :-(



> Anyway, it would be the drug reseller in the
> USA who has the bundles of cash, and spends it in the hi-fi shop. The
> hi-fi is a legitimate business, and has no (apparent) direct connection
> to the criminals. As far as the shop is concerned - and as far as the
> shop's bank is concerned - they buy ridiculously expensive cables from a
> supplier in Mexico. They add a markup, and sell them to customers in
> the USA - who are legally entitled to pay by cash if they want. When
> the shop buys cables from the manufacturer in Mexico, it can use normal
> international bank transfers, because all the money is now "clean".


Yes, I get all that. I'm sorry if I messed up the terminology in
previous posts and caused confusion.

Juha Nieminen

unread,
Aug 30, 2020, 1:11:21 PM8/30/20
to
In comp.lang.c++ bol...@nuttyella.co.uk wrote:
> Thats not the kind of jerkiness I was talking about which if you had half
> a working braincell you'd have understood.

Notice how you are the only one here throwing insults at people.

I wonder why that is.

Öö Tiib

unread,
Aug 30, 2020, 2:02:08 PM8/30/20
to
People who try to pointlessly insult or belittle others usually do it
because of an inferiority complex and a lack of self-respect.

David Brown

unread,
Aug 30, 2020, 4:03:59 PM8/30/20
to
Generally with loopholes there is always someone interested in having
the loopholes remain. (Although I suppose the old maxim about "never
attribute to malice that which can equally well be explained by
incompetence" might also apply.)

>
>
>> Anyway, it would be the drug reseller in the
>> USA who has the bundles of cash, and spends it in the hi-fi shop.  The
>> hi-fi is a legitimate business, and has no (apparent) direct connection
>> to the criminals.  As far as the shop is concerned - and as far as the
>> shop's bank is concerned - they buy ridiculously expensive cables from a
>> supplier in Mexico.  They add a markup, and sell them to customers in
>> the USA - who are legally entitled to pay by cash if they want.  When
>> the shop buys cables from the manufacturer in Mexico, it can use normal
>> international bank transfers, because all the money is now "clean".
>
>
> Yes, I get all that.  I'm sorry if I messed up the terminology in
> previous posts and caused confusion.
>

No problem. I think it's fair to say (or at least hope) that this is
not a topic many of us here (me included) know much about, or need to
know much about.

Stefan Monnier

unread,
Aug 30, 2020, 5:45:02 PM8/30/20
to
> Given the human eye generally only notices flicker below about 30 Hz any
> greater refresh rate is simply game developer willy waving.

That reminds me of the "vision boosters": that's what we used to call some
popular hazelnut cookies (https://produits.migros.ch/batons-aux-noisettes)
back in my undergrad computer lab, because they made you "see faster".

More specifically, being pretty hard cookies, when you ate them while
staring at the (CRT) screen of our beloved DEC Alpha workstations you'd
notice the "tearing" of the 60Hz (or was it 70Hz?) redraw.


Stefan

Terje Mathisen

unread,
Aug 31, 2020, 1:50:08 AM8/31/20
to
:-(
>
> I wonder why that is.
>
To all the people here "discussing" needed frame rates etc.: Please take
a look at one or more of Michael Abrash's keynote presentations during
Oculus Connect!

They have done a _lot_ of research into this over the last decade;
you can start with the 2014 video:

https://www.youtube.com/watch?v=knQSRTApNcs

or go directly to one of the later ones, which show what they figured out
in the meantime.

In order to avoid visual artifacts which destroy the AR/VR experience,
they have to get a serious amount of processing done in a very short
timeframe.

Terje

--
- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"

Juha Nieminen

unread,
Aug 31, 2020, 2:46:09 AM8/31/20
to
In comp.lang.c++ Terje Mathisen <terje.m...@tmsw.no> wrote:
> To all the people here "discussing" needed frame rates etc.: Please take
> a look at one or more of Michael Abrash' keynote presentations during
> Oculus Connect!

That's a good example. If it were indeed true that people can't distinguish
anything beyond something like 24 (or 25, or 30) frames per second, then
it shouldn't matter if a VR headset updates at eg. 30 Hz. That would
certainly be beneficial since even lower-end PCs would be able to handle it.

Yet, something like 80-90 Hz refresh rate is required so that the refresh
rate itself doesn't contribute to nausea. Quite clearly people can distinguish
between even 60 Hz and 80 Hz.

bol...@nuttyella.co.uk

unread,
Aug 31, 2020, 4:51:20 AM8/31/20
to
Because I get tired of arguing the toss with someone who doesn't understand
the point despite me being very clear about it. Did you seriously think when
I mentioned jerkiness in HIGH DEFINITION video it had anything to do with
frame rate in 2020?? Try engaging your brain first next time.

bol...@nuttyella.co.uk

unread,
Aug 31, 2020, 4:53:24 AM8/31/20
to
Thanks for your valuable insights there, Yoda. Any more wisdom you can bless
us with from your Dummies Guide to Psychology book?

Juha Nieminen

unread,
Aug 31, 2020, 8:15:21 AM8/31/20
to
No, what you are doing is trying to deflect and move goalposts, in order to
not have to admit having made a mistake.

Your original claim, which started the framerate discussion, was that it's
useless for video games to go above 3 Hz refresh rates because humans can't
distinguish anything higher. That game developers aiming at higher
framerates is just them trying to show off.

Now you are talking about some "high definition video", as if it had anything
at all to do with your original assertion (which was about game framerates,
absolutely nothing to do with video codecs).

bol...@nowhere.co.uk

unread,
Aug 31, 2020, 11:35:21 AM8/31/20
to
On Mon, 31 Aug 2020 12:15:09 +0000 (UTC)
Juha Nieminen <nos...@thanks.invalid> wrote:
>In comp.lang.c++ bol...@nuttyella.co.uk wrote:
>> On Sun, 30 Aug 2020 17:11:07 +0000 (UTC)
>> Juha Nieminen <nos...@thanks.invalid> wrote:
>>>In comp.lang.c++ bol...@nuttyella.co.uk wrote:
>>>> That's not the kind of jerkiness I was talking about, which if you had half
>>>> a working braincell you'd have understood.
>>>
>>>Notice how you are the only one here throwing insults at people.
>>>
>>>I wonder why that is.
>>
>> Because I get tired of arguing the toss with someone who doesn't understand
>> the point despite me being very clear about it. Did you seriously think when
>> I mentioned jerkiness in HIGH DEFINITION video it had anything to do with
>> frame rate in 2020?? Try engaging your brain first next time.
>
>No, what you are doing is trying to deflect and move goalposts, in order to
>not have to admit having made a mistake.
>
>Your original claim, which started the framerate discussion, was that it's
>useless for video games to go above 3 Hz refresh rates because humans can't
>distinguish anything higher. That game developers aiming at higher
>framerates is just them trying to show off.

Two entirely different threads, which you've clearly got mixed up in your head
because A) you're an idiot or B) you're just looking for an argument. Here's
my original post:

--------------
From: bol...@nuttyella.co.uk
Subject: Re: Are there any asm-instructions to support OOP
Message-ID: <rib6af$1kuh$1...@gioia.aioe.org>

On Fri, 28 Aug 2020 10:34:17 -0400
Stefan Monnier <mon...@iro.umontreal.ca> wrote:
>> again, something like 20 Hz is the minimum for motion to appear somewhat
>> smooth, and most people are easily capable of seeing improvements up to
>> about 50 or 60 Hz.
>
>I wonder how this limit changes with motion blur: on TV I tend to notice
>when the individual images are "perfectly" crisp (as is typically the
>case in sports on TV where it's recorded at higher rates to be able to
>replay in slow motion IIUC).

A current problem with video streams is the compression algorithm not being
able to keep up with fast motion or panning and introducing a noticeable
jerkiness to the output. This only seems to happen on HD, so possibly it's an
artifact of H.264, because SD streams just used to break up into blocks.
--------------


Marcel Mueller

unread,
Sep 9, 2020, 1:35:34 AM9/9/20
to
Am 25.08.20 um 18:38 schrieb Christian Hanné:
> Is there any CPU-architecture with CPU-instructions to support
> object protection-levels like private, protected, publich (and
> package in Java)?
> I think that would be very cool since you could establish security
> -mechanisms on top of that.

Basically you need to protect memory from reading, writing, or execution.

The first limiting factor is the granularity. Common architectures use
a 4k block size for this purpose. To protect every variable you would in
fact need to reduce this significantly; otherwise excessive padding would
be required to separate each block with different access rights.
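
A back-of-the-envelope illustration (my numbers): a class with private,
protected and public members would need them spread over at least three
4 KiB pages - 12 KiB for an object whose fields might total a few dozen
bytes, a blowup of two to three orders of magnitude, before even
counting the extra TLB pressure.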

The second point is volatility. The protection level changes with every
method call, so the CPU protection information has to be updated (to some
degree) on every method invocation. In the case of inlining of trivial
methods like getters, these updates would remain and make the entire
solution extremely inefficient.

It is by far more efficient to check these constraints *at compile time*
once than to do the same over and over at run time.

In fact almost any movement of work from execution time to compile time
reduces the resources taken at run time. constexpr is a good example of a
feature that allows such optimizations.
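
A minimal constexpr sketch of that idea (my example): the constraint is
evaluated during compilation, so a violation becomes a diagnostic
instead of a run-time check:

constexpr int checked_index(int i, int size)
{
    // Reaching the throw during constant evaluation is ill-formed,
    // so an out-of-range constant index fails to compile.
    return (i >= 0 && i < size) ? i : throw "index out of range";
}

static_assert(checked_index(3, 10) == 3, "checked at compile time");
// static_assert(checked_index(12, 10) == 12, "");  // would not compile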

So I would call this a bad idea.


Marcel

Christian Hanné

unread,
Sep 9, 2020, 9:51:31 AM9/9/20
to
> The first limiting factor is the granularity. Common architectures use
> 4k block size for this purpose. To protect every variable you need to
> reduce this in fact significantly. Otherwise excessive padding would be
> required to separate each block with different access rights.

Ok, then the different fields would have to be assigned to different pages.
This would work best with multiple this-pointers.

> The second point is volatility. The protection level changes with every
> method call. So to CPU protection information has to be updated (to some
> degree) on every method invocation. In case of inlining of trivial
> methods like getters these updates would remain and make the entire
> solution extremely inefficient.

That's not an issue. The kernel can do this.

Richard Damon

unread,
Sep 9, 2020, 10:35:33 AM9/9/20
to
The issue isn't that it can't be done, but that it will reduce your
performance incredibly. Basically you are adding the overhead of a
system call into the kernel and much of the overhead of a task switch to
EVERY method call, and return.
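
To put hedged numbers on that (my estimates): a round trip into the
kernel costs on the order of hundreds of nanoseconds to a microsecond,
while an inlined getter costs well under a nanosecond, so trapping on
every method call and return would slow method-heavy code by a factor
of several hundred to a few thousand.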

Christian Hanné

unread,
Sep 9, 2020, 10:37:23 AM9/9/20
to
> The issue isn't that it can't be done, but that it will reduce your
> performance incredibly. Basically you are adding the overhead of a
> system call into the kernel and much of the overhead of a task switch to
> EVERY method call, and return.

Security always rules over performance.

James Kuyper

unread,
Sep 9, 2020, 11:20:14 AM9/9/20
to
Absolutes are generally false. This is a prime example.

There's no upper limit on how far you can compromise performance in the
name of increased security - the only completely secure computer system
is one that has been turned off, which corresponds to infinitely poor
performance. At some point you have to decide that the value of a small
amount of extra security does not justify a large cost in decreased
performance.

The performance hit of this suggestion would be very large, and since
it's turning issues that are supposed to be dealt with at compile time into
issues that need to be dealt with at run time, the security benefit is
very close to 0 (possibly negative).

Christian Hanné

unread,
Sep 9, 2020, 11:22:02 AM9/9/20
to
> Absolutes are generally false. ...

Not in this case.
But as I can see, you're a person who easily gives up security for nothing.

bol...@nuttyella.co.uk

unread,
Sep 9, 2020, 11:39:15 AM9/9/20
to
On Wed, 9 Sep 2020 11:19:47 -0400
James Kuyper <james...@alumni.caltech.edu> wrote:
>On 9/9/20 10:37 AM, Christian Hanné wrote:
>> Security always rules over performance.
>
>Absolutes are generally false. This is a prime example.
>
>There's no upper limit on how far you can compromise performance in the
>name of increased security - the only completely secure computer system
>is one that has been turned off,

And not even then if someone has physical access to the machine.

James Kuyper

unread,
Sep 9, 2020, 11:46:03 AM9/9/20
to
Those comments display as little knowledge about me as they do about
security.

Christian Hanné

unread,
Sep 9, 2020, 12:01:53 PM9/9/20
to
>> Not in this case.
>> But as I see you're a person that easily offers security for nothing.

> Those comments display as little knowledge about me as they do about
> security.

I'm an expert in this.

Richard Damon

unread,
Sep 9, 2020, 1:30:36 PM9/9/20
to
On 9/9/20 10:37 AM, Christian Hanné wrote:
If you are using this sort of 'access rule' for 'security', you have
already lost. Access rules are there to limit what the PROGRAMMER needs to
think about, which makes things easier for them.

It is designed to help block mistakes, not determined attacks.

Jorgen Grahn

unread,
Sep 9, 2020, 4:04:35 PM9/9/20
to
Scott Newman again, surely.

/Jorgen

--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .

Juha Nieminen

unread,
Sep 10, 2020, 3:15:43 AM9/10/20
to
If this is a question of high-level software security, resistant to
attempts to bypass the language-level public/private class restrictions
by malicious code via runtime checks, then perhaps normal OOP is not the
proper approach.

Tim Rentsch

unread,
Sep 10, 2020, 10:19:26 AM9/10/20
to
Jorgen Grahn <grahn...@snipabacken.se> writes:

> On Wed, 2020-09-09, James Kuyper wrote:
>
>> On 9/9/20 11:21 AM, Christian Hanne wrote:
>>
>>>> Absolutes are generally false. ...
>>>
>>> Not in this case.
>>> But as I see you're a person that easily offers security for nothing.
>>
>> Those comments display as little knowledge about me as they do about
>> security.
>
> Scott Newman again, surely.

I think comments like this one work against your desired
goal of reducing the crank volume.

Tim Rentsch

unread,
Sep 14, 2020, 12:04:22 AM9/14/20
to
Juha Nieminen <nos...@thanks.invalid> writes:

[..in an earlier posting..]

> One of the most fundamental core building blocks of OOP, and its
> precursor, modular programming, ie. the fact that you gather all
> the data related to one object into the same place, the same
> data structure,

> Ben Bacarisse <ben.u...@bsb.me.uk> wrote:
>
>> Right. But this is in part bad design. I blame the rather
>> simplistic view of OO that gets pushed by online tutorials and so
>> on. There is no reason to consider position to be an intrinsic
>> property of an object. Object locations could be stored
>> contiguously in an instance of a LocationArray class and linked
>> (vie pointers or indexes) or otherwise associated with the object
>> or objects that have those locations.
>
> I don't think it's a "simplistic view of OO". It's the standard
> view that has always existed, since the very beginning of OOP.
>
> The way that object-oriented programming (and modular programming)
> works is rather logical and practical: Every object has an
> internal state (usually in the form of member variables) and some
> member functions. You can easily handle such objects, such as
> passing them around, copying them, querying or modifying their
> state, have objects manage other objects, and so on. When coupled
> with the concept of a public/private interface division, it makes
> even large programs manageable, maintainable and the code
> reusable.
>
> Back when OOP was first developed, in the 70's and 80',

The purported history is wrong. Modular programming was not a
precursor to OOP. Preliminary work on what came to be OOP was
done in the early 1960's, by Alan Kay, and independently by Ivan
Sutherland with Sketchpad. The programming language Simula had
classes, virtual functions, and inheritance in 1967. The earliest
mention I'm aware of even of the term modular programming was in
1968, and modules themselves were years later. I was hearing Alan
Kay talk about Smalltalk and OOP before the programming language
Modula existed. By then Smalltalk had already gone through two
iterations, and even the first version, Smalltalk 72, was firmly
object-oriented.

I won't comment on your description of what OOP is, but certainly
there is a sharp contrast with what Alan Kay had to say about
Smalltalk, generally recognized as the canonical object-oriented
language:

Though Smalltalk's structure allows the technique now known as
data abstraction to be easily (and more generally) employed,
the entire thrust of its design has been to supersede the idea
of data and procedures entirely and to replace these with the
more generally useful notions of activity, communication and
inheritance.

Alan Kay, 1972