
OO primitives


Gerry

Feb 23, 2009, 4:27:07 AM

The recent discussion on "Is gforth OP serious?" prompted me to re-read last
year's discussion on "OO Syntax":
http://groups.google.co.uk/group/comp.lang.forth/browse_frm/thread/cd9b769a3bb45cf?q=#be0b07b3617c55bd

In that thread John Passaniti suggested following Lua's example in devising a
set of primitives that could form the basis for the various OO Forth systems
that are available, and he listed some benefits that this would give. There
was some discussion of such a set of OO primitives, but it was inconclusive
and fizzled out. A reference was provided to a paper on "Open Reusable Object
Models", an updated version of which is available here:
http://www.vpri.org/pdf/tr2006003a_objmod.pdf
but there was no response to that. The paper proposes an object model with 3
object types and 5 methods; it is extensible, presumably allowing any type of
OO system, whether prototype- or class-based, to be built on top of it.

It seems to me that the model described could form the basis for a standard
set of OO primitives in Forth but would it be worth it? One difference
between Forth and Lua is that Forth already has the features needed to build
a comprehensive OO system as has been proven many times; whereas, I suppose,
suitable primitives had to be built into Lua to give the same capability. If
that is true then there is clearly less incentive to build such primitives
into Forth and implementers would need to see some measurable benefits in
doing so before reworking their systems to use the primitives.

What do people think about the model proposed in the above paper? Would it
form a suitable basis for a standard set of OO primitives? Would it provide
sufficient benefits to encourage implementers to use it (see JP's list)? Or
what?

Gerry


Stephen Pelc

Feb 23, 2009, 6:21:39 AM

On Mon, 23 Feb 2009 09:27:07 -0000, "Gerry"
<ge...@jackson9000.fsnet.co.uk> wrote:

>It seems to me that the model described could form the basis for a standard
>set of OO primitives in Forth but would it be worth it?

MPE ships at least three OOP models with VFX Forth. About all
they have in common is a lot of saving of the current "this"
pointer on the return stack. OOP models are there to provide
a higher level of abstraction for application programmers.
Given that most serious OOP implementations in Forth grow
to 1000+ source lines, I am of the opinion that OOP models
should be source libraries, and then we'll see which one(s)
survive the acid test of programmer acceptance.
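
The common core mentioned above, reduced to a toy (a rough sketch only, not
any of the shipped models): a single "current object" pointer, with each
method saving the caller's value on the return stack and restoring it on the
way out.

0 value this

: point.show   ( obj -- )
   this >r  to this                \ save the caller's object, install ours
   ." x=" this @ .  ." y=" this cell+ @ .
   r> to this ;                    \ restore before returning

create p1  3 , 4 ,
p1 point.show                      \ prints x=3 y=4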

Stephen


--
Stephen Pelc, steph...@mpeforth.com
MicroProcessor Engineering Ltd - More Real, Less Time
133 Hill Lane, Southampton SO15 5AF, England
tel: +44 (0)23 8063 1441, fax: +44 (0)23 8033 9691
web: http://www.mpeforth.com - free VFX Forth downloads

RogerLevy

Feb 23, 2009, 8:28:04 AM

> then we'll see which one(s)
> survive the acid test of programmer acceptance.
>
> Stephen

In classic Forth fashion I've modified my version of GForth-objects to
the point of being pretty much source-incompatible. >D

I don't think standardization of OOF is ever going to happen (without
a document written on stone tablets like ANS.)

Andrew Haley

Feb 23, 2009, 11:19:40 AM

Gerry <ge...@jackson9000.fsnet.co.uk> wrote:
> The recent discussion on "Is gforth OP serious?" prompted me to re-read
> last year's discussion on "OO Syntax":
> http://groups.google.co.uk/group/comp.lang.forth/browse_frm/thread/cd9b769a3bb45cf?q=#be0b07b3617c55bd

> In that thread John Passaniti suggested following Lua's example in devising
> a set of primitives that could form the basis for the various OO Forth
> systems that are available, and he listed some benefits that this would
> give. There was some discussion of such a set of OO primitives, but it was
> inconclusive and fizzled out. A reference was provided to a paper on "Open
> Reusable Object Models", an updated version of which is available here:
> http://www.vpri.org/pdf/tr2006003a_objmod.pdf
> but there was no response to that. The paper proposes an object model with
> 3 object types and 5 methods; it is extensible, presumably allowing any
> type of OO system, whether prototype- or class-based, to be built on top of
> it.

It's a very interesting paper. It's based on a model of OO that is
very dynamic and isn't strongly typed, which certainly suits my own
tastes: very Smalltalk, very Forth.

The paper is long, but the core idea is that the a is itself an
object. So, instead of

call object->message (args)

being implemented as a lookup in a table followed by a routine in the
object's class being called, it's done by sending the message "lookup"
to the object's vtable:

method = call object->vtable->lookup(message)
call method (args)

All a programmer has to do to implement a novel model of message
dispatch is to define the lookup method appropriately. You want
multiple inheritance, you can have that.
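
Something like this as a rough sketch in Forth (my own toy layout, not the
paper's implementation): a vtable is a parent ("delegate") cell plus a linked
list of ( selector xt ) bindings, an object's first cell points at its
vtable, and selectors are just unique addresses made by SELECTOR. The one
cheat is that SEND calls LOOKUP directly; the paper goes further and finds
"lookup" itself by sending it to the vtable's own vtable, which is what makes
the model open.

: selector   ( "name" -- )  create 0 , ;     \ the name pushes a unique key

: make-vtable   ( parent -- vt )  here  swap ,  0 , ;

: add-method   ( xt selector vt -- )
   here >r
   dup cell+ @ ,          \ link to the previous binding
   swap ,                 \ selector
   swap ,                 \ xt
   r> swap cell+ ! ;      \ the new binding becomes the head of the list

: find-binding   ( selector node -- xt true | false )
   begin dup while
      2dup cell+ @ = if  nip 2 cells + @  true  exit  then
      @                   \ next binding
   repeat
   2drop false ;

: lookup   ( selector vt -- xt | 0 )
   begin dup while
      2dup cell+ @ find-binding if  nip nip  exit  then
      @                   \ not found here: delegate to the parent vtable
   repeat
   nip ;

: send   ( ... obj selector -- ... )
   over @ lookup
   dup 0= abort" message not understood"
   execute ;

\ for example:
selector area
0 make-vtable constant circle-vt
: circle-area   ( obj -- n )  cell+ @ dup *  355 113 */ ;   \ pi ~ 355/113
' circle-area  area  circle-vt  add-method

create c1  circle-vt ,  10 ,       \ an object: vtable pointer, then radius
c1 area send .                     \ prints 314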

But this double indirect dispatch will cost, so the authors allocate
an inline cache: at the point of every send a couple of words are
allocated, and if the vtable is the same as the last time this message
was sent we don't have to do the lookup again. (An aside: this inline
cache will not work as described on a multi-threaded system -- the
cache would have to be in thread-local memory.) They also implement a
global cache, much like that used in some Smalltalk implementations.
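
In Forth the inline cache might come out something like this (again only a
rough sketch of mine, reusing LOOKUP from the sketch above). Each send site
owns a private two-cell cache, [ cached vtable | cached xt ], starting as
0 0 so the first use is always a miss; a compiling word would allot the cache
and compile its address as a literal at the call site, so only the run-time
check is shown.

: cached-send   ( ... obj selector cache -- ... )
   >r  over @                        \ obj selector vt        R: cache
   dup r@ @ = if                     \ same vtable as last time at this site?
      2drop  r> cell+ @  execute     \ hit: run the cached xt
   else
      swap over lookup               \ obj vt xt  (full lookup on a miss)
      dup 0= abort" message not understood"
      swap r@ !                      \ remember the vtable ...
      dup r> cell+ !                 \ ... and the xt for next time
      execute
   then ;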

However, it's fairly obvious that this technique is going to be slower
than early binding, which they measure as about twice as fast as their
own method with an inline cache.

> What do people think about the model proposed in the above paper? Would it
> form a suitable basis for a standard set of OO primitives? Would it
> provide sufficient benefits to encourage implementers to use it (see JP's
> list)? Or what?

It might be good enough for me but some Forthers won't like it because
it is a bit less efficient than a straightforward indexed call. I
think I may play with the idea in Forth. Something like this idea
could certainly be implemented in Forth with a small amount of code.

Andrew.

Andrew Haley

Feb 23, 2009, 11:22:06 AM

Andrew Haley <andr...@littlepinkcloud.invalid> wrote:
> The paper is long, but the core idea is that the a is itself an

Should be

" ... that the vtable is itself an object ..."

John Passaniti

Feb 24, 2009, 12:38:54 PM

Andrew Haley wrote:
> It's a very interesting paper. It's based on a model of OO that is
> very dynamic and isn't strongly typed, which certainly suits my own
> tastes: very Smalltalk, very Forth.

Also, very Lua. What I found interesting about the paper was that the
common primitives it derives for object oriented systems are "intern",
"addMethod", "lookup", "allocate", and "delegated". When I first
discussed looking at Lua for inspiration, I identified the two core
metamethods used by Lua to implement object oriented systems ("index"
and "newindex") which roughly correspond to "lookup" and "addMethod" in
the paper. The other three primitives the paper identifies are actually
implicit in Lua (strings are interned and tables are automatically
allocated) and the hook mechanism to set the equivalent of the vtable is
a system primitive. It's not exactly the same thing and the paper's
approach is more generic and ultimately more powerful. But it's pretty
close.
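
Lining the paper's five primitives up against the kind of toy Forth words
sketched earlier in the thread (the mapping and the names are my own guesses,
not the paper's, and this assumes those earlier definitions):

\ intern     -- SELECTOR, which turns a name into a unique key
\ addMethod  -- ADD-METHOD ( xt selector vt -- )
\ lookup     -- LOOKUP     ( selector vt -- xt | 0 )
\ allocate   -- laying down a vtable pointer plus the instance cells
\ delegated  -- MAKE-VTABLE ( parent -- vt ), a child vtable that defers
\               to its parent when a lookup fails

0 make-vtable constant object-vt              \ a root vtable
object-vt make-vtable constant derived-vt     \ "delegated"

selector describe
:noname ( obj -- ) drop ." some object" ;  describe  object-vt  add-method

create obj2  derived-vt ,                     \ "allocate": just a vtable pointer
obj2 describe send                            \ found in the parent via delegation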

> But this double indirect dispatch will cost, so the authors allocate
> an inline cache: at the point of every send a couple of words are
> allocated, and if the vtable is the same as the last time this
> message was sent we don't have to do the lookup again.

Caching method lookups is also a common optimization strategy for Lua
object systems, which is another parallel with the paper. Of course,
how much such caching matters depends on how many times the lookup
operation occurs. Concerns over the cost are sometimes misplaced. In
one recent project I worked on, I created an object system in Lua and
told myself that I would probably have to implement caching later. As
it turned out, although the caching provided a measurable speed
increase, it wasn't significant enough to matter.

I bring that up because I find it ironic that Forthers tend to place so
much emphasis on speed. Back in the early days of Forth, the usual
implementation strategy wasn't native code, but an inner interpreter
running over lists of words. And yet, it was reported that some Forth
applications were as fast or faster than the functional equivalents in
compiled languages. There was no mystery there-- better algorithms
combined with fewer and simpler abstractions. In other words, better
code can trump a slower language.

And so it goes with object orientation. Yes, there is a cost. But for
people who can effectively use object orientation, the cost of a double
indirect dispatch may not matter.

When people look at object orientation as just another approach to
factoring, the cost of the runtime support may be easily overshadowed by
other benefits.

Gerry

Mar 5, 2009, 3:17:07 PM

Thanks John and Andrew for your views on the paper. I've delayed replying
till now in case there were more replies. I don't know too much about the
different types of OO programming but you've given me some confidence that I
won't be wasting my time if I play around with the ideas in the paper.

Gerry


Stephen Pelc

Mar 5, 2009, 6:56:28 PM

On Tue, 24 Feb 2009 12:38:54 -0500, John Passaniti
<nn...@JapanIsShinto.com> wrote:

>I bring that up because I find it ironic that Forthers tend to place so
>much emphasis on speed. Back in the early days of Forth, the usual
>implementation strategy wasn't native code, but an inner interpreter
>running over lists of words.

The improvement in performance provided by native code compilers
is over 10:1 (by measurement) for most code. That simply changes
what you can do with the same silicon. That good algorithms have
even more impact isn't important. In my world, speed *always*
matters.

John Passaniti

Mar 6, 2009, 12:16:18 PM

Stephen Pelc wrote:
> The improvement in performance provided by native code compilers
> is over 10:1 (by measurement) for most code. That simply changes
> what you can do with the same silicon. That good algorithms have
> even more impact isn't important. In my world, speed *always*
> matters.

And in my world, what matters is the need of the application. Sometimes
that means that time efficiency matters far more than space efficiency.
Sometimes it's the other way around. Sometimes, neither matters and
other factors bubble up to the top. Putting just one measure of
efficiency (execution speed) to the top may be an overriding concern for
compiler vendors who are out to win benchmarks. But for the rest of
us-- those who use your tools-- we may have other priorities.

Most of the discussion in comp.lang.forth is inordinately focused on
execution speed. That always struck me as odd because (I assume) the
majority of participants in comp.lang.forth are working in embedded
systems-- systems which are typically "resource constrained." That
sometimes means slower processors, but it can also mean addressable
memory is small.

A good practical example of where execution speed doesn't matter as much
is the current project I'm working on. We're using a soft-core
processor on a FPGA. And as we've been bringing up parts of the
application, we're finding that there are some specific hot-spots in the
code. So we simply instantiate hardware to speed-up those specific
operations, and we're instantly running faster than even the best
optimizing compiler could ever generate code for. What we can't do is
magically instantiate more memory in the system, so our application is
constrained far more by RAM than by processor speed.

Regardless, the overarching point I'm making here is that before people
reject out-of-hand a particular technique because of a perception that
it is slow, they should measure it and see if it matters.

Elizabeth D Rather

Mar 6, 2009, 12:57:36 PM

John Passaniti wrote:
> Stephen Pelc wrote:
>> The improvement in performance provided by native code compilers
>> is over 10:1 (by measurement) for most code. That simply changes
>> what you can do with the same silicon. That good algorithms have
>> even more impact isn't important. In my world, speed *always*
>> matters.
>
> And in my world, what matters is the need of the application. Sometimes
> that means that time efficiency matters far more than space efficiency.
> Sometimes it's the other way around. Sometimes, neither matters and
> other factors bubble up to the top. Putting just one measure of
> efficiency (execution speed) to the top may be an overriding concern for
> compiler vendors who are out to win benchmarks. But for the rest of
> us-- those who use your tools-- we may have other priorities.
>
...

> Regardless, the overarching point I'm making here is that before people
> reject out-of-hand a particular technique because of a perception that
> it is slow, they should measure it and see if it matters.

Yes, I agree, but an important fact that Stephen didn't mention is that
optimizing native code compilers often provide significant savings in
code size as well as performance. This is more true of 32-bit
processors than of the smaller ones, but when we first went this route many
years ago we were delighted to find that there was little or no size
penalty wrt threaded code even on small targets like the 8051 (and they
can always use all the performance they can get). So it's a win-win.

And those of us with significant numbers of embedded systems users are
also concentrating on providing better interactive debugging and other
features as well as speed. But they aren't discussed here largely
because the embedded systems users seem to be a minority here.

The speed discussions that are definitely *not* interesting, IMO, are
the ones that focus on optimizing for specific cache features in
specific x86 variants.

Cheers,
Elizabeth


--
==================================================
Elizabeth D. Rather (US & Canada) 800-55-FORTH
FORTH Inc. +1 310.999.6784
5959 West Century Blvd. Suite 700
Los Angeles, CA 90045
http://www.forth.com

"Forth-based products and Services for real-time
applications since 1973."
==================================================

Andrew McKewan

Mar 6, 2009, 1:00:30 PM

For the applications I design, processors are getting ever more
capable and memory is becoming cheaper. We are now developing products
with 600 MHz ARM Cortex A8 and 1.6 GHz Intel Atom processors with 256K
to 1 GB RAM. Sure we are asked for more demanding features but a lot
of the hard work is now done in hardware graphics and multimedia
coprocessors. What is becoming essential to our success is improved
programmer productivity.

Up until now, almost all of our software has been written in C/C++.
Excellent performance. Lousy productivity.

Now I am advocating a hybrid approach. Use C/C++ for the core
applications where performance matters or solutions already exist. Use
a scripting language for the high-level application and HMI logic.
This is where most of the iterative development occurs and is the part
of the software that is most affected by changing customer
requirements. Our productivity is improving by an order of magnitude.

But I didn't choose Forth, I chose Lua. The cycle counters will throw
up their arms in horror when they see what it takes to add two numbers
or fetch a value from an array. But guess what, it's fast enough. The
time it takes to process user input and decide what to do is a
fraction of the time it takes to perform the operation. And for our
developers, it is much easier, more powerful and more productive than
Forth.

And yes, it has OO primitives (to stay on topic).

There are many different requirements, many processors and many
software tools. They all have their place and I am glad I have such a
rich environment in which to work.

Andrew

Doug Hoffman

Apr 11, 2009, 11:33:58 AM

John Passaniti wrote:
> Stephen Pelc wrote:
>> The improvement in performance provided by native code compilers
>> is over 10:1 (by measurement) for most code. That simply changes
>> what you can do with the same silicon. That good algorithms have
>> even more impact isn't important. In my world, speed *always*
>> matters.
>
> And in my world, what matters is the need of the application. Sometimes
> that means that time efficiency matters far more than space efficiency.
> Sometimes it's the other way around. Sometimes, neither matters and
> other factors bubble up to the top. Putting just one measure of
> efficiency (execution speed) to the top may be an overriding concern for
> compiler vendors who are out to win benchmarks. But for the rest of
> us-- those who use your tools-- we may have other priorities.
>
> Most of the discussion in comp.lang.forth is inordinately focused on
> execution speed. That always struck me as odd because (I assume) the
> majority of participants in comp.lang.forth are working in embedded
> systems-- systems which are typically "resource constrained."

Apparently I missed the notice that this is supposed to be primarily an
embedded Forth newsgroup.

> That
> sometimes means slower processors, but it can also mean addressable
> memory is small.
>
> A good practical example of where execution speed doesn't matter as much
> is the current project I'm working on. We're using a soft-core
> processor on a FPGA. And as we've been bringing up parts of the
> application, we're finding that there are some specific hot-spots in the
> code. So we simply instantiate hardware to speed-up those specific
> operations, and we're instantly running faster than even the best
> optimizing compiler could ever generate code for. What we can't do is
> magically instantiate more memory in the system, so our application is
> constrained far more by RAM than by processor speed.
>
> Regardless, the overarching point I'm making here is that before people
> reject out-of-hand a particular technique because of a perception that
> it is slow, they should measure it and see if it matters.

Agreed. In fact I have a different version of Neon+ that compiles to
under 3400 bytes on Gforth, under 2900 bytes if you don't need heap
objects support. It functions *identically* to the standard version,
i.e., compiles the same source and the resulting code provides the same
results. It is slower than the standard version and has been criticized
for that slowness (it's true, you can't please everyone all the time).
That led to the creation of the "fast" version. The underlying engines for
each are of course quite different, but that is 100% transparent to the
programmer.

Now some will say that 2900 bytes is still too big and mini-oof or even
CREATE DOES> provides all the "OOP" they need. Fine by me. To each
their own.
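
For what it's worth, the CREATE DOES> end of that spectrum can be as small as
this throwaway toy (my own sketch, nothing to do with Neon+ or mini-oof): the
instance word applies whatever xt it is handed to its own data, so any
ordinary word can act as a "method".

: instance   ( "name" -- )
   create 0 ,
   does> ( xt -- ... )      \ DOES> puts the data address on top: xt addr
   swap execute ;

: get    ( addr -- n )  @ ;
: bump   ( addr -- )    1 swap +! ;

instance hits
' bump hits   ' bump hits
' get hits .                \ prints 2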

-Doug
