Porting


Thiago Silva

Oct 8, 2006, 9:55:17 AM
to Strongtalk-general
Hello all,

I would like to know if there is any work being done on porting the
sources/build to compile on other OSes, particularly GNU/Linux.
Some posts ago, I noticed there will be an effort to compile the
sources using recent MS compilers, so I wonder if the porting to, say,
GNU/Linux and g++ will be done in a later stage, or if it is in
progress.

Cheers,
Thiago Silva

David Griswold

Oct 8, 2006, 3:49:13 PM
to strongtal...@googlegroups.com

We don't really have *any* experienced VM developers working on Strongtalk
right now, so the answer is no, unless someone is doing that without letting
me know. A few other people and I are exploring the VM code in our
spare time, but that's about it so far. I've been trying to get people with
VM experience from other Smalltalks involved, but no luck so far; everyone
seems to think that someone else is going to magically develop Strongtalk
for them. It would be a great thing to do for some VM hacker, but until
they get involved, all that will happen is a little tinkering here and there
on the VM. There is almost certainly going to be at least one company
hiring some people to do something with the Strongtalk VM, but that is not
yet happening. Hopefully we can get some VM people doing these things soon.
-Dave


Thiago Silva

Oct 8, 2006, 7:23:26 PM
to strongtal...@googlegroups.com
I see.
I'm a C/C++ developer (no wizard here, though), and I have been very interested in the project. Unfortunately, I have no experience with VMs, and I'm trying to find the time to play with the sources on GNU/Linux.
Anyway, thanks for the quick answer, Dave.

Thiago Silva



David Griswold

Oct 8, 2006, 8:55:43 PM
to strongtal...@googlegroups.com
Hi Thiago and others who want to help with the VM,
 
Thanks for your enthusiasm.  I know there are a number of smart C++ programmers like you out there listening who want to get involved.  I very much want help from people like you, as long as you are willing to go through a hard course of training on the VM, because unfortunately, system programming and VMs and type-feedback are very difficult. 
 
The reason I have delayed organizing people like you is that while I have a good understanding of the theoretical and abstract VM issues, I myself am not an experienced C++ programmer, and I have not worked directly on the VM code myself (although I managed the Strongtalk VM development).  As a result, I have been spending a lot of time myself getting up to speed on C++ and the MS development tools.  So I don't want to get people too excited to start changing the VM until I have learned enough myself to give you intelligent answers to your questions, and to be able to review submitted code and coordinate development.  I'm getting up to speed as fast as I can, but it will take some time.
 
In the meantime, rather than making those of you who want to help wait until we are ready to start real hacking on the VM, I will try to set up a 'VM study group' for people like you who want to learn about VMs and type-feedback.  So, in the next few days, I will put out a reading list that will help people get up to speed.
 
I will try to collect material on basic VM issues like garbage collection, but for those who want to dive in right now and think they understand basic VM issues, you can start by reading some of the papers listed at http://www.strongtalk.org/documents.html.   You might start by reading "Optimizing Dynamically-Dispatched Calls with Run-Time Type Feedback", which gives a good overview.
 
Cheers,
Dave

Thiago Silva

Oct 8, 2006, 9:40:19 PM
to Strongtalk-general
David Griswold wrote:
> Hi Thiago and others who want to help with the VM,
>
> Thanks for your enthusiasm. I know there are a number of smart C++
> programmers like you out there listening who want to get involved. I very
> much want help from people like you, as long as you are willing to go
> through a hard course of training on the VM, because unfortunately, system
> programming and VMs and type-feedback are very difficult.

Great. I'll see how far my enthusiasm will take me... :)

> The reason I have delayed organizing people like you is that while I have a
> good understanding of the theoretical and abstract VM issues, I myself am
> not an experienced C++ programmer, and I have not worked directly on the VM
> code myself (although I managed the Strongtalk VM development). As a
> result, I have been spending a lot of time myself getting up to speed on C++
> and the MS development tools. So I don't want to get people too excited to
> start changing the VM until I have learned enough myself to give you
> intelligent answers to your questions, and to be able to review submitted
> code and coordinate development. I'm getting up to speed as fast as I can,
> but it will take some time.

That's very reasonable. I should thank you for your efforts, as well!

> In the meantime, rather than making those of you who want to help wait until
> we are ready to start real hacking on the VM, I will try to set up a 'VM
> study group' for people like you who want to learn about VMs and
> type-feedback. So, in the next few days, I will put out a reading list that
> will help people get up to speed.

Great!

> I will try to collect material on basic VM issues like garbage collection,
> but for those who want to dive in right now and think they understand basic
> VM issues, you can start by reading some of the papers listed at
> http://www.strongtalk.org/documents.html. You might start by reading
> "Optimizing Dynamically-Dispatched Calls with Run-Time Type Feedback", which
> gives a good overview.

Thank you, Dave.


Thiago Silva

Craig Latta

Oct 9, 2006, 1:07:56 AM
to strongtal...@googlegroups.com

Hi Dave--

> I've been trying to get people with VM experience from other
> Smalltalks involved, but no luck so far; everyone seems to think that
> someone else is going to magically develop Strongtalk for them.

Several qualified VM hackers (including myself) have told you that
the C++ aspect keeps them from jumping in. So far the most we've gotten
from you about that is "oh, as C++ goes it's not so bad". I trust your
claim is true, but I don't think it's enough to sway those people.

There's no magic involved in the thought processes here. I
certainly don't expect anyone to develop Strongtalk for me, largely
because I don't know any VM hackers who can tolerate working with C++.
:) That doesn't mean I expect that the work will remain undone; I
simply don't know who would want to do it. Teaching C++ programmers
about VMs, as you seem to be doing, is probably the right way to go on
this project at the moment.


thanks,

-C

--
Craig Latta
http://netjam.org/resume

Cafe Alpha

Oct 9, 2006, 4:03:15 AM
to strongtal...@googlegroups.com

I'm a bit of a C++ guru.

I wonder what VM hackers prefer?

I hope the answer isn't C. After all, C is just a subset of C++, and one
where the common style includes the use of global variables holding state,
ensuring that code isn't reentrant.
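
To illustrate the contrast, a tiny sketch (hypothetical code, not from
any particular VM):

    // Classic C style: scanner state lives in file-scope globals, so only
    // one scan can be in flight at a time (the code is not reentrant).
    static const char* g_input;
    static int g_pos;
    int nextChar(void) { return g_input[g_pos++]; }

    // C++ style: the same state lives in an object, so two scanners can
    // be used at once and the code stays reentrant.
    class Scanner {
        const char* input_;
        int pos_;
    public:
        explicit Scanner(const char* input) : input_(input), pos_(0) {}
        int next() { return input_[pos_++]; }
    };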

Joshua Scholar

Brian Rice

Oct 9, 2006, 4:36:20 AM
to strongtal...@googlegroups.com
On 10/9/06, Cafe Alpha <cafea...@gmail.com> wrote:
> > > I've been trying to get people with VM experience from other
> > > Smalltalks involved, but no luck so far; everyone seems to think that
> > > someone else is going to magically develop Strongtalk for them.
> >
> > Several qualified VM hackers (including myself) have told you that
> > the C++ aspect keeps them from jumping in. So far the most we've gotten
> > from you about that is "oh, as C++ goes it's not so bad". I trust your
> > claim is true, but I don't think it's enough to sway those people.
> >
> > There's no magic involved in the thought processes here. I
> > certainly don't expect anyone to develop Strongtalk for me, largely
> > because I don't know any VM hackers who can tolerate working with C++.
> > :) That doesn't mean I expect that the work will remain undone; I
> > simply don't know who would want to do it. Teaching C++ programmers
> > about VMs, as you seem to be doing, is probably the right way to go on
> > this project at the moment.
> >
> >
> > thanks,
> >
> > -C
> >
> > --
> > Craig Latta
>
> I'm a bit of a C++ guru.
>
> I wonder what VM hackers prefer?
>
> I hope the answer isn't C. After all C is just a subset of C++, and one
> where the common style includes the use of global variables holding state,
> insuring that code isn't reenterent.

The C++ in the Strongtalk VM doesn't look particularly hackish in
terms of its large-scale design, and the fact that the code is
organized as classes and methods is better than the style of C I
usually think of as "GtkObject" style which is basically a design
pattern.

Perhaps what would help is the use of Doxygen ( at
http://www.stack.nl/~dimitri/doxygen/ ) which presents object-oriented
protocol information in a web-browsable format (with dependency graphs
if GraphViz is installed). A few wizard-based interfaces are provided
( see http://www.stack.nl/~dimitri/doxygen/download.html#latestsrc )
which I've found pretty easy to use after some familiarization,
although even the out-of-the-box settings suffice to start with. Dave,
if you can set up a Doxyfile and a cron-job to update some web-docs
for the project's SVN repository, it might increase visibility of the
source in a relatively friendly way. If it seems like a lot of
trouble, I can cook up a simple setup pretty easily.
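
For what it's worth, a minimal setup might look roughly like the
following; the Doxygen options are standard ones, but the paths and the
schedule are only guesses that would need adjusting to the actual server
and checkout layout:

    # Doxyfile sketch for the VM sources
    PROJECT_NAME     = Strongtalk
    INPUT            = /home/strongtalk/checkout/vm
    RECURSIVE        = YES
    FILE_PATTERNS    = *.hpp *.cpp
    OUTPUT_DIRECTORY = /var/www/strongtalk-docs
    GENERATE_HTML    = YES
    GENERATE_LATEX   = NO
    EXTRACT_ALL      = YES
    # enable only if GraphViz is installed
    HAVE_DOT         = YES

    # crontab entry: refresh the docs nightly after an svn update
    0 3 * * * cd /home/strongtalk/checkout && svn update -q && doxygen Doxyfile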

--
-Brian T. Rice

Krzysztof Kowalczyk

Oct 9, 2006, 7:05:39 AM
to strongtal...@googlegroups.com
On 10/8/06, Craig Latta <threnkel...@gmail.com> wrote:
> Several qualified VM hackers (including myself) have told you that
> the C++ aspect keeps them from jumping in. So far the most we've gotten
> from you about that is "oh, as C++ goes it's not so bad". I trust your
> claim is true, but I don't think it's enough to sway those people.
>
> There's no magic involved in the thought processes here. I
> certainly don't expect anyone to develop Strongtalk for me, largely
> because I don't know any VM hackers who can tolerate working with C++.
> :) That doesn't mean I expect that the work will remain undone; I
> simply don't know who would want to do it. Teaching C++ programmers
> about VMs, as you seem to be doing, is probably the right way to go on
> this project at the moment.

I'm nitpicking, but this seems exaggerated. I know of only a handful of
VMs written in high-level languages; an overwhelming majority are
written in C/C++.

Besides, throwing all "VM hackers" into one basket isn't accurate. A
Ruby VM or Python VM or clisp VM or Squeak VM or Java VM or PLT Scheme
or mono hacker will be (almost) as lost in Strongtalk's VM as any
decent programmer.

Another truth is that people invested in their own ideas and work are
reluctant to switch to other technology, even if in many ways it's
better. We wouldn't have a thousand lisp/scheme implementations if
that wasn't true. Trying to attract them at all cost seems futile.

Anyway, I think Strongtalk's future is bright and will happen by
growing knowledgeable developers out of people interested in Strongtalk,
just like any other successful project (e.g. Python or Perl or Ruby)
grew from one dedicated person to a small army.

Strongtalk's speed is a major selling point. A combination of ease of
programming similar to Python or Ruby with Strongtalk speed should
appeal to many programmers.

And finally, there are plenty of things that can be done by people who
are not VM wizards (makefiles, porting to other x86 OSes, expanding the
Strongtalk libraries, writing docs, etc.).

-- kjk

Mike T

Oct 9, 2006, 12:46:34 PM
to Strongtalk-general
I think I agree with Krzysztof's view that many Smalltalk VM developers
are probably too immersed in their own projects to consider adopting
the Strongtalk design & code base. Many VMs have successfully been
implemented in C/C++; it's a proven language for this task. However, at
this time detailed documentation is what is most needed.

I also think that Strongtalk can have a bright future as an independent
open source Smalltalk. I'd love to see a Smalltalk with a clean,
well-thought-out set of libraries and good integration with the host OS
API.

Best wishes,
Mike

David Griswold

Oct 9, 2006, 1:49:44 PM
to strongtal...@googlegroups.com
Hi Craig,

I understand why you wouldn't want to write in C++. I don't like C++
either, if I have a choice of languages. But a system programmer is someone
who uses the tools he needs to, to get the job done. VMs and OSes are
system programming, and system programming has always been done, with very
good reason, in languages that map fairly directly to machine code. I've
used dozens of languages, like most programmers, and while I like Smalltalk
the most, I'm not dogmatic about it. I use the tool that is appropriate for
the task.

Yes, there have been a few experiments in writing VMs in languages like
Smalltalk, but frankly, they are either slow (Squeak) or don't work (Klein).
Not a single VM written in Smalltalk, or Self, or other garbage-collecting
language, has ever gotten close to the kind of performance achieved by
lower-level languages. Squeak is much slower even than VisualWorks, which
is much slower than Strongtalk. You may not know any VM hackers who could
tolerate working in C++, but that simply shows you know a pretty small
circle of VM hackers, since *every* VM *except* for Squeak has been written
in a low-level language, including VisualWorks, Dolphin (I would bet), GNU,
Python, Java, Ruby, JavaScript, Lisp, ad infinitum.

I agree that it would be *nice* if it was written in Smalltalk, and it would
be nice if writing fast VMs in Smalltalk was a mature technology. But the
burden of proof is on you to demonstrate that Smalltalk is a good system
programming language, not on me to demonstrate that C++ is a good system
programming language.

In my experience, Smalltalk programmers have historically adopted the
mindset that Smalltalk performance is fine, performance doesn't matter. But
that is not a matter of reality, it is a matter of rationalization, because
high performance in Smalltalk wasn't achievable. That has now changed.
The only reason why most of you think performance isn't important, is
because you have self-selected application areas where performance doesn't
matter. And there you all sit, in a tiny, obscure corner of the market.
Woe to the programmer that commits to using Smalltalk for some application,
and then discovers that performance was important after all. Ask the
Croquet people whether they feel that performance isn't important.

You are being handed the biggest advance in Smalltalk technology *ever*, and
everyone is complaining that the VM isn't written in Smalltalk. The
productivity drawbacks of writing in a particular language have to be
weighed against the benefits and the size of the potential user base. If
you have the same number of users as you do programmers, then it is
important to use a high-level language. If 5 people are writing software
that is going to be used by a billion people, the cost of writing the
software becomes almost irrelevant. You don't write the Linux kernel in
Smalltalk, and you don't write the Java VM in Java. You suck it up and do
what you have to do as a professional.

Sorry if I sound harsh, but the Smalltalk community needs to wake up. If
you don't believe me, ask Dan, who wrote the Squeak VM in Smalltalk, whether
*he* thinks Strongtalk's technology is worth dealing with the C++.

-Dave


John M McIntosh

Oct 9, 2006, 2:13:04 PM
to strongtal...@googlegroups.com
Oddly enough I was thinking over this issue when swimming yesterday. As pointed out, the problem
right now is that we all think someone else is going to port all this code to the various platforms, each working independently at their kitchen table one night on their own vision of what is right.

However, to really make it work I think we need to use, say, mantis (our Sophie project has been using mantis for problem tracking and assignments with some success) to create a list of tasks and assemble people to take the parts they are interested in, so that we decide what to do as a group rather than as a bunch of independent people poking at the problem. That way tasks can be sorted from difficult to boring, people can pick and choose what they want to work on, and of course the to-do list, and gee, a project plan, gets captured in some form other than a long, historically insignificant(?) mailing list.

Otherwise it's lots of talk, but no progress.

On 9-Oct-06, at 10:49 AM, David Griswold wrote:


Thiago Silva

Oct 9, 2006, 3:17:21 PM
to strongtal...@googlegroups.com
On 10/9/06, John M McIntosh <jmmci...@gmail.com> wrote:
Oddly enough I was thinking over this issue when swimming yesterday. As pointed out, the problem
right now is that we all think someone else is going to port all this code to the various platforms, each working independently at their kitchen table one night on their own vision of what is right.

However, to really make it work I think we need to use, say, mantis (our Sophie project has been using mantis for problem tracking and assignments with some success) to create a list of tasks and assemble people to take the parts they are interested in, so that we decide what to do as a group rather than as a bunch of independent people poking at the problem. That way tasks can be sorted from difficult to boring, people can pick and choose what they want to work on, and of course the to-do list, and gee, a project plan, gets captured in some form other than a long, historically insignificant(?) mailing list.

I think using an application like mantis is good. But I think it works best when the staff, goals, commitment, and responsibilities are stable, when the process is clear. If I'm not mistaken, it hasn't been long since there was a discussion on what direction the project should focus on.

Otherwise it's lots of talk, but no progress.

I believe that if one is capable of working on something previously discussed on the list, using mantis would just allow us to record tasks and organize them. But if one is not committed to the project, it is not a task entry that will motivate him to do what he was assigned to do.

If the list were bloated with task assignments and things to do, I would vouch for using a task manager/tracking system. It would help to organize things (tasks) that already exist. Wouldn't using mantis be a solution to the wrong problem?


Thiago Silva

Craig Latta

Oct 9, 2006, 3:29:46 PM
to strongtal...@googlegroups.com

> I understand why you wouldn't want to write in C++. I don't like C++
> either, if I have a choice of languages. But a system programmer is
> someone who uses the tools he needs to, to get the job done.

Of course. I think Smalltalk can get the job done (e.g., Bryce or
Eliot's work), and you don't.

> ...while I like Smalltalk the most, I'm not dogmatic about it.

I'm not either. When I see something better, I'll use it.

> Yes, there have been a few experiments in writing VMs in languages
> like Smalltalk, but frankly, they are either slow (Squeak) or don't
> work (Klein).

You just seem to be saying that works in progress don't count. If
you were to actually criticize the designs that their authors are
pursuing, then I'd be able to take you seriously.

> Not a single VM written in Smalltalk, or Self, or other
> garbage-collecting language, has ever gotten close to the kind of
> performance achieved by lower-level languages.

And so their authors are attempting to do so. They haven't finished
yet. That doesn't mean they have failed.

> You may not know any VM hackers who could tolerate working in C++, but

> that simply shows you know a pretty small circle of VM hackers...

Sure, and that goes for you as well (or you'd have found your
collaborators by now, after n years). The real point, as you yourself
have already established here, is that the total pool of such people is
very small.

> since *every* VM *except* for Squeak has been written in a low-level
> language, including VisualWorks, Dolphin (I would bet), GNU, Python,
> Java, Ruby, JavaScript, Lisp, ad infinitum.

I was referring to Smalltalk VM hackers, sorry for the confusion.

> I agree that it would be *nice* if it was written in Smalltalk, and it
> would be nice if writing fast VMs in Smalltalk was a mature
> technology.

It sure would. So nice, in fact, that I think that's where effort
ought to go. (Hey, you actually acknowledged that "writing fast VMs in
Smalltalk" is something that anyone aspires to at all! :)

> But the burden of proof is on you to demonstrate that Smalltalk is a

> good system programming language...

While I am in fact working on making Smalltalk a better system
programming language, I have no obligation to prove anything to you.
*You* are the one asking for help. I've told you my own reservations,
for what it's worth.

> ...not on me to demonstrate that C++ is a good system programming
> language.

I never asked you to do so. But I have learned that you think for
something to be good it must execute as fast as possible, and we
disagree about that.

> In my experience, Smalltalk programmers have historically adopted the
> mindset that Smalltalk performance is fine, performance doesn't
> matter.

The rest of that paragraph is a straw-man argument which I find
ridiculous and insulting. Performance certainly does matter to me. I
also think there are more important things, for which you accuse me of
thinking performance doesn't matter at all. This is utterly specious.

> You are being handed the biggest advance in Smalltalk technology

> *ever*.

I'm sorry, but I disagree with that.

> You don't write the Linux kernel in Smalltalk, and you don't write the
> Java VM in Java. You suck it up and do what you have to do as a
> professional.

I have yet to hear any substance behind those assertions. It seems
to me that you're just using stronger language to bluster your way through.

> Sorry if I sound harsh...

No; frankly, you sound petulant.

> ...but the Smalltalk community needs to wake up.

We're not all asleep. :)

> If you don't believe me, ask Dan, who wrote the Squeak VM in
> Smalltalk, whether *he* thinks Strongtalk's technology is worth
> dealing with the C++.

Clearly Dan is at least as excited as you are about Strongtalk,
from what he's already said publicly, and clearly I disagree with him as
well. If I thought Dan had taken the Squeak VM as far as it can go, then
I might find your appeal to authority convincing.

I'm sorry for offending you, Dave (or anyone else on this list). I
wish you much success.


-C


Krzysztof Kowalczyk

Oct 9, 2006, 3:29:55 PM
to strongtal...@googlegroups.com
> On 10/9/06, John M McIntosh <jmmci...@gmail.com> wrote:
> > However really I think to make it work, we need to use say mantis (Our
> Sophie project has been using mantis for problem tracking and assignments
> with some success) to create a list of tasks and assemble people to take
> parts they are interested in to solve the issue of what to do as a group,
> versus a bunch of independent people poking at the problem.

I believe that already exists: http://code.google.com/p/strongtalk/issues/list

I guess more prominent link from http://strongtalk.org would be good
(it's there but buried in the text).

-- kjk

David Griswold

Oct 9, 2006, 3:32:50 PM
to strongtal...@googlegroups.com
How is mantis better suited for this than the issue tracker we have now?  What you suggest might help once VM engineering is really happening, but our problem right now is getting *any* experienced VM engineers involved.  There isn't any porting work or other major VM engineering work going on right now.
-Dave
-----Original Message-----
From: strongtal...@googlegroups.com [mailto:strongtal...@googlegroups.com]On Behalf Of John M McIntosh
Sent: Monday, October 09, 2006 11:13 AM
To: strongtal...@googlegroups.com
Subject: Re: Porting

Jecel Assumpcao Jr

Oct 9, 2006, 4:05:25 PM
to strongtal...@googlegroups.com
David,

> Sorry if I sound harsh, but the Smalltalk community needs to wake up. If
> you don't believe me, ask Dan, who wrote the Squeak VM in Smalltalk, whether
> *he* thinks Strongtalk's technology is worth dealing with the C++.

Dan has just rewritten the whole Squeak VM in Java, thus proving there
is no limit to how much pain he can take! :-)

But, seriously, your position is not harsh at all but very reasonable.
It just isn't universal.

Surely some Smalltalk code could spit out the exact same bits as some
given C++ code? In that case the compiled code would run just as
fast, right? So the only possible complaint I can imagine is that the
Smalltalk code is much slower than its C++ equivalent or somehow harder
to understand and modify. And yes, I am aware that I am comparing C++
code that exists with Smalltalk code that doesn't.

Note that I am very aware of the issues of low level programming. I love
Forth, for example, and so far have programmed more in C than Smalltalk
or Lisp (and I have written a lot of code in all of these over the past
25 years). Most of my time is spent developing hardware and when I am
typing hex numbers I consider myself to be doing high level programming
;-) I have written what was essentially the same Self interpreter in C
(when I only had access to PCs) and then in Self (when I had access to a
Sparc machine).

But I will agree with you that mere technical features of a tool are not
the most important aspects in determining a project's success. Lots of
other factors are involved (like previously existing code). Yet even in
this regard my previous experience makes me a bit wary about the
Strongtalk path: I watched three separate groups of interested
programmers attempt to port Self to x86 Linux (with a fourth group
picking up the pieces of the second effort and giving us something that
actually works, but isn't very usable). In this case the bulk of the
problem was changing the compilers, which given Apple's recent move
seems like it would no longer be needed for Strongtalk (x86 only is ok
these days).

-- Jecel

David Griswold

Oct 9, 2006, 4:45:09 PM
to strongtal...@googlegroups.com
Hi Craig,

Craig Latta wrote:
> [...]


> Sure, and that goes for you as well (or you'd have found your
> collaborators by now, after n years). The real point, as you yourself
> have already established here, is that the total pool of such people is
> very small.

It hasn't been n years, it has been 1 month.

> > since *every* VM *except* for Squeak has been written in a low-level
> > language, including VisualWorks, Dolphin (I would bet), GNU, Python,
> > Java, Ruby, JavaScript, Lisp, ad infinitum.
>
> I was referring to Smalltalk VM hackers, sorry for the confusion.

Even if you restrict it to Smalltalk, there are still more VMs written in
low-level languages than in Smalltalk. If you restrict it to fast VMs, it
is 100%.

> > I agree that it would be *nice* if it was written in Smalltalk, and it
> > would be nice if writing fast VMs in Smalltalk was a mature
> > technology.
>
> It sure would. So nice, in fact, that I think that's where effort
> ought to go. (Hey, you actually acknowledged that "writing fast VMs in
> Smalltalk" is something that anyone aspires to at all! :)
>
> > But the burden of proof is on you to demonstrate that Smalltalk is a
> > good system programming language...
>
> While I am in fact working on making Smalltalk a better system
> programming language, I have no obligation to prove anything to you.
> *You* are the one asking for help. I've told you my own reservations,
> for what it's worth.

I am not asking you to prove it to me. If you want the community to bet its
future on writing fast VMs in Smalltalk, it would be very wise to have some
positive evidence that it is even possible before discarding the
established, traditional, proven approach to writing VMs. There is only
negative evidence for Smalltalk as a system programming language so far: a
slow VM, and a non-working VM. Perhaps it can be done, and I hope so, but
the burden of proof is indeed on those who are asking that the community bet
its future on an unproven approach.

You want proof of the Strongtalk approach? Here it is: the Strongtalk VM
works right now and is > 10x Squeak speed. The Java VM is directly based on
the Strongtalk C++ design, and it is fast, reliable, multi-threaded, and is
being used right this minute by hundreds of thousands of people. It has
been ported to multiple operating systems and processor architectures, and
is being supported by multiple large companies. That's what proof looks
like.

It is not me who needs help. I am offering to help the Smalltalk community
adopt radically better technology that people have been waiting for for 20
years. I haven't been working on Smalltalk for many years; I am only
offering my free time to help people adopt this technology, which I don't
own, and don't even particularly want to work on, because I want to see the
Smalltalk community survive, despite itself. If the community doesn't step
up and Strongtalk sits here as a pile of bits, it's no skin off my back at
all.

> > You don't write the Linux kernel in Smalltalk, and you don't write the
> > Java VM in Java. You suck it up and do what you have to do as a
> > professional.
>
> I have yet to hear any substance behind those assertions. It seems
> to me that you're just using stronger language to bluster your
> way through.

You think there is no substance behind my assertion that the Linux kernel
and Java VM and all other fast OSes and VMs are written in low-level
languages? You honestly think that is bluster?

> > Sorry if I sound harsh...
>
> No; frankly, you sound petulant.

No, I'm just getting tired of people acting like I'm asking them to do *me*
a favor.

> > ...but the Smalltalk community needs to wake up.
>
> We're not all asleep. :)
>
> > If you don't believe me, ask Dan, who wrote the Squeak VM in
> > Smalltalk, whether *he* thinks Strongtalk's technology is worth
> > dealing with the C++.
>
> Clearly Dan is at least as excited as you are about Strongtalk,
> from what he's already said publicly, and clearly I disagree with him as
> well. If I thought Dan had taken the Squeak VM as far as it can go, then
> I might find your appeal to authority convincing.

At any rate, let's try not to get too heated. We all want Smalltalk to get
better. I'm sure we all think that both Strongtalk and AOStA and Exupery
and Spoon are cool. I just want the community to understand that if *they*
don't do something about it, they are not going to get a finished, ported
Strongtalk VM. Clearly, that doesn't bother you. But it does bother plenty
of other people.

-Dave


Colin Putney

Oct 9, 2006, 5:19:43 PM
to strongtal...@googlegroups.com

On Oct 9, 2006, at 1:45 PM, David Griswold wrote:

> At any rate, let's try not to get too heated. We all want
> Smalltalk to get
> better. I'm sure we all think that both Strongtalk and AOStA and
> Exupery
> and Spoon are cool. I just want the community to understand that if
> *they*
> don't do something about it, they are not going to get a finished,
> ported
> Strongtalk VM.

David,

In this case, I think you're right. If you want to write a high
performance VM today, a low level language is a good choice -
probably the lower the better, if all you care about is extracting
performance from the machine.

However, it's worth noting a few things:

1. Squeak is slow because it's a pure interpreter, not because it's
written in Smalltalk.

2. Squeak *isn't* written in Smalltalk, it's written in Slang. Slang
is a language that can be compiled to both Smalltalk bytecode and C.
It doesn't rely on garbage collection, message sends or even objects,
so it's essentially a low-level language. The performance
characteristics of the Squeak VM are effectively those of a
reasonably-well optimized C program.

It may very well be possible to write a high performance VM in
Smalltalk, but as far as I know, nobody has done it. Attempting to do
so would be a laudable research project, but probably not a good use
of time for those interested in making the technology in Strongtalk
available for production use.

Colin

Craig Latta

Oct 9, 2006, 5:25:40 PM
to strongtal...@googlegroups.com

> Even if you restrict it to Smalltalk, there are still more VMs written
> in low-level languages than in Smalltalk. If you restrict it to fast
> VMs, it is 100%.

Again, you're just saying that only finished work matters, and the
design merits of works in progress aren't worth discussing. I think
that's silly.

> I am not asking you to prove it to me. If you want the community to
> bet its future to writing fast VMs in Smalltalk, it would be very wise

> to have some positive evidence that it is even possible...

I think Bryce's measurements so far are very encouraging. Sure,
they don't indicate that he's the fastest thing around yet, but I think
they serve as ample "positive evidence that it is even possible". I
haven't heard numbers from Eliot yet, but I suspect he has some.

And if Java and Ruby have shown us anything, it's that you can be
huge in the marketplace without even being as fast as Squeak!

> There is only negative evidence for Smalltalk as a system programming
> language so far: a slow VM, and a non-working VM.

I disagree with you (see above).

> You want proof of the Strongtalk approach?

Of course not; you gave that to us ten years ago.

> You think there is no substance behind my assertion that the Linux
> kernel and Java VM and all other fast OSes and VMs are written in
> low-level languages?

You're asserting more than that. You're saying that's the only way
it can be done.

> You honestly think that is bluster?

I do indeed.

> We all want Smalltalk to get better. I'm sure we all think that both
> Strongtalk and AOStA and Exupery and Spoon are cool.

Whoa, hold on there; Spoon is not a competing VM, although it does
require certain features from a VM. Spoon will work with any of those
the Smalltalk VMs you've mentioned. It's about bringing a better
organization to the object memory.

> I just want the community to understand that if *they* don't do
> something about it, they are not going to get a finished, ported
> Strongtalk VM.

I never disputed that, Dave.

> Clearly, that doesn't bother you.

Aha, so now we're getting to the real thing that gets you all
worked up: I'm not bothered enough about Strongtalk's imperiled
vitality. Look, I think having Strongtalk finished would be very good,
so that its ideas are more accessible for study and reuse. I also think
there are better ways forward from where we are. It is possible to hold
those views at the same time. :)


-C


David Griswold

Oct 9, 2006, 6:36:14 PM
to strongtal...@googlegroups.com
Craig,

Craig Latta wrote:
> > Even if you restrict it to Smalltalk, there are still more VMs written
> > in low-level languages than in Smalltalk. If you restrict it to fast
> > VMs, it is 100%.
>
> Again, you're just saying that only finished work matters, and the
> design merits of works in progress aren't worth discussing. I think
> that's silly.

I'm not saying only finished work matters, I'm saying that only finished
work is *proof* that an approach works.

> [...]


> > You think there is no substance behind my assertion that the Linux
> > kernel and Java VM and all other fast OSes and VMs are written in
> > low-level languages?
>
> You're asserting more than that. You're saying that's the only way
> it can be done.

I've never said it is the only way it can be done. I am saying it is the
only way that it *has* ever been done, and anyone who thinks there isn't major risk
in a radical approach like writing the compiler and VM in Smalltalk is just
plain wrong. It is a great experiment, that I hope succeeds, but it is not
something you want to bet everything on.

> > You honestly think that is bluster?
>
> I do indeed.

Let me be more clear to you about the risk you are facing, since you think
this is just bluster. I managed development of the Java VM. We tested vast
amounts of code on that VM, which is based on the Strongtalk VM, so we have
*far* more experience with this kind of compilation technology than anyone
else in the world, period.

We learned some very hard lessons that weren't apparent up front. The
biggest one is that compilation speed is very, very important, and that when
you put fancy code generation into the mix, specifically high quality
register allocation, it interacts very badly with the extensive inlining
that is *required* to speed up languages like Smalltalk by a significant
amount.

Good register allocators are highly non-linear in time. As the size of the
compiled method increases, the compilation speed slows down much worse than
linearly. When you start to do extensive inlining, compiled method sizes
can grow from a few dozen to thousands of bytecodes. Very, very bad things
can happen when you run good register allocators on such giant methods. We
put incredible amounts of time into addressing this issue in Java. It was
so hard to solve that eventually it required writing an entirely new
compiler without such a good code generator, and forking off a separate VM
and a group to work on it, for use in Java clients. That is the reason
there are separate Client and Server Java VMs. Bryce thinks that background
compilation will solve this problem. We did that, and it wasn't enough, and
the compiler was highly-tuned C++.

Now, take that compiler that wasn't fast enough and write it in Smalltalk
instead of C++, and you try to tell me there isn't a big risk, one that you
won't have a handle on until almost the end of the project. I'm not saying
it won't work, I'm not saying I don't want it to work, I'm saying it is
crazy to bet everything on it. Maybe Smalltalk won't need a fast startup
time. Maybe Bryce will be able to store the compiled code between sessions,
and that fast, dynamic network distribution of bytecodes with short running
times won't be the norm in the future (sound like Spoon at all?). Maybe.
You want to bet everything on that?

With Strongtalk, there is only the pain of coding in C++. There are *not*
these kinds of big risks. That is why I don't think "discussing the design
merits of works in progress" is sufficient to be confident that a technology
is going to work.

> > We all want Smalltalk to get better. I'm sure we all think that both
> > Strongtalk and AOStA and Exupery and Spoon are cool.
>
> Whoa, hold on there; Spoon is not a competing VM, although it does
> require certain features from a VM. Spoon will work with any of those
> the Smalltalk VMs you've mentioned. It's about bringing a better
> organization to the object memory.
>
> > I just want the community to understand that if *they* don't do
> > something about it, they are not going to get a finished, ported
> > Strongtalk VM.
>
> I never disputed that, Dave.
>
> > Clearly, that doesn't bother you.
>
> Aha, so now we're getting to the real thing that gets you all
> worked up: I'm not bothered enough about Strongtalk's imperiled
> vitality. Look, I think having Strongtalk finished would be very good,
> so that its ideas are more accessible for study and reuse. I also think
> there are better ways forward from where we are. It is possible to hold
> those views at the same time. :)

It may be possible to hold those views at the same time, but that's not the
same as having the resources to work on both approaches at the same time.
If you aren't bothered enough about Strongtalk's imperiled vitality, fine by
me. But if everyone who could work on this VM feels like you do, then it is
the whole Smalltalk community that could end up suffering for it. *That* is
what gets me worked up. I offer no apologies for that.

-Dave


tim Rowledge

Oct 9, 2006, 7:31:56 PM
to strongtal...@googlegroups.com
>
Dave said -

> Now, take that compiler that wasn't fast enough and write it in
> Smalltalk
> instead of C++, and you try to tell me there isn't a big risk, one
> that you
> won't have a handle on until almost the end of the project. I'm
> not saying
> it won't work, I'm not saying I don't want it to work, I'm saying
> it is
> crazy to bet everything on it.

Dave, I think you've grabbed the wrong end of the stick here. Craig
is, I think, arguing that writing a VM in Smalltalk-ish (ie Slang in
the case of squeak) and being able to run that version in simulation
is very useful. I'd be surprised if you didn't agree with that part;
from my own experience debugging a vm in simulation is hugely easier
than debugging one any other way. The bit I think you must have
missed is the subsequent step of converting that Slang to C/C++/
assembler/whatever. So far as I know, no one has suggested actually
trying to implement the vm in Smalltalk per se. You appear to be
merging the vm and the native-compiler into a single concept and I
don't think that is a given.

Now, reading again I realise that possibly I've got the wrong end of
your stick in turn. So, if you're not talking about the actual VM
implementation then forget the above.


> Maybe Smalltalk won't need a fast startup
> time.

So far as I understand it, the approach being used for Exupery is
that the image starts up as normal, running interpreted. Thus the
startup time is whatever it is now. It's pretty fast unless someone
adds nastiness to the startup sequence. How does a compiler improve
that? Surely there isn't time to get much in the way of smart
compiling done during that short period, no matter how fast your
compiler is? In fact I'd imagine that there is no point at all in
even firing it up during that phase since the code is only going to
run once (ish) and then the image moves on to more regular things.

If I were in a situation where startup time was important - say using
the system to implement shortlived server applets like a lot of java
usage - then I'd anticipate wanting to load pre-compiled methods, not
trying to compile them on the fly.


tim
--
tim Rowledge; t...@rowledge.org; http://www.rowledge.org/tim
Fractured Idiom:- J'Y SUIS, J'Y PESTES - I can stay for the weekend


danielv

Oct 9, 2006, 8:14:26 PM
to Strongtalk-general
David Griswold wrote:
> It may be possible to hold those views at the same time, but that's not the
> same as having the resources to work on both approaches at the same time.
> If you aren't bothered enough about Strongtalk's imperiled vitality, fine by
> me. But if everyone who could work on this VM feels like you do, then it is
> the whole Smalltalk community that could end up suffering for it. *That* is
> what gets me worked up. I offer no apologies for that.
The pains and pleasures of the open source world... we have unbounded
resources, to the extent that enough people are interested enough in
the problems available. But there is no sure way to make people
interested in any particular problem. Don't worry about it too much, it
doesn't help :-)

However, there are multiple ways people working on their own chosen
problems can have significant benefit from the Strongtalk work. In
particular for those working on their own implementations:

> We learned some very hard lessons that weren't apparent up front.

[Interaction between compilation speed, register allocation, and
extensive inlining]


> Now, take that compiler that wasn't fast enough and write it in Smalltalk
> instead of C++, and you try to tell me there isn't a big risk, one that you
> won't have a handle on until almost the end of the project.

Lessons like this can enable Bryce and friends to use the right
benchmarks up front, and then make up their own minds on the
speed/language tradeoffs with all the information up front. Some may
even decide C++ is not that bad ;-) Maybe mapping out the hardest
scenarios in detail is worthwhile.

Daniel Vainsencher

David Griswold

Oct 9, 2006, 9:03:06 PM
to strongtal...@googlegroups.com
Hi Tim,

It was my understanding that Exupery is written in full Smalltalk, not
Slang, in which case it isn't getting translated to C. If I am wrong and it
is in Slang, then what you say would be true, but it would also then be true
that it would not be object-oriented, since Slang is not really OO (nor does
it have GC, as far as I understand), whereas the Strongtalk VM is highly OO,
even though it is C++.

As for running it in simulation, that is indeed really nice, for seeing what
the compiler is doing. But plenty of the debugging difficulty is debugging
crashes in the code *generated* by the compiler, which has nothing to do
with what language the compiler is written in.

>
> > Maybe Smalltalk won't need a fast startup
> > time.
>
> So far as I understand it, the approach being used for Exupery is
> that the image starts up as normal, running interpreted. Thus the
> startup time is whatever it is now. It's pretty fast unless someone
> adds nastiness to the startup sequence. How does a compiler improve
> that? Surely there isn't time to get much in the way of smart
> compiling done during that short period, no matter how fast your
> compiler is? In fact I'd imagine that there is no point at all in
> even firing it up during that phase since the code is only going to
> run once (ish) and then the image moves on to more regular things.

That is basically the same invocation counter approach that Strongtalk uses.
But if the compiler is very, very fast, then it could indeed help even
during startup, depending on what kind of code is being run. If there is
even one inner loop that reads in a file, or iterates over a bunch of
strings, or anything like that, it can run many tens of thousands of times
(if not hundreds of thousands or millions) in the first second, and so is
worth optimizing. Part of how the right invocation counter limit (the
one that triggers compilation) is determined is by trying to make sure
that the amount of time spent compiling is not large compared to the
time spent interpreting that code up to the point where it is compiled,
so that compilation time is always small relative to the time spent
running. As the compiler becomes slower, you have to make the
invocation counters higher and higher to compensate.
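
To make that concrete, the mechanism is roughly the following (a
simplified sketch with made-up names, not the actual Strongtalk code):

    // Each method carries a counter; when it crosses a limit, compile it.
    struct Method {
        int  invocationCount;
        bool isCompiled;
    };

    void compile(Method* m);   // the (re)compiler entry point, elsewhere

    // The limit is tuned so that the expected compile time stays small
    // relative to the time already spent interpreting the method; a
    // slower compiler forces a higher limit.
    const int InvocationLimit = 10000;

    void interpreterEntry(Method* m) {
        if (!m->isCompiled && ++m->invocationCount >= InvocationLimit) {
            compile(m);
            m->isCompiled = true;
        }
        // ... run the method, interpreted or via its compiled code ...
    }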

For example, even during startup you would need to bitblt things to the
screen. That has an inner loop that runs hundreds of thousands if not
millions of times even in the first second. If you wanted to write all
those sorts of things in Smalltalk (which everyone agrees is a good idea), I
bet your startup would be a lot slower than it is now. If you have a very
fast compiler, it would get immediately compiled, and you can write all that
kind of code in pure Smalltalk rather than in C/Slang, which is what you say
the goal is.

The next issue is that the compiler itself is going to be interpreted, until
you have pre-compiled methods, which might be quite a way off. So the
compiler itself needs to be compiled, and that is not a small body of code.
So for a while after the system starts up, most of your runtime is being
sucked up by the fact that not only is your program running interpreted for
longer because of the bigger invocation counters, but the compiler is
running *really* slowly at the same time, slowly coming up to speed, before
either the compiler or your program is able to run fast. These all compound
each other. Conversely, if the compiler is very fast and doesn't have to be
compiled, you get compounded benefits: no time is spent compiling the
compiler, *and* you can run with much lower invocation counters, so your
code spends much less time running interpreted before it is compiled.

> If I were in a situation where startup time was important - say using
> the system to implement shortlived server applets like a lot of java
> usage - then I'd anticipate wanting to load pre-compiled methods, not
> trying to compile them on the fly.

If you can eventually store the generated code and load and run it without
compilation, that would help a lot. It would only work for local code, not
code coming over the net, though.

Unfortunately that won't be a panacea, because once you start doing
type-feedback, you can't just count on storing code and then not running the
compiler, because all it takes is a slight deviation in the execution path
(something as small as moving the mouse differently, for example) to cause
an uncommon trap because the inlined compiled code is then invalid, and must
be *immediately* (not in the background) deoptimized, which would then
require recompilation, starting the compiler going despite all the
pre-compiled code, although the recompilation could be done in the
background. So you can't simply turn the compiler off regardless of whether
you can precompile code, unless it is ok if critical methods revert to being
interpreted for the rest of the session.
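
Schematically, the optimized code guards its inlined assumptions with a
check along these lines (an illustrative C++ rendering of the idea, not
what the compiler actually emits):

    struct Klass {};
    struct Object { Klass* klass; };

    void uncommonTrap();            // deoptimize this activation *now*

    // Hypothetical shape of a method specialized by type feedback.
    static Klass* expectedKlass;    // the receiver type seen while profiling

    void compiledMethodBody(Object* receiver) {
        if (receiver->klass != expectedKlass) {
            // Assumption violated: the inlined fast path is invalid, so we
            // must fall back to the interpreter immediately; recompilation
            // can then happen in the background.
            uncommonTrap();
            return;
        }
        // ... inlined fast path specialized for expectedKlass ...
    }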

Strongtalk contains an experimental feature that can be turned on to
minimize total runtime, at the expense of slightly slower startup, by
storing an "inlining database" of what methods needed optimization in a
training run and how they are inlined, and when the system starts up, those
methods are optimized the *first* time they are called, so that they never
run interpreted at all, which makes the total runtime shorter, but the
startup pauses bigger. It was a first step towards storing precompiled
code.

-Dave


tim Rowledge

Oct 9, 2006, 9:28:49 PM
to strongtal...@googlegroups.com

On 9-Oct-06, at 6:03 PM, David Griswold wrote:


>
> It was my understanding that Exupery is written in full Smalltalk, not
> Slang, in which case it isn't getting translated to C.

That is correct; Exupery is written in plain old Smalltalk (modulo
the potential benefits of the odd prim or two, I imagine), but Exupery
has nothing to do with the VM implementation. I fear we may not be
mapping the same mental construct to words like 'compiler' which
would explain a lot of the confusion I see here.

Forgive me if I repeat the basics here just so I can feel sure I've
said it -
squeak vm is written in slang, a rather tacky pidgin of C implemented
as smalltalk.
It can run as a simulation of the vm operation.
It can be translated (actually 'transliterated' would be more correct
since it is very simple minded) to C code and compiled to make the
usual executable vm. So it runs exactly as well or as poorly as a vm
written to the same design in handwritten C.
Exupery is a background process in Squeak that takes chosen methods
and compiles them to machinecode and installs them whenever it gets
around to it. So, yes, exupery will undoubtedly take longer to
convert a bit of smalltalk to machine code, even when the bulk of the
exupery code has been transmogrified.


>
> As for running it in simulation, that is indeed really nice, for
> seeing what
> the compiler is doing. But plenty of the debugging difficulty is
> debugging
> crashes in the the code *generated* by the compiler, which has
> nothing to do
> with what language the compiler is written in.

Now this paragraph is what makes me feel like we're whizzing past
each other because it reads as if you are conflating the vm with
exupery. I guess that since you wrote your compiler in C++ and it is
presumably an integral part of the vm, that even makes sense.

And I most certainly agree that once you are dealing with code that
has been translated to machine code you are in a whole new world of
debugging pain. Believe me, been there, done that. Try developing and
debugging a VW code generator on a machine with *no* interactive
debugger :-( ouch. Of course, you could write a simulation of the cpu/
etc and run it alongside the vm simulation... perhaps not.


>
>>
>>> Maybe Smalltalk won't need a fast startup
>>> time.
>>

[snip]


>
> That is basically the same invocation counter approach that
> Strongtalk uses.
> But if the compiler is very, very fast, then it could indeed help even
> during startup, depending on what kind of code is being run.

Indeed. An infinitely fast compiler would obviously be a benefit
during startup and I take your point that there are repetitive
actions that can benefit in practise.

I don't have any particular agenda here, simply a desire to reduce
the number of parties disagreeing about things they aren't actually
talking about.

Strange OpCodes: SFA: Seek Financial Assistance


David Griswold

Oct 9, 2006, 10:25:24 PM
to strongtal...@googlegroups.com
Hi Tim,

Yes, I know that the VM itself is written in Slang. My point has been that
the speed of the compiler itself is critical, and if the compiler is written
in Smalltalk, Slang isn't going to help you, and you pay three kinds of
penalty: 1) it starts interpreted, so it runs slowly at first 2) you have to
compile it, which takes cycles that could be used on something else, and 3)
even once it is compiled, it is still much slower than C++.

I don't quite get where we are whizzing past each other. I haven't been
talking about the speed of the VM per se, only potential engineering risks
in getting a compiler written in Smalltalk to work quickly and transparently
like it should. If you want to talk about the VM speed itself, of course
Slang *could* reach similar speed to the Strongtalk VM, but it wouldn't be
OO, it would basically be C, so it would be structured much more poorly than
the Strongtalk VM, which would cancel much of the benefit of debugging it in
Smalltalk.

And in practice, independent of the compiler technology, it would take a
major redesign of Squeak anyway to get close to Strongtalk speed, because
there are several other significant speed advantages built into the
Strongtalk VM: 1) use of a 0 tag bit for smis, which is much faster since
we don't need to tag/detag when doing addition, etc.; 2) a much faster
garbage collector, using the two-instruction Hoelzle card-marking
write barrier rather than remembered sets; 3) a highly tuned interpreter,
at least 3 times faster than Squeak's.
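
To give a feel for the first two points, here is a rough sketch
(illustrative only, with made-up names, and glossing over overflow
checks and the exact card size):

    #include <cstdint>
    typedef intptr_t oop;       // tagged object pointer / small integer

    // (1) A 0 tag bit for smis: the integer N is represented as N << 1,
    // so adding two tagged smis needs no untagging and the result is
    // already correctly tagged. With a 1 tag bit you would pay an extra
    // adjustment on every addition.
    inline oop smiAdd(oop a, oop b) { return a + b; }

    // (2) Hoelzle-style card-marking write barrier: a store into the heap
    // costs just a shift and a byte store to mark the card dirty.
    static char* cardTable;                      // one byte per card
    const int cardShift = 9;                     // e.g. 512-byte cards
    inline void storeWithBarrier(oop* slot, oop value) {
        *slot = value;
        cardTable[(uintptr_t)slot >> cardShift] = 0;
    }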

-Dave

> -----Original Message-----
> From: strongtal...@googlegroups.com
> [mailto:strongtal...@googlegroups.com]On Behalf Of tim Rowledge
> Sent: Monday, October 09, 2006 6:29 PM
> To: strongtal...@googlegroups.com
> Subject: Re: Porting
>
>
>
>

David Griswold

Oct 9, 2006, 11:56:26 PM
to strongtal...@googlegroups.com
Hi all,

I get the feeling that my proposal to work on a common VM is being taken as
just a version of the standard refrain of "drop whatever you are doing, and do
things my way, because that way I win". I want to see if I can make clear
that that really, really isn't what is going on here.

I understand very much that moving to a common VM could cause a lot of work
by other people to get discarded or reworked. I understand that there is
naturally resistance to that. But I want you to understand that I would be
sharing that pain.

If indeed we moved to a common VM, it would probably involve trashing
*every* line of code that I personally wrote on the system, since my
personal coding contribution was above the VM in Smalltalk (the basic
libraries, UI framework, and browser), as well as losing all the other cool
things the team did above the VM line, like the type system, and development
environment. I happen to think most of those things are innovative and
better than what is in other systems, and that they should be kept alive.
But I am willing to toss all of that in the trash, if necessary, if that is
the price for bringing the Smalltalk community together on a better open
source VM.

I am doing this because the Strongtalk VM really, really is so much better
than any other existing Smalltalk VM that I believe that pain is worth it,
for all of us. I know that it is easy to dismiss it as hype. But the
people who know the most about type-feedback and the history behind this VM
will tell you: this is the real thing, a real breakthrough in Smalltalk VM
design. It isn't just a little bit faster, it is way, way faster. And that
is with the current, simple compiler. There is a second generation compiler
sitting inside the source code that is considerably better. It isn't done
yet, but it is completed enough that it was just starting to run small micro
benchmarks when we stopped working on the system. So we are probably
talking nearly 20x faster than Squeak, with the better compiler.

But once again, if you don't think performance is that important, then of
course this wouldn't be worth it.

As for the pain of giving up work on other VMs/compilers etc- just think
about this: if you are a hero on some other VM or compiler, this is an
opportunity to jump over and be the hero on a vastly better VM- there is no
one else here to take the glory away from you; there is no one's shadow to
be in. There is a better compiler to finish, experiments to run, papers to
publish, and benchmarks to kick-butt on. You can come and make Strongtalk
your own- I'll go away and leave it all to you folks if it gets to that.

There is pain and tough decisions anytime a big advance is made. But that
doesn't mean the advance should be fought or avoided. We would all be
making sacrifices together.

Enough said. I will be out of the country for the next 6 weeks, and will
probably respond to email less frequently, although I will try to stay on
top of things.

Cheers,
-Dave


David Griswold

unread,
Oct 10, 2006, 1:07:47 AM10/10/06
to strongtal...@googlegroups.com
One more small thing I forgot to mention, as an incentive for learning and
working on the Strongtalk VM: it is the direct ancestor of the Java VM, the
most widely used VM in the world.

So, being an expert on it would be *excellent* experience to have on your
resume if you wanted a well-paying VM *job* at Sun or some other company
that supports it. The Java VM is expected to go open-source sometime in the
not-too distant future, but the Strongtalk VM is the best way right now to
learn how such a VM works, and I doubt the license on the Java VM will be as
unrestricted as the Strongtalk license. There are plenty of differences,
but the VM infrastructure, garbage collector, etc are basically the same.

I know many of you don't like Java much (me either), but VM jobs are hard to
find, and are about the best thing you could have on a resume.
-Dave


Alejandro F. Reimondo

unread,
Oct 10, 2006, 9:03:19 AM10/10/06
to strongtal...@googlegroups.com
Hi David,

> whereas the Strongtalk VM is highly OO,
> even though it is C++.

Have you done any experiments changing VM parts while running?
I mean, e.g. changing the GC from a small&fast mark&sweep GC
for tiny images to a generational GC when the number of
objects grows...
Or changing the primitive set while running, according to method
requirements?

I am interested in reading experiences about dynamical changes
in the VM governed by living objects..
Any pointers?

cheers,
Ale.

eliot....@gmail.com

unread,
Oct 10, 2006, 1:08:57 PM10/10/06
to Strongtalk-general

Craig Latta wrote:
> > I understand why you wouldn't want to write in C++. I don't like C++
> > either, if I have a choice of languages. But a system programmer is
> > someone who uses the tools he needs to, to get the job done.
>
> Of course. I think Smalltalk can get the job done (e.g., Bryce or
> Eliot's work), and you don't.

Except that in my architecture only the high-level optimizer and
deoptimizer are in Smalltalk. There is stll a C VM that generates
processor-specific machine code underneath it. I never figured out how
to do without the platform debugger (adb, dbx, gdb et al) when
bootstrapping.

But all this is beside the point. To make either Strongtalk or AOStA
or anything else a reality requires work. And understanding the
compilation technology is not easy.

I asked for collaborators on AOStA in 2002 and only got one taker. I
don't think the issue is C++ at all. But I'm not interested in arguing
either. What's needed is work.

Jecel Assumpcao Jr

unread,
Oct 10, 2006, 4:22:02 PM10/10/06
to strongtal...@googlegroups.com
David Griswold wrote on Mon, 9 Oct 2006 20:56:26 -0700:

> I get the feeling that my proposal to work on a common VM is being taken as
> just version of the standard refrain of "drop whatever you are doing, and do
> things my way, because that way I win". I want to see if I can make clear
> that that really, really isn't what is going on here.

At least I didn't take it that way at all. But I fear my own posts might
have come across as aggressive, or at least as too negative. So let me
try to clear some things up:

I am looking forward to Strongtalk's success and would very much like to
see

1) an easier entry point for people wanting to help (done! Thanks Brian
for your efforts with Doxygen and David for all the documentation you
have done!)
2) a Linux port
3) enough of Squeak running on Strongtalk that the OLPC version of eToys
can use it

Now I am not going to drop my own project and do any of this work
myself, just like I didn't stop my project when the very similar OLPC
evolved to the point where people could participate. But I have
dedicated some of my time to OLPC (given talks about it at local
universities, helped man their booth at FISL) and hope that I am helping
by giving advice. I would like to do the same here (or I wouldn't be on
this list).

Tim thought that David was confused about a VM-in-Smalltalk proposal. It
is true that no Squeakers have suggested this or are working on
something like this but I am (since 1991) and was discussing this with
David.

Andreas Wacknitz interpreted the position that Craig and I have
expressed as a dislike for C++. At least in my case it would be better
to say we all agree on using the best tool for each job, but I feel that
C++ has short-term advantages which are eclipsed by its long-term
problems: it is the best tool for the job of a good Smalltalk by the
end of this year, but the wrong tool for the job of a good Smalltalk by
the end of next year.

-- Jecel

Avi Bryant

unread,
Oct 10, 2006, 5:34:07 PM10/10/06
to strongtal...@googlegroups.com

On Oct 10, 2006, at 1:22 PM, Jecel Assumpcao Jr wrote:
>
> I am looking forward to Strontalk's success and would very much
> like to
> see
>
> 1) an easier entry point for people wanting to help (done! Thanks
> Brian
> for your efforts with Doxygen and David for all the documentation you
> have done!)
> 2) a Linux port

I think that (1) and (2) here are substantially the same - that is, a
unix port is necessary as an easier entry point for people wanting to
help. This may just be personal bias, but I think that hackers in
general and Smalltalk hackers in particular are heavily skewed
towards being Mac OS X and Linux users; I also think the group with
the most interest in having a high performance VM is developers of
server applications, which these days seem to run mostly on Linux,
FreeBSD, or Solaris. If the goal is to get as many people as
possible to pitch in, I would make a unix port absolutely the number
one priority.

My $0.02,
Avi

rjohns...@gmail.com

unread,
Oct 11, 2006, 7:53:43 AM10/11/06
to Strongtalk-general
I think this is my first post here. I am going to try to act like a
translator.

Dave Griswold is speaking like an experienced project manager. He is
talking about risk, about what is known for sure and what is not. The
rest of you are speaking like developers. You are talking about what
is possible, what is ideal, and how much it will cost you to implement
it. Most of you are trying to figure out the best way to do something.
You are idealistic. Dave is trying to get something done. He is
being practical.

I'm an idealist. I sympathise with all you developers. Just getting a
program working isn't enough, I want to be proud of it and to be able
to show it off to other people and have them admire it.

I'm also practical. I decided about ten years ago that I wasn't going
to program in C++ any more. When I program in Smalltalk, I finish
things. When I programmed in C++, I didn't. I'm a professor, and
don't have a lot of time for programming any more. I'm getting old,
and don't have the tolerance for pain that I used to have. So,
Smalltalk is much better for me than C++. I haven't even done any work
on Exupery, which is nearly ideal for me. So, the odds of me working
on Strongtalk are close to zero, though I might be able to convince a
grad student to work on it. There are a lot of young people around
here who like C++. And a few older people who still use it.

Nevertheless, Dave is right. Nobody has proven that fast VMs can be
done in nice languages. There is a lot of reason for hope, but if you
want to bet on a sure thing, bet that Smalltalk can be fast using
Strongtalk.

We all want Dave to succeed. Most of us are not willing to endure C++
to make it happen. But we should do what we can to help him. So, I am
going to propose a simple rule.

This is the Strongtalk-general mailing list. Nobody should use this
list to explain why they are not working on it. People should not bash
C++ here. People should not talk about how some other project is
better. You can certainly criticise particular aspects of the design,
but always in a spirit of trying to figure out how Strongtalk works and
making it better.

If you want to criticise the general project, or if you are feeling
guilty about not working on it and want to explain yourself, do it on
your blog or in another mailing list. Let's keep this one focused on
how to get Strongtalk out into the bigger world.

-Ralph Johnson.

br...@kampjes.demon.co.uk

unread,
Oct 11, 2006, 6:33:05 PM10/11/06
to strongtal...@googlegroups.com
David Griswold writes:
>
> Hi all,
>
> I get the feeling that my proposal to work on a common VM is being taken as
> just version of the standard refrain of "drop whatever you are doing, and do
> things my way, because that way I win". I want to see if I can make clear
> that that really, really isn't what is going on here.
>
> I understand very much that moving to a common VM could cause a lot of work
> by other people to get discarded or reworked. I understand that there is
> naturally resistance to that. But I want you to understand that I would be
> sharing that pain.

The big problem is that a move to a common, unfinished VM without an
established active development team is highly risky. There are
serious risks involved with adopting the Strongtalk VM now for
Squeak. My feeling is it's better to begin developing the Strongtalk
VM as part of a Strongtalk Smalltalk but to build bridges to the other
dialects.

Let a common VM emerge as projects begin to succeed. Don't try to
force standardization to get a VM finished.

From what has been said Strongtalk is at the other end of the
Smalltalk dialect spectrum. It uses native threads and stacks which
makes fast flexible interoperability with C much easier but also makes
green threads and continuations much harder to implement.

I'd suggest work towards porting Strongtalk to other operating systems
and porting Seaside to Strongtalk. Take the good bits of Squeak, not
all of it, and help us take the good bits of Strongtalk where possible.


From where I'm standing finishing Exupery is less risky and possibly
less work than porting Strongtalk. However a major reason for this
is I've been working on Exupery for a few years and thinking about the
problems involved. Shifting would involve swapping known familiar
problems for unknown problems.

Bryce

br...@kampjes.demon.co.uk

unread,
Oct 11, 2006, 7:21:53 PM10/11/06
to strongtal...@googlegroups.com
David Griswold writes:

> If you can eventually store the generated code and load and run it without
> compilation, that would help a lot. It would only work for local code, not
> code coming over the net, though.

Exupery is probably worse for untrusted code coming over the net
that is not run for long enough to justify heavy optimization.

If it's impossible to live without a faster compiler then it would be
possible to implement a second register allocator that was less
optimal but faster. I've absolutely no idea how well a multiple-compiler
system would work, or how easy it would be to tune. I tend to defer decisions
until there's empirical data after first making sure that they can be
solved. Register allocation should only be n log n average case. (1)

> Unfortunately that won't be a panacea, because once you start doing
> type-feedback, you can't just count on storing code and then not running the
> compiler, because all it takes is a slight deviation in the execution path
> (something as small as moving the mouse differently, for example) to cause
> an uncommon trap because the inlined compiled code is then invalid, and must
> be *immediately* (not in the background) deoptimized, which would then
> require recompilation, starting the compiler going despite all the
> pre-compiled code, although the recompilation could be done in the
> background. So you can't simply turn the compiler off regardless of whether
> you can precompile code, unless it is ok if critical methods revert to being
> interpreted for the rest of the session.

Exupery does not use uncommon traps. Decompilation is done in the
image by regular Smalltalk code not in the VM. Decompilation will only
be triggered by reflection (including use of the debugger) or code
changes.

Uncommon traps are a neat trick if you're compiling quickly. They can
be avoided by compiling enough to handle all cases. The worst that's
going to happen is native code falls back to executing sends via
PICs. Any optimization that uncommon traps enable can also be done
without them by duplicating code during compilation. So far code
duplication has not been necessary.

" a < b ifTrue: [...]"

Will get compiled and the conversion to and from a Smalltalk Boolean
removed. This is done without requiring either uncommon traps or code
duplication. If either a or b are of an unexpected type then the
native code will perform a send via a PIC. The re-entry code for the
method deals with converting the boolean back into control flow from
the returned object.

Bryce

(1) I wouldn't be surprised if Exupery's register allocator is much
worse than n log n at the moment. Compilation speed tuning is a task
to do before a 1.0. Profiling indicates that it may be 100 times worse
than it needs to be for moderate sized methods.

David Griswold

unread,
Oct 12, 2006, 3:40:28 PM10/12/06
to strongtal...@googlegroups.com
Hi Bryce,

Bryce wrote:
> [...]


>
> Uncommon traps are a neat trick if you're compiling quickly. They can
> be avoided by compiling enough to handle all cases. The worst that's
> going to happen is native code falls back to executing sends via
> PICs. Any optimization that uncommon traps enable can also be done
> without them by duplicating code during compilation. So far code
> duplication has not been necessary.
>
> " a < b ifTrue: [...]"
>
> Will get compiled and the conversion to and from a Smalltalk Boolean
> removed. This is done without requiring either uncommon traps or code
> duplication. If either a or b are of an unexpected type then the
> native code will perform a send via a PIC. The re-entry code for the
> method deals with converting the boolean back into control flow from
> the returned object.
>
> Bryce

If I correctly understand what you are suggesting doing, I'm not sure you
completely understand the point of uncommon traps. If you reenter the code
from the uncommon case, you are not going to be able to generate *nearly* as
good code. The point of an uncommon trap is *not* to avoid generating code
for the uncommon cases. It is so that all the code following the uncommon
trap can *assume* the inlined case was executed, and optimize accordingly.
If you fall back to a send, then the following code cannot assume that the
inlined case was taken, and that removes a *huge* amount of the information
that makes type-feedback work so well.

Example:

(myArray size > x) not ifTrue: [ ... ]
"Ignoring the fact that in practice here you would use <= or ifFalse: to
do this, and
assuming these messages are all treated as normal sends, just to keep
things simple"

If myArray has always been an Array, then the size method can be inlined,
which invokes a primitive that returns a SmallInteger. But if you generate
a send for the case where myArray isn't an Array, then you don't know for sure
that the return value is a smallint, so you have to do a type test on it
before you can execute an inlined version of SmallInt >. And then since you
generate a send for the case where > isn't the SmallInt method, you can't
assume that the receiver of ifTrue: is a boolean, so you have to do a type
test for that too, and so on cascading through the code.

If you generate an uncommon trap for the non-Array case, then the receiver
for > is guaranteed to be a SmallInt, so you don't even have to do a test
for that. And then, since you know that SmallInt > was executed, you know
that its result must be a boolean, so you can execute an inlined ifTrue:
without any type test on the receiver as well. So the resulting code is
incredibly fast, with all the methods inlined *and* most of the type tests
removed.
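
To make the contrast concrete, here is some purely illustrative pseudo-C++
for the two strategies. None of these names are real Strongtalk code; they
just stand in for what the compiler would emit:

    typedef void* oop;
    extern oop  myArray, x, trueObj;
    extern bool isArray(oop o), isSmallInteger(oop o), isBoolean(oop o);
    extern long inlinedArraySize(oop a);          // inlined Array size primitive
    extern long smallIntValue(oop o);
    extern oop  boxSmallInt(long v);
    extern oop  smiGreater(oop a, oop b), boolNot(oop b);
    extern oop  send(oop rcvr, const char* sel);  // full message send via a PIC
    extern oop  send(oop rcvr, const char* sel, oop arg);
    extern void uncommonTrap();                   // deoptimize, resume interpreted

    void withUncommonTrap() {
      // One test; everything after it may assume the inlined case was taken.
      if (!isArray(myArray) || !isSmallInteger(x)) uncommonTrap();
      long size = inlinedArraySize(myArray);      // known to be a SmallInteger
      bool flag = !(size > smallIntValue(x));     // SmallInteger> and #not inlined
      if (flag) { /* inlined ifTrue: block */ }
    }

    void withFallbackSend() {
      // Every later step has to re-test the types it depends on.
      oop size = isArray(myArray) ? boxSmallInt(inlinedArraySize(myArray))
                                  : send(myArray, "size");   // could return anything
      oop cmp  = isSmallInteger(size) ? smiGreater(size, x)  // so re-test before >
                                      : send(size, ">", x);  // again: anything
      oop flag = isBoolean(cmp) ? boolNot(cmp) : send(cmp, "not");
      if (flag == trueObj) { /* ifTrue: block, after yet another test */ }
    }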

The difference between the generated code with and without uncommon traps is
very large: without uncommon traps, you still have to do most of the
type-tests. This is even more important for optimization than the inlining
itself. When inlining gets aggressive, once you do a type-test on an
object, uncommon traps let you avoid ever doing a type-test on it again in
the inlined method, which can be a lot of subsequent code, and you can
always assume that the return value is whatever the inlined method returns.
This is not a small advantage.

Now, you could clone all the subsequent code after a type-test, with each
branch specialized to know which branch was taken. But that quickly grows
out of control since each subsequent type-test would at least double the
number of paths you would have to generate code for, so you would have to
generate truly vast amounts of code, which would be totally infeasible.

So uncommon traps are absolutely essential if you want to get close to
Strongtalk speed.

Cheers,
-Dave


David Griswold

unread,
Oct 12, 2006, 3:43:03 PM10/12/06
to strongtal...@googlegroups.com
No, I don't think even Squeak can do these things. The VM is constant while
it is running.
-Dave


tim Rowledge

unread,
Oct 12, 2006, 3:51:55 PM10/12/06
to strongtal...@googlegroups.com

On 12-Oct-06, at 12:43 PM, David Griswold wrote:

>
> No, I don't think even Squeak can do these things. The VM is
> constant while
> it is running.
> -Dave

Well, with the wiggle that the VM can load new plugins, unload old
ones, and override an internal plugin with an external one.

I've been wanting to do dynamically configurable VMs for a long time.
I think it was OOPSLA 88 where I spent an afternoon talking about it
with Schiffman for example.

Useful random insult:- His seat back is not in the full upright and
locked position.


David Griswold

unread,
Oct 12, 2006, 3:57:34 PM10/12/06
to strongtal...@googlegroups.com
Hi Ralph,

Thanks for your input. The one side comment I would make is that if I was
as practical as you make me sound, I wouldn't have set out to build a
type-system and a type-feedback VM for Smalltalk in the first place ;-)

Cheers,
-Dave

P.S. If you could convince a grad student to work on Strongtalk, that would
be highly wonderful. There is lots of interesting research to do and papers
to write about it, just waiting to be done.


Alejandro F. Reimondo

unread,
Oct 12, 2006, 5:20:23 PM10/12/06
to strongtal...@googlegroups.com
Hi Tim,

Thank you, Dave, for the comments.

> I've been wanting to do dynamically configurable vms for a longtime.
> I think it was OOPSLA 88 where I spent an afternoon talking about it
> with Schiffman for example.

Anyone involved in this kind of effort now?

cheers,
Ale.

tim Rowledge

unread,
Oct 12, 2006, 5:29:15 PM10/12/06
to strongtal...@googlegroups.com

On 12-Oct-06, at 2:20 PM, Alejandro F. Reimondo wrote:

>
>
> Anyone involved in this kind of efforts now?

Not that I've ever heard of. I'd be happy to take a look at the
problem; just paypal Euro 1m to my email address and we can get
started :-)

Abdicate (v.), to give up all hope of ever having a flat stomach.


br...@kampjes.demon.co.uk

unread,
Oct 12, 2006, 8:52:03 PM10/12/06
to strongtal...@googlegroups.com
Hi David,

David Griswold writes:
> If I correctly understand what you are suggesting doing, I'm not sure you
> completely understand the point of uncommon traps. If you reenter the code
> from the uncommon case, you are not going to be able to generate *nearly* as
> good code. The point of an uncommon trap is *not* to avoid generating code
> for the uncommon cases. It is so that all the code following the uncommon
> trap can *assume* the inlined case was executed, and optimize accordingly.
> If you fall-back to a send, then the following code cannot assume that the
> inlined case was taken, and that removes a *huge* amount of the information
> that make type-feedback work so well.
>
> Example:
>
> (myArray size > x) not ifTrue: [ ... ]
> "Ignoring the fact that in practice here you would use <= or ifFalse: to
> do this, and
> assuming these messages are all treated as normal sends, just to keep
> things simple"
>

I understand the value of removing the overhead; however, it can
definitely be done without needing uncommon traps. Exupery does it for
Boolean operations now, I haven't yet applied the technique to
integers or other types because my current benchmark suite doesn't
require it. (There are more significant issues to be dealt with
first).

The trick that Exupery uses is to move the deconversion so it is only
executed after the uncommon send. The common case part looks something
like this:

(ifTrue (deconvertBoolean (convertBoolean (not (deconvertBoolean (convertBoolean ...))))))

Each deconvert operation, which converts a Smalltalk Boolean into
control flow (the machine's PC), can jump to a normal send if the
argument is not a Boolean. Each convert takes the machine's PC and
converts it to a Smalltalk Boolean.

It is always safe to remove the deconvertBoolean then convertBoolean
pair for the common case. The trick is that if the initial deconvert
for myArray size or x is not a Boolean then when the send for >
returns it must enter the "removed" convertBoolean from the #not in
your example above. So, doing the optimization in Exupery involves
moving the deconvertBoolean to #not's uncommon send's return sequence
rather than removing it.

The instructions that are executed if the types are as expected are
the same. Exupery generates more instructions that hopefully should
never need to be executed; however, it doesn't drop to interpreted code
or need to compile immediately. If an uncommon send returns and the
deconvertBoolean receives a Boolean, then it'll jump back into fast
common-case code. If the uncommon send returns something other than a
Boolean, then the deconvertBoolean will jump to the next uncommon send.

There are a few cases where it may be necessary to duplicate a loop to
utilize empirical type information. For example:

[...] whileTrue: [anArray at: index]
or:
[...] whileTrue: [x := x + 1].

In these cases I'd rather be able to hoist the type-test on anArray
out of the loop. Because the redundancy is spread across statements it
doesn't appear to be possible to optimize without duplicating the
loop. Exupery doesn't optimize the loop cases yet and probably will
not until after Exupery get's SSA.

It's only really necessary to duplicate the loop (or method) once so
you've got one case with all possible checks removed and another which
can deal with any case. Like uncommon traps, you're gambling that the
types will be similar but code has been generated to deal with the
cases where this is not so.

Squeak's interpreter does a very limited form of boolean conversion
removal. It removes the case where a comparison is followed by a jump.
The comparison bytecode checks if the next bytecode is a conditional
jump. If it is, it executes it as part of the comparison bytecode.

Boolean conversion might not sound expensive but it's two jumps in a
row that could easily both mispredict together.


Now, a big advantage of being able to optimize away redundant type
conversion and having type feedback is it's possible to optimize both
floating point (and 32 bit integers) without hurting integer
performance. PICs inform the compiler that the method's using
floats. Then simple type check removal code can remove any unnecessary
intermediate object creation.

So:

x := a + b + c + d

would first type check a, b, c, and d. Then proceed to do all the
additions in registers then finally create an object to hold the
answer before storing it in x.
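
In rough illustrative C++ (the helper names are made up; this is not
actual Exupery code), the common case would be something like:

    typedef void* oop;
    extern bool asFloat(oop o, double* out);   // type check + unbox into a register
    extern oop  boxFloat(double v);            // allocate the single result object
    extern oop  slowPathSends(oop a, oop b, oop c, oop d);

    oop sumOfFour(oop a, oop b, oop c, oop d) {
      double av, bv, cv, dv;
      if (asFloat(a, &av) && asFloat(b, &bv) &&
          asFloat(c, &cv) && asFloat(d, &dv))
        return boxFloat(av + bv + cv + dv);    // intermediate sums stay in registers
      return slowPathSends(a, b, c, d);        // unexpected types: ordinary sends
    }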

Extending dynamic primitive inlining and redundant type conversion
removal to handle 32 bit integers and floating point numbers is very
tempting. It's not a lot of work as Exupery can already dynamically
inline primitives. Dynamic primitive inlining was a sensible way to
optimize Squeak's #at: especially as it is building towards full
method inlining.

Bryce

Jecel Assumpcao Jr

unread,
Oct 12, 2006, 11:12:39 PM10/12/06
to strongtal...@googlegroups.com
Ralph Johnson wrote:
> We all want Dave to succeed. Most of us are not willing to endure C++
> to make it happen. But we should do what we can to help him. So, I am
> going to propose a simple rule.
>
> This is the Strongtalk-general mailing list. Nobody should use this
> list to explain why they are not working on it. People should not bash
> C++ here. People should not talk about how some other project is
> better. You can certainly criticise particular aspects of the design,
> but always in a spirit of trying to figure out how Strongtalk works and
> making it better

Normally I would agree, but this list was proposed as the place for a
"Common Smalltalk VM Summit". At least for that particular thread
talking about other projects isn't entirely off topic. On other threads,
however, it might be rude - just like if you go to some university for a
conference and then you leave the conference room to poke into the
classrooms and labs.

What I had understood this summit to be about was:

1) how to take advantage of the open sourcing of Strongtalk

2) discuss a common layer between the VMs and the various Smalltalk
systems so they could share a VM (or even more than one VM) between them

-- Jecel

Michael Haupt

unread,
Oct 13, 2006, 7:04:27 AM10/13/06
to strongtal...@googlegroups.com
Hi Alejandro,

On 10/10/06, Alejandro F. Reimondo <aleRe...@smalltalking.net> wrote:
> I am interested in reading experiences about dynamical changes
> in the VM governed by living objects..
> Any pointers?

this is unrelated to Smalltalk, apologies for going off-topic.

There is a paper on dynamically switching GC strategies based on
application behaviour. The research was done in the Jikes RVM (for
Java).

S. Soman, C. Krintz, D. F. Bacon, Dynamic Selection of
Application-Specific Garbage Collectors. In ISMM'04: Proceedings of
the 4th International Symposium on Memory Management, pages 49-60. ACM
Press, 2004.

Hope this helps,

Michael

David Griswold

unread,
Oct 14, 2006, 3:24:43 PM10/14/06
to strongtal...@googlegroups.com
Hi Bryce,

> [...]

I must say I don't entirely understand how these techniques generalize.
Booleans are a special case. Uncommon traps work for arbitrary
types/classes.
What if you are expecting a String and you get an OrderedCollection instead?
There isn't any conversion or deconversion to be done, so
I don't understand how you could ever jump "back into" the common case
code.

If all that happens is that whenever a typecheck fails you jump
to a single alternate flow of control that does only sends and avoids
all inlining, then that sounds like the equivalent of an uncommon trap,
except that you lose the biggest advantage of deoptimization, which is
that compilation and inlining are completely transparent to the debugger
and all other kinds of context reflection.

How does Exupery compiled code work with the debugger (especially when you
start doing inlining)? How would the kind of reflection that Seaside does,
for example, work with compiled contexts?

>
> Now, a big advantage of being able to optimize away redundant type
> conversion and having type feedback is it's possible to optimize both
> floating point (and 32 bit integers) without hurting integer
> performance. PICs inform the compiler that the method's using
> floats. Then simple type check removal code can remove any unnecessary
> intermediate object creation.
>
> So:
>
> x := a + b + c + d
>
> would first type check a, b, c, and d. Then proceed to do all the
> additions in registers then finally create an object to hold the
> answer before storing it in x.
>
> Extending dynamic primitive inlining and redundant type conversion
> removal to handle 32 bit integers and floating point numbers is very
> tempting. It's not a lot of work as Exupery can already dynamically
> inline primitives. Dynamic primitive inlining was a sensible way to
> optimize Squeak's #at: especially as it is building towards full
> method inlining.

Strongtalk doesn't do any of that right now. The normal uncommon trap
mechanism, combined with our use of 0 tags for SmallIntegers, means that
SmallInteger arithmetic is very fast already, since addition for example
doesn't require any tag manipulation at all, so we don't really have to
worry about conversion/deconversion issues.

As for double precision floats, they could be done the way you are
suggesting, which would be nice, but right now they are not done that way.
Right now our "fast float" experiment uses special bytecodes that treat them
as a non-OO basic type (with explicit conversion to and from objects). But
that was just a quick experiment that Robert Griesemer did, and it is not
really part of the "language", although they work and are very fast.

Cheers,
Dave


br...@kampjes.demon.co.uk

unread,
Oct 15, 2006, 12:03:45 PM10/15/06
to strongtal...@googlegroups.com, exu...@lists.squeakfoundation.org

Hi David,

David Griswold writes:
> If all that happens is that whenever a typecheck fails you jump
> to a single alternate flow of control that does only sends and avoids
> all inlining, then that sounds like the equivalent of an uncommon trap,
> except that you lose the biggest advantage of deoptimization, which is
> that compilation and inlining are completely transparent to the debugger
> and all other kinds of context reflection.
>
> How does Exupery compiled code work with the debugger (especially when you
> start doing inlining)? How would the reflection that for example Seaside
> does, work with
> compiled contexts?

Exupery can decompile a context at any time; it's just done in the
image by using reflection. Contexts are just normal objects in Squeak,
and Exupery adds its own context classes. Exupery's contexts are
responsible for converting themselves into interpreted contexts when
necessary. (1)

If anything is done to an Exupery context that could invalidate the
assumptions of compiled code then the context should convert itself to
an interpreted context. So if the debugger tries to change a context it
should convert itself back to an interpreted context. The profiler
however should not cause contexts to convert themselves to interpreted
contexts.

It's the ExuperyContext's responsibility to convert itself into an
interpreted context when Seaside serializes it. Seaside could
serialize an ExuperyContext, but that could cause a crash if the
ExuperyContext converted itself into an interpreted context and
Seaside then recreated the stack with it in. ExuperyContexts have different
instance variables to interpreted contexts, so filling an interpreted
context's variables using instVarAt:put: after taking them from an
Exupery context with instVarAt: is dangerous. If Seaside calls a
context to save itself rather than using instVarAt: to get the state
out then making it work with Exupery is easy. An Exupery context could
also convert itself if instVarAt: is called on it.

ExuperyBlockContext>>convertToInterpretedContext will convert an
Exupery block into an interpreted block. It re-arranges the instance
variables then calls primitiveChangeClassTo: to change the class.

Exupery manages its code cache purely in the image. It needs to
convert all Exupery contexts back to interpreted contexts before
flushing the code cache or saving the image. Otherwise it may try
to execute compiled code that's not there.

Uncommon traps are definitely a very nice optimization for a fast
compiling compiler. By removing the need to produce the code they
speed up compilation. However uncommon traps do make it harder for a
slow compiler that tries to produce very fast code.

> Strongtalk doesn't do any of that right now. The normal uncommon trap
> mechanism, combined with our use of 0 tags for SmallIntegers, means that
> SmallInteger arithmetic is very fast already, since addition for example
> doesn't require any tag manipulation at all, so we don't really have to
> worry about conversion/deconversion issues.

I was thinking about true 32 bit integers not SmallIntegers. True 32
bit integers are needed for a few applications such as cryptology. The
Squeak cryptography guys are currently writing a few primitives to
speed up their inner loops.

> As for double precision floats, they could be done the way you are
> suggesting, which would be nice, but right now they are not done that way.
> Right now our "fast float" experiment uses special bytecodes that treat them
> as a non-OO basic type (with explicit conversion to and from objects). But
> that was just a quick experiment that Robert Griesemer did, and it is not
> really part of the "language", although they work and are very fast.

Sounds interesting, I'd like to talk more about it when I'm ready to
start work on floating point.

Bryce

(1) The description is true for block contexts but not method
contexts. Exupery method contexts currently share the same class as
interpreted method contexts. I'll create their own class before 1.0.
Before blocks were supported, the old-style Exupery method contexts were
executable by a normal VM, which was very nice for early bootstrapping. This also
meant that the debugger would work for them unless it was single
stepped.

The decompilation code is present and well tested. What's missing is
the hooks to trigger decompilation when a context is manipulated via
reflection.

David Griswold

unread,
Oct 16, 2006, 4:54:08 AM10/16/06
to strongtal...@googlegroups.com
Hi Bryce,

Bryce wrote:
> [...]
>


> It's the ExuperyContext's responsibility to convert itself into an
> interpreted context when Seaside serializes it. Seaside could
> serializes an ExuperyContext but that could cause a crash if the
> ExuperyContext converted itself into an interpreted context then
> Seaside recreated the stack with it in. ExuperyContexts have different
> instance variables to interpreted contexts so filling an interpreted
> contexts variables using instVarAt:put: after taking them from an
> Exupery context with instVarAt: is dangerous. If Seaside calls a
> context to save itself rather than using instVarAt: to get the state
> out then making it work with Exupery is easy. An Exupery context could
> also convert itself if instVarAt: is called on it.
>
> ExuperyBlockContext>>convertToInterpretedContext will convert an
> Exupery block into an interpreted block. It re-arranges the instance
> variables then calls primitiveChangeClassTo: to change the class.

That approach sounds like it won't work when you start doing inlining. With
inlining a single compiled context maps to multiple interpreted contexts,
and the conversion process becomes much more complex.

> Exupery manages it's code cache purely in the image. It needs to
> convert all Exupery contexts back to interpreted contexts before
> flushing the code cache or saving the image. Otherwise it may try
> to execute compiled code that's not there.
>
> Uncommon traps are definitely a very nice optimization for a fast
> compiling compiler. By removing the need to produce the code they
> speed up compilation. However uncommon traps do make it harder for a
> slow compiler that trys to produce very fast code.

As I said before, uncommon traps are not about making compilation faster,
but about making the generated code better. Why do you think uncommon traps
make it harder for a slow compiler? We think they make it easier (once you
have the full deoptimization mechanism).

From what you have said above, it sounds to me like once you do general
inlining you will need to implement full deoptimization like we do before
Exupery will work with either the debugger or with Seaside, since otherwise
you won't be able to serialize or reflect on inlined contexts. Once you
have real deoptimization, uncommon traps are trivial. So wouldn't you end
up with basically the same approach as Strongtalk after all?

> > Strongtalk doesn't do any of that right now. The normal uncommon trap
> > mechanism, combined with our use of 0 tags for SmallIntegers,
> means that
> > SmallInteger arithmetic is very fast already, since addition
> for example
> > doesn't require any tag manipulation at all, so we don't really have to
> > worry about conversion/deconversion issues.
>
> I was thinking about true 32 bit integers not SmallIntegers. True 32
> bit integers are needed for a few applications such as cryptology. The
> Squeak cryptography guys are currently writing a few primitives to
> speed up their inner loops.

I see. Implementing general 32-bit integer arithmetic seems like a lot of
work to do just to get 2 more bits of precision for one application. It
will also complicate deoptimization/decompilation, since those integers will
need to be boxed and converted to LargeIntegers when you convert to
interpreted contexts.

Why don't you just let them do their own primitives, since that feature is
probably only needed in cryptography code; as long as the primitives are
inlined, that is no performance disadvantage.

When Smalltalk moves to a 64-bit implementation this issue will mostly go
away anyway, since then SmallIntegers are a lot less Small (and then double
floats could be tagged too, which would be a major advance for Smalltalk).
Of course that would only work on the major CPUs that have all gone 64bit
now.

Cheers,
Dave

Cafe Alpha

unread,
Oct 16, 2006, 5:38:28 AM10/16/06
to strongtal...@googlegroups.com
How do you tag a double float without destroying rounding behavior?

David Griswold

unread,
Oct 16, 2006, 7:00:50 AM10/16/06
to strongtal...@googlegroups.com

A good question, but I assume it could be done the same way I think it has
been done in the past for tagged single-precision floats (I believe
VisualWorks had/has this).

Doing this in an approximate way by just truncating or rounding would not be
a big deal, and even if the results were not accurate enough for
mission-critical floating point, they would be a major help for things like
graphics apps, such as Croquet. Tagged single-precision floats only had
about 7 digits of precision as I recall, which amounts to around a pixel of
error on a XGA screen. Anything much better than that would be a *big*
improvement.
-Dave


David Griswold

unread,
Oct 16, 2006, 7:09:26 AM10/16/06
to strongtal...@googlegroups.com
Actually, I looked and it appears VisualWorks 7.4 for Linux x86 already
supports tagged 64bit doubles (SmallDouble).
-Dave


Reinout Heeck

unread,
Oct 16, 2006, 8:05:13 AM10/16/06
to strongtal...@googlegroups.com
David Griswold wrote:
> Actually, I looked and it appears VisualWorks 7.4 for Linux x86 already
> supports tagged 64bit doubles (SmallDouble).

This is supported only on 64bit VMs.

Also the class comment suggests that the tag bits are 'taken' from the
exponent, reducing its range rather than its precision. This sounds like
a better approach to me since converting between SmallDoubles and boxed
IEEE doubles will not yield conversion errors (since both have the same
precision).


R
-

eliot....@gmail.com

unread,
Oct 16, 2006, 3:12:28 PM10/16/06
to Strongtalk-general
Hi Reinout!

That's right. The 64-bit VW VM uses 3 tag bits, so to do immediate
64-bit doubles one needs to drop 3 bits somewhere. Dropping precision
is a bad choice; numerical programmers will complain bitterly sooner
or later and the system won't be used. Taking them from the exponent
is the only sensible choice; typical usage is well away from the
extremes, so one should put the 8 bit range in the middle. The only
wrinkle here is that 0 is a very common value. So my scheme maps the
8-bit exponent 0 to the 11-bit exponent 0, and maps the 8-bit exponent
range 1 through 255 to the 11-bit exponent range 896 to 1150. So you
get immediate doubles in the range +/- 5.8774717541114d-39 to +/-
6.8056473384188d+38, which is of course very similar to the 32-bit
range. Hence every non-NaN 32-bit float is representable as a 64-bit
immediate float.

Cafe Alpha

unread,
Oct 16, 2006, 7:28:12 PM10/16/06
to strongtal...@googlegroups.com
Is the bit layout of a 64 bit double such that the tag bits go in the right
place (at the end I assume) or does this mean that you have to pack and
unpack doubles?

And if you have to unpack doubles, doesn't this cost just as much time as
pointing at them (if not as much memory)?



eliot....@gmail.com

unread,
Oct 17, 2006, 12:28:21 PM10/17/06
to Strongtalk-general

On Oct 16, 4:28 pm, "Cafe Alpha" <cafealp...@gmail.com> wrote:
> Is the bit layout of a 64 bit double such that the tag bits go in the right
> place (at the end I assume) or does this mean that you have to pack and
> unpack doubles.

OK, here's the layout and rationale, from the relevant header file:

Representation for immediate doubles, only used in the 64-bit implementation.
Immediate doubles have the same 52 bit mantissa as IEEE double-precision
floating-point, but only have 8 bits of exponent. So they occupy just less
than the middle 1/8th of the double range. They overlap the normal single-
precision floats, which also have 8 bit exponents, but exclude the single-
precision denormals (exponent -127) and the single-precision NaNs (exponent
+127). +/- zero is just a pair of values with both exponent and mantissa 0.

So the non-zero immediate doubles range from
    +/- 0x3800,0000,0000,0001 / 5.8774717541114d-39
to  +/- 0x47ff,ffff,ffff,ffff / 6.8056473384188d+38

The encoded tagged form has the sign bit moved to the least significant bit,
which allows for faster encode/decode because offsetting the exponent can't
overflow into the sign bit and because testing for +/- 0 is an unsigned
compare for <= 0xd:

    msb                                                           lsb
    [8 exponent subset bits][52 mantissa bits][1 sign bit][3 tag bits]

So given the tag is 5, the tagged non-zero bit patterns are
    0x0000,0000,0000,001[d5]
to  0xffff,ffff,ffff,fff[d5]
and +/- 0d is 0x0000,0000,0000,000[d5]

Encode/decode of non-zero values in machine code looks like (msb ... lsb):

    Decode:              [8expsubset][52mantissa][1s][3tags]
    shift away tags:     [ 000 ][8expsubset][52mantissa][1s]
    add exponent offset: [ 11 exponent ][52mantissa][1s]
    rot sign:            [1s][ 11 exponent ][52mantissa]

    Encode:              [1s][ 11 exponent ][52mantissa]
    rot sign:            [ 11 exponent ][52mantissa][1s]
    sub exponent offset: [ 000 ][8expsubset][52mantissa][1s]
    shift:               [8expsubset][52mantissa][1s][ 000 ]
    or/add tags:         [8expsubset][52mantissa][1s][3tags]

but it is slower in C because
    a) there is no rotate, and
    b) raw conversion between double and quadword must (at least in the
       source) move bits through memory ( quadword = *(q64 *)&doubleVariable ).
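
Purely as an illustration (a sketch, not the VW sources; the offset follows
the 1 -> 896 mapping above), the decode step could be written in C++ as:

    #include <cstdint>
    #include <cstring>

    // Tagged +/- 0 is just the bit pattern 0x5 / 0xd; everything else gets
    // its 8-bit exponent widened to 11 bits by adding 895 (so 1 -> 896).
    const std::uint64_t kExpOffset = (std::uint64_t)895 << 53; // above 52+1 low bits

    double decodeImmediateDouble(std::uint64_t tagged) {
      std::uint64_t bits = tagged >> 3;       // shift away the 3 tag bits
      if (bits > 1)                           // anything other than +/- 0
        bits += kExpOffset;                   // add the exponent offset
      bits = (bits >> 1) | (bits << 63);      // rotate the sign bit back to the msb
      double d;
      std::memcpy(&d, &bits, sizeof d);       // move the bits through memory, as noted
      return d;
    }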

HTH

>
> And if you have to unpack doubles, doesn't this cost just as much time as
> pointing at them (if not as much memory)?

No :) Memory accesses, especially stores, are *much* more expensive
than register operations and even conditionals on modern hardware. On
x86-64, 64-bit VW immediate double arithmetic (summation) is about twice
as slow as SmallIntegers (which in VW do have to be detagged) and about
three times faster than boxed double arithmetic. Remember that boxing
a float involves several writes to form the object (three 64-bit words
for 64-bit VW) and the updating of the eden allocation pointer (which
might be in a register, but is more likely in a global variable).
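
As a rough illustration of that cost (made-up names; this is not the VW
allocator), boxing looks something like:

    typedef void* oop;
    extern char* edenTop;                     // allocation pointer, often a global
    extern void  initFloatHeader(char* obj);  // writes the header/class word(s)

    oop boxDouble(double v) {
      char* obj = edenTop;
      edenTop  += 3 * 8;                      // three 64-bit words, as described
      initFloatHeader(obj);                   // more stores for the header
      *(double*)(obj + 16) = v;               // and another store for the payload
      return (oop)obj;                        // all memory traffic, vs. register math
    }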

David Griswold

unread,
Oct 19, 2006, 11:50:56 AM10/19/06
to strongtal...@googlegroups.com
The repository now contains a version that will build and run under VS8,
thanks to Andras Pahi.

I have attached the VS project and solution files, since they will be too
big to check into the repository on a regular basis. I hope to put current
versions of them on the website in the future along with convenient things
like the generated incls, but I am traveling and have been having problems
FTPing to update the website.

The project file contains an important change: there is a switch which
should be turned off under VS8: in Configuration Properties -> C/C++ ->
Language, set "Force Conformance In For Loop Scope" to "No". Otherwise the
VS8 compiler will reject the code at several places.
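
The reason, as far as I can tell, is that the code relies on the old
Microsoft scoping rule where the for-loop variable stays visible after the
loop. Roughly this pattern (illustrative only, not an actual excerpt):

    extern int  n, items[], target;
    extern void use(int value);

    void findTarget() {
      for (int i = 0; i < n; i++) {
        if (items[i] == target) break;
      }
      // Old (non-conforming) rule: i is still in scope here.
      // With conformance forced on, the next line no longer compiles.
      if (i < n) use(items[i]);
    }

Setting the switch to "No" (the /Zc:forScope- option) keeps the old Visual
C++ 6 scoping, so such code builds unchanged.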

The debug version of the VM produces a warning on the console that slows
execution down (because of the I/O time for printing it); that needs looking
into.

Also, bin/Makefile has been renamed Makefile.win32, and no longer depends on
the MKS tools, although the reference to tools/makedeps is still there, we
need to decide on what to do about that.

-Dave

strongtalk.vcproj
strongtalk.sln

Thiago Silva

unread,
Oct 19, 2006, 1:13:39 PM10/19/06
to strongtal...@googlegroups.com
On 10/19/06, David Griswold <David.G...@acm.org> wrote:
> Also, bin/Makefile has been renamed Makefile.win32, and no longer depends on
> the MKS tools, although the reference to tools/makedeps is still there, we
> need to decide on what to do about that.
> -Dave

I'm not sure what problems makedeps solves, but my suggestion is to
include headers in the traditional way (directly in the source files),
rather than using custom files to specify dependencies, as it is now.

Thiago Silva

David Griswold

unread,
Oct 19, 2006, 1:48:28 PM10/19/06
to strongtal...@googlegroups.com

As far as I understand it, the purpose is to use precompiled header files to
speed up compilation, so that every single header file doesn't have to be
parsed over and over again. Does anyone with more C++ experience know if this
can be easily dispensed with?
-Dave


David Griswold

unread,
Oct 19, 2006, 1:58:47 PM10/19/06
to strongtal...@googlegroups.com
BTW, if you are viewing the Strongtalk source code in Visual Stupidio, you
will notice right away that a lot of the indentation seems to be wrong.
This is because the code is written with the expectation that tabs are 8
chars, but VS uses 4 as a default.

To change it so the code looks right, go to Tools/Options/Text Editor/All
Languages/Tabs and change tab size to 8.

Gosh that looks better!
-Dave


Krzysztof Kowalczyk

unread,
Oct 19, 2006, 2:02:22 PM10/19/06
to strongtal...@googlegroups.com
On 10/19/06, David Griswold <David.G...@acm.org> wrote:
> As far as I understand it, the purpose is to use precompiled header files to
> speed
> up compilation, so that every single header file doesn't have to be parsed
> over
> and over again. Does anyone with more C++ experience know if this can be
> easily
> dispensed with?

Visual Studio has its own system for precompiled headers which you
can set up in build preferences. It's almost as easy as checking a
checkbox in build settings and naming a file that stores precompiled
headers (see e.g.
http://www.cygnus-software.com/papers/precompiledheaders.html). It
doesn't require (almost) any source code changes.

GCC recently also got precompiled headers functionality
(http://gcc.gnu.org/onlinedocs/gcc/Precompiled-Headers.html) although
I never used that myself.

However... given the speed of today's computers and (relatively small)
size of Strongtalk C++ sources, I think that builds would be fast
enough without precompiled headers and I would be in favor of not
using them at all, in any form.

-- kjk

David Griswold

unread,
Oct 20, 2006, 4:51:12 AM10/20/06
to strongtal...@googlegroups.com
I've eliminated all the compilation warnings under VS8. The system now
compiles cleanly. Code is in the repository.
-Dave


David Griswold

unread,
Oct 20, 2006, 11:35:43 AM10/20/06
to strongtal...@googlegroups.com
I've done an experiment compiling the system without precompiled headers.
My machine is not particularly fast, a Sonoma Centrino 1.7ghz notebook with
a Hitachi 7K100 drive.

With precompiled headers, the system takes 27sec to compile, and produces a
debug binary that is 2.19MB, a product binary that is 1056KB, and a .pdb
file that is 8.6MB.

Without precompiled headers, the system takes 95sec to compile, and produces
a debug binary that is 11.5MB, a product binary that is 952KB, and a .pdb
file that is 11.5MB.

The build difference is significant but not overwhelming. An incremental
compile that changes any commonly used header (adding a public method, for
example) will still require recompiling most of the system, although for small
changes the compile time will be irrelevant.

The difference in the .exe size is more disturbing- the difference is about
11%; my guess is that templates are getting multiply instantiated. While
the system works fine this way, I don't think we want 11% of the executable
being bogus template duplicates.

I suspect that we will need to adopt Carlo Dapor's procedure for using the
JDK makedeps (which was actually written long ago by our team at Sun for
this very purpose for the Java HotSpot VM- I had forgotten about it). I
haven't tried it yet; the one issue is whether Carlo's mention of
WinGammaPlatformVC6 and 7 means that it can't yet produce a VS8 project
file.

Carlo's post: http://tech.groups.yahoo.com/group/strongtalk/message/167
-Dave

Krzysztof Kowalczyk

unread,
Oct 20, 2006, 1:23:14 PM10/20/06
to strongtal...@googlegroups.com
Isn't the binary without precompiled headers smaller, according to your data?

Also, the size of debug binary without precompiled headers seems too large.

-- kjk

David Griswold

unread,
Oct 20, 2006, 2:50:25 PM10/20/06
to strongtal...@googlegroups.com
Sorry, that was a typo; the numbers again are:

With precompiled headers, the system takes 27sec to compile, and produces a
debug binary that is 2.19MB, a product binary that is 952KB, and a .pdb file
that is 8.6MB.

Without precompiled headers, the system takes 95sec to compile, and produces
a debug binary that is 2.44MB, a product binary that is 1056KB, and a .pdb
file that is 11.5MB.


Krzysztof Kowalczyk

unread,
Oct 20, 2006, 3:31:27 PM10/20/06
to strongtal...@googlegroups.com
On 10/19/06, David Griswold <David.G...@acm.org> wrote:
> I have attached the VS project and solution files, since they will be too
> big to check into the repository on a regular basis.

David, thanks for working on improving the build system.

I tried the attached VS project file in VS8 but it's corrupted. It has
a consistent corruption where "3D" is added after "=" in the XML, e.g.
the beginning looks like:

"<?xml version=3D"1.0" encoding=3D"Windows-1252"?>
<VisualStudioProject
ProjectType=3D"Visual C++"
"

VS 2005 fails to load it because it (rightfully) complains that it's
invalid XML. I'm not sure at which point it became corrupted (I
downloaded it twice using FireFox 1.5 when reading e-mail in gmail
account).

I hope you'll reconsider and check this file into the repository. Even at
current size of 142 kB Subversion will handle it just fine.

In my experience as someone who plays with many open-source projects,
a bullet proof, always working out of the box build system is
important to create that first good impression that might get
potential contributors interested in the project.

I'll also note that the size is an anomaly and, if improved, the size of
*vcproj should be much smaller (say, < 30 kB). For example, WebCore
(Apple's web browser) has much more source files (5-10x as much) and
its *vcproj is smaller at 114 kB.

The reason for the overblown size of the attached *vcproj is that for each
file it has sth. like:
<File
RelativePath=3D"..\vm\code\zoneHeap.cpp"
>
<FileConfiguration
Name=3D"Debug|Win32"
>
<Tool
Name=3D"VCCLCompilerTool"
AdditionalIncludeDirectories=3D""
PreprocessorDefinitions=3D""
/>
</FileConfiguration>
<FileConfiguration
Name=3D"Fast|Win32"
>
<Tool
Name=3D"VCCLCompilerTool"
AdditionalIncludeDirectories=3D""
PreprocessorDefinitions=3D""
/>
</FileConfiguration>
<FileConfiguration
Name=3D"Product|Win32"
>
<Tool
Name=3D"VCCLCompilerTool"
AdditionalIncludeDirectories=3D""
PreprocessorDefinitions=3D""
/>
</FileConfiguration>
</File>

While it should really be just:
<File
RelativePath=3D"..\vm\code\zoneHeap.cpp"
</File>

I don't know how it was created so I don't know why it has
configurations listed per each file but I'm sure it can be re-created
in a more compact way (I've created my share of *vcproj files).

-- kjk

Topher Cyll

unread,
Oct 20, 2006, 3:52:29 PM10/20/06
to strongtal...@googlegroups.com
Yeah, I hit this too. I fixed the XML file manually, but still had
problems building. I ran out of time before I could investigate
further. I may have some time this weekend to look again.

-Toph

David Griswold

unread,
Oct 21, 2006, 4:54:11 AM10/21/06
to strongtal...@googlegroups.com
As for the corruption, that's very strange; the strangest part being that it
is corrupted in my version too, but it works just fine for me, whereas it
doesn't work for you. I am using VS 2005 Express; perhaps you are using
some other version?

This project file was given to me by Andras along with the fixes to get the
system running under VS8. I'll look into producing a new one that is also
more compact, although VS project files are unfamiliar to me, not having
used them before.

The reason I am hesitant to check it into the repository is that that would
require storing a new one very frequently, and we don't have all that much
of a space allotment left on Google code. Their database (or Subversion)
apparently has a big problem with file overhead/fragmentation when you store
lots of small files, since our system is only somewhat over 10MB of source
but takes up nearly 50MB of space in their database already, which has a
limit of 100MB/project, although I think we could get them to increase that.
We have lots of small files because the Smalltalk code is stored with 1 file
per top-level construct, of which there are over 900.

I hadn't looked inside the project file; I thought it was a binary format,
which might produce big diffs in the repository, quickly leading to a space
problem. But as the project file is apparently XML, hopefully diffs should
be of a reasonable size, so I'll put it in the repository when we get a
non-corrupted one.
-Dave


mr.d...@gmail.com

unread,
Oct 21, 2006, 1:37:47 PM10/21/06
to Strongtalk-general
Hi Eliot

> x86-64 64-bit VW immediate double arithmetic (summation) is about twice
> as slow as SmallIntegers (which in VW do ave to be detagged) and about
> three times faster than boxed double arithmetic. Remember that boxing

2:1 ratio between performance of immediate doubles and immediate ints -
that's interesting.
Now what if 64-bit immediate doubles did not have to be detagged, what
then would the performance ratio be between immediate doubles and
immediate integers?

Cheers

Dan

p.s. Will you be in Frankfurt for the user conference?

Topher Cyll

unread,
Oct 21, 2006, 2:43:10 PM10/21/06
to strongtal...@googlegroups.com
I was using VS 2005 Express too, so I don't think it's a version thing...
-Toph