
an assignment for Gavino

505 views

hughag...@gmail.com

Jun 17, 2016, 2:49:55 AM
Over on comp.lang.forth we had this thread:
https://groups.google.com/forum/#!topic/comp.lang.forth/qTDKxhW9758

I said this:

On Thursday, June 16, 2016 at 11:16:09 PM UTC-7, hughag...@gmail.com wrote:
> On Thursday, June 16, 2016 at 7:58:17 PM UTC-7, hughag...@gmail.com wrote:
> > On Wednesday, June 15, 2016 at 12:15:59 PM UTC-7, endlessboo...@gmail.com wrote:
> > > why?
> > Programming is about writing programs...
>
> Gavino --- I'm going to be like a teacher and give you an assignment! These are the rules of the assignment:
> 1.) Until you finish your assignment, you are not allowed to start any more of these idiotic threads on comp.lang.forth (or comp.lang.lisp etc.) --- if you do, you will just be reminded that you haven't finished your assignment.
> 2.) When you finish your assignment, I will put your program in the novice-package --- you will be famous! --- there is no better path to Forth fame than this!
>
> Here is the assignment: write a BreakOut game in ANS-Forth. I did this myself on the Commodore Vic-20 in line-number BASIC when I was about 16 years old (not as complicated as what I'm describing here; I ran into severe speed problems and had to keep things simple). I had to take recourse in unlawful carnal knowledge of BASIC to get my program to run at reasonable speed. You can write your program in ANS-Forth though because the modern computers are much faster than the Vic-20. Use these characters:
> ] these represent the left wall
> [ these represent the right wall
> O this represents your ball
> [#] these represent the bricks, which form a wall along the top of the screen (a space between)
> [=] these are like bricks, except that if you hit one you get another life
> [V] these represent spikes which are among the bricks in the wall along the top
> [O] these represent balls which are among the bricks in the wall along the top
> ===== this represents your paddle, which moves along the bottom of the screen
>
> The game is like a one-player version of Pong. You slide your paddle left and right (the left and right arrow keys can be used) to hit the ball so it bounces upwards. The ball bounces off the walls and off the bricks.
> 1.) When the ball hits a [#] it not only bounces off the brick, but it causes the brick to disappear.
> 2.) When the ball hits a [V] it not only bounces off the spike, but it causes the V to fall. Also, the ball bounces at a random angle.
> 3.) When the ball hits a [O] it not only bounces off the ball, but it causes the ball to fly away in the direction that the original ball came from.
>
> The wall looks like this:
>
> [#] [#] [#] [O] [#] [#] [#] [#] [V] [#] [O] [#] [#] [V] [#] [#] [#] [#] [#]
> [#] [#] [V] [#] [#] [V] [#] [O] [#] [#] [#] [#] [#] [#] [O] [#] [#] [#] [#]
> [#] [#] [#] [#] [V] [#] [#] [V] [#] [#] [#] [=] [#] [#] [#] [V] [#] [#] [#]
> [V] [#] [#] [#] [#] [=] [#] [#] [#] [#] [#] [V] [V] [#] [#] [#] [V] [#] [#]
>
> The [V] and [O] and [=] are distributed randomly in the wall. You can have several levels of difficulty. The first level would have only [#] in the wall. The next level would have some [V] in the wall. The next level would also have some [O] in the wall.
>
> The goal of the game is to break through the wall.
> If you fail to hit a ball with your paddle and the ball goes out the bottom of the screen, then you lose a life.
> If you hit a [=] then it disappears like a brick, but you also get another life.
> If you hit a [V] then the V drops (also, the ball bounces at a random angle) --- you have to avoid the falling spike because if it hits your paddle then you lose a life.
> If you hit a [O] then that O becomes live --- now you have more than one ball that you must keep in play with your paddle.
>
> Because we are using characters rather than graphics, there are a limited number of vectors that the ball can travel on:
> up 1 right 1
> up 1 right 2
> up 2 right 1
> up 1 left 1
> up 1 left 2
> up 2 left 1
> down 1 right 1
> down 1 right 2
> down 2 right 1
> down 1 left 1
> down 1 left 2
> down 2 left 1
>
> Write your program as a paced loop. This means that every X milliseconds, your ball(s) and spike(s) move up or down 1 char (you will have to figure out X by experimentation; determine what speed makes the game fun, but not so fast as to be unplayable). Unfortunately, one of the many failings of ANS-Forth is that it doesn't provide a standard way to check the system clock to determine how many milliseconds have elapsed (the TIME&DATE word is only accurate to seconds). You need these words:
> INIT-TIME ( -- ) initializes the time to zero
> TIME ( -- ms ) returns the number of milliseconds since INIT-TIME was called
> Just choose an ANS-Forth compiler and ask your friendly compiler-vendor to provide these words for you. I recommend that you choose VFX because SwiftForth might be too slow for your game to be playable without recourse to assembly-language --- you don't want to have to delve into assembly-language on your first-ever program (these aren't the bad-old-days of the Vic-20 in which assembly-language was a must for almost any program; you have that advantage over me when I was writing this program).
>
> Good luck with your assignment! You can post messages on comp.lang.forth asking specific questions related to your assignment. You can't post any more idiotic messages such as: "could you re-implement myth2 soul-blighter game in forth?" You also can't post vague messages such as: "Um, my source-code file is empty. How should I, um, begin?" Note that these rules apply whether you accept the assignment or not --- quite frankly, we are all sick and tired of your idiotic questions --- you have to complete the assignment or nobody will respond to your posts any more! Programming is all about discipline --- you have to force yourself to work on the assignment until it is complete!
>
> Note that there is an element of irony in the choice of BreakOut as your first-ever Forth program. You have been trapped in the doldrums of attention-deficit-disorder for over 30 years --- now it is time to break out of the doldrums and write an actual computer program! --- you will have something to be proud of when you succeed!

I also said this:

On Thursday, June 16, 2016 at 11:36:47 PM UTC-7, hughag...@gmail.com wrote:
> You can write your program in Lisp or whatever if you prefer that language to Forth (you can even use Haskell or Ada; this would likely be the first time anybody has written a video game in either of those languages!). Obviously, if you want technical help on comp.lang.forth then you must use Forth. Also, if you want the awesome fame of getting your program included in my novice-package, then you must use Forth. Learning Lisp is arguably more useful in the real-world than learning Forth though, so you might choose Lisp for that reason. Use whatever language you want! But, you must write the program and get it to work --- you can't just continue to talk on the internet about programming, without doing any programming.

I would appreciate if you guys on comp.lang.lisp would support me in this --- don't allow any more idiotic messages from Gavino --- remind him of the need to complete his assignment.

Thank you for your consideration...
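
(The "paced loop" called for in the assignment can be sketched in portable Common Lisp as well; a minimal sketch --- MOVE-BALLS, MOVE-SPIKES and DRAW-SCREEN are hypothetical stand-ins for the game logic, and the 50 ms tick is only a guess to be tuned by experiment:)

;; Paced loop: one game tick every +TICK-MS+ milliseconds, independent of
;; how long each update takes (as long as it stays under one tick).
(defconstant +tick-ms+ 50)

(defun ms-now ()
  "Milliseconds on the standard CL real-time clock."
  (round (* 1000 (get-internal-real-time)) internal-time-units-per-second))

(defun game-loop (move-balls move-spikes draw-screen)
  (loop with next-tick = (ms-now)
        do (funcall move-balls)
           (funcall move-spikes)
           (funcall draw-screen)
           (incf next-tick +tick-ms+)
           (let ((wait (- next-tick (ms-now))))
             (when (plusp wait)
               (sleep (/ wait 1000))))))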

Norbert_Paul

Jun 17, 2016, 6:52:51 AM
hughag...@gmail.com wrote:
> Over on comp.lang.forth we had this thread:
> https://groups.google.com/forum/#!topic/comp.lang.forth/qTDKxhW9758
>
> I said this:
>
> On Thursday, June 16, 2016 at 11:16:09 PM UTC-7, hughag...@gmail.com wrote:
>> On Thursday, June 16, 2016 at 7:58:17 PM UTC-7, hughag...@gmail.com wrote:
>>> On Wednesday, June 15, 2016 at 12:15:59 PM UTC-7, endlessboo...@gmail.com wrote:

You are wasting your time.

> I would appreciate if you guys on comp.lang.lisp would support me in this --- don't allow any more idiotic messages from Gavino --- remind him of the need to complete his assignment.
I don't support troll-feeding and I suppose the majority here doesn't either.

> Thank you for your consideration...
You might consider using a decent news-reader to read newsgroups.
As far as I remember, google-groups does not support kill-lists,
but this may have changed by now.

So this is my assignment for you:
(1) Find a news-reader of your choice. It should support kill-lists.
(2) Start reading c.l.l with this reader and stay away from groups.google.com.
(3) *PGONK* (where "G" stands for "Gavino")

Thank you for your consideration...
Norbert

hughag...@gmail.com

Jun 17, 2016, 11:15:28 AM
On Friday, June 17, 2016 at 3:52:51 AM UTC-7, Norbert_Paul wrote:
> You are wasting your time.

I waste a lot of time every day --- that is pretty much what the internet is all about!

I'm trying to inspire Gavino to have some focus and discipline --- I could use some improvement in that area too --- how many internet-forum denizens can say that they don't?

Things were a lot different in the old days. It was common for people to write their own games and share them, and it was fun. Now we have games such as "Myth2 Soul-Blighter" that Gavino was recently asking about (same question posted on both c.l.f. and c.l.l.). I have no interest in playing such games at all! I really think that video games are a major contributor to attention-deficit-disorder (ADD). Nowadays there is a big crack-down on cigarettes, as if they are the worst thing ever --- I don't smoke, but I doubt that smoking causes significant health problems --- I would consider video games to be much more detrimental to health.

ADD is a big problem in America today. Gavino is the face of ADD on c.l.f. and c.l.l., but he is hardly the only person with ADD in America --- I see this problem everywhere --- something has to be done about it, or America will be in trouble.

Ugh! Bring back the good old days!

hughag...@gmail.com

Jun 17, 2016, 11:36:33 AM
On Friday, June 17, 2016 at 8:15:28 AM UTC-7, hughag...@gmail.com wrote:
> Ugh! Bring back the good old days!

If I were to propose a law, for the benefit of American mental-health, this would be it:

All video games distributed publicly in America must run on a Commodore-64 emulator. Just to be nice, I would let it run at 2 MHz rather than 1 MHz, as the old C64 did have some speed issues in even the simplest games. I would also allow it to use a 6502 enhanced with a (zp,X),Y addressing-mode, as this would make programming much easier and hence more fun.

The only exceptions to this law would be Go and Chess games --- these don't really cause ADD because they require focus and discipline --- these also need a powerful computer to be competitive against humans (I can beat many computer Go programs).

endlessboo...@gmail.com

Jun 17, 2016, 2:52:32 PM
hey fuk u bro :)
hope this helps

endlessboo...@gmail.com

Jun 17, 2016, 2:53:14 PM
waisting of peace

Pascal J. Bourguignon

Jun 17, 2016, 3:48:35 PM
hughag...@gmail.com writes:

> ADD is a big problem in America today. Gavino is the face of ADD on
> c.l.f. and c.l.l., but he is hardly the only person with ADD in
> America --- I see this problem everywhere --- something has to be done
> about it, or America will be in trouble.
>
> Ugh! Bring back the good old days!

Well, we all know who can bring back the good old days, or a good enough
approximation thereof. ;-)

--
__Pascal Bourguignon__ http://www.informatimago.com/
“The factory of the future will have only two employees, a man and a
dog. The man will be there to feed the dog. The dog will be there to
keep the man from touching the equipment.” -- Carl Bass CEO Autodesk

hughag...@gmail.com

Jun 17, 2016, 8:07:54 PM
Well, PLONK to Gavino --- you're kill-filed now --- nobody says "fuk u" to me!

I don't know why he was so rude to me. My advice was good! When I want to learn a new programming language, I start out by writing a fun program such as BreakOut --- my first-ever ANS-Forth program was LowDraw.4th which is roughly comparable in complexity to BreakOut (LowDraw is now in the novice-package as example-code for novices) --- afaik, everybody learns new programming languages by writing a fun program like this, even though it may be useless and have no commercial potential.

I also stand by what I said, that video games have a purely negative effect on mental-health --- I would really like to see them banned --- they were fun in the old days on the Commodore-64, but now they are a "soul blighter" (pun intended).

I also think that video games are a tool for MK-ULTRA. I'm thinking here of people like Adam Lanza. I really believe that a person needs a boost to become crazy enough to shoot elementary-school kids --- this doesn't happen naturally --- there had to be a "handler" in the background who turned Adam Lanza into a murderer of children. Video games could be used to do this.

In the old days, hypnotism was done by swinging a pendulum in front of the person's eyes. The frequency of the visual input synchronizes with the up-and-down information flow in the 6 layers of the neo-cortex. We now know more about the neo-cortex, and we know exactly what its frequency is, so video games routinely pulse the image at this frequency to hypnotize the user. For the most part, this just results in the user spending a lot of time playing the video game, and developing ADD within a few months.

This could be used by MK-ULTRA though. Computers are now hooked up to the internet, so the user doesn't necessarily know when and with whom his computer is communicating. If the target subject had a special version of a video game, it could communicate with the handler over the internet. When the user is hypnotized, the characters in the game would begin speaking to him, and they would know his name and know a lot about his personal life, but because he is hypnotized he doesn't notice that this is inexplicable. This would actually be his handler communicating with him over the internet and implanting suggestions into his sub-conscious. Later on he is "triggered" (like in the movie "The Manchurian Candidate," where the guy was shown the queen-of-diamonds) and he begins to play out the video game in real life. Realistically, Adam Lanza was in a hypnotic trance when he shot those children --- nobody does this kind of thing when they are conscious of what they are doing.

Child abuse can result in a split personality. The child develops a second personality inside of the head, and this person takes the abuse. This is why children don't remember child abuse. They aren't pretending to not remember in an effort to protect their abuser --- they really don't remember, because they weren't conscious when it happened.

MK-ULTRA originated in post-WWII Europe. In those days, there were a lot of orphans. Anybody could go to an orphanage and get some children with no background check, but just a little bit of convincing talk. MK-ULTRA took these children and subjected them to ritual child-abuse. They could make the child switch to the second personality with some trigger that indicated that the abuse was about to begin. This second personality was very different from the primary personality --- it would do what it was told to do, including commit murder --- anything to lessen the abuse. The child would not remember any of this. The person would grow up, but he could still be triggered, whereupon the second personality would take over --- this would be the personality that does what it is told to do. This second personality is called an "altar," meaning that it worships a different god at a different altar --- popular literature uses the term "alter" which is incorrect. The person is called a "zombie" --- MK-ULTRA borrowed a lot of techniques and terminology from Voodoo.

Video games are effectively ritual child-abuse. The user (almost always under-age) is shown horrific images of murder and mayhem --- this is child abuse! I think video games could be banned just using our existing laws against child abuse. Also, video games could be considered to be a kind of pornography (they show violence instead of sex, but it is still pornography) --- they could be banned on the basis that showing pornography to children is illegal. All in all, video games are no good --- at best, they turn people into dysfunctional ADD victims --- at worst, they turn people into zombies who commit murder when they are triggered by a handler in the background.

hughag...@gmail.com

Jun 18, 2016, 2:48:28 PM
On Friday, June 17, 2016 at 5:07:54 PM UTC-7, hughag...@gmail.com wrote:
> Video games are effectively ritual child-abuse. The user (almost always under-age) is shown horrific images of murder and mayhem --- this is child abuse! I think video games could be banned just using our existing laws against child abuse. Also, video games could be considered to be a kind of pornography (they show violence instead of sex, but it is still pornography) --- they could be banned on the basis that showing pornography to children is illegal. All in all, video games are no good --- at best, they turn people into dysfunctional ADD victims --- at worst, they turn people into zombies who commit murder when they are triggered by a handler in the background.

Let me hasten to say that I don't expect Gavino to become a murderer --- he is just a big happy galoot with ADD --- he is totally harmless.

Also, I am primarily opposed to the first-person shooter games. The game pretty much has to be first-person for the user to become hypnotized and begin to identify with the game character as himself. When Adam Lanza shot up that school, he was most likely in a trance and not realizing that he was out in the real world shooting real people rather than just in the video game like usual.

There may be some video games that have a strategic aspect and which don't focus on murder or mayhem --- I wouldn't have a problem with such games --- I don't foresee myself ever playing any game other than Go however.

Pascal J. Bourguignon

Jun 18, 2016, 5:28:19 PM
hughag...@gmail.com writes:

> This could be used by MK-ULTRA though. Computers are now hooked up to
> the internet, so the user doesn't necessarily know when and with whom
> his computer is communicating. If the target subject had a special
> version of a video game, it could communicate with the handler over
> the internet.

You don't need a special version of the video game, just an Intel
processor.

http://boingboing.net/2016/06/15/intel-x86-processors-ship-with.html

endlessboo...@gmail.com

Jun 18, 2016, 11:44:13 PM
youtube yaron brook
gdp would be 100T if fdr had not brought in the welfare state after causing the great depression

endlessboo...@gmail.com

Jun 18, 2016, 11:45:26 PM
fuk u

keep your small minded insults and arrogance to yourself

it is insufferable

if you spoke with intelligence and didn't demean people

you wouldn't have people saying fuk u

youtube some yaron brook

read the syndic by kornbluth

endlessboo...@gmail.com

Jun 18, 2016, 11:47:37 PM
what commy crap is this?

video games are awesome fun

most kids who shoot things up are communists due to government school idiocy and are amped up on government pushed mind drugs from government paid counsellors and social workers

end public school
end all mind drugs for kids
get their diet normalized and have them exercise more
get them more sleep
de regulate housing so all can be rich

endlessboo...@gmail.com

Jun 18, 2016, 11:59:47 PM
On Friday, June 17, 2016 at 2:49:55 AM UTC-4, hughag...@gmail.com wrote:
https://www.youtube.com/watch?v=xIDytUCRtTA thorium atomic liquid salt reactor too radical so had to hide project inside atomic bomber project

hugh I will continue to say fuk u if you continue to condescend

remember you could be talking to someone way beyond your intelligence

so be nice on the net

don't use personal attacks

hughag...@gmail.com

Jun 19, 2016, 12:30:16 AM
On Thursday, June 16, 2016 at 11:49:55 PM UTC-7, hughag...@gmail.com wrote:
> Over on comp.lang.forth we had this thread:
> https://groups.google.com/forum/#!topic/comp.lang.forth/qTDKxhW9758
>
> I said this:
>
> On Thursday, June 16, 2016 at 11:16:09 PM UTC-7, hughag...@gmail.com wrote:
> > Write your program as a paced loop.

Well I read the VFX manual and found this:
--------------------------------------------------------------------------
These basic words are defined for applications to use the [timer].

START-TIMERS  \ --                  ; must do this first
STOP-TIMERS   \ --                  ; closes timers
AFTER         \ xt ms -- timerid/0  ; runs xt once after ms
EVERY         \ xt ms -- timerid/0  ; runs xt every ms
TSTOP         \ timerid --          ; stops the timer
MS            \ period --           ; wait for ms


After the timers have been started, actions can be added. The example below starts a timer
which puts a character on the debug console every two seconds.

start-timers
: t   \ -- ; will run every 2 seconds
   [char] * emit
;
' t 2 seconds every   \ returns timerid, use TSTOP to stop it

The item on stack is a timer handle, use TSTOP to halt this timer.

EVERY would work well for the paced loop that is needed in the BreakOut game. This would allow the paddle, ball(s) and spike(s) to all move at a consistent speed, no matter how many objects are moving at the same time or what angle they are moving at.

We still don't have dual video buffers, but that isn't necessary because we just have a character display, not a graphics display. I'm trying to keep this game simple --- I don't want to delve into a graphics package for an entry-level program.

I could write this program in VFX, and maybe I will, but doing so is somewhat of a waste of time because I already know how to program in Forth (I already have several example programs of comparable or greater complexity in my novice-package). More useful would be for me to write the program in a language that I don't know, such as Scheme or Lisp. Do these languages provide anything comparable to what VFX has for accessing the system clock?

I would prefer to learn Chicken Scheme because it has a nice 64-bit x86 assembler that could be useful to me in the future for compiler-writing. I'm not aware of any other Scheme or Lisp that has this. The assembler is not needed right now for the BreakOut game --- the BreakOut game is just a learning exercise for a total novice (me) --- the assembler would be needed in the future for more serious programming.
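
As for the clock question above: standard Common Lisp does expose a sub-second clock, so the INIT-TIME / TIME pair described in the assignment can be approximated portably. A minimal sketch (resolution depends on the implementation's INTERNAL-TIME-UNITS-PER-SECOND; the word is called TIME-MS here so as not to shadow the standard TIME macro):

(defvar *time-base* 0)

(defun init-time ()
  "Reset the millisecond counter to zero."
  (setf *time-base* (get-internal-real-time))
  (values))

(defun time-ms ()
  "Milliseconds elapsed since INIT-TIME was called."
  (round (* 1000 (- (get-internal-real-time) *time-base*))
         internal-time-units-per-second))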

hughag...@gmail.com

Jun 20, 2016, 12:33:44 AM
On Saturday, June 18, 2016 at 9:30:16 PM UTC-7, hughag...@gmail.com wrote:
> EVERY would work well for the paced loop that is needed in the BreakOut game. This would allow the paddle, ball(s) and spike(s) to all move at a consistent speed, no matter how many objects are moving at the same time or what angle they are moving at.

Actually, even with support for the timer, I doubt that Scheme or Lisp could be used for the BreakOut game. You have GC which could happen at any time, resulting in a noticeable stutter in the motion of the paddle, ball(s) and spike(s).

Perhaps, my effort at learning Scheme or Lisp would be better served by porting my LowDraw.4th program. This is just numerical calculation, so there are no timing issues. I'm not enthusiastic about porting this program though --- because Scheme and Lisp have tagged data, they are slow at arithmetic (they have to examine the tags and figure out what kind of numbers they are working with, before they do an addition or multiplication) --- I would expect the LowDraw program to be maybe an order of magnitude slower in Scheme or Lisp than in Forth.

Antsan

Jun 21, 2016, 4:03:00 AM
What would you be running this game on? GC costs are negligible in comparison
to the power of modern computers, especially for something as simple as
BreakOut.

Nicolas Neuss

Jun 21, 2016, 6:48:33 AM
Nonsense. Most CL compilers compile to native code. An order of
magnitude is completely unrealistic.

Come on, show your fast Forth code which you believe to be an order of
magnitude faster than CL (say, with SBCL as implementation).

Note that it should be possible to extract a small routine (<100 lines)
performing the bottleneck of the computation. I don't want to wade
through thousands of lines of Forth code.

Nicolas

Nicolas Neuss

Jun 21, 2016, 6:51:29 AM
Nicolas Neuss <last...@scipolis.de> writes:

> Nonsense. Most CL compilers compile to native code.

... and can also optimize away most tag handling in high-performance
code.

hughag...@gmail.com

Jun 24, 2016, 7:53:56 PM
On Tuesday, June 21, 2016 at 3:48:33 AM UTC-7, Nicolas Neuss wrote:
> hughag...@gmail.com writes:
>
> > On Saturday, June 18, 2016 at 9:30:16 PM UTC-7, hughag...@gmail.com wrote:
> >> EVERY would work well for the paced loop that is needed in the
> >> BreakOut game. This would allow the paddle, ball(s) and spike(s) to
> >> all move at a consistent speed, no matter how many objects are
> >> moving at the same time or what angle they are moving at.
> >
> > Actually, even with support for the timer, I doubt that Scheme or Lisp
> > could be used for the BreakOut game. You have GC which could happen at
> > any time, resulting in a noticeable stutter in the motion of the
> > paddle, ball(s) and spike(s).
> >
> > Perhaps, my effort at learning Scheme or Lisp would be better served
> > by porting my LowDraw.4th program. This is just numerical calculation,
> > so there are no timing issues. I'm not enthusiastic about porting this
> > program though --- because Scheme and Lisp have tagged data, they are
> > slow at arithmetic (they have to examine the tags and figure out what
> > kind of numbers they are working with, before they do an addition or
> > multiplication) --- I would expect the LowDraw program to be maybe an
> > order of magnitude slower in Scheme or Lisp than in Forth.
>
> Nonsense. Most CL compilers compile to native code. An order of
> magnitude is completely unrealistic.

If your compiler has to check the tag at run-time on every datum that you do arithmetic on to determine its data-type, then that is going to be slow.

I have heard that CL has a way to declare at compile-time what type each parameter in each function is. That would speed up the program considerably. Is this actually Lisp though? Or is this a Lisp-like statically compiled language?
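
(It is ordinary Common Lisp --- a minimal sketch of such declarations, using only the standard DECLARE and THE forms; SUM-SQUARES is just an illustrative name and the OPTIMIZE settings are one common choice:)

;; Parameter types declared at compile time.  DECLARE and THE are promises
;; to the compiler; with (SAFETY 0) they are not checked at run time, so a
;; native-code compiler can emit untagged fixnum arithmetic here.
(defun sum-squares (a b)
  (declare (type fixnum a b)
           (optimize (speed 3) (safety 0)))
  (the fixnum (+ (* a a) (* b b))))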

> Come on, show your fast Forth code which you believe to be an order of
> magnitude faster than CL (say, with SBCL as implementation).

I already said that my LowDraw.4th program would make a good benchmark. It can be found in the novice-package:
http://www.forth.org/novice.html
This is a very old version of the novice-package. I have a newer version that I can email to anybody who wants it. It has string-stack.4th among other things. I will put this on git or on a website pretty soon, but I'm still adding stuff to it. LowDraw.4th was my first-ever ANS-Forth program written over 10 years ago and it hasn't changed, so if that is all you are interested in then the downloadable version of the novice-package is fine.

> Note that it should be possible to extract a small routine (<100 lines)
> performing the bottleneck of the computation. I don't want to wade
> through thousands of lines of Forth code.

The LowDraw.4th program is 890 lines long. It calculates the odds for LowDraw poker for various drawing strategies. It does this with a recursive traversal of all possible poker hands.

There is also a LowDraw.txt file which is the documentation, and a LowDrawStrategies.4th file which contains the various drawing strategies.

I think the program implementation is pretty straight-forward and obvious. You don't really need to even look at the Forth code. If you know the concept of a recursive-traversal and you know basic probability (know that PAND and POR are multiplication and addition), you can just write the Lisp code from scratch. You can compare your output to my output to verify that your program is making the same calculation. This is not a complicated program --- I wrote it originally as a learning exercise to get my feet wet with ANS-Forth --- I have told Gavino many times that the way to learn any programming language is to just pick an easy problem and write a program for it; this is how I learned ANS-Forth, with this LowDraw.4th program.
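
(The PAND / POR convention translates directly; a minimal Common Lisp sketch, with made-up numbers for illustration:)

;; PAND: all of several independent events happen          -> multiply.
;; POR:  one of several mutually exclusive events happens  -> add.
(defun pand (&rest ps) (reduce #'* ps :initial-value 1))
(defun por  (&rest ps) (reduce #'+ ps :initial-value 0))

;; Illustrative only: chance that two successive draws are both "low",
;; assuming 20 low cards among 47 remaining, then 19 among 46.
(pand 20/47 19/46)   ; => 190/1081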

Note that for a comparison to be meaningful, you should use VFX --- don't use SwiftForth because it is grossly inefficient and just makes Forth look bad --- there is a free evaluation version.

I didn't have the novice-package available when I wrote LowDraw.4th and hence didn't have any general-purpose code available. For example, I didn't have SORT written yet, so in LowDraw.4th I just used a Bubble-Sort that I coded off the top of my head in the middle of the program (in the novice-package I have SORT that does a HeapSort and works on any data type, which is what I would use nowadays). After I had written the novice-package, I upgraded LowDraw.4th slightly to use some of the features from the novice-package --- LowDraw.4th is still a pretty rudimentary program though --- it is not like my more recent Forth efforts that are (I think) more sophisticated.

I would really expect any programmer to be able to write a program like this from scratch in his or her favorite language without much difficulty.

hughag...@gmail.com

Jun 24, 2016, 8:14:53 PM
I agree that GC is not going to be an issue with either the BreakOut game or with LowDraw.4th because neither program uses dynamic data (LowDraw.4th just uses an array that is statically defined, and the BreakOut game would also use statically defined arrays).

A better comparison in regard to the effect of GC would be my ASSOCIATION.4TH file. This uses an LLRB tree internally. It is a general-purpose data-structure. I don't have any example program that uses ASSOCIATION.4TH at this time, but I suppose I could think up something.

A program I do have that uses a lot of dynamic data is SLIDE-RULE.4TH --- this generates both gcode and PostScript code to generate the image of a slide-rule. I use singly-linked lists throughout --- coincidentally, Lisp also uses singly-linked lists, hence the name LISP (list processor) --- some of my lists contain thousands of elements.
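
(For the ASSOCIATION.4TH comparison, the usual Common Lisp counterparts are hash tables and alists; a minimal sketch --- note that, unlike an LLRB tree, a hash table gives no ordered traversal:)

;; Key -> value association via a standard hash table: no balancing code
;; needed, but also no sorted traversal as an LLRB tree would give.
(defvar *settings* (make-hash-table :test #'equal))

(setf (gethash "paddle-width" *settings*) 5)   ; insert / update
(gethash "paddle-width" *settings*)            ; => 5, T
(remhash "paddle-width" *settings*)            ; delete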

tar...@google.com

Jun 24, 2016, 8:21:21 PM
On Friday, June 24, 2016 at 4:53:56 PM UTC-7, hughag...@gmail.com wrote:
> On Tuesday, June 21, 2016 at 3:48:33 AM UTC-7, Nicolas Neuss wrote:
> > hughag...@gmail.com writes:
> >
> > > On Saturday, June 18, 2016 at 9:30:16 PM UTC-7, hughag...@gmail.com wrote:
> > >> EVERY would work well for the paced loop that is needed in the
> > >> BreakOut game. This would allow the paddle, ball(s) and spike(s) to
> > >> all move at a consistent speed, no matter how many objects are
> > >> moving at the same time or what angle they are moving at.
> > >
> > > Actually, even with support for the timer, I doubt that Scheme or Lisp
> > > could be used for the BreakOut game. You have GC which could happen at
> > > any time, resulting in a noticeable stutter in the motion of the
> > > paddle, ball(s) and spike(s).

Various studies have shown that GC is not any worse, performance-wise, than
manual memory allocation. In C, calling malloc can have dramatic performance
impacts as well.

For something like a BreakOut game, I would expect that one would be able to
write it in a way that does not need to allocate additional memory. If there
is no dynamic memory allocation, there will be no GC.
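
A minimal sketch of that style in Common Lisp --- all state preallocated up front, the per-tick update only mutates it; the BALL structure and its fields are invented for illustration:

;; Preallocate the game objects once; the per-tick update only mutates
;; existing structures, so the inner loop allocates nothing and the GC has
;; nothing to collect during play.
(defstruct ball (x 0) (y 0) (dx 1) (dy -1))

(defvar *balls*
  (coerce (loop repeat 4 collect (make-ball)) 'vector))

(defun update-balls ()
  (loop for b across *balls*
        do (incf (ball-x b) (ball-dx b))
           (incf (ball-y b) (ball-dy b))))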

Not to mention that there are very efficient ephemeral collectors and also
some specialized algorithms that can be used to do parallel collection. The
former are part of most modern Lisp and non-lisp GC implementations. The
latter are a bit specialized.

Also take a look at
https://www.google.com/search?q=real-time+garbage+collection

> > >
> > > Perhaps, my effort at learning Scheme or Lisp would be better served
> > > by porting my LowDraw.4th program. This is just numerical calculation,
> > > so there are no timing issues. I'm not enthusiastic about porting this
> > > program though --- because Scheme and Lisp have tagged data, they are
> > > slow at arithmetic (they have to examine the tags and figure out what
> > > kind of numbers they are working with, before they do an addition or
> > > multiplication) --- I would expect the LowDraw program to be maybe an
> > > order of magnitude slower in Scheme or Lisp than in Forth.
> >
> > Nonsense. Most CL compilers compile to native code. An order of
> > magnitude is completely unrealistic.
>
> If your compiler has to check the tag at run-time on every datum that you do arithmetic on to determine its data-type, then that is going to be slow.
>

And yet performance tests show this not to be the case.
The lisp compiler implementers are quite a clever bunch, and can do a lot.

In fact, my experience with the PowerLoom system [1] (which is common code compiled into Common Lisp, Java or C++) showed it to be just as fast as Java and about 1/2 the speed of C++. But that depends a bit on exactly what code is being run.

> I have heard that CL has a way to declare at compile-time what type each parameter in each function is. That would speed up the program considerably. Is this actually Lisp though? Or is this a Lisp-like statically compiled language?
>

Part of Standard Common Lisp (and other predecessors):
http://www.lispworks.com/documentation/HyperSpec/Body/d_type.htm#type

---
[1] http://www.isi.edu/isd/LOOM/PowerLoom

Pascal J. Bourguignon

Jun 25, 2016, 6:29:02 AM
hughag...@gmail.com writes:

> On Tuesday, June 21, 2016 at 3:48:33 AM UTC-7, Nicolas Neuss wrote:
>> hughag...@gmail.com writes:
>>
>> Nonsense. Most CL compilers compile to native code. An order of
>> magnitude is completely unrealistic.
>
> If your compiler has to check the tag at run-time on every datum that you do
> arithmetic on to determine its data-type, then that is going to be slow.
>
> I have heard that CL has a way to declare at compile-time what type
> each parameter in each function is. That would speed up the program
> considerably. Is this actually Lisp though? Or is this a Lisp-like
> statically compiled language?

Indeed. But the right term is not declare, it's type check:

(defun fact (x)
  (check-type x (integer 1))
  (if (= 1 x)
      1
      (* x (fact (- x 1)))))

When you use the check-type macro, the compiler can infer that following
it, the type of the object bound to the place is what was specified. It
can therefore optimize out further type checking and operations.



On the other hand, if you make the mistake to write:

(defun fact (x)
  (declare (type (integer 0) x))
  (if (= 1 x)
      1
      (* x (fact (- x 1)))))

then you are telling the compiler that you will never call the function
with a parameter that is not bound to a value of that type. Since the
compiler knows that you are human, and therefore bound to be lying, it
will have to keep making type checks, to ensure a safe execution
(ie. signaling an error) when you break your promise.

Unless you compile with (declaim (optimize (safety 0))) in which case
you fall back to the level of C compilers and their nasal dragons.

hughag...@gmail.com

Jun 26, 2016, 12:54:22 AM
I think what you are saying is that DECLARE will cause a type-check to execute at run-time with an abort if the datum is of the wrong type, and this also allows the compiler to compile code on the assumption that the datum is of the correct type --- OTOH, CHECK-TYPE allows the compiler to compile code on the assumption that the datum is of the correct type, but there is no type-check done at run-time so if the datum is not of the correct type then chaos will result. This doesn't make much sense though because the name CHECK-TYPE seems to imply that there is a type-check being done at run-time.

I don't really know what you are talking about. I don't actually have any interest in learning Common Lisp --- the language is just gigantic and complicated, and likely slow --- I still might learn Chicken Scheme though, as it has a 64-bit x86 assembler that could be very useful for compiler-writing.

It might be interesting to see a disassembly of your two FACT functions.

Here is some VFX code (I'm not doing any range checking, so these will overflow given a big N):

: recursive-factorial ( n -- n! )
dup 1 = if exit then
dup 1- recurse * ;

: iterative-factorial ( n -- n! )
dup 1- begin dup 1 > while \ -- product n
tuck * \ -- n product*n
swap 1- \ -- product*n n-1
repeat
drop ;

see recursive-factorial
RECURSIVE-FACTORIAL
( 004DFC60 83FB01 ) CMP EBX, 01
( 004DFC63 0F8501000000 ) JNZ/NE 004DFC6A
( 004DFC69 C3 ) NEXT,
( 004DFC6A 8BD3 ) MOV EDX, EBX
( 004DFC6C 4B ) DEC EBX
( 004DFC6D 8D6DFC ) LEA EBP, [EBP+-04]
( 004DFC70 895500 ) MOV [EBP], EDX
( 004DFC73 E8E8FFFFFF ) CALL 004DFC60 RECURSIVE-FACTORIAL
( 004DFC78 0FAF5D00 ) IMUL EBX, [EBP]
( 004DFC7C 8D6D04 ) LEA EBP, [EBP+04]
( 004DFC7F C3 ) NEXT,
( 32 bytes, 11 instructions )
ok
see iterative-factorial
ITERATIVE-FACTORIAL
( 004DFCB0 8BD3 ) MOV EDX, EBX
( 004DFCB2 4B ) DEC EBX
( 004DFCB3 8D6DFC ) LEA EBP, [EBP+-04]
( 004DFCB6 895500 ) MOV [EBP], EDX
( 004DFCB9 90 ) NOP
( 004DFCBA 90 ) NOP
( 004DFCBB 90 ) NOP
( 004DFCBC 90 ) NOP
( 004DFCBD 90 ) NOP
( 004DFCBE 90 ) NOP
( 004DFCBF 90 ) NOP
( 004DFCC0 83FB01 ) CMP EBX, 01
( 004DFCC3 0F8E0E000000 ) JLE/NG 004DFCD7
( 004DFCC9 8BD3 ) MOV EDX, EBX
( 004DFCCB 0FAF5D00 ) IMUL EBX, [EBP]
( 004DFCCF 4A ) DEC EDX
( 004DFCD0 895D00 ) MOV [EBP], EBX
( 004DFCD3 8BDA ) MOV EBX, EDX
( 004DFCD5 EBE9 ) JMP 004DFCC0
( 004DFCD7 8B5D00 ) MOV EBX, [EBP]
( 004DFCDA 8D6D04 ) LEA EBP, [EBP+04]
( 004DFCDD C3 ) NEXT,
( 46 bytes, 22 instructions )
ok

Raymond Wiker

Jun 26, 2016, 3:55:58 AM
hughag...@gmail.com writes:

> I don't really know what you are talking about. I don't actually have
> any interest in learning Common Lisp --- the language is just gigantic
> and complicated, and likely slow --- I still might learn Chicken
> Scheme though, as it has a 64-bit x86 assembler that could be very
> useful for compiler-writing.

You really, *really* have no clue about Common Lisp. That sort of
suggests an assignment for you.

Marco Antoniotti

Jun 26, 2016, 10:57:29 AM
On Sunday, June 26, 2016 at 6:54:22 AM UTC+2, hughag...@gmail.com wrote:

>
> I think what you are saying is that DECLARE will cause a type-check to execute at run-time with an abort if the datum is of the wrong type,

No


> and this also allows the compiler to compile code on the assumption that the datum is of the correct type --- OTOH, CHECK-TYPE allows the compiler to compile code on the assumption that the datum is of the correct type, but there is no type-check done at run-time so if the datum is not of the correct type then chaos will result. This doesn't make much sense though because the name CHECK-TYPE seems to imply that there is a type-check being done at run-time.

No.

> I don't really know what you are talking about. I don't actually have any interest in learning Common Lisp --- the language is just gigantic and complicated, and likely slow

Yep. It is slow compared to optimized C/C++, Fortran, OCaml and down-to-the-metal assembly.

> --- I still might learn Chicken Scheme though, as it has a 64-bit x86 assembler that could be very useful for compiler-writing.
>
> It might be interesting to see a disassembly of your two FACT functions.
>
> Here is some VFX code (I'm not doing any range checking, so these will overflow given a big N):

Here is something for your education. The code will work with multiple precision integers (this is for LW on a Mac).

CL-USER 32 > (defun factorial (n)
               (declare (type (integer 0) n))
               (if (<= n 0)
                   1
                   (* n (factorial (1- n)))))
FACTORIAL

CL-USER 33 > (compile 'factorial)
FACTORIAL
NIL
NIL

CL-USER 34 > (disassemble 'factorial)
2009798A:
0: 89E6 move esi, esp
2: 81CEFCFF0F00 or esi, FFFFC
8: 3966D0 cmp [esi-30], esp
11: 7325 jnb L3
13: 80FD01 cmpb ch, 1
16: 7520 jne L3
18: 55 push ebp
19: 89E5 move ebp, esp
21: 50 push eax
22: 8B7DFC move edi, [ebp-4]
25: 33C0 xor eax, eax
27: 89FB move ebx, edi
29: 0BD8 or ebx, eax
31: F6C303 testb bl, 3
34: 7513 jne L4
36: 3BF8 cmp edi, eax
38: 7F1A jg L5
L1: 40: BF04000000 move edi, 4
45: FD std
L2: 46: 89F8 move eax, edi
48: C9 leave
49: C3 ret
L3: 50: E881650700 call 2010DF42 ; #<Function RUNTIME:BAD-ARGS-OR-STACK 2010DF42>
L4: 55: 57 push edi
56: E8DB450700 call 2010BFA2 ; #<Function SYSTEM::*%<=$ANY-STUB 2010BFA2>
61: 83F856 cmp eax, 56
64: 75E6 jne L1
L5: 66: F645FC03 testb [ebp-4], 3
70: 752A jne L7
72: 8B45FC move eax, [ebp-4]
75: 83E804 sub eax, 4
78: 7022 jo L7
L6: 80: B501 moveb ch, 1
82: FF150CF0DF21 call [21DFF00C] ; FACTORIAL
88: 8B5DFC move ebx, [ebp-4]
91: 0BD8 or ebx, eax
93: F6C303 testb bl, 3
96: 751A jne L8
98: 8B5DFC move ebx, [ebp-4]
101: C1FB02 sar ebx, 2
104: 89C7 move edi, eax
106: 0FAFFB imul edi, ebx
109: 700D jo L8
111: FD std
112: EBBC jmp L2
L7: 114: 8B45FC move eax, [ebp-4]
117: E8DE470700 call 2010C1E2 ; #<Function SYSTEM::*%1-$ANY-STUB 2010C1E2>
122: EBD4 jmp L6
L8: 124: 8B7DFC move edi, [ebp-4]
127: 83EC04 sub esp, 4
130: 8B7500 move esi, [ebp]
133: 8975FC move [ebp-4], esi
136: 83ED04 sub ebp, 4
139: 8B7508 move esi, [ebp+8]
142: 897504 move [ebp+4], esi
145: 897D08 move [ebp+8], edi
148: C9 leave
149: E92E4B0700 jmp 2010C552 ; #<Function SYSTEM::*%*$ANY-STUB 2010C552>
NIL

CL-USER 35 > (factorial 100)
93326215443944152681699238856266700490715968264381621468592963895217599993229915608941463976156518286253697920827223758251185210916864000000000000000000000000

It will bomb (nicely) on non-integer inputs. Looks pretty much like x86 assembly to me.

Cheers
--
MA

Paul Rubin

Jun 26, 2016, 11:14:01 PM
Marco Antoniotti <mar...@gmail.com> writes:
> CL-USER 32 > (defun factorial (n)
>                (declare (type (integer 0) n))
>                (if (<= n 0)
>                    1
>                    (* n (factorial (1- n)))))

Could you try a fixnum, tail recursive version? Something like:

(defun fac (n p)
  (declare (type (and fixnum (integer 0)) n))
  (declare (type (and fixnum (integer 1)) p))
  (cond
    ((= n 0) p)
    (t (fac (1- n) (* p n)))))

(defun factorial (n) (fac n 1))

I'd expect the asm code to look pretty much like C code.

Nicolas Neuss

Jun 27, 2016, 4:40:08 AM
hughag...@gmail.com writes:

>> Note that it should be possible to extract a small routine (<100 lines)
>> performing the bottleneck of the computation. I don't want to wade
>> through thousands of lines of Forth code.
>
> The LowDraw.4th program is 890 lines long. It calculates the odds for
> LowDraw poker for various drawing strategies. It does this with a
> recursive traversal of all possible poker hands.

Too long. Please extract a small performance-critical part.

> There is also a LowDraw.txt file which is the documentation, and a
> LowDrawStrategies.4th file which contains the various drawing
> strategies.
>
> [...]
> I didn't have the novice-package available when I wrote LowDraw.4th
> and hence didn't have any general-purpose code available. For example,
> I didn't have SORT written yet, so in LowDraw.4th I just used a
> Bubble-Sort that I coded off the top of my head in the middle of the
> program.

OK, then it is anyhow unacceptably slow, unless you sort only
extremely short lists.

> I would really expect any programmer to be able to write a program
> like this from scratch in his or her favorite language without much
> difficulty.

The question is if any programmer wants to do this at your command
without you paying him.

Here is a better alternative: We take the following 26 lines of CL code
calculating a Mandelbrot set, which performs in about 1.4 secs on my
Laptop. Now you write an equivalent Forth version, you do the speed
test on it and then post the program together with your speed
measurements.

--8<---------------cut here---------------start------------->8---
(defun mandelbrot-iteration (c)
  (declare (ftype (function (complex double-float) fixnum))
           (optimize speed))
  (loop for k of-type fixnum from 1 below 1000
        for z of-type (complex double-float) = c then
          (+ (* z z) c)
        until (> (abs z) 100.0)
        finally (return k)))

(defun mandelbrot-box (x1 x2 y1 y2 nx ny &optional result-p)
  (declare (type double-float x1 x2 y1 y2)
           (type fixnum nx ny))
  (let ((dx (/ (- x2 x1) nx))
        (dy (/ (- y2 y1) ny))
        (result (make-array (list (1+ nx) (1+ ny)) :element-type 'fixnum))
        (sum 0))
    (declare (type double-float dx dy)
             (type fixnum sum))
    (loop for kx upto nx
          for x of-type double-float = x1 then (+ x1 dx) do
            (loop for ky upto ny
                  for y of-type double-float = y1 then (+ y1 dy) do
                    (incf sum
                          (setf (aref result kx ky)
                                (mandelbrot-iteration (complex x y))))))
    (if result-p result sum)))

(time (mandelbrot-box 0.0 10.0 0.0 10.0 256 256))
--8<---------------cut here---------------end--------------->8---

Nicolas

hughag...@gmail.com

Jun 27, 2016, 5:27:53 PM
On Monday, June 27, 2016 at 1:40:08 AM UTC-7, Nicolas Neuss wrote:
> hughag...@gmail.com writes:
>
> >> Note that it should be possible to extract a small routine (<100 lines)
> >> performing the bottleneck of the computation. I don't want to wade
> >> through thousands of lines of Forth code.
> >
> > The LowDraw.4th program is 890 lines long. It calculates the odds for
> > LowDraw poker for various drawing strategies. It does this with a
> > recursive traversal of all possible poker hands.
>
> Too long. Please extract a small performance-critical part.

It is a pretty simple program with a lot of comments --- I specifically chose LowDraw poker so the program would be super-simple (don't have to deal about card suits, don't have to deal with straights and flushes) --- I can't think of any probability calculation that would be simpler, while still being interesting (ye olde example of drawing colored balls out of an urn is simpler, but is pretty boring).

LowDraw poker was still being played in Gardena, CA when I wrote the program.

> > There is also a LowDraw.txt file which is the documentation, and a
> > LowDrawStrategies.4th file which contains the various drawing
> > strategies.
> >
> > [...]
> > I didn't have the novice-package available when I wrote LowDraw.4th
> > and hence didn't have any general-purpose code available. For example,
> > I didn't have SORT written yet, so in LowDraw.4th I just used a
> > Bubble-Sort that I coded off the top of my head in the middle of the
> > program.
>
> OK, then it is anyhow unacceptably slow, unless you sort only
> extremely short lists.

You seem pretty quick to declare my program to be "unacceptably slow" --- you don't know how the program works --- it is sorting arrays, not lists, and they are only 5 elements long (they are poker hands).

> > I would really expect any programmer to be able to write a program
> > like this from scratch in his or her favorite language without much
> > difficulty.
>
> The question is if any programmer wants to do this at your command
> without you paying him.

The program was interesting to me. I would expect it to be interesting to any programmer. I think probability is fascinating --- doesn't everybody?

> Here is a better alternative: We take the following 26 lines of CL code
> calculating a Mandelbrot set, which performs in about 1.4 secs on my
> Laptop. Now you write an equivalent Forth version, you do the speed
> test on it and then post the program together with your speed
> measurements.

Can you describe what this program does? I told you that my program calculates the probabilities of LowDraw poker hands given various drawing strategies --- I also said that anybody who knows basic probability should be able to write such a program from scratch --- I didn't ask people to figure out my program from my source-code without knowing what the program does, which would obviously be difficult for anybody who doesn't know Forth. In my experience with programming, it is necessary to know what a program is supposed to do, in order to write the program.

Is there a version of your program available in Scheme or C or any language other than CL? I don't know what all that LOOP and FOR stuff does --- there are a lot of words in there that I'm not familiar with.

I'm mostly interested in probability calculations --- I have never read anything about the Mandelbrot Set --- what is that used for other than making psychedelic night-glow posters for stoners' bedroom walls?

Paul Rubin

Jun 27, 2016, 9:06:20 PM
Nicolas Neuss <last...@scipolis.de> writes:
> (time (mandelbrot-box 0.0 10.0 0.0 10.0 256 256))

What return value do you get from mandelbrot-box?

Paul Rubin

Jun 27, 2016, 10:48:04 PM
I think I see what happened.

for x of-type double-float = x1 then (+ x1 dx) do
(loop for ky upto ny
for y of-type double-float = y1 then (+ y1 dy) do

was supposed to say:

for x of-type double-float = x1 then (+ x dx) do
(loop for ky upto ny
for y of-type double-float = y1 then (+ y dy) do

Nicolas Neuss

Jun 28, 2016, 2:54:12 AM
Oh, my bad. Looks as if I never really looked at the output...

:-(

Thanks for spotting the bug(s).

Nicolas

Paul Rubin

Jun 28, 2016, 2:55:15 AM
Nicolas Neuss <last...@scipolis.de> writes:
> Oh, my bad. Looks as if I never really looked at the output...

Also, sbcl doesn't accept 0.0 as a double float, I had to say 0.0d0.
What lisp were you using?

Nicolas Neuss

Jun 28, 2016, 3:02:03 AM
You're right also here.

I have

(setq *READ-DEFAULT-FLOAT-FORMAT* 'double-float)

in my initialization file. For numerical work, this is the usual
choice.

Sorry, I should have tested the program better in a standard
environment, before posting it here.

With the errors corrected, I get the following performance results on my
laptop (with a different parameter box --- the previous one was so large that
most points escape almost immediately, so the time was quite short):

(time (mandelbrot-box 0.0d0 1.0d0 0.0d0 1.0d0 256 256)) -> 1.5 secs

Nicolas

Nicolas Neuss

Jun 28, 2016, 3:14:08 AM
hughag...@gmail.com writes:

>> Here is a better alternative: We take the following 26 lines of CL code
>> calculating a Mandelbrot set, which performs in about 1.4 secs on my
>> Laptop. Now you write an equivalent Forth version, you do the speed
>> test on it and then post the program together with your speed
>> measurements.
>
> Can you describe what this program does?

It does (hopefully - and only after the errors spotted by Paul are
corrected) calculate an array of iteration numbers corresponding to
this:

https://en.wikipedia.org/wiki/Mandelbrot_set

> I told you that my program calculates the probabilities of LowDraw
> poker hands given various drawing strategies --- I also said that
> anybody who knows basic probability should be able to write such a
> program from scratch --- I didn't ask people to figure out my program
> from my source-code without knowing what the program does, which would
> obviously be difficult for anybody who doesn't know Forth. In my
> experience with programming, it is necessary to know what a program is
> supposed to do, in order to write the program.
>
> Is there a version of your program available in Scheme or C or any
> language other than CL? I don't know what all that LOOP and FOR stuff
> does --- there are a lot of words in there that I'm not familiar with.

There is pseudocode on the Wikipedia page.

> I'm mostly interested in probability calculations --- I have never
> read anything about the Mandelbrot Set --- what is that used for other
> than making psychedelic night-glow posters for stoners' bedroom walls?

It gives a nice performance test (measuring mostly looping and
arithmetic performance) :-)

What is the use of LowDraw Poker?

Nicolas


Corrected version of program:

(defun mandelbrot-iteration (c)
  (declare (ftype (function (complex double-float) fixnum))
           (optimize speed))
  (loop for k of-type fixnum from 1 below 1000
        for z of-type (complex double-float) = c then
          (+ (* z z) c)
        until (> (abs z) 100.0d0)
        finally (return k)))

(defun mandelbrot-box (x1 x2 y1 y2 nx ny &optional result-p)
  (declare (type double-float x1 x2 y1 y2)
           (type fixnum nx ny))
  (let ((dx (/ (- x2 x1) nx))
        (dy (/ (- y2 y1) ny))
        (result (make-array (list (1+ nx) (1+ ny)) :element-type 'fixnum))
        (sum 0))
    (declare (type double-float dx dy)
             (type fixnum sum))
    (loop for kx upto nx
          for x of-type double-float = x1 then (+ x dx) do
            (loop for ky upto ny
                  for y of-type double-float = y1 then (+ y dy) do
                    (incf sum
                          (setf (aref result kx ky)
                                (mandelbrot-iteration (complex x y))))))
    (if result-p result sum)))

Paul Rubin

Jun 28, 2016, 3:46:04 AM
Nicolas Neuss <last...@scipolis.de> writes:
> on my laptop:
> (time (mandelbrot-box 0.0d0 1.0d0 0.0d0 1.0d0 256 256)) -> 1.5 secs

What hardware? On an i7-3770 (a fast x86 box) and SBCL 1.0.57.0 I get
about 0.17 sec which actually beats a straightforward C implementation.
I might try Fortran. What do you get as the total # of iterations?
There is a slight discrepancy between the lisp version (13950225) and my
C version (13959941) but I might have misunderstood the loop boundaries
or something.

Nicolas Neuss

Jun 28, 2016, 4:05:26 AM
I get the same iteration count as you report for SBCL.

With respect to time, probably my Laptop was in Powersave mode when I
measured the above. I see now 0.3 secs for performance mode. The CPU
of my laptop is a i7-4600.

Nicolas

P.S.: I really should plot the output to be sure that it is not
calculating nonsense. Will try.

hughag...@gmail.com

Jun 28, 2016, 5:01:41 AM
On Tuesday, June 28, 2016 at 1:05:26 AM UTC-7, Nicolas Neuss wrote:
> Paul Rubin <no.e...@nospam.invalid> writes:
>
> > Nicolas Neuss <last...@scipolis.de> writes:
> >> on my laptop:
> >> (time (mandelbrot-box 0.0d0 1.0d0 0.0d0 1.0d0 256 256)) -> 1.5 secs
> >
> > What hardware? On an i7-3770 (a fast x86 box) and SBCL 1.0.57.0 I get
> > about 0.17 sec which actually beats a straightforward C
> > implementation. I might try Fortran. What do you get as the total #
> > of iterations? There is a slight discrepancy between the lisp version
> > (13950225) and my C version (13959941) but I might have misunderstood
> > the loop boundaries or something.
>
> I get the same iteration count as you report for SBCL.
>
> With respect to time, probably my Laptop was in Powersave mode when I
> measured the above. I see now 0.3 secs for performance mode. The CPU
> of my laptop is a i7-4600.
>
> Nicolas

This seems like an overly complicated benchmark. You and Paul don't agree on the iteration count. Since I don't know what the point of any of this is, I have no opinion on what the iteration count should be. Also, if the program takes 170 or 300 milliseconds or thereabouts to run, you aren't going to get very meaningful timing results. My LowDraw.4th program takes much longer to run, so the timing is more meaningful. Also, my program is large enough that the compactness of the code generated becomes an issue (cache-thrashing is the primary speed killer on the x86).

I wish now that I had not brought up the subject of bench-marking --- this is a can of worms that we don't want to open! It is meaningless to the level of being stupid to run bench-mark programs on a multi-tasking OS. Also, even if you dodge this problem, the results are still pretty much meaningless because the contents of the code-cache and data-cache prior to running the program will have a huge effect --- the results are going to be wildly different from one run to the next (because one run preps the caches for the next run).

The biggest problem with optimizing the x86 is that nobody knows how the x86 works. This is because the chip-manufacturers consider this information to be highly proprietary. Also, there is more than one chip manufacturer, and each manufacturer's chips have different internal workings. We know that the x86 code is compiled into a low-level machine-code on the fly, but we don't know anything at all about this low-level machine-code. We get some hints regarding optimization, such as Intel's book-two on Optimization, but they are pretty vague hints. None of this stuff is quantified at all, so we don't know how important one technique is compared to another technique --- we don't know how the techniques affect each other when used at the same time --- the optimization techniques are very similar to religious rituals that supposedly affect your results, but nobody can explain the mechanism.

> P.S.: I really should plot the output to be sure that it is not
> calculating nonsense. Will try.

So, if it produces a psychedelic image, then you know that your code is correct? Maybe you can poll all of the stoners in your neighborhood to determine the quality of your program's results. Far out, man!

By comparison, my LowDraw.4th program calculates a result that can be verified for correctness. All the probabilities have to add up to 1.0, for sure. Also, you can hand-calculate some of the rare poker hands (quads, for example) to compare against the program's result --- admittedly, I didn't actually do this --- I just looked over the code and I could see that it was obviously correct, as it was pretty simple and straightforward.
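
For what it's worth, the quads case really is easy to check by hand: 13 possible ranks for the four of a kind times 48 choices for the fifth card gives 624 hands out of C(52,5) = 2,598,960, i.e. 1 in 4165, about 0.00024. A minimal C sketch of that back-of-the-envelope check (hypothetical, and for dealt five-card hands rather than the draw scenarios LowDraw.4th actually tabulates):

#include <stdio.h>

/* 13 possible ranks for the quads times 48 possible fifth cards,
   out of C(52,5) = 2,598,960 five-card hands. */
int main(void)
{
    double hands = 52.0 * 51 * 50 * 49 * 48 / (5 * 4 * 3 * 2 * 1);
    double quads = 13.0 * 48.0;
    printf("C(52,5)  = %.0f\n", hands);          /* 2598960 */
    printf("P(quads) = %.8f\n", quads / hands);  /* about 0.00024, i.e. 1 in 4165 */
    return 0;
}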

Nicolas Neuss

unread,
Jun 28, 2016, 5:02:12 AM6/28/16
to
Nicolas Neuss <last...@scipolis.de> writes:

> I get the same iteration count as you report for SBCL.
>
> With respect to time, probably my Laptop was in Powersave mode when I
> measured the above. I see now 0.3 secs for performance mode. The CPU
> of my laptop is a i7-4600.
>
> Nicolas
>
> P.S.: I really should plot the output to be sure that it is not
> calculating nonsense. Will try.

OK, I have plotted the result of

(mandelbrot-box -2.0d0 1.0d0 -1.0d0 1.0d0 256 256 t)

with (using some Femlisp picture routines)

(defun array-to-picture (array)
  (lret* ((m (array-dimension array 0))
          (n (array-dimension array 1))
          (pic (fl.picture:make-picture
                m n :element-type '(unsigned-byte 8))))
    (dotimes (i m)
      (dotimes (j n)
        (setf (fl.picture:pic-ref pic i j)
              (min (* 10 (aref array i j)) 255))))))

(fl.picture::write-pgm
 (array-to-picture
  (mandelbrot-box -2.0d0 1.0d0 -1.0d0 1.0d0 256 256 t))
 :filename #p"femlisp:images;mandelbrot.pgm")

to a PGM file. This results in

http://scipolis.de/misc/mandelbrot.pgm

So it seems that my code now computes at least something related to the
Mandelbrot set:-)

Nicolas

Nicolas Neuss

unread,
Jun 28, 2016, 5:27:30 AM6/28/16
to
hughag...@gmail.com writes:

> This seems like an overly complicated benchmark.

If 26 lines of CL are overly complicated for you, why do you think that
others would accept 900 lines of Forth as a benchmark?

> You and Paul don't agree on the iteration count.

Yes, we do. On SBCL. To make it agree completely, we would have to
check his C++ code, and maybe also which FPU settings are active, etc.

> Since I don't know what the point of any of this is, I have no opinion
> on what the iteration count should be. Also, if the program takes 170
> or 300 milliseconds or thereabouts to run, you aren't going to get
> very meaningful timing results. My LowDraw.4th program takes much
> longer to run, so the timing is more meaningful.

The easy remedy is to make the resolution finer. E.g.,

(time (mandelbrot-box -2.0d0 1.0d0 -1.0d0 1.0d0 1024 1024))

takes 5 seconds.

[How long does it have to run, for being "better" (in your quality
metric for performance testing) than your LowDraw program? I can
provide you with suitable parameters...]

> Also, my program is large enough that the compactness of the code
> generated becomes an issue (cache-thrashing is the primary speed
> killer on the x86).

For certain kinds of applications, maybe. Not for the Mandelbrot
calculation.

> I wish now that I had not brought up the subject of bench-marking ---
> this is a can of worms that we don't want to open! It is meaningless
> to the level of being stupid to run bench-mark programs on a
> multi-tasking OS.

Nonsense. Or maybe, if you don't know what you're doing.

> Also, even if you dodge this problem, the results are still pretty
> much meaningless because the contents of the code-cache and data-cache
> prior to running the program will have a huge effect --- the results
> are going to be wildly different from one run to the next (because one
> run preps the caches for the next run).

See above.

> The biggest problem with optimizing the x86 is that nobody knows how
> the x86 works. This is because the chip-manufacturers consider this
> information to be highly proprietary. Also, there are more than one
> chip manufacturer, and each of their chips have different internal
> workings. We know that the x86 code is compiled into a low-level
> machine-code on the fly, but we don't know anything at all about this
> low-level machine-code. We get some hints regarding optimization, such
> as Intel's book-two on Optimization, but they are pretty vague
> hints. None of this stuff is quantified at all, so we don't know how
> important one technique is compared to another technique --- we don't
> know how the techniques affect each other when used at the same time
> --- the optimization techniques are very similar to religious rituals
> that supposedly affect your results, but nobody can explain the
> mechanism.

Of course, there are people devoting their lives to making performance
testing as accurate as possible, and those people have to consider all
these kinds of problems. But there is also information which can be
obtained reliably and easily by very simple tests (like my Mandelbrot
test program). E.g., that something like interpreted Python will do
this Mandelbrot set calculation slowly, while compiled C++ or CL code will
do it much faster.

>> P.S.: I really should plot the output to be sure that it is not
>> calculating nonsense. Will try.
>
> So, if it produces a psychedelic image, then you know that your code
> is correct? Maybe you can poll all of the stoners in your neighborhood
> to determine the quality of your program's results. Far out, man!
>
> By comparison, my LowDraw.4th program calculates a result that can be
> verified for correctness. All the probabilities have to add up to 1.0,
> for sure.

I very much doubt that they do add up precisely to 1.0, at least if you
should use floating point arithmetic...

> Also, you can hand-calculate some of the rare poker hands
> (quads, for example) to compare against the program's result ---
> admittedly, I didn't actually do this --- I just looked over the code
> and I could see that it was obviously correct, as it was pretty simple
> and straightforward.

That's what I had done as well - and failed (the bugs discovered by
Paul). Now, I am a little more on the safe side, because of my
psychedelic picture.

:-)

Nicolas

WJ

unread,
Jun 28, 2016, 5:30:40 AM6/28/16
to
Nicolas Neuss wrote:

> (defun mandelbrot-iteration (c)
> (declare (ftype (function (complex double-float) fixnum))
> (optimize speed))
> (loop for k of-type fixnum from 1 below 1000
> for z of-type (complex double-float) = c then
> (+ (* z z) c)
> until (> (abs z) 100.0d0)

100.0 ?

I thought that 2.0 was customary.

> finally (return k)))

Nicolas Neuss

unread,
Jun 28, 2016, 6:03:51 AM6/28/16
to
It does not matter much. The limit 100 takes a few more iterations, but
leaves more "space" for coloring the region outside the Mandelbrot set
nicely. The hard limit 2 would make the coloring "non-smooth" at the
point (-2,0) where the Mandelbrot set touches the limiting disc.

Nicolas

hughag...@gmail.com

unread,
Jun 28, 2016, 2:31:44 PM6/28/16
to
On Tuesday, June 28, 2016 at 2:27:30 AM UTC-7, Nicolas Neuss wrote:
> hughag...@gmail.com writes:
> > ...if the program takes 170
> > or 300 milliseconds or thereabouts to run, you aren't going to get
> > very meaningful timing results. My LowDraw.4th program takes much
> > longer to run, so the timing is more meaningful.
>
> The easy remedy is to making the resolution finer. E.g,
>
> (time (mandelbrot-box -2.0d0 1.0d0 -1.0d0 1.0d0 1024 1024))
>
> takes 5 seconds.
>
> [How long does it have to run, for being "better" (in your quality
> metric for performance testing) than your LowDraw program? I can
> provide you with suitable parameters...]

5 or 10 seconds should be adequate.

> > Also, my program is large enough that the compactness of the code
> > generated becomes an issue (cache-thrashing is the primary speed
> > killer on the x86).
>
> For certain kinds of applications, maybe. Not for the Mandelbrot
> calculation.

Not just "certain kinds of applications" --- all real-world programs are large enough that cache-thrashing is the over-whelming factor in affecting speed.

This is why bench-marking with small programs such as yours has become largely obsolete with the advent of computers that have a cache --- back in the old days of MS-DOS bench-marking was a big deal and all of the compiler-writers were publishing their results.
You're being totally fake now! The bottom line here is that the chip manufacturers haven't told anybody how their chips work internally, and they aren't going to --- this is proprietary because this determines how fast their chip executes software and they don't want the other chip manufacturers to learn their secrets.

The x86 machine-code is not executed directly. It is a VM that gets JIT-compiled into a machine-code that is then executed. This JIT-compilation is done internally within the chip; the machine-code is stored and executed internally within the chip; there is no way to know anything about this because it is taking place inside of the chip and nobody has any way of peering into the chip to find out what is going on in there. The chip manufacturers don't tell us anything about the machine-language that their chip executes internally --- anybody who claims to know anything about this subject either has insider knowledge from the chip manufacturer (which means they are breaking a non-disclosure agreement and will get sued), or they are just faking up expertise on a subject that they know nothing about.

> Of course, there are people devoting their life for making performance
> testing as accurate as possible, and those people have to consider all
> these kinds of problems. But there is also information which can be
> obtained reliably and easily by very simple tests (like my Mandelbrot
> test program). E.g., that something like interpreted Python will do
> this Mandelbrot set calculation slow, while compiled C++ or CL code will
> do it much faster.

Well, there are many ways to waste one's life...

Anyway, if you have the bugs worked out of your code, I will port it over to VFX.

It would help if you emailed me a file with the array of data that you generate, so I can compare my results to it to verify that my program is doing the same thing that your program is doing.

Also, can you explain what FOR and LOOP and all that do? It is likely a boundary bug in the loops that is causing the discrepancy between iteration counts for you and Paul, so apparently he is not clear on this either.

> >> P.S.: I really should plot the output to be sure that it is not
> >> calculating nonsense. Will try.
> >
> > So, if it produces a psychedelic image, then you know that your code
> > is correct? Maybe you can poll all of the stoners in your neighborhood
> > to determine the quality of your program's results. Far out, man!
> >
> > By comparison, my LowDraw.4th program calculates a result that can be
> > verified for correctness. All the probabilities have to add up to 1.0,
> > for sure.
>
> I very much doubt that they do add up precisely to 1.0, at least if you
> should use floating point arithmetic...

That last sentence isn't grammatical. Are you saying that I should use floating-point?

I am using floating-point. I agree that there is going to be round-off error, but this is double-precision IEEE-754 so it is pretty close --- round-off error is caused by adding numbers of very different magnitudes, and by dividing numbers, but neither are done in probability calculations.

The point I'm making is that probability is pretty easy --- also, there are no arbitrary parameters such as in your Mandelbrot Set program that have been tweaked to enhance the psychedelic effect --- my program is uncomplicated and easy to understand.

What has always attracted me to Forth was its simplicity. I like not having a syntax! ANS-Forth was a big step backwards because it was overly complicated and gratuitously ambiguous, and this is largely what killed Forth's popularity (I wasn't the only one attracted to Forth because it was simple and easy to understand). Now we have Forth-200x that is more of the same. Forth-200x has "recognizers" which are an attempt at giving Forth a syntax.

The more I look at your code, the less I want to learn CL. Your FOR and LOOP etc. look like syntax to me --- is this considered to be functional programming? --- all in all, I think that learning CL is not for me.

Anyway, this is the latest-and-greatest (well; most bug-free) version of the code that I intend to port to VFX:

----------------------------------------------------------------------------

Corrected version of program:

(defun mandelbrot-iteration (c)
  (declare (ftype (function (complex double-float) fixnum))
           (optimize speed))
  (loop for k of-type fixnum from 1 below 1000
        for z of-type (complex double-float) = c then
          (+ (* z z) c)
        until (> (abs z) 100.0d0)
        finally (return k)))

(defun mandelbrot-box (x1 x2 y1 y2 nx ny &optional result-p)
  (declare (type double-float x1 x2 y1 y2)
           (type fixnum nx ny))
  (let ((dx (/ (- x2 x1) nx))
        (dy (/ (- y2 y1) ny))
        (result (make-array (list (1+ nx) (1+ ny)) :element-type 'fixnum))
        (sum 0))
    (declare (type double-float dx dy)
             (type fixnum sum))
    (loop for kx upto nx
          for x of-type double-float = x1 then (+ x dx) do
            (loop for ky upto ny
                  for y of-type double-float = y1 then (+ y dy) do
                    (incf sum
                          (setf (aref result kx ky)
                                (mandelbrot-iteration (complex x y))))))
    (if result-p result sum)))

Paul Rubin

unread,
Jun 28, 2016, 2:52:48 PM6/28/16
to
Nicolas Neuss <last...@scipolis.de> writes:
> Yes, we do. On SBCL. To make it agree completely, we would have to
> check his C++ code, and maybe also which FPU settings are active, etc.

I noticed that SBCL complained about 0.0 not being a double, but it
didn't complain about (> (abs z) 100.0). That suggests that the abs
calculation is being done in single precision. That could explain both
the numerical discrepancy and how SBCL manages to be faster than C.

The C code is here. It only counts the iterations--it doesn't build up
the result matrix.

================================================================

#include <complex.h>
#include <stdio.h>

const int mandelbrot_iteration(const double complex c)
{
    int k = 1;
    double complex z = c;
    do {
        z = z*z + c;
    } while (k++ < 1000 && cabs(z) <= 100.0);
    return k;
}

const int mandelbrot_box(const double x1, const double x2,
                         const double y1, const double y2,
                         const int nx, const int ny)
{
    const double dx = (x2-x1) / nx;
    const double dy = (y2-y1) / ny;
    double x, y;
    int sum = 0;
    int i, j;

    for (i=0, x=x1; i < nx; i++, x+=dx) {
        for (j=0, y=y1; j < ny; j++, y+=dy) {
            sum += mandelbrot_iteration(x+I*y);
        }
    }
    return sum;
}

int main()
{
    printf("%d\n", mandelbrot_box(0.0, 1.0, 0.0, 1.0, 256, 256));
    return 0;
}
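
To get the same visual sanity check that the Lisp version gets from write-pgm, the counts could also be dumped from C. A sketch of a hypothetical helper (not part of the program above; it reuses mandelbrot_iteration and applies the same 10x-and-clamp scaling before writing a plain-text P2 PGM):

/* Hypothetical helper, appended to the program above: dump the counts
   as a plain-text (P2) PGM, scaled the same way as the Lisp version
   (10x, clamped to 255), so the picture can be eyeballed. */
void mandelbrot_pgm(const char *filename,
                    const double x1, const double x2,
                    const double y1, const double y2,
                    const int nx, const int ny)
{
    const double dx = (x2-x1) / nx;
    const double dy = (y2-y1) / ny;
    FILE *f = fopen(filename, "w");
    if (!f) return;
    fprintf(f, "P2\n%d %d\n255\n", nx + 1, ny + 1);   /* width, height, maxval */
    for (int j = 0; j <= ny; j++) {                   /* one row of pixels per y */
        for (int i = 0; i <= nx; i++) {
            int v = 10 * mandelbrot_iteration((x1 + i*dx) + I*(y1 + j*dy));
            fprintf(f, "%d\n", v > 255 ? 255 : v);    /* one sample per line */
        }
    }
    fclose(f);
}

Calling mandelbrot_pgm("mandelbrot.pgm", -2.0, 1.0, -1.0, 1.0, 256, 256) from main() would then give roughly the same picture as the Femlisp plot.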

Raymond Wiker

unread,
Jun 28, 2016, 4:06:01 PM6/28/16
to
Paul Rubin <no.e...@nospam.invalid> writes:

> Nicolas Neuss <last...@scipolis.de> writes:
>> Yes, we do. On SBCL. To make it agree completely, we would have to
>> check his C++ code, and maybe also which FPU settings are active, etc.
>
> I noticed that SBCL complained about 0.0 not being a double, but it
> didn't complain about (> (abs z) 100.0). That suggests that the abs
> calculation is being done in single precision. That could explain both
> the numerical discrepancy and how SBCL manages to be faster than C.

z is a complex double, so I'd expect (abs z) to be a real double.

Paul Rubin

unread,
Jun 28, 2016, 4:17:01 PM6/28/16
to
Raymond Wiker <rwi...@gmail.com> writes:
> z is a complex double, so I'd expect (abs z) to be a real double.

But then the comparison with 100.0 would result in a downcast, which
could maybe make the compiler compute (abs z) as a single?

Also I recompiled the program with -ffast-math (gcc 6.1) which loosens
its IEEE arithmetic conformance. To my surprise that sped the program
by about 4x, from 0.2 sec to 0.05 sec. The result also came out
slightly different, 13959941 (fast) instead of 13959797 (regular). That
seems ok: the boundary of the Mandelbrot set is very chaotic and that's
what makes it interesting. Maybe sbcl is doing some similar
optimizations.

hughag...@gmail.com

unread,
Jun 28, 2016, 4:52:28 PM6/28/16
to
On Tuesday, June 28, 2016 at 1:17:01 PM UTC-7, Paul Rubin wrote:
> Raymond Wiker <rwi...@gmail.com> writes:
> > z is a complex double, so I'd expect (abs z) to be a real double.
>
> But then the comparison with 100.0 would result in a downcast, which
> could maybe make the compiler compute (abs z) as a single?

So, you take the ABS of both the real and imaginary parts and you continue your loop if both of them are <= 100.0 --- correct?

Or maybe you ignore the imaginary part and take the ABS of the real part and you continue your loop if it is <= 100.0 --- is this correct?

Which is being done is not clear from either the Lisp or C code.

WJ

unread,
Jun 28, 2016, 5:51:10 PM6/28/16
to
hughag...@gmail.com wrote:

> So, you take the ABS of both the real and imaginary parts
> and you continue your loop if both of them are <= 100.0 --- correct?

In Common Lisp, ABS yields the magnitude of the complex number;
i.e., the distance from the origin.

Example:

(abs #c(5.0 -5.0)) => 7.071068

The correct way to calculate the mandelbrot set is to exit
when the magnitude is > 2.0, not 100.0.
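
(The usual justification for a bailout of 2: the whole Mandelbrot set lies inside the disc |c| <= 2, and once |z| > 2 with |c| <= 2 we have |z^2 + c| >= |z|^2 - |c| >= |z|^2 - 2 > |z|, so the orbit keeps growing and escapes to infinity. A larger bailout such as 100 can only change the reported iteration counts, not which points eventually escape.)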

Madhu

unread,
Jun 28, 2016, 10:12:58 PM6/28/16
to

* hughag...@gmail.com <b04666b9-33da-4153...@googlegroups.com> :
Wrote on Tue, 28 Jun 2016 11:31:40 -0700 (PDT):

| Also, can you explain what FOR and LOOP and all that do? It is likely
| a boundary bug in the loops that is causing the discrepancy between
| iteration counts for you and Paul, so apparently he is not clear on
| this either.

For your convenience, I'm appending a 1 minute translation to C-like
infix pseudocode of just the loops, following the lisp code. I hope I
have transcribed it correctly: LOOP does have some tricky corner cases,
but these seemed straightforward.

As for the boundary conditions in Paul Rubin's code, for CL:LOOP
the UPTO keyword includes the "bounds" - so
(loop for x upto 3 collect x) => (0 1 2 3)

I hope you can take 5 minutes to read both the loop constructs and the
translation and understand how the loop keywords map to the control
structure. As LOOP is supposed to be a convenience language, it is
meant to be "intuitive" and understood, like english, and the macro
compiles this to the language. This is one of the principles behind
CL's "syntax" (or lack thereof), functions and macros - is, ideally
*CL CODE* should just read like it *IS PSEUDOCODE*

| (defun mandelbrot-iteration (c)
| (declare (ftype (function (complex double-float) fixnum))
| (optimize speed))
| (loop for k of-type fixnum from 1 below 1000
| for z of-type (complex double-float) = c then
| (+ (* z z) c)
| until (> (abs z) 100.0d0)
| finally (return k)))

k = 1 ; z = c
while ((k < 1000) and (not ((abs z) > 100.0))) {
k = k + 1
z = (z * z) + c
}
return k

| (defun mandelbrot-box (x1 x2 y1 y2 nx ny &optional result-p)
| (declare (type double-float x1 x2 y1 y2)
| (type fixnum nx ny))
| (let ((dx (/ (- x2 x1) nx))
| (dy (/ (- y2 y1) ny))
| (result (make-array (list (1+ nx) (1+ ny)) :element-type 'fixnum))
| (sum 0))
| (declare (type double-float dx dy)
| (type fixnum sum))
| (loop for kx upto nx
| for x of-type double-float = x1 then (+ x dx) do
| (loop for ky upto ny
| for y of-type double-float = y1 then (+ y dy) do
| (incf sum
| (setf (aref result kx ky)
| (mandelbrot-iteration (complex x y))))))
| (if result-p result sum)))

kx = 0
x = x1
while (kx <= nx) {
ky = 0
y = y1
while (ky <= ny) {
result[kx][ky] = mandelbrot( x, y )
sum = sum + result[kx][ky]
ky = ky + 1
y = y + dy
}
kx = kx + 1
x = x + dx
}

Raymond Wiker

unread,
Jun 29, 2016, 12:49:29 AM6/29/16
to
hughag...@gmail.com writes:

> On Tuesday, June 28, 2016 at 1:17:01 PM UTC-7, Paul Rubin wrote:
>> Raymond Wiker <rwi...@gmail.com> writes:
>> > z is a complex double, so I'd expect (abs z) to be a real double.
>>
>> But then the comparison with 100.0 would result in a downcast, which
>> could maybe make the compiler compute (abs z) as a single?
>
> So, you take the ABS of both the real and imaginary parts and you
> continue your loop if both of them are <= 100.0 --- correct?
>
> Or maybe you ignore the imaginary part and take the ABS of the real
> part and you continue your loop if it is <= 100.0 --- is this correct?

"None of the above" - for complex numbers, abs returns the the magnitude
or modulus of the number: (sqrt (+ (* x x) (* y y)))

See http://www.lispworks.com/documentation/HyperSpec/Body/f_abs.htm#abs

WJ

unread,
Jun 29, 2016, 3:10:24 AM6/29/16
to
Nicolas Neuss wrote:

> (defun mandelbrot-iteration (c)
> (declare (ftype (function (complex double-float) fixnum))
> (optimize speed))
> (loop for k of-type fixnum from 1 below 1000

Off by one. If c is already too large when the function is called,
the result should be zero.

WJ

unread,
Jun 29, 2016, 3:30:11 AM6/29/16
to
This version doesn't use a complex-number type; just
floats.

OCaml:

let max_iteration = 1000;;

(* Based on the wikipedia pseudocode *)
let count_iterations x0 y0 =
  let rec loop iteration x y =
    let xx = x *. x and yy = y *. y in
    (* Exit when magnitude is >= 2.0 *)
    if (xx +. yy) < 4.0 && iteration < max_iteration then
      loop (iteration+1) (xx -. yy +. x0) (2.0 *. x *. y +. y0)
    else
      iteration
  in loop 0 0.0 0.0 ;;

let mandelbrot_box x1 x2 y1 y2 nx ny =
  let dx = (x2 -. x1) /. float nx and
      dy = (y2 -. y1) /. float ny and
      sum = ref 0 in
  for kx = 0 to nx do
    let x = dx *. float kx +. x1 in
    for ky = 0 to ny do
      let y = dy *. float ky +. y1 in
      sum := !sum + count_iterations x y
    done
  done ;
  !sum ;;

let t = Sys.time () in
print_int (mandelbrot_box 0.0 1.0 0.0 1.0 256 256) ;
Printf.printf " %.3f\n" ((Sys.time ()) -. t)


On an old WinXP laptop, the result is:

13808106 0.313

Nicolas Neuss

unread,
Jun 29, 2016, 9:53:05 AM6/29/16
to
Depends on how you define the iteration. My version always starts with
z0 := 0. Then the first step leads to z1 := z0^2+c = c.

Nicolas

Pascal J. Bourguignon

unread,
Jun 29, 2016, 2:57:25 PM6/29/16
to
Paul Rubin <no.e...@nospam.invalid> writes:

> Nicolas Neuss <last...@scipolis.de> writes:
>> Oh, my bad. Looks as if I never really looked at the output...
>
> Also, sbcl doesn't accept 0.0 as a double float, I had to say 0.0d0.

This would depend on a user setting:

* (setf *read-default-float-format* 'double-float)

DOUBLE-FLOAT
* (type-of 0.0)

DOUBLE-FLOAT
* (setf *read-default-float-format* 'single-float)

SINGLE-FLOAT
* (type-of 0.0)

SINGLE-FLOAT
*

--
__Pascal Bourguignon__ http://www.informatimago.com/
“The factory of the future will have only two employees, a man and a
dog. The man will be there to feed the dog. The dog will be there to
keep the man from touching the equipment.” -- Carl Bass CEO Autodesk

Rob Warnock

unread,
Jun 29, 2016, 5:15:05 PM6/29/16
to
Pascal J. Bourguignon <p...@informatimago.com> wrote:
+---------------
| * (setf *read-default-float-format* 'double-float)
|
| DOUBLE-FLOAT
| * (type-of 0.0)
|
| DOUBLE-FLOAT
+---------------

And while we're on that, another look at ABS:

* #c(3.0 4.0)

#C(3.0 4.0)
* (abs *)

5.0
* (type-of *)

DOUBLE-FLOAT
*

[This is CMUCL, FWIW...]


-Rob

-----
Rob Warnock <rp...@rpw3.org>
627 26th Avenue <http://rpw3.org/>
San Mateo, CA 94403

WJ

unread,
Jun 29, 2016, 7:05:11 PM6/29/16
to
Madhu wrote:

> As LOOP is supposed to be a convenience language, it is
> meant to be "intuitive" and understood, like english, and the macro
> compiles this to the language. This is one of the principles behind
> CL's "syntax" (or lack thereof), functions and macros - is, ideally
> *CL CODE* should just read like it *IS PSEUDOCODE*

Paul Graham:

I consider Loop one of the worst flaws in CL, and an example
to be borne in mind by both macro writers and language designers.

[ In "ANSI Common Lisp", Graham makes the following comments: ]

The loop macro was originally designed to help inexperienced
Lisp users write iterative code. Instead of writing Lisp code,
you express your program in a form meant to resemble English,
and this is then translated into Lisp. Unfortunately, loop is
more like English than its designers ever intended: you can
use it in simple cases without quite understanding how it
works, but to understand it in the abstract is almost
impossible.
....
the ANSI standard does not really give a formal specification
of its behavior.
....
The first thing one notices about the loop macro is that it
has syntax. A loop expression contains not subexpressions but
clauses. The clauses are not delimited by parentheses;
instead, each kind has a distinct syntax. In that, loop
resembles traditional Algol-like languages. But the other
distinctive feature of loop, which makes it as unlike Algol as
Lisp, is that the order in which things happen is only
loosely related to the order in which the clauses occur.
....
For such reasons, the use of loop cannot be recommended.



Dan Weinreb, one of the designers of Common Lisp:

... the problem with LOOP was that it turned out to be hard to
predict what it would do, when you started using a lot of
different facets of LOOP all together. This is a serious problem
since the whole idea of LOOP was to let you use many facets
together; if you're not doing that, LOOP is overkill.



Barry Margolin, 05 Apr 2001
(http://groups.google.com/group/comp.lang.lisp/msg/8a48cedb8b3bcb52)

>(My second rule of thumb concerning LOOP would be the negative of
>Barry Margolin's: The more complex the looping, the more you need/want
>to use LOOP.)

My recommendation is based on seeing many question in the past of the form
"What happens if you use both XXX and YYY in the same LOOP?" The
unfortunate fact is that when we were writing the standard we didn't have
time to nail down all the possible interactions between different LOOP
features, so many of these are not well specified. And even if we did get
it right in the standard, it's likely to be difficult to find them and I
wouldn't trust that all implementors got it right (many of those questions
were probably from implementors, trying to figure out what they were
supposed to do). And even if they all got it right, someone reading your
code may not be able to figure it out.

So, with all those potential problems, my feeling is that if you have to
ask, it's probably better to use something other than LOOP.



Barry Margolin:

> 3. Loop is very powerful, granted, and many people are trying to
> argue that "you can do so much with loop that it's unreadable."
> This is not an argument.

But it is! Because any use of LOOP has the potential to be
unreadable, the reader must read it carefully to verify that
it's just one of the cases that doesn't require careful
reading!



Barry Margolin: (05 Apr 2002 20:57:48 GMT)

This seems like a big change just to clean up the way LOOP is described.
And LOOP will still be a wart, because it will be the only language feature
that uses "per-macro keywords". Providing this interface and giving a name
to them would encourage other macro designers to do something similar, and
we don't want more things like LOOP.



From: John Foderaro <j...@unspamx.franz.com>
Newsgroups: comp.lang.lisp
Subject: Re: the "loop" macro
Date: Sun, 26 Aug 2001 10:51:26 -0700

I'm not trying to join a debate on loop. I just wanted to present
the other side of [the issue so that] the intelligent people can
then weigh the arguments on both sides.

I'm not suggesting that loop can be fixed either by adding
parenthesis or coming up with ways of indenting it to make it
understandable. It's a lost cause.

hughag...@gmail.com

unread,
Jun 30, 2016, 5:02:33 AM6/30/16
to
On Tuesday, June 28, 2016 at 1:17:01 PM UTC-7, Paul Rubin wrote:
The Mandelbrot Set is very chaotic and that's what makes it a terrible choice for a benchmark, because the number of iterations varies from one language to another and even within the same language depending upon optimization configuration. Note that the x87 uses 80-bit floats, but when they are stored to memory they are 64-bit IEEE-754 floats. The more optimized code will typically hold the intermediate values on the x87 stack rather than store them into local variables and then fetch them out again a little bit later --- so the more optimized code is not only faster, but also slightly more accurate.

Anyway, I ported Paul's C program over to ANS-Forth, but I'm getting yet a different number of iterations. Paul, since you know ANS-Forth and it is your C program, would you care to look at my program and tell me if it is a correct port? I know C, but maybe there is some subtle tricky aspect that I'm not understanding. Send me an email if you want to see my program, and I'll email it back to you. Note that it doesn't use the novice-package at all, but it is just plain ANS-Forth.

Nicolas Neuss

unread,
Jun 30, 2016, 5:05:39 AM6/30/16
to
I can confirm this. With these optimization settings ("g++ -ffast-math
-O") your C++ version is about 4-5 times faster than my SBCL version.
This is not quite the order of magnitude claimed by Hugh, but also not
really small.

The precise reason for this speed difference would be interesting.
Maybe g++ does some vectorization for the complex arithmetic.

Nicolas

Nicolas Neuss

unread,
Jun 30, 2016, 5:27:54 AM6/30/16
to
hughag...@gmail.com writes:

> On Tuesday, June 28, 2016 at 1:17:01 PM UTC-7, Paul Rubin wrote:
>> Raymond Wiker <rwi...@gmail.com> writes:
>> > z is a complex double, so I'd expect (abs z) to be a real double.
>>
>> But then the comparison with 100.0 would result in a downcast, which
>> could maybe make the compiler compute (abs z) as a single?
>>
>> Also I recompiled the program with -ffast-math (gcc 6.1) which loosens
>> its IEEE arithmetic conformance. To my surprise that sped the program
>> by about 4x, from 0.2 sec to 0.05 sec. The result also came out
>> slightly different, 13959941 (fast) instead of 13959797 (regular). That
>> seems ok: the boundary of the Mandelbrot set is very chaotic and that's
>> what makes it interesting. Maybe sbcl is doing some similar
>> optimizations.
>
> The Mandelbrot Set is very chaotic and that's what makes it a terrible
> choice for a benchmark, because the number of iterations varies from
> one language to another and even within the same language depending
> upon optimization configuration.

I don't see how you can avoid this when you use floating-point
arithmetic and look at the results in detail.

> Note that the x87 uses 80-bit floats, but when they are stored to
> memory they are 64-bit IEEE-754 floats. The more optimized code will
> typically hold the intermediate values on the x87 stack rather than
> store them into local variables and then fetch them out again a little
> bit later --- so the more optimized code is not only faster, but also
> slightly more accurate.

Why the results of my Mandelbrot or your Poker program nevertheless make
sense is that it does not really matter if the iteration count differs
by some small number or if your probabilities add up precisely to 1.

Caveat: I don't know of an error estimate that these inaccuracies do not
matter for the Mandelbrot set! I am even convinced, that they matter
when looking at the set in more detail. Precisely at the chaotic
boundaries the problem is very ill-conditioned (slight changes in the
input lead to large differences in the iteration number), so from a
numerical analyst point of view I am astonished that something like
these very high-resolution zooms you can find on Youtube are possible at
all. I would be interested in seeing error estimates for these
calculations. The safest way would be, of course, to do them completely
with interval arithmetic.

Nicolas

paul wallich

unread,
Jun 30, 2016, 10:19:02 AM6/30/16
to
On 6/30/16 5:27 AM, Nicolas Neuss wrote:
[...]
> Why the results of my Mandelbrot or your Poker program nevertheless make
> sense is that it does not really matter if the iteration count differs
> by some small number or if your probabilities add up precisely to 1.

The important thing is (mostly) whether you diverge, not precisely how
fast...

> Caveat: I don't know of an error estimate that these inaccuracies do not
> matter for the Mandelbrot set! I am even convinced, that they matter
> when looking at the set in more detail. Precisely at the chaotic
> boundaries the problem is very ill-conditioned (slight changes in the
> input lead to large differences in the iteration number), so from a
> numerical analyst point of view I am astonished that something like
> these very high-resolution zooms you can find on Youtube are possible at
> all. I would be interested in seeing error estimates for these
> calculations. The safest way would be, of course, to do them completely
> with interval arithmetic.

With float-based software, what you see at the bottom of the range is
mostly about your algorithm and your compiler, somewhat influenced by
the underlying mandelbrot set. My impression is that the
super-duper-zoom versions use some kind of arbitrary-precision code --
which makes sense if you're using a GPU to do the calculations, because
GPUs typically already need to chain calculations just to get to doubles.

In CL it would be really easy to eliminate these problems by going to
rationals. You'd be slower (!) but it's quite possible that you'd then
be consistent across implementations and optimization levels. (And doing
rationals in other languages can often be a bear.)

paul

hughag...@gmail.com

unread,
Jun 30, 2016, 11:07:19 AM6/30/16
to
On Thursday, June 30, 2016 at 2:27:54 AM UTC-7, Nicolas Neuss wrote:
Your Mandelbrot Set program is the worst possible choice for a bench-mark because the iteration count has a direct effect on the time spent in execution --- that is the problem with chaos, that a little fuzziness in the least significant digits can affect such a fundamental aspect of the program as how many times it iterates --- butterflies flap their wings in the least-significant digits, whereas elephants stomp in the most-significant digits.

By comparison, my LowDraw.4th program is a good choice for a bench-mark because a little fuzziness in the least significant digits has no effect on how many times it iterates (actually recurses). At the end of the program when I print out the table, I only print out 6 digits in each probability to improve readability --- realistically, who is going to want a probability to more precision than that? --- most poker players would be happy with 2 or 3 digits of precision.

I used a slide-rule at the poker table for a while, but everybody laughed, so now I just do the calculation in my head. I gave some thought to how to build a slide-rule with marks on the scales provided specifically for Texas Hold'Em --- I wrote the software to generate the images for the slide-rules --- I haven't built any though.

endlessboo...@gmail.com

unread,
Jun 30, 2016, 12:00:41 PM6/30/16
to
hugh do you make web apps in forth?

Nicolas Neuss

unread,
Jun 30, 2016, 12:06:00 PM6/30/16
to
Yes, I admit that a 5-line BLAS routine (e.g. AXPY, DOT or GEMM) would
make a better and more deterministic performance benchmark than the
Mandelbrot calculation. And a much better benchmark than your Poker
program as well.

However, please note that the difference in iteration numbers Paul
reported amounts to something like 0.001% of the total work! This
difference is completely negligible compared with most other sources of
measurement error.
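
(Concretely: Paul's -ffast-math run gave 13959941 iterations against 13959797 without it, a difference of 144 out of roughly 1.4e7, which is about 0.001%.)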

BTW, how is your Forth version of Mandelbrot going? If you don't
succeed, maybe we should try some BLAS routine first?

Nicolas

Nicolas Neuss

unread,
Jun 30, 2016, 12:23:09 PM6/30/16
to
paul wallich <p...@panix.com> writes:

> With float-based software, what you see at the bottom of the range is
> mostly about your algorithm and your compiler, somewhat influenced by
> the underlying mandelbrot set. My impression is that the
> super-duper-zoom versions use some kind of arbitrary-precision code --
> which makes sense if you're using a GPU to do the calculations,
> because GPUs typically already need to chain calculations just to get
> to doubles.

At least, the Wikipedia page for Mandelbrot contains some fuzzy hints
about higher precision necessary for calculating higher resolutions.
Nevertheless, I would be interested in some reference which does a
rigorous error analysis of these calculations.

> In CL it would be really easy to eliminate these problems by going to
> rationals. You'd be slower (!) but it's quite possible that you'd then
> be consistent across implementations and optimization levels. (And
> doing rationals in other languages can often be a bear.)
>
> paul

I am very sure that precise rationals will be impossible in this case,
because numerator and denominator will usually double in length for
every iteration step due to the squaring and the average iteration count
for a point is about 200 (for the 256x256 resolution of the [0,1]x[0,1]
box we did before)!

So you will have to round rationals in some way which is then more or
less equivalent to high-precision floating point arithmetic.

Nicolas

hughag...@gmail.com

unread,
Jun 30, 2016, 12:51:32 PM6/30/16
to
On Thursday, June 30, 2016 at 9:06:00 AM UTC-7, Nicolas Neuss wrote:
> BTW, how is your Forth version of Mandelbrot going? If you don't
> succeed, maybe we should try some BLAS routine first?

I don't know if I have succeeded (written a bug-free program) because I don't know what the iteration count is supposed to be or what the numbers generated are supposed to be --- I suppose I could plot the results to just admire the psychedelic aspects. If the plot looks like pile of manure, then there must be a bug, because Mandelbrot didn't become a famous mathematician for generating ugly pictures --- I don't know what the plot is supposed to look like though --- there are a lot of psychedelic Mandelbrot pictures on the internet, and they don't all look the same, although none of them look like a pile of manure.

Can you email me a file containing the numbers your program generated? I can compare that to my numbers to see if they correspond at least for a while --- they may diverge at some point, and then chaos will cause them to diverge rapidly from there on --- if they are totally different, then something must be wrong.

When you ask if I have succeeded, do you mean is ANS-Forth capable of this program? Well, ANS-Forth lacks local variables so I had to use global variables, but reentrancy doesn't matter in a program like this. ANS-Forth lacks F>R FR@ FR> etc., but that problem can be dodged with global variables too. ANS-Forth also lacks support for complex numbers, so I had to write that myself (I had to look complex numbers up on the internet as I haven't given any thought to that subject in decades). AFAIK, ANS-Forth is capable of the program --- ANS-Forth code is typically as ugly as a pile of manure --- I'm not aware of anybody (including the committee members) who actually use ANS-Forth practically, but for a bench-mark I have to use the Standard.

Nicolas Neuss

unread,
Jun 30, 2016, 5:43:27 PM6/30/16
to
hughag...@gmail.com writes:

> [...]
> Can you email me a file containing the numbers your program generated?
> I can compare that to my numbers to see if they correspond at least
> for a while --- they may diverge at some point, and then chaos will
> cause them to diverge rapidly from there on --- if they are totally
> different, then something must be wrong.

Here is a large advantage of using small parameters in a test (even if
it then does not run for 5 seconds:-): Using a small 2x2 mesh, that is
only 3 points in each dimension, leads to a small 3x3-matrix as a
result. You should be able to compare it quite easily, if you choose
the same parameters.

(mandelbrot-box -2.0d0 1.0d0 -1.0d0 1.0d0 2 2 t)
->
#2A((4 1000 4)
    (6 1000 6)
    (5 5 5))

[The total number of iteration steps (which is returned when omitting
the parameter t at the end) is (obviously) 2035.]

> When you ask if I have succeeded, do you mean is ANS-Forth capable of
> this program? Well, ANS-Forth lacks local variables so I had to use
> global variables, but reentrancy doesn't matter in a program like
> this. ANS-Forth lacks F>R FR@ FR> etc., but that problem can be dodged
> with global variables too. ANS-Forth also lacks support for complex
> numbers, so I had to write that myself (I had to look complex numbers
> up on the internet as I haven't given any thought to that subject in
> decades). AFAIK, ANS-Forth is capable of the program --- ANS-Forth
> code is typically as ugly as a pile of manure --- I'm not aware of
> anybody (including the committee members) who actually use ANS-Forth
> practically, but for a bench-mark I have to use the Standard.

I do not care for the Forth standard. You may use the features of
any Forth implementation and any complex number library you want. It
would be nice, however, if we could take a look at your program and run
it using some freely available Forth.

Nicolas

Paul Rubin

unread,
Jun 30, 2016, 6:08:25 PM6/30/16
to
Nicolas Neuss <last...@scipolis.de> writes:
> Nevertheless, I would be interested in some reference which does a
> rigorous error analysis of these calculations.

Well, figuring you lose about 1/2 ULP for every floating point
operation, at those places where there's 1000 iterations that would be
500 ULP or the 9 least significant bits. For the unit square that's far
smaller than a pixel for any reasonable display resolution so it doesn't
seem worth worrying about. If you're zooming on a tiny region where you
need a lot of bits to even say where it is, then maybe you need higher
precision arithmetic. Wikipedia mentions you can speed that up using
perturbation methods.
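
(Put differently: a double has a 53-bit significand, so losing the bottom 9 bits still leaves about 44 good bits, a relative error on the order of 500 * 2^-53, roughly 6e-14.)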

> the average iteration count for a point is about 200 (for the 256x256
> resolution of the [0,1]x[0,1] box we did before)!

That's almost entirely due to points in the interior using the whole
1000 iterations without escaping. If you don't count those, the average
is about 7 iterations, and of course that's mostly from points near the
boundary. I think the discrepancies between the programs we've been
posting are likely limited to small differences in the iteration counts
in a few pixels, which wouldn't have much visual effect on pictures
colored by the escape height. The obvious optimization of checking
escape by the abs**2 instead of abs (eliminating a sqrt operation) might
also improve precision a little.
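
Against the C version posted earlier that change is essentially a one-liner; a sketch (the 100.0 bailout becomes 10000.0 because both sides of the comparison are now squared):

#include <complex.h>

/* Same loop as before, but comparing |z|^2 against 100^2
   instead of calling cabs(), which needs a square root. */
const int mandelbrot_iteration(const double complex c)
{
    int k = 1;
    double complex z = c;
    do {
        z = z*z + c;
    } while (k++ < 1000 &&
             creal(z)*creal(z) + cimag(z)*cimag(z) <= 10000.0);
    return k;
}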

hughag...@gmail.com

unread,
Jun 30, 2016, 11:22:33 PM6/30/16
to
Well, I'll show you my code. It gets the same sum with the small version (2035) but not with the larger version (13950170). I really don't understand this chaos stuff! I felt like I was on solid ground writing a program to calculate probability. The concept of probability is pretty intuitive, and it has an obvious purpose. Chaos is not intuitive (the word "chaos" could really be considered to be the antonym of "intuitive") and it has no purpose that I can discern. You Lisp guys seem to have really steered this thread into a swamp --- is this typical for comp.lang.lisp?

Anyway, my program is ANS-Forth. I've tested it on VFX and SwiftForth (their free evaluation versions). As I said, SwiftForth code is slow as molasses --- don't use SwiftForth unless you hate Forth and want Forth to look bad.

In VFX you need to do this before beginning:
include C:\MPE\VfxEval\Lib\Ndp387.fth
VFX provides more than one floating-point file. This one uses the x87 stack as the floating-point stack. This almost always works. It can overflow (the x87 stack is limited to 8 items). This is almost never a problem --- if it does happen, then you can use temporary variables (this is where local variables would come in handy, but ANS-Forth doesn't provide them) --- my program works fine.

At the end of the program you will find this line:
: time-test ( -- ) ticks test ticks swap - cr ." milliseconds: " . ;
This is VFX-specific --- one of the other many failings of ANS-Forth is that they didn't provide TICKS in the Standard --- if you are using any ANS-Forth compiler other than VFX, this line must be commented out (use the \ for comments).

These are a few runs on my laptop:
time-test
sum = 13950170
milliseconds: 795 ok
time-test
sum = 13950170
milliseconds: 1358 ok
time-test
sum = 13950170
milliseconds: 1217 ok
time-test
sum = 13950170
milliseconds: 998 ok
time-test
sum = 13950170
milliseconds: 874 ok
time-test
sum = 13950170
milliseconds: 889 ok
time-test
sum = 13950170
milliseconds: 874 ok
time-test
sum = 13950170
milliseconds: 936 ok

Maybe you will get somewhat less variance with parameters that provide a longer running program. All in all though, this stuff is pretty meaningless. Anyway, here is the program:
----------------------------------------------------------------------------

marker Mandelbrot.4th

: f+! ( adr -- ) \ float: n --
    dup f@ f+ f! ;

: csqr ( -- ) \ float: a ai -- product producti
    fover fdup f* fover fdup f* f-  \ float: -- a ai a*a-ai*ai
    frot frot                       \ float: -- a*a-ai*ai a ai
    f* fdup f+ ;                    \ float: -- a*a-ai*ai a*ai+a*ai

: cabs ( -- ) \ float: a ai -- distance
    fdup f* fswap
    fdup f* f+
    fsqrt ;

: s>f ( n -- ) \ float: -- n
    s>d d>f ;

fvariable cc
fvariable cci
: <mandelbrot-iteration> ( -- count ) \ float: c cci -- z zi
    fover cc f! fdup cci f!         \ float: -- z zi \ Z starts as C not as origin
    1 begin
        csqr
        fswap cc f@ f+ fswap cci f@ f+
        1+
        dup 1000 >= if exit then
        fover fover cabs 100.0e f> if exit then
    again ;

: mandelbrot-iteration ( -- count )
    <mandelbrot-iteration>
    fdrop fdrop ;                   \ discard z zi

fvariable dx
fvariable dy
fvariable x
fvariable y
fvariable x1
fvariable y1
variable nx
variable ny
variable sum
: mandelbrot-box ( nx ny -- sum ) \ float: x1 x2 y1 y2 --
    ny ! nx ! 0 sum !
    fover f- ny @ s>f f/ dy f!      \ float: -- x1 x2 y1
    y1 f!                           \ float: -- x1 x2
    fover f- nx @ s>f f/ dx f!      \ float: -- x1
    x1 f!                           \ float: --
    x1 f@ x f! 0 begin  y1 f@ y f! 0 begin
        x f@ y f@ mandelbrot-iteration sum +!
    dup ny @ < while 1+ dy f@ y f+! repeat drop
    dup nx @ < while 1+ dx f@ x f+! repeat drop
    sum @ ;

: test ( -- )
    256 256 0.0e 1.0e 0.0e 1.0e mandelbrot-box
    \ 2 2 -2.0e 1.0e -1.0e 1.0e mandelbrot-box
    cr ." sum = " u. ;

\ The TIME-TEST function is VFX only --- comment this out if you are using any other ANS-Forth compiler.
: time-test ( -- ) ticks test ticks swap - cr ." milliseconds: " . ;

hughag...@gmail.com

unread,
Jul 1, 2016, 12:57:11 AM7/1/16
to
On Thursday, June 30, 2016 at 8:22:33 PM UTC-7, hughag...@gmail.com wrote:
> Maybe you will get somewhat less variance with parameters that provide a longer running program. All in all though, this stuff is pretty meaningless.

Actually, I don't think you will get less variance with parameters that provide a longer running program. The cause of all the variance (the OS) is going to be constant, so the variance just scales up linearly.

The best way to avoid variance is to write a very short program that concludes so soon that it doesn't get interrupted at all. Run it several times and discard any outliers with significantly longer run times because these likely did get interrupted. To do this, you would need a micro-second clock rather than a milli-second clock --- perhaps Abrash's famous "zen timer."

Also, you would have to time it repeatedly so the caches can be prepped by the previous run, as the first time you run it you will get an outlier because the caches haven't been prepped. Of course, any interrupt between runs will screw this up!

If you manage to get some runs that are all pretty much the same, with all the bigger outliers discarded, you can average them and that is your result.
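
A minimal sketch of such a harness in C (hypothetical; the work() function here is just a stand-in for whatever is being timed, and CLOCK_MONOTONIC from the POSIX clock_gettime() call gives much better than millisecond resolution). It reports the fastest run, which is a common alternative to averaging after discarding outliers:

#include <stdio.h>
#include <time.h>

static volatile double sink;

static void work(void)                    /* stand-in for the code being timed */
{
    double s = 0.0;
    for (int i = 0; i < 1000000; i++) s += i * 0.5;
    sink = s;
}

static double now_us(void)                /* monotonic wall clock in microseconds */
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1e6 + ts.tv_nsec / 1e3;
}

int main(void)
{
    enum { RUNS = 20 };
    double best = 1e300;
    for (int i = 0; i < RUNS; i++) {      /* the first run also warms the caches */
        double t0 = now_us();
        work();
        double dt = now_us() - t0;
        if (dt < best) best = dt;         /* keep the least-interrupted run */
        printf("run %2d: %10.1f us\n", i, dt);
    }
    printf("best:   %10.1f us\n", best);
    return 0;
}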

After all this effort however, your result is still meaningless because it doesn't represent the real-world --- in the real-world, cache-thrashing is the overwhelmingly greatest speed-killer --- to reduce cache-thrashing you need a compiler that generates compact code --- but your bench-mark program isn't testing this at all.

Anyway, we have steered into a swamp; we are up to our axles in mud and all four wheels are spinning --- this thread started out by me trying to get Gavino to stop spouting nonsense --- now, by bench-marking chaos, we are spouting greater nonsense than Gavino ever did.

Another win for Gavino!!! An agent of chaos!!!

Nicolas Neuss

unread,
Jul 1, 2016, 6:15:10 AM7/1/16
to
Paul Rubin <no.e...@nospam.invalid> writes:

> Nicolas Neuss <last...@scipolis.de> writes:
>> Nevertheless, I would be interested in some reference which does a
>> rigorous error analysis of these calculations.
>
> Well, figuring you lose about 1/2 ULP for every floating point
> operation, at those places where there's 1000 iterations that would be
> 500 ULP or the 9 least significant bits. For the unit square that's far
> smaller than a pixel for any reasonable display resolution so it doesn't
> seem worth worrying about. If you're zooming on a tiny region where you
> need a lot of bits to even say where it is, then maybe you need higher
> precision arithmetic. Wikipedia mentions you can speed that up using
> perturbation methods.

This may be the case (and the observation that there pop up nice
pictures using this program is a hint towards it), but it is no
stringent proof! The most pessimistic estimate has to allow for losing
relative floating point precision due to cancellation when subtracting
almost identical numbers.

E.g.

(- (+ (sqrt 2.0d0) 1.0d8) 1.0d8)

results in a precision of only 8 digits of accuracy (instead of the 16
digits that (sqrt 2.0d0) had at the beginning).

>> the average iteration count for a point is about 200 (for the 256x256
>> resolution of the [0,1]x[0,1] box we did before)!
>
> That's almost entirely due to points in the interior using the whole
> 1000 iterations without escaping. If you don't count those, the average
> is about 7 iterations, and of course that's mostly from points near the
> boundary.

OK, but here the bulk of pixels in the outer regions with very low
iteration numbers is also uninteresting. Near the boundary is precisely
the region where the behaviour is interesting AND where the iteration
count is still rather high.

> I think the discrepancies between the programs we've been
> posted are likely limited to small differences in the iteration counts
> in a few pixels, which wouldn't have much visual effect on pictures
> colored by the escape height. The obvious optimization of checking
> escape by the abs**2 instead of abs (eliminating a sqrt operation) might
> also improve precision a little.

Only very little, I think.

Nicolas

hughag...@gmail.com

unread,
Jul 1, 2016, 3:30:25 PM7/1/16
to
On Thursday, June 30, 2016 at 3:08:25 PM UTC-7, Paul Rubin wrote:
> The obvious optimization of checking
> escape by the abs**2 instead of abs (eliminating a sqrt operation) might
> also improve precision a little.

Well, I took Paul's advice and got rid of the FSQRT. It is faster, but I'm getting the same number of iterations. Here are a few runs on the same laptop:

time-test
sum = 13950170
milliseconds: 921 ok
time-test
sum = 13950170
milliseconds: 624 ok
time-test
sum = 13950170
milliseconds: 733 ok
time-test
sum = 13950170
milliseconds: 624 ok
time-test
sum = 13950170
milliseconds: 577 ok
time-test
sum = 13950170
milliseconds: 811 ok
time-test
sum = 13950170
milliseconds: 609 ok
time-test
sum = 13950170
milliseconds: 593 ok

Here is the new program:
-------------------------------------------------------------------------

: f+! ( adr -- ) \ float: n --
    dup f@ f+ f! ;

: csqr ( -- ) \ float: a ai -- product producti
    fover fdup f* fover fdup f* f-  \ float: -- a ai a*a-ai*ai
    frot frot                       \ float: -- a*a-ai*ai a ai
    f* fdup f+ ;                    \ float: -- a*a-ai*ai a*ai+a*ai

: cabs ( -- ) \ float: a ai -- distance
    fdup f* fswap
    fdup f* f+
    fsqrt ;

: s>f ( n -- ) \ float: -- n
    s>d d>f ;

fvariable cc
fvariable cci
: <mandelbrot-iteration> ( -- count ) \ float: c cci -- z zi
    fover cc f! fdup cci f!         \ float: -- z zi \ Z starts as C not as origin
    1 begin
        csqr
        fswap cc f@ f+ fswap cci f@ f+
        1+
        dup 1000 >= if exit then
        \ fover fover cabs 100.0e f> if exit then
        fover fdup f* fover fdup f* f+ 10000.0e f> if exit then \ replaces last line

hughag...@gmail.com

unread,
Jul 1, 2016, 9:52:06 PM7/1/16
to
Notice that the first time that I ran it, I got a long execution time (921 milli-seconds), and every other execution time was shorter. This most likely is because the first run had to fill the caches. After that, the caches were prepped and hence the subsequent runs were faster.

WJ

unread,
Jul 2, 2016, 1:12:47 AM7/2/16
to
Paul Rubin wrote:

> const int mandelbrot_iteration(const double complex c)
> {
> int k = 1;
> double complex z = c;
> do {
> z = z*z + c;
> } while (k++ < 1000 && cabs(z) <= 100.0) ;
> return k;
> }

The minimal number that can be returned by this function is 2.
The maximal is 1001.

Those numbers should be 0 and 1000.

Furthermore, the magnitude of the complex number should be
compared to 2.0, not 100.0.

WJ

unread,
Jul 2, 2016, 1:58:44 AM7/2/16
to
Paul Rubin wrote:

> const int mandelbrot_iteration(const double complex c)
> {
> int k = 1;
> double complex z = c;

z should start as 0.0 + 0.0*I, not c.

Paul Rubin

unread,
Jul 2, 2016, 2:04:12 AM7/2/16
to
"WJ" <w_a_...@yahoo.com> writes:
> The minimal number that can be returned by this function is 2.
> The maximal is 1001.

I tried to do the same thing as the lisp code, but I might not have
understood it.

Paul Rubin

unread,
Jul 2, 2016, 6:54:14 PM7/2/16
to
Nicolas Neuss <last...@scipolis.de> writes:
> your C++ version is about 4-5 times faster than my SBCL version....
> The precise reason for this speed difference would be interesting.
> Maybe g++ does some vectorization for the complex arithmetic.

I looked at the asm code from both versions (just the mandelbrot
iteration function) and the gcc version is much more streamlined. It's
hard to tell what the sbcl one is doing. But it's not completely
insane--it's using unboxed floats in xmm registers. Interestingly the
sbcl code counts to 2000 by twos, making me think it's using boxed
fixnums even in that inner loop.

I wrote a Haskell version and it was dog slow (2 sec or so). I'll try
to figure out if I did something dumb.

Is there a way to profile the SBCL version?

The Forth version runs in about 0.7 sec on the i7-3770 with 64-bit
gforth (a very efficient interpreter). VFX (a native code optimizing
compiler) probably beats gforth by 3x or more but I don't have a copy to
test.

hughag...@gmail.com

unread,
Jul 2, 2016, 10:59:15 PM7/2/16
to
On Saturday, July 2, 2016 at 3:54:14 PM UTC-7, Paul Rubin wrote:
> The Forth version runs in about 0.7 sec on the i7-3770 with 64-bit
> gforth (a very efficient interpreter). VFX (a native code optimizing
> compiler) probably beats gforth by 3x or more but I don't have a copy to
> test.

In my experience, gForth code is about 1/2 the speed of Forth Inc.'s SwiftForth code, and SwiftForth code is about 1/4 the speed of MPE's VFX code. Both SwiftForth and VFX are 32-bit, and I was comparing the 32-bit gForth --- I wasn't aware of there being a 64-bit gForth, but gForth is written in GCC so I suppose it was just a matter of recompiling with the 64-bit version of GCC.

AFAIK, gForth is cripple-ware --- Anton Ertl purposely made gForth generate code that was slower than SwiftForth so gForth wouldn't undermine Forth Inc. sales --- in exchange for this, Elizabeth Rather (chair-person of the ANS-Forth committee and sales-person for Forth Inc.) appointed Anton Ertl to be the chair-person of the Forth-200x committee (Forth-200x is mandated to be 100% compatible with ANS-Forth and its primary purpose is to make ANS-Forth appear to be more important than it is).

The gForth system is written in C. Anton Ertl uses gForth to teach a class in college --- I think he had to write his Forth in C so he could sell the class as being about C programming --- the students are not realistically going to sign up for a class unless it counts as either C or Java programming experience, because those are the two languages for which the students can get jobs after graduation. The students don't care how slow their code is, because their code is only compared to their fellow students' code, and everybody in the class is using the same gForth compiler --- gForth has never been used for commercial programming, and it never will --- out in the real world, people do care about speed, because their code will be compared to competing companies' code.

GCC has some pretty cool features. For example, it is possible to assign registers globally. I haven't examined the gForth source-code so I don't know if it makes use of the GCC features or if it sticks with ANSI-C which is pretty limited. I know ANSI-C but I don't know GCC --- all of my work experience has been in Forth or assembly-language.
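
For reference, the feature in question is GCC's "global register variable" extension; a minimal sketch, with the register name chosen only for illustration:

/* GCC extension: reserve a machine register for this variable across the
   whole translation unit --- e.g. for a Forth VM's data-stack pointer. */
register long *data_stack_ptr asm("r15");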

endlessboo...@gmail.com

unread,
Jul 5, 2016, 10:16:13 PM7/5/16
to
why cant gforth be used for e commerce?

whats with the conspiracy theory about forth inc?

liz rather has always been polite to me

endlessboo...@gmail.com

unread,
Jul 15, 2016, 4:49:07 PM7/15/16
to
On Friday, June 17, 2016 at 2:49:55 AM UTC-4, hughag...@gmail.com wrote:
> [snip: the first message of this thread (the BreakOut assignment), quoted in its entirety]

https://www.youtube.com/watch?v=44E6ujRLchs

WJ

unread,
Jul 17, 2016, 2:53:17 AM7/17/16
to
Paul Rubin wrote:

> Nicolas Neuss <last...@scipolis.de> writes:
> > Yes, we do. On SBCL. To make it agree completely, we would have to
> > check his C++ code, and maybe also which FPU settings are active, etc.
>
> I noticed that SBCL complained about 0.0 not being a double, but it
> didn't complain about (> (abs z) 100.0). That suggests that the abs
> calculation is being done in single precision. That could explain both
> the numerical discrepancy and how SBCL manages to be faster than C.
>
> The C code is here. It only counts the iterations--it doesn't build up
> the result matrix.
>
> ================================================================
>
> #include <complex.h>
> #include <stdio.h>
>
> const int mandelbrot_iteration(const double complex c)
> {
> int k = 1;
> double complex z = c;
> do {
> z = z*z + c;
> } while (k++ < 1000 && cabs(z) <= 100.0) ;
> return k;
> }
>
> const int mandelbrot_box(const double x1, const double x2,
> const double y1, const double y2,
> const int nx, const int ny)
> {
> const double dx = (x2-x1) / nx;
> const double dy = (y2-y1) / ny;
> double x, y;
> int sum = 0;
> int i, j;
>
> for (i=0, x=x1; i < nx; i++, x+=dx) {
> for (j=0, y=y1; j < ny; j++, y+=dy) {
> sum += mandelbrot_iteration(x+I*y);
> }
> }
> return sum;
> }
>
> int main()
> {
> printf("%d\n", mandelbrot_box(0.0, 1.0, 0.0, 1.0, 256, 256));
> return 0;
> }

Here's a version in R that's quite slow,
even though it avoids using the built-in complex type.

library(compiler)

iter.limit = 1000
floats = seq(0,1, len=257)

count.iters <- cmpfun( function(x0,y0) {
x = y = 0
for (j in 0:iter.limit)
{ xx = x*x
yy = y*y
if (xx + yy >= 4) return(j)
y = 2 * x * y + y0
x = xx - yy + x0 }
return(iter.limit) })

sum(outer(floats,floats, Vectorize(count.iters)))

13808291

endlessboo...@gmail.com

unread,
Jul 17, 2016, 8:07:47 PM7/17/16
to
Why this is marked as abuse? It has been marked as abuse.
Report not abuse
why are you talking about R?
this is forth discussion
https://www.youtube.com/watch?v=uNyGjJF-VV8

hughag...@gmail.com

unread,
Jul 20, 2016, 10:43:21 AM7/20/16
to
On Friday, July 1, 2016 at 12:30:25 PM UTC-7, hughag...@gmail.com wrote:
I have yet to see a comparison of times (all run on the same computer) for the various versions of this program. Wasn't the purpose of this exercise to compare speed? Of course, WJ is going around saying that my VFX program was slower than the SBCL and oCaml programs, and that this proves that I'm a "very bad programmer" --- but he is just making that stuff up --- no comparison of speeds has been done.

If everybody who has written a version of the program emails me an executable that runs under Windows and calculates its run time, I will run them all on my laptop and report the result. Note that, as I said before, there is a lot of variation in times between runs of the same program on the same computer. I will have to run each program several times and report on the mean time required as well as the standard-deviation. Also, as I said, my experience with this is that the first run is slow because the caches are empty, but the subsequent runs are faster because the caches have been prepped. I wouldn't really put a lot of stock into the results of any benchmark done on a multi-tasking OS and/or done on a modern computer with caches.
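
To be concrete about the mean and standard-deviation bookkeeping, here is a sketch in Common Lisp (BENCHMARK-RUN and its five-run default are made up for illustration; MANDELBROT-BOX is the Lisp version from this thread):

  (defun benchmark-run (thunk &key (runs 5))
    "Call THUNK several times; return the mean and standard deviation
  of the wall-clock times in seconds."
    (let* ((times (loop repeat runs
                        collect (let ((start (get-internal-real-time)))
                                  (funcall thunk)
                                  (/ (- (get-internal-real-time) start)
                                     internal-time-units-per-second))))
           (mean (/ (reduce #'+ times) runs))
           (var  (/ (reduce #'+ (mapcar (lambda (x) (expt (- x mean) 2))
                                        times))
                    runs)))
      (values (float mean) (sqrt (float var)))))

  ;; Example: time five runs of the Lisp benchmark.
  ;; (benchmark-run (lambda ()
  ;;                  (mandelbrot-box 0.0d0 1.0d0 0.0d0 1.0d0 256 256)))

The first run will still be the slow one with cold caches, so the standard-deviation also shows how much that warm-up matters.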

Over on comp.lang.forth we have another thread going on the same topic:
https://groups.google.com/forum/#!topic/comp.lang.forth/M-mK-j_sXiI
In this thread, HumptyDumpty posted another Forth program.

Everybody gets a different result:
13907805 HumptyDumpty --- VFX
13950170 me --- VFX
13959941 Paul Rubin --- GCC-fast
13959797 Paul Rubin --- GCC-regular
13950225 Nicolas Neuss --- SBCL

As I said before, the Mandelbrot program is the worst possible choice for a benchmark because everybody gets a different result and there is no correct answer --- I don't know if there are bugs in any of these programs or if the differences are due to chaos --- it seems profoundly uninteresting to me to write a program when I don't know what it is supposed to do, and it has no purpose except to generate psychedelic posters for stoners' bedroom walls.

hughag...@gmail.com

unread,
Jul 20, 2016, 11:04:33 AM7/20/16
to
On Wednesday, July 20, 2016 at 7:43:21 AM UTC-7, hughag...@gmail.com wrote:
> I have yet to see a comparison of times (all run on the same computer) for the various versions of this program. Wasn't the purpose of this exercise to compare speed? Of course, WJ is going around saying that my VFX program was slower than the SBCL and oCaml programs, and that this proves that I'm a "very bad programmer" --- but he is just making that stuff up --- no comparison of speeds has been done.
> ...
> Everybody gets a different result:
> 13907805 HumptyDumpty --- VFX
> 13950170 me --- VFX
> 13959941 Paul Rubin --- GCC-fast
> 13959797 Paul Rubin --- GCC-regular
> 13950225 Nicolas Neuss --- SBCL

I forgot to mention WJ's contributions:
13808106 WJ --- oCaml
13808291 WJ --- R

Well, if WJ's oCaml program takes 142,064 fewer iterations than mine, that might help to explain why it has a faster time (if it does actually have a faster time; as I said before, no comparison of speeds has yet been made) --- but it seems easier just to explain everything by saying that I'm a "very bad programmer."

All in all, this thread seems to have descended not only into chaos, but into foolishness as well --- I think I've lost interest in Common Lisp anyway, because I couldn't figure out how the Lisp program worked due to not understanding the LOOP macro, and it appears that nobody else (including the compiler-writers) understands LOOP either --- also, my mathematical background is pretty much limited to probability, so I'm out of my depth here among you math-whizzes anyway.

Nicolas Neuss

unread,
Jul 22, 2016, 5:21:47 AM7/22/16
to
hughag...@gmail.com writes:

> I have yet to see a comparison of times (all run on the same computer)
> for the various versions of this program. Wasn't the purpose of this
> exercise to compare speed? Of course, WJ is going around saying that
> my VFX program was slower than the SBCL and oCaml programs, and that
> this proves that I'm a "very bad programmer" --- but he is just making
> that stuff up --- no comparison of speeds has been done.
>
> If everybody who has written a version of the program emails me an
> executable that runs under Windows and calculates its run time, I will
> run them all on my laptop and report the result.

The program I posted should run on SBCL under Windows.

An alternative might be to compare the times with respect to a reference
program running everywhere. For the moment, I am rather satisfied with
the information that SBCL is a little slower than gcc without
-ffast-math, and about 4.3 times slower than gcc with -ffast-math.

> Note that, as I said before, there is a lot of variation in times
> between runs of the same program on the same computer.

And as I said before, you could simply change the range and width of the
mesh. For example:

(time (mandelbrot-box -2.0d0 1.0d0 -1.0d0 1.0d0 1536 1024))
Evaluation took:
7.339 seconds of real time
7.340000 seconds of total run time (7.340000 user, 0.000000 system)
100.01% CPU
19,767,596,313 processor cycles
63,104,368 bytes consed

408963801

neuss@mini:~/Programming/C$ g++ -O mandelbrot_rubin.cc
neuss@mini:~/Programming/C$ time a.out
409344245 0

real 0m5.372s
user 0m5.368s
sys 0m0.000s
neuss@mini:~/Programming/C$ g++ -Ofast mandelbrot_rubin.cc
neuss@mini:~/Programming/C$ time a.out
409344547 0

real 0m1.707s
user 0m1.704s
sys 0m0.000s

> I will have to run each program several times and report on the mean
> time required as well as the standard-deviation.

Not if you do the above change in the parameters. The results are quite
stable then.

> Also, as I said, my experience with this is that the first run is slow
> because the caches are empty, but the subsequent runs are faster
> because the caches have been prepped. I wouldn't really put a lot of
> stock into the results of any benchmark done on a multi-tasking OS
> and/or done on a modern computer with caches.

If you change the parameters as suggested above, I cannot observe any
large differences.

> Over on comp.lang.forth we have another thread going on the same topic:
> https://groups.google.com/forum/#!topic/comp.lang.forth/M-mK-j_sXiI
> In this thread, HumptyDumpty posted another Forth program.
>
> Everybody gets a different result:
> 13907805 HumptyDumpty --- VFX
> 13950170 me --- VFX
> 13959941 Paul Rubin --- GCC-fast
> 13959797 Paul Rubin --- GCC-regular
> 13950225 Nicolas Neuss --- SBCL
>
> As I said before, the Mandelbrot program is the worst possible choice
> for a benchmark because everybody gets a different result and there is
> no correct answer --- I don't know if there are bugs in any of these
> programs or if the differences are due to chaos --- it seems
> profoundly uninteresting to me to write a program when I don't know
> what it is supposed to do, and it has no purpose except to generate
> psychedelic posters for stoners' bedroom walls.

If you change the output with a round operation to return the number of
Mega-Iterations, then the output (for the above call) is safely
determined as the integer 409.

For the CL program, this can be achieved by changing the last statement
from

(if result-p result sum)

to

(if result-p result (round sum 1e6))

:-)

Nicolas

P.S.: Since the thread is already rather old, I append below the code I
used for running the above tests.


;;; CL code

(defun mandelbrot-iteration (c)
(declare (ftype (function (complex double-float) fixnum))
(optimize speed))
(loop for k of-type fixnum from 1 below 1000
for z of-type (complex double-float) = c
then (+ (* z z) c)
until (> (abs z) 100.0d0)
finally (return k)))

(defun mandelbrot-box (x1 x2 y1 y2 nx ny &optional result-p)
(declare (type double-float x1 x2 y1 y2)
(type fixnum nx ny))
(let ((dx (/ (- x2 x1) nx))
(dy (/ (- y2 y1) ny))
(result (make-array (list (1+ nx) (1+ ny)) :element-type 'fixnum))
(sum 0))
(declare (type double-float dx dy)
(type fixnum sum))
(loop for kx upto nx
for x of-type double-float = x1 then (+ x dx) do
(loop for ky upto ny
for y of-type double-float = y1 then (+ y dy) do
(incf sum
(setf (aref result kx ky)
(mandelbrot-iteration (complex x y))))))
(if result-p result sum)))

(time (mandelbrot-box -2.0d0 1.0d0 -1.0d0 1.0d0 1536 1024))


/* Paul Rubin's C++ code with slight changes */

#include <complex.h>
#include <stdio.h>

static const int nx=1536;
static const int ny=1024;
static int result[nx+1][ny+1];

const int mandelbrot_iteration(const double complex c)
{
int k = 1;
double complex z = c;
do {
z = z*z + c;
} while (k++ < 1000 && cabs(z) <= 100.0) ;
return k;
}

const int mandelbrot_box(const double x1, const double x2,
const double y1, const double y2,
const int nx, const int ny)
{
const double dx = (x2-x1) / nx;
const double dy = (y2-y1) / ny;
double x, y;
int sum = 0;
int i, j;

for (i=0, x=x1; i <= nx; i++, x+=dx) {
for (j=0, y=y1; j <= ny; j++, y+=dy) {
sum += (result[i][j]=mandelbrot_iteration(x+I*y));
}
}
return sum;
}

int main()
{
printf("%d %d\n", mandelbrot_box(-2.0, 1.0, -1.0, 1.0, 1536, 1024), result[3][3]);
return 0;
}

Nicolas Neuss

unread,
Jul 22, 2016, 5:41:52 AM7/22/16
to
hughag...@gmail.com writes:

> All in all, this thread seems to have descended not only into chaos,
> but into foolishness as well --- I think I've lost interest in Common
> Lisp anyway, because I couldn't figure out how the Lisp program worked
> due to not understanding the LOOP macro, and it appears that nobody
> else (including the compiler-writers) understands LOOP either ---
> also, my mathematical background is pretty much limited to
> probability, so I'm out of my depth here among you math-whizzes
> anyway.

I doubt that you have to be a math-whizz for understanding

(loop for i upto nx
for x = x1 then (+ x dx)
do
...)

At least, I don't see why

for (i=0, x=x1; i <= nx; i++, x+=dx) { ... }

is any easier.

One (dis?)advantage of the Common Lisp loop macro is that you have much
more freedom of expression than in the C++ for loop. For example, in
the above and all other loops in my CL code, the second FOR could be
changed into an AND which indicates that the looping assignments do not
depend on each other. This even improves the CL speed a little bit.
Presumably, g++ is so clever that it can deduce this by itself.
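
A tiny standalone illustration of the difference (nothing to do with the
Mandelbrot code; the numbers are only there to make the stepping order
visible):

  ;; Serial stepping: the step form for X sees the already-updated I.
  (loop for i from 0 below 4
        for x = 0 then (+ x i)
        collect (list i x))
  ;; => ((0 0) (1 1) (2 3) (3 6))

  ;; Parallel stepping with AND: X is stepped with the previous I,
  ;; analogous to DO instead of DO*.
  (loop for i from 0 below 4
        and x = 0 then (+ x i)
        collect (list i x))
  ;; => ((0 0) (1 0) (2 1) (3 3))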

Here is the modified CL code:

--8<---------------cut here---------------start------------->8---
(defun mandelbrot-iteration (c)
(declare (ftype (function (complex double-float) fixnum))
(optimize speed))
(loop for k of-type fixnum from 1 below 1000
and z of-type (complex double-float) = c
then (+ (* z z) c)
until (> (abs z) 100.0d0)
finally (return k)))

(defun mandelbrot-box (x1 x2 y1 y2 nx ny &optional result-p)
(declare (type double-float x1 x2 y1 y2)
(type fixnum nx ny))
(let ((dx (/ (- x2 x1) nx))
(dy (/ (- y2 y1) ny))
(result (make-array (list (1+ nx) (1+ ny)) :element-type 'fixnum))
(sum 0))
(declare (type double-float dx dy)
(type fixnum sum))
(loop for kx upto nx
and x of-type double-float = x1 then (+ x dx) do
(loop for ky upto ny
and y of-type double-float = y1 then (+ y dy) do
(incf sum
(setf (aref result kx ky)
(mandelbrot-iteration (complex x y))))))
(if result-p result sum)))

(time (mandelbrot-box -2.0d0 1.0d0 -1.0d0 1.0d0 1536 1024))
--8<---------------cut here---------------end--------------->8---

Nicolas

Paul Rubin

unread,
Jul 23, 2016, 3:17:57 AM7/23/16
to
Nicolas Neuss <last...@scipolis.de> writes:
> neuss@mini:~/Programming/C$ g++ -O mandelbrot_rubin.cc
> neuss@mini:~/Programming/C$ time a.out
> 409344245 0

I'm surprised and amused that you were able to compile that file as a
C++ program. It was actually written in C, and C++ implements complex
arithmetic somewhat differently, or so I thought.

Nicolas Neuss

unread,
Jul 23, 2016, 5:11:54 AM7/23/16
to
So I thought as well (and was positively surprised that C had complex
numbers). However:

neuss@mini:~/Programming/C$ gcc mandelbrot_rubin.cc
/tmp/ccCbXQMR.o: In function `mandelbrot_iteration(doublecomplex )':
mandelbrot_rubin.cc:(.text+0xe7): undefined reference to `cabs'
collect2: error: ld returned 1 exit status
neuss@mini:~/Programming/C$ g++ mandelbrot_rubin.cc
neuss@mini:~/Programming/C$

WJ

unread,
Jul 23, 2016, 11:28:33 AM7/23/16
to
Nicolas Neuss wrote:

> I doubt that you have to be a math-whizz for understanding
>
> (loop for i upto nx
> for x = x1 then (+ x dx)
> do
> ...)

"for i upto nx"? That doesn't indicate what the initial
value of i is. 0 or 1?

Also, what is the last value for i?
Is it nx, or is it nx-1?

Also, is this a nested loop, or are i and x stepped
in parallel?

Let's try to test your code.

(let ((x1 500) (dx 2) (nx 5))
(loop for i upto nx
for x = x1 then (+ x dx)
do (print (list i x))))

(0 500)
(1 502)
(2 504)
(3 506)
(4 508)
(5 510)


R:

x1=500; dx=2; nx=5; x=x1
for (i in 0:nx) {cat(i,x,"\n") ; x=x+dx}

0 500
1 502
2 504
3 506
4 508
5 510

Paul Graham:

I consider Loop one of the worst flaws in CL, and an example
to be borne in mind by both macro writers and language designers.


[ In "ANSI Common Lisp", Graham makes the following comments: ]

The loop macro was originally designed to help inexperienced
Lisp users write iterative code. Instead of writing Lisp code,
you express your program in a form meant to resemble English,
and this is then translated into Lisp. Unfortunately, loop is
more like English than its designers ever intended: you can
use it in simple cases without quite understanding how it
works, but to understand it in the abstract is almost
impossible.
....
the ANSI standard does not really give a formal specification
of its behavior.
....
The first thing one notices about the loop macro is that it
has syntax. A loop expression contains not subexpressions but
clauses. The clauses are not delimited by parentheses;
instead, each kind has a distinct syntax. In that, loop
resembles traditional Algol-like languages. But the other
distinctive feature of loop, which makes it as unlike Algol as
Lisp, is that the order in which things happen is only
loosely related to the order in which the clauses occur.
....
For such reasons, the use of loop cannot be recommended.



-----


Dan Weinreb, one of the designers of Common Lisp:

... the problem with LOOP was that it turned out to be hard to
predict what it would do, when you started using a lot of
different facets of LOOP all together. This is a serious problem
since the whole idea of LOOP was to let you use many facets
together; if you're not doing that, LOOP is overkill.


-----


Barry Margolin, 05 Apr 2001
(http://groups.google.com/group/comp.lang.lisp/msg/8a48cedb8b3bcb52)

>(My second rule of thumb concerning LOOP would be the negative of
>Barry Margolin's: The more complex the looping, the more you need/want
>to use LOOP.)

My recommendation is based on seeing many question in the past of the form
"What happens if you use both XXX and YYY in the same LOOP?" The
unfortunate fact is that when we were writing the standard we didn't have
time to nail down all the possible interactions between different LOOP
features, so many of these are not well specified. And even if we did get
it right in the standard, it's likely to be difficult to find them and I
wouldn't trust that all implementors got it right (many of those questions
were probably from implementors, trying to figure out what they were
supposed to do). And even if they all got it right, someone reading your
code may not be able to figure it out.

So, with all those potential problems, my feeling is that if you have to
ask, it's probably better to use something other than LOOP.

-----

Barry Margolin:

> 3. Loop is very powerful, granted, and many people are trying to
> argue that "you can do so much with loop that it's unreadable."
> This is not an argument.

But it is! Because any use of LOOP has the potential to be
unreadable, the reader must read it carefully to verify that
it's just one of the cases that doesn't require careful
reading!

-----

Barry Margolin: (05 Apr 2002 20:57:48 GMT)

This seems like a big change just to clean up the way LOOP is described.
And LOOP will still be a wart, because it will be the only language feature
that uses "per-macro keywords". Providing this interface and giving a name
to them would encourage other macro designers to do something similar, and
we don't want more things like LOOP.


-----

Re: Bad Idiom?

Barry Margolin (1997-01-08)

There are a few things that can be done extremely conveniently
with LOOP, and I will usually use LOOP in those cases. In
particular, the COLLECTING feature is one of the most
convenient.

The Generators and Collectors macros described in Appendix B
of CLtL2 also provide this convenience and are much more
Lisp-like. It's too bad they weren't around when LOOP was
gaining popularity, and I think they're a better way to go.
But LOOP is what I got used to, and its popularity is why it
got elevated into the ANSI standard.


-----

From: John Foderaro <j...@unspamx.franz.com>
Newsgroups: comp.lang.lisp
Subject: Re: the "loop" macro
Date: Sun, 26 Aug 2001 10:51:26 -0700

I'm not trying to join a debate on loop. I just wanted to present
the other side of [the issue so that] the intelligent people can
then weigh the arguments on both sides.

I'm not suggesting that loop can be fixed either by adding
parenthesis or coming up with ways of indenting it to make it
understandable. It's a lost cause.

Paul Rubin

unread,
Jul 23, 2016, 12:16:51 PM7/23/16
to
Nicolas Neuss <last...@scipolis.de> writes:
> mandelbrot_rubin.cc:(.text+0xe7): undefined reference to `cabs'

You need

gcc -lm mandelbrot_rubin.cc

to include the math library, iirc.

Nicolas Neuss

unread,
Jul 23, 2016, 1:07:06 PM7/23/16
to
[Feeding WJ - sorry about that]
"WJ" <w_a_...@yahoo.com> writes:

> Nicolas Neuss wrote:
>
>> I doubt that you have to be a math-whizz for understanding
>>
>> (loop for i upto nx
>> for x = x1 then (+ x dx)
>> do
>> ...)
>
> "for i upto nx"? That doesn't indicate what the initial
> value of i is. 0 or 1?
>
> Also, what is the last value for i?
> Is it nx, or is it nx-1?
>
> Also, is this a nested loop, or are i and x stepped
> in parallel?

Indeed, you have to RTFM (for example, in the form of the Hyperspec) for
using loop correctly.

> Let's try to test your code.
>
> (let ((x1 500) (dx 2) (nx 5))
> (loop for i upto nx
> for x = x1 then (+ x dx)
> do (print (list i x))))
>
> (0 500)
> (1 502)
> (2 504)
> (3 506)
> (4 508)
> (5 510)
>
>
> R:
>
> x1=500; dx=2; nx=5; x=x1
> for (i in 0:nx) {cat(i,x,"\n") ; x=x+dx}

This is not the looping of both i and x which *I* want to do. This is
an i-loop together with low-level imperative code for initializing and
incrementing x. If you are satisfied with this low level, this is
*your* problem, and only an indication that you do not have real
programming problems to solve.

> [anti-loop quotes]

All valid points, but completely irrelevant to the question of whether
*I* or others should use LOOP. Each of those persons was/is completely
capable of avoiding LOOP and replacing it with constructs fitting their
programming style. This is the power of Lisp!

When someone shows me a better construct than LOOP,
multi-implementation, well integrated in Quicklisp, performant, able to
express the same things as the LOOP constructs, with the same or better
readability, without any warts, I am sure we would all switch to it very
fast. Unfortunately, up to now no one has succeeded in even describing
such a beast.

Nicolas

Nicolas Neuss

unread,
Jul 23, 2016, 1:10:00 PM7/23/16
to
Does not work for me:

neuss@mini:~/Programming/C$ gcc -lm mandelbrot_rubin.cc
/tmp/ccsHadrB.o: In function `mandelbrot_iteration(doublecomplex )':
mandelbrot_rubin.cc:(.text+0xe7): undefined reference to `cabs'

Jussi Piitulainen

unread,
Jul 23, 2016, 1:14:10 PM7/23/16
to
gcc mandelbrot_rubin.cc -lm

Paul Rubin

unread,
Jul 23, 2016, 1:16:39 PM7/23/16
to
Nicolas Neuss <last...@scipolis.de> writes:
> When someone shows me a better construct than LOOP,... performant

I tried a straightforward recursive implementation and it was dog slow,
but I figured that was because of the compiler spending more
optimization effort on LOOP, or else that I got the optimization options
wrong. Any self-respecting Scheme compiler would have generated good
code from something like this:

(defun mi1 (c z n)
(declare (ftype (function (complex double-float fixnum) fixnum))
(optimize speed))
(if (or (>= n 1000) (> (abs z) 100.0))
n
(mi1 c (+ (* z z) c) (1+ n))))

(defun mandelbrot-iteration (c)
(declare (ftype (function (complex) fixnum))
(optimize speed))
(mi1 c c 1))

Nicolas Neuss

unread,
Jul 23, 2016, 3:20:22 PM7/23/16
to
Jussi Piitulainen <jussi.pi...@helsinki.fi> writes:

> Nicolas Neuss <last...@scipolis.de> writes:
>
>> Paul Rubin <no.e...@nospam.invalid> writes:
>>
>>> Nicolas Neuss <last...@scipolis.de> writes:
>>>> mandelbrot_rubin.cc:(.text+0xe7): undefined reference to `cabs'
>>>
>>> You need
>>>
>>> gcc -lm mandelbrot_rubin.cc
>>>
>>> to includde the math library, iirc.
>>
>> Does not work for me:
>>
>> neuss@mini:~/Programming/C$ gcc -lm mandelbrot_rubin.cc
>> /tmp/ccsHadrB.o: In function `mandelbrot_iteration(doublecomplex )':
>> mandelbrot_rubin.cc:(.text+0xe7): undefined reference to `cabs'
>> collect2: error: ld returned 1 exit status
>
> gcc mandelbrot_rubin.cc -lm

Thanks, this works.

Obviously, it has been a long time since I used C in real life. And at
that remote time we had a Makefile with separate compilation and linking
steps. :-)

Nicolas Neuss

unread,
Jul 23, 2016, 3:40:02 PM7/23/16
to
Paul Rubin <no.e...@nospam.invalid> writes:

> Nicolas Neuss <last...@scipolis.de> writes:
>> When someone shows me a better construct than LOOP,... performant
>
> I tried a straightforward recursive implementation and it was dog slow,
> but I figured that was because of the compiler spending more
> optimization effort on LOOP, or else that I got the optimization options
> wrong. Any self-respecting Scheme compiler would have generated good
> code from something like this:

"... would have to generate ..." - otherwise it would not be a
conforming Scheme compiler. :-)

> (defun mi1 (c z n)
> (declare (ftype (function (complex double-float fixnum) fixnum))
> (optimize speed))
> (if (or (>= n 1000) (> (abs z) 100.0))
> n
> (mi1 c (+ (* z z) c) (1+ n))))
>
> (defun mandelbrot-iteration (c)
> (declare (ftype (function (complex) fixnum))
> (optimize speed))
> (mi1 c c 1))

Try the following with SBCL:

(defun mandelbrot-iteration (c)
(declare (ftype (function (complex double-float) fixnum))
(optimize speed))
(labels ((mi1 (z n)
(declare (type (complex double-float) z)
(type fixnum n))
(if (or (>= n 1000) (> (abs z) 100.0))
n
(mi1 (+ (* z z) c) (1+ n)))))
(mi1 c 1)))

Or even

(defmacro named-let (name bindings &body body)
"Implements the named-let construct from Scheme."
`(labels ((,name ,(mapcar #'first bindings)
,@body))
(,name ,@(mapcar #'second bindings))))

(defun mandelbrot-iteration (c)
(declare (ftype (function (complex double-float) fixnum))
(optimize speed))
(named-let mi1 ((z c) (n 1))
(declare (type (complex double-float) z)
(type fixnum n))
(if (or (>= n 1000) (> (abs z) 100.0))
n
(mi1 (+ (* z z) c) (1+ n)))))

I don't observe much performance difference for these versions.

Nicolas

WJ

unread,
Jul 23, 2016, 4:01:24 PM7/23/16
to
Another way:

x1=500; dx=2; nx=5
i=0:nx
cbind(i,seq(x1,along.with=i,by=dx))

[1,] 0 500
[2,] 1 502
[3,] 2 504
[4,] 3 506
[5,] 4 508
[6,] 5 510

endlessboo...@gmail.com

unread,
Jul 25, 2016, 1:22:52 PM7/25/16
to
I agree. Fuck the loop macro. Paul Graham is a god.

endlessboo...@gmail.com

unread,
Jul 27, 2016, 12:24:23 AM7/27/16
to
On Friday, June 17, 2016 at 6:52:51 AM UTC-4, Norbert_Paul wrote:
> hughag...@gmail.com wrote:
> > Over on comp.lang.forth we had this thread:
> > https://groups.google.com/forum/#!topic/comp.lang.forth/qTDKxhW9758
> >
> > I said this:
> >
> > On Thursday, June 16, 2016 at 11:16:09 PM UTC-7, hughag...@gmail.com wrote:
> >> On Thursday, June 16, 2016 at 7:58:17 PM UTC-7, hughag...@gmail.com wrote:
> >>> On Wednesday, June 15, 2016 at 12:15:59 PM UTC-7, endlessboo...@gmail.com wrote:
>
> You are wasting your time.
>
> > I would appreciate if you guys on comp.lang.lisp would support me in this --- don't allow any more idiotic messages from Gavino --- remind him of the need to complete his assignment.
> I don't support troll-feeding and I suppose the majority here doesn't too.
>
> > Thank you for your consideration...
> You might consider using a decent news-reader to read newsgroups.
> As far as I remember, google-groups does not support kill-lists,
> but this may have changed by now.
>
> So this is my assignment for you:
> (1) Find a news-reader of your choice. It should support kill-lists.
> (2) Start reading c.l.l with this reader and stay away from groups.google.com.
> (3) *PGONK* (where "G" stands for "Gavino")
>
> Thank you for your consideration...
> Norbert

https://www.youtube.com/watch?v=5aZi1RXe9kY