
How to copy multi object array contents into single object arrays?


Tuxedo

Jan 13, 2010, 9:12:59 AM

Hi,

I have an array with image source and caption object pairs, such as:

var library = [{ img: 'img01.jpg', caption: 'Caption 1'},
{ img: 'img02.jpg', caption: 'Caption 2'},
{ img: 'img03.jpg', caption: 'Caption 3'}];

I would like to return copies of the above as if the contents had been
placed in two separate arrays, like:

var photos = ['img01.jpg', 'img02.jpg', 'img03.jpg']
var captions = ['Caption 1', 'Caption 2', 'Caption 3']

But obviously I'd like to avoid repeating code unnecessarily in the form of
duplicate, typed-out content in several arrays. Instead, I would like to
access separate arrays of photos and captions, derived from the library
array, as if they were two separate arrays in the first place, so that: ...

alert(photos) would return: img01.jpg,img02.jpg,img03.jpg
alert(captions) would return: Caption 1,Caption 2,Caption 3

How can the contents be copied from the 'library' array into two separate
virtual arrays named 'photos' and 'captions' which can be accessed like
normal single level arrays?

Many thanks,
Tuxedo

Scott Sauyet

Jan 13, 2010, 9:49:36 AM

On Jan 13, 9:12 am, Tuxedo <tux...@mailinator.com> wrote:
> I have an array with image source and caption object pairs, such as:
>
> var library = [{ img: 'img01.jpg', caption: 'Caption 1'},
>                { img: 'img02.jpg', caption: 'Caption 2'},
>                { img: 'img03.jpg', caption: 'Caption 3'}];
>
> I would like to return copies of the above as if the contents had been
> placed in two separate arrays, like:
>
> var photos = ['img01.jpg', 'img02.jpg', 'img03.jpg']
> var captions = ['Caption 1', 'Caption 2', 'Caption 3']

var photos = [], captions = [];
for (var i = 0, len = library.length; i < len; i++) {
    photos.push(library[i]["img"]);
    captions.push(library[i]["caption"]);
}

-- Scott

David Mark

Jan 13, 2010, 9:59:12 AM

photos.length = captions.length = library.length;
for (var i = 0, len = library.length; i--;) {
    photos[i] = library[i]["img"];
    captions[i] = library[i]["caption"];
}

Thomas 'PointedEars' Lahn

Jan 13, 2010, 10:13:06 AM

David Mark wrote:

> Scott Sauyet wrote:
>> var photos = [], captions = [];
>> for (var i = 0, len = library.length; i < len; i++) {
>> photos.push(library[i]["img"]);
>> captions.push(library[i]["caption"]);
>> }
>
> var photos = [], captions = [];
> photos.length = captions.length = library.length;
> for (var i = 0, len = library.length; i--;) {

No :)

> photos[i] = library[i]["img"];
> captions[i] = library[i]["caption"];
> }

var
  photos = [],
  captions = [],
  len = library.length;

photos.length = captions.length = len;

for (var i = len; i--;)
{
  var o = library[i];
  photos[i] = o.img;
  captions[i] = o.caption;
}


PointedEars
--
var bugRiddenCrashPronePieceOfJunk = (
navigator.userAgent.indexOf('MSIE 5') != -1
&& navigator.userAgent.indexOf('Mac') != -1
) // Plone, register_function.js:16

Thomas 'PointedEars' Lahn

Jan 13, 2010, 10:17:12 AM

Thomas 'PointedEars' Lahn wrote:

> var
> photos = [],


> captions = [],
> len = library.length;
>
> photos.length = captions.length = len;
>
> for (var i = len; i--;)
> {
> var o = library[i];
> photos[i] = o.img;
> captions[i] = o.caption;
> }

As we are iterating from end to start, does it even make sense to set the
`length' property? For it will be set in the first iteration anyway.


PointedEars
--
Danny Goodman's books are out of date and teach practices that are
positively harmful for cross-browser scripting.
-- Richard Cornford, cljs, <cife6q$253$1$8300...@news.demon.co.uk> (2004)

David Mark

Jan 13, 2010, 10:53:25 AM

On Jan 13, 10:13 am, Thomas 'PointedEars' Lahn <PointedE...@web.de>
wrote:

> David Mark wrote:
> > Scott Sauyet wrote:
> >> var photos = [], captions = [];
> >> for (var i = 0, len = library.length; i < len; i++) {
> >> photos.push(library[i]["img"]);
> >> captions.push(library[i]["caption"]);
> >> }
>
> > var photos = [], captions = [];
> > photos.length = captions.length = library.length;
> > for (var i = 0, len = library.length; i--;) {
>
> No :)

No? Perhaps you meant you could improve on it?

>
> > photos[i] = library[i]["img"];
> > captions[i] = library[i]["caption"];
> > }
>
> var
> photos = [],
> captions = [],
> len = library.length;
>
> photos.length = captions.length = len;
>
> for (var i = len; i--;)
> {
> var o = library[i];
> photos[i] = o.img;
> captions[i] = o.caption;
> }
>

Yes, the start's a bit nicer (library length). I don't know if that
assignment to o will help though. You only saved two lookups. Not
worth worrying about it at this point. I just didn't care for the
original.

David Mark

Jan 13, 2010, 10:54:48 AM

On Jan 13, 10:17 am, Thomas 'PointedEars' Lahn <PointedE...@web.de>
wrote:

> Thomas 'PointedEars' Lahn wrote:
> > var
> > photos = [],
> > captions = [],
> > len = library.length;
>
> > photos.length = captions.length = len;
>
> > for (var i = len; i--;)
> > {
> > var o = library[i];
> > photos[i] = o.img;
> > captions[i] = o.caption;
> > }
>
> As we are iterating from end to start, does it even make sense to set the
> `length' property? For it will be set in the first iteration anyway.
>

No. I originally had it going forward rather than reverse. Best to
drop that opening bit entirely when going in reverse. I should have
re-read it. Long night.

Thomas 'PointedEars' Lahn

Jan 13, 2010, 11:34:22 AM

David Mark wrote:

> Thomas 'PointedEars' Lahn wrote:
>> David Mark wrote:
>> > Scott Sauyet wrote:
>> >> var photos = [], captions = [];
>> >> for (var i = 0, len = library.length; i < len; i++) {
>> >> photos.push(library[i]["img"]);
>> >> captions.push(library[i]["caption"]);
>> >> }
>>
>> > var photos = [], captions = [];
>> > photos.length = captions.length = library.length;
>> > for (var i = 0, len = library.length; i--;) {

^^^^^ ^^^


>> No :)
>
> No? Perhaps you meant you could improve on it?

If "to improve" means "let it do anything useful", then yes. Long night?
;-)

>>
>> > photos[i] = library[i]["img"];
>> > captions[i] = library[i]["caption"];
>> > }
>>
>> var
>> photos = [],
>> captions = [],
>> len = library.length;
>>
>> photos.length = captions.length = len;
>>
>> for (var i = len; i--;)
>> {
>> var o = library[i];
>> photos[i] = o.img;
>> captions[i] = o.caption;
>> }
>
> Yes, the start's a bit nicer (library length). I don't know if that
> assignment to o will help though. You only saved two lookups. Not
> worth worrying about it at this point.

Benchmarks suggest it would be about 20% faster in TraceMonkey 1.9.1.6.


PointedEars
--
Prototype.js was written by people who don't know javascript for people
who don't know javascript. People who don't know javascript are not
the best source of advice on designing systems that use javascript.
-- Richard Cornford, cljs, <f806at$ail$1$8300...@news.demon.co.uk>

Garrett Smith

Jan 13, 2010, 12:04:02 PM

Setting the length will avoid step 10 in array [[Put]], so should be
fastest. IE versions may benefit even more from this.
--
Garrett
comp.lang.javascript FAQ: http://jibbering.com/faq/

David Mark

Jan 13, 2010, 12:10:39 PM

Garrett Smith wrote:
> Thomas 'PointedEars' Lahn wrote:
>> David Mark wrote:
>>
>>> Thomas 'PointedEars' Lahn wrote:
>>>> David Mark wrote:
>>>>> Scott Sauyet wrote:
>>>>>> var photos = [], captions = [];
>>>>>> for (var i = 0, len = library.length; i < len; i++) {
>>>>>> photos.push(library[i]["img"]);
>>>>>> captions.push(library[i]["caption"]);
>>>>>> }
>>>>> var photos = [], captions = [];
>>>>> photos.length = captions.length = library.length;
>>>>> for (var i = 0, len = library.length; i--;) {
>> ^^^^^ ^^^
>>>> No :)
>>> No? Perhaps you meant you could improve on it?
>>
>> If "to improve" means "let it do anything useful", then yes. Long
>> night? ;-)

Yes. Typo. Was supposed to be

for (var i = library.length; i--;)

... and I didn't notice Thomas fixed that either. Bleary-eyed after
reading a bunch of (typically) horrible JS overnight. Why do the worst
programmers in the world write seemingly all of the "major" JS? It's
just not sustainable, so I think we'll all end up programming ES in
Flash or the like.

>>
>>>>> photos[i] = library[i]["img"];
>>>>> captions[i] = library[i]["caption"];
>>>>> }
>>>> var
>>>> photos = [],
>>>> captions = [],
>>>> len = library.length;
>>>>
>>>> photos.length = captions.length = len;
>>>>
>>>> for (var i = len; i--;)
>>>> {
>>>> var o = library[i];
>>>> photos[i] = o.img;
>>>> captions[i] = o.caption;
>>>> }
>>> Yes, the start's a bit nicer (library length). I don't know if that
>>> assignment to o will help though. You only saved two lookups. Not
>>> worth worrying about it at this point.
>>
>> Benchmarks suggest it would be about 20% faster in TraceMonkey 1.9.1.6.
>>
> Setting the length will avoid step 10 in array [[Put]], so should be
> fastest. IE versions may benefit even more from this.

But, as Thomas noted, it will be set (for good) the first time as we are
going backwards.


Garrett Smith

Jan 13, 2010, 12:17:25 PM

Where supported, in JS 1.8 or in ES5, you can use Array.prototype.map[1].

Array.prototype.map creates a new array with the results of calling a
provided function on every element in this array.

That would look like:

(function(){

  var library = [{ img: 'img01.jpg', caption: 'Caption 1'},
                 { img: 'img02.jpg', caption: 'Caption 2'},
                 { img: 'img03.jpg', caption: 'Caption 3'}];

  function filterByName(prop){
    return function(element){
      return element[prop];
    };
  }

  var imgArray = library.map(filterByName("img"));
  var captionArray = library.map(filterByName("caption"));
})();

Where not supported, Array.prototype.map functionality can be added, as
indicated on MDC page[1].

Writing your own loop would be fastest here.

[1]https://developer.mozilla.org/en/Core_JavaScript_1.5_Reference/Objects/Array/map
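
Where `map` is missing, the fallback can be attached first, as the MDC page suggests. A minimal sketch of that idea (deliberately simpler than the spec-compliant version on the page; it ignores the `thisArg` argument and sparse arrays):

```javascript
// Minimal Array.prototype.map fallback sketch (not the full
// spec-compliant MDC version: no thisArg handling, no sparse-array
// checks). Only installed where map is not already present.
if (typeof Array.prototype.map !== 'function') {
  Array.prototype.map = function (fn) {
    var result = [];
    for (var i = 0, len = this.length; i < len; i++) {
      result[i] = fn(this[i], i, this);
    }
    return result;
  };
}

var library = [{ img: 'img01.jpg', caption: 'Caption 1'},
               { img: 'img02.jpg', caption: 'Caption 2'}];

var photos = library.map(function (element) { return element.img; });
// String(photos) -> "img01.jpg,img02.jpg"
```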

Scott Sauyet

Jan 13, 2010, 5:13:45 PM

On Jan 13, 10:13 am, Thomas 'PointedEars' Lahn <PointedE...@web.de> wrote:
>> Scott Sauyet wrote:
>>>     var photos = [], captions = [];
>>>     for (var i = 0, len = library.length; i < len; i++) {
>>>         photos.push(library[i]["img"]);
>>>         captions.push(library[i]["caption"]);
>>>     }

>   var
>     photos = [],


>     captions = [],
>     len = library.length;
>
>    photos.length = captions.length = len;
>
>    for (var i = len; i--;)
>    {
>      var o = library[i];
>      photos[i] = o.img;
>      captions[i] = o.caption;
>    }


Any of the suggestions will work, of course. The differences have to
do with readability versus performance. Perhaps the most readable
version would be something like this:

var photos = [];
var captions = [];
for (var i = 0; i < library.length; i++) {
    var element = library[i];
    photos.push(element.img);
    captions.push(element.caption);
}

For people used to C-style languages, that loop will feel quite
familiar. My version added three minor changes: combining the initial
declarations into one statement, hoisting the length field out of the
loop, and removing the additional variable declaration inside the
loop. The first is mainly an issue of style (and perhaps bandwidth).
The second can be quite important for performance if the length is
determined by calling a heavy-weight function (such as if the list
were a dynamic collection of DOM nodes) but it probably has little
effect with a simple array like this. I have no idea how the third
change affects performance.

What Thomas suggests optimizes by reversing the iteration order. I
have heard that such a change often improves performance of JS loops,
but I don't know any numbers. The main disadvantage for beginning
programmers is that the loop is somewhat less readable, at least until
you get used to the convention. It takes advantage of the fact that 0
in a test is read as false, whereas positive integers are read as
true.

One nice variation of this is this loop format:

for (var i = library.length; i --> 0;) {
    // ...
}

This looks like "for i, starting at library.length, proceeding to 0,
do something". It really looks as though the code contains an arrow
("-->"). In actuality this is the postfix decrement operator ("--")
followed by a greater-than compare (">"), and it takes advantage of
the fact that the comparison happens before the decrement. But it's
very pretty.

The OP will have to decide how to weight performance advantages
against code clarity, but in order to help, does anyone have links to
metrics on the different performance characteristics of loops in
different browser environments?

-- Scott

Thomas 'PointedEars' Lahn

Jan 13, 2010, 7:18:44 PM

Scott Sauyet wrote:

> Thomas 'PointedEars' Lahn wrote:
>>> Scott Sauyet wrote:
>>>> var photos = [], captions = [];
>>>> for (var i = 0, len = library.length; i < len; i++) {
>>>> photos.push(library[i]["img"]);
>>>> captions.push(library[i]["caption"]);
>>>> }
>>
>> var
>> photos = [],
>> captions = [],
>> len = library.length;
>>
>> photos.length = captions.length = len;
>>
>> for (var i = len; i--;)
>> {
>> var o = library[i];
>> photos[i] = o.img;
>> captions[i] = o.caption;
>> }
>
> Any of the suggestions will work, of course.

Alas, not all, as I have pointed out.

> The differences have to do with readability versus performance.
> Perhaps the most readable version would be something like this:
>
> var photos = [];
> var captions = [];
> for (var i = 0; i < library.length; i++) {
> var element = library[i];
> photos.push(element.img);
> captions.push(element.caption);
> }
>
> For people used to C-style languages, that loop will feel quite
> familiar.

You don't know what you are talking about. First of all, for all intents
and purposes, ECMAScript is a C-style language. (If you have eyes to see,
you can observe the similarities.)

(Therefore,) the backwards-counting loop will be familiar to "people used
to C-style languages", and it will also be a lot more efficient than the
one above. After all there is no boolean type in C before C99, so you
would use `int' (and guess what, C's `for' statement works the same as in
ECMAScript save the implicit conversion!), and in C99 you would use
`(_Bool) i--' for maximum efficiency.

The ascending order and the unnecessary push() calls will decrease
efficiency considerably, and the push() call will decrease compatibility,
too (JScript 5.0 does not have it).

> My version added three minor changes: combining the initial
> declarations into one statement, hoisting the length field out of the

There is no field, `length' is a property.

> loop, and removing the additional variable declaration inside the
> loop. The first is mainly an issue of style (and perhaps bandwidth).
> The second can be quite important for performance if the length is
> determined by calling a heavy-weight function (such as if the list
> were a dynamic collection of DOM nodes) but it probably has little
> effect with a simple array like this. I have no idea how the third
> change affects performance.

It affects it considerably, as variable instantiation only happens once
before execution, and identifiers are _not_ block-scoped -- don't you know
*anything*?

As a result, removing the variable declaration and initialization, rather
unsurprisingly, *decreases* performance as the property lookup is repeated.
Make benchmarks (or take more care when reading, and observe the results
that I have posted already).

> What Thomas suggests optimizes by reversing the iteration order. I
> have heard that such a change often improves performance of JS loops,
> but I don't know any numbers.

Make benchmarks then, and you will see that the simple post-decrement
increases efficiency considerably as well (else the pattern would not have
prevailed, would it?). We have been over this ad nauseam before.

> The main disadvantage for beginning programmers is that the loop is
> somewhat less readable, at least until you get used to the convention.
> It takes advantage of the fact that 0 in a test is read as false,
> whereas positive integers are read as true.

Do you get a kick out of explaining the obvious even though nobody has
asked a question? And you are being imprecise after all: *All* numeric
values *except* 0 (+0 and -0, but these are only Specification mechanisms)
and NaN type-convert to `true'. We have discussed this numerous times as
well.

> One nice variation of this is this loop format:
>
> for (var i = library.length; i --> 0;) {
> // ...
> }
>
> This looks like "for i, starting at library.length, proceeding to 0,
> do something". It really looks as though the code contains an arrow
> ("-->"). In actuality this is the postfix decrement operator ("--")
> followed by a greater-than compare (">"), and it takes advantage of
> the fact that the comparison happens before the decrement.

As in the more simple and more efficient i-- ...

> But it's very pretty.

Pretty according to whose standards? It could create misconceptions with
newcomers about a limit-type `-->' operator, and it is even less readable
or obvious than the versions you were incompetently whining about.

> The OP will have to decide how to weight performance advantages
> against code clarity, but in order to help, does anyone have links to
> metrics on the different performance characteristics of loops in
> different browser environments?

The browser does not matter (the ECMAScript implementation does), and this
discussion has been performed a hundred times or so already. So have
results been posted. Much you have to learn.


Score adjusted

Tuxedo

Jan 14, 2010, 3:08:55 AM

Thanks to all for posting the various code examples, it more than solved my
small multi to single array items conversion problem!

Tuxedo

Scott Sauyet

Jan 14, 2010, 11:41:15 AM

On Jan 13, 7:18 pm, Thomas 'PointedEars' Lahn <PointedE...@web.de>
wrote:

> Scott Sauyet wrote:
>> The differences have to do with readability versus performance.
>> Perhaps the most readable version would be something like this:
>
>> var photos = [];
>> var captions = [];
>> for (var i = 0; i < library.length; i++) {
>> var element = library[i];
>> photos.push(element.img);
>> captions.push(element.caption);
>> }
>
>> For people used to C-style languages, that loop will feel quite
>> familiar.
>
> You don't know what you are talking about. First of all, for all intents
> and purposes, ECMAScript is a C-style language. (If you have eyes to see
> you can observed the similarities.)

I'm sure that often I don't, but in fact here I do know what I'm
talking about. Javascript is syntactically in the family of C-style
languages most recognizable for blocks delimited with curly braces.
Just about any programmer who has programmed with a member of this
family of languages will recognize this syntax:

for (i = 0; i < bound; i++) {
    // Do something.
}

They not only recognize it, but have themselves coded with it.
The general form has been described [1] as

for (initialization; continuation condition; incrementing expr)
{
    statement(s)
}

This is of course inaccurate, because the last expression does not
*have* to increment. It can decrement, it can do some strange
combination, or it can be skipped altogether. But the incrementing
version is the form with which programmers are most familiar.

> (Therefore,) the backwards-counting loop will be familiar to "people used
> to C-style languages",


Many, probably most, will recognize this easily enough:

for (i = bound; i > 0; i--) {
    // Do something.
}

But this is a less commonly used, and less familiar, variation. The
following one though, is seen significantly less often:

for (i = bound; i--;) {
    // Do something.
}

It won't work in languages which do not read a zero as false (Java),
and it might be frowned upon on languages which do so, but discourage
it (PHP?). I would not suggest it to a beginner unless it was to
clear up some performance problem.

The point, though, is that the OP was posting a problem that could be
solved easily by someone with only a little Javascript experience. It
might be a bad guess, but I did guess that the OP was a JS beginner,
and gave the simplest answer I could think of that could work (except
that I didn't think to remove what's become standard for me in
hoisting the loop bound.)


> and it will also be a lot more efficient than the

> one above. [ ... ]

I did say that I had heard that descending loops in JS were more
efficient than ascending ones, and asked if there was documentation
for that. I'm sorry, but your assertion is not enough proof for me.
Do you know of any decent references?


> The ascending order and the unnecessary push() calls will decrease
> efficiency considerably, and the push() call will decrease compatibility,
> too (JScript 5.0 does not have it).

I don't know about the OP, but I tend to worry little about JScript
5.0. As to the efficiency claims, do you have any documentation?


>> My version added three minor changes: combining the initial
>> declarations into one statement, hoisting the length field out of the
>
> There is no field, `length' is a property.

True. Pedantic, but true.


>> loop, and removing the additional variable declaration inside the
>> loop. The first is mainly an issue of style (and perhaps bandwidth).
>> The second can be quite important for performance if the length is
>> determined by calling a heavy-weight function (such as if the list
>> were a dynamic collection of DOM nodes) but it probably has little
>> effect with a simple array like this. I have no idea how the third
>> change affects performance.
>
> It affects it considerably, as variable instantiation only happens once
> before execution, and identifiers are _not_ block-scoped

Yes, they are function-scoped in JS.

So you're saying that this:

for (var i = 0; i < library.length; i++) {
    var element = library[i];
    photos.push(element.img);
    captions.push(element.caption);
}

is considerably more efficient than this:

for (var i = 0; i < library.length; i++) {
    photos.push(library[i].img);
    captions.push(library[i].caption);
}

I don't know the relative costs of variable assignment versus array
look-up. Do you have any references on this? I count an additional
assignment in the first and one fewer look-up. Does that offer a
considerable performance gain?


> don't you know *anything*?

Yes. I do.


> As a result, removing the variable declaration and initialization, rather
> unsurprisingly, *decreases* performance as the property lookup is repeated.
> Make benchmarks (or take more care when reading, and observe the results
> that I have posted already).

I see differences in integer array look-ups, not arbitrary property
look-ups. I don't know if ES implementations have any optimizations
for dense arrays over arbitrary property look-ups.

As to benchmarks, perhaps I will try some. But if you have
references, would you please share them? Still, you are the one
making the claim. Have you any benchmarks of your own to share?


>> What Thomas suggests optimizes by reversing the iteration order. I
>> have heard that such a change often improves performance of JS loops,
>> but I don't know any numbers.
>
> Make benchmarks then, and you will see that the simple post-decrement
> increases efficiency considerably as well (else the pattern would not have
> prevailed, would it?). We have been over this ad nauseam before.

Funny, you're often the one berating others for not understanding
USENET. Having been over a subject ad nauseam, the group is best off
pointing new users to FAQ entries or other resources. All you've done
is make assertions.


>> The main disadvantage for beginning programmers is that the loop is
>> somewhat less readable, at least until you get used to the convention.
>> It takes advantage of the fact that 0 in a test is read as false,
>> whereas positive integers are read as true.
>
> Do you get a kick out of explaining the obvious even though nobody has
> asked a question?

Again, the OP asked a relatively easy question, and has received
several competing solutions. If the OP knew all that, don't you think
it likely she or he would have been able to solve the problem without
asking in this group?


> And you are being imprecise after all: *All* numeric
> values *except* 0 (+0 and -0, but these are only Specification mechanisms)
> and NaN type-convert to `true'. We have discussed this numerous times as
> well.

No, I was being quite precise. The construct under discussion does
not take advantage of the fact that 2.718281828 or -pi are interpreted
as true, only that the positive integers are.

But thank you for pointing this out, as it brings to mind another
potential pitfall of this technique: if the body of your loop
decrements the loop variable, you might transform working code into a
nearly-endless loop. Of course this is a sign of bad code, but it's
more likely to happen with an equality condition ("i") than an
inequality ("i < bound").
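
A contrived sketch of that pitfall (the counts and safety stop are purely illustrative):

```javascript
// A stray extra decrement in the loop body can jump the counter past
// zero. With the equality-style test (i--), every negative value is
// truthy, so the loop runs away; with an inequality (j > 0), the same
// stray decrement just ends the loop early.
var iterations = 0;
for (var i = 3; i--;) {
  iterations++;
  i--;                      // stray extra decrement in the body
  if (iterations > 100) {   // safety stop for the demonstration
    break;
  }
}
// iterations -> 101 (runaway, capped by the safety stop)

var safe = 0;
for (var j = 3; j > 0; j--) {
  safe++;
  j--;                      // same stray decrement: loop just ends early
}
// safe -> 2
```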


>> One nice variation of this is this loop format:
>
>> for (var i = library.length; i --> 0;) {
>> // ...
>> }

> [ ... ]


>> But it's very pretty.
>
> Pretty according to whose standards?

The eye of the beholder, of course. Who else would you appoint the
arbiter of code beauty?

But I personally find it very appealing.


> It could create misconceptions with
> newcomers about a limit-type `-->' operator, and it is even less readable
> or obvious than the versions you were incompetently whining about.

Suddenly you're worried about newcomers? :-)

My previous post was clearly not aimed at those who know Javascript
well, just an attempt to help the OP. But I'm not sure that your two
clauses above really work together well. If it creates misconceptions
about a non-existent operator, it's only *because* it is more readable
to the newcomer.

Still, I take offense at the notion that my whining was incompetent.
It clearly grated on you, so it seems to be very competent
whining. :-)


>> The OP will have to decide how to weight performance advantages
>> against code clarity, but in order to help, does anyone have links to
>> metrics on the different performance characteristics of loops in
>> different browser environments?
>
> The browser does not matter (the ECMAScript implementation does),

Overly pedantic again, but I'll certainly concede the point.


> and this discussion has been performed a hundred times or so
> already. So have results been posted.

Do you happen to have references?


> Much you have to learn.

Yes I do. I'm just hoping to find some competent instructors.


> Score adjusted

So it's what now? 40 - love, my serve again?


-- Scott
____________________
[1] http://en.wikipedia.org/wiki/Curly_bracket_programming_language#Loops

Scott Sauyet

Jan 14, 2010, 1:17:49 PM

On Jan 14, 11:41 am, Scott Sauyet <scott.sau...@gmail.com> wrote:
> As to benchmarks, perhaps I will try some.

Ok, I tried. I don't have much experience at JS benchmarks, so there
may be major flaws in them. I'd appreciate it if someone could point
out improvements.

I've posted some tests here:

http://scott.sauyet.com/Javascript/Test/LoopTimer/1/test/

These tests run two of the code samples discussed earlier [1, 2]
multiple times. I try them with initial arrays of length 10, 100,
1000, and 10000. I run them, respectively, 100000, 10000, 1000, and
100 times, so in each case the number of iterations of the main body
of the function will happen one million times. (This should lead to
slightly higher efficiency with larger arrays, as the outer loop has
to run fewer times, but I suspect that that's only noise in the
results.) To run this in IE, I could only run one test at a time or
the browser would alert me of long-running scripts and my running
times were compromised.

In any case, I report the number of iterations of the tested algorithm
that run per millisecond.

I collected the results at

http://scott.sauyet.com/Javascript/Test/LoopTimer/1/

But I've done something screwy to IE on the page trying to be clever.
If anyone can tell me why the generated links are not working properly
in IE, I would love to know. In other browsers, you can see the
results, run one of the predefined tests, or choose to run either
algorithm in your browser, supplying the size of the array and the
number of iterations to run.

I tested on the browsers I have on my work machine:

Chrome 3.0.195.27
FF 3.5.7
IE 8
Opera 9.64
Safari 4.0.3

All running on Windows XP on a fairly powerful machine.

The results definitely say that the backward looping algorithm
supplied by Thomas is generally more efficient than the forward one I
gave. But there is some murkiness. First of all, the values reported
as this is run in different browsers (and specifically in their
ECMAScript implementations, for the pedantic among you) vary hugely.
They can differ by a factor of 50 or more.

In Opera the backward looping was faster at all array sizes, by an
approximate factor of 3. In Safari it was faster by a factor of 5.
In IE, it was faster, but at a factor that decreased as the array size
increased, down to about 1.21 for a 10000-element array. In Firefox
and Chrome it was more complicated. For array sizes of 10, 100, and
1000 in Chrome, the backward was faster than forward by factors
approximately 1.5 - 2.5. But for 10000 elements, forward was faster
by a factor of about 1.25; I checked at 100000 elements too, and
forward was faster by a factor of about 1.5. In Firefox, backwards
was faster than forwards for 10 and 100 elements, by a factor of 1.5
and 2, respectively, but for 1000, forward was faster than backward by
a factor of 17, and at 10000, forward was faster by a factor of 27.
It's not that forward improved at higher array sizes in FF but that
backwards slowed way down.

I did try reversing the order of the tests to see if garbage
collection had anything to do with this, but it made no substantive
difference.

The conclusion I can draw from this is that backward is generally a
better bet, but that might be reversed for higher array sizes, at
least in FF and Chrome.

So, has anyone got suggestions for improving these tests? Can someone
tell me what's wrong in IE in my results page?

-- Scott Sauyet
____________________
[1] http://groups.google.com/group/comp.lang.javascript/msg/73c40b2b284d970a
[2] http://groups.google.com/group/comp.lang.javascript/msg/6c56a2bac08daaa4

Jake Jarvis

unread,
Jan 14, 2010, 2:53:46 PM1/14/10
to

line 120:

| var size = parseInt(cells[i].textContent, 10) || 1,
^^^^^^^^^^^
No such thing in mshtml

--
Jake Jarvis

Jorge

unread,
Jan 14, 2010, 3:01:49 PM1/14/10
to
On Jan 14, 7:17 pm, Scott Sauyet <scott.sau...@gmail.com> wrote:
> (...)

> So, has anyone got suggestions for improving these tests?
> (...)

Yes, :-)
http://jorgechamorro.com/cljs/093/

BTW, i-- is not faster than i++.
Cheers,
--
Jorge.

Scott Sauyet

unread,
Jan 14, 2010, 3:23:55 PM1/14/10
to
On Jan 14, 2:53 pm, Jake Jarvis <pig_in_sh...@yahoo.com> wrote:

> Scott Sauyet wrote:
>> I collected the results at
>>    http://scott.sauyet.com/Javascript/Test/LoopTimer/1/
>> But I've done something screwy to IE on the page trying to be clever.
>> If anyone can tell me why the generated links are not working properly
>> in IE, I would love to know.

> line 120:


>
> | var size = parseInt(cells[i].textContent, 10) || 1,
>                                ^^^^^^^^^^^
> No such thing in mshtml

D'OH. Self-administered dope-slap! I guess that's what I get for
throwing things together.

Thank you very much.

-- Scott

Scott Sauyet

unread,
Jan 14, 2010, 3:25:52 PM1/14/10
to

Updated version:

http://scott.sauyet.com/Javascript/Test/LoopTimer/2/

Thanks again,

-- Scott

Jorge

unread,
Jan 14, 2010, 3:40:19 PM1/14/10
to

You can't compare a .push(value) with an [i]= value; !
--
Jorge.

Scott Sauyet

unread,
Jan 14, 2010, 3:57:10 PM1/14/10
to

Oh no? I believe I just did! :-)

This was really a comparison of my solution to the original problem
and Thomas Lahn's follow-up.

I'm not trying to prove something in particular about what part of the
algorithm is speedy or slow, only to see what differences in speed
there are between them.

-- Scott

Jorge

unread,
Jan 14, 2010, 4:01:06 PM1/14/10
to

Ok. But don't believe what Pointy said: "Make benchmarks then, and you
will see that the simple post-decrement increases efficiency
considerably as well", because it's not true.
--
Jorge.

Scott Sauyet

unread,
Jan 14, 2010, 4:01:40 PM1/14/10
to
On Jan 14, 3:01 pm, Jorge <jo...@jorgechamorro.com> wrote:
> On Jan 14, 7:17 pm, Scott Sauyet <scott.sau...@gmail.com> wrote:
>
>> So, has anyone got suggestions for improving these tests?
>
> Yes, :-)http://jorgechamorro.com/cljs/093/

Thanks. I'm confused, though. You don't seem to assign anything to
"library". Am I missing something? If not, then the test seems to be
meaningless.

> BTW, It's not faster i-- than i++.

No, I wouldn't expect it to be. But that doesn't imply that

for (i = bound; i--;) {/* ... */}

isn't faster than

for (i = 0; i < bound; i++) {/* ... */}

is, right? That's what Thomas was arguing.

-- Scott

Jorge

unread,
Jan 14, 2010, 4:05:38 PM1/14/10
to
On Jan 14, 10:01 pm, Scott Sauyet <scott.sau...@gmail.com> wrote:
> On Jan 14, 3:01 pm, Jorge <jo...@jorgechamorro.com> wrote:
>
> > On Jan 14, 7:17 pm, Scott Sauyet <scott.sau...@gmail.com> wrote:
>
> >> So, has anyone got suggestions for improving these tests?
>
> > Yes, :-)http://jorgechamorro.com/cljs/093/
>
> Thanks.  I'm confused, though.  You don't seem to assign anything to
> "library".  Am I missing somthing?  If not, then the test seems to be
> meaningless.

It's in line #66: 2e5 elements.

(function (count) {
  while (count--) {
    library[count] = { img: 'img' + count + '.jpg',
                       caption: 'Caption ' + count };
  }
})(window.JSLitmusMultiplier = 2e5);
--
Jorge.

Garrett Smith

unread,
Jan 14, 2010, 4:12:42 PM1/14/10
to
There are too many variables in your test. The forwards loop body calls
`push` twice, while the reverse loop does not.

What is causing the performance difference? Is it the direction of the
loop or is it the call to `push`? You can't tell from that test, can you?

To test the performance of loop direction, the loop body must be
identical in both loops.
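Garrett's point can be sketched concretely: a fairer pair of test
functions differs only in iteration direction, with identical bodies
using indexed assignment (function names here are hypothetical):

```javascript
// Both loops use the same body (single lookup, indexed assignment),
// so only the iteration direction differs between the two tests.
function splitForward(library) {
  var photos = [], captions = [];
  for (var i = 0, len = library.length; i < len; i++) {
    var o = library[i];
    photos[i] = o.img;
    captions[i] = o.caption;
  }
  return { photos: photos, captions: captions };
}

function splitBackward(library) {
  var photos = [], captions = [];
  for (var i = library.length; i--;) {
    var o = library[i];
    photos[i] = o.img;
    captions[i] = o.caption;
  }
  return { photos: photos, captions: captions };
}
```

Either function produces the same result, so timing them against the
same input isolates the direction variable.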

Garrett Smith

unread,
Jan 14, 2010, 4:18:22 PM1/14/10
to

You are making an assertion based on observations of
implementation-dependent behavior, and of course, without proof.

> After all there is no boolean type in C before C99, so you
> would use `int' (and guess what, C's `for' statement works the same as in
> ECMAScript save the implicit conversion!), and in C99 you would use
> `(_Bool) i--' for maximum efficiency.
>

I'm failing to see how the unavailability of boolean type in C is
relevant to the discussion of loop performance in ECMAScript.

Perhaps you can provide further information as to how this proves that a
backwards loop in ECMAScript is more efficient.

If `photos` order matters, then that ought to be a motivating factor for
which way to loop.

> The ascending order and the unnecessary push() calls will decrease
> efficiency considerably, and the push() call will decrease compatibility,
> too (JScript 5.0 does not have it).
>

The `push` method is good for adding several elements at a time, as in:-

myArray.push(q,w,e,r,a,s,d,f,z,x,c,v);

That is not the case here. Calling `push` would be an extra function
call here. No need for it.

[...]


>
>> What Thomas suggests optimizes by reversing the iteration order. I
>> have heard that such a change often improves performance of JS loops,
>> but I don't know any numbers.
>
> Make benchmarks then,

Did you retest your benchmarks?

> and you will see that the simple post-decrement
> increases efficiency considerably as well (else the pattern would not have
> prevailed, would it?). We have been over this ad nauseam before.
>

Any observed difference in performance will be implementation-dependent.

It appears that you have believed some observations in a few browsers
and incorrectly come to the conclusion that a backwards loop is faster.

>
> Score adjusted
Nobody but you cares about your silly scorekeeping activities. They're
embarrassing. Best keep that to yourself and not tell anyone.

Jorge

unread,
Jan 14, 2010, 4:28:34 PM1/14/10
to
On Jan 14, 10:01 pm, Scott Sauyet <scott.sau...@gmail.com> wrote:
>
> No, I wouldn't expect it to be.  But that doesn't imply that
>
>     for (i = bound; i--;) {/* ... */}
>
> isn't faster than
>
>     for (i = 0; i < bound; i++) {/* ... */}
>
> is, right?

Yes but no. On a good day we would be talking at best about an
infinitesimal of a µs.

> That's what Thomas was arguing.

i<bound is a boolean, but i-- isn't, so what you've got to compare it
to is really ~ (i === 0) (i converted to a boolean). It might be true
that the test (i === 0) is some small fraction of a µs faster than
(i<bound), but for some reason (implementation ?) filling the arrays
upwards is faster than downwards and that weights much more than that
infinitesimal of µs in the end results.

The truth is that filling the arrays upwards is faster in most
browsers, and when not, they're ~ equally fast.

--
Jorge.

Scott Sauyet

unread,
Jan 14, 2010, 5:03:29 PM1/14/10
to
On Jan 14, 4:12 pm, Garrett Smith <dhtmlkitc...@gmail.com> wrote:
> Scott Sauyet wrote:
>> On Jan 14, 3:23 pm, Scott Sauyet <scott.sau...@gmail.com> wrote:
>> Updated version:
>>    http://scott.sauyet.com/Javascript/Test/LoopTimer/2/
>> Thanks again,
>
> There are too many variables in your test. The forwards loop body calls
> `push` twice, while the reverse loop does not.
>
> What is causing the performance difference? Is it the direction of the
> loop or is it the call to `push`? You can't tell from that test, can you?

I was trying really hard not to get sucked into the navel-gazing
arguments so prevalent here. So much for my New Year's resolution, I
guess. :-)

Thomas offered a suggested improvement to my function. My tests show
that for most reasonable cases it was an improvement, and also that
it's not as clear-cut as he suggested. I should have known better
than to distinguish one implementation from another on the basis of
one of their features (forward/backward), but I guess I'm stuck with
it now!

If I find some time tonight or tomorrow, I'll do a few more thorough
benchmarks of multiple versions of the algorithms and see what the
main issues are.

Thank you very much for your response.

-- Scott

Garrett Smith

unread,
Jan 14, 2010, 6:43:08 PM1/14/10
to
Scott Sauyet wrote:
> On Jan 14, 4:12 pm, Garrett Smith <dhtmlkitc...@gmail.com> wrote:
>> Scott Sauyet wrote:
>>> On Jan 14, 3:23 pm, Scott Sauyet <scott.sau...@gmail.com> wrote:
>>> Updated version:
>>> http://scott.sauyet.com/Javascript/Test/LoopTimer/2/
>>> Thanks again,
>> There are too many variables in your test. The forwards loop body calls
>> `push` twice, while the reverse loop does not.
>>
>> What is causing the performance difference? Is it the direction of the
>> loop or is it the call to `push`? You can't tell from that test, can you?
>
> I was trying really hard not to get sucked into the navel-gazing
> arguments so prevalent here. So much for my New Year's resolution, I
> guess. :-)
>

The test you created cannot be used as the basis for a conclusion
about loop direction speed.

> Thomas offered a suggested improvement to my function. My tests show
> that for most reasonable cases it was an improvement, and also that
> it's not as clear-cut as he suggested.

Your tests do not show that a forwards loop is faster than a reverse
loop. They do not show that `push` is slower than [[Put]] (though that
should be obvious). The tests you provided prove nothing.

Think about `myArray.push()` versus `[[Put]]` with property accessors.

To call `myArray.push()`, the ECMAScript interpreter must resolve the
push property. The push property is not resolved on the object; it is
resolved on the object's prototype, Array.prototype. After `push` has
been resolved to a value, the interpreter calls it with `myArray` as the
`this` value and an internal `ArgumentList` of the arguments passed.

Compare that to Array [[Put]].

In Array [[Put]](p, v), if `myArray` does not have a property with name
p, then it is created and given the value v.

If the numeric representation of the property name (ToUint32(p)) is >=
length, then `myArray.length` is increased to `ToUint32(p) + 1`.
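That length rule can be observed directly (a small illustration, not
part of the original post):

```javascript
// Assigning at an index >= length implicitly updates length to
// ToUint32(index) + 1, per the Array [[Put]] steps described above.
var a = [];
a[9999] = 'x';
// a.length is now 10000, although only one element is actually present.
```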

> I should have known better
> than to distinguish one implementation from another on the basis of
> one of their features (forward/backward), but I guess I'm stuck with
> it now!
>

It is a misleading test.

> If I find some time tonight or tomorrow, I'll do a few more thorough
> benchmarks of multiple versions of the algorithms and see what the
> main issues are.
>

The loop backwards using arrays seems to be slower than the loop forwards.

Results:

Opera 10.10:
forwards()
353
backwards()
432

Firefox 3.5.5:
forwards()
64
backwards()
933

Seamonkey
forwards()
1927
backwards()
2600

IE7:
forwards()
2141
backwards()
2502

Safari 4:
forwards()
66
backwards()
3458

Safari 3:
forwards()
1079
backwards()
8646

Safari 2:
(out of memory errors).


<!doctype html>
<html lang="en">
<head>
<title>loop test</title>
</head>
<body>
<script type="text/javascript">
var ITERATIONS = 1000000;
function forwards(){
  var x = [], d = +new Date, noise = Math.PI,
      i, len = ITERATIONS, time;
  x.length = len;
  for(i = 0; i < len; i++){
    x[i] = i + noise;
  }
  time = new Date-d;
  document.getElementById("f").innerHTML = String(time);
}

function backwards(){
  var x = [], d = +new Date, noise = Math.PI,
      i, len = ITERATIONS, time;
  x.length = len;
  for(i = len; i-- > 0;){
    x[i] = i + noise;
  }
  time = new Date-d;
  document.getElementById("b").innerHTML = String(time);
}
</script>
<button onclick="forwards();">forwards()</button>
<pre id="f">-</pre>

<button onclick="backwards();">backwards()</button>
<pre id="b">-</pre>
</body>
</html>

What is the explanation for Thomas' certainty that the backwards loop
would be faster? One possibility is irrational mentality. Another
possibility is that a backwards loop had been observed to be faster at
one point.

If a backwards loop had been observed to be faster, then it is worth
investigating the test that made such conclusion.

It is likely that that test did not set the `length` property of the
resulting array prior to looping. If that is the case, then the loop
forwards has a penalty of:

| 10. Change (or set) the value of the length property of A to
| ToUint32(P)+ 1.

Resetting the length on every iteration (implicitly, in step 10 of Array
[[Put]]), demands more work of the implementation.

The key to populating an array quickly is setting the length of the
array beforehand.

The direction of the loop should not be chosen based on speed, but by
the order desired of the resulting array.
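The advice above can be sketched as follows (the helper name is
hypothetical; it assumes only that library entries have img and
caption properties, as elsewhere in this thread):

```javascript
// Preset the result arrays' length once before looping, and pick the
// loop direction by the desired element order, not by presumed speed.
function splitLibrary(library) {
  var len = library.length, photos = [], captions = [];
  photos.length = captions.length = len;  // preallocate once
  for (var i = 0; i < len; i++) {         // forward: preserves order
    photos[i] = library[i].img;
    captions[i] = library[i].caption;
  }
  return { photos: photos, captions: captions };
}
```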

Lasse

unread,
Jan 15, 2010, 5:03:27 AM1/15/10
to
On Jan 14, 7:17 pm, Scott Sauyet <scott.sau...@gmail.com> wrote:
> In Firefox and Chrome it was more complicated.  For array sizes of 10, 100, and
> 1000 in Chrome, the backward was faster than forward by factors
> approximately 1.5 - 2.5.  But for 10000 elements, forward was faster
> by a factor of about 1.25; I checked at 100000 elements too, and
> forward was faster by a factor of about 1.5.

The probable reason for this is that you start filling out the array
from the back. This means that the array starts out with only one
element at index 9999 - a very sparse array. Chrome and Firefox likely
start using an internal representation meant for sparse arrays, which
is not as time efficient as linear arrays. If you fill out the result
array from the start, it will always be densely packed, and will stay
that way when the backing store is increased.

Try changing your array initialization to

  var len = library.length,
      photos = new Array(len),
      captions = new Array(len);

At least in Chrome this preallocates the arrays as dense arrays with
the given length.
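Lasse's suggestion, applied to the thread's running example (a sketch;
the function name is hypothetical):

```javascript
// Preallocate dense result arrays with new Array(len), then fill
// forward so the arrays never pass through a sparse state.
function splitDense(library) {
  var len = library.length,
      photos = new Array(len),
      captions = new Array(len);
  for (var i = 0; i < len; i++) {
    photos[i] = library[i].img;
    captions[i] = library[i].caption;
  }
  return { photos: photos, captions: captions };
}
```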

I also notice that the two algorithms differ in several ways. The
backward one only reads library[i] once, whereas the forward one reads
it twice. One uses foo.push(value), the other foo[i] = value; both
could use foo[i] = value. They also differ in how the property is
loaded: one uses foo.caption, the other foo["caption"]. That shouldn't
matter, but there is no reason to make the tests look so different.

Also, I don't see nearly as big a difference in Chrome 4 (dev-channel
release, on Linux):
Algorithm: Plain, elements: 1000000, elements per millisecond: 3817
Algorithm: Reverse, elements: 1000000, elements per millisecond: 4259
(using 5 iterations).

/L

Thomas 'PointedEars' Lahn

unread,
Jan 15, 2010, 6:11:24 AM1/15/10
to
Scott Sauyet wrote:

> Jorge <jo...@jorgechamorro.com> wrote:
>> Scott Sauyet <scott.sau...@gmail.com> wrote:
>>> So, has anyone got suggestions for improving these tests?
>> Yes, :-)http://jorgechamorro.com/cljs/093/
>
> Thanks. I'm confused, though. You don't seem to assign anything to
> "library". Am I missing somthing? If not, then the test seems to be
> meaningless.

^^^^^^^^^^^
As always.

>> BTW, It's not faster i-- than i++.
>
> No, I wouldn't expect it to be. But that doesn't imply that
>
> for (i = bound; i--;) {/* ... */}
>
> isn't faster than
>
> for (i = 0; i < bound; i++) {/* ... */}
>
> is, right? That's what Thomas was arguing.

Exactly. It stands to reason that this applies simply because it takes one
operation less. It would appear that setting array elements biases the
result towards the incrementing loop because newer optimizations that
consider sparse arrays come into play.

var len = 100000;

/*
* WebKit throws a TypeError with this later;
* base object of call must be `console' there
*/
var dmsg = typeof console != "undefined" && console.log || window.alert;

var d = new Date();
for (var i = len; i--;) ;
dmsg(new Date() - d);

d = new Date();
for (var i = 0; i < len; i++) ;
dmsg(new Date() - d);

The second one is always considerably slower here (TraceMonkey 1.9.1.6,
JScript 5.6.6626: > 100%; Opera 10.10: > 50%; JavaScriptCore 531.21.8: >
40%).


PointedEars
--
realism: HTML 4.01 Strict
evangelism: XHTML 1.0 Strict
madness: XHTML 1.1 as application/xhtml+xml
-- Bjoern Hoehrmann

Jorge

unread,
Jan 15, 2010, 7:12:49 AM1/15/10
to
On Jan 15, 12:11 pm, Thomas 'PointedEars' Lahn <PointedE...@web.de>
wrote:

> Scott Sauyet wrote:
> > Jorge <jo...@jorgechamorro.com> wrote:
> >> Scott Sauyet <scott.sau...@gmail.com> wrote:
> >>> So, has anyone got suggestions for improving these tests?
> >> Yes, :-)http://jorgechamorro.com/cljs/093/
>
> > Thanks.  I'm confused, though.  You don't seem to assign anything to
> > "library".  Am I missing somthing?  If not, then the test seems to be
> > meaningless.

Scott: learn to read code.

>   ^^^^^^^^^^^
> As always.

Pointy: you're an IDIOT.

The reality:

var i= 1e6, a= [], d= +new Date();
while (i--) { a[i]= ""; }
console.log(+new Date()- d);

Safari4.0.4 --> 2012ms
FF3.6 --> 1730ms
ff3.5 --> 1674ms
ff3.0.15 --> 1810ms
Chrome --> 1114ms
Opera 10.5b --> 665ms

var len= 1e6, i= 0, a= [], d= +new Date();
while (i<len) { a[i++]= ""; }
console.log(+new Date()- d);

Safari4.0.4 --> 303ms 15% (!)
FF3.6 --> 1110ms 64%
ff3.5 --> 1236ms 73%
ff3.0.15 --> 921ms 50%
Chrome --> 1310ms 117%
Opera 10.5b --> 680ms 102%
--
Jorge.

Thomas 'PointedEars' Lahn

unread,
Jan 15, 2010, 7:35:05 AM1/15/10
to
Jorge wrote:

> Pointy: you're an IDIOT.

Mirror, mirror ...

> The reality:
>
> var i= 1e6, a= [], d= +new Date();
> while (i--) { a[i]= ""; }
> console.log(+new Date()- d);

Idiot. That's a *while* loop that *sets array elements*. Not a *for* loop
that *does not*.

Jorge

unread,
Jan 15, 2010, 7:47:48 AM1/15/10
to
On Jan 15, 1:35 pm, Thomas 'PointedEars' Lahn <PointedE...@web.de>
wrote:

> Jorge wrote:
>
> > var i= 1e6, a= [], d= +new Date();
> > while (i--) { a[i]= ""; }
> > console.log(+new Date()- d);
>
> Idiot.  That's a *while* loop that *sets array elements*.  Not a *for* loop
> that *does not*.

(load applause)

*You* posted this code:

for (var i = len; i--;) {
  var o = library[i];
  photos[i] = o.img;
  captions[i] = o.caption;
}

And *you* said: "Benchmarks suggest it would be about 20% faster in
TraceMonkey 1.9.1.6.".
--
Jorge.

Thomas 'PointedEars' Lahn

unread,
Jan 15, 2010, 8:09:26 AM1/15/10
to
Jorge wrote:

> Thomas 'PointedEars' Lahn wrote:
>> Jorge wrote:
>> > var i= 1e6, a= [], d= +new Date();
>> > while (i--) { a[i]= ""; }
>> > console.log(+new Date()- d);
>>
>> Idiot. That's a *while* loop that *sets array elements*. Not a *for*
>> loop that *does not*.
>
> (load applause)

You are such a complete idiot.



> *You* posted this code:
>
> for (var i = len; i--;) {
> var o = library[i];
> photos[i] = o.img;
> captions[i] = o.caption;
> }
>
> And *you* said: "Benchmarks suggest it would be about 20% faster in
> TraceMonkey 1.9.1.6.".

In <news:2756852.M...@PointedEars.de>. Which is exactly what I
measured. (Faster than *what*, stupid? Now what have I been referring to
there?).

Neither of which disproves the statement about `for' loops I made in the
precursor, <news:1306342.k...@PointedEars.de>, that you tried to
disprove with your flawed test case,
<news:a34ddd71-fee2-488e b3f0-458...@r24g2000yqd.googlegroups.com>.

Get a brain, Jorge.

Jorge

unread,
Jan 15, 2010, 8:57:26 AM1/15/10
to
On Jan 15, 2:09 pm, Thomas 'PointedEars' Lahn <PointedE...@web.de>
wrote:
> (... whining ...)

Pointy, whine as much as you like, but your "optimized" code does in
fact run *slower*.
--
Jorge.

Jorge

unread,
Jan 15, 2010, 9:03:23 AM1/15/10
to
On Jan 15, 1:47 pm, Jorge <jo...@jorgechamorro.com> wrote:
>
> (load applause)

s/load/loud/
--
Jorge.

The Natural Philosopher

unread,
Jan 15, 2010, 9:10:29 AM1/15/10
to
s/applause/applause.wav/ ;-)

Thomas 'PointedEars' Lahn

unread,
Jan 15, 2010, 11:16:45 AM1/15/10
to
Garrett Smith wrote:

> var imgArray = library.map(filterByName("img"));
> var captionArray = library.map(filterByName("caption"));
> [...]
>
> Where not supported, Array.prototype.map functionality can be added, as
> indicated on MDC page[1].

What a waste. If an Array prototype method should be used here, then
Array.prototype.forEach().

> Writing your own loop would be fastest here.

No argument there.

Garrett Smith

unread,
Jan 15, 2010, 1:07:01 PM1/15/10
to
Thomas 'PointedEars' Lahn wrote:
> Garrett Smith wrote:
>
>> var imgArray = library.map(filterByName("img"));
>> var captionArray = library.map(filterByName("caption"));
>> [...]
>>
>> Where not supported, Array.prototype.map functionality can be added, as
>> indicated on MDC page[1].
>
> What a waste. If an Array prototype method should be used here, then
> Array.prototype.forEach().
>

Array.prototype.forEach calls a supplied function on each element in the
Array. It does not create a new Array.

Array.prototype.map creates a new array with the results of calling a
supplied function on each element in the array.

And yes, much more efficient to just loop it yourself.
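The difference can be shown with the thread's running example (a
sketch using the standard Array.prototype.map):

```javascript
var library = [
  { img: 'img01.jpg', caption: 'Caption 1' },
  { img: 'img02.jpg', caption: 'Caption 2' }
];

// map: one full pass over library per derived array (two passes total),
// each pass producing a new array from the supplied function's results.
var photos = library.map(function (e) { return e.img; });
var captions = library.map(function (e) { return e.caption; });
```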

Garrett Smith

unread,
Jan 15, 2010, 1:20:19 PM1/15/10
to
Lasse wrote:
> On Jan 14, 7:17 pm, Scott Sauyet <scott.sau...@gmail.com> wrote:
>> In Firefox and Chrome it was more complicated. For array sizes of 10, 100, and
>> 1000 in Chrome, the backward was faster than forward by factors
>> approximately 1.5 - 2.5. But for 10000 elements, forward was faster
>> by a factor of about 1.25; I checked at 100000 elements too, and
>> forward was faster by a factor of about 1.5.
>
> The probable reason for this is that you start filling out the array
> from the back.
> This means that the array starts out with only one element af index
> 9999 - a very
> sparse array. Chrome and Firefox likely starts using an internal
> representation
> meant for sparse arrays, which is not as time efficient as linear
> arrays.

Did you notice the test code + results I posted?
Message ID: hioa6e$689$1...@news.eternal-september.org

> If you fill out the result array from the start, it will always be
> densely packed,
> and will stay that way when the backing store is increased.
>
> Try changing your array initialization to
> var len = library.length, photos = new Array(len), captions = new
> Array(len);

The Array constructor is a little bit slower than the solution I provided:

  var len = library.length, photos = [], captions = [];
  photos.length = captions.length = len;

Thomas 'PointedEars' Lahn

unread,
Jan 15, 2010, 1:41:15 PM1/15/10
to
Garrett Smith wrote:

> Thomas 'PointedEars' Lahn wrote:
>> Garrett Smith wrote:
>>> var imgArray = library.map(filterByName("img"));
>>> var captionArray = library.map(filterByName("caption"));
>>> [...]
>>>
>>> Where not supported, Array.prototype.map functionality can be added, as
>>> indicated on MDC page[1].
>>
>> What a waste. If an Array prototype method should be used here, then
>> Array.prototype.forEach().
>
> Array.prototype.forEach calls a supplied function on each element in the
> Array. It does not create a new Array.

You don't say.

> Array.prototype.map creates a new array with the results of calling a
> supplied function on each element in the array.

Exactly. If you use Array.prototype.map() you have to iterate *twice* (at
least internally) on the *same* array to get the two resulting arrays.
That's the waste.

  var
    imgArray = [],
    captionArray = [];

  library.forEach(function(e, i, a) {
    imgArray[i] = e.img;
    captionArray[i] = e.caption;
  });

Jorge

unread,
Jan 15, 2010, 1:52:11 PM1/15/10
to
On Jan 15, 7:41 pm, Thomas 'PointedEars' Lahn <PointedE...@web.de>
wrote:
>

>   var
>     imgArray = [],
>     captionArray = [];
>
>   library.forEach(function(e, i, a) {
>     imgArray[i] = e.img;
>     captionArray[i] = e.caption;
>   });

e is undefined...
--
Jorge.

Thomas 'PointedEars' Lahn

unread,
Jan 15, 2010, 2:00:31 PM1/15/10
to
Jorge wrote:

> Thomas 'PointedEars' Lahn wrote:
>> var
>> imgArray = [],
>> captionArray = [];
>>
>> library.forEach(function(e, i, a) {
>> imgArray[i] = e.img;
>> captionArray[i] = e.caption;
>> });
>
> e is undefined...

Where?


PointedEars
--
Anyone who slaps a 'this page is best viewed with Browser X' label on
a Web page appears to be yearning for the bad old days, before the Web,
when you had very little chance of reading a document written on another
computer, another word processor, or another network. -- Tim Berners-Lee

Jorge

unread,
Jan 15, 2010, 2:27:35 PM1/15/10
to
On Jan 15, 8:00 pm, Thomas 'PointedEars' Lahn <PointedE...@web.de>
wrote:

> Jorge wrote:
> > Thomas 'PointedEars' Lahn wrote:
> >> var
> >>   imgArray = [],
> >>   captionArray = [];
>
> >> library.forEach(function(e, i, a) {
> >>   imgArray[i] = e.img;
> >>   captionArray[i] = e.caption;
> >> });
>
> > e is undefined...
>
> Where?

And 'a' is unused...
--
Jorge.

Thomas 'PointedEars' Lahn

unread,
Jan 15, 2010, 2:39:05 PM1/15/10
to
Jorge wrote:

> Thomas 'PointedEars' Lahn wrote:
>> Jorge wrote:
>> > Thomas 'PointedEars' Lahn wrote:
>> >> var
>> >> imgArray = [],
>> >> captionArray = [];
>> >>
>> >> library.forEach(function(e, i, a) {
>> >> imgArray[i] = e.img;
>> >> captionArray[i] = e.caption;
>> >> });
>>
>> > e is undefined...
>>
>> Where?
>
> And 'a' is unused...

Yes, that argument does not need to be named here; however, the purpose
here was to show how forEach() could be used -- `a' is a reference to the
array.¹

You have not answered my question, though. I take it then that you found
out you were wrong.


PointedEars
___________
¹
<https://developer.mozilla.org/en/Core_JavaScript_1.5_Reference/Global_Objects/Array/forEach>
--
Use any version of Microsoft Frontpage to create your site.
(This won't prevent people from viewing your source, but no one
will want to steal it.)
-- from <http://www.vortex-webdesign.com/help/hidesource.htm> (404-comp.)

Jorge

unread,
Jan 15, 2010, 2:42:27 PM1/15/10
to
On Jan 15, 8:27 pm, Jorge <jo...@jorgechamorro.com> wrote:
>
> And 'a' is unused...

Sadly, it's not any faster...

http://jorgechamorro.com/cljs/093/
--
Jorge.

Jorge

unread,
Jan 15, 2010, 3:03:32 PM1/15/10
to
On Jan 15, 8:39 pm, Thomas 'PointedEars' Lahn <PointedE...@web.de>
wrote:

> Jorge wrote:
>
> > And 'a' is unused...
>
> Yes, that argument does not need to be named here; however, the purpose
> here was to show how forEach() could be used -- `a' is a reference to the
> array.¹

And library is undefined...

> You have not answered my question, though.  I take it then that you found
> out you were wrong.

Who? Meee?
(It looked to me like ++another of your bugs. :-)
--
Jorge.

Jorge

unread,
Jan 15, 2010, 3:23:30 PM1/15/10
to
On Jan 15, 8:42 pm, Jorge <jo...@jorgechamorro.com> wrote:
>
> Sadly, it's not any faster...
>
> http://jorgechamorro.com/cljs/093/

Except in FF3.6: http://tinyurl.com/ye94jhm
--
Jorge.

Scott Sauyet

unread,
Jan 15, 2010, 6:00:30 PM1/15/10
to
On Jan 14, 5:03 pm, Scott Sauyet <scott.sau...@gmail.com> wrote:
> If I find some time tonight or tomorrow, I'll do a few more thorough
> benchmarks of multiple versions of the algorithms and see what the
> main issues are.

I found a little time to run a number of tests, but I don't have time
now to analyze them. Anyone interested in seeing the raw data from
JSLitmus tests in five browsers running against lists with 10, 100,
1000, and 10000 elements can see my results, and run their own tests
here:

http://scott.sauyet.com/Javascript/Test/LoopTimer/3/

Note that FF seems extremely inconsistent, which makes me worry about
how it's interacting with JSLitmus. The runs shown here seem fairly
representative, though. All the other browsers seem fairly
consistent. The raw data (as a PHP array) is here:

http://scott.sauyet.com/Javascript/Test/LoopTimer/3/results.phps

Cheers,

-- Scott

Scott Sauyet

unread,
Jan 15, 2010, 6:02:33 PM1/15/10
to
On Jan 15, 3:03 pm, Jorge <jo...@jorgechamorro.com> wrote:
> And library is undefined...

Oh come on! We've been using similar definitions of library
throughout this thread. You're getting as pedantic as Thomas is,
here.

-- Scott

Thomas 'PointedEars' Lahn

unread,
Jan 15, 2010, 8:25:34 PM1/15/10
to
Scott Sauyet wrote:

By contrast, I am being *precise*, stupid.

Then again, imitation is the sincerest form of flattery ;-)


PointedEars

Jorge

unread,
Jan 16, 2010, 7:14:18 AM1/16/10
to

If your intention is to time with accuracy what comes inside the for
loop, you ought to set library.length to a (much) big(ger) number: you
want the time spent inside the loop to be as close to 100% of the
total time as possible. Iterating a million times over the whole thing
with a badly chosen (small) library.length won't give you any added
accuracy in this regard.

That's why, for example, in the Safari row, you get these completely
different results:

library.length:       10 (items)       10k (items)
pushLookupIncrement:  375.6k           537
pushNewVarIncrement:  622.6k (1.65x)   1.3k (2.42x)

For the faster the loop is, the bigger the % of error that a small
library.length introduces.

Also, looping through 10000 items ought to take ~ 1000 times as long
as looping through 10 items, and that's not what your results show:
622.6k !== 1.3k*1000. That's also due to the error I'm talking about:
the 10k items result is more accurate, and the 10 items figure is off
by 100 - (622.6k/1.3M)*100 = 52.1% (quite a big error!, the real
figure for 10 items was more than 2x that!)

And, as the loop of each tested f() takes a different amount of time
to execute, it's getting a different % of error... (That's the reason
why there's a JSLitmusMultiplier in my (modified) JSLitmus.js)
--
Jorge.

Scott Sauyet

unread,
Jan 16, 2010, 7:54:22 AM1/16/10
to
On Jan 15, 8:25 pm, Thomas 'PointedEars' Lahn <PointedE...@web.de>
wrote:

> Scott Sauyet wrote:
>> You're getting as pedantic as Thomas is, here.
>
> By contrast, I am being *precise*, stupid.

| 2. overly concerned with minute details or formalisms, esp. in
| teaching

- http://dictionary.reference.com/browse/pedantic

Or another way of putting it, being overly precise.

If it's an insult, it's a much more mild one than "stupid". :-)

-- Scott

Scott Sauyet

unread,
Jan 16, 2010, 8:07:55 AM1/16/10
to
On Jan 16, 7:14 am, Jorge <jo...@jorgechamorro.com> wrote:
> On Jan 16, 12:00 am, Scott Sauyet <scott.sau...@gmail.com> wrote:
>>    http://scott.sauyet.com/Javascript/Test/LoopTimer/3/results.phps
>
> If your intention is to time with accuracy what comes inside the for
> loop, you ought to set library.length to a (much) big(ger) number: you
> want the time spent inside the loop to be as close to 100% of the
> total time as possible. Iterating a million times over the whole thing
> with a badly chosen (small) library.length won't give you any added
> accuracy in this regard.

Well, as I said, I haven't had time to post any analysis. And I still
don't.

But that's not what I'm trying to compare. The raw numbers aren't
comparable across the rows. That wouldn't make sense; in an ideal
world, they would fall by approximately a factor of ten at each
column. The main thing is to see how the different algorithms perform
at various array lengths in the different browsers. Here Safari is
quite consistent. At all array lengths, the final two algorithms are
the fastest, followed by the third and fourth, then the fifth and
sixth, with the first and second trailing far behind.

I think there is something to be gleaned from these examples, but it
will take me a little more time to digest it.

-- Scott

Jorge

unread,
Jan 16, 2010, 11:29:00 AM1/16/10
to
On Jan 16, 2:07 pm, Scott Sauyet <scott.sau...@gmail.com> wrote:
> On Jan 16, 7:14 am, Jorge <jo...@jorgechamorro.com> wrote:
>
> > If your intention is to time with accuracy what comes inside the for
> > loop, you ought to set library.length to a (much) big(ger) number: you
> > want the time spent inside the loop to be as close to 100% of the
> > total time as possible. Iterating a million times over the whole thing
> > with a badly chosen (small) library.length won't give you any added
> > accuracy in this regard.
>
> But that's not what I'm trying to compare.

Funny, as the loop is what makes them different.

> The raw numbers aren't
> comparable across the rows. That wouldn't make sense;

I thought the sole purpose of the tests was to compare the results :-)

> in an ideal
> world, they would fall by approximately a factor of ten at each
> column.

Only if/when the time spent in the loop is closer to 100% of the total
execution time.

> The main thing is to see how the different algorithms perform
> at various array lengths in the different browsers.

> Here Safari is quite consistent. (...)

Surprise surprise: increase library.length just a little, to e.g. 2e4
items, and see what happens then.
--
Jorge.

Thomas 'PointedEars' Lahn

unread,
Jan 16, 2010, 11:38:21 AM1/16/10
to
Scott Sauyet wrote:

> Thomas 'PointedEars' Lahn wrote:
>> Scott Sauyet wrote:
>>> You're getting as pedantic as Thomas is, here.
>>
>> By contrast, I am being *precise*, stupid.
>
> | 2. overly concerned with minute details or formalisms, esp. in
> teaching
>
> - http://dictionary.reference.com/browse/pedantic
>
> Or another way of putting it, being overly precise.

And I do not think I have been *overly* precise. I just would not accept my
words being twisted by wannabes. If you think that to be pedantic, so be
it.

> If it's an insult, it's a much more mild one than "stupid". :-)

Fair enough :) It is too often *used as* an insult, so I guess I have grown
a bit allergic to it. Sorry.


PointedEars
--
Prototype.js was written by people who don't know javascript for people
who don't know javascript. People who don't know javascript are not
the best source of advice on designing systems that use javascript.
-- Richard Cornford, cljs, <f806at$ail$1$8300...@news.demon.co.uk>

Scott Sauyet

unread,
Jan 16, 2010, 3:50:22 PM1/16/10
to
On Jan 16, 11:38 am, Thomas 'PointedEars' Lahn <PointedE...@web.de>
wrote:

> Scott Sauyet wrote:
> > Thomas 'PointedEars' Lahn wrote:
> >> Scott Sauyet wrote:
> >>> You're getting as pedantic as Thomas is, here.
>
> >> By contrast, I am being *precise*, stupid.
>
> > | 2. overly concerned with minute details or formalisms, esp. in
> > teaching
>
> > - http://dictionary.reference.com/browse/pedantic

>
> > Or another way of putting it, being overly precise.
>
> And I do not think I have been *overly* precise. I just would not accept my
> words being twisted by wannabes. If you think that to be pedantic, so be
> it.

Almost always when I think you're being pedantic, I do also think
you're right. I don't think you're being pedantic when you object to
someone twisting your own words. What I find overly precise is when
you correct something that people widely recognize as accurate
*enough*, like when (perhaps from earlier in this thread) you talked
about how it's ES engines not browsers at issue. Sure it's right.
But most knowledgeable people recognize that is true and still prefer
the common usage.

>> If it's an insult, it's a much more mild one than "stupid". :-)
>
> Fair enough :) It is too often *used as* an insult, so I guess I have grown
> a bit allergic to it. Sorry.

I do mean it in a slightly insulting way. I really think the dialog
here doesn't merit that sort of correction unless it touches on the
current discussion. And my skin is plenty thick enough to handle
being called stupid. You clearly have plenty of value to offer in the
discussions here; I like it when you focus on those rather than picky
corrections.

Cheers,

-- Scott

Thomas 'PointedEars' Lahn

unread,
Jan 16, 2010, 4:00:11 PM1/16/10
to
Scott Sauyet wrote:

> [...] What I find overly precise is when you correct something that
> people widely recognize as accurate *enough*, like when (perhaps from
> earlier in this thread) you talked about how it's ES engines not
> browsers at issue. Sure it's right. But most knowledgeable people
> recognize that is true and still prefer the common usage.

I do hope you are mistaken here. Or, IOW: I cannot accept people as
knowledgeable who say so, because it is obviously false. You should
double-check your assumptions.

Scott Sauyet

unread,
Jan 18, 2010, 3:30:44 PM1/18/10
to
On Jan 15, 6:00 pm, Scott Sauyet <scott.sau...@gmail.com> wrote:
> I found a little time to run a number of tests, but I don't have time
> now to analyze them. Anyone interested in seeing the raw data from
> JSLitmus tests in five browsers running against lists with 10, 100,
> 1000, and 10000 elements can see my results, and run their own tests
> here:
>
> http://scott.sauyet.com/Javascript/Test/LoopTimer/3/

And in a slightly updated page with the exact same data:

http://scott.sauyet.com/Javascript/Test/LoopTimer/4/

The early part of this thread gives the context for these test
results, but most especially the following three messages:
http://groups.google.com/group/comp.lang.javascript/msg/dcc830996b8afcef
http://groups.google.com/group/comp.lang.javascript/msg/73c40b2b284d970a
http://groups.google.com/group/comp.lang.javascript/msg/6c56a2bac08daaa4


Which are, respectively, the initial question posted by Tuxedo, a
response by me, and an alternative posted by Thomas Lahn. Note that
generally the comparisons that I'm looking at are between different
algorithms for the same browser at the same array size, although there
are some interesting questions when looking at a single algorithm in
one browser as the array size varies. At the end, I'll briefly
discuss the differences between browsers. Note that all tests are
performed on the same Windows XP SP2 box with dual 3GHz processors and
3.25 GB RAM.

Thomas' alternative made three distinct changes to my original
algorithm, which was, in essence:

var photos = [], captions = [], len = library.length;
for (var i = 0; i < len; i++) {
    photos.push(library[i]["img"]);
    captions.push(library[i]["caption"]);
}

The first change, was to replace the calls to push with directly
setting the array values. A new version might look like this:

var photos = [], captions = [], len = library.length;
for (var i = 0; i < len; i++) {
    photos[i] = library[i]["img"];
    captions[i] = library[i]["caption"];
}

The second change was to replace the repeated array lookups with a
temporary variable:

var photos = [], captions = [], len = library.length;
for (var i = 0; i < len; i++) {
    var o = library[i];
    photos[i] = o.img;
    captions[i] = o.caption;
}

And the final one was to run the iteration backward toward zero,
rather than forward:

var photos = [], captions = [], len = library.length;
for (var i = len; i--;) {
    var o = library[i];
    photos[i] = o.img;
    captions[i] = o.caption;
}

Each of these changes is logically independent, so I tested eight
different versions of the algorithm (calling them
"pushLookupIncrement", "pushLookupDecrement", "pushNewVarIncrement",
"pushNewVarDecrement", "setLookupIncrement", "setLookupDecrement",
"setNewVarIncrement", and "setNewVarDecrement") in each browser I
tested. The browsers I used were

Chrome 3.0.195.27
Firefox 3.5.7
Internet Explorer 8
Opera 9.64
Safari 4.0.3

When I refer to a browser by name below, my results are specific to
the version tested. I tested each of them a few times for each
configuration. The results (except in Firefox) were fairly
consistent, and the numbers reported are an attempt at a
representative run.
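For concreteness, here is a self-contained version of one of the eight
variants (setNewVarDecrement), with a small sample library standing in
for the generated test data:

```javascript
// setNewVarDecrement: set-by-index, temporary variable, decrementing
// loop. The three-element library is illustrative only; the timed
// tests used generated arrays of 10 to 10000 elements.
var library = [
    { img: 'img01.jpg', caption: 'Caption 1' },
    { img: 'img02.jpg', caption: 'Caption 2' },
    { img: 'img03.jpg', caption: 'Caption 3' }
];

function setNewVarDecrement(library) {
    var photos = [], captions = [];
    for (var i = library.length; i--;) {
        var o = library[i];
        photos[i] = o.img;
        captions[i] = o.caption;
    }
    return { photos: photos, captions: captions };
}

var result = setNewVarDecrement(library);
console.log(result.photos.join(','));   // img01.jpg,img02.jpg,img03.jpg
console.log(result.captions.join(',')); // Caption 1,Caption 2,Caption 3
```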

Because the size of the array to be copied has a likely impact on the
speed of the test, I ran the tests in every browser with initial
arrays of size 10, 100, 1000, and 10000. While it might be
interesting to analyze larger sizes, that didn't seem to have any
practical implication for most browser-based scripting, so it wasn't
pursued further.

One subsequent post [1] took me to task for not making my array
larger, noting that there would be significant noise from the test
harness for small array sizes. While this is true, the real question
was to see how these differing algorithms performed at different array
sizes, so we needed to test them at small array sizes as well as
larger ones. This necessarily involves some significant testing
error.

The first thing definitely gleaned from this is that setting the array
elements is clearly faster than pushing new elements onto the array.
In practically every instance, set is faster than push. As the code
is also cleaner, this leaves little doubt that in a loop like this,
where we're constructing new values for the end of the array and have
the index handy, it's best to set them directly. The only places
where this difference is less clear are Chrome with 10000 elements
and Firefox with 1000 or 10000 elements, and there, the differences
might be attributable to noise.

The question of looking up the var via index versus a temporary
variable is much less clear. In most of these tests, the new variable
is faster by a small amount. I have not run the test with only one
array to set (that is, with, say, only "photos" and not "captions"),
but it wouldn't surprise me if that reversed this. I think in the end
there is not enough of a performance difference to matter much. The
project's coding style should probably dictate this choice.

The most interesting question was one of incrementing or decrementing
the array iterator. In Chrome, there are only minor differences for
array size 10, but at 100 and 1000, decrement is significantly
faster. But by 10000, increment is twice as fast. In Firefox, 10 is
too inconsistent to be useful; at 100 decrement is somewhat faster,
but at 1000 and 10000, increment is noticeably faster. In Internet
Explorer, for 10, decrement is a bit faster, but for 100, 1000, or
10000, increment is substantially faster. In Opera, decrement is
consistently a little faster than increment. In Safari, decrement is
noticeably faster for array sizes 10 and 100, but just barely faster
for the larger sizes.

Jorge pointed out [2] that this changes drastically if we up the ante
to 20,000 array elements. In fact in Safari with 20000 array
elements, the setLookupIncrement and setNewVarIncrement functions are,
over a number of tests, between 5 and 25 times as fast as their
*Decrement counterparts. I followed this up in the other browsers
[3], although I haven't put it on the results page yet, and there is a
clear turn-around for all tested browsers -- except Opera -- somewhere
between 10000 and 20000 elements, although in no other browser is it
as drastic as it is in Safari.

The upshot is that decrement is generally better at smaller array
sizes, but you're probably better off with increment as the array gets
larger. Where the cut-off is will depend on the ES implementation
most used for your script. In IE, it's fairly low, between 10 and
100; in Chrome, it's between 1000 and 10000; in Firefox, between 100
and 1000; and in Safari, between 10000 and 20000. And in Opera, I
haven't found any such cut-off.

This was not the point of the tests, but it is interesting to note in
the raw numbers just how different the speeds of the various
ECMAScript implementations are. The reasons are probably well known to
this group, but the raw numbers can be quite astounding. At 100
elements, using setNewVarDecrement, we get the following number of
operations per second:

Chrome: 417,142
Firefox: 9,981
IE: 8,861
Opera: 32,789
Safari: 238,953

That's a factor of 47 between speeds in Chrome versus IE!
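The factor follows directly from the table:

```javascript
// Operations per second at 100 elements, setNewVarDecrement, taken
// from the table above.
var opsPerSec = {
    chrome: 417142,
    firefox: 9981,
    ie: 8861,
    opera: 32789,
    safari: 238953
};
console.log(Math.round(opsPerSec.chrome / opsPerSec.ie)); // 47
```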

With all that said, I want to make it clear that performance is only
one consideration in choosing an algorithm. Unless there's a known
performance problem, I would not sacrifice code clarity to gain a
performance boost. But if you're having performance problems
assigning array elements inside a loop, I hope this data may be of
help.

-- Scott
____________________
[1] http://groups.google.com/group/comp.lang.javascript/msg/2b7a059a6c672e7e
[2] http://groups.google.com/group/comp.lang.javascript/msg/59e22efd4f3425df
[3] Chrome: http://tinyurl.com/ycrp3hp
Firefox: http://tinyurl.com/ybqlxd6
IE: http://tinyurl.com/yckh6vn
Opera: http://tinyurl.com/y9s3u8v
Safari: http://tinyurl.com/y9lk62j

Jorge

unread,
Jan 18, 2010, 4:54:38 PM1/18/10
to
On Jan 18, 9:30 pm, Scott Sauyet <scott.sau...@gmail.com> wrote:
> (...)

> Jorge pointed out [2] that this changes drastically if we up the ante
> to 20,000 array elements.  In fact in Safari with 20000 array
> elements, the setLookupIncrement and setNewVarIncrement functions are
> over a number of tests, between 5 and 25 times as fast as their
> *Decrement counterparts.  I followed this up in the other browsers
> [3], although I haven't put it on the results page yet, and there is a
> clear turn-around for all tested browsers -- except Opera -- somewhere
> between 10000 and 20000 elements, although in no other browser is it
> as drastic as it is in Safari.
>
> The upshot is that decrement is generally better at smaller array
> sizes, but you're probably better off with increment as the array gets
> larger.  Where the cut-off is will depend on the ES implementation
> most used for your script.  In IE, it's fairly low, between 10 and
> 100; in Chrome, it's between 1000 and 10000; in Firefox, between 100
> and 1000; and in Safari, between 10000 and 20000.  And in Opera, I
> haven't found any such cut-off.
> (...)

This thread might interest you:
http://groups.google.com/group/comp.lang.javascript/browse_thread/thread/ca8297b2de1edad1/f140443fc7dbc8a8#d45575d71a6c117a
--
Jorge.

Scott Sauyet

unread,
Jan 18, 2010, 5:21:25 PM1/18/10
to
On Jan 18, 4:54 pm, Jorge <jo...@jorgechamorro.com> wrote:
> On Jan 18, 9:30 pm, Scott Sauyet <scott.sau...@gmail.com> wrote:
>> Jorge pointed out [2] that this changes drastically if we up the ante
>> to 20,000 array elements.  [ ... ]

Thanks. I did see that one the first time through. Although it's
interesting in its own right, I don't think it's relevant to this
discussion. We are not pre-allocating the arrays, unless in one of
the decrement algorithms, an ES implementation does a pre-allocation
when faced with

var photos = [];
photos[10000] = //...

But it's interesting re-reading!

Cheers,

-- Scott

Jorge

unread,
Jan 18, 2010, 8:14:36 PM1/18/10
to
On Jan 18, 11:21 pm, Scott Sauyet <scott.sau...@gmail.com> wrote:
> On Jan 18, 4:54 pm, Jorge <jo...@jorgechamorro.com> wrote:
>
> > On Jan 18, 9:30 pm, Scott Sauyet <scott.sau...@gmail.com> wrote:
> >> Jorge pointed out [2] that this changes drastically if we up the ante
> >> to 20,000 array elements.  [ ... ]
>
> > This thread might interest you:http://groups.google.com/group/comp.lang.javascript/browse_thread/thr...

>
> Thanks.  I did see that one first time through.  Although it's
> interesting in its own right, I don't think it's relevant to this
> discussion.(...)

Safari stores array elements in fast storage "slots" up to a limit,
but no more: "Our policy for when to use a vector and when to use a
sparse map. For all array indices under MIN_SPARSE_ARRAY_INDEX, we
always use a vector. When indices greater than MIN_SPARSE_ARRAY_INDEX
are involved, we use a vector as long as it is 1/8 full. If more
sparse than that, we use a map."

MIN_SPARSE_ARRAY_INDEX happens to be 1e4.
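That limit is easy to poke at from script; a decrementing fill makes
the array look maximally sparse on its very first assignment. (This
sketch uses ES5's Object.keys, which is an assumption for browsers of
the time.)

```javascript
// Filling backward means the first assignment lands at the highest
// index, so the engine sees a very sparse array right away.
var N = 20000;
var backward = [];
backward[N - 1] = 'x';
console.log(backward.length);              // 20000
console.log(Object.keys(backward).length); // 1 -- only one element exists
```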
--
Jorge.

Lasse Reichstein Nielsen

unread,
Jan 19, 2010, 1:16:44 AM1/19/10
to
Scott Sauyet <scott....@gmail.com> writes:

> The most interesting question was one of incrementing or decrementing
> the array iterator. In Chrome, there are only minor differences for
> array size 10, but at 100 and 1000, decrement is significantly
> faster. But by 10000, increment is twice as fast. In Firefox, 10 is
> too inconsistent to be useful; at 100 decerement is somewhat faster,
> but at 1000 and 10000, increment is noticeably faster.

The reason for this is that Chrome (and some of the other browsers
too) has special support for sparse arrays (which is slower than a
non-sparse array, but saves significant amounts of memory if the array
stays sparse). By starting the assignment from the end, the array
starts out looking very sparse.

If instead you allocate the array with its full size from the start,
i.e.:
var photos = new Array(library.length);
var captions = new Array(library.length);
then the arrays will be non-sparse and won't even need to grow
while filling. That should reduce the difference between increment
and decrement to just the handling of the loop variable.


> The upshot is that decrement is generally better at smaller array
> sizes, but you're probably better off with increment as the array gets
> larger.

Try decrement with a pre-allocated array (although I'm not sure which
browsers apart from Chrome allow using new Array(number) to allocate
a non-sparse array).
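A sketch of that combination, pre-allocation plus a decrementing fill.
Whether new Array(len) really reserves dense storage is
engine-dependent, and setPreallocDecrement is an illustrative name,
not one of the variants on the test page:

```javascript
// Pre-allocate both target arrays to their final length, then fill
// them in decrementing order.
function setPreallocDecrement(library) {
    var len = library.length;
    var photos = new Array(len), captions = new Array(len);
    for (var i = len; i--;) {
        var o = library[i];
        photos[i] = o.img;
        captions[i] = o.caption;
    }
    return { photos: photos, captions: captions };
}

var copies = setPreallocDecrement([
    { img: 'img01.jpg', caption: 'Caption 1' },
    { img: 'img02.jpg', caption: 'Caption 2' }
]);
console.log(copies.photos.join(','));   // img01.jpg,img02.jpg
console.log(copies.captions.join(',')); // Caption 1,Caption 2
```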

/L
--
Lasse Reichstein Holst Nielsen
'Javascript frameworks is a disruptive technology'

Scott Sauyet

unread,
Jan 19, 2010, 6:22:37 AM1/19/10
to
On Jan 19, 1:16 am, Lasse Reichstein Nielsen <lrn.unr...@gmail.com>
wrote:

> Scott Sauyet <scott.sau...@gmail.com> writes:
>> The upshot is that decrement is generally better at smaller array
>> sizes, but you're probably better off with increment as the array gets
>> larger.
>
> Try decrement with a pre-allocated array (although I'm not sure which
> browsers apart from Chrome allows using New Array(number) to allocate
> a non-sparse array).

Thank you. That is definitely worth checking out, and I will in a day
or two when I'm not swamped with work.

-- Scott
