
Switching From YUICompressor to Closure Compiler Using ANT <apply>


Garrett Smith
Nov 14, 2009, 3:27:58 PM
I have been using an ANT task with YUI Compressor to apply minification
to all of my javascript, using an <apply>[1] task.

The result is much smaller file sizes, plus helpful info in the console
(my console is in Eclipse).

I tried Google Closure Compiler on a few files and noticed that it does
munge a little more than YUI Compressor (around 10% more). This tool
also emits some helpful warnings to the console.

To account for the difference in the way the Closure Compiler jar is
invoked, only a few small changes to my ANT <apply> task were necessary.

The relevant snippet applies compiler.jar to a fileset, which includes
all of my js files.

<target name="js.minify" depends="js.rollups">
  <apply executable="java" parallel="false" verbose="true"
      dest="${build}" taskname="js.compile">
    <fileset dir="${build}" includes="**/*.js"/>
    <arg line="-jar"/>
    <arg path="compiler.jar"/>
    <arg line="--js"/>
    <srcfile/>
    <arg line="--js_output_file"/>
    <mapper type="glob" from="*.js" to="*-min.js"/>
    <targetfile/>
  </apply>
</target>

Explanation:
"compiler.jar" relevant arg lines:
--js - the input file to compress
--js_output_file - the output file

In ANT:
<srcfile/> - the file that gets fed to the executable (compiler.jar).
This filename must be preceded by the arg line "--js".
<targetfile/> - the name of the output file.
This filename is preceded by the arg line "--js_output_file".

In a sort of pseudo command line, it would look like:
java -jar compiler.jar --js <srcfile> --js_output_file <targetfile>

To generate the correct command line arguments, the <arg> and
<srcfile/> elements appear in the following order:

<arg line="--js"/>
<srcfile/>
<arg line="--js_output_file"/>
<mapper type="glob" from="*.js" to="*-min.js"/>
<targetfile/>

[1]http://ant.apache.org/manual/CoreTasks/apply.html
[2]http://code.google.com/closure/compiler/docs/gettingstarted_app.html
--
Garrett
comp.lang.javascript FAQ: http://jibbering.com/faq/

David Mark
Nov 14, 2009, 3:44:05 PM
On Nov 14, 3:27 pm, Garrett Smith <dhtmlkitc...@gmail.com> wrote:
> I have been using an ANT task with YUI Compressor to apply minfication
> all of my javascript, using an <apply>[1] task.

Great. I use BAT files myself.

>
> The result is much smaller file sizes, plus helpful info in the console
> (my console is in Eclipse).

Mostly a waste of time these days (let the servers and agents handle
compression). Definitely a bad idea unless you test everything in
"minified" form (and that is a pain). If you are worried about
dial-up users, modems have compression built-in. A few extra KB won't
matter to broadband users. ;)

>
> I tried Google Closure Compiler on a few files and noticed that it does
> munge a little more than YUI Compressor (around 10% more). This tool
> also emits some helpful warnings to the console.

You would trust a JS tool from Google?

>
> To account for the difference in the way Closure Compiler jar is
> invoked, only a few small changes to my ANT apply task were necessary.

So you are going to switch horses just like that? Seems like a bad
idea.

kangax
Nov 14, 2009, 5:09:22 PM
Garrett Smith wrote:
> I have been using an ANT task with YUI Compressor to apply minfication
> all of my javascript, using an <apply>[1] task.
>
> The result is much smaller file sizes, plus helpful info in the console
> (my console is in Eclipse).

I would not rely on Closure Compiler just yet.

They have some nice ideas on decreasing file size; ideas that
YUICompressor doesn't implement.

Yet, some things seem to be optimized just a little too much. Look at
what happens with function expressions:

var f = function(){};

becomes:

function f(){}

With this in mind, it's easy to think of an example that results in
different behavior before and after munging:

alert(f);
var f = function(){};

becomes:

alert(f);function f(){};

I don't understand how they could make such an obvious mistake; obvious
to anyone who understands the difference between function expressions
and function declarations.
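The difference is easy to demonstrate without relying on function decompilation; a minimal sketch, runnable in any JS engine, using `typeof` to observe the hoisting behavior before and after the rewrite:

```javascript
// Before munging: `var f = function(){}` hoists only the declaration,
// so `f` is undefined when read earlier in the scope.
var before = (function () {
  var t = typeof f;   // "undefined"
  var f = function () {};
  return t;
})();

// After munging to `function f(){}`: the whole function hoists,
// so `f` is already a function at the same point.
var after = (function () {
  var t = typeof f;   // "function"
  function f() {}
  return t;
})();
```

Any code that reads `f` before the assignment will observe this difference.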

This "quirk" happens in "simple" mode; "Advanced" one does more harmful
things (which, thankfully, Google admits and warns about).

Btw, if you care about Identifiers of NFEs, it's good to know that they
change the names of those too.

Finally, it seems that JScript conditional compilation statements are
stripped as well (so, for example, your `isMaybeLeak` test would become
defunct).

You can try compressor online at:
<URL: http://closure-compiler.appspot.com/home>

[...]

--
kangax

kangax
Nov 14, 2009, 5:42:50 PM
kangax wrote:
> Garrett Smith wrote:
>> I have been using an ANT task with YUI Compressor to apply minfication
>> all of my javascript, using an <apply>[1] task.
>>
>> The result is much smaller file sizes, plus helpful info in the console
>> (my console is in Eclipse).
>
> I would not rely on Closure Compiler just yet.
>
> They have some nice ideas on decreasing file size; ideas that
> YUICompressor doesn't implement.
>
> Yet, some things seem to be optimized just a little too much. Look at
> what happens with function expressions:
>
> var f = function(){};
>
> becomes:
>
> function f(){}
>
> With this in mind, it's easy to think of example that results in a
> different behavior before and after munging:
>
> alert(f);
> var f = function(){};

It would be better to avoid function decompilation here. Let's change
that example to:

alert(typeof f);
var f = function(){};

... which still exhibits a problem, of course.

[...]

--
kangax

Garrett Smith
Nov 14, 2009, 6:48:21 PM
kangax wrote:
> Garrett Smith wrote:
>> I have been using an ANT task with YUI Compressor to apply minfication
>> all of my javascript, using an <apply>[1] task.
>>
>> The result is much smaller file sizes, plus helpful info in the console
>> (my console is in Eclipse).
>
> I would not rely on Closure Compiler just yet.
>

I would like to understand more what it does. I am not advocating it for
production. I guess you could say I'm a free QA for Google.

I have noticed that, given the input:

if (obj.prop) {
  meth(obj.prop);
}

the output is:

obj.prop&&meth(obj.prop);

I can't see a problem with that, as both cases go through [[ToBoolean]].
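A quick way to confirm the equivalence (hypothetical `obj` and `meth`, not from any real codebase; both forms apply ToBoolean to the same value and call `meth` exactly when the property is truthy):

```javascript
var calls = [];
function meth(v) { calls.push(v); }

var obj = { prop: 0 };            // falsy: neither form calls meth
if (obj.prop) { meth(obj.prop); }
obj.prop && meth(obj.prop);

obj.prop = "x";                   // truthy: each form calls meth once
if (obj.prop) { meth(obj.prop); }
obj.prop && meth(obj.prop);
// calls is now ["x", "x"]
```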

> They have some nice ideas on decreasing file size; ideas that
> YUICompressor doesn't implement.

Such as?

>
> Yet, some things seem to be optimized just a little too much. Look at
> what happens with function expressions:
>
> var f = function(){};
>
> becomes:
>
> function f(){}
>
> With this in mind, it's easy to think of example that results in a
> different behavior before and after munging:
>
> alert(f);
> var f = function(){};
>
> becomes:
>
> alert(f);function f(){};
>

Oh, no, that changes program behavior.

> I don't understand how they could make such obvious mistake; obvious to
> anyone understanding the difference between function expressions and
> function declarations.

I'm not sure there is a reasonable explanation for the decision to
change an assignment expression into a FunctionDeclaration.

Compilation tools such as this offer potential that has not yet been
realized.

>
> This "quirk" happens in "simple" mode; "Advanced" one does more harmful
> things (which, thankfully, Google admits and warns about).
>
> Btw, if you care about Identifiers of NFEs, it's good to know that they
> modify (change names) of those too.
>

I see, yes, they do, don't they.

> Finally, it seems that JScript condition compilation statements are
> stripped as well (so, for example, your `isMaybeLeak` test would become
> defunct).
>

Ah, yeah, you're right about that.

> You can try compressor online at:
> <URL: http://closure-compiler.appspot.com/home>
>

Thanks for the heads-up.


js to js compilation has untapped potential, particularly with lexical
analysis of functions.

For example, a called function can be inlined where it shares scope.

Take two functions, a and b, with a shared scope e.

function e(){
  function a(){
    b("10");
  }
  function b(x){ alert(x); }
  return a;
}

A possible inline optimization:

function e(){
  function a(){
    alert("10");
  }
  return a;
}

That optimization is possible because a and b share scope and do not
use |with| or |arguments|. The result is smaller and more efficient.
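The two forms can be checked for observational equivalence; a small sketch using a hypothetical `show` stand-in for `alert` so it runs outside a browser:

```javascript
var out = [];
function show(x) { out.push(x); }   // stand-in for alert

function original() {
  function a() { b("10"); }
  function b(x) { show(x); }
  return a;
}

function inlined() {
  function a() { show("10"); }
  return a;
}

original()();   // pushes "10"
inlined()();    // pushes "10", same observable behavior
```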

kangax
Nov 15, 2009, 1:21:53 AM
Garrett Smith wrote:
> kangax wrote:
>> Garrett Smith wrote:
>>> I have been using an ANT task with YUI Compressor to apply minfication
>>> all of my javascript, using an <apply>[1] task.
>>>
>>> The result is much smaller file sizes, plus helpful info in the console
>>> (my console is in Eclipse).
>>
>> I would not rely on Closure Compiler just yet.
>>
>
> I would like to understand more what it does. I am not advocating it for
> production. I guess you could say I'm a free QA for Google.
>
> I have notice that give input:
>
> if(obj.prop) {
> meth(obj.prop);
> }
>
> the output:
>
> obj.prop&&meth(obj.prop);
>
> I can't see a problem with that, as both cases go through [[ToBoolean]].

There would be a problem if the block contained a statement, not an
expression. Fortunately, they only seem to perform this kind of
shortening when no statements are involved:

if (foo) foo.bar();

becomes:

foo&&foo.bar();

but:

if (foo) throw 1;

becomes:

if(foo)throw 1;

>
>> They have some nice ideas on decreasing file size; ideas that
>> YUICompressor doesn't implement.
>
> Such as?

For example, removal of "dead" branches, inlining, etc.


var foo; var bar; ==> var foo, bar;

return 3 * 4; ==> return 12;

return undefined; ==> return;

if (1) a(); else b(); ==> a()

if (a) b(); else c(); ==> a ? b() : c()
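Two of those rewrites are easy to spot-check for equivalence (hypothetical `b`/`c` that record their calls):

```javascript
var log = [];
function b() { log.push("b"); }
function c() { log.push("c"); }

var a = true;
if (a) b(); else c();   // long form
a ? b() : c();          // rewritten form: same single call
a = false;
if (a) b(); else c();
a ? b() : c();
// log is now ["b", "b", "c", "c"]

// Constant folding: `return 3 * 4` and `return 12` yield the same value.
var folded = (3 * 4 === 12);   // true
```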

>
>>
>> Yet, some things seem to be optimized just a little too much. Look at
>> what happens with function expressions:
>>
>> var f = function(){};
>>
>> becomes:
>>
>> function f(){}
>>
>> With this in mind, it's easy to think of example that results in a
>> different behavior before and after munging:
>>
>> alert(f);
>> var f = function(){};
>>
>> becomes:
>>
>> alert(f);function f(){};
>>
>
> Oh, no, that changes program behavior.

You bet :)

>
>> I don't understand how they could make such obvious mistake; obvious
>> to anyone understanding the difference between function expressions
>> and function declarations.
>
> I'm not sure if there is a reasonable explanation for the decision
> of changing assignment expression to FunctionDeclaration.
>
> Compilation tools such as this offer potential that has not yet been
> realized.

Yep. I noticed that there are more optimizations possible there. One
just needs to know ECMAScript syntax and what could be changed into
something shorter without changing program behavior (or at least
changing it in an unobservable way).

if (foo) throw 0.1;

becomes:

if(foo)throw 0.1;

but could be easily shortened to:

if(foo)throw.1;

They could also transform numeric literals `0.xxx` into `.xxx` and
`1000` to `1e3` (`10000` to `1e4`, and so on), but don't. `x += 1` could
become `x++`, but currently doesn't. `/[\w]/` (and other classes) could
be changed to `/\w/`, but currently aren't.

There are probably others.
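Each of the proposed shortenings above is value-preserving, which is straightforward to verify:

```javascript
var sameFraction = (0.1 === .1);                      // true
var sameExponent = (1000 === 1e3) && (10000 === 1e4); // true

// /[\w]/ and /\w/ accept and reject the same characters:
var sameClass = (/[\w]/.test("a") === /\w/.test("a")) &&
                (/[\w]/.test("-") === /\w/.test("-"));

// As statements, `x += 1` and `x++` leave x identical
// (as expressions they differ: x++ yields the old value).
var x = 5; x += 1;
var y = 5; y++;
var sameIncrement = (x === y);                        // true
```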

Interesting that they kind of perform inlining, but only at global scope
(from what I can see):

(in advanced mode)

function f(){ return 'foo'; }
alert(f());

becomes:

alert("foo");

but:

(function(){
  function f(){ return 'foo'; }
  alert(f());
})();

becomes:

(function(){function a(){return"foo"}alert(a())})();

instead of shorter (and functionally identical):

(function(){alert("foo")})();

or even:

alert("foo");

since, as I understand, it doesn't really matter if a function is called
from within global code or from within an anonymous function, unless this
calling function is something like a reference to global `eval` and so
could declare variables in a wrong place (which would then make it an
indirect eval call, something that could throw an exception as allowed by
the specification) or if non-standard extensions are involved (e.g. `caller`
from JS 1.5).

--
kangax

Hans-Georg Michna
Nov 15, 2009, 6:38:24 AM
On Sat, 14 Nov 2009 12:27:58 -0800, Garrett Smith wrote:

>I tried Google Closure Compiler on a few files and noticed that it does
>munge a little more than YUI Compressor (around 10% more).

Garrett,

are you aware that what really matters is the size of the
gz-compressed file that is actually delivered to the end user?
You have to compare file sizes after an additional gz
compression.

This assumes that your web server, like every well-administered
one, gz-compresses its text output before sending it out to the
end user.

It turns out that the smallest file before gz-compression is not
at all necessarily the smallest file after, and vice versa. I seem to
remember that good old Packer was one that yielded highly
compressible code.

In the unlikely case that your web server does not yet compress,
turning that on has a far larger overall effect (roughly a factor
of 3 for text files) than changing from YUI Compressor to Google
Closure Compiler.

Hans-Georg

Stefan Weiss
Nov 15, 2009, 7:23:06 AM
On 15/11/09 12:38, Hans-Georg Michna wrote:
> On Sat, 14 Nov 2009 12:27:58 -0800, Garrett Smith wrote:
>
>>I tried Google Closure Compiler on a few files and noticed that it does
>>munge a little more than YUI Compressor (around 10% more).
>
> Garrett,
>
> are you aware that what really matters is the size of the
> gz-compressed file that is actually delivered to the end user?
> You have to compare file sizes after an additional gz
> compression.

If size is the metric we're optimizing for, then JS minimization + gzip
compression will produce smaller files than gzip alone (obviously).
What's more interesting: the GWT compiler has the ability to transform
and rearrange its output to be gzip-friendly:

<http://timepedia.blogspot.com/2009/08/on-reducing-size-of-compressed.html>

I couldn't find any information on whether this technique has been used
in the Closure Compiler.


cheers,
stefan

Thomas 'PointedEars' Lahn
Nov 15, 2009, 9:38:19 AM
Stefan Weiss wrote:

> If size is the metric we're optimizing for, then JS minimization + gzip
> compression will produce smaller files than gzip alone (obviously).

If you think that would be obvious, you have not understood gzip.


PointedEars
--
Prototype.js was written by people who don't know javascript for people
who don't know javascript. People who don't know javascript are not
the best source of advice on designing systems that use javascript.
-- Richard Cornford, cljs, <f806at$ail$1$8300...@news.demon.co.uk>

Gregor Kofler
Nov 15, 2009, 9:59:00 AM
Stefan Weiss meinte:

> If size is the metric we're optimizing for, then JS minimization + gzip
> compression will produce smaller files than gzip alone (obviously).

Not necessarily.

Gregor


--
http://www.gregorkofler.com

Stefan Weiss
Nov 15, 2009, 10:21:23 AM
On 15/11/09 15:38, Thomas 'PointedEars' Lahn wrote:
> Stefan Weiss wrote:
>> If size is the metric we're optimizing for, then JS minimization + gzip
>> compression will produce smaller files than gzip alone (obviously).
>
> If you think that would be obvious, you have not understood gzip.

I'm not talking about edge cases. Source files where all comments and
unnecessary white space and punctuation have been removed will result in
smaller compressed files. It's theoretically possible to construct
JavaScript files in such a way that their minified versions will lead to
worse overall compression, but that has no practical relevance.


cheers,
stefan


PS: Here's a trivial example:

$ echo -n 'if(foo)alert("bar")' > test.js
$ yuicomp test.js > test-min.js
$ gzip -c test.js > test.js.gz
test.js: -10.5%
$ gzip -c test-min.js > test-min.js.gz
test-min.js: -9.1%
$ ls -lA
total 16K
-rw-r--r-- 1 user user 22 2009-11-15 16:08 test-min.js
-rw-r--r-- 1 user user 54 2009-11-15 16:08 test-min.js.gz
-rw-r--r-- 1 user user 19 2009-11-15 16:08 test.js
-rw-r--r-- 1 user user 47 2009-11-15 16:08 test.js.gz

The minified version is larger than the original (the YUI compressor
adds curly braces and a semicolon), and the compressed versions are even
larger. Like I said: it's possible, but not relevant.

Thomas 'PointedEars' Lahn
Nov 15, 2009, 10:28:56 AM
Stefan Weiss wrote:

> Thomas 'PointedEars' Lahn wrote:
>> Stefan Weiss wrote:
>>> If size is the metric we're optimizing for, then JS minimization + gzip
>>> compression will produce smaller files than gzip alone (obviously).
>> If you think that would be obvious, you have not understood gzip.
>
> I'm not talking about edge cases. Source files where all comments and
> unnecessary white space and punctuation have been removed will result in
> smaller compressed files.

You will have to prove that.


PointedEars

Stefan Weiss
Nov 15, 2009, 10:48:56 AM

Prove what? I just gave you an example where the minimized version was
larger after compression. I also explained that I'm talking about
real-life usage, not academic experiments. This situation is
sufficiently unlikely to occur that it is practically irrelevant.

If you think it isn't, and if you're correct, you'll have no trouble
finding a more realistic counter example...


cheers,
stefan

Richard Cornford
Nov 15, 2009, 10:49:13 AM
Stefan Weiss wrote:
> On 15/11/09 15:38, Thomas 'PointedEars' Lahn wrote:
>> Stefan Weiss wrote:
>>> If size is the metric we're optimizing for, then JS minimization
>>> + gzip compression will produce smaller files than gzip alone
>>> (obviously).
>>
>> If you think that would be obvious, you have not understood gzip.
>
> I'm not talking about edge cases. Source files where all comments

Removing comments is likely to make a difference, but source files do
not necessarily contain comments (and it is reasonable to remove
comments for distribution and leave the rest of the structure intact in
order to avoid the need to significantly regression test the
post-compression source code).

> and unnecessary white space and punctuation

Removing unnecessary whitespace is not necessarily going to have that
much impact. Zip compression likes to act on repetition and patterns of
whitespace around javascript tokens can be just as repetitious as the
same tokens with the unnecessary whitespace removed.

> have been removed will result in
> smaller compressed files.

<snip>

The thing that doesn't make sense to me is that the difference between
the eventual zipped versions with and without the javascript
minification tends to be extremely small (with the zipping providing the
largest size reduction regardless of anything else), and that doesn't
quite justify the (cost of the) extra QA stage of verifying the
post-minification code. Of course, organisations like Google, who don't
do any real QA, won't see the implied cost.

Richard.

Thomas 'PointedEars' Lahn
Nov 15, 2009, 11:01:05 AM
Stefan Weiss wrote:

> On 15/11/09 16:28, Thomas 'PointedEars' Lahn wrote:
>> Stefan Weiss wrote:
>>> Thomas 'PointedEars' Lahn wrote:
>>>> Stefan Weiss wrote:
>>>>> If size is the metric we're optimizing for, then JS minimization +
>>>>> gzip compression will produce smaller files than gzip alone
>>>>> (obviously).
>>>> If you think that would be obvious, you have not understood gzip.
>>>
>>> I'm not talking about edge cases. Source files where all comments and
>>> unnecessary white space and punctuation have been removed will result in
>>> smaller compressed files.
>>
>> You will have to prove that.
>
> Prove what?

Your argument.

> I just gave you an example where the minimized version was larger after
> compression.

<http://en.wikipedia.org/wiki/Proof_by_example>

> also explained that I'm talking about real-life usage, not academic
> experiments.

And that is probably <http://en.wikipedia.org/wiki/Appeal_to_the_majority>.


PointedEars
--
var bugRiddenCrashPronePieceOfJunk = (
navigator.userAgent.indexOf('MSIE 5') != -1
&& navigator.userAgent.indexOf('Mac') != -1
) // Plone, register_function.js:16

Stefan Weiss
Nov 15, 2009, 11:09:41 AM
On 15/11/09 16:49, Richard Cornford wrote:

> Stefan Weiss wrote:
>> I'm not talking about edge cases. Source files where all comments
[snip]

>> and unnecessary white space and punctuation
>
> Removing unnecessary whitespace is not necessarily going to have that
> much impact.

Maybe not much, but "a+b" still compresses better than "a + b". You'll
typically get these cases several times per line. It adds up.

> The thing that doesn't make sense to me is that the difference between
> the eventual zipped versions with and without the javascript
> minification tends to be extremely small (with the zipping providing the
> largest size reduction regardless of anything else), and that doesn't
> quite justify the (cost of the) extra QA stage of verifying the
> post-minification code.

I don't agree that the size difference is "extremely small". I just
confirmed this (again) on four relatively large files (concatenated
scripts which are actually deployed like this). Here's what I get:

   orig      min     gzip   min+gzip
--------+--------+--------+----------
   230K      72K      58K       27K
   378K     125K      84K       42K
   441K     152K      99K       50K
   149K      55K      27K       17K

Whether this size reduction is worth the extra effort is not something
that can be decided in general, without any knowledge about the project.


cheers,
stefan

Thomas 'PointedEars' Lahn
Nov 15, 2009, 11:17:08 AM
I see now that I forgot something.

Stefan Weiss wrote:

> This situation is sufficiently unlikely to occur that it is practically
> irrelevant.

I am not sure what kind of fallacy this is, but it is one.



> If you think it isn't, and if you're correct, you'll have no trouble
> finding a more realistic counter example...

This one is obvious again:
<http://en.wikipedia.org/wiki/Shifting_the_burden_of_proof>


PointedEars
--
realism: HTML 4.01 Strict
evangelism: XHTML 1.0 Strict
madness: XHTML 1.1 as application/xhtml+xml
-- Bjoern Hoehrmann

Stefan Weiss
Nov 15, 2009, 11:24:34 AM
On 15/11/09 17:17, Thomas 'PointedEars' Lahn wrote:
> I see now that I forgot something.

Yes, your argument. If all that's coming from you is lists of real or
imagined logical fallacies, I'm not interested in continuing this
conversation. Can we get back on topic?


cheers,
stefan

Lasse Reichstein Nielsen
Nov 15, 2009, 11:28:39 AM
Thomas 'PointedEars' Lahn <Point...@web.de> writes:

> Stefan Weiss wrote:
>
>> I'm not talking about edge cases. Source files where all comments and
>> unnecessary white space and punctuation have been removed will result in
>> smaller compressed files.
>
> You will have to prove that.

Removing just comments will definitely give a smaller result for any
reasonable and general compression strategy. Comments (non-trivial
ones, at least) contain information, and unless you have a
pathological example, it's information that's unlikely to also occur
in the remaining code. It will cause extra bits in the resulting
compressed data.

Whether removing whitespace and punctuation makes a difference isn't as
obvious, but it's very unlikely to make the result larger (as in: If you
can find a case where it does, please show us).


More generally: consistently and structurally removing characters from
a file is unlikely to make it compress worse. What you are removing
will not increase the total information entrophy of the data. It is
important that the removal is consistent (otherwise the pattern in
where something is removed will itself carry information).

/L
--
Lasse Reichstein Holst Nielsen
'Javascript frameworks is a disruptive technology'

Stefan Weiss
Nov 15, 2009, 11:29:00 AM
[I originally sent this one p.m. by mistake, my apologies]

On 15/11/09 17:01, Thomas 'PointedEars' Lahn wrote:
> Stefan Weiss wrote:
>> On 15/11/09 16:28, Thomas 'PointedEars' Lahn wrote:
>>> Stefan Weiss wrote:
>>>> I'm not talking about edge cases. Source files where all comments and
>>>> unnecessary white space and punctuation have been removed will result in
>>>> smaller compressed files.
>>>
>>> You will have to prove that.
>>
>> Prove what?
>
> Your argument.
>
>> I just gave you an example where the minimized version was larger after
>> compression.
>
> <http://en.wikipedia.org/wiki/Proof_by_example>

Does not apply. Please read what I actually wrote: I said *larger* after
compression. That's a counter example against what I originally wrote,
and one "black swan" is all it takes.

>> also explained that I'm talking about real-life usage, not academic
>> experiments.
>
> And that is probably <http://en.wikipedia.org/wiki/Appeal_to_the_majority>.

Absolutely not. Did you actually read that article or just its title?


cheers,
stefan

Lasse Reichstein Nielsen
Nov 15, 2009, 11:33:00 AM
Stefan Weiss <krewe...@gmail.com> writes:

> On 15/11/09 16:49, Richard Cornford wrote:

>> Removing unnecessary whitespace is not necessarily going to have that
>> much impact.
>
> Maybe not much, but "a+b" still compresses better than "a + b". You'll
> typically get these cases several times per line. It adds up.

The first time it happens, yes. But if your '+' is always flanked by
spaces, the " + " sequence will quickly compress as well as "+".
I.e., the impact is probably not significant.
But that's purely speculation, of course.

...


> I don't agree that the size difference is "extremely small". I just
> confirmed this (again) on four relatively large files (concatenated
> scripts which are actually deployed like this). Here's what I get:
>
>    orig      min     gzip   min+gzip
> --------+--------+--------+----------
>    230K      72K      58K       27K
>    378K     125K      84K       42K
>    441K     152K      99K       50K
>    149K      55K      27K       17K

This is definitely a significant improvement. Have you tried just
removing the comments and seeing how well it compresses then?

Richard Cornford
Nov 15, 2009, 11:36:14 AM
Stefan Weiss wrote:
> On 15/11/09 16:49, Richard Cornford wrote:
>> Stefan Weiss wrote:
>>> I'm not talking about edge cases. Source files where all comments
> [snip]
>>> and unnecessary white space and punctuation
>>
>> Removing unnecessary whitespace is not necessarily going to have
>> that much impact.
>
> Maybe not much, but "a+b" still compresses better than "a + b".
> You'll typically get these cases several times per line. It
> adds up.

But that is the point, it does not necessarily add up. You will probably
need two extra bytes for "a + b" over "a+b", but you probably still
only need two extra bytes for "a + b;a + b;a + b" over "a+b;a+b;a+b".

>> The thing that doesn't make sense to me is that the difference
>> between the eventual zipped versions with and without the
>> javascript minification tends to be extremely small (with
>> the zipping providing the largest size reduction regardless of
>> anything else), and that doesn't quite justify the (cost of
>> the) extra QA stage of verifying the post-minification code.
>
> I don't agree that the size difference is "extremely small". I
> just confirmed this (again) on four relatively large files
> (concatenated scripts which are actually deployed like this).
> Here's what I get:
>
>    orig      min     gzip   min+gzip
> --------+--------+--------+----------
>    230K      72K      58K       27K
>    378K     125K      84K       42K
>    441K     152K      99K       50K
>    149K      55K      27K       17K

But did these files contain comments?

> Whether this size reduction is worth the extra effort is not
> something that can be decided in general, without any knowledge
> about the project.

Where the extra effort is a full round of regression testing it is
likely to be significant in most projects.

Richard.

Thomas 'PointedEars' Lahn
Nov 15, 2009, 12:24:17 PM
Lasse Reichstein Nielsen wrote:

> Thomas 'PointedEars' Lahn <Point...@web.de> writes:
>> Stefan Weiss wrote:
>>> I'm not talking about edge cases. Source files where all comments and
>>> unnecessary white space and punctuation have been removed will result in
>>> smaller compressed files.
>> You will have to prove that.
>
> Removing just comments will definitly give a smaller result for any
> reasonable and general compression strategy. Comments (non-trivial
> ones, at least) contain information, and unless you have a
> pathological example, it's information that's unlikely to also occur
> in the remaining code. It will cause extra bits in the resulting
> compressed data.

On the contrary.



> Whether removing whitespace and punctuation makes a difference isn't as
> obvious, but it's very unlikely to make the result larger (as in: If you
> can find a case where it does, please show us).

Et tu, Brute? That is not the way it works.

> More generally: consistently and structurally removing characters from
> a file is unlikely to make it compress worse.

That this statement is wrong for gzip follows from the definition of the
DEFLATE (LZ77 + Huffman coding) algorithm that gzip uses. LZ77 replaces
duplicate series of bytes within the sliding window with backreferences of
max. 8 bytes (CMIIW), and the Huffman coding for a symbol is shorter the
more likely it is to occur.

> What you are removing will not increase the total information entrophy of
> the data.

The what?


PointedEars
--
Anyone who slaps a 'this page is best viewed with Browser X' label on
a Web page appears to be yearning for the bad old days, before the Web,
when you had very little chance of reading a document written on another
computer, another word processor, or another network. -- Tim Berners-Lee

Lasse Reichstein Nielsen
Nov 15, 2009, 12:54:04 PM
Thomas 'PointedEars' Lahn <Point...@web.de> writes:

> Lasse Reichstein Nielsen wrote:

>> More generally: consistently and structurally removing characters from
>> a file is unlikely to make it compress worse.
>
> That this statement is wrong for gzip follows from the definition of the
> DEFLATE (LZ77 + Huffman coding) algorithm that gzip uses. LZ77 replaces
> duplicate series of bytes within the sliding window with backreferences of
> max. 8 bytes (CMIIW), and the Huffman coding for a symbol is shorter the
> more likely it is to occur.

Exactly.

Removing whitespace in a structured way will not make such encoding
worse. Converting " + " to "+" *consistently* will give the same number
of back-references, only to a shorter string. However, removing the
whitespace will make the sliding window more efficient - it can span a
larger percentage of the source code, giving more chances of having
a repeated occurrence still in the window.

Likewise, removing comments will possibly reduce the number of
back-references, if the remaining source code contains words or
phrases that also occur in the comments. However, if the comments come
after the source code, we are completely removing the back-reference.
If the source comes after the comments, the source becomes the first
occurrence of the text, but it shouldn't take up more space than the
first occurrence originally in the comments would.
And again, removing the comments makes room for more code in the
sliding window.

All in all, removing content in a consistent and structured way will
generally improve compression, especially with a sliding-window
based compression method.

>> What you are removing will not increase the total information entrophy of
>> the data.
>
> The what?

http://en.wikipedia.org/wiki/Information_entropy

Thomas 'PointedEars' Lahn
Nov 15, 2009, 1:10:28 PM
Lasse Reichstein Nielsen wrote:

> Thomas 'PointedEars' Lahn <Point...@web.de> writes:
>> Lasse Reichstein Nielsen wrote:
>>> More generally: consistently and structurally removing characters from
>>> a file is unlikely to make it compress worse.
>>
>> That this statement is wrong for gzip follows from the definition of the
>> DEFLATE (LZ77 + Huffman coding) algorithm that gzip uses. LZ77 replaces
>> duplicate series of bytes within the sliding window with backreferences
>> of max. 8 bytes (CMIIW), and the Huffman coding for a symbol is shorter
>> the more likely it is to occur.
>
> Exactly.
>
> Removing whitespace in a structured way will not make such encoding
> worse.

But that applies only if whitespace can be removed without the need for
it to be replaced by something else to keep semantical equivalence.

> Converting " + " to "+" *consistently* will give the same number
> of back-references, only to a shorter string. However, removing the
> whitespace will make the sliding window more efficient - it can span a
> larger percentage of the source code, giving more chances of having
> a repeated occurrence still in the window.

I hadn't thought of that, though. But we are not talking about white-space
around operators, are we?



> All in all, removing content in a consistent and structured way will
> generally improve compression, especially using a sliding-window
> based compression method.

But we are not talking about arbitrary input either.



>>> What you are removing will not increase the total information entrophy
>>> of the data.
>> The what?
>
> http://en.wikipedia.org/wiki/Information_entropy

That is not what you wrote, though. Hence the question. Consider your
statement being double-checked, then.

Lasse Reichstein Nielsen

Nov 15, 2009, 1:40:25 PM
Thomas 'PointedEars' Lahn <Point...@web.de> writes:

> Lasse Reichstein Nielsen wrote:
>
>> Thomas 'PointedEars' Lahn <Point...@web.de> writes:

>> Removing whitespace in a structured way will not make such encoding
>> worse.
>
> But that applies only if whitespace can be removed without the need for
> it to be replaced by something else to keep semantical equivalence.

True.

>> Converting " + " to "+" *consistently* will give the same number
>> of back-references, only to a shorter string. However, removing the
>> whitespace will make the sliding window more efficient - it can span a
>> larger percentage of the source code, giving more chances of having
>> a repeated occurrence still in the window.
>
> I hadn't thought of that, though. But we are not talking about white-space
> around operators, are we?

I'm not sure what the "JS compilers" this thread started out with do,
exactly. I was thinking mainly of a pure minifier.

A minifier should remove any unnecessary whitespace. That typically
means indentation and unnecessary token separations. Newlines can be
icky because of semicolon insertion, but most of the time they can
be removed as well (and when not, they can be consistently converted
to a single semicolon). Some whitespace between tokens cannot be
removed (e.g. "a + +b;"), but the large majority can.
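A quick illustration of that last case (a sketch; none of this comes from the compressors under discussion):

```javascript
var a = 5, b = "3";

// With the space: addition followed by unary plus -- 5 + Number("3").
var sum = a + +b;
console.log(sum); // 8

// Without it, "a++b" tokenizes as "a++ b", which is a syntax error:
var error = null;
try {
  new Function("var a = 1, b = 2; return a++b;");
} catch (e) {
  error = e;
}
console.log(error && error.name); // "SyntaxError"
```

So a minifier must keep at least one whitespace character (or parenthesize) between `+` and a following unary `+`.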

>> All in all, removing content in a consistent and structured way will
>> generally improve compression, especially using a sliding-window
>> based compression method.
>
> But we are not talking about arbitrary input either.

That's why it works. Javascript is already very structured (as a
formal language, that's to be expected). Removing comments and
reducing whitespace doesn't change the overall structure of the
source, nor will renaming variables (if the same renaming is used for
the same variable name everywhere it's used). Because the program is
still the same, we will still have the same tokens and token
sequences, giving the same opportunities for compression by
back-reference.

The important point is that the removals won't introduce information
into the program. If we only removed some of the whitespaces (e.g.,
the ones with indices that correspond to a one in the binary expansion
of pi), then we would be introducing information in the process. By
being completely consistent, we introduce nothing that wasn't given by
the source already (and if the original author wasn't consistent in
his whitespace placement, we might even remove some unnecessary
information).


>>>> What you are removing will not increase the total information entrophy
>>>> of the data.
>>> The what?
>>
>> http://en.wikipedia.org/wiki/Information_entropy
>
> That is not what you wrote, though. Hence the question. Consider your
> statement being double-checked, then.

ACK. That was my mistake.

Stefan Weiss

Nov 15, 2009, 1:50:20 PM
On 15/11/09 17:33, Lasse Reichstein Nielsen wrote:
> Stefan Weiss <krewe...@gmail.com> writes:
>> On 15/11/09 16:49, Richard Cornford wrote:
>>> Removing unnecessary whitespace is not necessarily going to have that
>>> much impact.
>>
>> Maybe not much, but "a+b" still compresses better than "a + b". You'll
>> typically get these cases several times per line. It adds up.
>
> The first time it happens, yes. But if your '+' is always flanked by
> spaces, the " + " sequence will quickly compress as well as "+".
> I.e., the impact is probably not significant.

It happens once for each operator, or whatever is usually surrounded by
optional white space. Repeated occurrences after that are very cheap, but
still not free. It's not going to amount to a lot, I guess.

>> orig min gzip min+gzip
>> --------+-------+-------+----------
>> 230K 72K 58K 27K
>> 378K 125K 84K 42K
>> 441K 152K 99K 50K
>> 149K 55K 27K 17K
>
> This is definitely a significant improvement. Have you tried just
> removing the comments and see how well it compresses then?

No, I haven't. That result would be interesting, but removing all
comments while leaving the rest intact is not trivial (there was a
thread about a similar problem last week). It's a lot easier for
languages like C or Java, where there are no regex literals. If I'm not
mistaken, I'd need a full parser, preferably one which supports
conditional comments. YUI Compressor's parser does that, and I've worked
with it before; maybe I can throw something together next week.
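The regex-literal pitfall is easy to reproduce; a naive line-comment stripper (the sketch below, nothing to do with YUI's parser) mangles perfectly valid code:

```javascript
// A valid statement whose regex literal contains adjacent slashes.
var src = "var parts = path.split(/\\//); // split on slash";

// Naive approach: delete everything from the first "//" to end of line.
var naive = src.replace(/\/\/.*$/, "");

// The "//" inside the regex literal is hit first, truncating the
// statement mid-literal -- the result is no longer valid JS.
console.log(naive);
```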


cheers,
stefan

Thomas 'PointedEars' Lahn

Nov 15, 2009, 2:21:45 PM
Stefan Weiss wrote:

> On 15/11/09 17:33, Lasse Reichstein Nielsen wrote:
>> Stefan Weiss <krewe...@gmail.com> writes:
>>> On 15/11/09 16:49, Richard Cornford wrote:
>>>> Removing unnecessary whitespace is not necessarily going to have that
>>>> much impact.
>>>
>>> Maybe not much, but "a+b" still compresses better than "a + b". You'll
>>> typically get these cases several times per line. It adds up.
>>
>> The first time it happens, yes. But if your '+' is always flanked by
>> spaces, the " + " sequence will quickly compress as well as "+".
>> I.e., the impact is probably not significant.
>
> It happens once for each operator, or whatever is usually surrounded by
> optional white space.

AIUI, with gzip (and DEFLATE's LZ77) that happens once only if the file
size does not exceed the size of the sliding window (32K or less, at the
implementation's discretion) + 1 Byte. See also RFC 1951.

Hans-Georg Michna

Nov 15, 2009, 7:45:27 PM
On Sun, 15 Nov 2009 13:23:06 +0100, Stefan Weiss wrote:

>If size is the metric we're optimizing for, then JS minimization + gzip
>compression will produce smaller files than gzip alone (obviously).

In the practical cases of JavaScript files I have seen, like the
minified jQuery, it did that indeed quite clearly.

>What's more interesting: the GWT compiler has the ability to transform
>and rearrange its output to be gzip-friendly:
>
><http://timepedia.blogspot.com/2009/08/on-reducing-size-of-compressed.html>
>
>I couldn't find any information if this technique has been used in the
>Closure compiler.

Interesting. But those are already fine details. The important
point is to compact JavaScript files and on top of that have
gzip compression activated in the web server for text files.

If you have both, you're good. Trying to raise compression even
higher is laudable, but of secondary importance.
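For the record, activating it is usually only a few lines of server configuration; for example with nginx (directive names from its gzip module; values are illustrative, tune to taste):

```nginx
gzip            on;
gzip_types      text/css application/javascript application/x-javascript;
gzip_min_length 256;   # skip tiny responses where headers dominate
```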

Hans-Georg

Dr J R Stockton

Nov 16, 2009, 12:03:16 PM
In comp.lang.javascript message
<t5pvf5d98m167osh3b3n9hb5rjihcmqu8m@4ax.com>, Sun, 15 Nov 2009 12:38:24,
Hans-Georg Michna <hans-georgNoEm...@michna.com> posted:

>are you aware that what really matters is the size of the
>gz-compressed file that is actually delivered to the end user?
>You have to compare file sizes after an additional gz
>compression.

Not necessarily. Those who are given a limited amount of Web space by
their ISPs will be interested in keeping the size of the files, as
stored on the server, down. They may also be interested in keeping
bandwidth down, if that is also limited. In fact, I use about the same
percentage of each limit at present.

--
(c) John Stockton, nr London UK. ?@merlyn.demon.co.uk BP7, Delphi 3 & 2006.
<URL:http://www.merlyn.demon.co.uk/> TP/BP/Delphi/&c., FAQqy topics & links;
<URL:http://www.bancoems.com/CompLangPascalDelphiMisc-MiniFAQ.htm> clpdmFAQ;
NOT <URL:http://support.codegear.com/newsgroups/>: news:borland.* Guidelines

Garrett Smith

Nov 19, 2009, 2:01:01 AM
kangax wrote:
> Garrett Smith wrote:
>> kangax wrote:
>>> Garrett Smith wrote:
>>>> I have been using an ANT task with YUI Compressor to apply minification
>>>> all of my javascript, using an <apply>[1] task.
>>>>
>>>> The result is much smaller file sizes, plus helpful info in the console
>>>> (my console is in Eclipse).
>>>
>>> I would not rely on Closure Compiler just yet.
>>>
[snip]

I am probably going to revert to YUI Compressor.

I haven't found a good way around the conditional comment removal and
they do not intend to support preserving conditional comments.

http://code.google.com/p/closure-compiler/issues/detail?id=47&can=1

WONTFIX.

>
>>
>>> They have some nice ideas on decreasing file size; ideas that
>>> YUICompressor doesn't implement.
>>
>> Such as?
>
> For example, removal of "dead" branches, inlining, etc.

[snip]

I haven't seen the dead branches removal.

Wow yeah, I could really use that, too. :-D

Odd that here they change throw.1 to throw 0.1;

Interesting.

[...]

> (in advanced mode)
>

I don't really know about the modes.

> function f(){ return 'foo'; }
> alert(f());
>
> becomes:
>
> alert("foo");
>
> but:
>
> (function(){
> function f(){ return 'foo'; }
> alert(f());
> })();
>
> becomes:
>
> (function(){function a(){return"foo"}alert(a())})();
>
> instead of shorter (and functionally identical):
>
> (function(){alert("foo")})();
>
> or even:
>
> alert("foo");
>

Closure optimizations require a thorough understanding of scope (function
and eval), else the result will be bugs and missed optimizations:

function e(){
var unusedVar;
function a(){
b(12);
}
function b(s){
alert(s);
}
function c(){
throw.2;
}
}

e() does absolutely nothing. It can be optimized to:-

function e(){}

- and the result would have identical program behavior, would be smaller,
and would be more efficient to interpret.

yet with
closure compiler:-

function e(){function c(){a(12)}function a(b){alert(b)}function
d(){throw 0.2;}var f};

The promise of dead code removal falls short.

Warnings that the code is unused would be more useful.

e.g. "function x is declared but is never used [file.js, line 0]".

> since, as I understand, it doesn't really matter if function is called
> from within global code or from within anonymous function, unless this
> calling function is something like a reference to global `eval` and so
> could declare variables in a wrong place (which would then make it an
> indirect eval call � something that could throw exception as allowed by
> specification) or if non-standard extensions are involved (e.g. `caller`
> from JS 1.5).

An indirect eval can throw an EvalError for ES3. I was aware of Opera
throwing for that during migration to an ES4 draft. That feature was
abandoned along with ES4.

If a global property (function f) is moved to a local property,
and invoked inline, any other scripts referencing the -f- identifier
fail.

Inlined function calls for local scope would be safer, as the
interpreter (and the person invoking it) would not have to know about
any other global references to - f-.

The problem with eval can be demonstrated in an example:

function e(){
function a(){
b(12);
}
function b(s){
alert(s);
}
function c(){
eval("b(23)");
}
}

If |eval| is used directly, identifiers cannot be removed from e.

If |eval| is used indirectly, problems would also occur, but that would
be expected.
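A sketch of why the direct case is fatal for renaming: the string handed to |eval| resolves its identifiers in the local scope only when it runs, so a minifier that renames |b| without rewriting the string breaks the call (illustrative code, not from any minifier):

```javascript
function original() {
  function b(s) { return s * 2; }
  return eval("b(21)"); // "b" is looked up when the string executes
}

function renamed() {
  function a(s) { return s * 2; } // minifier renamed b -> a ...
  try {
    return eval("b(21)");         // ... but not inside the string
  } catch (e) {
    return e.name;                // "ReferenceError"
  }
}

console.log(original()); // 42
console.log(renamed());  // "ReferenceError"
```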

Regardless, I would rather be warned of possible dead code than have it
removed automatically. That way I can make the discrimination of what
needs to be removed (and looking over the code once again is a good
thing; not a waste of time at all).
--
Garrett
comp.lang.javascript FAQ: http://jibbering.com/faq/

Thomas 'PointedEars' Lahn

Nov 20, 2009, 3:51:46 PM
Garrett Smith wrote:

> Closure optimizations require a thorough understanding of scope (function
> and eval), else the result will be bugs and missed optimizations:
>
> function e(){
> var unusedVar;
> function a(){
> b(12);
> }
> function b(s){
> alert(s);
> }
> function c(){
> throw.2;
> }
> }
>
> e() does absolutely nothing. It can be optimized to:-
>
> function e(){}
>
> - and the result would have identical program behavior, would be smaller,
> and would be more efficient to interpret.
>
> yet with
> closure compiler:-
>
> function e(){function c(){a(12)}function a(b){alert(b)}function
> d(){throw 0.2;}var f};
>
> The promise of dead code removal falls short.
>
> Warnings that the code is unused would be more useful.
>
> e.g. "function x is declared but is never used [file.js, line 0]".

IMHO, doing that is the task of the IDE (e.g. Eclipse JSDT, which does it),
not of the source code compressor.

> [...]


> Regardless, I would rather be warned of possible dead code than have it
> removed automatically. That way I can make the discrimination of what
> needs to be removed (and looking over the code once again is a good
> thing; not a waste of time at all).

See above.

kangax

Nov 21, 2009, 12:03:45 AM
Garrett Smith wrote:
> kangax wrote:
>> Garrett Smith wrote:
>>> kangax wrote:
>>>> Garrett Smith wrote:
>>>>> I have been using an ANT task with YUI Compressor to apply minification
>>>>> all of my javascript, using an <apply>[1] task.
>>>>>
>>>>> The result is much smaller file sizes, plus helpful info in the
>>>>> console
>>>>> (my console is in Eclipse).
>>>>
>>>> I would not rely on Closure Compiler just yet.
>>>>
> [snip]
>
> I am probably going to be revert to YUI Compressor.

Yep. Closure Compiler definitely has potential and already results in a
better (size-wise) minification, but it just seems to be very draft at
the moment. I filed few issues/suggestions there; hopefully, they'll
take care of them soon.

>
> I haven't found a good way around the conditional comment removal and
> they do not intend to support preserving conditional comments.
>
> http://code.google.com/p/closure-compiler/issues/detail?id=47&can=1
>
> WONTFIX.
>
>>
>>>
>>>> They have some nice ideas on decreasing file size; ideas that
>>>> YUICompressor doesn't implement.
>>>
>>> Such as?
>>
>> For example, removal of "dead" branches, inlining, etc.
>
> [snip]
>
> I haven't seen the dead branches removal.

if (1+1) alert(1); else alert(2);

currently translates to:

alert(1);

[...]

>> if (foo) throw 0.1;
>>
>> becomes:
>>
>> if(foo)throw 0.1;
>>
>> but could be easily shortened to:
>>
>> if(foo)throw.1;
>>
>
> Wow yeah, I could really use that, too. :-D
>
> Odd that here they change throw.1 to throw 0.1;
>
> Interesting.

They seem to normalize number literals this way. Not sure why.

>
> [...]
>
>> (in advanced mode)
>>
>
> I don't really know about the modes.

The page I gave earlier allows you to toggle between the simple and advanced modes.

or to nothing at all, since `e` is not referenced anywhere :) (and if
that's all the code there is, obviously)

>
> - and the result would have identical program behavior, would be smaller,
> and would be more efficient to interpret.
>
> yet with
> closure compiler:-
>
> function e(){function c(){a(12)}function a(b){alert(b)}function
> d(){throw 0.2;}var f};
>
> The promise of dead code removal falls short.

There are many more optimizations possible. They are only scratching the
surface here. For example, function inlining that they perform looks
very simplistic:

function foo(x){ return x; }
alert(foo(1));

translates to:

alert(1);

but once it's inside another function:

(function(){
function foo(x){ return x; }
alert(foo(1));
})();

it translates to:

(function(){function a(b){return b}alert(a(1))})();

instead of:

(function(){alert(1)})();

or even:

alert(1);

(removal of scopes should probably be done carefully; as I said before,
if `alert`, for example, references `eval`, there might be different
results)

>
> Warnings that the code is unused would be more useful.
>
> e.g. "function x is declared but is never used [file.js, line 0]".
>
>> since, as I understand, it doesn't really matter if function is called
>> from within global code or from within anonymous function, unless this
>> calling function is something like a reference to global `eval` and so
>> could declare variables in a wrong place (which would then make it an
>> indirect eval call, something that could throw an exception as allowed
>> by specification) or if non-standard extensions are involved (e.g.
>> `caller` from JS 1.5).
>
> An indirect eval can throw an EvalError for ES3. I was aware of Opera
> throwing for that during migration to an ES4 draft. That feature was
> abandoned along with ES4.
>
> If a global property (function f) is moved to a local property,
> and invoked inline, any other scripts referencing -f- identifier
> fail.

That's why you need to know what references function you're inlining.

>
> Inlined function calls for local scope would be safer, as the
> interpreter (and the person invoking it) would not have to know about
> any other global references to - f-.

It should be safe to perform inlining in global scope too, as long as
minifier is aware of all the code. It gets tricky if parts of that code
are in html, though :)

>
> The problem with eval can be demonstrated in an example:
>
> function e(){
> function a(){
> b(12);
> }
> function b(s){
> alert(s);
> }
> function c(){
> eval("b(23)");
> }
> }
>
> If |eval| is used directly, identifiers cannot be removed from e.

Yep. `eval`/`with` would kill a whole lot of optimizations.

>
> If |eval| is used indirectly, problems would also occur, but that would
> be expected.
>
> Regardless, I would rather be warned of possible dead code than have it
> removed automatically. That way I can make the discrimination of what
> needs to be removed (and looking over the code once again is a good
> thing; not a waste of time at all).

Agreed.

--
kangax

Garrett Smith

Nov 21, 2009, 3:18:51 AM
kangax wrote:
> Garrett Smith wrote:
>> kangax wrote:
>>> Garrett Smith wrote:
>>>> kangax wrote:
>>>>> Garrett Smith wrote:
>>>>>> I have been using an ANT task with YUI Compressor to apply
>>>>>> minification
>>>>>> all of my javascript, using an <apply>[1] task.
>>>>>>
>>>>>> The result is much smaller file sizes, plus helpful info in the
>>>>>> console
>>>>>> (my console is in Eclipse).
>>>>>
>>>>> I would not rely on Closure Compiler just yet.
>>>>>
>> [snip]
>>

[snips]

>>
>> function e(){
>> var unusedVar;
>> function a(){
>> b(12);
>> }
>> function b(s){
>> alert(s);
>> }
>> function c(){
>> throw.2;
>> }
>> }
>>
>> e() does absolutely nothing. It can be optimized to:-
>>
>> function e(){}
>
> or to nothing at all, since `e` is not referenced anywhere :) (and if
> that's all the code there is, obviously)
>

I think there's a problem there:

this["e"]();

If |e| is removed altogether, a TypeError would result.

Any other less obvious way of getting the global object makes the
proposed optimization not possible.

>>
>> - and the result would have identical program behavior, would be smaller,
>> and would be more efficient to interpret.
>>
>> yet with
>> closure compiler:-
>>
>> function e(){function c(){a(12)}function a(b){alert(b)}function
>> d(){throw 0.2;}var f};
>>
>> The promise of dead code removal falls short.
>
> There are many more optimizations possible. They are only scratching the
> surface here. For example, function inlining that they perform looks
> very simplistic:
>

[snip example]

Compiler doesn't build a Tree of [[Scope]]. That is where the best
optimizations could be realized, but they miss that.

[...]

>
>>
>> Inlined function calls for local scope would be safer, as the
>> interpreter (and the person invoking it) would not have to know about
>> any other global references to - f-.
>
> It should be safe to perform inlining in global scope too, as long as
> minifier is aware of all the code. It gets tricky if parts of that code
> are in html, though :)

Removing global identifiers would be a problem for lazy-loaded scripts
that may want to use that identifier, and for frames that use it
via square-bracket notation.

It is unsafe to remove global identifiers altogether.
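A minimal sketch of the hazard (using |globalThis| here as a stand-in for |window|, so it runs outside a browser):

```javascript
// Script A defines a global:
globalThis.f = function () { return "ready"; };

// A lazily loaded script, or code in a frame, reaches it by name --
// square-bracket access that static analysis of this file cannot see:
var before = globalThis["f"]();
console.log(before); // "ready"

// If a minifier concluded `f` was unused and dropped the declaration ...
delete globalThis.f;

// ... the late caller now fails at run time:
var failure = null;
try {
  globalThis["f"]();
} catch (e) {
  failure = e;
}
console.log(failure instanceof TypeError); // true
```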

kangax

Nov 23, 2009, 2:23:35 PM
Garrett Smith wrote:
> kangax wrote:
>> Garrett Smith wrote:
[...]

>>>
>>> function e(){
>>> var unusedVar;
>>> function a(){
>>> b(12);
>>> }
>>> function b(s){
>>> alert(s);
>>> }
>>> function c(){
>>> throw.2;
>>> }
>>> }
>>>
>>> e() does absolutely nothing. It can be optimized to:-
>>>
>>> function e(){}
>>
>> or to nothing at all, since `e` is not referenced anywhere :) (and if
>> that's all the code there is, obviously)
>>
>
> I think there's a problem there:
>
> this["e"]();
>
> If |e| is removed altogether, a TypeError would result.

I'm not following. Was there `this["e"]()` in the original code?

[...]

>>
>>>
>>> Inlined function calls for local scope would be safer, as the
>>> interpreter (and the person invoking it) would not have to know about
>>> any other global references to - f-.
>>
>> It should be safe to perform inlining in global scope too, as long as

^^^^^^^^^^


>> minifier is aware of all the code. It gets tricky if parts of that

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


>> code are in html, though :)
>
> Removing global identifiers would be a problem for lazy-load scripts
> that may want to use that identifier. Frames that use that identifier,
> where square-bracket notation is used.

And any other code from, say, intrinsic event handler attributes.

[...]


--
kangax
