On 2013-10-27 00:30, Eric Bednarz wrote:
> IMNSHO there's a lot wrong with that concatenated picture.
>
> 1) What is 'large'? If you have five script files between five and
> ten KiB, go ahead. But is it more efficient to load 500 KiB (or more)
> of script synchronously before you can render anything at all instead of
> loading what you need in order to render the initial view first and
> lazy-load the rest?
>
> 2) Loading a really large file is usually much less efficient than
> loading several smaller files dynamically in parallel in most (more or
> less) contemporary browsers. This might depend on the precise values of
> 'several' and 'contemporary', of course.
>
> 3) If you happen to exclusively develop for UA's that already support
> SPDY, you're just wasting time anyway (or rather, you're being
> very productive shooting yourself in the foot).
>
> 4) If you change one character in a one MiB file, you have to serve one
> MiB to propagate that change to the client. Most client-side code I've
> seen can at least be split into:
>
> - third party vendor code (very infrequently or never updated)
> - application core (updated when a bug has been found and fixed)
> - end user feature set (frequently updated)
>
> and to me it makes sense to separate and aggressively cache all of them.
I agree for the most part; I just didn't want to go into that much
detail in a reply to someone who's still learning the basics.

Performance tuning is an art. There is no one-size-fits-all solution, and
even if there were, it would become obsolete in a matter of months as new
technologies become available (like SPDY) and the balance of installed
user agents shifts (a new iPhone, an OS reaching end of life, etc.).

There are just too many variables to give a generic recommendation. Are
the files large or small? Static or dynamic? Served from one host or
multiple hosts? Are we using public CDNs for popular libraries? What's
the chance that some of the files are already in the cache? How many
parallel script downloads does the average visitor's browser support?
How are the caches and proxies configured (on the server, the client, or
in between)? Are we serving over SSL? How many pages will the average
visitor load? Is there a class of visitors that should receive
preferential treatment? Is loading scripts even a noticeable factor in
the overall site performance? Can we ignore certain legacy browsers?
When a script changes, how do we invalidate the client's cached version?
What's the expected network latency? Is this an intranet site? Or mostly
mobile? Is there a limit to the file size or number of files a client
will cache? What's the cost of a cache-related failure? Would it be
better to store some of the data in localStorage?
...and so on. That was just off the top of my head.
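
To pick just one of those questions: caching data in localStorage only
helps if you also answer the invalidation question, typically by storing
a version next to the data. A minimal sketch, purely to illustrate the
idea (the key name and version value are made up):

// Hypothetical sketch: cache a piece of data in localStorage, keyed by an
// app-defined version so that a new release invalidates the old copy.
var DATA_KEY = 'myapp:config';      // made-up key
var DATA_VERSION = '2013-10-27';    // e.g. a build date or revision hash

function readCachedData() {
  try {
    var raw = localStorage.getItem(DATA_KEY);
    if (!raw) return null;
    var entry = JSON.parse(raw);
    // Treat anything written by an older release as stale.
    return entry.version === DATA_VERSION ? entry.data : null;
  } catch (e) {
    return null; // storage disabled, quota exceeded, corrupt JSON, ...
  }
}

function writeCachedData(data) {
  try {
    localStorage.setItem(DATA_KEY,
      JSON.stringify({ version: DATA_VERSION, data: data }));
  } catch (e) {
    // Ignore; the cache is only an optimization.
  }
}

Whether that actually beats plain HTTP caching is, again, one of those
"it depends" questions.
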
For high-profile, high-traffic sites, all of these questions (and more)
must be investigated and answered. Most sites (IMHO) can just do
whatever they want and nobody will even notice.

All that said, I doubt that ten separate script requests would really
perform better _on average_ than requesting one large file, 'large' here
being under 200 KiB or so. If the minified source code takes up more than
that, we're talking about a special case - or something perversely bloated,
like jQuery UI. Lazy-loading data is an option, of course, but also a
special case.
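
For completeness, lazy-loading additional scripts (the approach from
point 1 above) usually boils down to injecting a script element on
demand; a rough sketch, with a made-up URL and callback:

// Hypothetical sketch: load a script only when it is first needed and run
// a callback once it has executed.
// (Older IE would need onreadystatechange instead of onload.)
function lazyLoadScript(url, callback) {
  var script = document.createElement('script');
  script.src = url;
  script.async = true;
  script.onload = function () {
    if (callback) callback();
  };
  document.getElementsByTagName('head')[0].appendChild(script);
}

// e.g. only pull in the editor module once the user opens the editor:
lazyLoadScript('/js/editor.min.js', function () {
  // the editor code is now available
});

That saves bytes up front but adds a round trip later, which is exactly
the trade-off in question.
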
Even with client-side caching, the UA may periodically revalidate its
cached copies, and will likely get a 304 response, but those requests still
take time. And it takes more time and bandwidth to make ten requests (plus
headers) than one. SPDY can definitely help here, but it's still a niche
protocol, mostly due to the lack of default server-side support.
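
One way to avoid even those revalidation round trips is to serve the
files with a far-future expiry and change the URL whenever the content
changes. As a rough illustration only, assuming a Node/Express static
file server (the mount path, directory, and lifetime are made up):

// Hypothetical sketch: serve static assets with a one-year cache lifetime.
// Combined with versioned URLs (e.g. app.20131027.min.js or app.js?v=...),
// the browser never has to ask "has this changed?" again.
var express = require('express');
var path = require('path');

var app = express();
app.use('/static', express.static(path.join(__dirname, 'public'), {
  maxAge: 365 * 24 * 60 * 60 * 1000 // in milliseconds
}));
app.listen(3000);

The same effect can be had with any web server's cache headers; Express
is just the shortest example to write down.
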
Anecdote: Three years ago, I optimized the hell out of a site
(script-heavy, extranet, with ~20k unique visitors per month) as an
exercise. I read everything I could find on the topic, including, of course,
the excellent books and articles by Steve Souders, and implemented what
I felt was the optimal solution at the time. It did have an effect, but
only if you were really paying attention. None of the users I talked to
noticed any significant difference with the optimizations enabled or
disabled. To make matters worse: a year later, the recommended "best"
solution was completely different.

Lesson learned: if it's fast enough, don't waste time micro-optimizing
it until someone complains. Until then, your time is better spent adding
features and eliminating bugs. With moderately current browsers and
hardware, and an acceptable internet connection, even the typical
non-optimized jQuery UI site with a zillion plugins will load fast enough.

If you suddenly "hit it big" with some project, there will be plenty of
time and funds to improve performance. But if you spend too much of your
time on premature optimization, the chances of hitting it big are small.

- stefan