I think what you all are talking about here is a CDN, or Content Delivery Network. CDNs are used to do two things. First and most important, they distribute the load on both the server's backbone internet connection and its CPU / system resources across multiple machines, resulting in faster load times. The second service they provide is content delivery from servers in close(r) proximity to the host receiving the data, an optional and often overlooked aspect of many CDN implementations. To provide the second service, the geographical location of the host must first be determined, which is only done once per session, during the initial request. There must be one "primary" server that handles all the initial requests and determines the host's location, then places a cookie on the host (so the host's session can be tracked) and routes that request to the closest "tertiary" server available. For the purposes of HOTU, I would ignore this second aspect for now.

CDN servers do not contain an entire copy of everything on the site; they only serve STATIC content -- images, mostly. Static is really the operative word when discussing CDNs. For
example, if you go to ebay.com and look at the properties of the eBay logo at the top, you'll notice that it was served from a different domain, "ebaystatic.com". This is a CDN in action.

There is one other thing I want to mention, and I'm not sure if this has been brought up or even considered, but implementing a physically separate machine or machines to act as database servers for a Web Application is NOT a CDN and has nothing to do with a CDN. Using separate boxes whose sole purpose is serving database queries is a great idea -- in fact, it can reduce the bottleneck on a site more than anything -- but this setup is a "Multi-Tier" Web Application, and generally more difficult to implement. The distinguishing characteristic is that, with a database server, the communication occurs on the SERVER side, between the web site's Application Logic layer server and the database server. This is opposed to the setup of a CDN, where communication is with the client side, between the server and the site visitor's browser.
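To make the client-side half of that distinction concrete, here's a minimal sketch (Python, purely illustrative; the host names and extension list are hypothetical placeholders, not anything configured yet) of the kind of URL rewriting a site's template layer would do so the browser fetches static files from a separate static host while dynamic pages stay on the main box:

```python
# Illustrative sketch only: point static asset paths at a dedicated
# static-content host; dynamic URLs stay on the main server.
# Host names are hypothetical examples from this discussion.

STATIC_HOST = "img.hotud.org"   # dedicated static/CDN box
MAIN_HOST = "hotud.org"         # box running the PHP/Joomla code

STATIC_EXTENSIONS = (".png", ".jpg", ".gif", ".css", ".js")

def asset_url(path: str) -> str:
    """Return a full URL, sending static files to the static host."""
    if path.lower().endswith(STATIC_EXTENSIONS):
        return f"http://{STATIC_HOST}{path}"
    return f"http://{MAIN_HOST}{path}"

print(asset_url("/images/logo.png"))  # goes to the static box
print(asset_url("/index.php"))        # goes to the main box
```

The point is that the *browser* talks to both hosts directly -- exactly the opposite of a database tier, which the visitor never sees.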
That said, I'm not sure of the traffic numbers on the old or new HOTU sites, but I suppose implementing such a system is a good idea even if it's not completely necessary at this point, and I'm not saying it isn't. If you have additional servers available, my first priority (as far as a CDN goes) would be implementing a true dedicated static image server (the first aspect I mentioned above; ignore the second for now). You would do this by simply moving all the images to the new static CDN server box. You can use a sub-domain, as has already been discussed, like img.hotud.org -- or you could use a completely separate domain like hotud-static.org; one is really no better than the other. In either case, the DNS records will need to point to a DIFFERENT IP address, which points to a different server, of course.
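In zone-file terms, that just means giving each host its own A record. A rough BIND-style sketch (the IP addresses are documentation placeholders, and the exact syntax depends on whoever runs the DNS):

```
; Hypothetical zone fragment for hotud.org -- addresses are placeholders
hotud.org.           IN  A  203.0.113.10  ; main web/Joomla box
img.hotud.org.       IN  A  203.0.113.20  ; static image box
download.hotud.org.  IN  A  203.0.113.30  ; game-file download box
```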
That's the key to what was being discussed earlier: in order for a browser to open multiple _sets_ of simultaneous connections while loading a single website, there must be content on that site being served from two different IPs. Since it was decided that this site would run on Joomla -- a system I have had many unfortunate experiences with, mostly related to its "obese" code base and excessive queries -- you want to lighten the load on the machine executing the PHP code (god forbid the same machine be acting as the DB server as well, which I'd guess it is) as much as possible, from a CPU-time point of view more than a bandwidth one. I also agree with previous suggestions about storing the downloadable game files on a different machine, to prevent a bandwidth bottleneck. The best way to do this would be to use a sub-domain whose DNS points to a separate IP as well, like download.hotud.org.
If you wanted to use a distributed database solution (multiple boxes with different IPs serving DB queries), you would need a load-balancing system in place to distribute all the Joomla queries across the servers. This will NOT happen on its own. You would also need to correctly configure database replication on all systems, so that when a record is updated on one DB server it propagates to the other DB servers -- ensuring they each have an exact copy of the database.
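As a rough sketch of what "load balancing" means here (Python, purely illustrative -- the host names are placeholders, and in practice you'd put a proxy in front of the replicas rather than doing this in application code):

```python
import itertools

# Illustrative sketch: hand read queries to replicated DB servers in
# round-robin order. Host names are hypothetical placeholders.
DB_SERVERS = ["db1.hotud.org", "db2.hotud.org", "db3.hotud.org"]
_next_server = itertools.cycle(DB_SERVERS)

def pick_server() -> str:
    """Choose the next DB server in round-robin order."""
    return next(_next_server)

# Each incoming query lands on a different box in turn; replication
# must keep every box's copy of the data identical for this to work.
assigned = [pick_server() for _ in range(4)]
```

The round-robin part only spreads the query load; it's the replication underneath that keeps the answers consistent no matter which box a query hits.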
I've been a web developer for many years, so I just thought I would offer some assistance on this issue. While incredibly slow page-load times do not seem to be a problem on the site yet, if it becomes anywhere near as popular as the old HOTU was at its peak, it will be a problem soon. The methods I outlined above are the only true solution to such an issue. People may tell you that simply upgrading your internet backbone or the components in the server box will make this go away, but I've been there and I can tell you beyond a shadow of a doubt that this is not the solution. While those things may help, they do not address the real issue causing the poor performance and are band-aid fixes at best. I hope these are issues that the new HOTU will someday face -- they are a good sign, of course -- but left uncorrected, nothing will kill a site faster than poor page-load times and download speeds. If and when this becomes the case, I'm always here. I've had experience with such situations on several sites months after they were deployed, and I've always been able to get the issues corrected and restore load times to satisfactory levels.
All the best,
gonzo
Walter wrote:
>
> Browsers handle 2-3 simultaneous connections *PER MACHINE*.
>
> Actually I'm fairly sure that this is 'per domain'.
>
> The implication being that if you create x.yourdomain.com
> <http://x.yourdomain.com>, y.yourdomain.com <http://y.yourdomain.com>,
> z.yourdomain.com <http://z.yourdomain.com> and split resources on a