The blocksize / Bitcoin XT debate is getting pretty heated (see this thread). In computer science, real data and real deployments are what matter; no amount of arguing or simulation is a substitute for a real deployment.
"In theory, there is no difference between theory and practice. But, in practice, there is."
In the blocksize debate, people usually point out that network bandwidth/latency is limited in different parts of the world, and that larger blocks could mean only people with high-bandwidth connections can participate in the network. We can easily test this. There is a great resource, PlanetLab, that is used by thousands of researchers to run real-world experiments. Think of PlanetLab as Amazon AWS for researchers: it has real nodes/hardware deployed all over the world, and you can run real experiments on them. Instead of debating "what will happen when someone in India on a 256 Kbps link joins XT with 8 MB blocks," you can actually run XT on nodes in India using PlanetLab and just see what happens. I'm happy to put XT developers in touch with the PlanetLab core devs for this experiment.
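To make that concrete, here is a minimal sketch of what kicking off such a deployment could look like, assuming you already have a PlanetLab slice with SSH access. The slice name, node hostnames, and binary paths are placeholders for illustration, not a tested setup:

```python
import subprocess

# Placeholder slice name and node hostnames -- substitute your own.
# PlanetLab access is plain SSH: ssh <slice>@<node>.
SLICE = "bitcoin_xt_test"
NODES = [
    "planetlab1.example-india.in",  # node in India
    "planetlab2.example-us.edu",    # node in the US
    "planetlab1.example-eu.fr",     # node in Europe
]

def run_remote(node, cmd):
    """Run a shell command on one PlanetLab node over SSH; return stdout."""
    result = subprocess.run(
        ["ssh", f"{SLICE}@{node}", cmd],
        capture_output=True, text=True, timeout=120,
    )
    return result.stdout.strip()

# Start an XT node on every machine (assumes the bitcoind binary was
# already copied into the slice's home directory, e.g. via scp).
for node in NODES:
    run_remote(node, "./bitcoind -daemon")

# Poll sync progress; getblockcount is a standard bitcoind RPC call.
for node in NODES:
    height = run_remote(node, "./bitcoin-cli getblockcount")
    print(f"{node}: block height {height}")
```

From there you'd watch block heights, propagation delays, and orphan rates across the slice instead of arguing about them.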
Real data trumps everything. Period.
Agreed, and I'd narrow the experiment down even further to answer a concrete question: what is the minimum uplink/downlink bandwidth (network links are not going to be symmetric) required to run a node, given that you want this to be a global system where users anywhere in the world can join?
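As a rough illustration of why the uplink/downlink asymmetry matters, here is a back-of-envelope sketch. The block interval and peer count are my assumptions, and the real experiment would measure this rather than compute it:

```python
# Average bandwidth floor per block size. Assumptions (mine, not measured):
# one block per ~600 s on average, and the node relays each full block to
# 8 peers. Real traffic is bursty -- a block must propagate in seconds,
# not minutes, to keep orphan rates down -- so these averages understate
# the true requirement, and they ignore transaction relay entirely.
BLOCK_INTERVAL_S = 600
PEERS_SERVED = 8

for block_mb in (1, 2, 4, 8):
    block_bits = block_mb * 1_000_000 * 8
    down_kbps = block_bits / BLOCK_INTERVAL_S / 1000
    up_kbps = down_kbps * PEERS_SERVED
    print(f"{block_mb} MB blocks: >= {down_kbps:.0f} kbps down, "
          f"~{up_kbps:.0f} kbps up to relay to {PEERS_SERVED} peers")
```

On those numbers, 8 MB blocks need roughly 107 kbps down but around 850 kbps up: the 256 Kbps link from the earlier example could just about receive blocks on average, yet couldn't relay them to its peers. The uplink is the binding constraint.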
Personally, I'd feel a lot better about supporting a larger blocksize if we knew that Bitcoin nodes running at hundreds of geographically distributed PlanetLab locations, with varying hardware and network resources, were doing just fine.