Proposal for Real-world Testing of Bitcoin XT - Muneeb's proposal


Washington Sanchez

Aug 21, 2015, 2:32:40 AM
to bitcoin-xt
This is a cross-post from the Blockstack Discourse forum here.

The blocksize / Bitcoin XT debate is getting pretty heated (see this thread). In computer science, real data and real deployments are what matter; no amount of arguing or simulation is a substitute for a real deployment.

 

"In theory, there is no difference between theory and practice. But, in practice, there is."

 

In the blocksize debate, people usually point out that network bandwidth/latency is limited in different parts of the world, and that larger blocks could mean only people with high-bandwidth connections can participate in the network. We can easily test this. There is a great resource, PlanetLab, that is used by thousands of researchers to run real-world experiments. Think of PlanetLab as Amazon AWS for researchers: it has real nodes/hardware deployed all over the world, and you can run real experiments on it. Instead of debating "what will happen when someone in India with a 256 Kbps connection joins XT with 8 MB blocks", you can actually run XT on nodes in India using PlanetLab and just see what happens. I'm happy to put XT developers in touch with the PlanetLab core devs for this experiment.

 

Real data trumps everything. Period.


And another comment:

Agreed, and I'd narrow the experiment down even further by trying to answer what the minimum uplink/downlink bandwidth requirement is for running a node (network links are not going to be symmetric), given that you want this to be a global system: users anywhere in the world should be able to join it.

 

Personally, I'd feel a lot better about supporting a larger blocksize if we knew that Bitcoin nodes running at hundreds of geographically distributed PlanetLab locations, with varying hardware and network resources, were doing just fine.


My own personal take is that the transaction capacity of the network shouldn't be beholden to people who shouldn't be miners (and thank God we never took this approach with the hash rate). Nevertheless, this sort of test may yield some good data and go a long way to convincing miners to support BIP101.

Thoughts?

Mike Hearn

Aug 21, 2015, 7:41:55 AM
to bitcoin-xt
I believe Gavin already did do some network tests with regtest nodes spread across different servers in different geographies. He also built a mining simulator that models various latencies, etc.:


There has actually been quite a lot of testing and research behind BIP 101. Perhaps next week Gavin can weigh in on whether PlanetLab testing would help or just duplicate work he already did.

Jerry Chan

Aug 25, 2015, 3:19:14 AM
to bitcoin-xt
That sounds a bit thin.
I spent 14 years working on banking technology, and there is no way we would risk pushing an insufficiently tested change (i.e., one tested only on a simulator) onto our systems like that. It's people's money we have to safeguard, and that is the top priority.

Mike Hearn

Aug 25, 2015, 5:53:51 AM
to bitcoin-xt
As I said, testing with actual network nodes spread around the world (running a private Bitcoin network) handling 20 MB blocks was done, if I recall correctly; I may be misremembering, so Gavin can weigh in on the details.

I used to work in site reliability engineering at Google, so am also familiar with the needs of high-availability systems. I think BIP 101 has been tested about as well as is realistic with our current infrastructure. The big differences I see are:
  1. At Google we had exhaustive unit testing as well as regression tests. In Bitcoin we mostly just have regression tests.

  2. We can't do incremental rollouts for rule changes. By the nature of the system, you can't run a 5% experiment on bigger blocks.
The rest of the techniques used are much the same.

Gavin Andresen

Aug 25, 2015, 10:59:13 AM
to bitcoin-xt
RE: testing on PlanetLab: more testing is better, as long as you know what you are trying to test and don't just re-test the same things.

I tested large (20MB) block validation and propagation WAAAY back in December, looking for any non-linear behavior (CPU, memory usage, or bandwidth), and found no problems.

Before that, Conformal tested even larger blocks and transactions and also found no problems:


I can think of a lot of PlanetLab experiments that could be run, and I know there are at least one or two academic groups who are working on full-scale, running-actual-production-code, test networks. Last I talked to one of them, they were still busy tuning the test network (they can control latency/bandwidth between peers) so it behaved like the actual Bitcoin network.

Before throwing a bunch of Bitcoin XT nodes running in -regtest mode onto PlanetLab, some thought needs to go into what questions you want answered. For example, if you're worried about block propagation times between miners with bigger blocks, you'll also need to run Matt's fast relay code on PlanetLab.


To anybody who wants to help test, some practical tips:

+ Run a bunch of nodes in -regtest mode.
+ You can control network topology using explicit -connect command-line arguments or the addnode RPC command.
+ Use the 'generate' RPC call to generate blocks on demand.
+ If you are generating tens of thousands of transactions to test, be aware that the wallet code is NOT optimized for that: you will find coin selection and transaction creation slowing down the more transactions you generate. To work around that problem, use the raw transaction API with private keys that aren't in the wallet, or Conformal's btcd/btcwallet code instead.
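To make the tips above concrete, here is a minimal sketch of a two-node -regtest setup with an explicit topology. The data directories and port numbers are illustrative choices, not from the original post, and the `generate` RPC shown is the 2015-era call (later versions of Bitcoin Core replaced it with `generatetoaddress`):

```shell
# Create separate data directories so the two nodes don't collide.
mkdir -p /tmp/nodeA /tmp/nodeB

# Node A: listens on a fixed port and waits for peers.
bitcoind -regtest -datadir=/tmp/nodeA -port=18444 -rpcport=18443 \
    -listen=1 -daemon

# Node B: connects ONLY to node A via -connect, so the network
# topology is exactly what you specified rather than auto-discovered.
bitcoind -regtest -datadir=/tmp/nodeB -port=18445 -rpcport=18446 \
    -connect=127.0.0.1:18444 -daemon

# Generate 101 blocks on demand from node A (101 so the first
# coinbase matures and its coins become spendable).
bitcoin-cli -regtest -datadir=/tmp/nodeA -rpcport=18443 generate 101

# Topology can also be adjusted at runtime with the addnode RPC.
bitcoin-cli -regtest -datadir=/tmp/nodeB -rpcport=18446 \
    addnode 127.0.0.1:18444 onetry
```

With the topology pinned down like this, you can measure how long the 101 blocks take to appear on node B, which is the kind of propagation question a PlanetLab deployment would scale up across real WAN links.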

--
Gavin Andresen