Status update on bigger blocks

Mike Hearn

Jul 8, 2015, 7:24:47 AM
to bitco...@googlegroups.com
Hello,

As you may have noticed, Bitcoin has unfortunately run out of capacity due to someone DoS attacking the network. We now get a preview of coming attractions (so far that means people complaining that their transactions are not confirming).

Gavin has prepared patches that implement a hard fork in January for bigger blocks. I spent the last couple of days testing and doing a second code review of the patches; this revealed some further bugs, and I have sent him patches to fix them. However, Gavin is on vacation, so not much will happen until he is back next week.

In the past week we also found that Gavin's XT node was under attack via Tor. In case this was a trial run, I have prepared a patch that helps with that a bit, though DoS attacks are an ongoing concern and raising the bar against them will require significant effort. I will put that patch into the next XT release.

I will try and find time to look at ways to resolve the current tx flooding attack in the next day or two. I suspect it can be mitigated through heavier reliance on coin age priority, as coin age is not something that can be trivially bought on the spot, unlike fee priority. If anyone wants to beat me to it, please go ahead.
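For context, the coin-age priority Bitcoin Core computes is roughly the sum over inputs of value times age in blocks, divided by the transaction size. A minimal sketch (constants simplified, exact rounding omitted):

    COIN = 100000000  # satoshis per BTC

    def coin_age_priority(inputs, tx_size_bytes):
        # inputs: list of (value_in_satoshis, confirmations) pairs
        return sum(value * age for value, age in inputs) / tx_size_bytes

    # Core's "high priority" threshold is roughly 1 BTC aged one day
    # (144 blocks) in a 250-byte transaction:
    HIGH_PRIORITY = COIN * 144 / 250

    # Freshly created outputs have zero age and so contribute nothing:
    print(coin_age_priority([(COIN, 0)], 250))    # 0.0
    print(coin_age_priority([(COIN, 144)], 250))  # 57600000.0, at the threshold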

thanks,
-mike

cedivad

Jul 8, 2015, 6:27:21 PM
to bitco...@googlegroups.com
Will it be possible to use XT to only vote for the bigger blocks and nothing else?

Mike Hearn

Jul 8, 2015, 6:32:55 PM
to cedivad, bitco...@googlegroups.com
Bigblocks is the only block chain/consensus change that will be in the next release, so there is nothing else not to vote for.

cedivad

Jul 8, 2015, 6:59:40 PM
to bitco...@googlegroups.com, ced...@gmail.com
I see from the readme that XT is a set of changes; bigger blocks is not the only one. I personally don't care, but I think those could be a deterrent to its adoption for people only interested in voting on the block size debate.

Mike Hearn

Jul 8, 2015, 7:21:39 PM
to cedivad, bitco...@googlegroups.com
The other changes are to do with network handling. By running them you don't affect other people, just give them more information or prioritise your own resources differently. So there's nothing voting related about them.

Anyway, the answer is no. XT is a set of changes. I will not maintain endless combinations of features to satisfy every whim. If someone wants to take Gavin's changes and make a third fork that has only that on top of Core, that's cool by me.

David Barnes

Jul 8, 2015, 9:42:37 PM
to bitco...@googlegroups.com
I don't like the idea of mixing the coin age priority update in with the blocksize update.

As an exchange operator, it is very difficult to build up any coin age within the wallet because coins are constantly being withdrawn and deposited, but that certainly doesn't mean the exchange's transactions are SPAM.
If you put in a coin age penalty, I suspect no exchanges will get behind Bitcoin XT.

Personally I much prefer the idea of charging per output as a SPAM deterrent.
https://github.com/bitcoin/bitcoin/pull/1536

Peter Tschipper

Jul 8, 2015, 11:57:18 PM
to bitco...@googlegroups.com
Hi Mike,

I saw one of your posts today where you mentioned that charging "newer" coins higher fees might help to reduce spam.  Someone replied, apparently from an exchange, saying he wouldn't support that because it would affect their users, but I was wondering if there could be a "rapid decay" in the fee, whereby the fee would drop within perhaps 10 or 15 seconds after the UTXO was created.  The logic behind it is that this particular spam attack, as I'm sure you're aware, uses the newly created UTXO from the last batch of spam to create the next batch of spam and cycles through this process rapidly.  If the new UTXO had a high fee for the first 10 seconds, it would at least slow them down somewhat, or rather, cost them a lot more without affecting regular users.
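A toy sketch of that idea follows; the 15-second window and the surcharge factor are made up purely for illustration, and for an unconfirmed output the "age" would have to be measured from when it entered the node's mempool:

    # Toy model of a "rapid decay" surcharge: spending a UTXO created only
    # seconds ago requires a higher fee, decaying linearly back to the
    # normal fee over a short window. All numbers are illustrative only.
    def required_fee(base_fee, utxo_age_seconds, window=15.0, surcharge=10.0):
        if utxo_age_seconds >= window:
            return base_fee
        remaining = (window - utxo_age_seconds) / window
        return base_fee * (1.0 + surcharge * remaining)

    base = 0.0001
    print(required_fee(base, 0))    # 0.0011, 11x the base fee at age zero
    print(required_fee(base, 10))   # ~0.00043, decaying
    print(required_fee(base, 30))   # 0.0001, back to normal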

Mike Hearn

Jul 9, 2015, 5:27:43 AM
to David Barnes, bitco...@googlegroups.com
Hey David,

I hear you. The idea is not to add a penalty, but to find a way to prioritise when blocks are full that's better than the current algorithm - which is strictly a worst case scenario. As you know, I believe blocks should not be permanently full. That's a disaster for exactly this reason. Normally it should make no difference.

If you have better algorithm suggestions I'm all ears. What I'm thinking about may not even work, I need to do some simulations to see what happens. But consider this: whilst you are a professional operator and can monitor fees and adjust them in real time, when people are sending money into your exchange they are using whatever their wallet does and may even try to manually pick fees. So it's also in your interests that users don't face high fees to get money into your exchange.

But anyway: the goal here is simple. The attacker's aim is to deny service (we presume). Perhaps they're trying to make a point, or perhaps they are short BTC and want to try and push the price lower. If we're able to ensure that, say, 80% of users see no disruption at all, then their mission has largely failed, and at some point (assuming rationality) they should get tired of burning money and go away. So we want to maximize the percentage of users who see no disruption.

thanks,
-mike

David Barnes

Jul 9, 2015, 7:34:24 AM
to bitco...@googlegroups.com, davidba...@gmail.com
Mike,

I have 2 main points:

1) Leave the other (non-blocksize) updates for a separate patch. Otherwise, if you try to package several controversial changes together, everyone is going to be able to find one thing they don't like and Bitcoin XT as a whole will not achieve a majority following. If you think there are transaction-prioritization updates that are very important, try to merge them into Bitcoin Core first, or wait and do it later after Bitcoin XT has achieved acceptance.

2) On the topic of further prioritizing coin age: spammers will easily be able to find a way around this. All they need to do is create transactions with thousands of outputs, and then allow those outputs to age before re-spending. Plus, spammers have time on their side: they can make the initial transactions and then just sit on those outputs for months before they start the real attack...
Whereas exchanges would also have to start implementing their own coin aging systems, which will be a pain because they cannot easily sit on outputs like the spammers, as those funds may be demanded immediately by the customers.

Not to mention that most wallets do not age coins at all; they just re-spend low-confirmation, or even unconfirmed, change.

The point being, the number of outputs is a much better target for prioritization than coin age.

Such as (just a simple example):

    base_fee = 0.0001 * kb
    standard_fee = base_fee + (num_outputs * base_fee / 2)
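As a runnable version of that example (using just the illustrative numbers above), a many-output flooding transaction pays far more than an ordinary payment:

    # The example formula above, made runnable. The 0.0001 BTC/kB base rate
    # and the per-output factor are illustrative numbers, not a proposal.
    def standard_fee(size_kb, num_outputs):
        base_fee = 0.0001 * size_kb
        return base_fee + num_outputs * base_fee / 2

    print(standard_fee(0.5, 2))      # ~0.0001, an ordinary 2-output payment
    print(standard_fee(10.0, 1000))  # ~0.501, a 1,000-output flooding tx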


Mike Hearn

Jul 9, 2015, 8:05:01 AM
to David Barnes, bitco...@googlegroups.com
Hey David,

XT predates the block size debate and includes a variety of other patches. As I said on another thread already, I am not going to prepare N different flavours just for the sake of it. If someone has a specific, credible objection to a particular patch then I'd like to hear it, and I am open to putting things behind flags so those with strong opinions can opt out. But I'd need to be given an actual technical reason.

"Stuff might be controversial" is something the Bitcoin community needs to move beyond. It is a way to stall progress, nothing more.

I do not plan to submit further patches to Core.
  
> 2) On the topic of further prioritizing coin age: spammers will easily be able to find a way around this. All they need to do is create transactions with thousands of outputs, and then allow those outputs to age before re-spending.

Yes, obviously. There are ways around any DoS or spam filter, especially if you assume an infinitely motivated and capable opponent. There are no magic bullets for prioritising users when out of capacity, especially in a system where those users are largely anonymous. Hence the desire to avoid running out!

However, it means spammers are burning a resource that many users have a natural supply of. So it might help users outcompete attackers during the attack.
 
> Whereas exchanges would also have to start implementing their own coin aging systems, which will be a pain because they cannot easily sit on outputs like the spammers, as those funds may be demanded immediately by the customers.

I would hope that exchanges can just buy their way ahead of DoS attackers during an attack with higher fees. The cost may need to be passed on or eaten, but it can be done.

But just to reiterate the point - when things are normal this should never be needed. Exchange transactions should go through as fast as everyone else's.
 
> Not to mention that most wallets do not age coins at all; they just re-spend low-confirmation, or even unconfirmed, change.

Well, Bitcoin Core and bitcoinj certainly take age into account. It's only a few lines of code.
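For illustration, a minimal sketch of age-aware selection (hypothetical code, not the actual Core or bitcoinj logic): prefer the most-confirmed outputs, leaving fresh change to age.

    # Illustrative only: spend the most-confirmed outputs first.
    def select_coins(utxos, target):
        # utxos: list of (value_in_satoshis, confirmations) pairs
        selected, total = [], 0
        for value, confirmations in sorted(utxos, key=lambda u: -u[1]):
            if total >= target:
                break
            selected.append((value, confirmations))
            total += value
        return selected, total

    coins = [(50000000, 0), (20000000, 300), (10000000, 10)]
    print(select_coins(coins, 25000000))  # picks the 300-confirmation coin first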
 
> The point being, the number of outputs is a much better target for prioritization than coin age.

That may be so, but it needs investigation. For instance, exchanges often use sendmany and batch sends together so they'd also have lots of outputs in their transactions. And flooders can just break their transactions up to have fewer outputs. This stuff isn't easy.

Yet another approach would be to go outside the fee/priority model entirely, and communicate to miners that your transactions are important out of band. But let's exhaust the possibilities of the current data we have before looking at adding new data. And let's get the darn block size raised! :)

Mike Hearn

Jul 24, 2015, 7:46:19 AM
to bitco...@googlegroups.com
Here's another status update on bigger blocks:

Gavin is back from vacation and has incorporated the bug fixes I sent him. One of the things the patches do is introduce a new rule that imposes a max transaction size limit. This is because the way Bitcoin calculates signatures has poor algorithmic complexity and without such a rule oversize transactions can be very slow to verify. The original code used a simple size limit, but Gavin wants to improve this to measure bytes actually hashed: this is more complex but more direct. So he has been working on a patch to do this.
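For intuition about why this blows up: with the legacy signature-hash algorithm, each input's signature check re-hashes a stripped copy of the whole transaction, so the total bytes hashed grow roughly quadratically with transaction size. A rough model (illustrative numbers; the real stripped size is somewhat smaller than the full serialized size):

    # Rough model of legacy sighash cost: each of a transaction's n inputs
    # hashes a stripped copy of the transaction, so total bytes hashed is
    # roughly n * size, i.e. quadratic growth as transactions get bigger.
    def approx_bytes_hashed(num_inputs, stripped_tx_size_bytes):
        return num_inputs * stripped_tx_size_bytes

    # Doubling both the input count and the size quadruples the hashing work:
    print(approx_bytes_hashed(1000, 250000))   # 250,000,000 bytes
    print(approx_bytes_hashed(2000, 500000))   # 1,000,000,000 bytes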

Once this patch is ready I will build some Linux binaries and try them out for a day or two. If there are no problems I'll ask for more testers. Please let me know if you'd like to assist with this. Then, once we're happy that the test nodes seem stable, we will go ahead and launch 0.11A.


Chris Wheeler

Jul 24, 2015, 11:05:45 AM
to bitcoin-xt, he...@vinumeris.com
I'd be happy to test this if you can provide Linux binaries. I'm running a node on a fairly low-bandwidth (3 Mb down, 1 Mb up) connection in the UK. I can also set something up in a well-connected DC in London if required.

Mike Hearn

Jul 24, 2015, 11:15:26 AM
to Chris Wheeler, bitcoin-xt
Thanks Chris.

What OS is your node? I'm preparing an apt repository with the gitian built amd64 binaries in it.

Chris Wheeler

Jul 24, 2015, 11:22:45 AM
to bitcoin-xt, he...@vinumeris.com
It's CentOS 7 64-bit... so an RPM or yum repo would be preferred if possible, or I can just copy over the binaries.

Mike Hearn

Jul 24, 2015, 11:23:41 AM
to Chris Wheeler, bitcoin-xt
OK. There'll be a tarball of course. If you'd be interested in making a yum repo that'd be neat.

Michael Ruddy

Jul 24, 2015, 7:43:51 PM
to bitcoin-xt, he...@vinumeris.com
Thanks for the update. I figure that I'll compile my own on Ubuntu 14.04.2 LTS x86_64 and play with it. Is all this going to be put on the only-bigblocks branch?

Is changing the p2p protocol version and widening the payload size etc... going to be part of the patch? The network stuff isn't part of consensus, so it doesn't have to be in a hardfork and the concern does not kick in for a long time. Just curious what the plan is.
If so, I looked at that branch and I'm wondering some about CMessageHeader.nMessageSize being 32 bits and the far future block sizes overflowing that (maybe you have an update in a private patch set already). There are also some message header payload MAX_SIZE comparisons that probably need to be updated (like in CNetMessage::readHeader, etc...).

Will the text of BIP 101 (https://github.com/gavinandresen/bips/blob/blocksize/bip-0101.mediawiki) be updated to note the new max transaction size limit as well?
When going from reading the BIP to reading the code, the BIP did not seem comprehensive of at least all of the high level consensus related changes to me.
Looping that back into the BIP may aid understanding and decision making for more casual observers. I was going to mention this at https://github.com/bitcoin/bips/pull/163, but since I'm here, this might be a better forum to mention it.
Finally, I learned a little from the mailing list discussion on this, so maybe link to it from the BIP: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-June/009000.html.
Or not... I'm not sure how BIPs are viewed around here. To me they're more about info sharing than process.

Gavin Andresen

Jul 24, 2015, 9:03:37 PM
to Michael Ruddy, bitcoin-xt, Mike Hearn
On Fri, Jul 24, 2015 at 7:43 PM, Michael Ruddy <mrud...@gmail.com> wrote:
> Is changing the p2p protocol version and widening the payload size etc... going to be part of the patch?

No-- it will be many years before 'block' messages could be bigger than 2 gigabytes, and by then we CERTAINLY won't be using 'block' messages to announce new blocks (we'll be using something much better that doesn't repeat all of the transaction data).
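As a rough check on that timeline, coarsely modelling BIP 101's schedule (the cap doubling every two years from 8 MB in January 2016, ignoring the linear interpolation between doubling dates):

    # When would BIP 101's cap outgrow a 32-bit message length? Coarse model:
    # the cap doubles every two years starting from 8 MB in 2016.
    cap = 8000000   # bytes, January 2016
    year = 2016
    while cap <= 2**31:   # ~2.1 GB, roughly the "2 gigabyte" boundary above
        cap *= 2
        year += 2
    print(year, cap)      # 2034 4096000000, well over fifteen years away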
 
> Will the text of BIP 101 (https://github.com/gavinandresen/bips/blob/blocksize/bip-0101.mediawiki) be updated to note the new max transaction size limit as well?

That's what I've been working on the last few days-- I convinced myself the right thing to do is to clean up the old sigop-counting rules. There will be a separate BIP describing the new rules; I posted a draft to the bitcoin-dev mailing list today: http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-July/009667.html

(latest code is at https://github.com/gavinandresen/bitcoin-git/commits/count_hash_size -- I'll work with Mike to get that into an XT branch )

--
Gavin Andresen

Michael Ruddy

Jul 25, 2015, 12:25:21 PM
to bitcoin-xt, he...@vinumeris.com, gavina...@gmail.com
> No-- it will be many years before 'block' messages could be bigger than 2 gigabytes, and by then we CERTAINLY won't be using 'block' messages to announce new blocks (we'll be using something much better that doesn't repeat all of the transaction data).

Cool, that makes sense on multiple levels. First, it reduces the patch size by not including non-consensus, client app specific, far-future networking changes.

Second, it highlights that how blocks are communicated is not part of consensus, and that the scheduled block size increases can add value by spurring innovation in this area.
It may be important for people to realize and internalize that second point. In addition to being another reason for people to accept this patch, it may help them expand their creativity to think beyond limitations that the current P2P protocol over the Internet may (eventually) have.

The scheduled nature of the increases gives network participants the opportunity to evaluate their situation and judge whether further evolution is needed on their part.
If adjustment is necessary, then such innovations can take many forms including things you already mentioned at https://gist.github.com/gavinandresen/e20c3b5a1d4b97f79ac2, or even using other protocols over networks other than the Internet (e.g.- via Sneakernet, a future bitcoin satellite link, an Andreas Antonopoulos guerilla style "short wave radio hooked to a fence" [https://youtu.be/3mUcpsbnhGE?t=14m20s] transmission, or more likely a mesh network in developing or otherwise constrained locations, etc...). Basically, if the medium is constraining the message, then changing the medium can be a solution.

> That's what I've been working on the last few days-- I convinced myself the right thing to do is to clean up the old sigop-counting rules. There will be a separate BIP describing the new rules; I posted a draft to the bitcoin-dev mailing list today: http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-July/009667.html
 
Nice, I think I prefer your latest sigop-counting correction over the initial simple transaction-size limit too. That adds value for people willing to switch. I noticed block 365955 took my 2.2 GHz machine almost 5 minutes to validate the other day. I was wondering if that showed up during your last 100,000-block high-sigop-count analysis? It's always possible it was something else, but I've got a decent SSD and wasn't doing anything much else at the time when I saw that.

Mike Hearn

Jul 25, 2015, 12:30:30 PM
to Michael Ruddy, bitcoin-xt, Gavin Andresen
> I noticed block 365955 took my 2.2 GHz machine almost 5 minutes to validate the other day. I was wondering if that showed up during your last 100,000-block high-sigop-count analysis?

Yep. Gavin posted to the bitcoin-development mailing list about this change, and the bytes-hashed limit is set to be slightly over that transaction. So the worst case is a transaction that takes about five minutes to validate (on your hardware), which isn't awesome, but it's perhaps easier than setting the limit much lower than what's already in the chain.
 
> It's always possible it was something else, but I've got a decent SSD and wasn't doing anything much else at the time when I saw that.

It was spending its time hashing data over and over again.

At some point there'll be a new sighash function that's a lot more efficient.

Michael Ruddy

Jul 25, 2015, 1:04:53 PM
to bitcoin-xt, gavina...@gmail.com, he...@vinumeris.com
Ah, guess I missed seeing that. I saw 364292, 364422, and 364773 mentioned on the list. Those looked like a different usage pattern from 365955. Since I saw 16ga2uqnF1NqpAuQeeg7sTCAdtDUwDyJav and 19VAb9zAhpWLaWfEuqw9HXup2zaNoNPPyE in 365955 a lot though, I was guessing the cause would be related. Good to know, thanks.