Hi *,
1. stratumv2 templates via client-push
======================================
The stratumv2 design document suggests:
> Use a separate communication channel for transaction selection so
> that it does not have a performance impact on the main mining/share
> communication, as well as can be run in three modes - disabled (i.e. pool
> does not yet support client work selection, to provide an easier
> transition from Stratum v1), client-push (to maximally utilize the
> client’s potential block-receive-latency differences from the pool),
> and client-negotiated (for pools worried about the potential of clients
> generating invalid block templates).
--
https://stratumprotocol.org/specification/02-Design-Goals/
To me, the "client-push" approach (vs the client-negotiated approach)
seems somewhat unworkable (at least outside of solo mining).
In particular, if you're running a pool and have many clients generating
their own templates, and you aren't doing KYC on those clients, and aren't
fully validating every proposed share, then it becomes very easy to do a
"block withholding attack" [0] -- just locally create invalid blocks,
submit shares as normal, receive payouts as normal for partial shares
because the shares aren't being fully validated, and if you happen to find
an actual block, the pool loses out because none of your blocks were
actually valid. (You can trivially make your block invalid by simply
increasing the pool payout by 1 sat above the correct value.)
Validating arbitrary attacker submitted templates seems likely to be
expensive, as they can produce transactions which aren't already in your
mempool, are relatively expensive to validate, and potentially conflict
with transactions that other clients are selecting, causing you to have
to churn txs in and out of your mempool.
Particularly if an attacker could have an array of low hashpower workers
all submitting different templates to a server, it seems like it would
be relatively easy to overload any small array of template-validator
nodes, given a pure client-push model. In which case client-push would
only really make sense for pure solo-mining, where you're running your
own stratumv2 server and your own miners and have full control (and thus
trust) from end to end.
Does that make sense, or am I missing something?
I think a negotiated approach could look close to what Ocean's template
selection looks like today: that is, the pool operator runs some nodes with
various relay policies that generate blocks, and perhaps also allows for
externally submitted templates that it validates. Then clients request
work according to their preferences, perhaps they specify that they
prefer the "no-ordinals template", or perhaps "whatever template has the
highest payout (after pool fees)". The pool tracks the various templates
it's offered in order to validate shares and to broadcast valid blocks
once one is found.
A negotiated approach also seems more amenable to including out-of-band
payments, such as mempool.space's accelerator, whether those payments
are distributed to miners or kept by the pool.
This could perhaps also be closer to the ethereum model of
proposer/builder separation [7], which may provide a modest boost
to MEV/MEVil resistance -- that is if there is MEV available via some
clever template construction, specialist template builders can construct
specialised templates paying slightly higher fees and submit those to
mining pools, perhaps splitting any excess profits between the pool and
template constructor. Providing a way for external parties to submit
high fee templates to pools (rate-limited by a KYC-free bond payment
over LN perhaps), seems like it would help limit the chances of that
knowledge being kept as a trade secret to one pool or mining farm, which
could then use its excess profits to become dominant, and then gain too
much ability to censor txs. Having pools publish the full templates for
auditing purposes would allow others to easily incrementally improve on
clever templates by adding any high-fee censored transactions back in.
2. block withholding and oblivious shares
=========================================
Anyway, as suggested by the subject-line and the reference to [0],
I'm still a bit concerned by block withholding attacks -- where an
attacker finds decentralised pools and throws hashrate at them to make
them unprofitable by only sending the 99.9% of shares that aren't valid
blocks, while withholding/discarding the 0.1% of shares that would be
a valid block. The result being that decentralised non-KYC pools are
less profitable and are replaced by centralised KYC pools that can
ban attackers, and we end up with most hashrate being in KYC pools,
where it's easier for the pools to censor txs, or miners to be found
and threatened or have their hardware confiscated. (See also [6])
If it were reasonable to mine blocks purely via the tx merkle root,
with only the pool knowing the coinbase or transaction set at the time
of mining, I think it could be plausible to change the PoW algorithm to
enable oblivious shares: where miners can't tell if a given valid share
corresponds to a valid block or not, but pools and nodes can still
easily validate work from just the block header.
In particular I think an approach like this could work:
* take the top n-bits of the prev hash in the header, which are
currently required to be all zeroes due to `powLimit` (n=0 for regtest,
n<=8 in general)
* stop requiring these bits to be zero, instead interpret them as
`nBitsShareShift`, taking a value from 0 up to 255
* when nBitsShareShift > 0, check that the share hash is less than or
equal to (2**256 - 1) >> nBitsShareShift, where the share hash is
calculated as
sha256d( nVersion, hashPrevBlock, sha256d( hashMerkleRoot ),
nTime, nBits, nNonce )
* check that the normal block hash is not greater than
FromCompact(nBits) << nBitsShareShift
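In rough python terms that check would look something like the following
(purely a sketch of mine: the byte-order details and simply treating the
full top byte of the prev hash as nBitsShareShift are illustrative, and
from_compact is just the usual nBits decoding):

    import hashlib

    def sha256d(b: bytes) -> bytes:
        return hashlib.sha256(hashlib.sha256(b).digest()).digest()

    def from_compact(nBits: int) -> int:
        # standard compact target decoding: mantissa * 256**(exponent-3)
        # (assumes exponent >= 3 and ignores the sign bit, for simplicity)
        exponent = nBits >> 24
        mantissa = nBits & 0x007fffff
        return mantissa << (8 * (exponent - 3))

    def check_header_work(nVersion, hashPrevBlock, hashMerkleRoot,
                          nTime, nBits, nNonce):
        # top byte of the prev hash (zero today due to powLimit) is
        # reinterpreted as nBitsShareShift; hashes here are in the internal
        # little-endian byte order, so the "top" byte is byte 31
        nBitsShareShift = hashPrevBlock[31]

        def serialize(merkle_field: bytes) -> bytes:
            return (nVersion.to_bytes(4, 'little') + hashPrevBlock
                    + merkle_field + nTime.to_bytes(4, 'little')
                    + nBits.to_bytes(4, 'little') + nNonce.to_bytes(4, 'little'))

        block_hash = sha256d(serialize(hashMerkleRoot))
        share_hash = sha256d(serialize(sha256d(hashMerkleRoot)))

        share_ok = (int.from_bytes(share_hash, 'little')
                    <= ((2**256 - 1) >> nBitsShareShift))
        block_ok = (int.from_bytes(block_hash, 'little')
                    <= (from_compact(nBits) << nBitsShareShift))
        return share_ok, block_ok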
Note that this would be a light-client visible hard fork -- any blocks
that set nBitsShareShift to a non-zero value would be seen as invalid
by existing software that checks header chains.
It's also possible to take a light-client invisible approach by decreasing
the value of nBits, but providing an `nBitsBump` that new nodes validate
but existing nodes do not (see [1] eg). This has the drawback that it
would become easy to fool old nodes/light clients into following the
wrong chain, as they would not be fully validating proof-of-work, which
already breaks the usual expectations of a soft fork (see [2] eg). Because
of that, a hard fork approach seems safer here to me. YMMV, obviously.
The above approach requires two additional sha256d operations when
nodes or light clients are validating header work, but no additional
data, which seems reasonable, and should require no changes to machines
capable of header-only mining -- you just use sha256d(hashMerkleRoot)
instead of hashMerkleRoot directly.
In this scenario, pools are giving their clients sha256d(hashMerkleRoot)
rather than hashMerkleRoot or a transaction tree directly, and
`nBitsShareShift` is set based on the share difficulty. Pools then
check the share is valid, and additionally check whether the share has
sufficient work to be a valid block, which they are able to do because
unlike the miner, they can calculate the normal block hash.
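For example, the pool-side handling of a submitted share might look
roughly like this (building on the check_header_work() sketch above;
credit_miner and broadcast_block are just placeholders):

    def handle_share(job, nVersion, nTime, nNonce):
        # job carries hashPrevBlock, hashMerkleRoot and nBits; the client
        # was only ever given sha256d(job.hashMerkleRoot)
        share_ok, block_ok = check_header_work(
            nVersion, job.hashPrevBlock, job.hashMerkleRoot,
            nTime, job.nBits, nNonce)
        if not share_ok:
            return "reject"
        credit_miner(job.worker)       # pay out for the partial share
        if block_ok:
            # only the pool can assemble and broadcast the full block
            broadcast_block(job, nVersion, nTime, nNonce)
        return "accept"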
The above assumes that the pool has full control over the coinbase
and transaction selection, and that the miner/client is not able to
reconstruct all that data from its mining job, so this would be another
reason why a pool would only support a client-negotiated approach for
templates, not a client-push approach. Note that miners/clients could
still *audit* the work they've been given if the pool makes the full
transaction set (including coinbase) for a template available after each
template expires.
Some simple numbers: if a miner with control of 10% hashrate decided
to attack a decentralised non-KYC pool with 30% hashrate, then they
could apply 3.3% hashrate towards a block withholding attack, reducing
the victim's income to 90% (30% hashrate finding blocks vs 33% hashrate
getting payouts) while only reducing their own income to 96.7% (6.7%
hashrate at 100% payout, 3.3% at 90%). If they decided to attack a miner
with 50% hashrate, they would need to provide 5.55% of total hashrate to
reduce the victim's income to 90%, which would reduce their own income
to 94.45% (4.45% at 100%, 5.55% at 90%).
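Those figures are straightforward to reproduce (hashrates expressed as
fractions of total network hashrate; a throwaway python sketch):

    def withholding(pool, attacker, diverted):
        # the pool's payout per share is diluted by the attacker's extra shares
        victim = pool / (pool + diverted)
        # the attacker mines honestly elsewhere with the rest of its hashrate
        attacker_income = ((attacker - diverted) + diverted * victim) / attacker
        return victim, attacker_income

    print(withholding(0.30, 0.10, 0.033))    # ~ (0.901, 0.967)
    print(withholding(0.50, 0.10, 0.0555))   # ~ (0.900, 0.944)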
I've seen suggestions that block withholding could be used as a way
to attack pools that gain >50% hashpower, but as far as I can see it's
only effective against decentralised, non-KYC pools, and more effective
against small pools than large ones, which seems more or less exactly
backwards from what we'd actually want...
Some previous discussion of block withholding and KYC is at [3] [4]
[5].
3. extra header nonce space
===========================
Given BIP320, you get 2**48 values to attempt per second of nTime,
which is about 281 TH/s, or enough workspace to satisfy a single S21 XP
at 270 TH/s. Maybe that's okay, but maybe it's not very much.
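That 2**48 works out as:

    space = 2 ** (16 + 32)    # BIP320's 16 version bits plus the 32-bit nNonce
    print(space)              # 281474976710656, ie ~281 TH per second of nTime
    print(space / 270e12)     # ~1.04 -- about one S21 XP at 270 TH/s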
Since the above would already be a (light-client visible) hard fork, it
could make sense to also provide extra nonce space that doesn't require
touching the coinbase transaction (since we're relying on miners not
having access to the contents of the tx merkle tree).
One approach would be to use the leading zeroes in hashPrevBlock as
extra nonce space (eg, [8]). That's particularly appealing since it
scales exactly with difficulty -- the higher the difficulty, the more
nonce space you need, but the more required zeroes you have. However
the idea above unfortunately reduces the available number of zero bits
in the previous block hash by that block's choice of nBitsShareShift,
which may take a relatively large value in order to reduce traffic with
the pool. So sadly I don't think that approach would really work.
Another approach that might work would be to add perhaps 20 bytes of
extra nonce to the header, and calculate the block hashes as:
normal hash -> sha256d(
nVersion, hashPrevBlock,
sha256d( merkleRoot, TagHash_BIPxxx(extraNonce) ),
nTime, nBits, nNonce
)
and
share hash -> sha256d(
nVersion, hashPrevBlock,
sha256d( sha256d(merkleRoot), TagHash_BIPxxx(extraNonce) ),
nTime, nBits, nNonce
)
That should still be compatible with existing mining hardware. Though it
would mean nodes are calculating 5x sha256d and 1x taghash to validate
a block header's PoW, rather than just 1x sha256d (now) or 3x sha256d
(above).
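As a rough python sketch (the tag string and field encodings here are
placeholders of mine, not a concrete proposal):

    import hashlib

    def sha256(b): return hashlib.sha256(b).digest()
    def sha256d(b): return sha256(sha256(b))

    def tagged_hash(tag: bytes, msg: bytes) -> bytes:
        # bip340-style tagged hash: SHA256( SHA256(tag) || SHA256(tag) || msg )
        t = sha256(tag)
        return sha256(t + t + msg)

    def header_hashes(nVersion, hashPrevBlock, merkleRoot,
                      nTime, nBits, nNonce, extraNonce):
        commit = tagged_hash(b"BIPxxx/extranonce", extraNonce)  # placeholder tag

        def serialize(merkle_field: bytes) -> bytes:
            return (nVersion.to_bytes(4, 'little') + hashPrevBlock
                    + merkle_field + nTime.to_bytes(4, 'little')
                    + nBits.to_bytes(4, 'little') + nNonce.to_bytes(4, 'little'))

        # 2x sha256d for the normal hash, 3x for the share hash, 1x taghash
        normal_hash = sha256d(serialize(sha256d(merkleRoot + commit)))
        share_hash  = sha256d(serialize(sha256d(sha256d(merkleRoot) + commit)))
        return normal_hash, share_hash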
This approach would also not require changes in how light clients verify
merkle proofs of transaction inclusion in blocks, I believe. (Using a
bip340-esque TagHash for the extraNonce instead of nothing or sha256d
hopefully prevents hiding fake transactions in that portion)
4. plausibility
===============
It's not clear to me how serious a problem block withholding is. It
seems like it would explain why we have few pools, why they're all
pretty centralised, and why major ones care about KYC, but there are
plenty of other explanations for those things. So even if this
was an easy fix, it's not clear to me how much sense it makes to think
about. And beyond that, it's a consensus change (ouch), a hard fork
(ouch, ouch) and one that requires every light client to upgrade (ouch,
ouch, ouch!). However, all of that is still just code, and none of those
things are impossible to achieve, if they're worth the effort. I would
expect a multiyear deployment timeline even once the code was written
and it was widely accepted as a good idea, though.
If this is a serious problem for the privacy and decentralisation of
mining, and a potential threat to bitcoin's censorship resistance,
it seems to me like it would be worth the effort.
5. conclusions?
===============
Anyway, I wanted to write my thoughts on this down somewhere they could
be critiqued. Particularly the idea that everyone building their own
blocks for public pools running stratumv2 doesn't actually make that much
sense, as far as I can see.
I think the share approach in section 2 and the extranonce approach in
section 3 are slightly better than previously proposed approaches I've seen,
so are worth having written down somewhere.
Cheers,
aj
[0]
https://bitcoil.co.il/pool_analysis.pdf
[1]
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012051.html
[2]
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-February/012443.html
[3]
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012060.html
[4]
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012111.html
[5]
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012069.html
[6]
https://bitcoinops.org/en/topics/pooled-mining/#block-withholding-attacks
[7]
https://ethereum.org/en/roadmap/pbs/
[8]
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-December/015386.html