OP_ZKP updates


Weiji Guo

Jul 22, 2024, 10:16:18 AM
to Bitcoin Development Mailing List

Greetings, Bitcoin developers. I am writing to update you about the OP_ZKP proposal I initiated last year. This time, it concerns which ZKP scheme to use in the underlying proving system. This email contains the following four parts:


1, background, mainly about the initial proposal. You may skip it if you already know about it.

2, high-level requirements for the ZKP scheme and the current priorities in selecting one, plus a brief explanation of why trusted setups and FRI-based schemes are precluded.

3, ideas and open issues regarding the Inner Product Argument.

4, what-ifs.


I should have studied all of this further before sending this email, but I also want to have something to talk about during Bitcoin Nashville. During the conference, you may find me at the K14 booth for face-to-face talks.


———Background———


For those who haven't heard about this idea, here is the link to the earlier email thread: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2023-April/021592.html


Before proposing any BIP, we must explore existing ZKP schemes and discuss how to use them with OP_ZKP within applications. That's why I took a detour to develop potential applications, to better understand how OP_ZKP could work in the real world. That work concluded about two months ago; since then, I have been working on OP_ZKP itself.


———High-level requirements———


1, Security. The security assumptions should be minimal. Ideally, only the ECDLP assumption is required, leading directly to the Inner Product Argument over secp256k1. However, an open issue still blocks IPA from working in OP_ZKP (details follow below). We might consider pairings in a transparent-setup setting, but no trusted setup.


2, Small block size consumption. The proof should be both succinct and concretely small. Being concretely small allows individual proofs, or small numbers of them, to be posted on-chain without incurring too much cost. Otherwise, they would have to wait to be verified in a batch, should the scheme allow that. Arguably, waiting for batching is not a good idea, as the transactions would have to be put on hold, and there is no guarantee more will come. That said, we might revisit this choice should we run out of candidate schemes.


FRI-based proofs are considered too big at 100- to 128-bit provable security: a few hundred kilobytes for a circuit of size 2^20. Besides, FRI does not allow batched verification (see the next requirement).


3, Batched verification. This is mandatory due to block size considerations. Should the situation allow, we could replace the proof data (as transaction witnesses) of thousands of OP_ZKP transactions with a single argument to save block space (and transaction costs).

It also saves verification time, even without the benefit of saving block space.
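To illustrate the principle, here is a toy in Python: a random linear combination lets one aggregate check stand in for many individual checks. (Real batched verification combines group equations, for instance merging many multi-scalar multiplications into one, rather than the bare field arithmetic below.)

```python
import random

P = 2**61 - 1   # a toy prime field, chosen arbitrarily for this sketch

# 1000 independent claims of the form a * b == c (mod P)
claims = [(a, b, a * b % P)
          for a, b in ((random.randrange(P), random.randrange(P))
                       for _ in range(1000))]

# One aggregate check: a random linear combination of all the residuals.
# If any single claim were false, this sum would be nonzero with
# overwhelming probability over the verifier's random coefficients.
rs = [random.randrange(1, P) for _ in claims]
combined = sum(r * (a * b - c) for r, (a, b, c) in zip(rs, claims)) % P
assert combined == 0
print("batch of", len(claims), "claims accepted with one check")
```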


4, Small verification key for deployment. It seems natural but is not the default case for some schemes.


5, Aggregated proving. This is optional as it happens off-chain. However, it effectively reduces the proof size and verification time requirements for some non-constant-size proof schemes (recall that we precluded trusted setups). Consider aggregating many sub-circuits together with a constant- or log-sized argument.

2^16 is a reasonable upper bound for a sub-circuit. Computing a block hash takes about 60k constraints (SHA256 applied twice to an 80-byte block header).
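For concreteness, the computation behind that estimate is Bitcoin's double-SHA256 header hash, shown below. The ~60k figure is consistent with roughly three SHA256 compression calls (two for the padded 80-byte header, one for re-hashing the 32-byte digest) at around 20k R1CS constraints each, though the exact count depends on the gadget used; treat it as the rough estimate it is.

```python
import hashlib

def block_hash(header: bytes) -> bytes:
    # Bitcoin's proof-of-work hash: SHA256 applied twice to the header.
    assert len(header) == 80, "Bitcoin block headers are exactly 80 bytes"
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()

# Dummy all-zero header, just to exercise the function (not a real block):
print(block_hash(bytes(80)).hex())
```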


But 2^16 is still too large for FRI-based proofs. Each FRI query should cost about 16^2 hashes, which is 8 kilobytes, and there are dozens of queries to run to meet the target security (preferably 128-bit, without further security conjectures). Interested readers may refer to https://a16zcrypto.com/posts/article/17-misconceptions-about-snarks/#section--11.
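The arithmetic behind those numbers, spelled out (a back-of-the-envelope estimate rather than an exact protocol accounting; the query count of 80 is purely illustrative):

```python
LOG_N = 16        # log2 of the circuit size (2^16)
HASH_BYTES = 32   # e.g. SHA256 digests in the Merkle trees

# Each query opens a Merkle path of depth ~16 in each of ~16 folding rounds.
hashes_per_query = LOG_N * LOG_N
bytes_per_query = hashes_per_query * HASH_BYTES
print(hashes_per_query, "hashes =", bytes_per_query, "bytes per query")  # 256, 8192

num_queries = 80  # "dozens" are needed for ~128-bit security without conjectures
print(num_queries * bytes_per_query // 1024, "KB of Merkle paths alone")  # 640 KB
```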


———Inner Product Argument———


Now, let's consider IPA-based solutions. IPA has a transparent setup, requires only the ECDLP assumption, can work over the secp256k1 curve, has a relatively small proof size, and is capable of both batched verification and aggregated proving. We can use the aggregated proving technique to address the issue of linear verification time. The remaining open issue with IPA is that the size of the verification key is linear in the circuit size.
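As a sketch of where these costs come from, here is the IPA folding recursion on bare field vectors. This is a toy: real IPA folds Pedersen-committed vectors and the prover sends two cross-term commitments per round, all of which is omitted here.

```python
# secp256k1 group order, used only as the toy field modulus here
P = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def fold(v, x, x_inv):
    # Halve the vector: combine the low and high halves with the challenge.
    h = len(v) // 2
    return [(x * lo + x_inv * hi) % P for lo, hi in zip(v[:h], v[h:])]

def ipa_round_count(a, basis):
    rounds = 0
    while len(a) > 1:
        x = rounds + 2                     # stand-in for a Fiat-Shamir challenge
        x_inv = pow(x, -1, P)
        a = fold(a, x, x_inv)              # witness folds with x
        basis = fold(basis, x_inv, x)      # commitment basis folds with x^-1
        rounds += 1
    return rounds

n = 2**16
print(ipa_round_count([1] * n, [1] * n))   # 16 rounds, i.e. O(log N) proof size;
# re-deriving the folded basis is the O(N) verifier work discussed below
```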


Let me expand on the last two issues. 


The linear verification time of IPA comes from the need to recompute the Pedersen commitment basis in each round of the reduction (this could be postponed, but it is still O(N)). We could adopt the aggregated proving technique to address this issue and effectively reduce the actual verification time. Suppose there are M sub-circuits, each with a size bound of N (assuming N = 2^16). We could have an argument that combines the M inner products into one. This argument is also built from IPA, thus having O(log(M)) size and O(M) verification time. Then the total proof size is O(log(M) + log(N)), and the verification time becomes O(M) + O(N) rather than O(M*N) (there are some extra costs per aggregated proof, but let's skip those for now). This is how we could use IPA to achieve succinctness and to deal with huge circuits (which we need, as we might recursively verify proofs of other schemes).
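Plugging in illustrative numbers (M = 1024 sub-circuits of size N = 2^16, both values being assumptions of this sketch) shows the scale of the savings:

```python
import math

M, N = 1024, 2**16
proof_terms = math.log2(M) + math.log2(N)          # O(log M + log N)
print(f"proof size on the order of {proof_terms:.0f} elements")       # ~26
print(f"verifier work O(M + N) = {M + N:,} vs O(M * N) = {M * N:,}")  # 66,560 vs 67,108,864
print(f"roughly a {(M * N) // (M + N)}x reduction")                   # ~1008x
```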


The linear verification key comes from the need for the verifier to use all the circuit constants. This is the case at least for BP, BP+, and even Halo, and it used to be acceptable, as the multi-scalar multiplication dominates the verification costs. However, it becomes an issue with aggregated proving, in terms of both deployment size and verification time. When M is large enough, the aggregated verification time spent computing with these constants is non-trivial, even compared to the MSM costs. The more significant issue is with circuit deployment: we would have to store all these parameters on-chain.


To expand further, let me reiterate the Sonic arithmetization process. It is R1CS-based with some pre-processing. Suppose there are N multiplication gates such that:

      A_i * B_i = C_i

for i in [0, N-1], where A_i, B_i, and C_i are field elements, and A, B, and C are circuit witnesses as N-element vectors.


There are also Q linear constraints to capture the addition gates, circuit wiring, and multiplication-by-constant constraints:

      WL*A + WR*B + WO*C = K

where WL, WR, and WO are Q*N matrices of field elements, and K is a Q-element vector of field elements. WL, WR, WO, and K are circuit constants.
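A minimal concrete instance of this arithmetization, with N = 1 multiplication gate and Q = 1 linear constraint, checking knowledge of x and y with x * x = y and x + y = 6 (so x = 2, y = 4):

```python
A, B, C = [2], [2], [4]            # witness vectors; the gate is x * x = y
WL, WR, WO = [[1]], [[0]], [[1]]   # Q x N constant matrices encoding x + y = 6
K = [6]                            # Q-element constant vector

# Multiplication gates: A_i * B_i == C_i for every i
assert all(a * b == c for a, b, c in zip(A, B, C))

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

# Linear constraints: WL*A + WR*B + WO*C == K, row by row
lhs = [l + r + o for l, r, o in zip(matvec(WL, A), matvec(WR, B), matvec(WO, C))]
assert lhs == K
print("constraints satisfied")
```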


Although WL, WR, and WO are vastly sparse, they are at least O(N) in size. The verifier has to compute on them during verification, after generating a challenge value in response to the prover's commitment to the witnesses. This O(N) size prevents us from deploying verification keys on-chain.


The natural idea is to commit to those constants somehow and offload the commitment opening to the prover. Although the verifier still pays O(N) time to verify, the deployment size becomes constant, and the opening size is only O(log(N)) instead of O(N), assuming an IPA opening. However, this no longer holds with aggregated proving, as there is no guarantee that the aggregated constants aWL, aWR, and aWO will be sparse. The complexity could become O(N^2).
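A toy illustration of this sparsity loss (the dimensions and challenge sampling are arbitrary): each W matrix below has a single nonzero per row, as circuit wiring typically does, yet a random linear combination of M of them is up to M times denser.

```python
import random

Q, N, M = 8, 8, 4                      # toy dimensions only

def sparse_w():
    # One nonzero entry per row, mimicking typical wiring constraints.
    m = [[0] * N for _ in range(Q)]
    for row in m:
        row[random.randrange(N)] = random.randrange(1, 100)
    return m

agg = [[0] * N for _ in range(Q)]      # e.g. aWL = sum over k of r_k * WL_k
for _ in range(M):
    r = random.randrange(1, 100)       # stand-in for a verifier challenge
    for arow, wrow in zip(agg, sparse_w()):
        for j in range(N):
            arow[j] += r * wrow[j]

nonzeros = sum(x != 0 for row in agg for x in row)
print(f"{nonzeros}/{Q * N} entries nonzero after aggregating {M} sparse matrices")
```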


The open issue here is to find a way to commit to WL, WR, and WO such that 1) the verification key can be tiny, 2) the proof size to open the commitment is acceptable, and 3) the commitments can be aggregated. If necessary, we could modify the arithmetization further, ideally without introducing new security assumptions. If we do introduce, for example, SXDH, then we can consider schemes like Dory, which has a transparent setup and O(log(N)) verifier time.


———What-ifs———


What if the above open issue could be resolved? The resulting proving system should work even on a Raspberry Pi 4 node. I ran some benchmarks on a Raspberry Pi 4 for some elliptic curve operations. The general conclusion is that it is about ten times slower than my MacBook Pro on a single thread. For a circuit of size 2^16, the verification time on the MBP is about 200~300 ms (inferred by benchmarking MSM of this size); therefore, it would be about 2~3 seconds on the RP4. Aggregation might double or triple the time cost.
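Spelling that estimate out (every input below is one of the rough benchmark figures above, not a measurement of a real OP_ZKP verifier):

```python
mbp_ms = (200, 300)     # MSM-dominated verification estimate, circuit size 2^16
slowdown = 10           # observed single-thread MacBook Pro to RPi 4 ratio
rp4_s = tuple(ms * slowdown / 1000 for ms in mbp_ms)
print(f"RPi 4: ~{rp4_s[0]:.0f}-{rp4_s[1]:.0f} s per proof; "
      f"~{2 * rp4_s[0]:.0f}-{3 * rp4_s[1]:.0f} s if aggregation doubles or triples it")
```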


Now, for block verification, counting on batched verification, it should be acceptable for an RP4 to spend around 10 seconds verifying ALL the OP_ZKP transactions in a new block. Luckily, we won't have too many existing blocks full of OP_ZKP transactions any time soon.


For transaction forwarding, a simple pool-then-batched verification technique could work for the RP4. As the name suggests, an RP4 node could simply pool incoming OP_ZKP transactions in memory while it is busy verifying others. When it is done, the node retrieves all the pooled OP_ZKP transactions and verifies them in a batch. This way, it adds only a few seconds of latency to forwarding any OP_ZKP transaction.
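A minimal sketch of that strategy; batch_verify below is a hypothetical stand-in for the real batched OP_ZKP check, and everything else is ordinary queueing:

```python
import queue

pool = queue.Queue()        # OP_ZKP transactions received while the node is busy

def batch_verify(txs) -> bool:
    # Hypothetical: one batched proof verification covering every pooled tx.
    return True

def drain_and_verify():
    batch = []
    while not pool.empty():
        batch.append(pool.get_nowait())
    if batch and batch_verify(batch):
        print(f"verified and forwarded {len(batch)} pooled transactions in one batch")

for i in range(3):          # transactions arriving during a busy period
    pool.put(f"opzkp-tx-{i}")
drain_and_verify()          # the node frees up, drains the pool, verifies once
```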


What if the open issue cannot be resolved? We might consider Dory. It is transparent, requires pairings, and has a logarithmic proof size, though concretely larger than Bulletproofs (torus-based optimization might help here by a factor of 3; see https://youtu.be/i-uap69_1uw?t=4044 and https://eprint.iacr.org/2007/429.pdf). It also has logarithmic verification time, and batched verification allows for saving block space and verification time. The community would need to accept the SXDH assumption.


That's it. Thanks for reading. Any comments are welcome.

And again, see you at the K14 booth in Nashville.


Regards,

Weiji


Weikeng Chen

Jul 22, 2024, 3:01:59 PM
to Bitcoin Development Mailing List
I need to point out that Dory requires pairing, and therefore it cannot work with secp256k1?
Please circle back.

Weiji Guo

Jul 22, 2024, 8:40:08 PM
to Bitcoin Development Mailing List
Yes, that's true. With Dory, we would have to work over some pairing-friendly curve, not secp256k1.


Weiji Guo

Aug 28, 2024, 11:35:55 AM
to Bitcoin Development Mailing List
I believe I have found the solution to the open issue mentioned in the earlier email: recursive verification. Instead of publishing each application circuit's verification key on-chain, we should have only one circuit that OP_ZKP will verify, namely a recursive verifier.
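A structural sketch of the idea (every type and function here is hypothetical; the actual design lives in the repository linked below): the script commits only to a hash of the application circuit's verification key, while the chain always verifies the same fixed recursive-verifier circuit.

```python
from dataclasses import dataclass

@dataclass
class Proof:
    app_vk_hash: bytes   # which application circuit was proven, as a public input
    body: bytes          # the recursive proof itself

RECURSIVE_VK = b"single-fixed-verification-key"   # deployed once, for all apps

def verify(vk: bytes, proof: Proof) -> bool:
    # Placeholder for the actual recursive SNARK verification.
    return True

def op_zkp_verify(proof: Proof, expected_app_vk_hash: bytes) -> bool:
    # The on-chain footprint is constant no matter which application
    # circuit was proven: a hash comparison plus one fixed verification.
    if proof.app_vk_hash != expected_app_vk_hash:
        return False
    return verify(RECURSIVE_VK, proof)

print(op_zkp_verify(Proof(app_vk_hash=b"\x01" * 32, body=b"..."), b"\x01" * 32))
```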

Interested readers are welcome to visit the GitHub org dedicated to OP_ZKP: https://github.com/opzkp

So far I have only put up the high-level ideas here: https://github.com/opzkp/tea-horse. There is nothing else yet, but we will add material as we move on.

Regards,
Weiji
