Dear PQC,
I concur that independence of intermediate results is an important consideration, and in Round5 it may be a serious problem. I have discussed the issue with the Round5 team, and we have concluded the following.
The usual errors in an LWE/LWR scheme are of the form
s(As'+e') - s'(As+e) = se' - s'e
where in this case s,s' are the sparse ternary secrets and e,e' are the rounding errors. The calculation is done modulo the cyclotomic polynomial \Phi_{n+1}(x).
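As a sanity check, the cancellation above (the sAs' and s'As terms are equal because the ring is commutative) can be verified numerically. Here is a minimal sketch in plain Python, with cyclic convolution standing in for the ring multiplication and toy coefficients rather than real parameters:

```python
def mul_cyclic(a, b, N):
    # Multiplication mod x^N - 1 (cyclic convolution).
    out = [0] * N
    for i in range(N):
        for j in range(N):
            out[(i + j) % N] += a[i] * b[j]
    return out

def add(a, b): return [x + y for x, y in zip(a, b)]
def sub(a, b): return [x - y for x, y in zip(a, b)]

# Toy-sized random-looking polynomials (illustrative values only).
N  = 8
A  = [3, -1, 4, 1, -5, 9, -2, 6]
s  = [1, 0, -1, 0, 1, 0, 0, -1]
sp = [0, 1, 0, -1, 0, 0, 1, -1]   # s'
e  = [1, -1, 0, 2, 0, -1, 1, 0]
ep = [0, 2, -1, 0, 1, 0, -1, 1]   # e'

# s(As' + e') - s'(As + e) should equal se' - s'e exactly.
lhs = sub(mul_cyclic(s, add(mul_cyclic(A, sp, N), ep), N),
          mul_cyclic(sp, add(mul_cyclic(A, s, N), e), N))
rhs = sub(mul_cyclic(s, ep, N), mul_cyclic(sp, e, N))
assert lhs == rhs
```

The identity holds in any commutative ring, so the choice of modulus polynomial doesn't matter for this step.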
If we were to instead calculate this in the NTRU ring — i.e. mod the polynomial x^{n+1} - 1 — each coefficient would be the sum of h rounding terms times ±1, where h is the fixed Hamming weight of the secret. In that case, while there might still be correlations between the magnitudes of the terms, they would probably be somewhat close to i.i.d.
To reduce the resulting value mod \Phi_{n+1}(x), we take the n'th coefficient and subtract it from the others. This means that if the n'th coefficient is large, the error probability increases greatly in every position at once. That can cause many simultaneous errors, overwhelming the error-correcting code.
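Concretely, the reduction step looks like this (plain Python sketch; the function name and toy coefficients are mine, not Round5's):

```python
def reduce_mod_phi(c):
    """Reduce a length-(n+1) coefficient vector, representing a
    polynomial mod x^(n+1) - 1, modulo Phi_{n+1}(x) = 1 + x + ... + x^n.
    Since x^n = -(1 + x + ... + x^(n-1)) mod Phi, this amounts to
    subtracting the top coefficient from every other coefficient."""
    top = c[-1]
    return [ci - top for ci in c[:-1]]

# A large top coefficient shifts *every* output coefficient at once,
# which is exactly the shared-noise effect described above:
print(reduce_mod_phi([1, 2, 3, 10]))   # -> [-9, -8, -7]
```

One large rounding term in position n therefore pushes all n remaining coefficients toward the error threshold simultaneously, rather than affecting a single position.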
Concretely, we have estimated this probability in SAGE, and obtained 4-error probabilities around 2^-68, 2^-56, 2^-55 for R5ND_PKE{1,3,5} respectively. This is high enough that it may allow an attack, especially if the adversary iterates to find ciphertexts with larger than expected amounts of rounding noise. I have discussed this with the Round5 team, who have confirmed the calculations. I have also confirmed this by setting s=700 in R5ND_PKE3. The same script estimates 2^-11.56 for these parameters, and indeed tests on Markku's code found errors a 2^-12.38 fraction of the time. The discrepancy may be explained by xe3 sometimes correcting more than 3 errors. But if errors were independent, the same script would predict a failure rate below 2^-32.
****
The above issues don’t affect the original Round2, but they frustrate application of XEf or other error correction as in Round5.
At the PQC conference, I was working on a low-bandwidth KEM called “Glowstick”, which makes heavy use of error correction. After PQC, I discussed extending Glowstick to cyclotomics with Sauvik, Oscar and Ludo of the Round2 team. We developed a ring-switching trick which may be able to repair the error correction in Round5, but which currently lacks a security proof. Consider that with a balanced sparse ternary secret, as in Round2, the coefficients of the secret s sum to 0, i.e. s(1) = 0. In other words, s is a multiple of x-1.
[The same is true with the “balanced balls-in-bins” distribution, \sum_{i=0}^{h-1} (-1)^i x^{random_i mod n}, in development versions of Glowstick.]
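To illustrate the divisibility property (the helper name and parameter sizes below are illustrative stand-ins, not the Round5 constants):

```python
import random

def balanced_ternary_secret(n, h):
    """Sparse ternary secret of length n with h/2 coefficients equal
    to +1 and h/2 equal to -1 (h even), the rest zero — so the
    coefficients always sum to 0."""
    positions = random.sample(range(n), h)
    s = [0] * n
    for k, pos in enumerate(positions):
        s[pos] = 1 if k < h // 2 else -1
    return s

# A polynomial is a multiple of x - 1 exactly when it vanishes at
# x = 1, i.e. when its coefficients sum to zero.
s = balanced_ternary_secret(512, 128)   # toy sizes
assert sum(s) == 0
```

The balanced balls-in-bins distribution passes the same check, since its terms pair off as (+1, -1).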
The trick is that we still send (As+e mod \Phi_{n+1}(x)) and the same for As’+e’, just as in the current Round2 and Round5. But the approximate shared secret is s(As’+e’) mod the NTRU polynomial x^{n+1} - 1, instead of mod \Phi. This is well-defined, because mod x^{n+1} - 1:
As+e mod \Phi_{n+1}(x) = As + e + k\Phi_{n+1}(x) for some polynomial k, so
s'(As+e mod \Phi_{n+1}(x)) = s'(As+e) + ks'\Phi_{n+1}(x)
And indeed s'\Phi_{n+1}(x) = 0 mod x^{n+1}-1: since x^{n+1}-1 = (x-1)\Phi_{n+1}(x) and s' is a multiple of x-1, the product s'\Phi_{n+1}(x) is a multiple of x^{n+1}-1. So the public key is mod \Phi_{n+1}(x), but the approximate shared secret is mod x^{n+1} - 1. This may make a security proof from RLWE mod \Phi difficult, but the LPR10 data is more heavily rounded and only has \mu < n coefficients, so it seems difficult for the attacker to make use of the different ring. It may even be possible to turn this into a proof, but we haven’t managed to do it.
This is a trivial implementation change, since internally Round2 and Round5 compute mod x^{n+1} - 1 anyway: the change is simply to skip the final reduction mod \Phi.
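The well-definedness argument can also be checked numerically. The sketch below uses a toy dimension and helper names of my own, with (x^N - 1)/(x - 1) playing the role of \Phi — the identity only uses that s' is divisible by x-1, not any primality condition on N:

```python
def mul_cyclic(a, b, N):
    # Multiplication mod x^N - 1 (cyclic convolution).
    out = [0] * N
    for i in range(N):
        for j in range(N):
            out[(i + j) % N] += a[i] * b[j]
    return out

def reduce_mod_phi(c):
    # Reduce mod Phi(x) = 1 + x + ... + x^(N-1); keep length N
    # (the top coefficient becomes 0 after reduction).
    top = c[-1]
    return [ci - top for ci in c[:-1]] + [0]

N  = 8                                  # toy dimension
b  = [3, -1, 4, 1, -5, 9, -2, 6]        # stand-in for As + e
sp = [1, -1, 0, 1, 0, -1, 0, 0]         # s': coefficients sum to 0,
assert sum(sp) == 0                     # so s' is a multiple of x - 1

# Multiplying by s' mod x^N - 1 gives the same answer whether or not
# the other operand was first reduced mod Phi:
with_reduction    = mul_cyclic(sp, reduce_mod_phi(b), N)
without_reduction = mul_cyclic(sp, b, N)
assert with_reduction == without_reduction
```

The two results agree because the difference, b_n · s' · \Phi, is a cyclic convolution of s' with the all-ones vector, and every coefficient of that convolution is sum(s') = 0.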
If this fix is employed, it prevents both the noise amplification and the major correlation problem described above, so error correction will be effective again. It also results in much lower failure rates than the current draft of Round5, or alternatively higher noise and better security. We would still need to check for smaller correlations between failures, caused by, e.g., ciphertexts with larger than expected rounding noise.
Cheers,
— Mike Hamburg