Hi NIST PQC Forum!
This is bubble-over from an IETF thread I started last week.
Context: hash-then-sign schemes are good. For example, they allow you to pre-hash your potentially very large message and then send just the hash value to your cryptographic module to sign or verify. We like this pattern; it’s good for the bandwidth and latency of cryptographic modules. We notice that SPHINCS+, CRYSTALS-Dilithium, and FALCON all start with a keyed message digest – in the case of randomized SPHINCS+ and FALCON, that message digest is keyed with a random number; in the case of non-randomized SPHINCS+ and Dilithium, that message digest is keyed with values derived from the public key (for completeness: randomized SPHINCS+ seems to be the only one to do both).
A quick skim through the submission documents for the three schemes shows that the message randomization is intended as a protection against differential and fault attacks since the traces would not be repeatable between subsequent signatures even of the same message. Unless I missed something, I don’t see any other justification given for the use of keyed message digests (randomized or deterministic).
But it seems to me that keyed message digests, especially in the randomized version, also protect against yet-to-be-discovered collision attacks on the underlying hash function, because an attacker cannot pre-compute against an `r` chosen at signing time (i.e., the signature scheme’s security may not need to rely on the hash function being collision-resistant).
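For concreteness, the shared shape of these keyed digests can be sketched as follows. This is illustrative Python only – the field order, sizes, and hash choice here are my own, not any scheme's actual encoding:

```python
import hashlib
import os

def keyed_message_digest(pk_bytes: bytes, message: bytes,
                         randomized: bool = True) -> tuple[bytes, bytes]:
    """Sketch of the 'keyed digest' pattern; not any scheme's real construction.

    randomized=True models randomized SPHINCS+/FALCON (digest keyed with a
    fresh random r); randomized=False models the deterministic variants
    (digest keyed only with public-key material).
    """
    r = os.urandom(32) if randomized else b"\x00" * 32
    digest = hashlib.sha3_256(r + pk_bytes + message).digest()
    # r must travel with the signature so the verifier can recompute the digest
    return r, digest
```

Because `r` is not known until signing time, an attacker cannot pre-compute a colliding message pair offline, which is the collision-resilience point above.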
Question:
So what is the safe way to externally pre-hash messages for these schemes in order to achieve a hash-then-sign scheme? Is it OK to take m’ = SHA256(m) and then sign m’? If we care about the built-in collision resistance, then the answer is probably “No”. Is it safe to externalize the keyed message digest step of SPHINCS+, Dilithium, FALCON? In the non-randomized versions, where the keyed message digest relies only on values in the public key, I would think the answer is “Yes”. For randomized versions, that would mean having access to a cryptographic RNG value outside the cryptographic module boundary, which, at least for FIPS validation, is probably a “No”.
I’m eager to hear more on the design rationale for starting with a randomized or deterministic keyed message digest, and recommendations for a safe way to externalize pre-hashing with these schemes.
---
Mike Ounsworth
Software Security Architect, Entrust
If you’re looking for design justifications, in the case of Sphincs+, we prepend both data from the public key and a random value for two reasons: the random value means an attacker cannot mount an offline collision search against the message hash (the digest input is not known until signing time), and the public-key data means that attack effort cannot be amortized across many keys (a multi-target attack).
Of course, Sphincs+ is an intentionally conservative design; one could argue that the second attack doesn’t really apply to SHA256(m): even if the attacker has 2^64 public keys that all signed messages, the direct multicollision attack without this protection would still take an expected 2^192 time on a conventional computer, which is completely infeasible, and at least 2^96 time on a quantum one (and likely more; coming up with a more realistic lower bound is nontrivial), which I would argue is similarly infeasible.
On the other hand, if one were to use prehashing, I would argue that the prehash should be with a very conservative hash function (say, SHA-512 or SHA3-512); we are putting all our hybrid eggs in this one hashing basket, and so we should make sure this one basket is a good one.
--
You received this message because you are subscribed to the Google Groups "pqc-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to
pqc-forum+...@list.nist.gov.
To view this discussion on the web visit
https://groups.google.com/a/list.nist.gov/d/msgid/pqc-forum/CH0PR11MB57394C98AA026DB0649C3FBC9F4A9%40CH0PR11MB5739.namprd11.prod.outlook.com.
On the other hand, if one were to use prehashing, I would argue that the prehash should be with a very conservative hash function (say, SHA-512 or SHA3-512); we are putting all our hybrid eggs in this one hashing basket, and so we should make sure this one basket is a good one.
Do any of the currently standardized hash functions, such as SHA{2,3}-384 or SHA{2,3}-512 (or even SHA{2,3}-256) fail the “very conservative” criteria? Is there any reason to expect a “weak” hash function, any more than you’d expect a “weak” block cipher?
From: CFRG <cfrg-b...@irtf.org> On Behalf Of
Blumenthal, Uri - 0553 - MITLL
Sent: Monday, September 19, 2022 8:55 AM
To: Scott Fluhrer (sfluhrer) <sfluhrer=40cis...@dmarc.ietf.org>; Mike Ounsworth <Mike.Ou...@entrust.com>; pqc-forum <pqc-...@list.nist.gov>
Cc: p...@ietf.org; cf...@irtf.org
Subject: Re: [CFRG] Design rationale for keyed message digests in SPHINCS+, Dilithium, FALCON?
Actually, I did suggest SHA{2,3}-512, and so they fulfill what I had in mind by “very conservative”.
One could easily claim that SHA2-256 would be sufficient; however, one additional criterion that should be considered is cost – if SHA2-512 is no more costly than SHA2-256, my opinion is that SHA2-512 should be preferred (it’s overkill, but there’s nothing wrong with cheap overkill). I would claim that this is even more true in the hybrid scenario – the attacker can forge if they can break either the hash function OR both RSA and Dilithium.
In the scenario that’s immediately before us (signing with an HSM – that’s not the only relevant scenario, but would appear to be the most constrained), the obvious costs of prehashing are:
- The cost of performing the hash function on the full message by the main CPU
- The cost of transferring this hash to the HSM (where we transfer more for larger hash functions)
- The cost of having the HSM sign the message (which increases in size with larger hash functions)
As for the first one, I believe SHA-512 is actually more efficient than SHA-256 on 64-bit CPUs; on the other hand, if we consider smaller (32-bit) microcontrollers, this is not as true, but it’s not clear that smaller microcontrollers would be signing/verifying very large messages. With SHA3, however, increasing the hash size does result in a less efficient hash function.
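That relative-throughput claim is easy to spot-check. A rough measurement sketch (absolute numbers depend entirely on the CPU and the OpenSSL build behind hashlib, so treat the printed values as indicative only):

```python
import hashlib
import time

def mib_per_sec(alg: str, mib: int = 32) -> float:
    """Very rough single-run throughput estimate for one hash algorithm."""
    data = b"\x00" * (mib * 1024 * 1024)
    start = time.perf_counter()
    hashlib.new(alg, data).digest()
    return mib / (time.perf_counter() - start)

# On 64-bit CPUs SHA-512 often beats SHA-256 (larger internal word size);
# for SHA-3, the 512-bit variant has a smaller sponge rate, so it is slower.
for alg in ("sha256", "sha512", "sha3_256", "sha3_512"):
    print(f"{alg:9s} ~ {mib_per_sec(alg):6.0f} MiB/s")
```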
As for the second and third costs, I don’t believe that the additional 256 bits or so we get when moving from SHA{2,3}-256 to SHA{2,3}-512 would result in significantly higher costs (however, hearing from some HSM vendors would be good).
There are alternative approaches to this (e.g. randomizing the hash on the main CPU and including that randomization factor in the signature), however those go beyond the current hash-and-sign paradigm that’s in common use.
Scott,
Thank you! My point basically was that with any accepted standard hash function (currently, the SHA2 and SHA3 families) for the “pre-hash”, we’d be OK.
I have no problem with SHA2-512, but even if somebody pre-hashed with SHA2-256, it wouldn’t be a problem.
And an IETF-specified protocol can easily require that the hash function it uses is either collision-resistant or belongs to the “approved” set (like what you described).
TNX
--
V/R,
Uri
There are two ways to design a system. One is to make it so simple there are obviously no deficiencies.
The other is to make it so complex there are no obvious deficiencies.
- C. A. R. Hoare
Thank you all for the wholesome discussion!
Here is my attempt to summarize: we have a few fundamental options:
Option 0: Do not pre-hash; send the whole message to the cryptographic primitive.
Discussion: There will be at least some applications where this is not practical; for example, imagine signing a 25 MB email with a smartcard. Streaming the entire 25 MB to the smartcard sounds like you’d be sitting there waiting for a human-irritating amount of time. Validation of firmware images during secure boot is another case that comes to mind where you want to digest in place and not stream the firmware image over an internal bus.
Option 1: Simple pre-hash m’ = SHA256(m); sign(m’)
Discussion: Don’t, for various reasons – none of which are catastrophic to the algorithm security, but there are better ways.
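For precision, Option 1 is this pattern, shown only to pin down what is being discouraged (`sign` here stands for whatever raw signing primitive the module exposes; it is a placeholder, not a real API):

```python
import hashlib

def naive_prehash_sign(sign, message: bytes) -> bytes:
    # Option 1: a fixed, unkeyed pre-hash. The signature's security now
    # rests on the collision resistance of this one outer hash, bypassing
    # the scheme's built-in keyed/randomized message digest.
    m_prime = hashlib.sha256(message).digest()
    return sign(m_prime)
```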
Option 2: Externalize the keyed digest step of SPHINCS+, Dilithium, FALCON to the client.
Discussion: REALLY DON’T! This can be private-key-recovery-level catastrophic for FALCON. For Dilithium and non-randomized SPHINCS+ this might be cryptographically sound, but regardless, moving part of the algorithm outside the crypto module boundary is unlikely to ever pass a FIPS validation.
Option 3: Application-level envelopes
Discussion: If your application needs to send only a small amount of data to the crypto module, then your application needs to define a compressing envelope format and sign that. How fancy the envelope format needs to be is dictated by the security needs of the protocol – i.e., collision resistance, entropy, whether it contains a nonce, algorithm code footprint, performance, etc. The downside is that we’re not solving this problem centrally, but delegating the problem of doing this securely to each protocol design team.
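As one hypothetical instance of such an envelope (the format, field choices, and sizes are invented here purely for illustration; a real protocol would use COSE, CMS, or similar):

```python
import hashlib
import os

def make_signing_envelope(message: bytes, digest_alg: str = "sha3_512") -> bytes:
    """Hypothetical compressing envelope: the crypto module signs this small
    blob instead of the full message. Fields are length-prefixed so the
    encoding is unambiguous."""
    nonce = os.urandom(32)                              # freshness / entropy
    digest = hashlib.new(digest_alg, message).digest()  # compresses the message
    fields = [digest_alg.encode(), nonce, digest]       # alg id, nonce, digest
    return b"".join(len(f).to_bytes(4, "big") + f for f in fields)
```

The nonce gives the randomization property the thread discusses, and the algorithm identifier gives domain separation between digest choices; both live in the envelope, so the crypto module only ever sees a small input.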
This seems to be the winning option.
Have I understood and summarized correctly?
---
Mike Ounsworth
Software Security Architect, Entrust
From: 'Mike Ounsworth' via pqc-forum <pqc-...@list.nist.gov>
Sent: September 18, 2022 2:42 PM
To: pqc-forum <pqc-...@list.nist.gov>
Cc: p...@ietf.org; cf...@irtf.org
Subject: [EXTERNAL] [pqc-forum] Design rationale for keyed message digests in SPHINCS+, Dilithium, FALCON?
Is it ok to take m’ = SHA256(m) and then sign m’
My preference would be to consider three design constraints:
1. Domain separation
2. Exclusive ownership
3. Context binding
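One illustrative way to satisfy all three constraints at the digest level (my own sketch, not a proposal from the thread; field names and the length-prefix encoding are assumptions):

```python
import hashlib

def bound_digest(domain: bytes, public_key: bytes,
                 context: bytes, message: bytes) -> bytes:
    """Digest input covering all three constraints:
       domain     -> domain separation between distinct uses
       public_key -> exclusive ownership (binds signature to this key)
       context    -> context binding to the protocol/application."""
    def lp(b: bytes) -> bytes:
        # length-prefix each field so concatenation is unambiguous
        return len(b).to_bytes(8, "big") + b
    return hashlib.sha3_512(lp(domain) + lp(public_key) +
                            lp(context) + lp(message)).digest()
```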
My read:
From: CFRG <cfrg-b...@irtf.org> on behalf of Mike Ounsworth <Mike.Ounsworth=40entr...@dmarc.ietf.org>
Date: Monday, September 19, 2022 at 17:18
To: Mike Ounsworth <Mike.Ou...@entrust.com>, pqc-forum <pqc-...@list.nist.gov>
Cc: "p...@ietf.org" <p...@ietf.org>, CFRG <cf...@irtf.org>
If Falcon has an issue with bad entropy during signing, might I suggest a minor modification? I believe that the issue is that if Falcon gets the same randomized hash twice, and then uses different coins in the ffSampling to generate signatures, Bad Things happen.
If that is the case, the obvious approach (and Chris, please correct me if I missed something) would be to use an RFC 6979-style approach: take the randomized hash and some secret data (from the private key), and use that to generate the random coins for ffSampling – perhaps something as simple as SHAKE( private_key || hash ). That way, if you somehow do come up with the same randomized hash value because of an entropy failure, you’ll generate identical signatures, which is safe.
Now, this isn’t perfect – if someone manages to do a fault attack on the deterministic coin generation, that’d still leak the key, and that’d be hard to detect. That may be enough to kill the idea in the HSM/third-party hashing scenario. I believe it still makes sense if the same device does the entire Falcon operation (because it avoids a potential pitfall on poor entropy).
One might claim that the single-device scenario doesn’t really need it, because the same RBG generates random bits for both the randomized hash and ffSampling. On the other hand, we have seen cases (with RSA key generation) where entropy was bad in the first part of the process (selecting the prime ‘p’) and became better in the second part (selecting the prime ‘q’) – because it has happened in the past, I believe it happening in the future is not implausible.
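A minimal sketch of the SHAKE( private_key || hash ) idea above (parameter sizes are hypothetical, and a real specification would length-prefix or domain-separate the two inputs rather than plainly concatenating them):

```python
import hashlib

def derive_signing_coins(private_seed: bytes, randomized_hash: bytes,
                         n_bytes: int = 40) -> bytes:
    # RFC 6979-flavoured derandomization: the same (key, hash) pair always
    # yields the same coins, so a repeated hash under an entropy failure
    # reproduces the identical signature instead of leaking the key.
    return hashlib.shake_256(private_seed + randomized_hash).digest(n_bytes)
```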
From: pqc-...@list.nist.gov <pqc-...@list.nist.gov>
On Behalf Of Christopher J Peikert
Sent: Sunday, September 18, 2022 10:45 PM
To: Mike Ounsworth <Mike.Ou...@entrust.com>
Cc: pqc-forum <pqc-...@list.nist.gov>; p...@ietf.org; cf...@irtf.org
Outermost -> Innermost
byte[] SignData (byte[] data, PrivateKey key, EnvelopeID envelopeID, byte[]? attributes=null)
byte[] SignDigest (byte[] digest, AlgorithmID digestAlgorithm, PrivateKey key, byte[]? attributes=null)
----HSM boundary----
byte[] SignManifest (byte[] manifest, PrivateKey key)
byte[] SignNonce (byte[] nonce, byte[] manifest, PrivateKey key)
byte[] SignInner (byte[] signatureInput, PrivateKey key)
So the code using this approach is going to be (mostly) calling SignData to request an envelope in COSE, CMS/PKCS#7/OpenPGP etc format.
An HSM would only expose the SignDigest method for private keys held internally. The device itself would probably need to expose the SignNonce and SignInner APIs for conformance-testing purposes, but allowing those methods to be used on protected keys is obviously prohibited. Specifying the conformance-testing API in this fashion allows a test to be added to check that attempting to use a non-compliant method on a protected key is refused.
--Derek Atkins
Likewise, one might want to dive into HSM reliability, availability, and serviceability.
From: pqc-...@list.nist.gov <pqc-...@list.nist.gov>
On Behalf Of Derek Atkins
Sent: September 20, 2022 1:03 PM
To: sflu...@cisco.com; ph...@hallambaker.com
Cc: cpei...@alum.mit.edu; Mike.Ou...@entrust.com; pqc-...@list.nist.gov; p...@ietf.org; cf...@irtf.org
Subject: Re: [pqc-forum] Design rationale for keyed message digests in SPHINCS+, Dilithium, FALCON?
Phill,
On Tue, 2022-09-20 at 12:34 -0400, Phillip Hallam-Baker wrote:
[snip]
Just to make sure I'm not confused, is this a typo, I would have expected you to say "An HSM only exposes the SignManifest method". Am I missing something?
Hi,
On Wed, 2022-09-28 at 11:49 -0400, Phillip Hallam-Baker wrote:
An HSM only exposes the SignManifest method in FIPS mode. If you put it in FIPS Mode, it can never be put in test mode and can never give the debug information or provide non-FIPS functions.
We used to spray the Test HSMs yellow with TEST in black so nobody would ever be silly enough to create a trust chain to 'em.
You still didn't clarify. You originally said "An HSM would only expose the SignDigest method"... I was questioning whether this was a typo, as I would only expect an HSM to expose SignManifest.
On Wed, 28 Sep 2022 at 19:08, Phillip Hallam-Baker <ph...@hallambaker.com> wrote:
Looking at the CPU co-processor implementation, there is a subtle issue here in that we inevitably expose a side channel that would allow the co-processor to leak the secret key, because it chooses the nonce, so the algorithm is inherently non-deterministic.
I will give the whole algorithm here because I don't want GCHQ visiting me with a secrecy order.
Goal: NOBUS-compliant leakage of the secret key of a non-deterministic signature scheme from an HSM, with only twice the processing load.
The first step is to reduce the private key to a manageable size; I will use a deterministic key-generation approach from a 128-bit seed.
Let us imagine that the HSM creates and publishes signatures frequently, so we can expect to collect tens of thousands of examples. The approach I am going to take is trial and error. The signer chooses a random nonce and creates two signatures, then checks whether the first resulting signature matches a template f(seed, sig) => bool. If the first signature matches, it is returned; otherwise the second is.
What this means is that the probability that the returned signature matches the template is 0.75 instead of 0.5, which is more than sufficient to leak the value of seed over a few thousand signatures – not least because all you need to do is reduce the search space sufficiently to make an exhaustive search of the rest feasible.
The pattern could be quite simple: put the signature value through a digest and take the first byte. Use the low 7 bits to specify the bit position of the seed to be leaked, and return true if the top bit matches the corresponding bit of the seed.
For a bit more obfuscation, we could use a MAC with a secret key so that the basis for the statistical variations is hidden.
I guess I should just read Motti's book instead of reinventing it all myself.
OK, so this seems to make the argument that a signing scheme MUST be deterministic. Otherwise, we can never trust the HSMs not to be leaking their keys.
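The biased-selection channel described above is easy to simulate. In this toy model (all parameters mine), "signatures" are just random byte strings and the template is the simple digest-based one from the message:

```python
import hashlib
import random

SEED_BITS = 128

def template_match(seed: int, sig: bytes) -> bool:
    # f(seed, sig) -> bool: low 7 bits of the digest's first byte pick a bit
    # position of the seed; the top bit is the guessed value for that bit.
    b = hashlib.sha256(sig).digest()[0]
    idx, guess = b & 0x7F, b >> 7
    return ((seed >> idx) & 1) == guess

def malicious_sign(seed: int, rng: random.Random) -> bytes:
    # Subverted signer: make two attempts and prefer the one matching the
    # template, so the returned signature matches with probability 0.75.
    first = rng.randbytes(32)
    return first if template_match(seed, first) else rng.randbytes(32)

def recover_seed(signatures) -> int:
    # Attacker: majority vote per bit position over all observed signatures;
    # each vote is correct with probability 0.75.
    votes = [[0, 0] for _ in range(SEED_BITS)]
    for sig in signatures:
        b = hashlib.sha256(sig).digest()[0]
        votes[b & 0x7F][b >> 7] += 1
    return sum(1 << i for i, (zeros, ones) in enumerate(votes) if ones > zeros)

rng = random.Random(20220928)  # fixed for reproducibility
seed = rng.getrandbits(SEED_BITS)
sigs = [malicious_sign(seed, rng) for _ in range(30000)]
print(recover_seed(sigs) == seed)  # the full 128-bit seed leaks
```

With 30,000 signatures, each of the 128 bit positions gets roughly 234 votes at 75% accuracy, so the majority vote recovers every bit with overwhelming probability.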
There's a possible (but inefficient) way to address that: commit to a sequence of random values to be used in order, then use a zero-knowledge proof to demonstrate that you used the correct random value as input to the signature. It adds state to be managed, and it's definitely a lot slower, but it should work.
On Wed, Sep 28, 2022 at 11:08 AM Phillip Hallam-Baker <ph...@hallambaker.com> wrote:
OK, so this seems to make the argument that a signing scheme MUST be deterministic. Otherwise, we can never trust the HSMs not to be leaking their keys.
Furthermore, mandating a deterministic scheme doesn't necessarily close the covert channel if there's no way for a verifier to determine the signature was produced correctly under a prescribed deterministic method.