Dear Meltem and the rest of the team,
Thank you for this update and best wishes for the new year.
I would very much appreciate it if the final LWC report could include
a detailed discussion supporting the key decisions made, including the
different perspectives within the team and the role that external
input has played, as well as how that input was gathered (see the
final AES report from 2001, pages 13--16, as one example). Besides
aligning with the fundamental principles of transparency and openness
behind NISTIR 7977, such detailed supporting arguments are also very
valuable for posterity, including for those researching the dynamics
of security competitions.
On the topic of NISTIR 7977, there are several relevant points to make.
It specifically describes a NIST competition as having one winner. Of
course, both the PQC and LWC processes are classed as 'competition-like'.
However, such an approach to cryptographic standards development is not
described in NISTIR 7977. That document, from March 2016, notes that it
is to be reviewed every five years, yet I have not been able to find any
public information about such a review. Maybe the broader Cryptographic
Technology Group can clarify whether and how NISTIR 7977 has been
reviewed, and how this review relates to competition-like processes.
Although such considerations haven't been made explicit in NISTIR 7977,
I understand how performance gaps of two orders of magnitude on a given
measure support having multiple algorithms for PQC, even if that leads
to a decrease in compatibility and an increase in complexity. This is
less clear to me when it comes to symmetric cryptography. I haven't yet
found detailed arguments for the shift from the 'if any' stance to the
description of standardising 'one or more' algorithms. Are there any
measures for which there is an order of magnitude improvement, and/or
are there any other distinguishing features to support standardising one
or more new symmetric algorithms (for example, providing a flexible
interface/toolkit that allows designers of embedded systems to translate
protection goals into properties in an efficient and error-proof manner,
cf. Saltzer & Schroeder's psychological acceptability principle)?
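Purely to illustrate what I mean by such a toolkit, here is a rough,
hypothetical sketch (not an existing NIST or library interface; all
algorithm names below are placeholders): the designer states protection
goals, and the toolkit, not the designer, maps them to a standardised
primitive.

    from dataclasses import dataclass

    @dataclass
    class ProtectionGoals:
        confidentiality: bool
        integrity: bool
        nonce_may_repeat: bool   # e.g. no reliable counter/RNG on the device
        ram_budget_bytes: int    # working memory available for crypto

    def select_aead(goals: ProtectionGoals) -> str:
        """Map stated protection goals to a (placeholder) algorithm name."""
        if not (goals.confidentiality and goals.integrity):
            raise ValueError("this sketch only covers AEAD-style goals")
        if goals.nonce_may_repeat:
            return "misuse-resistant-AEAD"   # placeholder name
        if goals.ram_budget_bytes < 4 * 1024:
            return "lightweight-AEAD"        # placeholder name
        return "AES-GCM"

    # The designer never picks a cipher or mode directly, which is the
    # psychological-acceptability point made above.
    print(select_aead(ProtectionGoals(True, True, True, 2048)))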
More broadly speaking (and looking back to NISTIR 7977): whatever choice
NIST makes, are the considerations above made transparently and with
sufficient community involvement? This is not just a question about the
analysis of individual algorithms, but also about underlying fundamental
questions (such as whether, and if so how many, algorithms are
standardised besides AES and SHA2/SHA3; see paragraph 3 of page 15 of
the final AES report for what this discussion looked like in 2000).
Regards,
Arne
PS. For those who don't have the final AES report at hand: 'At the AES3
conference, there was significant discussion regarding the number of
algorithms that should be included in the AES. The vast majority of
attendees expressed their support -- both verbally and with a show of
hands -- for selecting only a single algorithm. There was some support
for selecting a backup algorithm, but there was no agreement as to how
that should be accomplished. The above sentiments were reflected in
written comments provided to NIST by many of the attendees after the
conference.'
On 2022-12-30 20:06, 'Sonmez Turan, Meltem (Fed)' via lwc-forum wrote:
> Dear subscribers of the NIST Lightweight Cryptography forum,
>
> We would like to inform you that NIST LWC team is planning to continue
> internal discussions for a few additional weeks before announcing the
> winner(s) of the standardization effort.
>
> Please feel free to share your recent results/observations via the
> forum or email the NIST team directly at
>
> lightweig...@nist.gov.