Slides on SCA Evaluation & Benchmarking of LWC Finalists


Jens-Peter Kaps

Nov 1, 2022, 11:37:07 AM
to lwc-...@list.nist.gov, lightweig...@nist.gov, Krzysztof M. Gaj
Hi,

We have posted a set of slides titled
  "SCA Evaluation & Benchmarking of Finalists in the NIST Lightweight Cryptography Standardization Process"
at
under
  Summary of Results.

These slides summarize the effort of multiple groups devoted to
 a) developing protected hardware and software implementations of LWC finalists
 b) evaluating protected implementations using leakage assessment methods and key-recovery attacks.

Detailed reports concerning these projects can be found on the web page mentioned above under
 * Protected Implementations
 * Labs
 * Assignments, Commitments, and Reports.

The second part of our slides summarizes our group's effort to benchmark unprotected and protected hardware implementations of LWC finalists using Xilinx Artix-7 FPGAs.

The obtained results are presented in the form of 
  * two-dimensional graphs - Throughput vs. Area, and
  * one-dimensional graphs depicting the ranking of candidates in terms of the
     - Ratio of Areas for protected and unprotected implementations
     - Ratio of Throughputs for unprotected and protected implementations
     - Throughput over Area ratio for processing plaintext, associated data (AD), and hashed messages
     - Number of random bits per byte of plaintext and AD, used by protected implementations of orders 1, 2, and 3.
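As an illustration, the derived metrics listed above can be computed from raw benchmark numbers roughly as in the following Python sketch. The field names, helper function, and example values are illustrative assumptions only and do not reflect the exact data format used for the slides.

```python
# Illustrative sketch only: how the derived ranking metrics could be computed
# from raw benchmark numbers. Field names and example values are assumptions,
# not the actual data format used for the slides.

def throughput_mbps(block_bits: int, cycles_per_block: int, freq_mhz: float) -> float:
    """Throughput [Mbit/s] = bits per block * clock frequency [MHz] / cycles per block."""
    return block_bits * freq_mhz / cycles_per_block


def derived_metrics(unprotected: dict, protected: dict) -> dict:
    """Ratio-style metrics of the kind used for the one-dimensional rankings."""
    return {
        # Ratio of Areas: protected / unprotected (area overhead of protection)
        "area_ratio": protected["area_luts"] / unprotected["area_luts"],
        # Ratio of Throughputs: unprotected / protected (slowdown due to protection)
        "throughput_ratio": unprotected["throughput_mbps"] / protected["throughput_mbps"],
        # Throughput over Area: efficiency of the protected design
        "tp_over_area": protected["throughput_mbps"] / protected["area_luts"],
        # Random bits consumed per byte of plaintext/AD by the masked design
        "rand_bits_per_byte": protected["random_bits"] / protected["bytes_processed"],
    }


if __name__ == "__main__":
    # Purely illustrative numbers.
    unprot = {"area_luts": 1500,
              "throughput_mbps": throughput_mbps(block_bits=128, cycles_per_block=4, freq_mhz=200.0)}
    prot = {"area_luts": 4200,
            "throughput_mbps": throughput_mbps(block_bits=128, cycles_per_block=12, freq_mhz=150.0),
            "random_bits": 64_000, "bytes_processed": 1_000}
    print(derived_metrics(unprot, prot))
```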

Some preliminary conclusions are drawn.

These slides were presented to NIST on October 27, 2022, and were slightly revised afterward to incorporate NIST's comments and a few new results.

We are looking forward to your suggestions for possible revisions and extensions to these slides. We would like to treat this set of slides as a living document until NIST makes its final decision regarding the future LWC standard(s).

The corresponding report will be posted on the same webpage and submitted to NIST and the Cryptology ePrint Archive within the next month.

Great thanks to all the teams who have contributed their time and resources to this tremendous effort!
Many thanks to NIST for providing valuable feedback and coordination!

We are looking forward to your comments!

Jens & Kris
LWC Team
Cryptographic Engineering Research Group (CERG)
George Mason University, USA


  

h...@arnepadmos.com

Nov 1, 2022, 7:23:39 PM
to Jens-Peter Kaps, lwc-...@list.nist.gov, lightweig...@nist.gov, Krzysztof M. Gaj
Dear Jens and Kris,

Thank you for sharing these detailed results.

I think it makes sense to share the (preliminary) conclusions from slide
73 here to save those who are specifically interested in the outcomes
from having to trawl through all of the slides:

```
Hardware benchmarking results demonstrate advantages of the following
candidates:
- Ascon and Xoodyak:
  - High speed
  - High throughput/area ratio
  - Moderate randomness requirements
  - Support for hashing
- TinyJAMBU:
  - Low area
  - High throughput/area ratio
  - Moderate randomness requirements
- ISAP:
  - Mode-level protection against arbitrary-level DPA (no masking)
  - High throughput/area ratio among protected designs
  - Support for hashing
```

As to suggestions on these slides, I'm missing a discussion of the
limitations of this study. For example, I don't see a discussion of
Romulus-T, and wonder if your results would allow some initial
indications as to its performance vis-a-vis ISAP. Also, I don't see a
mention of Romulus-H. While NISTIR 8369 states that ASCON,
PHOTON-Beetle, SPARKLE, and Xoodyak will be considered 'for applications
that also require hashing functionality', Romulus is missing from this
list, seemingly in error, even though it is still being considered by
NIST for hashing.

Regards,
Arne

Jens-Peter Kaps

Nov 25, 2022, 12:03:45 PM
to h...@arnepadmos.com, lwc-...@list.nist.gov, lightweig...@nist.gov, Krzysztof M. Gaj
Hi Arne,

Thanks a lot for your comments!
We apologize for our tardy response!

Regarding your specific questions:

Romulus-T and Romulus-H were introduced in the Romulus specification v1.3, released on May 17, 2021, i.e., only after the end of Round 2.
NISTIR 8369 is titled "Status Report on the Second Round of the NIST Lightweight Cryptography Standardization Process."
Thus, not listing Romulus among the Round 2 candidates supporting hash functionality appears to be correct.

The only implementations of Romulus submitted to our group for FPGA benchmarking were:

During Round 2:

 - Five unprotected implementations of Romulus-N that differed only in terms of hardware architectures. 
   These implementations had properties summarized in https://eprint.iacr.org/2020/1207:
      Table 1, Table 2, Figure 27, Figure 28, Appendix A, Figs. 106-111.

During Final Round:

 - Three protected implementations of Romulus-N with protection orders of 1, 2, and 3, respectively.
   These implementations were generated semi-automatically by the team from Ruhr-University Bochum, Germany, using the tool called AGEMA.
   The starting point for these implementations was an unprotected implementation of Romulus-N.

No hardware implementations supporting Romulus-T or Romulus-H have been submitted to our group for FPGA benchmarking.
No hardware implementations were included in the submission package of Romulus submitted to NIST at the beginning of the Final Round.
As far as we know, no new hardware implementations of Romulus were announced on the lwc-forum during the Final Round.

Regarding the general question on limitations of our study:

Our study has relied on the voluntary effort of multiple groups from all over the world.
We have benchmarked only implementations either submitted directly to our group or announced on the lwc-forum.
These implementations had to pass all verification tests for correct functionality.

Consequently, as far as we know, as of November 2022:
 A) Only 4 out of 10 finalists (Ascon, Elephant, TinyJAMBU, and Xoodyak) had manually developed SCA-protected implementations based on masking available for FPGA benchmarking.
    Additionally, the implementation of Elephant did not pass verification tests and thus was not included in benchmarking results.
 B) 9 out of 10 candidates (all except Grain-128AEAD) had protected implementations developed semi-automatically using AGEMA.
    Additionally, some of these implementations (e.g., GIFT-COFB, SPARKLE, and TinyJAMBU) were not based on the best available unprotected implementations.

Another challenge is the inherent difficulty of comparing implementations with mode-level protection against implementations protected by masking.

Any additional questions and comments are very welcome!

Jens & Kris 
LWC Team 
Cryptographic Engineering Research Group (CERG) 
George Mason University, USA 

h...@arnepadmos.com

Nov 27, 2022, 11:33:15 AM
to Jens-Peter Kaps, lwc-...@list.nist.gov, lightweig...@nist.gov, Krzysztof M. Gaj
Dear Jens and Kris,

Thank you for the clarifications as to the limitations of the
study/effort. Given that only a limited number of manually developed
masked implementations were submitted to be benchmarked -- while NIST
explicitly indicated in the second-round report that 'the selection will
consider the security of the candidates and performance on software and
hardware platforms, including the performance of protected
implementations' -- do you think the number of implementations,
especially those not by the designing teams, gives any indication of
the community's opinion on fitness for purpose? (Note that unlike the
in-person second and third AES conferences, the virtual fourth and fifth
LWC workshops didn't include any feedback sheets for voting. Also, while
the two 'Open Discussion' LWC sessions included many key strategic
questions, going by the recordings and Q&A logs on the website these
sessions didn't really turn into a detailed discussion.)

Regarding Romulus-H, the statements in the last two paragraphs of the
'Next Steps' section in the second-round report don't read like a list
of candidates that support hashing functionality but instead they appear
to be forward-looking statements. The clarification I received from NIST
in August 2021 was: 'Since the final package of Romulus now includes a
hash function, Romulus will be considered as well. The report was
actually written before we have received the finalists packages
(publication took place after a lengthy internal review).' (Sidenote:
these two paragraphs actually read like both Elephant and Grain-128AEAD
are already out of the running for software applications, but NIST
clarified that 'the purpose of the remark is to list the candidates that
demonstrated performance advantages over NIST standards in benchmarking
studies' and that 'the performance comparisons will continue until the
end of the process'.)

Regards,
Arne


Jens-Peter Kaps

Dec 1, 2022, 11:13:56 AM
to h...@arnepadmos.com, lwc-...@list.nist.gov, lightweig...@nist.gov, Krzysztof M. Gaj
Hi Arne,

> do you think the number of implementations, especially those not by
> the designing teams, gives any indication of the community's opinion
> on fitness for purpose?

This interpretation would probably go too far.

On the hardware side, the only implementations not submitted by the LWC
candidate teams were as follows:
 from Tsinghua University: Xoodyak,
 from GMU:  Elephant, TinyJAMBU, and Xoodyak.

We do not have any insight into the motivation of the team from Tsinghua
University.
However, in the case of the GMU team, the only reason for choosing these
three particular candidates was that the students available to work on
this project were closely familiar with the underlying unprotected
implementations, which they had developed themselves during Round 2.

On the software side, the only implementations not submitted by the LWC
candidate teams were as follows:
 from Tsinghua University: Xoodyak,
 from Alexandre Adomnicai: GIFT-COFB and Romulus.

At least in the case of Alexandre, the motivation appears to have been
primarily a good knowledge of the algorithms themselves and of the
underlying unprotected implementations.

On the other hand, the ability of a submission team to generate its own
protected implementations is certainly a plus, as it simplifies
benchmarking and side-channel evaluation by independent groups.
Consequently, a given algorithm can be analyzed more thoroughly.

Hope it helps!

Jens & Kris

