Summary of the VHDL/Verilog submissions for Round 3 CAESAR Candidates

Kris Gaj

Jul 22, 2017, 6:54:42 AM
to crypto-co...@googlegroups.com
Hello,

The summary of the VHDL/Verilog submissions for Round 3 CAESAR Candidates, 
sent to either crypto-co...@googlegroups.com or to my individual e-mail address
by July 15, 2017, has been posted at
(Please Refresh to make sure that you are accessing the most
recent versions of these pages!)
   
These summaries are also linked from the GMU One Stop Website devoted to the 
CAESAR Competition, available at
under
   VHDL/Verilog Code of CAESAR Candidates: Summary I 
   VHDL/Verilog Code of CAESAR Candidates: Summary II.
   
For algorithms that were not tweaked and whose VHDL/Verilog code was not updated,
the Round 2 implementations are listed in the Round 3 tables as well.
   
Please do not hesitate to let me know if I missed any new submission 
or any earlier existing code in the newly posted Round 3 summaries.

The benchmarking of all submitted implementations by the GMU Team has already started.

Below are brief statistics on the third-round VHDL/Verilog submissions:

The total number of Round 3 Candidates covered:   15 out of 15

Total number of submission packages (number of rows in Summary I):   27
including
 * new code developed from scratch or 
   revised code supporting a new variant-architecture pair: 14
     AEGIS (NTU), AEGIS (GMU), AES-OTR (NEC), Ascon (TU Graz), 
     CLOC-TWINE (CLOC-SILC Team), COLM (CINVESTAV-IPN), COLM (GMU),
      Deoxys (NTU), JAMBU-AES (GMU), Ketje x 2 (Ketje-Keyak Team)
      [one compliant with the CAESAR Hardware API & one with a simpler interface optimized for area], 
     SILC-LED/PRESENT (CLOC-SILC Team), SILC-AES (GMU), Tiaoxin (GMU)
     
 * making previously non-compliant implementation compliant: 1
     Deoxys (Axel & Marc)
 
 * Round 3 tweaks and additional optimizations: 6
     ACORN (NTU), AEZ (GMU), Ascon (GMU), Deoxys (GMU), MORUS (NTU), NORX (GMU)
 
 * Round 2 designs for algorithms without tweaks: 6
      CLOC-AES (CLOC-SILC Team), CLOC-AES (GMU), JAMBU-SIMON (NTU),
      Keyak (Ketje-Keyak Team), SILC-AES (CLOC-SILC Team), OCB (GMU)

Total number of variant-architecture pairs:  51

Total number of benchmarking runs required for four High-Performance FPGA families 
(Virtex-6, Virtex-7, Stratix IV, and Stratix V):  204
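
(For reference, the run count follows directly from the figures above:
51 variant-architecture pairs x 4 FPGA families = 204 benchmarking runs.)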

Number of submissions per group:
  CERG GMU:   11 (AEGIS, AEZ, Ascon, CLOC-AES, COLM, Deoxys, JAMBU-AES, NORX,
                  OCB, SILC-AES, Tiaoxin)
  CCRG NTU Singapore: 4 (ACORN, AEGIS, JAMBU-SIMON, MORUS)
  CLOC-SILC Team:     4 (CLOC-AES, CLOC-TWINE, SILC-AES, SILC-LED/PRESENT)
  Ketje-Keyak Team:   3 (Ketje x 2, Keyak)
  NEC Japan:          1 (AES-OTR)
  IAIK TU Graz:       1 (Ascon)
  CINVESTAV-IPN, Mexico:  1 (COLM)
  Axel & Marc:        1 (Deoxys)
  NTU Singapore:      1 (Deoxys)
  
Number of submissions per cipher:
  3: CLOC, Deoxys, SILC
  2: AEGIS, Ascon, COLM, JAMBU, Ketje
  1: all remaining candidates.
  
Overall, we believe that the coverage of the Use Case 2 (High-performance)
and the Use Case 3 (Defense in depth) algorithms
is adequate to rank them in terms of their efficiency in hardware.

Regarding the Use Case 1 (Lightweight), much more work is required, as 
 - only the implementation of ACORN is truly lightweight and at the same time
   compliant with the CAESAR Hardware API (see the interface sketch after this list),
 - implementations of the AES-based ciphers use AES with the full 128-bit datapath,
   rather than a lightweight 32-bit, 16-bit, or 8-bit datapath that would
   time-multiplex the round function over more clock cycles in exchange for
   a much smaller area,
 - only the implementation of Ascon is protected against side-channel attacks.
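
For reference, below is a minimal sketch of the AEAD top-level interface
defined by the CAESAR Hardware API, with PDI (public data), SDI (secret data),
and DO (data output) buses using valid/ready handshakes. The port names follow
the GMU specification; the generic names and the 32-bit default bus widths are
assumptions of this sketch, not requirements of the API.

   -- Minimal sketch of the AEAD top level per the CAESAR Hardware API.
   -- Generic names and 32-bit defaults are assumptions of this sketch.
   library ieee;
   use ieee.std_logic_1164.all;

   entity AEAD is
     generic (
       G_W  : integer := 32;  -- width of the public-data ports (PDI/DO)
       G_SW : integer := 32   -- width of the secret-data port (SDI)
     );
     port (
       clk       : in  std_logic;
       rst       : in  std_logic;
       -- Public Data Input: npub, AD, message/ciphertext, tag
       pdi_data  : in  std_logic_vector(G_W - 1 downto 0);
       pdi_valid : in  std_logic;
       pdi_ready : out std_logic;
       -- Secret Data Input: the key
       sdi_data  : in  std_logic_vector(G_SW - 1 downto 0);
       sdi_valid : in  std_logic;
       sdi_ready : out std_logic;
       -- Data Output: ciphertext/plaintext, tag, status
       do_data   : out std_logic_vector(G_W - 1 downto 0);
       do_valid  : out std_logic;
       do_ready  : in  std_logic
     );
   end entity AEAD;

A lightweight implementation would pair such a wrapper with a narrow internal
datapath; the external bus width does not have to match the width of the
cipher's own datapath.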
 
Any additional submissions regarding Use Case 1 are strongly encouraged,
and will be given priority during the benchmarking process.

Congratulations and thanks to all the hardware design teams for their
hard work, timely submissions, and great contributions to the comprehensive 
evaluation of the Round 3 CAESAR candidates!
Special thanks for making the majority of the implementations available 
in the public domain, which allows transparent evaluation and enables
further analysis and optimization by other teams.

The early results of benchmarking will be available in early August,
and the Round 3 GMU PowerPoint report around August 11.

Any additional comments, suggestions, and requests are very welcome!

Regards,

Kris
on behalf of the GMU Benchmarking Team