
Minimum time for bruteforce


Frederick Gotham

Nov 4, 2020, 4:54:23 PM
My current task is to ensure that an embedded Linux device fulfills all the criteria for a particular security standard.

One of the requirements in the standard is that a bruteforce attack takes a minimum amount of time which is specified as 2 days (i.e. 48 solid hours of trying keys).

For a 20-bit cryptographic key, there are approx. 1 million possible unique keys (2^20 = 1,048,576). In order to perform a full brute-force attack on 1 million keys in 2 days, it must take no longer than 173 milliseconds to try one key (172,800 seconds / 1,000,000 keys ≈ 173 ms). And so I must write code that takes at least 173 milliseconds to use a key to decode data. An easy way of doing it would be:

void DecodeString(char *const p, uint_fast32_t const key)
{
    Sleep_For_Milliseconds(173u);
    Decrypt(p,key);
}

This is of course a little wasteful if the "Decrypt" function takes 140 milliseconds, as the entire operation will then take 313 milliseconds.

In order to be less wasteful, I could code it something like the following:

void DecodeString(char *const p, uint_fast32_t const key)
{
    uint_fast64_t const before = GetTickCount();

    Decrypt(p,key);

    uint_fast64_t const duration = GetTickCount() - before;

    if ( duration < 173u )
        Sleep_For_Milliseconds( 173u - duration );
}

This technique relies on checking the time twice: once before the cryptographic function and once afterwards.

A monotonic clock, such as "GetTickCount" on Windows or clock_gettime(CLOCK_MONOTONIC) on Linux, would be preferable to 'ctime' just in case the system time somehow changes.
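
For example, here is a sketch of the same idea on Linux using clock_gettime(CLOCK_MONOTONIC) (Decrypt is still a placeholder; unlike the 32-bit GetTickCount, which wraps after about 49.7 days, CLOCK_MONOTONIC is unaffected by system time changes):

#include <stdint.h>
#include <time.h>

void Decrypt(char *p, uint_fast32_t key); /* placeholder */

/* Current monotonic time in milliseconds. */
static uint_fast64_t MonotonicMs(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint_fast64_t)ts.tv_sec * 1000u
         + (uint_fast64_t)(ts.tv_nsec / 1000000L);
}

void DecodeString(char *const p, uint_fast32_t const key)
{
    uint_fast64_t const before = MonotonicMs();

    Decrypt(p, key);

    uint_fast64_t const elapsed = MonotonicMs() - before;

    if ( elapsed < 173u )
    {
        /* Pad the total time up to the 173 ms minimum. */
        struct timespec pad = { 0, (long)(173u - elapsed) * 1000000L };
        nanosleep(&pad, NULL);
    }
}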

Anyone got any ideas on how best to code this?

Öö Tiib

Nov 4, 2020, 6:11:55 PM
A 20-bit key? That is like 6 decimal digits - perhaps some kind of
PIN code on a door (where 3 sequential failed tries raise an alarm),
not a cryptography key.

Chris M. Thomasson

Nov 4, 2020, 6:25:08 PM
Imagine if Eve was collecting encrypted data sent across the wire for
several months... Well, she can spend as much time as she wants to brute
force.

Chris M. Thomasson

Nov 4, 2020, 7:48:37 PM
On 11/4/2020 1:54 PM, Frederick Gotham wrote:
> My current task is to ensure that an embedded Linux device fulfills all the criteria for a particular security standard.
[...]
> Anyone got any ideas on how best to code this?
>

Wait, this is multiposted to several groups. Step one, stop doing that?

Bo Persson

Nov 5, 2020, 3:02:09 AM
On 2020-11-04 at 22:54, Frederick Gotham wrote:
> My current task is to ensure that an embedded Linux device fulfills all the criteria for a particular security standard.
>
> One of the requirements in the standard is that a bruteforce attack takes a minimum amount of time which is specified as 2 days (i.e. 48 solid hours of trying keys).
>
> For a 20-bit cryptographic key, there are approx. 1 million possible unique keys (2^20 = 1,048,576). In order to perform a full brute-force attack on 1 million keys in 2 days, it must take no longer than 173 milliseconds to try one key (172,800 seconds / 1,000,000 keys ≈ 173 ms). And so I must write code that takes at least 173 milliseconds to use a key to decode data. An easy way of doing it would be:
>
> void DecodeString(char *const p, uint_fast32_t const key)
> {
>     Sleep_For_Milliseconds(173u);
>     Decrypt(p,key);
> }
>

First, it is very unlikely that an attacker will have to try *every*
possible combination before finding the correct one. On average you will
only need to try half the keys.

One way to delay an attack is to add a variable delay, and make it
longer and longer for each incorrect entry.
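
Something like this sketch (the names are made up; the delay doubles
after every failed attempt, capped at 5 seconds, and resets on success):

#include <algorithm>
#include <chrono>
#include <thread>

class FailureDelay
{
    std::chrono::milliseconds delay_{0};

public:
    void OnFailure()
    {
        // Sleep for the current penalty, then double it (capped).
        std::this_thread::sleep_for(delay_);
        delay_ = (delay_.count() == 0)
                     ? std::chrono::milliseconds(100)
                     : std::min(delay_ * 2, std::chrono::milliseconds(5000));
    }

    void OnSuccess() { delay_ = std::chrono::milliseconds(0); }
};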

Juha Nieminen

Nov 5, 2020, 8:59:17 AM
Frederick Gotham <cauldwel...@gmail.com> wrote:
> My current task is to ensure that an embedded Linux device fulfills all the criteria for a particular security standard.

Cryptography and security are extraordinarily complex subjects. In general
one should avoid making ad-hoc solutions to security and cryptographic
problems, and instead rely on the best knowledge that current computing
science has on those subjects.

Anyway, unless the system actually needs to accept hundreds of login
attempts per second, one of the normal ways to do this is to have
incorrect keys introduce an ever-increasing delay before the next
attempt (up to some reasonable maximum delay, e.g. a few seconds). The
delay can reset after some time has passed without further attempts.

> A monotonic clock, such as "GetTickCount" on Windows or clock_gettime(CLOCK_MONOTONIC) on Linux, would be preferable to 'ctime' just in case the system time somehow changes.

You could use std::chrono::steady_clock, or look at how it's implemented.
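
For the pad-to-a-minimum-duration variant, steady_clock together with
sleep_until keeps the bookkeeping simple (a sketch; Decrypt is a
placeholder):

#include <chrono>
#include <cstdint>
#include <thread>

void Decrypt(char *p, std::uint_fast32_t key); // placeholder

void DecodeString(char *const p, std::uint_fast32_t const key)
{
    // Fix the deadline first; sleep_until returns immediately if
    // Decrypt already took longer than 173 ms.
    auto const deadline = std::chrono::steady_clock::now()
                        + std::chrono::milliseconds(173);
    Decrypt(p, key);
    std::this_thread::sleep_until(deadline);
}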

Also, I think std::clock() measures CPU time rather than the system
clock, so it could be used as well.

gcc has the extension __rdtsc() in its x86intrin.h header. However, this
returns CPU clock cycles (since startup), so to convert them to time
(or, rather, to know how many clock cycles correspond to the timespan
you are after) you would need to know the clock rate of the CPU. (There
might be some function to get that.)
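
Something along these lines (x86 only; converting the cycle count to
time is left out, since the TSC frequency has to come from elsewhere,
e.g. by calibrating against a known clock):

#include <cstdint>
#include <x86intrin.h>

// Returns how many TSC cycles the call took.
std::uint64_t CyclesFor(void (*fn)())
{
    std::uint64_t const start = __rdtsc();
    fn();
    return __rdtsc() - start;
}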

David Brown

Nov 5, 2020, 3:47:19 PM
On 04/11/2020 22:54, Frederick Gotham wrote:
> My current task is to ensure that an embedded Linux device fulfills
> all the criteria for a particular security standard.
>
> One of the requirements in the standard is that a bruteforce attack
> takes a minimum amount of time which is specified as 2 days (i.e. 48
> solid hours of trying keys).
>
> For a 20-bit cryptographic key, there are approx. 1 million possible
> unique keys (2^20 = 1,048,576). In order to perform a full
> brute-force attack on 1 million keys in 2 days, it must take no
> longer than 173 milliseconds to try one key (172,800 seconds /
> 1,000,000 keys ≈ 173 ms). And so I must write code that takes at
> least 173 milliseconds to use a key to decode data. An easy way of
> doing it would be:
>

20 bits is not a "cryptographic key" - it's a PIN code. It's suitable
for people pressing the numbers by hand, where it takes five seconds to
try a code and they get blisters and give up after 20 or 30 shots.

When the USA had their idiotic and counter-productive policy of treating
"strong" encryption software as "military grade subject to export
control", they counted 40 bits as the minimum for "strong" encryption.
And that was decades ago. No one who understands anything about
cryptography would bother with less than 128 bits these days.

Once you have picked a rational size for the cryptographic key, and used
an existing encryption algorithm and implementation (you are not
qualified to make the algorithm or code yourself), your minimum delay
problems disappear.

Juha Nieminen

Nov 6, 2020, 3:44:04 AM
David Brown <david...@hesbynett.no> wrote:
>> For a 20-bit cryptographic key, there are approx. 1 million possible
>> unique keys (2^20 = 1,048,576). In order to perform a full
>> brute-force attack on 1 million keys in 2 days, it must take no
>> longer than 173 milliseconds to try one key (172,800 seconds /
>> 1,000,000 keys ≈ 173 ms). And so I must write code that takes at
>> least 173 milliseconds to use a key to decode data. An easy way of
>> doing it would be:
>>
>
> 20 bits is not a "cryptographic key" - it's a PIN code. It's suitable
> for people pressing the numbers by hand, where it takes five seconds to
> try a code and they get blisters and give up after 20 or 30 shots.
>
> When the USA had their idiotic and counter-productive policy of treating
> "strong" encryption software as "military grade subject to export
> control", they counted 40 bits as the minimum for "strong" encryption.
> And that was decades ago. No one who understands anything about
> cryptography would bother with less than 128 bits these days.

He didn't actually say they are using 20-bit keys. It sounded more like
just a practical example.

The problem is that the application may be, e.g., about user-defined
passwords, which are usually relatively short.

David Brown

Nov 6, 2020, 8:39:11 AM
True. But if he had been using 40 bits, he'd have seen he needed a
delay of 173 nanoseconds (ignoring his misunderstanding about how many
attempts are actually needed), and then he would presumably have
realised that you don't need to add a software delay of 173 nanoseconds.

> The problem is that the application may be eg. about user-defined
> passwords, which are usually relatively short.
>

If you have lower-case and upper-case letters, digits and a couple of
punctuation marks, that's 64 values - 6 bits of "key" per user password
character. (In practice, 5 bits is more realistic as many characters
are rarely picked.) Even with short passwords it's not hard to beat 20
bits of variation - four such characters already give 4 x 6 = 24 bits.

However, when you use user-defined passwords, you don't use the password
directly for the cryptographic key - you add salt and pass it through a
hash. Run it through SHA256 and you have a 256 bit key. The source
does not have 256 bits of entropy, of course - even if the password
rules insist on 10 characters, an upper case, a lower case, a digit, and
a punctuation character, you might have, say, 30 bits of entropy. But
you can't work backwards from the SHA values - you have to work
forwards. That means you have to enumerate all the possible passwords
and generate the hashes and try them. Tables exist of commonly used
passwords, with algorithms to generate variations (like "Secret1!",
"Secret2!", etc.). But the job is much harder than simply trying all
values from 0 to 2^30.
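
As a rough illustration of the salt-and-hash step, using OpenSSL's
SHA256 for brevity (a real design would use a purpose-built,
deliberately slow KDF such as PBKDF2, scrypt or Argon2):

#include <openssl/sha.h>
#include <string>

// Sketch: derive a 256-bit key from a password and a per-device salt.
// Deliberately simplified - production code should use a slow KDF.
void DeriveKey(std::string const &password,
               std::string const &salt,
               unsigned char key[SHA256_DIGEST_LENGTH])
{
    std::string const salted = salt + password;
    SHA256(reinterpret_cast<unsigned char const *>(salted.data()),
           salted.size(), key);
}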
