
Hardness of DDH with short exponents


Heiko Stamer

Jan 26, 2005, 6:14:35 PM
Let QR_p be the group of quadratic residues modulo p (p is a safe prime
and q = (p - 1) / 2 is the group order). Further choose p \equiv 7
\pmod{8}, such that g = 2 is a generator and suppose that the Decisional
Diffie-Hellman assumption holds in QR_p [1].

We consider ElGamal-like operations (c_1 = g^r, c_2 = m h^r) [2] on
elements of this cyclic group, where h = g^{x_1} g^{x_2} ... g^{x_k} is
a shared public key (the secret keys x_i and r are randomly chosen from
Z_q). Note that in our case the messages m are very small group elements
(quadratic residues mod p), including 1.
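A minimal Python sketch of this masking operation (not the implementation from [2]; toy parameters chosen so that p = 23 is a safe prime with p ≡ 7 (mod 8) and g = 2 generates QR_p):

```python
# ElGamal-like masking over QR_p with a shared key h = g^{x_1} * ... * g^{x_k}.
# Toy parameters only; a real deployment would use a large safe prime.
import random

p = 23                  # safe prime, p ≡ 7 (mod 8)
q = (p - 1) // 2        # order of QR_p (here 11, prime)
g = 2                   # generator of QR_p

k = 3                                           # number of key holders
xs = [random.randrange(q) for _ in range(k)]    # secret key shares x_i
h = 1
for x in xs:
    h = (h * pow(g, x, p)) % p                  # shared public key

def encrypt(m, r):
    """ElGamal-like operation: (c1, c2) = (g^r, m * h^r) mod p."""
    return pow(g, r, p), (m * pow(h, r, p)) % p

def decrypt(c1, c2):
    """Joint decryption: divide c2 by c1^{x_1 + ... + x_k}."""
    s = pow(c1, sum(xs) % q, p)
    return (c2 * pow(s, q - 1, p)) % p          # s^{-1} = s^{q-1} since s^q = 1

m = 4                                           # a small quadratic residue (2^2)
c1, c2 = encrypt(m, random.randrange(q))
assert decrypt(c1, c2) == m
```

The question below is whether the exponent r drawn from Z_q can safely be replaced by a much shorter one.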

Now my questions: Can we reduce the size of r (similar to [3]) without
essentially weakening DDH? What is known about lattice-based attacks like
[4] in such cases? (We don't use "small" subgroups.) Are there any helpful
references (on these topics) that I've missed?

Thanks in advance,
Heiko Stamer.

Short References:
[1] Dan Boneh: The Decision Diffie-Hellman Problem, 1998.
[2] Adam Barnett, Nigel P. Smart: Mental Poker Revisited, 2003.
[3] P.C. van Oorschot, M. Wiener: On Diffie-Hellman Key Agreement
with Short Exponents, 1996.
[4] D. Boneh, A. Joux, and P. Nguyen: Why Textbook ElGamal and RSA
Encryption are Insecure, 2000.
[5] Rosario Gennaro, Hugo Krawczyk, and Tal Rabin: Secure hashed
Diffie-Hellman over non-DDH groups, 2004.

David Wagner

Jan 26, 2005, 10:17:41 PM
Heiko Stamer wrote:
>Now my questions: Can we reduce the size of r (similar to [3]) without
>essentially weakening DDH?

Interesting question! I don't know. All I can say is that I don't
know of any attacks.

Here is one (very strong) condition which I suppose would be sufficient
to guarantee that there is no weakening. Let f_k : QR_p -> QR_p be
defined by f_k(x) = x^k mod p. Let E be a random variable distributed
uniformly on [q]={0,..,q-1}. Let E' be a random variable distributed
uniformly on [2^160], the space of short exponents. Suppose it were
true that no efficient algorithm A can distinguish an oracle for f_E
from an oracle for f_E'. Then I think it would follow that DDH is safe
for use with exponents from [2^160] if DDH is safe with exponents from
[q]. This doesn't help very much, since we don't have any more reason to
believe that this condition holds than to simply believe that DDH is safe
with short exponents, so I apologize that I wasn't able to help very much.

Kristian Gjøsteen

Jan 28, 2005, 5:00:36 AM
David Wagner <daw-u...@taverner.cs.berkeley.edu> wrote:
>Heiko Stamer wrote:
>>Now my questions: Can we reduce the size of r (similar to [3]) without
>>essentially weakening DDH?

One way to reduce the exponent length is to use a subgroup that is
hard to distinguish from the rest of the group.

Example: If n=ab is an RSA modulus such that p=2n+1 is prime, you
have two subgroups (order a and order b) of Q_p that seem to be
hard to distinguish from the rest of the group Q_p.

There are other examples with other groups.

>Interesting question! I don't know. All I can say is that I don't
>know of any attacks.
>
>Here is one (very strong) condition which I suppose would be sufficient
>to guarantee that there is no weakening. Let f_k : QR_p -> QR_p be
>defined by f_k(x) = x^k mod p. Let E be a random variable distributed
>uniformly on [q]={0,..,q-1}. Let E' be a random variable distributed
>uniformly on [2^160], the space of short exponents. Suppose it were
>true that no efficient algorithm A can distinguish an oracle for f_E
>from an oracle for f_E'. Then I think it would follow that DDH is safe
>for use with exponents from [2^160] if DDH is safe with exponents from
>[q]. This doesn't help very much, since we don't have any more reason to
>believe that this condition holds than to simply believe that DDH is safe
>with short exponents, so I apologize that I wasn't able to help very much.

I believe this kind of reasoning makes sense, since you get two
essentially independent assumptions, instead of one complicated
assumption.

For what it's worth: I suggested this assumption at PKC'05 on Monday,
as a way to speed up one of my cryptosystems. There were no protests
from the listeners.

Here's the relevant quote from my slides (K is a group):

Small-exponent assumption

Let g be a generator for K. It is hard to distinguish the set
{ g^i | 1 <= i < 2^(2t) } from the set { g^i | 2^(2t) <= i < |K| }.

(The smaller set can obviously be distinguished with O(2^t) work.)
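The O(2^t) remark is the usual baby-step/giant-step time–space trade-off: if the exponent is known to be below 2^(2t), it can be recovered with about 2^t group operations, which decides membership in the smaller set. A toy sketch (my own illustration, in QR_23 with generator 2):

```python
# Baby-step/giant-step: recover i < bound with g^i ≡ y (mod p) using
# roughly sqrt(bound) group operations. Toy group: QR_23, g = 2, order 11.
def bsgs(g, y, bound, p):
    """Return i < bound with g^i ≡ y (mod p), or None if no such i is found."""
    m = int(bound ** 0.5) + 1
    baby = {pow(g, j, p): j for j in range(m)}      # baby steps: g^j -> j
    giant = pow(pow(g, p - 2, p), m, p)             # g^(-m) mod p via Fermat
    gamma = y
    for k in range(m + 1):
        if gamma in baby:
            return k * m + baby[gamma]              # i = k*m + j
        gamma = (gamma * giant) % p                 # strip off one giant step
    return None

# Exponent 9 lies below the bound 2^(2t) = 16 (t = 2), so ~2^t work finds it.
assert bsgs(2, pow(2, 9, 23), 16, 23) == 9
```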

David Wagner

Jan 28, 2005, 5:05:22 AM
Kristian Gjøsteen wrote:
>Here's the relevant quote from my slides (K is a group):
>
> Small-exponent assumption
>
> Let g be a generator for K. It is hard to distinguish the set
> { g^i | 1 <= i < 2^(2t) } from the set { g^i | 2^(2t) <= i < |K| }.
>
>(The smaller set can obviously be distinguished with O(2^t) work.)

Interesting. Can you say more about what you meant by distinguishing
set S from set T? Do you mean that a distinguisher is an algorithm A
that is given a description of a set as input, and it has to guess which
it was given? That doesn't sound like it will work, since your sets are
exponentially large. I'm guessing you meant that A can't distinguish an
element drawn uniformly at random from S from an element drawn uniformly
at random from T? This is a bit weaker than the assumption I gave, and I
don't think it suffices to prove that short-exponent DDH is as strong as
regular DDH (did I miss something?). In comparison, my assumption gives
A oracle access to the function x |-> x^i, i.e., ability to choose the
base (and to repeat multiple times, with the same exponent every time).

Kristian Gjøsteen

Jan 28, 2005, 5:39:28 AM
In article <ctd2p2$440$1...@agate.berkeley.edu>,

David Wagner <daw-u...@taverner.cs.berkeley.edu> wrote:
>Kristian Gjøsteen wrote:
>>Here's the relevant quote from my slides (K is a group):
>>
>> Small-exponent assumption
>>
>> Let g be a generator for K. It is hard to distinguish the set
>> { g^i | 1 <= i < 2^(2t) } from the set { g^i | 2^(2t) <= i < |K| }.
>>
>>(The smaller set can obviously be distinguished with O(2^t) work.)
>
>Interesting. Can you say more about what you meant by distinguishing
>set S from set T? Do you mean that a distinguisher is an algorithm A
>that is given a description of a set as input, and it has to guess which
>it was given? That doesn't sound like it will work, since your sets are
>exponentially large. I'm guessing you meant that A can't distinguish an
>element drawn uniformly at random from S from an element drawn uniformly
>at random from T?

That's right. It is shorter to talk about distinguishing sets than
distinguishing elements chosen uniformly at random from one set,
from elements chosen uniformly at random from another set.

It may not be entirely clear, unfortunately.

> This is a bit weaker than the assumption I gave, and I
>don't think it suffices to prove that short-exponent DDH is as strong as
>regular DDH (did I miss something?). In comparison, my assumption gives
>A oracle access to the function x |-> x^i, i.e., ability to choose the
>base (and to repeat multiple times, with the same exponent every time).

I think I can, unless I made some stupid mistake, or we are talking
about different things. Here's the argument:

First suppose the order of the group K is much larger than 2^(2t).

We want to distinguish the set

S = { (g,g^x) | 0 <= x < 2^(2t) }

from

L = { (g,g^x) | 2^(2t) <= x < |K| }.

DDH is essentially distinguishing the quadruples

D = { (g, g^x, g^y, g^xy) | 0 <= x,y < |K| }

from

R = { (g, g^x, g^y, g^z) | 0 <= x,y,z < |K| }.

Small-exponent-DDH would be distinguishing the quadruples

D' = { (g, g^x, g^y, g^xy) | 0 <= x < 2^(2t), 0 <= y < |K| }

from

R' = { (g, g^x, g^y, g^z) | 0 <= x < 2^(2t), 0 <= y,z < |K| }.

Suppose I have an algorithm A that can distinguish D' from R', but
not D from R. I claim I can use that algorithm to distinguish S
from L as follows:

1. I get (g,u).
2. I sample (y,z) uniformly from the set { 0,..., |K|-1}.
3. I flip a coin b.
4. If b=0, I compute the quadruple (g,u,g^y,u^y).
5. If b=1, I compute the quadruple (g,u,g^y,g^z).
6. I give the quadruple to A and it makes a guess b'.
7. If b=b', I conclude that (g,u) is in S, otherwise I
conclude that (g,u) is in L.

If (g,u) is in L, I get a quadruple that is either in D or R. In
this case, the adversary has no advantage and I guess correctly
with probability one-half.

If (g,u) is in S, I get a quadruple that is either in D' or R'. In
this case, the adversary has an advantage and I guess correctly
with probability greater than one-half.

So I can distinguish S.
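A toy simulation of this reduction (my own illustration, in QR_23): the adversary below brute-forces the discrete log, which is fine at this size, and only "succeeds" when the exponent is short, matching the premise that A distinguishes D' from R' but not D from R. The reduction then decides S versus L from A's hit rate.

```python
# Simulate the reduction: embed (g, u) into a quadruple and use a
# D'/R'-distinguisher to decide whether u has a short exponent.
import random

p, q, g = 23, 11, 2        # QR_p for the safe prime 23; group order 11
B = 4                      # "short" exponent bound (stand-in for 2^(2t))

def dlog(y):               # brute-force discrete log; toy group only
    t = 1
    for i in range(q):
        if t == y:
            return i
        t = (t * g) % p

def adversary(c):          # guesses 0 ("DDH-shaped") or 1 ("random")
    g_, gx, gy, gz = c
    x = dlog(gx)
    if x >= B:             # long exponent: no advantage, pure coin flip
        return random.randrange(2)
    return 0 if gz == pow(gy, x, p) else 1

def reduction(u, trials=2000):
    """Guess 'S' (short exponent) or 'L' (long) for u = g^x."""
    hits = 0
    for _ in range(trials):
        y, z = random.randrange(q), random.randrange(q)
        b = random.randrange(2)                     # step 3: flip a coin
        c = (g, u, pow(g, y, p),                    # steps 4/5: build quadruple
             pow(u, y, p) if b == 0 else pow(g, z, p))
        if adversary(c) == b:                       # steps 6/7: compare guesses
            hits += 1
    return 'S' if hits / trials > 0.6 else 'L'

assert reduction(pow(g, 2, p)) == 'S'   # short exponent: advantage shows
assert reduction(pow(g, 9, p)) == 'L'   # long exponent: coin-flip statistics
```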

David Wagner

Jan 28, 2005, 6:01:28 AM
Kristian Gjøsteen wrote:
>Small-exponent-DDH would be distinguishing the quadruples
> D' = { (g, g^x, g^y, g^xy) | 0 <= x < 2^(2t), 0 <= y < |K| }
>from
> R' = { (g, g^x, g^y, g^z) | 0 <= x < 2^(2t), 0 <= y,z < |K| }.

I would have thought small-exponent-DDH would be distinguishing
D' = { (g, g^x, g^y, g^xy) | 0 <= x,y < 2^(2t) }
from
R' = { (g, g^x, g^y, g^z) | 0 <= x,y < 2^(2t), 0 <= z < |K| }.
I think this is what is needed for semantic security (IND-CPA) if
you want to use El Gamal with short exponents, for instance. Or did
I go wrong somewhere?

Kristian Gjøsteen

Jan 28, 2005, 6:26:30 AM

You could use short exponents for encryption (where you do two
exponentiations), and long exponents for decryption (where you only
do one exponentiation). In my specific case, for various reasons it
makes no sense to use short exponents for decryption.

But anyway: Couldn't you use my argument twice? First you map the
tuple (g,u) into either your D'/R' or my D'/R'. If the advantage
is different you get a distinguisher. If not, continue with my first
argument to get a distinguisher.

In general, if you have three sets S1 inside S2 inside S3, and you
have an algorithm that distinguishes S1 from S3-S1, then you either
have an algorithm that distinguishes S1 from S2-S1, or an algorithm
that distinguishes S2 from S3-S2 (or both).

Julia Faflek

Jan 28, 2005, 8:12:34 AM
Heiko Stamer wrote:
> Now my questions: Can we reduce the size of r (similar to [3]) without
> essentially weakening DDH? What is known about lattice-based attacks like
> [4] in such cases? (We don't use "small" subgroups.) Are there any helpful
> references (on these topics) that I've missed?

I refer you to the paper "Koshiba & Kurosawa: Short Exponent Diffie-Hellman
Problems" (PKC 2004, LNCS 2947, pp. 173-186) in which it is shown that
short-exponent-DDH is as hard as DDH and discrete logarithms with short
exponents.


Kristian Gjøsteen

Jan 28, 2005, 8:28:09 AM

This paper refers to exponents where the lower-order bits are
zero. I believe Heiko Stamer is asking about exponents where the
high-order bits are zero.

Julia Faflek

Jan 28, 2005, 9:53:20 AM

I don't understand the problem. Where is the difference when using a group
G of odd order? Then it is possible to shift the least significant bits
which are zero to the high-order bits by computing (g^r)^(2^-i mod |G|),
or not?
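A quick Python check of this shift in a toy odd-order group (QR_23, generator 2, |G| = 11): an exponent with i trailing zero bits becomes a short, high-order-zero exponent after raising to 2^{-i} mod |G|.

```python
# Exponents with i low-order zero bits are equivalent to short exponents
# when 2 is invertible mod |G| (i.e. |G| is odd).
p, n = 23, 11                  # G = QR_23 has odd order |G| = 11
g, i, s = 2, 3, 5              # r = s * 2^i has i = 3 trailing zero bits
r = s * 2**i                   # r = 40
inv = pow(2**i, -1, n)         # (2^i)^{-1} mod |G| (Python 3.8+ modular inverse)
assert pow(pow(g, r, p), inv, p) == pow(g, s, p)   # shifted down to g^s
```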


asti...@okiok.com

Jan 28, 2005, 11:01:07 AM

David Wagner wrote:

I think you are right.

--Anton

David Wagner

Jan 29, 2005, 1:05:07 AM
Kristian Gjøsteen wrote:
>But anyway: Couldn't you use my argument twice? First you map the
>tuple (g,u) into either your D'/R' or my D'/R'. If the advantage
>is different you get a distinguisher. If not, continue with my first
>argument to get a distinguisher.

Hmm. I'm feeling dumb. I don't disbelieve you, but I can't immediately
see how to do it. Can you show me? Maybe if you show me how to map
(g,u) into my D'/R', I'll be able to see how to continue from there...

David Wagner

Jan 29, 2005, 1:20:52 AM
David Wagner wrote:

>Kristian Gjøsteen wrote:
>>But anyway: Couldn't you use my argument twice? First you map the
>>tuple (g,u) into either your D'/R' or my D'/R'. If the advantage
>>is different you get a distinguisher. If not, continue with my first
>>argument to get a distinguisher.
>
>Can you show me?

Never mind! I just read Koshiba and Kurosawa's PKC 2004 paper, and now
I see how to do it. Their paper is very clear and a pleasure to read,
and it goes much like you suggested. And, incidentally, they manage to
start from a (seemingly weaker and) more plausible assumption: that the
short-exponent discrete log problem is hard. Thank you to Julia Faflek
for pointing me to that paper.

David Wagner

Jan 29, 2005, 1:17:18 AM
Julia Faflek wrote:
>I don't understand the problem. Where is the difference when using a group
>G of odd order? Then it is possible to shift the least significant bits
>which are zero to the high-order bits by computing (g^r)^(2^-i mod |G|),
>or not?

I agree. Sounds equivalent if the group order |G| is known and odd.

Kristian Gjøsteen

Jan 30, 2005, 10:50:14 AM
Julia Faflek <faf...@upb.de> wrote:
>I don't understand the problem. Where is the difference when using a group
>G of odd order? Then it is possible to shift the least significant bits
>which are zero to the high-order bits by computing (g^r)^(2^-i mod |G|),
>or not?

Yes, that is true. The short-exponent-dlog is the same if you
consider the low-order or high-order bits, when you can divide by
2. The same seems to be true for the indistinguishability result.

Sorry.

Kristian Gjøsteen

Jan 31, 2005, 6:00:50 AM
David Wagner <daw-u...@taverner.cs.berkeley.edu> wrote:
>And, incidentally, they manage to
>start from a (seemingly weaker and) more plausible assumption: that the
>short-exponent discrete log problem is hard.

Yes, they show that a distinguisher for the short-exponent-subset
will give a solver for the short-exponent discrete log problem.

But I could not find any clear statement of how strong the reduction
from the indistinguishability assumption to the DLSE is. There are
other similar results that aren't very strong.

So to get short exponents from the weaker assumption, you may have
to increase your field size, but I don't know by how much. Switching
to elliptic curves may be a better option.

(In my case, I cannot divide by 2, so the results don't apply,
unfortunately.)

David Wagner

Jan 31, 2005, 6:20:51 AM
Kristian Gjøsteen wrote:
>But I could not find any clear statement of how strong the reduction
>from the indistinguishability assumption to the DLSE is. There are
>other similar results that aren't very strong.

Looked to me like you lose a factor of n or so in the advantage,
where n is the bitlength of the group order. (Their proof has n
hybrids, where in each hybrid they invoke indistinguishability.)

Kristian Gjøsteen

Jan 31, 2005, 7:18:35 AM

Yes, that's the first loss.

In the argument sketch for Thm 1, they need to strengthen their
distinguisher by randomizing in step 3, so that causes another loss.
(Typically, they dismiss it as "polynomial". The last time I did
something like this, the factor was the inverse of the distinguishing
advantage. It looks like that is what happens here.)

In step 4 they repeat the distinguisher with randomized input to
prevent the success probability from shrinking exponentially.

So their run-time and/or success probability increases, and I believe
it increases by more than a factor of n. I haven't got a good
estimate for how much more.

People should not just write "efficient" and "polynomial". They should
be precise, and not pretend that polynomial equals cheap.

David Wagner

Jan 31, 2005, 1:17:56 PM
Kristian Gjøsteen wrote:
>In the argument sketch for Thm 1, they need to strengthen their
>distinguisher by randomizing in step 3, so that causes another loss. [...]

>In step 4 they repeat the distinguisher with randomized input to
>prevent the success probability of shrinking exponentially.
>So their run-time and/or success probability increases, and I believe
>it increases by more than a factor of n.

Oh, you're absolutely right. I forgot about that part of the proof.

>People should not write "efficient" and "polynomial". They should
>be precise, and not pretend as if polynomial equals cheap.

Agreed. Concrete security is good stuff.

Heiko Stamer

Feb 1, 2005, 6:26:14 PM
First, I want to thank everybody who contributed to this thread.

Julia Faflek wrote:
> I refer you to the paper "Koshiba & Kurosawa: Short Exponent
> Diffie-Hellman Problems" (PKC 2004, LNCS 2947, pp. 173-186) in which it
> is shown that short-exponent-DDH is as hard as DDH and discrete
> logarithms with short exponents.

I've got it and read it roughly. This was exactly the reference I was
looking for. Thus one can compute the masking operations [1] more
efficiently. The additional DLSE assumption seems to be reasonable for
our application. However, we should "add a large safety margin" ;-)

By the way: Camenisch et al. [2] use a subgroup DDH assumption to
guarantee privacy in their "Direct Anonymous Attestation" protocol.
They use non-shifted but shortened exponents (split into two parts,
each of size e.g. 104 bit) and work in a prime-order (e.g. |q| = 208 bit)
subgroup of Z/pZ (|p| = 1632 bit), where p = rq + 1 and p, q are both
primes. Can their assumption be related to DLSE+DDH, too?

David Wagner wrote:

>>I don't understand the problem. Where is the difference when using a
>>group G of odd order? Then it is possible to shift the least significant
>>bits which are zero to the high-order bits by computing
>>(g^r)^(2^-i mod |G|), or not?
>
> I agree. Sounds equivalent if the group order |G| is known and odd.

For our group QR_p the order is known (odd prime). Although the generation
of safe primes is less efficient it does not really matter here, because
this part will be done very rarely.

[1] Adam Barnett, Nigel P. Smart: Mental Poker Revisited, 2003.
[2] E. Brickell, J. Camenisch, L. Chen: Direct Anonymous Attestation, 2004.

Kristian Gjøsteen

Feb 3, 2005, 4:48:17 AM
Heiko Stamer <sta...@gaos.org> wrote:
>I've got it and roughly read. This was exactly the reference I was
>looking for. Thus one can compute the masking operations [1] in a more
>efficient way. The additional DLSE assumption seems to be reasonable for
>our application. However, we should "add a large safety margin" ;-)

Any particular reason why you cannot use elliptic curve groups?

>By the way: Camenisch et al. [2] using a subgroup DDH assumption to
>guarantee privacy in their "Direct Anonymous Attestation" protocol.
>They use non-shifted but shortened exponents (split into two parts,
>each of size e.g. 104 bit) and work in a prime order (e.g. |q| 208 bit)
>subgroup of Z/pZ (|p| 1632 bit), where p = rq + 1 and p, q are both
>primes. Can their assumption be related to DLSE+DDH, too?

I briefly looked at a paper that appears to be [2], but I did not
see anything about shortened exponents. Perhaps you could provide
a more detailed reference, or explain in more detail?

Heiko Stamer

Feb 3, 2005, 2:35:19 PM
On Thu, 03 Feb 2005 09:48:17 +0000, Kristian Gjøsteen wrote:

>>efficient way. The additional DLSE assumption seems to be reasonable for
>>our application. However, we should "add a large safety margin" ;-)
>
> Any particular reason why you cannot use elliptic curve groups?

Sure, we can. But the reasons against it are the lack of time and the
increased complexity of a practical implementation. Of course, I will
consider this eventually (next year?).

>>By the way: Camenisch et al. [2] using a subgroup DDH assumption to
>>guarantee privacy in their "Direct Anonymous Attestation" protocol.
>>They use non-shifted but shortened exponents (split into two parts,
>>each of size e.g. 104 bit) and work in a prime order (e.g. |q| 208 bit)
>>subgroup of Z/pZ (|p| 1632 bit), where p = rq + 1 and p, q are both
>>primes. Can their assumption be related to DLSE+DDH, too?
>
> I briefly looked at a paper that appears to be [2], but I did not
> see anything about shortened exponents. Perhaps you could provide
> a more detailed reference, or explain in more detail?

This paragraph refers to the full paper [2], e.g. available at IACR
ePrint (2004/205): First, consider assumption 2 on page 6: a, b, c are
from [0,\rho-1]. The pseudonyms N_I, N_V are computed in the DAA-join
resp. DAA-sign protocol (see pages 9 and 12) by N := \xi^{f_0 + f_1
2^{\ell_f}} \bmod \Gamma with respect to a random base \xi \in <\gamma>
(\gamma is of order \rho). f_0, f_1 are of size \ell_f (104 bit) and \rho
is of size \ell_\rho (208 bit). Maybe I am wrong (are the missing bits
shifted into the base \xi?), but it seems to me that the exponent
{f_0 + f_1 2^{\ell_f}} is too short.

However, can we relate "[...] for sufficiently large values of \ell_Gamma
and \ell_\rho" in assumption 2 (DDH) to the DLSE?

Heiko Stamer

Feb 3, 2005, 5:21:53 PM
On Thu, 03 Feb 2005 20:35:19 +0100, Heiko Stamer wrote:

> (\gamma is of order \rho). f_0, f_1 are of size \ell_f (104 bit) and \rho
> is of size \ell_\rho (208 bit). Maybe I am wrong (Missing bits shifted to

> exponent {f_0 + f_1 2^{\ell_f}} is too short.

^^^^^^^^^^^^^^
Nope, here is the mistake. I didn't recognize the factor :-(

0 new messages