hashed third party caveat ids for discharge macaroon ids


roger peppe

Jun 6, 2016, 10:53:11 AM
to maca...@googlegroups.com
Dear Macaroon folks,

We've been thinking about a space optimisation for discharge macaroons.
In the paper, a discharge macaroon's id is exactly the same as the third
party caveat id in the original macaroon. This means that when we bundle
a macaroon together with its discharges, the space overhead of a large
third party caveat id is doubled.

This is a particular issue for us because we're using public key encryption
to encode our third party caveats.

How about we allow a discharge macaroon id to hold the hash of
the original id instead, as an optional space-saving alternative?

As far as we can see, the id in a discharge macaroon serves only to
identify the original third party caveat. The security of that
macaroon comes entirely from the fact that the signature
is derived from the encrypted root key.

This would mean that the macaroon checking code would need to
do a little bit more work (an extra SHA256 calculation, only strictly necessary
if no matching discharge macaroon is found).

Can anyone think of a down side to this? In particular, are there
any security ramifications?

cheers,
rog.

Tony Arcieri

Jun 6, 2016, 1:24:02 PM
to maca...@googlegroups.com
On Monday, June 6, 2016, roger peppe <rogp...@gmail.com> wrote:
> This is a particular issue for us because we're using public key encryption
> to encode our third party caveats.

What public key algorithm are you using? (RSA?) An X25519 key (public or private) is the same size as a SHA-256 digest (256 bits).
 
> Can anyone think of a down side to this? In particular, are there
> any security ramifications?

Using a hash for this purpose is fine. 


--
Tony Arcieri

roger peppe

Jun 7, 2016, 4:31:34 AM
to maca...@googlegroups.com
On 6 June 2016 at 18:24, Tony Arcieri <bas...@gmail.com> wrote:
> On Monday, June 6, 2016, roger peppe <rogp...@gmail.com> wrote:
>>
>> This is a particular issue for us because we're using public key
>> encryption
>> to encode our third party caveats.
>
>
> What public key algorithm are you using? (RSA?) An X25519 key (public or
> private) is the same size as a SHA-256 digest (256-bits)

Yes, we're using X25519 public key encryption. Although the key
is the same size as an SHA-256 digest, the size of the whole
third party caveat id is quite a bit bigger than that.

In particular, we include a version byte, the encoder's public key, a nonce
and the encrypted caveat payload itself which has an overhead of 16 bytes.

When including the whole caveat id in the discharge macaroon,
the cost of including a third party caveat is 154+2*n bytes (where
n is the size of the caveat payload). Using the hash
of the caveat id as the discharge macaroon's identifier, we can
reduce that cost to 109+n bytes, saving 45+n bytes per
third party caveat.
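
(For concreteness, a back-of-envelope sketch in Go of roughly where those
numbers come from. It assumes a 1-byte version, a 32-byte X25519 public key,
a 24-byte nonce and a 16-byte encryption overhead on the payload; the handful
of extra bytes in the totals above would be per-field serialization overhead
that isn't modelled here.)

    package main

    import "fmt"

    // Rough sizes for a public-key third party caveat id, assuming a 1-byte
    // version, a 32-byte X25519 public key, a 24-byte nonce and a 16-byte
    // encryption overhead on the payload.
    const (
        versionSize = 1
        pubKeySize  = 32
        nonceSize   = 24
        boxOverhead = 16
        digestSize  = 32 // SHA-256
    )

    // caveatIDSize returns the approximate size of a third party caveat id
    // holding an encrypted payload of n bytes.
    func caveatIDSize(n int) int {
        return versionSize + pubKeySize + nonceSize + boxOverhead + n
    }

    func main() {
        n := 64 // example payload size
        id := caveatIDSize(n)
        fmt.Println("caveat id bytes in target macaroon:", id)
        fmt.Println("total, discharge id = caveat id:   ", 2*id)          // id stored twice
        fmt.Println("total, discharge id = SHA-256(id): ", id+digestSize) // id stored once, plus a 32-byte hash
        fmt.Println("saving per third party caveat:     ", id-digestSize)
    }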

> Using a hash for this purpose is fine.

OK, that's good to have confirmed.

I'd like to propose that we adopt this as a standard across
macaroon implementations, so all implementations will
accept either a discharge macaroon with the literal
third party caveat id or its SHA256 hash.

Anyone think this is a bad idea?

cheers,
rog.

Jørn Wildt

Jun 7, 2016, 6:29:36 AM
to maca...@googlegroups.com
> I'd like to propose that we adopt this as a standard across
> macaroon implementations, so all implementations will
> accept either a discharge macaroon with the literal
> third party caveat id or its SHA256 hash.

> Anyone think this is a bad idea?

Well, it seems like a somewhat obvious idea, so I wonder if there was a reason for not doing it in the original implementation?

Besides that: how would a consumer be able to distinguish between a hash and the original caveat? 

/Jørn





Tony Arcieri

Jun 7, 2016, 6:50:39 AM
to maca...@googlegroups.com
On Tue, Jun 7, 2016 at 6:29 AM, Jørn Wildt <j...@fjeldgruppen.dk> wrote:
> I'd like to propose that we adopt this as a standard across
> macaroon implementations, so all implementations will
> accept either a discharge macaroon with the literal
> third party caveat id or its SHA256 hash.

[...]
> Besides that: how would a consumer be able to distinguish between a hash and the original caveat?

Yeah, there's a potential second preimage attack unless there's an explicit means for telling content apart from a digest, ala:

https://en.wikipedia.org/wiki/Merkle_tree#Second_preimage_attack

--
Tony Arcieri

roger peppe

Jun 7, 2016, 6:59:10 AM
to maca...@googlegroups.com
On 7 June 2016 at 11:29, Jørn Wildt <j...@fjeldgruppen.dk> wrote:
>> I'd like to propose that we adopt this as a standard across
>> macaroon implementations, so all implementations will
>> accept either a discharge macaroon with the literal
>> third party caveat id or its SHA256 hash.
>
>> Anyone think this is a bad idea?
>
> Well, it seems like a somewhat obvious idea, so I wonder if there were a
> reason for not doing so in the original implementation?

I guess it was described that way in the original paper and
so done similarly in the implementations.

> Besides that: how would a consumer be able to distinguish between a hash and
> the original caveat?

Isn't that trivial? If the id of the discharge macaroon is equal to
hash(third party caveat id), then it's the hash; otherwise it's the
original caveat id.

The implementation I had in mind would, when looking for a discharge for
a third party caveat, use a discharge macaroon if its id is the same as
the third party caveat id or the same as the hash of the third party
caveat id.
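
(A minimal sketch of that check in Go - this isn't the API of any existing
macaroon library, just the matching rule:)

    package main

    import (
        "bytes"
        "crypto/sha256"
        "fmt"
    )

    // dischargesCaveat reports whether a discharge macaroon with the given id
    // can be used to discharge the third party caveat with the given caveat id:
    // either the ids match literally, or the discharge id is the SHA-256 hash
    // of the caveat id.
    func dischargesCaveat(dischargeID, caveatID []byte) bool {
        if bytes.Equal(dischargeID, caveatID) {
            return true
        }
        sum := sha256.Sum256(caveatID)
        return bytes.Equal(dischargeID, sum[:])
    }

    func main() {
        cid := []byte("some large third party caveat id")
        sum := sha256.Sum256(cid)
        fmt.Println(dischargesCaveat(cid, cid))    // true: literal id
        fmt.Println(dischargesCaveat(sum[:], cid)) // true: hashed id
    }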

On 7 June 2016 at 11:50, Tony Arcieri <bas...@gmail.com> wrote:
>> Besides that: how would a consumer be able to distinguish between a hash
>> and the original caveat?
>
> Yeah, there's a potential second preimage attack unless there's an explicit
> means for telling content apart from a digest, ala:
>
> https://en.wikipedia.org/wiki/Merkle_tree#Second_preimage_attack

Isn't that only a problem if SHA256 isn't second-preimage resistant?
If it isn't, that's news to me and would make me very wary of SHA256
in general.

cheers,
rog.

Tony Arcieri

Jun 7, 2016, 8:03:56 AM
to maca...@googlegroups.com
On Tuesday, June 7, 2016, roger peppe <rogp...@gmail.com> wrote:
>> Yeah, there's a potential second preimage attack unless there's an explicit
>> means for telling content apart from a digest, ala:
>>
>> https://en.wikipedia.org/wiki/Merkle_tree#Second_preimage_attack

> Isn't that only a problem if SHA256 isn't second-preimage resistant?
> If it isn't, that's news to me and would make me very wary of SHA256
> in general.

No, the linked attack is against "naive" Merkle trees where an attacker is able to trick a Merkle tree implementation into storing a digest in the tree, then later confuses a verifier into thinking the digest is part of the interior structure of the tree, and therefore can add the preimage of the digest to the tree. This can occur regardless of the preimage resistance of the underlying hash function.

The solution is to add a flag so implementations can tell leaves from the interior structure.

In order to avoid similar problems here, I would suggest implementations take an explicit argument as to whether they should hash first before verifying.
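
(A hypothetical sketch of that suggestion, with made-up names rather than any
existing macaroon API - the caller says up front which form it expects, so the
verifier never has to guess:)

    package discharge

    import (
        "bytes"
        "crypto/sha256"
    )

    // MatchDischargeID compares a discharge macaroon id against a third party
    // caveat id. The caller states explicitly, via hashedID, whether the
    // discharge id is expected to be SHA-256(caveat id) or the literal caveat
    // id, rather than having the verifier accept either form implicitly.
    func MatchDischargeID(dischargeID, caveatID []byte, hashedID bool) bool {
        if hashedID {
            sum := sha256.Sum256(caveatID)
            return bytes.Equal(dischargeID, sum[:])
        }
        return bytes.Equal(dischargeID, caveatID)
    }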


--
Tony Arcieri

roger peppe

Jun 7, 2016, 10:01:35 AM
to maca...@googlegroups.com
On 7 June 2016 at 13:03, Tony Arcieri <bas...@gmail.com> wrote:
> On Tuesday, June 7, 2016, roger peppe <rogp...@gmail.com> wrote:
>>
>> > Yeah, there's a potential second preimage attack unless there's an
>> > explicit
>> > means for telling content apart from a digest, ala:
>> >
>> > https://en.wikipedia.org/wiki/Merkle_tree#Second_preimage_attack
>>
>> Isn't that only a problem if SHA256 isn't second-preimage resistant?
>> If it isn't, that's news to me and would make me very wary of SHA256
>> in general.
>
>
> No, the linked attack is against "naive" Merkle trees where an attacker is
> able to trick a Merkle tree implementation into storing a digest in the
> tree, then later confuses a verifier into thinking the digest is part of the
> interior structure of the tree, and therefore can add the preimage of the
> digest to the tree. This can occur regardless of the preimage resistance of
> the underlying hash function.

Pardon my ignorance, but could you explain how the above attack
applies to macaroon
verification, where every discharge macaroon is verified independently of its id
by virtue of its signature? AFAICS the worst that can happen is that
a discharger might cause a verification to fail, which is something within its
power anyway.

> The solution is to add a flag so implementations can tell leaves from the
> interior structure.
>
> In order to avoid similar problems here, I would suggest implementations
> take an explicit argument as to whether they should hash first before
> verifying.

This doesn't seem quite right to me, as it assumes that all dischargers will
choose the same approach. Another possibility might be to assume that all caveat
ids of length 32 bytes are hashes - if your original caveat id is 32 bytes, you
must hash it; otherwise you can choose whether to do so or not. Thus you could
know whether to hash or not by looking at the third party caveat id rather than
checking both options.
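
(One possible reading of that rule, sketched with made-up names: a 32-byte
discharge id is always treated as a hash, and a caveat id that is itself
exactly 32 bytes must therefore always be hashed, so the verifier never needs
to try both comparisons.)

    package main

    import (
        "bytes"
        "crypto/sha256"
        "fmt"
    )

    // dischargesCaveat applies the length-based convention: a discharge id of
    // exactly 32 bytes is always interpreted as SHA-256(caveat id); any other
    // length is compared literally. For this to be unambiguous, a caveat id
    // that is itself exactly 32 bytes must always be hashed by the discharger.
    func dischargesCaveat(dischargeID, caveatID []byte) bool {
        if len(dischargeID) == sha256.Size {
            sum := sha256.Sum256(caveatID)
            return bytes.Equal(dischargeID, sum[:])
        }
        return bytes.Equal(dischargeID, caveatID)
    }

    func main() {
        cid := []byte("a third party caveat id longer than 32 bytes")
        sum := sha256.Sum256(cid)
        fmt.Println(dischargesCaveat(sum[:], cid)) // true: 32-byte id treated as a hash
        fmt.Println(dischargesCaveat(cid, cid))    // true: literal comparison
    }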

But I'd like to understand why we have a potential problem before going down
either of these two routes.

cheers,
rog.

Tony Arcieri

Jun 7, 2016, 10:54:21 AM
to maca...@googlegroups.com
On Tuesday, June 7, 2016, roger peppe <rogp...@gmail.com> wrote:
> Pardon my ignorance, but could you explain how the above attack
> applies to macaroon
> verification, where every discharge macaroon is verified independently of its id
> by virtue of its signature? AFAICS the worst that can happen is that
> a discharger might cause a verification to fail, which is something within its
> power anyway.

It's not directly applicable, but just an example of the class of problems that can crop up. Offhand I can't see this leading to any immediate attack.


--
Tony Arcieri

Robert Escriva

Jun 7, 2016, 3:02:59 PM
to maca...@googlegroups.com
This is why I've always suggested the approach of extra round trips to
the server instead of using public keys. It avoids the bloat.

Off the top of my head, I would raise a concern about performance:

In the current libmacaroons, the verifier does not short circuit on
third-party macaroon traversal, in order to mitigate potential timing
attacks stemming from presenting macaroons in a particular order with
many fake macaroons. If you add the SHA256 calculation, it would double
the time taken for each verification, or else open up a potential timing
attack that I took care to avoid.

Otherwise, I don't see any problem with this so long as the hash
function is resistant to malicious constructions (sha256), but I'd feel
more comfortable hearing that from people more qualified than myself.

-Robert

Robert Escriva

Jun 7, 2016, 3:02:59 PM
to maca...@googlegroups.com
Attacks like this make me wary of adding an implicit conversion from
unhashed ids to hashed ids or vice-versa.

I think that the same benefits can be achieved through other means. The
service that adds a caveat like Rog. described adds the hash as the
caveat id of the third party encrypted package and provides said package
out-of-band. The discharging service then takes the encrypted package
and generates a macaroon with the hash as the macaroon's id. This
scheme works with every current verifier, has the same efficiency gains,
and as best I can see, doesn't introduce any messaging overhead that
wasn't present.
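
(To make that flow concrete, a hypothetical sketch with made-up names - not an
existing API. The target service puts only the hash in the caveat, hands the
encrypted package to the client out-of-band, and the discharger mints its
macaroon with that same hash as its id, so existing verifiers match the ids
exactly as they do today.)

    package main

    import (
        "crypto/sha256"
        "fmt"
    )

    // addThirdPartyCaveat is what the target service would do: build the
    // encrypted third party package as usual, but put only its SHA-256 hash
    // into the macaroon as the caveat id, returning the package so it can be
    // sent to the client out-of-band.
    func addThirdPartyCaveat(encryptedPackage []byte) (caveatID, outOfBand []byte) {
        sum := sha256.Sum256(encryptedPackage)
        return sum[:], encryptedPackage
    }

    // dischargeCaveat is what the third party would do: the client presents
    // the out-of-band package, the third party decrypts and checks it (elided
    // here), then issues a discharge macaroon whose id is the hash of the
    // package, matching the caveat id already in the target macaroon.
    func dischargeCaveat(outOfBand []byte) (dischargeMacaroonID []byte) {
        // ... decrypt the package, check the embedded condition, mint macaroon ...
        sum := sha256.Sum256(outOfBand)
        return sum[:]
    }

    func main() {
        pkg := []byte("encrypted third party package (public key box)")
        cid, oob := addThirdPartyCaveat(pkg)
        did := dischargeCaveat(oob)
        fmt.Println(string(cid) == string(did)) // true: ids match with no verifier changes
    }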

-Robert

roger peppe

Jun 8, 2016, 4:07:55 AM
to maca...@googlegroups.com
On 6 June 2016 at 16:07, Robert Escriva <rob...@rescrv.net> wrote:
> This is why I've always suggested the approach of extra round trips to
> the server instead of using public keys. It avoids the bloat.

Yes, this is definitely an option in some cases, but I wouldn't want to
incur a network round trip (and a database store) for every third party
caveat, with additional questions about storage lifetime, and the fact
that it would require the target service to be able to contact the
third party directly, which isn't currently required.

> Off the top of my head, I would raise a concern about performance:
>
> In the current libmacaroons, the verifier does not short circuit on
> third-party macaroon traversal, in order to mitigate potential timing
> attacks stemming from presenting macaroons in a particular order with
> many fake macaroons. If you add the SHA256 calculation, it would double
> the time taken for each verification, or else open up potential timing
> attack that I took care to avoid.

I'm interested to hear more about this potential timing attack.

If it's a security hole to reveal which first party caveat fails, then
I can see why you wouldn't short-circuit the caveats when checking
a macaroon. But I'm not sure I see any issue with skipping the caveat
checks on a discharge macaroon that isn't referred to, or with avoiding
the extra hash calculations, because AFAICS the extra hash checking
will be a deterministic function of the public information in the
macaroons provided,
so the attacker wouldn't be able to learn any new information
however accurately they manage to time the request processing.

cheers,
rog.

PS If it's a security hole to reveal which first party caveat fails, I suspect
that avoiding short-circuiting when checking caveats is not sufficient,
as many caveat checkers will have different timings on success or failure
(for example, a checker that involves a database lookup might
take significantly different time if the lookup succeeds or fails).

We actually make the reason for failure available directly as an
error message, which makes it much easier to debug things when
they're going wrong. We haven't yet found a situation where
we want to conceal information about which caveat has failed.

roger peppe

Jun 8, 2016, 4:29:11 AM
to maca...@googlegroups.com
On 7 June 2016 at 13:15, Robert Escriva <rob...@rescrv.net> wrote:
> On Tue, Jun 07, 2016 at 08:03:55AM -0400, Tony Arcieri wrote:
>> On Tuesday, June 7, 2016, roger peppe <rogp...@gmail.com> wrote:
>>
>> > Yeah, there's a potential second preimage attack unless there's an
>> > explicit
>> > means for telling content apart from a digest, ala:
>> >
>> > https://en.wikipedia.org/wiki/Merkle_tree#Second_preimage_attack
>>
>> Isn't that only a problem if SHA256 isn't second-preimage resistant?
>> If it isn't, that's news to me and would make me very wary of SHA256
>> in general.
>>
>> No, the linked attack is against "naive" Merkle trees where an attacker is able
>> to trick a Merkle tree implementation into storing a digest in the tree, then
>> later confuses a verifier into thinking the digest is part of the interior
>> structure of the tree, and therefore can add the preimage of the digest to the
>> tree. This can occur regardless of the preimage resistance of the underlying
>> hash function.
>>
>> The solution is to add a flag so implementations can tell leaves from the
>> interior structure.
>>
>> In order to avoid similar problems here, I would suggest implementations take
>> an explicit argument as to whether they should hash first before verifying.
>>
>> --
>> Tony Arcieri
>
> Attacks like this make me wary of adding an implicit conversion from
> unhashed ids to hashed ids or vice-versa.

Attacks like this are only valid if SHA256 is vulnerable to second preimage
attacks, which it's not (even SHA1 isn't vulnerable that way
https://casecurity.org/2014/01/30/why-we-need-to-move-to-sha-2/).

I'm wary of adding complexity to guard against attacks that we know
aren't a problem.

> I think that the same benefits can be achieved through other means. The
> service that adds a caveat like Rog. described adds the hash as the
> caveat id of the third party encrypted package and provides said package
> out-of-band. The discharging service then takes the encrypted package
> and generates a macaroon with the hash as the macaroon's id. This
> scheme works with every current verifier, has the same efficiency gains,
> and as best I can see, doesn't introduce any messaging overhead that
> wasn't present.

This is an interesting thought, and may well be a useful thing to do
in some cases. It actually has another advantage in that the final
macaroon+discharge doesn't need to contain the original (out of band)
data at all. Unfortunately it would mean that it would no longer be
possible to just "discharge a caveat" - you'd need to bundle the original
macaroon with its associated out-of-band data, and pass that up with
the caveat id when discharging, which is a significant change, and one
that we wouldn't contemplate lightly. The change to allow a hash as a
discharge macaroon id is a much smaller change that fits within existing
flows, which is why I prefer it.

The two approaches aren't incompatible AFAICS.

cheers,
rog.

Robert Escriva

Jun 9, 2016, 10:41:13 AM
to maca...@googlegroups.com
On Wed, Jun 08, 2016 at 09:29:09AM +0100, roger peppe wrote:
> > I think that the same benefits can be achieved through other means. The
> > service that adds a caveat like Rog. described adds the hash as the
> > caveat id of the third party encrypted package and provides said package
> > out-of-band. The discharging service then takes the encrypted package
> > and generates a macaroon with the hash as the macaroon's id. This
> > scheme works with every current verifier, has the same efficiency gains,
> > and as best I can see, doesn't introduce any messaging overhead that
> > wasn't present.
>
> This is an interesting thought, and may well be a useful thing to do
> in some cases. It actually has another advantage in that the final
> macaroon+discharge doesn't need to contain the original (out of band)
> data at all. Unfortunately it would mean that it would no longer be
> possible to just "discharge a caveat" - you'd need to bundle the original
> macaroon with its associated out-of-band data, and pass that up with
> the caveat id when discharging, which is a significant change, and one
> that we wouldn't contemplate lightly. The change to allow a hash as a
> discharge macaroon id is a much smaller change that fits within existing
> flows, which is why I prefer it.
>
> The two approaches aren't incompatible AFAICS.
>
> cheers,
> rog.

I think the amount of change is the same. Either you change your
workflow to accommodate the pattern I suggested, or we change macaroons
to add this complexity that expands us into a realm of less defined/more
manipulable behavior. I'd like to think about this more and maybe hear
from some of the folks who are using macaroons within Google before
adding it. It would be very convincing if the people who wrote the
macaroon authorization logic appendix in the paper weighed in.

The approach I suggested has an additional benefit that you don't even
need a hash. You could get by with a small number of bytes of random
data, shrinking your macaroon size more.

I'm curious about your current "just 'discharge a caveat'" workflow,
because you need to have logic to determine where and how to discharge a
caveat, and this is likely to be significantly more complex than an
additional lookup in a hash table to map caveat id to payload.

-Robert

roger peppe

Jun 9, 2016, 12:00:13 PM
to maca...@googlegroups.com
Well, in terms of the number of lines of code that would need to change,
it's no contest (the hash change can be done without changing
the external API significantly), but... after some thought, I think
you're probably right. If we define a new type that bundles a macaroon
with its externally held caveat ids, then change our services to
return that type always, the changes might end up mostly mechanical.

And it would be very nice that even when using public key encryption
for third party caveats, the final set of macaroon + discharges used
for authorization would not reflect the size of the third party caveat ids
actually sent to the third party for discharge.

> I'd like to think about this more and maybe hear
> from some of the folks who are using macaroons within Google before
> adding it. It would be very convincing if the people who wrote the
> macaroon authorization logic appendix in the paper weighed in.

That's fine. I suggested it because it seemed non-controversial
to me. Given that it seems somewhat controversial, I
retract my suggestion. I'll see what it takes to go with your
approach instead.

> The approach I suggested has an additional benefit that you don't even
> need a hash. You could get by with a small number of bytes of random
> data, shrinking your macaroon size more.

You might not even need a random number. You could just use
a numeric id starting from zero, with zero being the first third party caveat
added. You'd need to pass the base id to the third party discharger
though so that any third party caveats it adds wouldn't clash (it
might be a good but somewhat bizarre place to use utf-8 encoding - the
third party would append a new character to the existing third party caveat
id that it's discharging).

> I'm curiously about your current "just 'discharge a caveat'" workflow,
> because you need to have logic to determine where and how to discharge a
> caveat, and this is likely to be significantly more complex than an
> additional lookup in a hash table to map caveat id to payload.

It is, but it's mostly orthogonal to the checking code.

Thanks again for your useful suggestion,

cheers,
rog.