Future-proofing qubes-secpack


Axel

Jun 3, 2017, 11:24:27 AM
to qubes-devel
As Joanna has already noted, qubes-secpack is not advertised as solving all problems related to distribution security, but "the best we can do" currently.

I'd like to suggest a practical improvement of qubes-secpack that I believe can protect against a (rather limited) class of threats including some forced private key hand-over and insider threats.

The scheme:

The idea is to publish hashes of git commits, and maybe also of detached signatures, to the bitcoin blockchain. This will serve as a reasonably secure proof that the information was created before a certain point in time. In addition to the proof of freshness, this locks the information into a narrow time frame.

Taking the canary as an example, the logic of the scheme would be as follows:

Axioms:
- BnBB: If X is before Y and Y is before Z, then X is before Z.
- DB: If X depends on Y, then Y is before X.

Assumptions:
1. <Timestamp A> is before <Proof of freshness>
2. <Canary> depends on <Proof of freshness>
3. <Signature> depends on <Canary>
4. <Blockchain transaction> depends on <Signature>
5. <Blockchain transaction> is before <Timestamp B>

Argument:
6. <Proof of freshness> is before <Canary> [2, DB]
7. <Canary> is before <Signature> [3, DB]
8. <Signature> is before <Blockchain transaction> [4, DB]
9. <Timestamp A> is before <Canary> [1, 6, BnBB]
10. <Canary> is before <Blockchain transaction> [7, 8, BnBB]
11. <Canary> is before <Timestamp B> [10, 5, BnBB]
12. <Timestamp A> is before <Signature> [9, 7, BnBB]
13. <Signature> is before <Timestamp B> [8, 5, BnBB]
14. <Timestamp A> is before <Canary>, and <Canary> is before <Timestamp B> [9, 11, Conjunction]
15. <Timestamp A> is before <Signature>, and <Signature> is before <Timestamp B> [12, 13, Conjunction]

In other words, under these assumptions it is proven that the canary and its signature were created in the time period between timestamps A and B.
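
For concreteness, the argument can be replayed mechanically. Here is a minimal sketch (the labels and code are only illustrative, not part of any proposed tooling): treat assumptions 1-5 as "is before" edges and query the transitive closure.

    # Assumptions 1-5 as "is before" edges (DB turns "X depends on Y" into "Y is before X").
    before = {
        ("timestamp_A", "proof_of_freshness"),   # assumption 1
        ("proof_of_freshness", "canary"),        # assumption 2 via DB
        ("canary", "signature"),                 # assumption 3 via DB
        ("signature", "blockchain_tx"),          # assumption 4 via DB
        ("blockchain_tx", "timestamp_B"),        # assumption 5
    }

    def is_before(x, y, edges):
        """Axiom BnBB: 'is before' is transitive, so walk the edges from x."""
        frontier, seen = {x}, set()
        while frontier:
            cur = frontier.pop()
            if cur == y:
                return True
            seen.add(cur)
            frontier |= {b for a, b in edges if a == cur and b not in seen}
        return False

    # Steps 14 and 15 of the argument:
    assert is_before("timestamp_A", "canary", before) and is_before("canary", "timestamp_B", before)
    assert is_before("timestamp_A", "signature", before) and is_before("signature", "timestamp_B", before)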

When publishing a statement to the blockchain, the statement should include the hash of the latest git commit (and perhaps also hashes of detached signatures if git hashing is not trusted). The bitcoin transaction only needs to publish the hash of this statement. After it is published, the entire statement, its hash, and the identifier of the transaction that published it should be added to qubes-secpack in a subsequent commit. The purpose of this subsequent commit is only to provide discoverability, and it technically needs no signature. Further, the only security assumed to be provided by the publication of such a transaction is to give reasonable assurance that the statement was produced before a certain point in time. In particular, it does not matter who performs the publication or why.
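
As a rough sketch of the hashing step (the statement layout below is only illustrative; the published hash commits to whatever exact bytes were actually used):

    import hashlib

    def statement_digest(date, commit_id):
        # Illustrative statement layout; any change to the bytes changes the digest.
        statement = (
            "-----Begin Statement-----\n"
            f"Today is {date}\n"
            f"Last published git commit of qubes-secpack is {commit_id}\n"
            "-----End Statement-----\n"
        ).encode()
        return hashlib.sha256(statement).hexdigest()

    # The bitcoin transaction only needs to embed this hex digest; a follow-up
    # commit then records the statement, the digest, and the transaction id.
    print(statement_digest("2017-06-03", "4b1d111457f793cd97524dce2ac98cc694220f88"))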

Suggestion: The instructions on how to use qubes-secpack, including the blockchain timestamping verification, should be part of qubes-secpack and be signed.

How this scheme mitigates some threats:

Suppose the developers who sign commits and distributions are forced by an outside actor to hand over their private keys. This actor could now, for example, attempt to sign digests of malicious images of Qubes OS and have them distributed, as well as falsify new canary statements on schedule. So far everything seems lost. Now, suppose further that some indication of this event comes to public knowledge, i.e. users begin to suspect the keys cannot be trusted after a certain point in time. With the current system, i.e. without publishing to the blockchain, assumptions 4 and 5 above are not applicable, and we end up not being able to trust anything anymore. However, by adding assumptions 4 and 5, i.e. continually publishing commits to the blockchain, the signatures that were created before the adversarial event remain trustworthy despite a compromised private key, assuming both the signature and the blockchain transaction are verified. To clarify: if a user trusts that a private key was compromised only after a certain <Timestamp B>, then that user will still be able to trust the contents in qubes-secpack verified by the corresponding <Signature>.

Analogously, there may be several other scenarios in which this scheme can help, e.g. in deprecating keys.

Live example:

So I went ahead and did it.

-----Begin Statement-----
Today is 2017-06-03
Last published git commit of qubes-secpack is 4b1d111457f793cd97524dce2ac98cc694220f88
---End Statement-----

SHA-256 hash of Statement is ec516a47d56b0fb6346404d8e40da1c20a6630d1f02635f9b402eb9b1b69865d

Bitcoin transaction 840ea8d6a71ae5415bf0efb84fb25f03cd6a1b0fa2a511133383a3477fe18601 includes as its first output a script that ends with the hash of the Statement. See https://www.blocktrail.com/BTC/tx/840ea8d6a71ae5415bf0efb84fb25f03cd6a1b0fa2a511133383a3477fe18601

Although this particular publication was made using https://proofofexistence.com, it is possible to do the same thing manually for a lower fee.

Vít Šesták

Jun 3, 2017, 2:53:27 PM
to qubes-devel
Well, the blockchain could probably also be used as a proof of freshness: just add some blockchain-related data to the signed message.

Regards,
Vít Šesták 'v6ak'

Andrew David Wong

Jun 4, 2017, 12:50:57 AM
to Axel, qubes-devel
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

On 2017-06-03 10:24, Axel wrote:
> As Joanna has already noted, qubes-secpack is not advertised as
> solving all problems related to distribution security, but "the
> best we can do" currently.
>
> I'd like to suggest a practical improvement of qubes-secpack that
> I believe can protect against a (rather limited) class of threats
> including some forced private key hand-over and insider threats.
>
> *The scheme:*
>
> The idea is to publish hashes of git commits, and maybe also of
> detached signatures, to the bitcoin blockchain. This will serve as
> a reasonably secure proof that the information was created
> *before* a certain point in time. In addition to the proof of
> freshness, this locks the information into a *narrow time frame*.
> [...]

Have you seen this?

https://github.com/QubesOS/qubes-issues/issues/2685
https://github.com/QubesOS/qubes-secpack/pull/15

- --
Andrew David Wong (Axon)
Community Manager, Qubes OS
https://www.qubes-os.org
-----BEGIN PGP SIGNATURE-----

iQIcBAEBCgAGBQJZM5GnAAoJENtN07w5UDAwFiAQAMbzWLx+/hIAoeh5AF5cSsRA
1/zMy5S/CC+0zdY3APhZibIZHfEkqN3ZN9IkeccMDEjIpmAfPEOLjoRIx7pWJfl8
nLNtxZyAZAP4zpVsg4Efw4CYk+ZRu9TAfgLjBDJbtyKf2tEhVYnv/htgpbe+dLxL
UCy6XrVNnOZThm1/c8MSe/SeP2J/2YHQFYBBlJkXTOh8K/JbwHHG2muUYZhjR/oR
bZ9Ecr5bzY48uSU7E8POB8slE/dedYYlCCb3i4LnxLCazeQPuetbatYOWEaARUdJ
MI1hnRqya1GS+oQwFetVm8pT8rymerxYgNSJZNI7bhr4fZUDVEFUnSFLJKcOmiU8
UqigzwdfMMdZJzU+gVKooDsYlQs9loMyddLbYTUODKGTOrt/3ZD69bCU8fitFM1S
SZRvsiwQ9wPTmaFYu4FVYz1yxl1jmHrky2cWntZ4rULlb+ODzZgpIuOeDURXPnG9
5DTBmEoc9EK8Zmu2fVcMYOyTrl2KQ2g7nyLvjCl+hRm4VaXhf25aBjimHzWrCyg9
6++GuyGk4E6iJGO/umYdIg9+BfetOZSY7lcNf8EI3afQv/ORh/MM29QFplGFaoD6
TnI+c9fhLmLK9JLoSBjAMhVt6+IvuFKlIOptLu16Xo87s1wjcdGh1fnrBctggzr2
gE0zDuA2/SGm7mNHBy9Y
=q6rH
-----END PGP SIGNATURE-----

Axel

Jun 4, 2017, 8:29:46 AM
to qubes-devel, svenss...@gmail.com
I did not see that pull request. Note however that the pull request makes qubes-secpack depend on the blockchain in order to prove information creation *after* a certain point in time, while my suggestion was the opposite: make the blockchain depend on qubes-secpack in order to prove information creation *before* a certain point in time.

I think we should have both. Other things than just canaries can benefit from being locked into a narrow time period. Maybe it makes sense to have a folder in qubes-secpack with timing proofs. They would be of two types:
* A proof of freshness, including the last 10 bitcoin block hashes.
* A proof of existence, including the identifier of a bitcoin transaction that includes the hash of either the last git commit, or a more elaborate statement. IMPORTANT NOTE: As far as I understand, git tags, signed or not, are not part of the actual commits. If so, proof of existence must also include the tag signatures.

Regularly making such timing proofs part of the commit chain will lock all changes in that repository into a narrow period of time. Each commit is provably created after all proofs of freshness committed before or included in it, and before all proofs of existence committed after it. Ideally, every important commit, such as a release digest or canary, should be directly preceded by, or include, a proof of freshness and directly followed by a proof of existence.
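
As a rough sketch of the existence-proof payload (the function and field handling here are only illustrative), covering both the latest commit and the detached tag signatures:

    import hashlib

    def existence_payload(commit_id, tag_signatures):
        """Digest covering the latest commit id and the detached tag signatures,
        since signed tags are not part of the commit objects themselves."""
        h = hashlib.sha256()
        h.update(commit_id.encode())
        for sig in tag_signatures:          # raw bytes of each detached tag signature
            h.update(hashlib.sha256(sig).digest())
        return h.hexdigest()                # this digest is what gets published/timestamped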


Jean-Philippe Ouellet

Jun 4, 2017, 1:04:55 PM
to Axel, qubes-devel, peter...@gmail.com
Peter Todd's https://opentimestamps.org/ seems like a good fit for this purpose.

Axel

Jun 4, 2017, 5:45:07 PM
to qubes-devel, svenss...@gmail.com, peter...@gmail.com
Excellent, and it's even free of charge. Following links from opentimestamps.org, I found https://stamp.io/ which claims to also be free of charge and to use both the Bitcoin and Ethereum blockchains.

Andrew David Wong

Jun 4, 2017, 5:45:40 PM
to Axel, qubes-devel, Peter Todd
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

My next question was going to be whether you're aware of Peter Todd's
OpenTimestamps project, which Jean-Philippe mentioned. Also see:

https://petertodd.org/2016/opentimestamps-announcement
https://github.com/opentimestamps/opentimestamps-client

IIUC, OpenTimestamps may already do what you're aiming to do.

P.S. - Please don't top-post.
-----BEGIN PGP SIGNATURE-----

iQIcBAEBCgAGBQJZNH97AAoJENtN07w5UDAwVScP/1Pfp4hvpTMw5HDdmWIpLIMA
qSgBYe82CypcVCEK3bM2ku7UX8m2t2SFXz5EytWRQ/LUJjZ9gXCU1Pb9K3+2YJ2m
Gj7vdeAeXNWoUCmpmPWJpwSEshwuOpU3rknHS/1ICH3aP0eYCA9HVk0MMFA/k+e4
j0AQ1co2bEQN6eTSgFOq8MrDavVXgySJPe0GT9Tr1vIo2P0plYsDRHSvhCw2KKst
A/OGF7EP7a/Siq5rvgM2oIWUG6GSgo0dJojfj8Ce0y0iEDV2BaqD/HSJWSF8onQ5
jhxhIiKnaSP+DGK2IrsiFb+pigT2NipLj+uLcrFraMX+Ua7dlepWfGHFKZKCSFnl
1ACe6XbRrpkRnPF08q5yMqK6/vKdYeBE7HI8/BkHpCM5b6SzNi1DUxwjTCh9hZhA
Ut/EHrnv188OLxFNpCt7Lg3DGhixaLhuC/h27sFRHcLf6TUrVQmndacVoM1YS8xX
5Gc6cYARjyK/KCm43DhSTmgfIpwuZdEq2Lv0/G/cEDotoja3hSYkAjPtk7Jv8czw
oDn6Sf3D8KaXN7BXGtphN/sffqHxTFMMhzKpirD41QUzvtvKdzG3ZNjLK82pn6kg
b1R5rYxDAieDtYMikiejYprPW0kD+3Qrh6lydRRG0Bx3useTiU+aC3xpfCGgVcFz
aExJLDNvQ0tetUTXgUsM
=3Vlz
-----END PGP SIGNATURE-----

Andrew David Wong

Jun 4, 2017, 6:04:54 PM
to Axel, qubes-devel, peter...@gmail.com
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

On 2017-06-04 16:45, Axel wrote:
> Excellent, and it's even free of charge. Following links from
> opentimestamps.org, I found https://stamp.io/ which claims to also
> be free of charge, and using both Bitcoin and Ethereum
> blockchains.
>

Looks like we both replied at the exact same time. :)

I've created a tracking issue for this:

https://github.com/QubesOS/qubes-issues/issues/2847

- --
Andrew David Wong (Axon)
Community Manager, Qubes OS
https://www.qubes-os.org
-----BEGIN PGP SIGNATURE-----

iQIcBAEBCgAGBQJZNIP4AAoJENtN07w5UDAwOWAP/A63UxOe6cbMOoz/luG5LM2w
5KEbVq+0F4CnnTCT5h36K/XImN9XjCsGVgedZ9vk0gb6Q2YRlAQ03ZSkKpImHBbc
gzi0fBtWpmTg2P4HrLrg44afSTUu+ASNpMTbpGyvNf/CLeV3fpGTKHw1JFobmWFi
gkfKj+cGeg9Z9HnddylXirSy2uoYQmb5aLuAYBxRnQjc/0WrkEeBFstht4Ywtu0W
mQ8sl2EOYFwtV11s6ysl+FepYGtE3KxmsC+ANJqsRbQRI9m2DS90YvBp7qhj5BIk
s6GOTWsFbeBULvhaIlm2MWsr5sPUb8Ixkp9Mc25ns6geaM6otnGp1AZAwMxXj3F8
n8zZ45MvWe+NAVEWzc2VJ1yNMUflB4hrLNBN5Mt4THmq3IFol7JxTJIfeperk02A
N7yANWEbjY1sI7fxUg5QvEjV7VoVFIXVqXXze306gP73CLZkdNxFczD1lSAg9ZBG
pKTdkxOct00rIslO0IK+vO9nPcF8v0fxlD2D8p8iRkImA5604JdsqIJY0twi/urO
sb/54/BUbsI7B97ggPZaNW1kT4T9NFE8KO9QiSVZ31v3aPcgGeMz8WUylx0y+vAo
Pp6Q9BkGFGlivHL6ZoOcMQclrVZFHK8yfIsfI1ijnlOPwAhpfEqKfgAD7Bvf2RTj
7qML67wwPELNgw+PRACv
=jDoe
-----END PGP SIGNATURE-----

Peter Todd

Jun 5, 2017, 7:34:01 AM
to Axel, qubes-devel
On Sun, Jun 04, 2017 at 05:29:46AM -0700, Axel wrote:
> I did not see that pull request. Note however that the pull request makes
> qubes-secpack depend on the blockchain in order to prove information
> creation *after* a certain point in time, while my suggestion was the
> opposite: make the blockchain depend on qubes-secpack in order to prove
> information creation *before* a certain point in time.
>
> I think we should have both. Other things than just canaries can benefit
> from being locked into a narrow time period. Maybe it makes sense to have a
> folder in qubes-secpack with timing proofs. They would be of two types:
> * A proof of freshness, including the last 10 bitcoin block hashes.

Bitcoin block hashes are a chain, so it doesn't make any sense to include more
than one, unless you're worried about reorgs.

If you are worried about reorgs, for the purpose of time-related proofs,
remember that Bitcoin block header times are *very* imprecise, as the
consensus doesn't tightly bound the claimed block time for a bunch of reasons.
So as a conservative rule of thumb, I would only consider a block header
timestamp to have a precision within about a day. With careful analysis you can
get tighter bounds than that, but in practice that's rarely very useful anyway.

> * A proof of existence, including the identifier of a bitcoin transaction
> that includes the hash of either the last git commit, or a more elaborate
> statement. IMPORTANT NOTE: As far as I understand, git tags, signed or not,
> are not part of the actual commits. If so, proof of existence must also
> include the tag signatures.
>
> Regularly making such timing proofs part of the commit chain will lock all
> changes in that repository into a narrow period of time. Each commit is
> provably created after all proofs of freshness committed before or included
> in it, and before all proofs of existence committed after it. Ideally,
> every important commit, such as a release digest or canary, should be
> directly preceded by, or include, a proof of freshness and directly
> followed by a proof of existence.

What exactly are you trying to prevent here?

Timestamping commits and canary signatures to prove they existed in the past
makes a lot of sense if they're signed, as that allows you to establish that
those signatures were created prior to a compromise.

Additionally proving that canaries were created after a certain point in time
is useful to prevent forward dating.

But what attack does the latter type of proof prevent when applied to git
commits?

--
https://petertodd.org 'peter'[:-1]@petertodd.org
signature.asc

Peter Todd

Jun 5, 2017, 7:36:07 AM
to Andrew David Wong, Axel, qubes-devel
On Sun, Jun 04, 2017 at 04:45:32PM -0500, Andrew David Wong wrote:
> My next question was going to be whether you're aware of Peter Todd's
> OpenTimestamps project, which Jean-Philippe mentioned. Also see:
>
> https://petertodd.org/2016/opentimestamps-announcement
> https://github.com/opentimestamps/opentimestamps-client
>
> IIUC, OpenTimestamps may already do what you're aiming to do.

In particular, OpenTimestamps can timestamp both git commits and git tags in a
convenient way as part of your normal workflow:

https://petertodd.org/2016/opentimestamps-git-integration
signature.asc

Axel

Jun 5, 2017, 1:30:55 PM
to qubes-devel, a...@qubes-os.org, svenss...@gmail.com


On Monday, June 5, 2017 at 1:36:07 PM UTC+2, Peter Todd wrote:
On Sun, Jun 04, 2017 at 04:45:32PM -0500, Andrew David Wong wrote:
> My next question was going to be whether you're aware of Peter Todd's
> OpenTimestamps project, which Jean-Philippe mentioned. Also see:
>
> https://petertodd.org/2016/opentimestamps-announcement
> https://github.com/opentimestamps/opentimestamps-client
>
> IIUC, OpenTimestamps may already do what you're aiming to do.

In particular, OpenTimestamps can timestamp both git commits and git tags in a
convenient way as part of your normal workflow:

https://petertodd.org/2016/opentimestamps-git-integration

This is what we need. A little funny how Peter takes Qubes OS as the example in this article :-)

Axel

Jun 5, 2017, 2:15:34 PM
to qubes-devel, svenss...@gmail.com


On Monday, June 5, 2017 at 1:34:01 PM UTC+2, Peter Todd wrote:
On Sun, Jun 04, 2017 at 05:29:46AM -0700, Axel wrote:
> I did not see that pull request. Note however that the pull request makes
> qubes-secpack depend on the blockchain in order to prove information
> creation *after* a certain point in time, while my suggestion was the
> opposite: make the blockchain depend on qubes-secpack in order to prove
> information creation *before* a certain point in time.
>
> I think we should have both. Other things than just canaries can benefit
> from being locked into a narrow time period. Maybe it makes sense to have a
> folder in qubes-secpack with timing proofs. They would be of two types:
> * A proof of freshness, including the last 10 bitcoin block hashes.

Bitcoin block hashes are a chain, so it doesn't make any sense to include more
than one, unless you're worried about reorgs.

Agree. Reorgs are the only reason to include more than one, but 10 seems like overkill. Reorgs aren't even an ultimate threat in this case; we don't have a double-spending problem or similar. The only reason for us to worry about reorgs is the future availability of the blockchain data. 3 blocks is generally considered very secure, so maybe we can go with that.
 

If you are worried about reorgs, for the purpose of time-related proofs,
remember that Bitcoin block header times are *very* imprecise, as the
consensus doesn't tightly bound the claimed block time for a bunch of reasons.
So as a conservative rule of thumb, I would only consider a block header
timestamp to have a precision within about a day. With careful analysis you can
get tighter bounds than that, but in practice that's rarely very useful anyway.

I agree that we don't need more precision than a day. As I said above, the only worry is that a reorg would make the block data unavailable in the future, rendering the proof unverifiable.
 

> * A proof of existence, including the identifier of a bitcoin transaction
> that includes the hash of either the last git commit, or a more elaborate
> statement. IMPORTANT NOTE: As far as I understand, git tags, signed or not,
> are not part of the actual commits. If so, proof of existence must also
> include the tag signatures.
>
> Regularly making such timing proofs part of the commit chain will lock all
> changes in that repository into a narrow period of time. Each commit is
> provably created after all proofs of freshness committed before or included
> in it, and before all proofs of existence committed after it. Ideally,
> every important commit, such as a release digest or canary, should be
> directly preceded by, or include, a proof of freshness and directly
> followed by a proof of existence.

What exactly are you trying to prevent here?

Timestamping commits and canary signatures to prove they existed in the past
makes a lot of sense if they're signed, as that allows you to establish that
those signatures were created prior to a compromise.

Additionally proving that canaries were created after a certain point in time
is useful to prevent forward dating.

But what attack does the latter type of proof prevent when applied to git
commits?

Using proof of freshness to prevent forward dating can be applied to more than just canaries. Imagine an attacker that has temporary use-access but not read-access to a private key, e.g. by compromising one side of split GPG in a setup where the vault VM does not show the user what is about to be signed. In this case, the attacker might be interested in signing a malicious Qubes .iso image, and in order to prevent the developers from refuting the .iso, the attacker also signs a key revocation that is sufficiently forward-dated to not raise suspicion about the .iso.

I admit this example is somewhat far-fetched, but the point is that more than just canaries can benefit from forward-dating prevention. The stronger argument for doing this is that it is the conservative thing to do: nothing seems to become less secure by preventing forward-dating, and it essentially removes an attack surface. It should be standard procedure to prove the timing of everything that is released, meaning both proof of freshness and proof of existence.

A proof of freshness only proves anything for things that depend on it. For example, a key revocation that does not include a proof of freshness isn't proven to be created after that point in time. The best thing would be if a proof of freshness could be part of every signed message, including .iso digests and key revocations. If this seems like too much work, I believe the next best thing would be to adopt a standard procedure that everything that is signed MUST also be part of a signed git commit that includes a proof of freshness. With this rule, users that trust the key owners (which we must do anyway) can trust that a digest published in a signed git commit was created approximately around the point in time of its corresponding proof of freshness.

Peter Todd

Jun 5, 2017, 3:54:03 PM
to Axel, qubes-devel
On Mon, Jun 05, 2017 at 11:15:33AM -0700, Axel wrote:
> > Bitcoin block hashes are a chain, so it doesn't make any sense to include
> > more
> > than one, unless you're worried about reorgs.
> >
>
> Agree. reorgs are the only reason to include more than one, but 10 seems
> like overkill. Reorgs aren't even an ultimate threat in this case, we don't
> have a double-spending problem or similar. The only reason for us to worry
> about reorgs is future availability of the blockchain data. 3 blocks is
> generally considered very secure, so maybe we can go with that.

Ah, but see, since Bitcoin blocks are in a chain, a block hash ten blocks deep
is only invalidated in the event of an (extremely) rare re-org more than ten
blocks deep.

> > If you are worried about reorgs, for the purpose of time-related proofs,
> > remember that Bitcoin block header times are *very* imprecise, as the
> > consensus doesn't tightly bound the claimed block time for a bunch of
> > reasons.
> > So as a conservative rule of thumb, I would only consider a block header
> > timestamp to have a precision within about a day. With careful analysis
> > you can
> > get tighter bounds than that, but in practice that's rarely very useful
> > anyway.
> >
>
> I agree that we don't need more precision than a day. As I said above, the
> only worry is that a reorg would make the block data unavailable in the
> future, rendering the proof unverifiable.

Right, but if you don't need better precision, just picking a *single*
blockhash, say, ten blocks back is totally fine. On average that'd be a block a
bit over an hour and a half old.

The only reason you'd want to include *more* than one block hash is to try to
get better precision, with a fallback in case of a reorg. If you're picking a
blockhash 10 blocks back, on average that'd be a bit over an hour and a half
old, which is well within the timing precision you can expect from Bitcoin
anyway, and certainly good enough for our application. So my advice would be to
keep it simple and just include a single block hash.


FWIW, the actual consensus rules for Bitcoin blocktimes require the following
two conditions to be true:

1) nTime > the median nTime of the past 11 blocks

This rule ensures nTime will move forward eventually, though miners have a lot
of leeway to backdate their blocks. A miner with negligible hashing power can
backdate a block they create, at no cost to themselves, by something like an
hour or so on average, assuming the block times of all other miners are honest.

If the backdating hashing power is non-negligible - say 50% - it's quite
plausible they'd be able to create backdated blocks with block times that are
multiple hours behind what they should be. If 100% of hashing power is
backdating blocks, the only thing limiting the attack is that at some point
they'll cause difficulty to increase, but that'd be a scenario where block times
have been backdated by multiple *days* at least.

Fortunately, by the time we're talking about multiple hours/days of backdating,
it's a very public attack that probably would get noticed. But backdating an
hour or two is something miners could definitely get away with.


2) nTime < now + 2 hours

This attempts to prevent miners from producing forward-dated blocks, which,
among other things, could be used to artificially drive difficulty down. But
because there's no universal and reliable notion of time, it's a really dodgy
rule that itself can be used as an attack vector - miners can be in a position
where they *want* part of the network to (initially) reject their blocks. If
miners do start forward-dating their blocks for whatever reason, I'd expect
enforcement of this rule to break down; what happens then is hard to know.

Again, the only good thing is that a forward dating attack is pretty public, so
we'd at least find out it had happened and could respond accordingly.


Finally, I should point out that providing incentives for miners to mess with
block times is a potential threat to Bitcoin as a whole; we'd all be better off
if people design systems that are robust to such attacks, to avoid giving
attackers incentives to do them in the first place.

tl;dr: Please round off nTime to the nearest day. :)

While it's somewhat niche, I agree that you do make a good case there for the
utility of proof-of-freshness when applied to PGP signatures in general. In
addition to split-GPG, that'd also be useful for the quite common case of
signing with a PGP smartcard.

I will make the point though that the proof-of-freshness is only valid for
proving that the *signature* was freshly created, not the Git commit itself.
The reason is simple: the proof-of-freshness is only useful because it's
difficult to recreate the signature; the git-commit *by itself* doesn't have
that property.

> A proof of freshness only proves anything for things that depend on it. For
> example, a key revocation that does not include a proof of freshness, isn't
> proven to be created after that point in time. The best thing would be if a
> proof of freshness could be part of every signed message, including .iso
> digests and key revocations. If this seems like too much work, I believe
> the next best thing would be to adopt a standard procedure that everything
> that is signed MUST also be part of a signed git commit that includes a
> proof of freshness. With this rule, users that trust the key owners (which
> we must do anyways), can trust that a digest published in a signed git
> commit was created approximately around the point in time of it's
> corresponding proof of freshness.

If you put the proof-of-freshness in the git commit, what you've actually done
is made a proof-of-freshness for the *signature* on the git commit, just with
one step of indirection.

I think that indirection just confuses the issue, so better to put the
proof-of-freshness in the signature itself. Fortunately the OpenPGP standard
has something called "signature notation data" that allows you to add arbitrary
(signed) data to OpenPGP signatures. In fact, I used to have a cronjob that
periodically set my gpg.conf to include a recent blockhash as a
proof-of-freshness with the notation bloc...@bitcoin.org=<blockhash>
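
A minimal sketch of that cronjob idea (the notation name below is a placeholder, and fetching a recent block hash is left out):

    import re
    from pathlib import Path

    GPG_CONF = Path.home() / ".gnupg" / "gpg.conf"
    NOTATION = "blockhash@example.org"  # placeholder notation name

    def set_freshness_notation(block_hash):
        # Rewrite (or append) the set-notation line so new signatures carry the hash.
        line = f"set-notation {NOTATION}={block_hash}"
        text = GPG_CONF.read_text() if GPG_CONF.exists() else ""
        new, n = re.subn(rf"(?m)^set-notation {re.escape(NOTATION)}=\S*$", line, text)
        if not n:
            new = text + ("\n" if text and not text.endswith("\n") else "") + line + "\n"
        GPG_CONF.write_text(new)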

Even better would be if GPG had the ability to run a command on demand to get
the value a notation should be set to, but as far as I know that feature
doesn't exist yet. That said, I could easily add it to the OpenTimestamps
Client as part of the git commit timestamping support.
signature.asc

Chris Laprise

Jun 5, 2017, 4:52:25 PM
to Peter Todd, Axel, qubes-devel
Can OpenTimestamps be easily reconfigured to use a blockchain system
other than Bitcoin?

Chris

Axel

Jun 5, 2017, 5:21:34 PM
to qubes-devel, pe...@petertodd.org, svenss...@gmail.com, tas...@openmailbox.org
From OpenTimestamps.org: "OpenTimestamps aims to be a standard format for blockchain timestamping. The format is flexible enough to be vendor and blockchain independent"

IIUC, Stampery is a free of charge service that produces OTS proofs tied to both Bitcoin and Ethereum concurrently. See https://api.stampery.com/#stampery-api-usage-downloading-ots-receipts

Axel

Jun 5, 2017, 6:17:47 PM
to qubes-devel, svenss...@gmail.com
Your point is valid; this idea escaped me.
 

The only reason you'd want to include *more* than one block hash is to try to
get better precision, with a fallback in case of a reorg. If you're picking a
blockhash 10 blocks back, on average that'd be a bit over an hour and a half
old, which is well within the timing precision you can expect from Bitcoin
anyway, and certainly good enough for our application. So my advice would be to
keep it simple and just include a single block hash.

Agree.
Again, agree.
Also, these rules were new to me. Thank you for the insights!
In terms of usefulness, yes.
 

> A proof of freshness only proves anything for things that depend on it. For
> example, a key revocation that does not include a proof of freshness, isn't
> proven to be created after that point in time. The best thing would be if a
> proof of freshness could be part of every signed message, including .iso
> digests and key revocations. If this seems like too much work, I believe
> the next best thing would be to adopt a standard procedure that everything
> that is signed MUST also be part of a signed git commit that includes a
> proof of freshness. With this rule, users that trust the key owners (which
> we must do anyways), can trust that a digest published in a signed git
> commit was created approximately around the point in time of it's
> corresponding proof of freshness.

If you put the proof-of-freshness in the git commit, what you've actually done
is made a proof-of-freshness for the *signature* on the git commit, just with
one step of indirection.

I think that indirection just confuses the issue, so better to put the
proof-of-freshness in the signature itself. Fortunately the OpenPGP standard 
has something called "signature notation data" that allows you to add arbitrary
(signed) data to OpenPGP signatures. In fact, I used to have a cronjob that
periodically set my gpg.conf to include a recent blockhash as a
proof-of-freshness with the notation bloc...@bitcoin.org=<blockhash>

Agree. As I said, the best thing would be if a proof of freshness could be part of every signed message. I only suggested proof of freshness as part of the commit as a second-best option, since I didn't know about the notation data support.
 

Even better would be if GPG had the ability to run a command on demand to get
the value a notation should be set too, but as far as I know that feature
doesn't exist yet. That said, I could easily add it to the OpenTimestamps
Client as part of the git commit timestamping support.

What exactly would you add to OTS git support?
Did you mean adding proof-of-freshness functionality to the git tag signing override provided by OTS? I think that would be very useful.

Moreover, I think it would be useful to add something to qubes-secpack/utils to help make GPG signatures with a suitably old block hash as notation data.

Btw, nice anti-spam measure!

Peter Todd

Jun 7, 2017, 3:11:22 PM
to Axel, qubes-devel, tas...@openmailbox.org
On Mon, Jun 05, 2017 at 02:21:34PM -0700, Axel wrote:
>
>
> On Monday, June 5, 2017 at 10:52:25 PM UTC+2, Chris Laprise wrote:
> >
> > Can OpenTimestamps be easily reconfigured to use a blockchain system
> > other than Bitcoin?
> >
> > Chris
> >
>
> From OpenTimestamps.org: "OpenTimestamps aims to be a standard format for
> blockchain timestamping. The format is flexible enough to be vendor and
> blockchain independent"

Yup, OpenTimestamps proofs can contain attestations from multiple different
notaries simultaneously. Multiple attestations are also backwards compatible, in
that a Bitcoin-only verifier will simply ignore attestations using other
mechanisms that it doesn't know about.

Currently the only code that's been written in addition to Bitcoin is
preliminary Ethereum support; however, that was only added at the insistence of
a banking client, and I personally don't think it's particularly valuable to
have. Ethereum doesn't provide a very different guarantee than Bitcoin does, and
in practice it's suffered a lot of consensus issues; what exactly Ethereum is
isn't well-defined. The Ethereum chain also in practice timestamps the Bitcoin
chain anyway, as there are a few Ethereum contracts that follow the Bitcoin
chain for various reasons, and timestamp proofs can be extracted if needed from
those contracts.

More interesting will be when we add support to OpenTimestamps for notaries
with very different trust models, namely trusted timestamping schemes like
Roughtime. Such schemes will add diversity, and can provide much higher
precision timestamps than a decentralized system ever could.

> IIUC, Stampery is a free of charge service that produces OTS proofs tied to
> both Bitcoin and Ethereum concurrently. See
> https://api.stampery.com/#stampery-api-usage-downloading-ots-receipts

I'm not sure whether or not Stampery's OTS proofs actually include the Ethereum attestations.

In any case, there's a system of 100% free calendar servers, currently operated
by myself and another OpenTimestamps team member, that also allow you to
produce OTS proofs. In addition, this system allows you to produce OTS proofs
*instantly*, without waiting for confirmation in the Bitcoin blockchain. These
calendars run on open-source software, and all data associated with them is
freely available for anyone to download and mirror. I'd strongly suggest OTS
users use those calendars.
signature.asc

Peter Todd

Jun 7, 2017, 3:11:24 PM
to Axel, qubes-devel
On Mon, Jun 05, 2017 at 03:17:46PM -0700, Axel wrote:
> > I think that indirection just confuses the issue, so better to put the
> > proof-of-freshness in the signature itself. Fortunately the OpenPGP
> > standard
>
> has something called "signature notation data" that allows you to add
> > arbitrary
> > (signed) data to OpenPGP signatures. In fact, I used to have a cronjob
> > that
> > periodically set my gpg.conf to include a recent blockhash as a
> > proof-of-freshness with the notation bloc...@bitcoin.org <javascript:>=<blockhash>
> >
> >
>
> Agree. As I said, the best thing would be if a proof of freshness could be
> part of every signed message. I only suggested proof of freshness as part
> of the commit as a second-hand option since I didn't know about the
> notational data support.

Ah good, I misread your original message. Glad we're on the same page!

> > Even better would be if GPG had the ability to run a command on demand to
> > get
> > the value a notation should be set too, but as far as I know that feature
> > doesn't exist yet. That said, I could easily add it to the OpenTimestamps
> > Client as part of the git commit timestamping support.
> >
>
> What exactly would you add to OTS git support?
> Did you mean adding proof-of-freshness functionality to the git tag signing
> override provided by OTS? I think that would be very useful.

Basically, because the OTS git support is a wrapper *around* the gpg binary,
it's easy for that wrapper to also pass the --set-notation option with a
proof-of-freshness blockhash. This *is* a bit of a hack, but at least on a
technical level it will work, and I can easily add a mechanism for that wrapper
to get a recent block hash from the OpenTimestamps calendar servers. In fact,
adding proofs-of-freshness is something I've been thinking about adding to
OpenTimestamps anyway; the main reason I haven't already is that what exactly
it proves is easily misunderstood.

> Moreover, I think it would be useful to add something to
> qubes-secpack/utils to help make GPG signatures with a suitably old block
> hash as notational data.

Yup, although a tricky part there is you of course need to communicate with the
outside world to get such a blockhash, which makes this difficult to implement
on a truly off-line system; I'll admit that it may very well be simpler to put
the blockhash in the thing being signed (e.g. git commit) rather than in the
signature as notation data under that circumstance.

Though, that's another neat thing about the OTS git support: it works just fine
with Qubes' split-gpg support. It'll also work with the signature notation
scheme as qubes-gpg-client-wrapper passes unknown options like the
--set-notation option to GPG (that does smell like a possible security hole!).

> > --
> > https://petertodd.org 'peter'[:-1]@petertodd.org
> >
>
> btw, nice anti-spam measure!

Thanks! I've had this signature for something like 10+ years, and you're maybe the
second or third person to comment on it. :)
signature.asc

Axel

Jun 7, 2017, 4:50:34 PM
to qubes-devel, svenss...@gmail.com, tas...@openmailbox.org

Here's an idea for you: Make that a centralized process!
* Set up a process that regularly collects proofs of freshness from many diverse sources, including news sources, all blockchains you can think of, and other kinds of notaries like selected, known roughtime servers.
* Make "history" documents that include all these proofs of freshness, as well as root hashes from the OTS calendar servers.
* Make proofs of existence of the history documents with many diverse methods, including publishing to all blockchains you can think of, and other kinds of notaries.
* Store these history documents, or historical artifacts :-), together with their rather large .ots proofs in a repository that people are encouraged to mirror, and find ways to include it in several save-for-the-future initiatives, including archive.org.
* Standardize a format for history documents, and add an expression in the OTS standard to refer to such a document, e.g. HistoryDocument(hash, optional suggestion of how to retrieve). Using a hash there ensures that no consensus is necessary.
* Naturally, the hash of a history document should be included in the next history document, in effect creating a blockchain (except no consensus is necessary) for time proofs that interlink with so many beautiful things.

This would make the world a better place in many ways, including these:
* .ots proof files would become more secure since they would in effect refer to more things than a user otherwise would care to reference.
* .ots proof files would remain small despite the added security if the .ots generator defaults to include the latest HistoryDocument, and at most one other source (that stamps the same calendar root hash). A second thing would be included only in order to protect against HistoryDocument data unavailability.
* .ots proof files would become more accurate, since the hash can be published to pretty much the first block of any blockchain that happens to form after the stamping.
* For the same reasons, proofs of freshness would also become more secure and more accurate.
* Proofs of freshness would actually become smaller than they are now. No need to include many things, just the latest HistoryDocument hash and at most one other thing e.g. a bitcoin block an hour old.
* You said something about not giving people incentive to mess with Bitcoin block time. This would create a powerful deterrent. There is no way an adversary could dream of influencing all those notaries concurrently, and if they mess enough with one blockchain, anyone will be able to actually prove it, rather than just notice.
* Since anyone can see, use, and tie information to/from this meta-chain, it could benefit other projects, e.g. it could strengthen the roughtime system as well.

I think this process can be optimized somewhat:
* A history document only needs to include hashes from chains that have published a new block since the last history document.
* Similarly, if a new history document is created before a certain blockchain has confirmed the publication of the last one, an attempt should be made to override the transaction so as to include the latter history document instead of the former and save on transaction fees. Depending on the blockchain in question I assume this could be done by double-spending with a slightly higher fee.
* These two optimizations would provide higher granularity without creating excessive data.
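
As a very rough sketch of what a single history document could look like (every field name and source here is hypothetical):

    import hashlib, json

    def make_history_document(prev_doc_hash, freshness_sources, ots_calendar_roots):
        doc = {
            "previous_history_document": prev_doc_hash,  # chains the documents together
            "freshness_sources": freshness_sources,      # block hashes, news headlines, roughtime replies, ...
            "ots_calendar_roots": ots_calendar_roots,    # root hashes from OTS calendar servers
        }
        payload = json.dumps(doc, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest(), payload

    # The digest is what gets published to the various blockchains/notaries, and
    # it is also what the next history document includes as prev_doc_hash.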

 

Axel

Jun 7, 2017, 5:05:56 PM
to qubes-devel, svenss...@gmail.com
I guess that depends on how split-gpg is implemented. If notational data can be added only on the vault side, then I agree.

One good thing is that security does not depend on the ability to verify the block hash before using it as a freshness proof. It will either produce a proof of freshness or it won't; it can hardly hurt. Even if you type it in manually, reading it off another machine, and introduce a typo, it could still serve as a proof of freshness: you can argue that producing a hash that close to the real one would be computationally/statistically/fortune-tellingly infeasible. It can be compared to citing a news article and forgetting the period character.