So long as you have no plans to prevent that use-case, sounds good to me!
On Tue, Sep 20, 2016 at 04:56:00PM -0700, Adam Langley wrote:
> > I can't speak for what the "official" plan is, but it'd be very easy to set up
> > "slave" Roughtime servers that created a Merkle tree of requests in some
> > interval, then submitted the tip of that tree to a master Roughtime server. The
> > protocol can even support this as-is: you'd just concatenate the two
> > Merkle paths and set the index field appropriately. Of course, this would make
> > it look like you had an unusually large Merkle tree if nested multiple times,
> > but a standards-compliant client should accept such a response just fine.
> >
> > Note though that you can't do this without the co-operation of the server: the
> > hash calculation for a leaf is 0x00 + <nonce>, while the hash calculation for an
> > inner node in the tree is 0x01 + <left> + <right>.
> >
> > Of course, the above does increase latency with every level of horizontal
> > scaling.
>
> It would be possible to move the packet processing to untrusted
> machines which built small Merkle trees and forwarded their roots to a
> trusted machine that signed larger Merkle trees. There is a limit in
> the protocol about how large the tree can be (based on the size of the
> response), but batches of 1,024 should be doable.
Ah right, because of UDP packet size limits.
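For anyone following along, here's a rough Python sketch of what that
leaf/inner-node hashing and path verification looks like as I understand it.
The names are mine, not from the actual Roughtime code, and I haven't
double-checked which bit convention the index field uses, so treat the
left/right choice as illustrative:

    import hashlib

    def recompute_root(nonce, index, path):
        """Recompute the Merkle root from a nonce and its sibling path.

        Leaves are hashed as SHA-512(0x00 || nonce) and inner nodes as
        SHA-512(0x01 || left || right), so a leaf can never be confused
        with an inner node. 'path' is the list of sibling hashes from the
        leaf up towards the root; each bit of 'index' says which side the
        running hash sits on at that level.
        """
        h = hashlib.sha512(b'\x00' + nonce).digest()
        for sibling in path:
            if index & 1:
                h = hashlib.sha512(b'\x01' + sibling + h).digest()
            else:
                h = hashlib.sha512(b'\x01' + h + sibling).digest()
            index >>= 1
        return h

Nothing in that loop knows or cares whether the path came from a single
server or is a slave's path concatenated with a master's - a delegated
response just looks like a deeper tree.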
Reminds me of a related issue: Why full SHA512 rather than SHA256 or truncated
SHA512? For timestamping, the birthday attack isn't relevant - a collision still
proves both messages existed at that point in time - so 512 bits is unusually
strong, particularly when Roughtime's intended use-case is so ephemeral
(there's a comment in the OpenTimestamps source noting that for timestamping
even MD5 doesn't have any practical attacks yet). Going from 512-bit hashes to
256-bit would make batches of 1,048,576 doable for the same packet size, which
could make the difference between a slow HSM being usable as the signer or not.
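Back-of-the-envelope, assuming the per-level sibling hashes are the only part
of the response that grows with tree depth, this just restates the 1,024 vs
1,048,576 figures above:

    PATH_BUDGET = 10 * 64          # a depth-10 path of 64-byte SHA-512 hashes = 640 bytes
    depth_256 = PATH_BUDGET // 32  # the same 640 bytes holds 20 levels of 32-byte hashes
    print(2**10, 2**depth_256)     # 1,024 vs 1,048,576 leaves per signature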
Equally, you've declared that the protocol and infrastructure are subject to
change and upgrades, so that sets expectations appropriately if you do need
to upgrade to more bits in the future.
But I may be biased: being a Bitcoin guy, if SHA256 fails I'm going to have a
very bad day indeed. :)