|Inverse-replay or fast-forward attack||Trishank Karthik Kuppusamy||9/30/13 12:49 PM|
So I think there is something like an "inverse replay attack", best
illustrated by an example:
1. A release is made at version 1.
1.1. A timestamp is made at version 1, which signs for release version 1.
2. A new TUF client downloads timestamp, sees a release for version 1.
3. Simultaneously, a new release is made, and its version is incremented to 2.
3.1. A new timestamp is made, and its version is incremented to 2.
4. The client from (2) downloads release 2 when it expects release 1.
4.1. The client throws a BadHashError because it expects release 1.
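The race in the steps above can be sketched as follows. This is a simplified stand-in, not real TUF code; the metadata contents and helper names are illustrative:

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Step 1: repository publishes release v1; timestamp v1 signs for its hash.
release_v1 = b'{"_type": "release", "version": 1}'
timestamp_v1 = {"version": 1, "release_hash": sha256(release_v1)}

# Step 2: a client fetches timestamp v1 and records the expected hash.
expected_hash = timestamp_v1["release_hash"]

# Step 3: a new release is published before the client finishes updating;
# release.txt on the server is silently replaced with v2.
release_v2 = b'{"_type": "release", "version": 2}'
repository_release = release_v2

# Step 4: the client downloads release.txt and checks it against timestamp v1.
actual_hash = sha256(repository_release)
if actual_hash != expected_hash:
    # The client cannot distinguish this benign race from tampering,
    # so it reports the equivalent of a BadHashError.
    print("BadHashError: release.txt does not match timestamp")
```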
So this is probably best called a "fast-forward problem". It happens when
metadata is updated so quickly that a "non-atomic read transaction" fails
erroneously: the client suspects an arbitrary metadata attack, when
actually the metadata has simply been updated underneath it.
Would the problem be solved if each metadata file included, besides the
length and hashes, the version numbers of the metadata files it signs
for? That way, if a TUF client sees a properly signed metadata file, but
also sees that its version number has increased by the time it reads it,
it can retry the update process instead of suspecting an arbitrary
metadata attack.
Does this all make sense?
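As a rough sketch of that proposal (the function and field names here are hypothetical, not from the TUF spec): if timestamp carries the release version it signs for, a hash mismatch paired with a *higher* downloaded version can be treated as "repository moved on, retry" rather than as an attack.

```python
def check_release(signed_version: int, downloaded_version: int,
                  expected_hash: str, actual_hash: str) -> str:
    """Classify the outcome of verifying release.txt against timestamp."""
    if actual_hash == expected_hash:
        return "ok"
    if downloaded_version > signed_version:
        # Metadata was legitimately updated mid-transaction: restart the
        # update from a fresh timestamp instead of raising an error.
        return "retry"
    # Hash mismatch without a version bump still looks like tampering.
    return "arbitrary-metadata-attack"
```

On "retry", the client would go back and download a fresh timestamp.txt before trying again.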
|Re: [tuf] Inverse-replay or fast-forward attack||jcappos||9/30/13 3:15 PM|
Yes. The problem makes sense.
One potential solution: assuming we can change the file layout, we could always include the secure hash in the file name. That way, a client retrieving a given metadata file by its hash-qualified name gets exactly that file.
(I was going to talk with you guys about this in the meeting and alluded to it a few times. I've been holding off while we finished demo2.)
|Re: [tuf] Inverse-replay or fast-forward attack||Donald Stufft||9/30/13 3:17 PM|
FWIW I want to put a sha256 hash in the PyPI file urls anyways.
|Re: [tuf] Inverse-replay or fast-forward attack||Trishank Karthik Kuppusamy||10/5/13 2:56 PM|
The more I think about it, the more I like Justin's idea of addressing metadata or target by its hash.
Here is a simple change to the TUF specification that will accommodate this idea.
(Assume that file reads and writes are exclusive; i.e. no one can write to a file that is being read, or read a file that is being written.) The first step of updating with TUF is to download timestamp.txt; this remains unchanged. However, recall that timestamp.txt contains the hashes of release.txt. The client would then have the option of downloading release.txt by requesting it under the SHA-256 hash that timestamp.txt lists for it.
This is a signal to the TUF repository at example.com to return a file (release.txt in this case) with that SHA256 hash. TUF will be agnostic with respect to the choice of key-value store used to implement this.
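The client side of that could look roughly like this. The URL scheme (`https://example.com/<sha256>.release.txt`) is an assumption for illustration; the spec change would pin down the exact layout:

```python
import hashlib

def release_url(timestamp_meta: dict) -> str:
    """Build a hash-addressed URL for release.txt from timestamp metadata.
    The metadata shape here is illustrative, not the exact TUF format."""
    digest = timestamp_meta["meta"]["release.txt"]["hashes"]["sha256"]
    return f"https://example.com/{digest}.release.txt"

def verify(content: bytes, url: str) -> bool:
    # The expected digest is recoverable from the URL itself, so the
    # downloaded bytes can be checked against the name they were served under.
    expected = url.rsplit("/", 1)[1].split(".", 1)[0]
    return hashlib.sha256(content).hexdigest() == expected
```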
Everything else from timestamp onwards should be downloadable this way. We can then keep consistent, read-only snapshots of the repository. Eventually, the repository will run out of space to keep new snapshots, so we can use something like a "mark-and-sweep" algorithm to preserve only the contents of the latest release: walk the latest release, mark all visited objects, then delete all unmarked objects. The last few releases may be preserved in the same manner.
Have I missed something here? Any counterargument?