I'm working on building an update system and I've been playing with TUF for a bit. I like the design and how well thought out it is. And it's fairly easy to use. I do have some comments and questions on it, though.
I'm currently playing with building a manifest-oriented update system. Basically, you have sequentially numbered manifest files on the repo, each listing a set of files. The client fetches the latest manifest from the repo, compares it with its current one, downloads any new or updated files, and then calls a program with the lists of new, updated, and deleted files from the manifest. This could be used to deliver updates for package manager-based systems (like rpm or rpm-ostree), delta-based systems (like ostree), or even full-image-based systems. It should be fairly flexible and a step easier to use than TUF itself.
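The client-side comparison step could be sketched roughly like this. This is just an illustration, assuming each manifest is a mapping from file path to content hash (the manifest format and names here are hypothetical, not part of TUF):

```python
# Sketch of the client-side manifest comparison described above.
# Assumes each manifest maps file paths to content hashes; the
# manifest format and field names here are hypothetical.

def diff_manifests(current: dict, latest: dict):
    """Return (new, updated, deleted) file lists between two manifests."""
    new = [p for p in latest if p not in current]
    updated = [p for p in latest if p in current and latest[p] != current[p]]
    deleted = [p for p in current if p not in latest]
    return new, updated, deleted

current = {"app.bin": "aaa", "conf.txt": "bbb"}
latest = {"app.bin": "ccc", "extra.dat": "ddd"}
new, updated, deleted = diff_manifests(current, latest)
# new == ["extra.dat"], updated == ["app.bin"], deleted == ["conf.txt"]
```

The three lists would then be handed to the post-update program mentioned above.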
I have looked at Uptane, but that seems very focused and not very flexible.
Now my comments.
TUF is not an update framework. It is an authenticated file transfer mechanism with protection against many known attacks. You can use it to build an update system, but it could be used for a lot of things besides that. I'm not sure what to do with this comment, but it's an observation, I guess.
repo.py and client.py are terrible names for the scripts. I've renamed them to tuf-repo and tuf-client in my Yocto recipes; could the same be done in TUF itself?
I know it's been mentioned before, but a C implementation would be quite handy for small systems. I guess add a vote for that.
There appears to be no way to update the timestamp or snapshot expiry time with the repo tool. You can write a little script to do it easily enough, but that seems like something that should be built into the repo tool. Am I missing something?
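The "little script" in question only has to bump the version and expiry in the signed portion of the metadata and then re-sign it. A minimal sketch of that step, operating on a timestamp.json-style dict (the re-signing with the timestamp key is deliberately omitted here, since that depends on how keys are stored):

```python
import datetime
import json

# Bump the version and expiry of the signed portion of a
# timestamp.json-style metadata dict.  The result must still be
# re-signed with the timestamp key (omitted); the "version" and
# "expires" field names follow the TUF metadata format.

def bump_expiry(signed: dict, days: int, now=None) -> dict:
    now = now or datetime.datetime.now(datetime.timezone.utc)
    expires = now + datetime.timedelta(days=days)
    signed["version"] += 1
    signed["expires"] = expires.strftime("%Y-%m-%dT%H:%M:%SZ")
    return signed

signed = {"_type": "timestamp", "version": 3, "expires": "2020-01-01T00:00:00Z"}
fixed_now = datetime.datetime(2020, 1, 2, tzinfo=datetime.timezone.utc)
print(json.dumps(bump_expiry(signed, days=1, now=fixed_now)))
```

Having this (plus the signing) available as a single repo-tool command would remove the need for everyone to reinvent it.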
I'd like to write up some instructions on how to do all this with air-gapped systems. But I'm not quite sure of the procedure for updating a repository without causing possible glitches on the clients, nor am I 100% clear on proper key management. It seems like, for best security, you would have three systems:
root-system ---> build-system ---> web-server
In my mind, the root system(s) is where you create the keys (allowing multiple signing keys on different root systems, to survive the compromise of a root key). This system is only used for initial creation and when keys are compromised or about to expire. The root key(s) stay there and are never transferred anywhere else (except for secure backup). You would then transfer everything but the root key(s) to the build system, which takes the output of a build, adds everything to the repository, and puts the client part of the metadata into the final initial target image. You would then transfer the repository to the web server along with the timestamp key. The web server periodically updates the timestamp file.
What is the best way to do the transfer to the web server? It seems like you could get into a situation where a client fetches the metadata while it is being updated and ends up confused, either getting a partial metadata file or a mix of files from different metadata versions. In fact, looking at the code, that looks possible even when the repo tool copies the metadata over from staging. Consistent snapshots don't seem to solve this problem; they seem to solve a different one.
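One way I can imagine avoiding the partial-metadata window is to never update files in place: write the new metadata into a fresh directory and swap a "current" symlink with an atomic rename, so the web server always serves a complete set from one version or the other. A sketch of that idea (paths and layout are my own invention, not anything the repo tool does today):

```python
import os

# Publish a new metadata directory by atomically swapping a "current"
# symlink.  The new metadata is written into its own versioned
# directory first; clients following repo_root/current then always see
# a complete, internally consistent set.  Layout is illustrative only.

def publish(repo_root: str, new_dir: str) -> None:
    """Atomically point repo_root/current at new_dir."""
    tmp_link = os.path.join(repo_root, ".current.tmp")
    if os.path.lexists(tmp_link):
        os.remove(tmp_link)
    # Create the symlink under a temporary name, then rename it into
    # place; rename() is atomic on POSIX filesystems.
    os.symlink(new_dir, tmp_link)
    os.replace(tmp_link, os.path.join(repo_root, "current"))
```

A client mid-download could still mix files across versions unless file names are versioned too, which is why I'm asking whether that's the problem consistent snapshots are actually meant to solve.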
What is the procedure for key update? If a key has been compromised or is about to expire, how do you handle that?
I'm not sure where the snapshot key and file update fall in all this. Its default expiry is one week, so there probably needs to be some sort of automated system to update it. But I don't think you want that on the web server, right? How is that expected to work?
Sorry for all the newbie questions.
-corey