RFC: (Not) Trusting the Qubes RPMs and ISOs


Joanna Rutkowska

May 10, 2011, 1:50:00 PM5/10/11
to qubes...@googlegroups.com
Hello,

Here comes another crazy idea to consider for an upcoming Qubes version.

So far, all the Qubes RPMs/ISOs that we have published have been built
on a dedicated machine that stands in my apartment and over which I, and
only I (at least so I believe), have full control (*).

This dedicated build machine runs Fedora 13 and I use it only for
merging and managing Qubes repos, and for building, signing, and
uploading RPMs and ISOs. (The main reason I use a dedicated machine,
instead of a dedicated AppVM on my laptop, is that I don't like it when
my laptop produces too much heat.)

Anyway, such an approach to building RPMs has at least two obvious
disadvantages:

First, it assumes that I'm not malicious and that the RPMs I sign with
the Qubes Release key and then publish to our yum repository have indeed
been built from the very same sources that we expose in our git repos,
and not from some other, backdoored sources.

Another disadvantage is that having one selected person do all the
final merging and RPM building simply scales poorly. This is especially
important when, e.g., an urgent update must be prepared quickly.

So, ideally, we could make the RPM generation process somehow
distributed. However, the RPMs must always be signed with some key.
Should we just select a few developers and give them access to the Qubes
Release key? Of course not, because then we would need to trust every
single one of them (which is much worse than trusting just one of them).
Besides, it's never good to share one key or password with other people...

Should we then create a dedicated build server, allow selected
developers to SSH there, and expose some script that would generate and
sign RPMs, but only from selected git repos? This way the developers
don't get direct access to the signing key, and can only build and sign
RPMs from the actual git repos they sign with their own keys. They could
still add backdoors by committing offensive stuff to their git repo
(from which the build server would generate RPMs), but the idea is that
it would be _much_ easier to catch such a backdoor at the source code
level than if it were introduced only in a compiled RPM.

The disadvantage of the approach with a dedicated build server is that
we would need to secure such a server properly. This means that we would
suddenly need to invest a lot of work in hardening one of our servers,
both against remote attacks (SSH exploits, anyone?) and, more
importantly, against local privilege escalation. After all, the whole
point here is that each developer has only a limited account and no
access to the actual signing key (to which only the build script has
access). Another disadvantage is that whoever hosts our server (the
hosting provider) can always retrieve the private signing key easily.

Finally, the option that I like the most is a fully distributed
building process. In this case we select a pool of N developers who are
allowed to build and sign RPMs and upload them to the Qubes yum repo.
Then we modify the yum program in Qubes so that it always requires at
least M correct signatures (out of N possible) on each Qubes RPM.
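A minimal sketch of what such an M-of-N check could look like, in Python. This is only a toy model: real GPG detached-signature verification is replaced here by HMAC-SHA256 as a stand-in, and all key names and thresholds are made up for illustration.

```python
import hashlib
import hmac

# Toy stand-in for GPG: each "developer key" is a shared secret, and a
# "signature" is an HMAC-SHA256 of the RPM content under that secret.
def verify_sig(content: bytes, key: bytes, sig: bytes) -> bool:
    expected = hmac.new(key, content, hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)

def accept_rpm(content: bytes, sigs: dict, trusted_keys: dict, m: int) -> bool:
    """Accept the package only if at least m of the trusted developers
    produced a valid detached signature over its content."""
    valid = sum(
        1
        for keyid, sig in sigs.items()
        if keyid in trusted_keys and verify_sig(content, trusted_keys[keyid], sig)
    )
    return valid >= m
```

In the real scheme, verify_sig would of course call out to gpg to verify a detached signature against the developer's public key; only the M-of-N counting logic in accept_rpm is the point here.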

So, each developer generates the RPMs on their own (they agree via
email which repos to build from), then generates detached signatures for
them, and then they all upload those signatures (and one copy of the
RPMs) to the yum repo server, where a little script can combine all
those signatures into one final RPM. The point is that we no longer need
a private signing key on the server, meaning we don't need to trust the
server.
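The server-side "little script" could be as simple as counting the detached signature files that have arrived before publishing. A sketch, assuming a purely hypothetical filename convention (RPM name plus ".sig.<developer>"):

```python
import os

def ready_to_publish(rpm_path: str, sig_dir: str, m: int) -> bool:
    """The untrusted server never signs anything itself; it only counts
    how many detached signature files (one per developer) have arrived
    for this RPM, and releases it once at least m are present."""
    base = os.path.basename(rpm_path)
    sigs = [
        name
        for name in os.listdir(sig_dir)
        if name.startswith(base + ".sig.")  # e.g. foo.rpm.sig.rafal
    ]
    return len(sigs) >= m
```

Note that this script makes no trust decisions at all; the actual M-of-N verification still happens on the client side, so a compromised server can at worst delay or withhold a package.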

A small technical problem is that it might be difficult for different
developers to generate the very same RPMs. Things such as the build host
name, build date, etc., all make each generated RPM somewhat unique. So,
for this to make sense, we would need some little script for extracting
the actual relevant content from the RPM (i.e. the files and install
scripts) and signing only that content.

This way Marek could agree with Rafal to build new RPMs and push them
online, while I peacefully lie on a beach, contemplating the sunset. Or
something like that.

The same process could even be used for ISO building and signing -- in
fact, it should be even simpler, because no rpm/yum code needs to be
modified in this case (just the instructions for ISO signature
verification in the wiki). Also, an ISO is expected to depend much less
on the build environment and build time, so it should be easy for a few
developers to push detached signatures, and for one of them to upload
the ISO.

In the above scheme I still own the Qubes Master key, so I could
theoretically generate a new developer key, use "Rafal Wojtczuk (Qubes
OS Signing Key)" as the key ID, and then use it to sign some malicious
RPMs. And then repeat the same to create a fake "Marek Marczykowski"
signing key. So, theoretically, I would still be able to push malicious
updates. However, it would be trivial for Marek and Rafal to discover
such a scam and alert the community about it. So, I guess I'd rather use
something more sophisticated instead (such as bribing Rafal and Marek
with good sushi).

Unless I'm still missing something?

I'm curious how the build process looks for other OS vendors, both open
source and commercial. I'm pretty sure they just use a build server
where they keep the private signing key (**)?

But maybe I'm wrong and they already use the scheme I just presented
here, or maybe something even better? I'm quite a newbie in this area,
so any comments would be appreciated.

Cheers,
joanna.

(*) This doesn't include the original Fedora RPMs, which we simply get
from Fedora, verify the signatures of, and reuse whenever possible.

(**) Even if they keep the signing key on a smart card, it really
doesn't improve the security of the process much. This is because if
somebody compromised the build server (or simply had unlimited access to
it, such as the admin or the provider), then the malicious software, a
rootkit if you prefer, could just sit and wait until the next time RPMs
are built and the smart card is inserted again, and then simply "ask"
the smart card to sign some malicious RPMs. The smart card never really
knows what it is signing.


nicolas wagrez

May 10, 2011, 3:39:20 PM5/10/11
to qubes...@googlegroups.com
The last option seems to be the best, provided that you can vet
developers properly. After all, you need to find the right number of
developers who cannot be bought, whose machines have not been infected,
and who are available at release time. This might increase the size of
the team by quite a few (part-time?) people.

Stephanie Daugherty

May 10, 2011, 4:22:41 PM5/10/11
to qubes-devel
I'm just a lurker on the list, but I would say that as long as the
build machines and the signing keys are "air gapped" (that is, never
even indirectly accessible from the network) and only vetted developers
sign releases, it should be fine; you'd be exceeding the security of
much of your upstream sources by doing that much.

I don't think it's really necessary to build in parallel and compare
the builds to allow a release (though someone should do so occasionally
as a spot check to keep the vetted code signers honest). Besides, as I
said, the upstream sources pose just as much risk of a rogue
maintainer.

-Stephanie

Radoslaw Szkodzinski

May 10, 2011, 4:56:49 PM5/10/11
to qubes...@googlegroups.com
The distributed option seems to work best.

Other distros typically have a reference build server with an automated
release key, and an entirely separate development key. Releases are
typically vetted by some VIP. The real problem is that a malicious
change by one of the developers won't be detected, as you've already
noticed.

Either way, to publish a package, someone trusted will have to sign it.
For testing purposes, developers (and desperate users) can just exchange
keys.

Of course, a central trusted place to store these keys would be useful,
but it would add a single point of security failure. (There are two
already: the Qubes git server [hard to attack, and tampering is simpler
to spot] and the distribution WWW servers [easier].) Peer-to-peer
(developer-to-tester) key exchange is probably more resilient.

Radosław
