As you probably know, in Qubes Beta 1 we have introduced firewalling
support. In a default installation there is a dedicated (small) VM that
does firewalling for all the other domains (AppVMs) in the system.
[ domain 1 ] ---\
[ domain 2 ] ----\
... ---- [ firewall VM ] --- [ net VM ] --- Internet
[ domain N ] ----/
Thanks to our new template sharing scheme, it's possible to have even
more than one net- and firewall- VM (although most users would need just
one of each).
Let me first explain what firewalling in Qubes is, and what it is *not*
supposed to provide...
The primary reason for introducing firewalling in Qubes is to protect
the user from her own mistakes. For example, I might like to ensure that
my 'work' domain can only do networking to my mail server and nothing
more. This is to ensure that if I get an email with an http link inside
and I accidentally click on it, the link will *not* cause the
default browser in 'work' domain to connect to some
god-knows-what-website. So, this is all about *ensuring that some
(trusted) domains cannot do untrusted networking*.
Another, closely related job of our firewall, is to *ensure that some
trusted actions can only be done from select (trusted) domains*.
Imagine, for example, that a company has a special server for keeping
all the products' blueprints (say HDL sources for new processors).
Obviously one would like to assure that those employees who were granted
access to this sensitive server actually access it only from trusted
domains, and not e.g. from some 'red' domain used for random web
browsing. Again, this is where our firewall comes in handy.
In a future version of Qubes, we could easily imagine how the
firewalling policy is centrally managed by the corporate IT staff. This
could be combined with a trusted boot (that we currently don't have yet)
and remote attestation, so that only Qubes machines with properly
applied firewalling policy could actually connect to the "blueprint"
server in the corporate intranet. In contrast to other OSes, using
remote attestation for Qubes would actually make sense (on Windows, Mac,
or Linux, it makes little sense, because it's so easy to subvert the
TCB, and because there is no GUI isolation, at least on Linux and Mac).
Now, what is the firewalling in Qubes *not* supposed to provide?
Well, it is not supposed to be a *leak-prevention* mechanism! So, if my
'work' domain has got compromised somehow (e.g. somebody sent me a
GPG-encrypted message that exploited a hypothetical bug in the gnupg
process), then the firewall will *not* prevent the (smart) attacker from
leaking data out of this compromised domain, especially if the attacker
has compromised at least one other domain in my system, e.g. 'red' or
'netvm' -- both of which are normally assumed to be untrusted.
There are lots of ways to build *cooperative* covert channels between
two domains running on Qubes. From straightforward network-based covert
channels (say via traffic modulation) to more exotic ones via CPU cache.
Such covert channels cannot be eliminated on x86 hardware; they can
only be minimized. But this, in turn, requires special modifications to
e.g. scheduling that essentially make your system totally unusable.
So, while the firewalling in Qubes might make leaking data out of your
compromised domain a bit harder than usual, you should not count on it
-- it can always be bypassed if the attacker invests enough time
into building a sophisticated covert channel.
Besides, don't forget that in the example above with 'work' domain and
email compromising it via GPG bug, the attacker could easily leak all
the data by... just sending an email to herself!
So, don't get too excited about cooperative covert channel prevention,
as in most practical cases there will be other ways around it anyway!
If your email client gets compromised, then you will likely lose your
emails, no matter what. What Qubes offers is that the data in
your other domains will still be safe.
So, after this somewhat long introduction of what firewalling in Qubes
is, and is not, supposed to do, I would like to discuss some current
limitations, and whether we should attempt to do something to address
them in Beta 2, or not.
The number one problem(?) that strikes me is the inability to set
firewall rules by DNS names, instead of IP address (yea, I know, I know,
DNS is insecure). This is really annoying in cases of all the popular
services on the internet that resolve to god-knows-how-many IP
addresses, a different one with each of your requests (load balancing).
To give you a specific example: I would like to be able to limit my
'work-admin' domain to access only aws.amazon.com and
console.aws.amazon.com, and only through HTTPS (or ideally just:
But this is not possible currently, because even though I can provide
DNS names in the firewall rules editor dialog box, those DNS names will
be resolved by iptables at the moment of inserting them. But
aws.amazon.com resolves to a whole lot of different IP addresses, so you
will quickly find out that the rules you have in iptables are not valid
for the newer IP addresses.
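To make the snapshot problem concrete, here is a minimal Python sketch (the hostnames, IPs, and rule format are purely illustrative, not the actual Qubes rule syntax): a hostname-based rule gets expanded into concrete per-IP rules exactly once, at insert time, so any IPs the name resolves to later are simply not covered.

```python
def expand_rule(hostname, resolve):
    """Expand a hostname-based rule into concrete per-IP rules.

    `resolve` is a function hostname -> list of IP strings; in the
    firewall VM this resolution happens once, when the rule is inserted.
    """
    return ["-A FORWARD -d %s -j ACCEPT" % ip for ip in sorted(resolve(hostname))]

# Simulate a load-balanced name that returns different IPs over time.
snapshots = [["1.2.3.4", "1.2.3.5"], ["1.2.3.6", "1.2.3.7"]]
resolver = lambda name: snapshots.pop(0)

rules_at_insert_time = expand_rule("aws.amazon.com", resolver)
ips_seen_later = set(resolver("aws.amazon.com"))

# None of the later IPs are covered by the frozen rules:
covered = {r.split()[3] for r in rules_at_insert_time}
print(sorted(covered))           # ['1.2.3.4', '1.2.3.5']
print(covered & ips_seen_later)  # set()
```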
The same applies to e.g. bank websites. Most banks provide one DNS
name that resolves to many IPs. And I would like to limit my 'banking'
domain to be able to talk only to http://mybank.com.pl and nothing more.
Currently this cannot be done reliably in most cases.
What I currently do instead is just limit those domains to https
traffic. That kind of sucks.
The question is: should we attempt to solve this problem, and if so,
what would be the best way to do that? Note that, unlike personal
firewalls for Windows or Mac, we cannot hook into the TCP/IP stack,
because firewalling is done in a separate VM.
Oh, BTW, and why do we do firewalling in a separate VM, and not, e.g. in
the default net VM? The reasoning here is that net VM is assumed to be
easy-to-compromise, and we generally don't trust it. And, the firewall
logic is supposed to protect the user from mistakes, as described above,
and so is kind of trusted. Thus, we can easily imagine the following
attack:
1) attacker compromises the net VM, e.g. via a bug in DHCP there (like
the recent one announced last week),
2) the attacker then modifies the firewall rules (if they were kept in
the netvm) to allow all net access from 'work' domain,
3) finally the attacker sends me an email with an embedded link to some
malicious website that can exploit some new 0day in firefox.
4) Myself, being absent-minded and all, I click on the link and get my
'work' domain compromised. Boom!
Yes, that's a partly social-engineering attack, yet it would be nice to
somehow prevent it. That's why we decided to keep firewall enforcing
logic in a separate VM.
But now, on second thought, perhaps if we allowed on-the-fly DNS
resolving in the firewall (so essentially allow filtering by DNS names
instead of by IP), then we would essentially allow for a similar attack
scenario as described above? The attacker, who controls DNS (which they
do in any hotel/airport network), can now make smtp.mail.com
resolve to nastytrap.ru, and then lure me into opening a link
that leads there...
So, the logic behind leaving the firewalling IP-only is this -- if we
cannot really come up with reliable DNS-based filtering, then we should
not attempt to offer any such filtering, because it won't be any good?
But then, on the other hand, IP-based filtering is not much better --
unless I allow only IPsec, there is nothing that could prevent the
attacker sitting in the airport lounge with me, who does MITM on my
connection, from fully bypassing IP-based firewalling (e.g. by sending me
an email with an embedded link to http://nastytrap.ru:25, then returning a DNS
response which resolves to the IP of my mail server, and then returning
some malicious HTML content for my browser's request to that IP).
Again, these are all partly social-engineering attacks that also require
attackers to do MITM, and so affect road warriors the most, and not
people working in their safe offices? So, perhaps we should not worry
THAT much about them, and just introduce DNS-based filtering?
So, what do you think? :)
This is a nice and cool thing; I just have several questions and
suggestions:
- service VM colours
Is it possible to change them? For me it is confusing to use the same
color for a normal VM and a service VM (dom0, firewall, netvm). It
would be nice to have a unique color for those.
- firewall rules
First of all, I can't use the firewall editor :o It is asking for a
password -- of what? (maybe I just missed something?)
If you want filtering and implement some rules for all of your domains,
there will be a huge mess soon. To prevent that we should use custom
sub-chains for the real filtering, so the FORWARD chain can be predefined.
In this case we need only this kind of rule in the FORWARD chain:
target proto src dst
domain1-inet all <domain1 net> 0/0
domain2-inet all <domain2 net> 0/0
And insert the real rules in the sub-chains...
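A quick sketch of generating that layout (chain names and subnets are placeholders, not actual Qubes conventions): one sub-chain per domain, with the FORWARD chain only dispatching by source subnet.

```python
def build_chains(domains):
    """Emit iptables commands for a static FORWARD chain that dispatches
    each domain's traffic into its own sub-chain, where the real
    per-domain filtering rules then live."""
    cmds = []
    for name, subnet in domains:
        chain = "%s-inet" % name
        cmds.append("iptables -N %s" % chain)
        # The FORWARD chain itself never changes: it only dispatches.
        cmds.append("iptables -A FORWARD -s %s -j %s" % (subnet, chain))
    return cmds

for cmd in build_chains([("domain1", "10.1.1.0/24"), ("domain2", "10.1.2.0/24")]):
    print(cmd)
```

Editing a domain's policy then only touches its own sub-chain, never the shared FORWARD chain.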
Another question here: are the domain nets/IPs customizable? If
yes, where can I do this?
More on the firewall rules: we need logging. It is nearly pointless to
drop packets without logging them.
> So, what do you think? :)
Your question was about the filtering based on DNS instead of IP...
I think if you want web filtering we should use a web proxy (Squid or
an application-level firewall).
In this case you can use a whitelisting method for the restricted
domains, and blacklisting for any other.
But in this case we need a user-friendly whitelist/blacklist 'manager'
to do the job instead of modifying the rules by hand.
We can also do more by using other features like:
- premade content filters
- virus scanning
On the other hand we can't make it too difficult to use because it
will become unmanageable...
While writing this mail I was using my fresh Qubes R1 ;) - thanks.
> The number one problem(?) that strikes me is the inability to set
> firewall rules by DNS names, instead of IP address (yea, I know, I know,
> DNS is insecure). This is really annoying in cases of all the popular
> services on the internet that resolve to god-knows-how-many IP
> addresses, a different one with each of your requests (load balancing).
Load balancing DNS in high demand networks has always been a big problem
with firewall rules IMHO. For a desktop environment usability is a major
factor so symbolic names could go a long way to help the average person
make proper use of a firewall system, so this idea of DNS resolution
deserves some consideration. But then DNS is generally untrustable
without DNSSEC, so the question is how far can you go without
deliberately falling into any nefarious DNS traps. Let me just throw some
ideas out there and you can tell me where I'm making any big mistakes. ;)
> To give you a specific example: I would like to be able to limit my
> 'work-admin' domain to access only aws.amazon.com and
> console.aws.amazon.com, and only through HTTPS (or ideally just:
I think the key here is to have a (semi)trusted way of doing a reverse
IP lookup via the authoritative server for that domain and matching that
IP to the proper DNS domain name rule. Once the DNS host information is
found to match a symbolic firewall rule then a dynamically added rule
could permit subnet access to that domain. But *only* for given
organizations which are semi-trusted and preconfigured for that VM.
(e.g. BANKING-VM:*.BANK.COM, SURFING_VM:*.google.com)
This trust mechanism requires several things, mainly a way to know which
domains are allowed to be dynamically processed symbolically, and which
DNS authoritative servers can be trusted for that domain. One should not
believe just any DNS answer (e.g dns-anycast), and some form of DNS
configuration checking should be used to verify that these servers
appear to be legitimate and still configured properly.
> But this is not possible currently, because even though I can provide
> DNS names in the firewall rules editor dialog box, those DNS names will
> be resolved by iptables at the moment of inserting them. But
> aws.amazon.com resolves to a whole lot of different IP addresses, so you
> will quickly find out that the rules you have in iptables are not valid
> for the newer IP addresses.
Suppose at bootup the firewall system reads a table of preconfigured
website domains, and for each one you build a DNS symbolic firewall rule
and then query and cache the authoritative DNS server list for that
domain (or load, cache, compare it to prior knowledge). When an outbound
request happens an IP reverse lookup is performed and the results
matched to any symbolic rules. If a rule is matched, the authoritative DNS
server is contacted directly and a query checks that the authoritative
forward DNS resolution also matches the IP. If everything matches then
the symbolic rule adds the subnet IP dynamically to the firewall
iptables rule list for a period of time.
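A rough Python sketch of that matching step, with the reverse and authoritative forward lookups injected as plain functions (in reality these would be real DNS queries, and the wildcard rule syntax is just an assumption of mine):

```python
from fnmatch import fnmatch

def check_outbound(ip, symbolic_rules, reverse_lookup, authoritative_lookup):
    """Return the matching symbolic rule if `ip` should be dynamically
    permitted, else None.

    reverse_lookup(ip)         -> hostname claimed for this IP (PTR)
    authoritative_lookup(host) -> set of IPs per the domain's own servers
    """
    host = reverse_lookup(ip)
    for rule in symbolic_rules:
        if fnmatch(host, rule):
            # Cross-check: the authoritative forward resolution must
            # also map the name back to this very IP.
            if ip in authoritative_lookup(host):
                return rule
    return None

# Toy lookups standing in for real DNS queries:
reverse = {"72.21.0.5": "aws.amazon.com"}.get
forward = lambda host: {"aws.amazon.com": {"72.21.0.5", "72.21.0.6"}}.get(host, set())

print(check_outbound("72.21.0.5", ["*.amazon.com"], reverse, forward))
print(check_outbound("1.2.3.4", ["*.amazon.com"], lambda ip: "evil.ru", forward))
```

The first call matches and would trigger a dynamic permit rule; the second fails at the wildcard match and stays blocked.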
This dynamic step would of course mean an initial lag-time with setting
up any new outbound connections, but once the first connection is
established to that server farm subnet the rest of the transactions will
still be quick. This special rule processing could be performed for just
those connections which are about to be rejected by ipfilter for lack of
a proper permit rule. To keep the overall firewall tables optimized
there could always be a configurable conntrack-style timeout mechanism
to determine when the dynamic ipfilter rules should be removed.
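The timeout bookkeeping itself is simple; a sketch (the TTL value and data structures are arbitrary choices of mine):

```python
import time

class DynamicRules:
    """Track dynamically added per-IP permits and expire them after
    `ttl` seconds, so the iptables rule list stays small."""
    def __init__(self, ttl=300, clock=time.monotonic):
        self.ttl, self.clock = ttl, clock
        self.expiry = {}  # ip -> absolute expiry time

    def permit(self, ip):
        self.expiry[ip] = self.clock() + self.ttl

    def prune(self):
        """Return the IPs whose dynamic rule should now be removed."""
        now = self.clock()
        expired = [ip for ip, t in self.expiry.items() if t <= now]
        for ip in expired:
            del self.expiry[ip]
        return expired

# With a fake clock we can watch a rule expire:
t = [0.0]
rules = DynamicRules(ttl=300, clock=lambda: t[0])
rules.permit("72.21.0.5")
t[0] = 301.0
print(rules.prune())  # ['72.21.0.5']
```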
> Same applies to e.g. bank websites. Most banks provide one DNS
> name that resolves to many IPs. And I would like to limit my 'banking'
> domain to be able to talk only to http://mybank.com.pl and nothing more.
> Currently this cannot be done in most cases reliably.
> What I currently do instead is just limit those domains to https
> traffic. That kind of sucks.
For banking an 'https only' system may be OK, but besides the fact that
not all sites provide https there are also sites like 'Google search'
where you will have limited capability if you use https (e.g. no
optional "Web,Images,Videos,Maps,News,Shopping,Gmail,more" links). Just
because you can get to a site doesn't mean you can do what you need. On
the other hand, having a setting for https-only as an optional
restriction makes perfect sense for banking!
> The question is: should we attempt to solve this problem, and if so,
> what would be the best way to do that? Note that, unlike personal
> firewalls for Windows or Mac, we cannot hook into the TCP/IP stack,
> because firewalling is done in a separate VM.
My suggestions above do have problems in that a man-in-the-middle attack
could always redirect the DNS through address translation and even
though you think you are talking to the authoritative DNS server you
could be wrong. There may still be ways to check for authenticity with
the DNS server that may help detect that there is a man-in-the-middle,
but using TCP and talking directly to the known/stored DNS authoritative
server should be a good first step (as opposed to dns-anycast, or
general UDP). I would have to study this more at home because apparently
my workplace only allows dns-anycast (really really dumb) which is not
returning any authoritative information at all no matter what dig
commands I use.
In the current implementation you don't edit the iptables rules directly.
Firewall rules are assigned on a per-VM basis and stored in a simple XML
file in the VM directory. This is the file you edit using the
qubes-manager firewall rules editor.
Then the rules of all VMs connected to a firewall VM are processed to
generate the iptables rules loaded in the firewall VM.
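To illustrate that "XML in, iptables out" processing step, here is a hedged sketch -- the element and attribute names below are invented for illustration, the actual Qubes per-VM XML schema may well differ:

```python
import xml.etree.ElementTree as ET

# Hypothetical per-VM rules file; the real Qubes schema may differ.
RULES_XML = """
<QubesFirewallRules policy="deny">
  <rule address="198.51.100.10" proto="tcp" port="443"/>
  <rule address="198.51.100.11" proto="tcp" port="25"/>
</QubesFirewallRules>
"""

def to_iptables(xml_text, src_ip):
    """Translate one VM's XML rule set into iptables commands,
    closing with a default-deny rule for that VM's source IP."""
    root = ET.fromstring(xml_text)
    cmds = []
    for rule in root.findall("rule"):
        cmds.append("iptables -A FORWARD -s %s -d %s -p %s --dport %s -j ACCEPT"
                    % (src_ip, rule.get("address"), rule.get("proto"), rule.get("port")))
    if root.get("policy") == "deny":
        cmds.append("iptables -A FORWARD -s %s -j DROP" % src_ip)
    return cmds

for cmd in to_iptables(RULES_XML, "10.1.1.5"):
    print(cmd)
```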