On Tue, Sep 22, 2015 at 09:52:59PM +0000, Axon wrote:
Short Answer:
Template - UpdateVM - TorVM - sys-net
Long answer:
I use a Debian minimal template with qubes-tor and tor-arm installed.
Follow the usual steps in creating the TorVM, as per the Qubes docs.
I make some minor changes:
Set the memory as low as possible and limit vcpus.
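For instance (the VM name and values are just illustrative):
qvm-prefs -s torvm memory 200
qvm-prefs -s torvm maxmem 200
qvm-prefs -s torvm vcpus 1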
Use this torrc in /rw/config/qubes-tor:
SocksPort "10.137.x.x:9049 IsolateClientAddr IsolateSOCKSAuth
IsolateDestPort IsolateDestAddr"
SocksPort "10.137.x.x:9050 IsolateClientAddr IsolateSOCKSAuth"
TransPort "10.137.x.x:9040 IsolateClientAddr"
DNSPort "10.137.x.x:53 IsolateClientAddr IsolateSOCKSAuth"
ControlPort "9051"
VirtualAddrNetworkIPv4 "
172.16.0.0/12"
(This opens the control port so I can use arm for monitoring and
control.)
I don't want clear traffic FROM the TorVM, and I only want torified
traffic, so I customize /usr/lib/qubes-tor/start_tor_proxy.sh to load
this iptables script:
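# nat: DNAT client (vif+) DNS and TCP to Tor's DNSPort, SocksPorts and TransPort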
*nat
:PREROUTING ACCEPT [64:4864]
:INPUT ACCEPT [17:1150]
:OUTPUT ACCEPT [5:300]
:POSTROUTING ACCEPT [5:300]
:PR-QBS - [0:0]
:PR-QBS-SERVICES - [0:0]
-A PREROUTING -i vif+ -p udp -m udp --dport 53 -j DNAT --to-destination 10.137.x.x:53
-A PREROUTING -i vif+ -p tcp -m tcp --dport 9049 -j DNAT --to-destination 10.137.x.x:9049
-A PREROUTING -i vif+ -p tcp -m tcp --dport 9050 -j DNAT --to-destination 10.137.x.x:9050
-A PREROUTING -i vif+ -p tcp -j DNAT --to-destination 10.137.x.x:9040
COMMIT
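# filter: default-deny; clients may reach only the Tor ports above, and
# only the tor daemon may open new connections to the outside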
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT DROP [0:0]
-A INPUT -i vif+ -p udp -m udp --dport 53 -j ACCEPT
-A INPUT -i vif+ -p tcp -m tcp --dport 9040 -j ACCEPT
-A INPUT -i vif+ -p tcp -m tcp --dport 9050 -j ACCEPT
-A INPUT -i vif+ -p tcp -m tcp --dport 9049 -j ACCEPT
-A INPUT -i vif+ -p udp -m udp -j DROP
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A OUTPUT -m state --state INVALID -j DROP
-A OUTPUT -m conntrack --ctstate INVALID -j DROP
-A OUTPUT ! -s 127.0.0.1/32 ! -d 127.0.0.1/32 ! -o lo -p tcp -m tcp --tcp-flags RST,ACK RST,ACK -j DROP
-A OUTPUT ! -s 127.0.0.1/32 ! -d 127.0.0.1/32 ! -o lo -p tcp -m tcp --tcp-flags FIN,ACK FIN,ACK -j DROP
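# uid 106 is the tor daemon user (debian-tor) in this template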
-A OUTPUT -p tcp -m owner --uid-owner 106 -m tcp -j ACCEPT
-A OUTPUT -m owner --uid-owner 106 -j DROP
-A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -o vif+ -j ACCEPT
-A OUTPUT -o lo -j ACCEPT
-A OUTPUT -j LOG --log-prefix "DROP OUT "
COMMIT
(I actually do this by placing new versions of the files in /rw/config
and moving them in place in rc.local, but you could make these changes
in the template.)
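A minimal sketch of that rc.local (keeping the customized script in
/rw/config under a name of my choosing):
#!/bin/sh
# /rw/config/rc.local - runs at VM start
# overwrite the template-provided script with the customized copy
cp /rw/config/start_tor_proxy.sh /usr/lib/qubes-tor/start_tor_proxy.sh
chmod 755 /usr/lib/qubes-tor/start_tor_proxy.sh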
That's the normal iptables setup, but restricting outbound traffic from
the TorVM to traffic from the tor daemon itself. It's the same approach
as used in Tails.
I use a standard proxy VM as the UpdateVM, connect it to the TorVM, and
use it as the netvm for the templates. Make this the UpdateVM for Dom0 too.
This gives torified updates for templates and Dom0.
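Something like this (VM names are illustrative):
qvm-prefs -s update-proxy netvm torvm        # chain the proxy VM behind the TorVM
qvm-prefs -s fedora-21 netvm update-proxy    # point a template at it
qubes-prefs -s updatevm update-proxy         # make it the UpdateVM for Dom0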
Because all traffic runs through the TorVM, it's trivial to use .onion
addresses too.
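For instance, a template's apt source can point straight at an onion
mirror (the address below is a placeholder, and this assumes the base
qubes-tor config enables AutomapHostsOnResolve so the DNSPort can map
.onion names into the VirtualAddrNetworkIPv4 range):
deb http://xxxxxxxxxxxxxxxx.onion/debian jessie main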
All this just works thanks to the great work done by Marek and abeluck.
Sometimes there are network issues - this is inevitable when updating
across Tor. I usually find that kicking the Tor service from arm fixes
them.
I've been running variations on this for some time without significant
problems. Traffic monitoring shows no leaks.
Other stuff:
On my dev machine I put a standalone VM running apt-cacher in line between
the templates and the updateVM to act as a caching proxy. I use a
NAT rule to redirect traffic for the update proxy (10.137.255.254) to
apt-cacher, and have apt-cacher configured to use the updateVM as its
upstream proxy.
Sometimes the Fedora templates complain about this. I just switch the
netvm to the UpdateVM, run a yum update but stop before the actual
download, then switch the netvm back to apt-cacher and replay the
transaction to run the update through the caching proxy.
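The redirect is one rule in the apt-cacher VM, of the same shape as those
above (apt-cacher's default port 3142 and the placeholder address are
assumptions; 8082 is the Qubes update proxy port):
iptables -t nat -A PREROUTING -i vif+ -d 10.137.255.254 -p tcp --dport 8082 -j DNAT --to-destination 10.137.x.x:3142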
I use a torfw to enforce separation between VMs using network-level
policies.
As standard, the fw will MASQUERADE all connected VMs to the same IP
address, which causes problems on the Tor isolation front: IsolateClientAddr
can no longer tell the client VMs apart.
To get round this I use custom NAT rules to map the connected VMs to
distinct addresses, and an rpc service triggered on if-up to set routing
on the TorVM and to manipulate the raw table.
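A sketch of the NAT half of that (addresses are illustrative):
# in the torfw: give each client VM its own source address instead of a
# shared MASQUERADE, so IsolateClientAddr can still tell them apart
iptables -t nat -A POSTROUTING -s 10.137.2.5 -o eth0 -j SNAT --to-source 10.138.0.5
iptables -t nat -A POSTROUTING -s 10.137.2.6 -o eth0 -j SNAT --to-source 10.138.0.6
# the TorVM then needs routes back to 10.138.0.0/24 via this fw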
That's really a separate topic.
The Qubes networking model is hugely flexible. As a matter of policy I
try to leave the VM unchanged and handle any configuration required in
the netvm. This means that it is easy to change the netvm and still have
networking work. (For example, switching between VPN and normal traffic,
or between Tor and clear.)
If I had time I could make this shorter.
I hope it's fairly clear.
unman