On 12.01.2013 10:22, Alex Dubois wrote:
> ---------- Forwarded message ----------
> From: Alex Dubois <bow...@gmail.com>
> Date: Sat, Jan 12, 2013 at 9:22 AM
> Subject: Re: [qubes-devel] Port forwarding to appvm
> To: Marek Marczykowski <marm...@invisiblethingslab.com>
>
>
> Great, thanks for the heads-up (I was deep into reading systemd man pages,
> which I don't know; I was still on SysV).
>
> I was about to say that it fixed my problem, but in fact the netvm would not
> boot fully (could not launch a terminal) after a reboot of the template, so I
> had to change it back.
It worked for me...
> I ran into difficulties to revert as I could not get a tty.
Try sudo xl console fedora-17-x64
>
> I did:
> qvm-run -u root -p fedora-17-x64 /bin/bash
>
> but as no tty, I was a bit blind...
> managed to fix it by doing:
> cd /usr/lib/systemd/system/
> cp qubes-misc-post.service qubes-misc-post.service.backup
>
> replaced the 3rd line of the file, learning a bit about sed:
> sed '3 c\
> After=qubes-dvm.service' qubes-misc-post.service > qubes-misc-post.service.new
>
> After checking sed did its magic...
>
> cp qubes-misc-post.service.new qubes-misc-post.service
>
>
>
> Do you think putting this would fix it?
> After=qubes-dvm.service iptables.service
>
> I also thought that maybe you would agree to put the qubes-firewall.service
> back into systemd for the netvm so that we could use the hook
> /rw/config/qubes_firewall_user_script
qubes-firewall.service applies per-VM rules controlled by VM settings in dom0.
These rules are exposed via xenstore only to the firewallvm (or any ProxyVM in
general), so it would not work in a netvm without also exposing the rules there
(see below).
Anyway, you can enable any Qubes service in any VM using qvm-service (you need
to enter the exact "qubes-firewall" name). It is also doable in Qubes VM Manager.
> I feel I could help create some qvm commands to expose these services, or
> add it to the Qubes VM Manager, but as I don't know how it is structured,
> some quick pointers would be good.
>
> the command line could be something like:
>
> qvm-service expose/remove -p tcp --dport 443 --from personalVM --to
> netVM/devVM
Great!
This would need the following changes:
1. Introduce some property of the QubesVm class for storing redirections -> qubes.py
2. Expose the rules to netvm and firewallvm somewhere in xenstore -> qubes.py
3. Apply the rules in VMs (netvm, firewallvm and perhaps dest-vm)
There are still some things to design here, at least:
Ad. 1: Where should this property live: in the netvm (as a list of
port->destvm mappings) or in the dest-vm (as a simple list of ports)?
The first approach lets the user expose a port only when using one specific
netvm - e.g. only on the standard netvm (connected to the wired interface),
but not on the usbvm (connected to a 3G modem). It also makes it easier to
detect conflicts.
The second one is much more intuitive, and perhaps (slightly) easier to
implement: rules are propagated "up", so it is easy to find out which
(proxy/net)VM should have the rules applied - remember the user can have many
NetVMs and ProxyVMs!
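The two storage approaches could be sketched roughly as below. This is only an
illustration of the data shapes, not qubes.py code; the attribute names
(netvm_redirects, destvm_redirects) and the conflict check are assumptions of
mine:

```python
# Sketch only: names here are hypothetical, not part of qubes.py.

# Approach 1: store redirections on the netvm, keyed by (proto, port) -> dest VM.
# This allows per-netvm exposure and makes conflict detection a key collision.
netvm_redirects = {
    ("tcp", 443): "personal",   # expose tcp/443, forward to VM "personal"
}

# Approach 2: store a simple port list on the dest VM itself.
# More intuitive; rules are then propagated "up" through the netvm chain.
destvm_redirects = [("tcp", 443)]

def has_conflict(redirects, proto, port):
    """With approach 1, a conflict is just a dictionary key collision."""
    return (proto, port) in redirects
```

With approach 1, rejecting a second VM claiming the same port is a one-line
lookup; with approach 2, the same check has to happen at propagation time.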
Ad. 2: The current "qubes_iptables_domainrules" xenstore directory (for the
firewallvm) contains only the filter table. Some extension/new directory
should be used for the nat table, e.g. "qubes_iptables_nat". It will also
require collecting/distributing all redirections in the netvm and firewallvm
(keeping in mind that someone can have multiple netvms and multiple
firewallvms). Assuming the second approach from the previous point: at VM
startup (and on rule changes), propagate the rules to the "netvm"
(QubesVm.netvm is the VM to which it is connected, which can actually be a
ProxyVM) and reload the rules there (write_iptables_xenstore_entry call).
Note that we need to know where a rule was propagated from, to make the
redirection to the right IP; so this basically builds the same list as in the
first approach of storing redirections (but here built at runtime, in a
separate "internal" attribute). This should then trigger propagating the
rules one more level up and reloading there, recursively up to the final netvm.
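The recursive propagation described above could look roughly like this. This
is a standalone sketch under my own assumptions (the VM class, the
"redirects" and "collected" attribute names are invented for illustration;
the real code would live on QubesVm and call write_iptables_xenstore_entry):

```python
# Hypothetical sketch of propagating redirections "up" the netvm chain:
# each VM pushes its port list to its QubesVm.netvm, which records where
# each rule came from so the DNAT target IP is known at that level.

class VM:
    def __init__(self, name, ip, netvm=None):
        self.name = name
        self.ip = ip
        self.netvm = netvm      # VM this one is connected to (may be a ProxyVM)
        self.redirects = []     # (proto, port) pairs this VM wants exposed
        self.collected = {}     # runtime-only: (proto, port) -> source VM IP

def propagate(vm):
    """Push vm's own and already-collected redirections one level up,
    recursively, until the final netvm is reached."""
    if vm.netvm is None:
        return
    for proto, port in vm.redirects:
        vm.netvm.collected[(proto, port)] = vm.ip
    for key, ip in vm.collected.items():
        vm.netvm.collected[key] = ip
    # The real code would reload iptables rules in vm.netvm here,
    # e.g. via a write_iptables_xenstore_entry call.
    propagate(vm.netvm)

# Example chain: personal -> firewallvm -> netvm (IPs are made up).
netvm = VM("netvm", "10.137.0.1")
fw = VM("firewallvm", "10.137.1.1", netvm=netvm)
personal = VM("personal", "10.137.2.5", netvm=fw)
personal.redirects = [("tcp", 443)]
propagate(personal)
```

After propagate(), both fw and netvm know that tcp/443 should be DNAT-ed
toward 10.137.2.5, which is exactly the runtime "internal" list mentioned
above.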
Some technical details:
- currently "write_iptables_xenstore_entry" is implemented in ProxyVM, so it is
available neither in NetVM nor in AppVM. If you would like to reuse/extend
this code, move this function to NetVM or even the base QubesVm class (from
which ProxyVM inherits).
- QubesVm attributes are defined in the _get_attrs_config function - there is
machinery to deserialize them from qubes.xml (the 'eval' entry, if the value
isn't a string) and serialize them back (the 'save' entry - a simple 'str()'
in most cases). There are already lists/hashes stored in QubesVm, so you can
look there for samples (at least 'services' and 'pcidevs'). Persistent
attributes should also be added to QubesVmCollection (the parse_xml_element
method). I know it isn't a good design, but I haven't had time yet to rewrite
this part of the code.
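To illustrate the save/eval round-trip that _get_attrs_config relies on, here
is a minimal standalone sketch. The "redirects" attribute name is my
assumption; the save-as-string / eval-back pattern mirrors how list attributes
like 'pcidevs' are persisted to qubes.xml:

```python
# Hypothetical illustration of the _get_attrs_config save/eval pattern:
# 'save' serializes the value to a string for qubes.xml, 'eval' turns the
# stored string back into a Python object at load time.
attrs_config = {
    "redirects": {
        "default": [],
        "save": lambda value: repr(value),  # list -> string, like 'pcidevs'
        "eval": lambda text: eval(text),    # string -> list (as in qubes.py)
    },
}

# Round-trip example:
cfg = attrs_config["redirects"]
original = [("tcp", 443), ("tcp", 25565)]
stored = cfg["save"](original)    # string that would be written to qubes.xml
restored = cfg["eval"](stored)    # object rebuilt when qubes.xml is parsed
```

Anything serialized this way also needs the matching handling in
QubesVmCollection.parse_xml_element, as noted above.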
Ad. 3: The qubes-firewall service can be somehow reused here, for both the
firewallvm (where it is already started) and the netvm (where it should be
added to the default services set -> vm-systemd/qubes-sysinit.sh). I think
you will figure out how it works :)
I'm not sure what to do with the firewall in the dest-vm. Currently the INPUT
chain is totally static and basically amounts to a DROP policy. Perhaps it
should be left as a task for the user to add some -j ACCEPT rules there (in
/rw/config/rc.local). But it is up to you whether you want automatic rules in
the dest-vm as well or not.
Regarding a cmdline tool for it, IMHO it is worth a new qvm-redirect (or
similar) tool. You can use any existing qvm-* tool as a template, maybe
qvm-service.
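A skeleton for such a tool might start like this. Everything here is an
assumption on my side (option names, defaults); the real tool should follow
the conventions of the existing qvm-* scripts:

```python
# Hypothetical argument-parsing skeleton for a qvm-redirect tool.
# Option names and defaults are illustrative only, not a spec.
import argparse

def build_parser():
    parser = argparse.ArgumentParser(
        description="Expose a VM port on its netvm (sketch only)")
    parser.add_argument("action", choices=["expose", "remove"],
                        help="add or remove a redirection")
    parser.add_argument("-p", "--proto", default="tcp",
                        choices=["tcp", "udp"], help="protocol")
    parser.add_argument("--dport", type=int, required=True,
                        help="destination port to expose")
    parser.add_argument("--vm", required=True,
                        help="destination VM name")
    return parser

# Example invocation: qvm-redirect expose --dport 443 --vm personal
args = build_parser().parse_args(["expose", "--dport", "443", "--vm", "personal"])
```

The parsed values would then be written into the redirection property on the
QubesVm object (point 1 above) and saved to qubes.xml.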
>
> in the case of devVM this would mean in firewallVM:
> iptables -t filter -N dev.personal
> iptables -t filter -A FORWARD -j dev.personal -i firewallVM.vif_dev -o
> firewallVM.vif_personal -s dev-ip -d personal-ip
> iptables -t filter -A dev.personal -j ACCEPT -p tcp --sport 1024:65535
> --dport 80 -m state --state NEW
Is --sport really needed here?
> (I like to do it this way so that when you add logging to the last rule you
> only log SYNs, similarly if you want to throttle or set time of day limits)
>
> In the case of netVM this would mean routing on top of the above in
> firewallVM:
> iptables -t nat -N eth0.tcp80
> iptables -t nat -A PREROUTING -j eth0.tcp80 -i eth0 -p tcp --sport
> 1024:65535 --dport 80 -d firewallVM.eth0
Is --sport really needed?
> iptables -t nat -A eth0.tcp80 -j DNAT --to-destination
> firewallVM.vif_personal
>
> and in netVM:
> routing
> iptables -t nat -N eth0.tcp80
> iptables -t nat -A PREROUTING -j eth0.tcp80 -i eth0 -p tcp --sport
> 1024:65535 --dport 80 -d netVM.eth0
> iptables -t nat -A eth0.tcp80 -j DNAT --to-destination firewallVM.eth0 *(not
> sure how I would pass this to netVM if not launched from dom0)*
> filtering
> iptables -t filter -N out.firewall
> iptables -t filter -A FORWARD -j out.firewall -i netVM.eth0 -o
> netVM.vif_firewall -s netVM.eth0-ip -d firewallVM.eth0-ip
> iptables -t filter -A out.firewall -j ACCEPT -p tcp --sport 1024:65535
> --dport 80 -m state --state NEW
>
> A possible little bug in this set-up: if the exposed port is above 1024
> (let's say 25565, for Minecraft for my kids :)), the inbound routing rule
> for the exposed service may prevent outbound traffic from e.g. a personalVM
> connecting to an external high port (if the masqueraded random client port
> happens to be 25565). I am not sure how iptables behaves.
It would not break anything (even without --sport), because masquerade
prevents reusing already forwarded/open ports.