I guess it's time to start discussing vhost-user support with the libvirt maintainers.
Thanks for bringing this up. I agree that PCI flavor-based device assignment will be useful once it gets implemented. In the meantime, how about using the Neutron/Agent config file to provide the physical_network-to-PCI-address mapping?
I've considered the idea of using an external script to do the config sync.
Some observations:
1. It gives us some flexibility regarding the sync implementation (rsync, git, some other pub-sub mechanism).
2. Less code on the Neutron end and more code on the SnS end (BTW, processing the data is slightly easier on the Python end ;).
3. It introduces external dependencies - we have to run/maintain an additional script (on the Controller node) and additional daemons on the Compute nodes.
4. It requires additional config info (e.g. DB access credentials) - these can be assigned to ENV variables before launching the script.
5. It doesn't use OpenStack's message-queue-based RPC mechanism (which *probably* limits the max message size - could be an issue if the config is huge) ;)
> The usual mechanism driver API (network_created, etc.) could potentially be used separately to call snabbswitch on the network node to check if a new config is OK or not -- without having to worry about actually pushing changes out to compute nodes.
>
I don't know if this is necessary. :)
Thanks for the great feedback! :-)
So, we'll use the DB dump (CSV) + csv2conf setup and see how it works.
I suppose the validation can happen in csv2conf? For example, if we use the "Router {X} / Port {Y} ( / VF {Z} )" format for the port, we can check that Z < the max TX/RX queues per port (defined in intel10g?). Will that work?
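To make that concrete, something along these lines in csv2conf might do. This is just a rough Python sketch; the MAX_QUEUES_PER_PORT constant and the exact name pattern are my assumptions - the real limit should come from intel10g.
````
import re

# Hypothetical limit; the real value should come from the intel10g driver
# (max TX/RX queues per port).
MAX_QUEUES_PER_PORT = 16

PORT_RE = re.compile(r'^Router(\d+)_Port(\d+)(?:_VF(\d+))?$')

def validate_port_name(name):
    """Validate a physical_network identifier of the form
    Router{X}_Port{Y} or Router{X}_Port{Y}_VF{Z}."""
    m = PORT_RE.match(name)
    if not m:
        raise ValueError("unrecognised port format: %r" % name)
    vf = m.group(3)
    if vf is not None and int(vf) >= MAX_QUEUES_PER_PORT:
        raise ValueError("VF index %s exceeds the queue limit (%d) in %r"
                         % (vf, MAX_QUEUES_PER_PORT, name))
    return m.groups()

# e.g. validate_port_name("Router1_Port2_VF3") -> ('1', '2', '3')
````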
Regards,
Rahul
Just a small update: I got some nova + neutron + neutron-snabbswitch-agent tests working today.
There is still an issue with QEMU giving the following warning:
2014-02-18 01:01:43.723+0000: 10777: warning : qemuOpenVhostNet:521 : Unable to open vhost-net. Opened so far 0, requested 1
### Terminal 1 ###
$ nova flavor-key m1.nano set quota:mem_hugepages=True
$ neutron security-group-create SecGroupC --description "security group for VirtioC"
$ neutron net-create NetworkC --provider:network_type flat --provider:physical_network Router1_Port2 --router:external=True
$ neutron subnet-create --name SubnetC --ip-version 6 --no-gateway --disable-dhcp NetworkC a::0/64
$ neutron port-create --name VirtioC --fixed-ip subnet_id=YYYYYYYY,ip_address=a::1 --security-group SecGroupC NetworkC
$ nova boot --image debian_squeeze_amd64_standard --flavor m1.nano --nic port-id=XXXXXXXX ServerC
I have a rough idea for an algorithm to update the app config (caution: this might not be a good idea): suppose we have a data structure called an "app network" that is a set of apps (nodes) and links (edges).
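To illustrate what I mean, here is a minimal Python sketch of such a structure plus a naive diff over it. The class and helper names are made up; the real app network lives inside the Snabb Switch traffic process.
````
# A made-up Python model of the "app network" idea: apps are nodes,
# links are edges. Only meant to illustrate the update algorithm.

class AppNetwork(object):
    def __init__(self):
        self.apps = {}      # app name -> (app class, config)
        self.links = set()  # (src_app, src_port, dst_app, dst_port)

    def add_app(self, name, app_class, config=None):
        self.apps[name] = (app_class, config)

    def add_link(self, src_app, src_port, dst_app, dst_port):
        self.links.add((src_app, src_port, dst_app, dst_port))

def diff(old, new):
    """Work out what has to change to go from `old` to `new`:
    apps and links to remove, apps and links to add."""
    removed_apps = [a for a in old.apps
                    if a not in new.apps or old.apps[a] != new.apps[a]]
    added_apps = [a for a in new.apps
                  if a not in old.apps or old.apps[a] != new.apps[a]]
    removed_links = old.links - new.links
    added_links = new.links - old.links
    return removed_apps, added_apps, removed_links, added_links
````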
I've been using the old "vapp" server from VOS for testing. The server quits immediately after receiving communication from the client:
sudo ~/hacking/qemu/x86_64-softmmu/qemu-system-x86_64 --enable-kvm -nographic -m 1024 -mem-path /hugetlbfs,prealloc=on,share=on -chardev socket,id=chr0,path=/home/luke/qemu.sock,server -netdev type=vhost-user,id=net0,chardev=chr0 -device virtio-net-pci,netdev=net0 -drive file=deb.qcow2,if=virtio
> Hm, the latest vapp and qemu from VOS are working together for me on chur.

I'll try the latest vapp then.
I've been using a combination of Vagrant + the libvirt provider (with KVM nested virtualization enabled on chur).
Update:
* patched the libvirt vhost_user support code to fix the PCI address issue mentioned earlier
* started testing with the latest vapp server
* looks like stuff is working (see below) :-)
There is an issue related to PCI mapping: currently, there is no way to specify that a provider network should be mapped to a particular compute node (this will change in the future when PCI flavor-based device assignment becomes available in Nova).
For the time being, the current PCI mapping scheme can be altered to include the provider network <-> node relationship (many-to-many?). For example:
````
[snabbswitch]
pci_mappings = Router...@node1.example.com=0000:00:01.0, Router...@node2.example.com=0000:00:01.0
````
The Snabbswitch mechanism driver will generate "pci_mappings.txt" during initialization:
````
node1.example.com Router1_Port1 0000:00:01.0
node2.example.com Router1_Port2 0000:00:01.0
````
(Note: pci_mappings.txt will be in the same directory as the DB dumps.)
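Roughly, the driver side could do something like this. It's only a sketch: it assumes the pci_mappings option is a comma-separated list of physical_network@hostname=pci_address entries, and the function name is made up.
````
import os

def write_pci_mappings(pci_mappings, out_dir):
    """Turn a pci_mappings option such as
    'Router1_Port1@node1.example.com=0000:00:01.0, ...' into
    pci_mappings.txt next to the DB dumps."""
    lines = []
    for entry in pci_mappings.split(','):
        name_and_host, pci_address = entry.strip().split('=')
        physical_network, hostname = name_and_host.split('@')
        lines.append('%s %s %s\n' % (hostname, physical_network, pci_address))
    with open(os.path.join(out_dir, 'pci_mappings.txt'), 'w') as f:
        f.writelines(lines)
````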
The compute_node_script (git poller) will pick only the lines in pci_mappings.txt that match the machine's hostname (such as node1.example.com) and use them to generate the app-network config for the Snabbswitch traffic processes.
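The filtering on the compute node would then be trivial, something like the following (again, just a sketch assuming the three-column whitespace-separated format above; the function name is illustrative):
````
import socket

def local_pci_mappings(path='pci_mappings.txt'):
    """Return (physical_network, pci_address) pairs for the lines of
    pci_mappings.txt that match this machine's hostname."""
    hostname = socket.getfqdn()
    mappings = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) == 3 and fields[0] == hostname:
                mappings.append((fields[1], fields[2]))
    return mappings
````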
Thoughts?
So in the example scenario, if we have a port (say, VirtioA) created using physical_network Router1_Port1, the Nova scheduler will attempt to boot the VM on node1 and not node2 (hopefully!).
However, if the TeraStream project's provisioning script can make use of an external shell script (containing some helper functions), then maybe it isn't too hard...
[...]
The neutron net-create command can take the output of this function when specifying the provider network.