neutron router-show 21b402ca-14d1-4e70-828b-1142e65801e4
+-----------------------+---------------------------------------------------------------------------+
| Field                 | Value |
+-----------------------+---------------------------------------------------------------------------+
| admin_state_up        | True |
| distributed           | False |
| external_gateway_info | {"network_id": "cb54714c-c937-44f5-8386-75426a13cd27", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "d468096d-81af-4f5e-9a2f-f3d84417062e", "ip_address": "172.16.10.140"}]} |
| ha                    | False |
| id                    | 21b402ca-14d1-4e70-828b-1142e65801e4 |
| name                  | demo-router |
| routes                | |
| status                | ACTIVE |
| tenant_id             | c83d69bf26f64d03b2f962b406c20c68 |
+-----------------------+---------------------------------------------------------------------------+
# ip netns
qrouter-21b402ca-14d1-4e70-828b-1142e65801e4
qdhcp-e619b885-8948-4279-9ed2-13eb18ad620a
# ip netns exec qrouter-21b402ca-14d1-4e70-828b-1142e65801e4 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
12: qr-a2f5d7ff-57: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether fa:16:3e:d4:15:eb brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global qr-a2f5d7ff-57
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fed4:15eb/64 scope link
       valid_lft forever preferred_lft forever
13: qg-80f2da5c-40: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether fa:16:3e:42:c5:57 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.140/24 brd 172.16.10.255 scope global qg-80f2da5c-40
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe42:c557/64 scope link
       valid_lft forever preferred_lft forever
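Since both the tenant-facing qr- interface and the external qg- interface live inside that qrouter namespace, a quick sanity check is to test reachability from inside the namespace itself, before involving any VM. A minimal sketch (172.16.10.1 as the upstream gateway of the external 172.16.10.0/24 network is an assumption; substitute the real one):

# ip netns exec qrouter-21b402ca-14d1-4e70-828b-1142e65801e4 ip route
# ip netns exec qrouter-21b402ca-14d1-4e70-828b-1142e65801e4 ping -c 3 172.16.10.1

If the namespace itself cannot reach the external gateway, the problem is below Neutron's L3 layer (br-ex wiring, VLANs, the physical NIC) rather than in security groups.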
# neutron.conf
[DEFAULT]
verbose = True
lock_path = $state_path/lock
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
service_plugins = neutron.services.loadbalancer.plugin.LoadBalancerPlugin,neutron.services.metering.metering_plugin.MeteringPlugin,neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
auth_strategy = keystone
allow_overlapping_ips = True
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller:8774/v2
nova_region_name = regionOne
nova_admin_username = nova
nova_admin_tenant_id = 343bd5aadfb0461d9bc80f1fc7a5e20d
nova_admin_password = cRvVx3YLgTIE21V6
nova_admin_auth_url = http://controller:35357/v2.0
rabbit_host=127.0.0.1
rabbit_userid = guest
rabbit_password=z80L2vrUOy
rpc_backend=rabbit
[matchmaker_redis]
[matchmaker_ring]
[quotas]
[agent]
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = neutron
admin_password = T2Myx4QqVpMCJE5Z
[database]
connection = mysql://neutron:2qhCNEcszR@controller/neutron
[service_providers]
service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default
# ML2 plugin / OVS agent configuration
[ml2]
type_drivers = flat,vlan
tenant_network_types = flat,vlan
mechanism_drivers = openvswitch
[ml2_type_flat]
flat_networks = External
[ml2_type_vlan]
network_vlan_ranges = Intnet1:100:200
[ml2_type_gre]
[ml2_type_vxlan]
[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[ovs]
local_ip = 192.168.168.21
enable_tunneling = True
bridge_mappings = External:br-ex,Intnet1:br-eth1

# L3 agent configuration
[DEFAULT]
verbose = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
gateway_external_network_id = abcc9474-4b51-4342-9d46-b176407bd65f
handle_internal_only_routers = False
external_network_bridge = br-ex
enable_metadata_proxy = True
router_delete_namespaces = True
agent_mode = legacy

# DHCP agent configuration
[DEFAULT]
verbose = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
ovs_integration_bridge = br-int
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True
enable_isolated_metadata = False
enable_metadata_network = False
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf
dhcp_delete_namespaces = True

Apart from what was mentioned in the previous reply, did you change the settings of the default security group?
A link worth thinking about: http://docs.openstack.org/admin-guide-cloud/content/enabling_ping_and_ssh.html
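That guide essentially comes down to allowing ICMP and SSH in the default security group. If those rules were missing, they could be added along these lines (a sketch for the Juno-era neutron CLI, run as the tenant that owns the group):

neutron security-group-rule-create --protocol icmp --direction ingress --remote-ip-prefix 0.0.0.0/0 default
neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 --direction ingress --remote-ip-prefix 0.0.0.0/0 default

Judging by the security-group-show output below, both rules are already present, so the default group itself does not look like the blocker.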
+----------------------+------------------------------------------------------------------------+
| Field                | Value |
+----------------------+------------------------------------------------------------------------+
| description          | default |
| id                   | 13e72bb0-2d5f-4f27-82cf-67e61e928b45 |
| name                 | default |
| security_group_rules | {"remote_group_id": "13e72bb0-2d5f-4f27-82cf-67e61e928b45", "direction": "ingress", "remote_ip_prefix": null, "protocol": null, "tenant_id": "e6c2909a44594883bf973ae013c5b5e4", "port_range_max": null, "security_group_id": "13e72bb0-2d5f-4f27-82cf-67e61e928b45", "port_range_min": null, "ethertype": "IPv4", "id": "5b0e1f78-b71c-4e40-a22d-7dc3753f8652"} |
|                      | {"remote_group_id": "13e72bb0-2d5f-4f27-82cf-67e61e928b45", "direction": "ingress", "remote_ip_prefix": null, "protocol": null, "tenant_id": "e6c2909a44594883bf973ae013c5b5e4", "port_range_max": null, "security_group_id": "13e72bb0-2d5f-4f27-82cf-67e61e928b45", "port_range_min": null, "ethertype": "IPv6", "id": "7ae42c03-d871-4cb2-86f5-f4339e2eb7f0"} |
|                      | {"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null, "protocol": null, "tenant_id": "e6c2909a44594883bf973ae013c5b5e4", "port_range_max": null, "security_group_id": "13e72bb0-2d5f-4f27-82cf-67e61e928b45", "port_range_min": null, "ethertype": "IPv4", "id": "846553c0-c89f-45fb-984a-918b2b686e73"} |
|                      | {"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null, "protocol": null, "tenant_id": "e6c2909a44594883bf973ae013c5b5e4", "port_range_max": null, "security_group_id": "13e72bb0-2d5f-4f27-82cf-67e61e928b45", "port_range_min": null, "ethertype": "IPv6", "id": "da4a7b83-53af-4d52-b530-8d58fddf3712"} |
|                      | {"remote_group_id": null, "direction": "ingress", "remote_ip_prefix": "0.0.0.0/0", "protocol": "icmp", "tenant_id": "e6c2909a44594883bf973ae013c5b5e4", "port_range_max": null, "security_group_id": "13e72bb0-2d5f-4f27-82cf-67e61e928b45", "port_range_min": null, "ethertype": "IPv4", "id": "f33e0bdb-1167-4b21-a0d3-0b3135985168"} |
|                      | {"remote_group_id": null, "direction": "ingress", "remote_ip_prefix": "0.0.0.0/0", "protocol": "tcp", "tenant_id": "e6c2909a44594883bf973ae013c5b5e4", "port_range_max": 22, "security_group_id": "13e72bb0-2d5f-4f27-82cf-67e61e928b45", "port_range_min": 22, "ethertype": "IPv4", "id": "f5d728a7-237b-4a6e-8b0f-8e36554179c1"} |
| tenant_id            | e6c2909a44594883bf973ae013c5b5e4 |
+----------------------+------------------------------------------------------------------------+

neutron 18139 0.0 0.0 153468 38408 ? S May22 1:01 /usr/bin/python2.7 /usr/bin/neutron-dhcp-agent --config-file=/etc/neutron/dhcp_agent.ini --config-file=/etc/neutron/neutron.conf --log-file=/var/log/neutron/neutron-dhcp-agent.log
neutron 18199 0.0 0.0 160096 43040 ? S May22 0:44 /usr/bin/python2.7 /usr/bin/neutron-l3-agent --config-file=/etc/neutron/l3_agent.ini --config-file=/etc/neutron/neutron.conf --log-file=/var/log/neutron/neutron-l3-agent.log
neutron 18344 0.0 0.0 202860 43912 ? S May22 1:22 /usr/bin/python2.7 /usr/bin/neutron-metadata-agent --config-file=/etc/neutron/metadata_agent.ini --config-file=/etc/neutron/neutron.conf --log-file=/var/log/neutron/neutron-metadata-agent.log
neutron 18408 0.0 0.0 223764 41776 ? S May22 0:00 /usr/bin/python2.7 /usr/bin/neutron-metadata-agent --config-file=/etc/neutron/metadata_agent.ini --config-file=/etc/neutron/neutron.conf --log-file=/var/log/neutron/neutron-metadata-agent.log
neutron 18409 0.0 0.0 223764 41776 ? S May22 0:00 /usr/bin/python2.7 /usr/bin/neutron-metadata-agent --config-file=/etc/neutron/metadata_agent.ini --config-file=/etc/neutron/neutron.conf --log-file=/var/log/neutron/neutron-metadata-agent.log
neutron 18410 0.0 0.0 223764 41776 ? S May22 0:00 /usr/bin/python2.7 /usr/bin/neutron-metadata-agent --config-file=/etc/neutron/metadata_agent.ini --config-file=/etc/neutron/neutron.conf --log-file=/var/log/neutron/neutron-metadata-agent.log
neutron 18411 0.0 0.0 223764 41776 ? S May22 0:00 /usr/bin/python2.7 /usr/bin/neutron-metadata-agent --config-file=/etc/neutron/metadata_agent.ini --config-file=/etc/neutron/neutron.conf --log-file=/var/log/neutron/neutron-metadata-agent.log
neutron 18412 0.0 0.0 223764 41776 ? S May22 0:00 /usr/bin/python2.7 /usr/bin/neutron-metadata-agent --config-file=/etc/neutron/metadata_agent.ini --config-file=/etc/neutron/neutron.conf --log-file=/var/log/neutron/neutron-metadata-agent.log
neutron 18413 0.0 0.0 223764 41776 ? S May22 0:00 /usr/bin/python2.7 /usr/bin/neutron-metadata-agent --config-file=/etc/neutron/metadata_agent.ini --config-file=/etc/neutron/neutron.conf --log-file=/var/log/neutron/neutron-metadata-agent.log
neutron 18414 0.0 0.0 223764 41776 ? S May22 0:00 /usr/bin/python2.7 /usr/bin/neutron-metadata-agent --config-file=/etc/neutron/metadata_agent.ini --config-file=/etc/neutron/neutron.conf --log-file=/var/log/neutron/neutron-metadata-agent.log
neutron 18415 0.0 0.0 223764 41776 ? S May22 0:00 /usr/bin/python2.7 /usr/bin/neutron-metadata-agent --config-file=/etc/neutron/metadata_agent.ini --config-file=/etc/neutron/neutron.conf --log-file=/var/log/neutron/neutron-metadata-agent.log
nobody 18439 0.0 0.0 39828 1084 ? S May22 0:00 dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces --interface=tapc815dc01-bc --except-interface=lo --pid-file=/var/lib/neutron/dhcp/590a5b56-9aa0-4d13-82b9-278623ea5ad7/pid --dhcp-hostsfile=/var/lib/neutron/dhcp/590a5b56-9aa0-4d13-82b9-278623ea5ad7/host --addn-hosts=/var/lib/neutron/dhcp/590a5b56-9aa0-4d13-82b9-278623ea5ad7/addn_hosts --dhcp-optsfile=/var/lib/neutron/dhcp/590a5b56-9aa0-4d13-82b9-278623ea5ad7/opts --leasefile-ro --dhcp-range=set:tag0,192.168.168.0,static,86400s --dhcp-lease-max=256 --conf-file=/etc/neutron/dnsmasq-neutron.conf --domain=openstacklocal
neutron 18580 0.0 0.0 150228 35140 ? S May22 5:49 /usr/bin/python2.7 /usr/bin/neutron-openvswitch-agent --config-file=/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini --config-file=/etc/neutron/neutron.conf --log-file=/var/log/neutron/neutron-openvswitch-agent.log
neutron 18623 0.2 0.1 252916 74408 ? S May22 22:06 /usr/bin/python2.7 /usr/bin/neutron-server --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini --config-file=/etc/neutron/neutron.conf --log-file=/var/log/neutron/neutron-server.log
root 19291 0.0 0.0 58208 2056 ? S May22 0:00 sudo neutron-rootwrap /etc/neutron/rootwrap.conf ovsdb-client monitor Interface name,ofport --format=json
root 19293 0.0 0.0 45076 8972 ? S May22 0:00 /usr/bin/python2.7 /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf ovsdb-client monitor Interface name,ofport --format=json
root 28318 0.0 0.0 11392 884 pts/0 S+ 13:38 0:00 grep neutron

31d3ad8d-dfd0-49d9-9f45-c0b178725874
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth0"
            Interface "eth0"
    Bridge "br-eth1"
        Port "br-eth1"
            Interface "br-eth1"
                type: internal
        Port "phy-br-eth1"
            Interface "phy-br-eth1"
                type: patch
                options: {peer="int-br-eth1"}
        Port "eth1"
            Interface "eth1"
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        fail_mode: secure
        Port "qvob3fc4048-79"
            tag: 1
            Interface "qvob3fc4048-79"
        Port "int-br-eth1"
            Interface "int-br-eth1"
                type: patch
                options: {peer="phy-br-eth1"}
        Port "tapc815dc01-bc"
            tag: 1
            Interface "tapc815dc01-bc"
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.1.0"

+--------------------------------------+----------+-------------------------------------------------------+
| id                                   | name     | subnets                                               |
+--------------------------------------+----------+-------------------------------------------------------+
| 34d3a856-14e0-41ae-84d1-43dea48bb670 | ext-net  | 0adde31a-a75a-42c0-a51c-98ab9092782c 172.16.10.0/24   |
| 590a5b56-9aa0-4d13-82b9-278623ea5ad7 | demo-net | 0e4804ef-af7c-4f59-aca0-3267b8a9f0af 192.168.168.0/24 |
+--------------------------------------+----------+-------------------------------------------------------+
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 34d3a856-14e0-41ae-84d1-43dea48bb670 |
| name                      | ext-net                              |
| provider:network_type     | flat                                 |
| provider:physical_network | External                             |
| provider:segmentation_id  |                                      |
| router:external           | True                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 0adde31a-a75a-42c0-a51c-98ab9092782c |
| tenant_id                 | e6c2909a44594883bf973ae013c5b5e4     |
+---------------------------+--------------------------------------+
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 590a5b56-9aa0-4d13-82b9-278623ea5ad7 |
| name                      | demo-net                             |
| provider:network_type     | vlan                                 |
| provider:physical_network | Intnet1                              |
| provider:segmentation_id  | 100                                  |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 0e4804ef-af7c-4f59-aca0-3267b8a9f0af |
| tenant_id                 | e6c2909a44594883bf973ae013c5b5e4     |
+---------------------------+--------------------------------------+

Hello everyone.
root@mango02:~# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=440857.520s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1 actions=NORMAL
 cookie=0x0, duration=440853.669s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=3,in_port=16,dl_vlan=100 actions=mod_vlan_vid:1,NORMAL
 cookie=0x0, duration=440856.875s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=2,in_port=16 actions=drop
 cookie=0x0, duration=440856.179s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=2,in_port=17 actions=drop
 cookie=0x0, duration=440857.466s, table=23, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
root@mango02:~# ovs-ofctl dump-flows br-ex
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=440864.563s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1 actions=NORMAL
 cookie=0x0, duration=440864.104s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=2,in_port=1 actions=drop
root@mango02:~# ovs-ofctl dump-flows br-eth1
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=440873.731s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=1 actions=NORMAL
 cookie=0x0, duration=440870.160s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=4,in_port=1,dl_vlan=1 actions=mod_vlan_vid:100,NORMAL
 cookie=0x0, duration=440873.264s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=2,in_port=1 actions=drop
root@mango02:~#
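The in_port numbers in those flows (in_port=16 and 17 on br-int, in_port=1 on br-eth1 and br-ex) are easier to read once they are mapped back to port names; ovs-ofctl show prints the ofport-to-name mapping for a bridge, e.g.:

root@mango02:~# ovs-ofctl show br-int
root@mango02:~# ovs-ofctl show br-eth1

Also worth noting: n_packets=0 on every flow, so no traffic at all has matched these rules yet, which again points at something below the OVS bridges rather than at the flow rules themselves.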
egrep -v "$^|#" /etc/neutron/neutron.conf
[DEFAULT]
[..]
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
service_plugins = neutron.services.loadbalancer.plugin.LoadBalancerPlugin,neutron.services.metering.metering_plugin.MeteringPlugin,neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
[..]
an internal virtual network for the VMs, which exists only inside the hypervisor,
and an external routed network, reachable on a physical VLAN/interface, from which Floating IPs will be allocated and mapped via NAT onto the VMs' addresses in the internal network.
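For reference, that topology usually maps onto Neutron objects roughly as follows (a sketch only; the names, allocation pool and gateway are placeholders, while the flat "External" and VLAN "Intnet1" physical networks match the ml2 configuration shown earlier):

# external network carrying the Floating IP range
neutron net-create ext-net --router:external True --provider:network_type flat --provider:physical_network External
neutron subnet-create ext-net 172.16.10.0/24 --name ext-subnet --disable-dhcp --allocation-pool start=172.16.10.100,end=172.16.10.200
# internal tenant network on VLAN 100 of physical network Intnet1
neutron net-create demo-net --provider:network_type vlan --provider:physical_network Intnet1 --provider:segmentation_id 100
neutron subnet-create demo-net 192.168.168.0/24 --name demo-subnet
# router doing the SNAT / Floating IP NAT between the two
neutron router-create demo-router
neutron router-gateway-set demo-router ext-net
neutron router-interface-add demo-router demo-subnet
neutron floatingip-create ext-net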
2015-05-28 20:35 GMT+03:00 yashumitsu <yashu...@gmail.com>:
1. The first thing that catches the eye: it seems this is no longer written like that nowadays:
egrep -v "$^|#" /etc/neutron/neutron.conf
[DEFAULT]
[..]
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
service_plugins = neutron.services.loadbalancer.plugin.LoadBalancerPlugin,neutron.services.metering.metering_plugin.MeteringPlugin,neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
[..]
Also, at least while debugging, it is advisable to disable the additional Neutron plugins. I suggest taking a neutron.conf as a starting point, for example from: https://fosskb.wordpress.com/2015/03/01/openstack-juno-on-debian-wheezy-single-machine-setup/
core_plugin = ml2
service_plugins = router

2. What network configuration is planned?
Thanks for the link; I took the configs from there and fixed the plugin lines. Now:

core_plugin = ml2
service_plugins = router

ERROR nova.compute.manager [-] [instance: cac05c96-e2bc-44f1-a583-12fb549c952a] Instance failed to spawn

Hmm, for some reason my message did not make it to the list, although it is saved in Gmail's "Sent" folder with nothing suspicious about it.
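An "Instance failed to spawn" error normally has a full traceback a few lines above it in the compute log. Assuming default log locations and the Juno-era CLI, a few places worth checking (the instance ID is taken from the error above):

nova show cac05c96-e2bc-44f1-a583-12fb549c952a
neutron agent-list
grep -B 30 "Instance failed to spawn" /var/log/nova/nova-compute.log

nova show prints the fault reason for the failed instance, neutron agent-list confirms that all agents are alive, and the compute log grep shows the actual exception, which usually tells whether it is a port-binding, VIF-plugging or image problem.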