OpenStack High-Availability Cluster, Part 19: Linux Bridge with VXLAN

In production, if our OpenStack deployment serves as a public cloud, the usual Linux bridge plus VLAN model cannot keep up with a large tenant population: the VLAN ID space offers only about 4094 usable segments. We therefore introduce VXLAN, whose 24-bit VNI provides roughly 16 million segments, to carry the instances' internal (tenant) network traffic.

Our physical servers typically have four network interfaces: a remote-management (out-of-band) card; a management NIC (for host-to-host communication and administration); a NIC for instance external traffic (the switch port facing it is a trunk port, and instances reach different external networks through VLANs on the host); and a NIC for instance internal traffic (the switch port facing it is an access port, and the interface carries an IP address for VXLAN to use).
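For reference, a minimal sketch of what that last interface's configuration might look like on the control node (the file path and values follow this lab's conventions; treat the sketch as an assumption and adjust for your own hosts):

# /etc/sysconfig/network-scripts/ifcfg-eth2 on node1 (hypothetical sketch)
DEVICE=eth2
TYPE=Ethernet
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.248.1
NETMASK=255.255.255.0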

This article walks through the Neutron setup from scratch. If you already run Linux bridge with VLAN, you only need small changes to the existing configuration files plus a restart of the network services. The files to modify are:

Control node:

/etc/neutron/plugins/ml2/ml2_conf.ini

/etc/neutron/plugins/ml2/linuxbridge_agent.ini

Restart the services:

# systemctl restart neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service

Compute node:

/etc/neutron/plugins/ml2/linuxbridge_agent.ini

Restart the service:

# systemctl restart neutron-linuxbridge-agent.service
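For orientation, these are the key settings that the switch from pure VLAN to VXLAN touches; this is only a summary drawn from the full configuration files shown below:

# /etc/neutron/plugins/ml2/ml2_conf.ini (control node)
[ml2]
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
[ml2_type_vxlan]
vni_ranges = 1001:2000

# /etc/neutron/plugins/ml2/linuxbridge_agent.ini (every node)
[vxlan]
enable_vxlan = true
l2_population = true
local_ip = <this node's eth2 IP>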

Lab environment:

eth0: 10.30.1.208   eth1: no IP address   eth2: 192.168.248.1   node1  control node

eth0: 10.30.1.203   eth1: no IP address   eth2: 192.168.248.3   node3  compute node

eth0: 10.30.1.204   eth1: no IP address   eth2: 192.168.248.4   node4  compute node (its configuration is not shown in this article; it is essentially the same as node3's)

Configure the networking options

Install Neutron on the control node (node1):

[root@node1 ~]# yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables

Install Neutron on the compute node (node3):

[root@node3 ~]# yum install -y openstack-neutron-linuxbridge ebtables ipset

Neutron configuration on the control node (node1)

# grep -v "^#\|^$" /etc/neutron/neutron.conf

[DEFAULT]

core_plugin = ml2

service_plugins =  neutron.services.l3_router.l3_router_plugin.L3RouterPlugin

auth_strategy = keystone

transport_url = rabbit://openstack:openstack@10.30.1.208

notify_nova_on_port_status_changes = true

notify_nova_on_port_data_changes = true

[agent]

[cors]

[database]

connection = mysql+pymysql://neutron:neutron@10.30.1.208/neutron

[keystone_authtoken]

auth_uri = http://10.30.1.208:5000

auth_url = http://10.30.1.208:35357

memcached_servers = 10.30.1.208:11211

project_domain_name = Default

project_name = service

user_domain_name = Default

password = neutron

username = neutron

auth_type = password

[matchmaker_redis]

[nova]

auth_url = http://10.30.1.208:35357

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = nova

password = nova

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

[oslo_messaging_amqp]

[oslo_messaging_kafka]

[oslo_messaging_notifications]

[oslo_messaging_rabbit]

[oslo_messaging_zmq]

[oslo_middleware]

[oslo_policy]

[quotas]

quota_network = 200

quota_subnet = 200

quota_port = 5000

quota_driver = neutron.db.quota.driver.DbQuotaDriver

quota_router = 100

quota_floatingip = 1000

quota_security_group = 100

quota_security_group_rule = 1000

[ssl]

Configure the Modular Layer 2 (ML2) plug-in

The ML2 plug-in uses the Linux bridge mechanism to build the layer-2 virtual network infrastructure for instances.

Edit /etc/neutron/plugins/ml2/ml2_conf.ini and complete the following:

[root@node1 ~]# grep -v "^#\|^$" /etc/neutron/plugins/ml2/ml2_conf.ini

[DEFAULT]

[l2pop]

[ml2]

type_drivers = flat,vlan,gre,vxlan,geneve

tenant_network_types = vxlan

mechanism_drivers = linuxbridge,l2population

extension_drivers = port_security

[ml2_type_flat]

flat_networks = external

[ml2_type_geneve]

[ml2_type_gre]

[ml2_type_vlan]

network_vlan_ranges = default:1:4000

[ml2_type_vxlan]

vni_ranges = 1001:2000

[securitygroup]

enable_ipset = true
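With this configuration, a network created without provider attributes should come out as type vxlan with a VNI allocated from the 1001:2000 range. A quick sanity check once the services are up (test-net is a throwaway name used only for this check):

# openstack network create test-net
# openstack network show test-net -c provider:network_type -c provider:segmentation_id
# openstack network delete test-net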

Configure the Linux bridge agent

[root@node1 ~]# cat /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[DEFAULT]

[agent]

[linux_bridge]

physical_interface_mappings = default:eth1

[securitygroup]

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

enable_security_group = true

[vxlan]

enable_vxlan = true

l2_population = true

local_ip = 192.168.248.1

Configure the DHCP agent

The DHCP agent provides DHCP services for virtual networks.

Edit /etc/neutron/dhcp_agent.ini and complete the following:

In the [DEFAULT] section, configure the Linux bridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata so that instances on tenant networks can reach the metadata service over the network:

[root@node1 ~]# grep -v "^#\|^$" /etc/neutron/dhcp_agent.ini

[DEFAULT]

interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver

dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

enable_isolated_metadata = true

[agent]

[ovs]
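With enable_isolated_metadata = true, an instance on the VXLAN network can reach the metadata service at the well-known link-local address 169.254.169.254 (which, as shown later, the DHCP namespace carries). A quick check from inside a guest might look like this:

# curl http://169.254.169.254/latest/meta-data/instance-id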

Configure the metadata agent

The metadata agent provides configuration information, such as credentials, to instances.

Edit /etc/neutron/metadata_agent.ini and complete the following:

In the [DEFAULT] section, configure the metadata host and the shared secret:

# grep -v '^#\|^$' /etc/neutron/metadata_agent.ini

[DEFAULT]

nova_metadata_ip = 10.30.1.208

metadata_proxy_shared_secret = syscloud.cn

[agent]

[cache]

Configure the L3 agent

# grep -v '^#\|^$' /etc/neutron/l3_agent.ini

[DEFAULT]

ovs_use_veth = False

interface_driver = linuxbridge

#interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver

debug = True

[agent]

[ovs]

Configure the Compute service (nova) on the control node to use Networking

Edit /etc/nova/nova.conf and complete the following:

In the [neutron] section, configure the access parameters, enable the metadata proxy, and set the shared secret:

[DEFAULT]

cpu_allocation_ratio=8

ram_allocation_ratio=2

disk_allocation_ratio=2

resume_guests_state_on_host_boot=true

reserved_host_disk_mb=20480

baremetal_enabled_filters=RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ExactRamFilter,ExactDiskFilter,ExactCoreFilter

transport_url = rabbit://openstack:openstack@10.30.1.208

auth_strategy = keystone

use_neutron = True

firewall_driver = nova.virt.firewall.NoopFirewallDriver

enabled_apis=osapi_compute,metadata

[api]

[api_database]

connection = mysql+pymysql://nova:nova@10.30.1.208/nova_api

[barbican]

[cache]

[cells]

[cinder]

os_region_name = RegionOne

[compute]

[conductor]

[console]

[consoleauth]

[cors]

[crypto]

[database]

connection = mysql+pymysql://nova:nova@10.30.1.208/nova

[ephemeral_storage_encryption]

[filter_scheduler]

[glance]

api_servers = http://10.30.1.208:9292

[guestfs]

[healthcheck]

[hyperv]

[ironic]

[key_manager]

[keystone]

[keystone_authtoken]

auth_uri = http://10.30.1.208:5000

auth_url = http://10.30.1.208:35357

memcached_servers = 10.30.1.208:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = nova

password = nova

[libvirt]

[matchmaker_redis]

[metrics]

[mks]

[neutron]

url = http://10.30.1.208:9696

auth_url = http://10.30.1.208:35357

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = neutron

password = neutron

service_metadata_proxy = true

metadata_proxy_shared_secret = syscloud.cn

[notifications]

[osapi_v21]

[oslo_concurrency]

lock_path=/var/lib/nova/tmp

[oslo_messaging_amqp]

[oslo_messaging_kafka]

[oslo_messaging_notifications]

[oslo_messaging_rabbit]

[oslo_messaging_zmq]

[oslo_middleware]

[oslo_policy]

[pci]

[placement]

os_region_name = RegionOne

project_domain_name = Default

project_name = service

auth_type = password

user_domain_name = Default

auth_url = http://10.30.1.208:35357/v3

username = placement

password = placement

[quota]

[rdp]

[remote_debug]

[scheduler]

[serial_console]

[service_user]

[spice]

[trusted_computing]

[upgrade_levels]

[vendordata_dynamic_auth]

[vmware]

[vnc]

enabled  =  true

server_listen = 0.0.0.0

server_proxyclient_address = 10.30.1.208

[workarounds]

[wsgi]

[xenserver]

[xvp]

Finalize the installation

The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file /etc/neutron/plugins/ml2/ml2_conf.ini. If the link does not exist, create it:

[root@node1 ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Populate the database:

[root@node1 ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

Note

Database population occurs this late for Networking because the script requires the completed server and plug-in configuration files.

Restart the Compute API service:

[root@node1 ~]# systemctl restart openstack-nova-api.service

Start the Networking services and configure them to start when the system boots:

[root@node1 ~]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service

[root@node1 ~]# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service

Verify that Neutron is healthy on the control node:

[root@node1 ~]# openstack network agent list

+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+

| ID                                   | Agent Type         | Host  | Availability Zone | Alive | State | Binary                    |

+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+

| 051ac750-5922-489b-a2ff-9135ffb2103f | Linux bridge agent | node1 | None              | :-)   | UP    | neutron-linuxbridge-agent |

| 2e4ed59b-1f22-4510-9258-fae0ff15e8e7 | DHCP agent         | node1 | nova              | :-)   | UP    | neutron-dhcp-agent        |

| a2778f7c-5ed2-4769-8786-84448d1fe27e | L3 agent           | node1 | nova              | :-)   | UP    | neutron-l3-agent          |

| ffaff574-b3ca-49a3-9c00-78bc7033b642 | Metadata agent     | node1 | None              | :-)   | UP    | neutron-metadata-agent    |

+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+

As a final check, list the loaded Networking extensions:

[root@node1 ~]# openstack extension list --network

Neutron configuration on the compute node (node3)

On the compute node, copy over the Neutron configuration files, then edit /etc/neutron/neutron.conf and complete the following:

# grep -v '^#\|^$' /etc/neutron/neutron.conf

[DEFAULT]

auth_strategy = keystone

transport_url = rabbit://openstack:openstack@10.30.1.208

[agent]

[cors]

[database]

[keystone_authtoken]

auth_uri = http://10.30.1.208:5000

auth_url = http://10.30.1.208:35357

memcached_servers = 10.30.1.208:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = neutron

[matchmaker_redis]

[nova]

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

[oslo_messaging_amqp]

[oslo_messaging_kafka]

[oslo_messaging_notifications]

[oslo_messaging_rabbit]

[oslo_messaging_zmq]

[oslo_middleware]

[oslo_policy]

[quotas]

quota_network = 200

quota_subnet = 200

quota_port = 5000

quota_driver = neutron.db.quota.driver.DbQuotaDriver

quota_router = 100

quota_floatingip = 50

quota_security_group = 100

quota_security_group_rule = 1000

[ssl]

Configure the networking options

Choose the same networking option that you chose on the control node, then return here and proceed to configuring the Networking service for the compute node.

Configure the Linux bridge agent

The Linux bridge agent builds the layer-2 (bridging and switching) virtual network infrastructure for instances and handles security groups.

Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini and complete the following:

[root@node3 ~]# grep -v "^#\|^$" /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[DEFAULT]

[agent]

[linux_bridge]

physical_interface_mappings = default:eth1

[securitygroup]

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

enable_security_group = true

[vxlan]

enable_vxlan = true

l2_population = true

local_ip = 192.168.248.3

Configure the Networking service on the compute node

Edit /etc/nova/nova.conf and complete the following:

[DEFAULT]

cpu_allocation_ratio=8

ram_allocation_ratio=2

disk_allocation_ratio=2

resume_guests_state_on_host_boot=true

reserved_host_disk_mb=20480

baremetal_enabled_filters=RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ExactRamFilter,ExactDiskFilter,ExactCoreFilter

enabled_apis = osapi_compute,metadata

transport_url = rabbit://openstack:openstack@10.30.1.208

use_neutron = True

firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]

auth_strategy = keystone

[api_database]

[barbican]

[cache]

[cells]

[cinder]

[compute]

[conductor]

[console]

[consoleauth]

[cors]

[crypto]

[database]

[ephemeral_storage_encryption]

[filter_scheduler]

[glance]

api_servers = http://10.30.1.208:9292

[guestfs]

[healthcheck]

[hyperv]

[ironic]

[key_manager]

[keystone]

[keystone_authtoken]

auth_uri = http://10.30.1.208:5000

auth_url = http://10.30.1.208:35357

memcached_servers = 10.30.1.208:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = nova

password = nova

[libvirt]

images_type = rbd

images_rbd_pool = vms

images_rbd_ceph_conf = /etc/ceph/ceph.conf

rbd_user = cinder

rbd_secret_uuid = 29355b97-1fd8-4135-a26e-d7efeaa27b0a

live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"

[matchmaker_redis]

[metrics]

[mks]

[neutron]

url = http://10.30.1.208:9696

auth_url = http://10.30.1.208:35357

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = neutron

password = neutron

[notifications]

[osapi_v21]

[oslo_concurrency]

lock_path=/var/lib/nova/tmp

[oslo_messaging_amqp]

[oslo_messaging_kafka]

[oslo_messaging_notifications]

[oslo_messaging_rabbit]

[oslo_messaging_zmq]

[oslo_middleware]

[oslo_policy]

[pci]

[placement]

os_region_name = RegionOne

project_domain_name = Default

project_name = service

auth_type = password

user_domain_name = Default

auth_url = http://10.30.1.208:35357/v3

username = placement

password = placement

[quota]

[rdp]

[remote_debug]

[scheduler]

[serial_console]

[service_user]

[spice]

[trusted_computing]

[upgrade_levels]

[vendordata_dynamic_auth]

[vmware]

[vnc]

enabled = true

server_listen = 0.0.0.0

server_proxyclient_address = 10.30.1.203

novncproxy_base_url = http://10.30.1.208:6080/vnc_auto.html

[workarounds]

[wsgi]

[xenserver]

[xvp]

Finalize the installation

Restart the Compute service:

[root@node3 ~]# systemctl restart openstack-nova-compute.service

Start the Linux bridge agent and configure it to start at boot:

[root@node3 ~]# systemctl enable neutron-linuxbridge-agent.service

[root@node3 ~]# systemctl start neutron-linuxbridge-agent.service

Verify that Neutron sees the compute node (run from the control node):

[root@node1 ~]# source admin-openstack.sh

[root@node1 ~]# openstack network agent list

+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+

| ID                                   | Agent Type         | Host  | Availability Zone | Alive | State | Binary                    |

+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+

| 051ac750-5922-489b-a2ff-9135ffb2103f | Linux bridge agent | node1 | None              | :-)   | UP    | neutron-linuxbridge-agent |

| 2e4ed59b-1f22-4510-9258-fae0ff15e8e7 | DHCP agent         | node1 | nova              | :-)   | UP    | neutron-dhcp-agent        |

| a2778f7c-5ed2-4769-8786-84448d1fe27e | L3 agent           | node1 | nova              | :-)   | UP    | neutron-l3-agent          |

| d8f84f14-7f70-4dca-b4c2-0285b7a56166 | Linux bridge agent | node3 | None              | :-)   | UP    | neutron-linuxbridge-agent |

| ffaff574-b3ca-49a3-9c00-78bc7033b642 | Metadata agent     | node1 | None              | :-)   | UP    | neutron-metadata-agent    |

+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+

The Linux bridge agent on the compute node has successfully registered with the control node.

Repeat the node3 steps on node4.

Check that all the Neutron agents are now up:

[root@node1 ~]# openstack network agent list

+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+

| ID                                   | Agent Type         | Host  | Availability Zone | Alive | State | Binary                    |

+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+

| 051ac750-5922-489b-a2ff-9135ffb2103f | Linux bridge agent | node1 | None              | :-)   | UP    | neutron-linuxbridge-agent |

| 2e4ed59b-1f22-4510-9258-fae0ff15e8e7 | DHCP agent         | node1 | nova              | :-)   | UP    | neutron-dhcp-agent        |

| a2778f7c-5ed2-4769-8786-84448d1fe27e | L3 agent           | node1 | nova              | :-)   | UP    | neutron-l3-agent          |

| c75f9b28-7010-4dfd-b646-ff79456f1435 | Linux bridge agent | node4 | None              | :-)   | UP    | neutron-linuxbridge-agent |

| d8f84f14-7f70-4dca-b4c2-0285b7a56166 | Linux bridge agent | node3 | None              | :-)   | UP    | neutron-linuxbridge-agent |

| ffaff574-b3ca-49a3-9c00-78bc7033b642 | Metadata agent     | node1 | None              | :-)   | UP    | neutron-metadata-agent    |

+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+

Create the VXLAN network "vxlan100_net"

Note: an administrator can create a VXLAN network with any VNI; ordinary users cannot specify one and are allocated a VNI at random from the ranges set in the configuration file. A sketch of the creation command follows.
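The creation step itself is not shown in the original walkthrough; as admin it would look roughly like this (--provider-segment pins the VNI to 100, and the subnet range matches the 172.16.100.0/24 addresses that appear below):

# openstack network create --provider-network-type vxlan --provider-segment 100 vxlan100_net
# openstack subnet create --network vxlan100_net --subnet-range 172.16.100.0/24 vxlan100_subnet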

What changed in the underlying network

On the control node, run brctl show to inspect the resulting bridge topology:

[root@node1 ~]# brctl show

bridge name    bridge id          STP enabled    interfaces

brq85ae5035-20 8000.42b8819dab66  no             tapd40d05b8-bd

                                                 vxlan-100

Neutron created:

    the bridge brq85ae5035-20 for vxlan100

    the VXLAN interface vxlan-100

    the DHCP tap device tapd40d05b8-bd

vxlan-100 and tapd40d05b8-bd are both attached to brq85ae5035-20, so the VXLAN layer-2 network is ready. Run ip -d link show vxlan-100 to inspect the VXLAN interface in detail:

11: vxlan-100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master brq85ae5035-20 state UNKNOWN mode DEFAULT group default qlen 1000

    link/ether 42:b8:81:9d:ab:66 brd ff:ff:ff:ff:ff:ff promiscuity 1

    vxlan id 100 dev eth2 srcport 0 0 dstport 8472 ageing 300 noudpcsum noudp6zerocsumtx noudp6zerocsumrx

As shown, vxlan-100 carries VNI 100, and its VTEP network interface is eth2 (dev eth2).

Next, a look at the DHCP side of vxlan-100:

[root@node1 ~]# openstack network list

+--------------------------------------+--------------+--------------------------------------+

| ID                                   | Name         | Subnets                              |

+--------------------------------------+--------------+--------------------------------------+

| 5ac5c948-909f-47ff-beba-a2ffaf917c5f | vlan99       | bbd536c6-a975-4841-8082-35b28de16ef0 |

| 85ae5035-203b-4ef7-b65c-397f80b5a8af | vxlan100_net | b81eec88-d7b5-49ef-bf45-7c251bebf165 |

+--------------------------------------+--------------+--------------------------------------+

[root@node1 ~]# ip netns list | grep 85ae5035-203b-4ef7-b65c-397f80b5a8af

qdhcp-85ae5035-203b-4ef7-b65c-397f80b5a8af (id: 1)

[root@node1 ~]# ip netns exec qdhcp-85ae5035-203b-4ef7-b65c-397f80b5a8af ip addr show

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

       valid_lft forever preferred_lft forever

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

2: ns-d40d05b8-bd@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000

    link/ether fa:16:3e:6e:c0:44 brd ff:ff:ff:ff:ff:ff link-netnsid 0

    inet 172.16.100.10/24 brd 172.16.100.255 scope global ns-d40d05b8-bd

       valid_lft forever preferred_lft forever

    inet 169.254.169.254/16 brd 169.254.255.255 scope global ns-d40d05b8-bd

       valid_lft forever preferred_lft forever

    inet6 fe80::f816:3eff:fe6e:c044/64 scope link

       valid_lft forever preferred_lft forever
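The namespace also carries 169.254.169.254, which is exactly what enable_isolated_metadata = true provides: dnsmasq answers DHCP and a metadata proxy listens inside the same namespace. One way to confirm the listeners (a sketch; the tooling available on the host may vary):

# ip netns exec qdhcp-85ae5035-203b-4ef7-b65c-397f80b5a8af ss -lnptu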

Attach instances to vxlan100_net

[root@node1 ~]# openstack server list

+--------------------------------------+---------------+--------+--------------------------------------------------+-----------------+--------+

| ID                                   | Name          | Status | Networks                                         | Image           | Flavor |

+--------------------------------------+---------------+--------+--------------------------------------------------+-----------------+--------+

| 929c6c23-804e-4cb7-86ad-d8db8554e33f | centos7.6-vm2 | ACTIVE | vlan99=172.16.99.117; vxlan100_net=172.16.100.19 | CentOS 7.6 64位 | 1c1g   |

| 027788d0-f189-4362-8716-2d0a9548dded | centos7.6-vm1 | ACTIVE | vlan99=172.16.99.123; vxlan100_net=172.16.100.12 | CentOS 7.6 64位 | 1c1g   |

+--------------------------------------+---------------+--------+--------------------------------------------------+-----------------+--------+

Inspect the compute node's network after the instances are created:

[root@node3 ~]# virsh list --name --uuid

929c6c23-804e-4cb7-86ad-d8db8554e33f instance-0000014b             

[root@node3 ~]# virsh domiflist 929c6c23-804e-4cb7-86ad-d8db8554e33f

Interface  Type       Source     Model       MAC

-------------------------------------------------------

tap9784ef08-97 bridge     brq5ac5c948-90 virtio      fa:16:3e:e7:c4:39

tap31c64a21-76 bridge     brq85ae5035-20 virtio      fa:16:3e:cb:98:2c

[root@node3 ~]# brctl show

bridge name    bridge id          STP enabled    interfaces

brq5ac5c948-90 8000.525400a141e1  no             eth1.99

                                                 tap9784ef08-97

brq85ae5035-20 8000.d2d05820b08c  no             tap31c64a21-76

                                                 vxlan-100

centos7.6-vm1 (172.16.100.12) and centos7.6-vm2 (172.16.100.19) run on different compute nodes and are connected through vxlan100; verify connectivity with ping.

On centos7.6-vm2, run ping 172.16.100.12.

Troubleshooting: at first centos7.6-vm1 (172.16.100.12) and centos7.6-vm2 (172.16.100.19) could not ping each other, even though SELinux and iptables were disabled inside both guests. The problem turned out to be the security group: add a rule permitting ICMP to the security group attached to the instances, as sketched below.
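A sketch of that fix with the CLI; the group name default is an assumption, so substitute whichever security group the instances actually carry:

# openstack security group rule create --protocol icmp --ingress default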

Understanding L2 Population

L2 Population exists to improve the scalability of VXLAN networks.

It addresses the cost of broadcast traffic once a VXLAN network spans many nodes: L2 Population provides proxy ARP on the VTEPs, so that each VTEP learns in advance:

    the VM IP-to-MAC mappings

    the VM-to-VTEP mappings

Looking at the forwarding database on the control node, the VTEP has stored the port information for centos7.6-vm1 and centos7.6-vm2:

[root@node1 ~]# bridge fdb show dev vxlan-100

42:b8:81:9d:ab:66 vlan 1 master brq85ae5035-20 permanent

42:b8:81:9d:ab:66 master brq85ae5035-20 permanent

fa:16:3e:cb:98:2c master brq85ae5035-20

fa:16:3e:06:42:34 master brq85ae5035-20

00:00:00:00:00:00 dst 192.168.248.4 self permanent

00:00:00:00:00:00 dst 192.168.248.3 self permanent

fa:16:3e:06:42:34 dst 192.168.248.4 self permanent

fa:16:3e:cb:98:2c dst 192.168.248.3 self permanent

centos7.6-vm2's MAC is fa:16:3e:cb:98:2c

centos7.6-vm1's MAC is fa:16:3e:06:42:34

Now look at the forwarding databases on the two compute nodes:

[root@node3 ~]# bridge fdb show dev vxlan-100

d2:d0:58:20:b0:8c master brq85ae5035-20 permanent

d2:d0:58:20:b0:8c vlan 1 master brq85ae5035-20 permanent

00:00:00:00:00:00 dst 192.168.248.1 self permanent

00:00:00:00:00:00 dst 192.168.248.4 self permanent

fa:16:3e:06:42:34 dst 192.168.248.4 self permanent

fa:16:3e:6e:c0:44 dst 192.168.248.1 self permanent

[root@node4 ~]#  bridge fdb show dev vxlan-100

da:1e:c7:c0:6a:dc master brq85ae5035-20 permanent

da:1e:c7:c0:6a:dc vlan 1 master brq85ae5035-20 permanent

00:00:00:00:00:00 dst 192.168.248.1 self permanent

00:00:00:00:00:00 dst 192.168.248.3 self permanent

fa:16:3e:6e:c0:44 dst 192.168.248.1 self permanent

fa:16:3e:cb:98:2c dst 192.168.248.3 self permanent

When centos7.6-vm2 (fa:16:3e:cb:98:2c) needs to talk to centos7.6-vm1 (fa:16:3e:06:42:34), the VTEP on compute node node3 (192.168.248.3) encapsulates the frames and sends the VXLAN packets directly to the VTEP on compute node node4 (192.168.248.4), with no flooding needed.

A note: the configuration never has to name eth2 as the instance-traffic interface explicitly; local_ip = x.x.x.x already determines it, because whichever interface owns that IP is the one VXLAN uses.

local_ip specifies the IP address of the VTEP:

the control node's (node1) VTEP IP is 192.168.248.1

the compute node's (node3) VTEP IP is 192.168.248.3

the compute node's (node4) VTEP IP is 192.168.248.4
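To watch the encapsulation on the wire, capture on the VTEP interface; UDP port 8472 matches the dstport shown in the ip -d link output above:

# tcpdump -i eth2 -nn udp port 8472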

Troubleshooting an error

The Linux bridge agent on node4 did not come up:

[root@node1 ~]# openstack network agent list

+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+

| ID                                   | Agent Type         | Host  | Availability Zone | Alive | State | Binary                    |

+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+

| 051ac750-5922-489b-a2ff-9135ffb2103f | Linux bridge agent | node1 | None              | :-)   | UP    | neutron-linuxbridge-agent |

| 2e4ed59b-1f22-4510-9258-fae0ff15e8e7 | DHCP agent         | node1 | nova              | :-)   | UP    | neutron-dhcp-agent        |

| a2778f7c-5ed2-4769-8786-84448d1fe27e | L3 agent           | node1 | nova              | :-)   | UP    | neutron-l3-agent          |

| c75f9b28-7010-4dfd-b646-ff79456f1435 | Linux bridge agent | node4 | None              | XXX   | UP    | neutron-linuxbridge-agent |

| d8f84f14-7f70-4dca-b4c2-0285b7a56166 | Linux bridge agent | node3 | None              | :-)   | UP    | neutron-linuxbridge-agent |

| ffaff574-b3ca-49a3-9c00-78bc7033b642 | Metadata agent     | node1 | None              | :-)   | UP    | neutron-metadata-agent    |

+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+

[root@node4 ~]# systemctl status neutron-linuxbridge-agent.service

● neutron-linuxbridge-agent.service - OpenStack Neutron Linux Bridge Agent

   Loaded: loaded (/usr/lib/systemd/system/neutron-linuxbridge-agent.service; enabled; vendor preset: disabled)

   Active: failed (Result: start-limit) since Sun 2020-02-09 11:17:22 CST; 2min 1s ago

  Process: 8499 ExecStart=/usr/bin/neutron-linuxbridge-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/linuxbridge_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-linuxbridge-agent --log-file /var/log/neutron/linuxbridge-agent.log (code=exited, status=1/FAILURE)

  Process: 8493 ExecStartPre=/usr/bin/neutron-enable-bridge-firewall.sh (code=exited, status=0/SUCCESS)

Main PID: 8499 (code=exited, status=1/FAILURE)

Feb 09 11:17:22 node4 systemd[1]: Unit neutron-linuxbridge-agent.service entered failed state.

Feb 09 11:17:22 node4 systemd[1]: neutron-linuxbridge-agent.service failed.

Feb 09 11:17:22 node4 systemd[1]: neutron-linuxbridge-agent.service holdoff time over, scheduling restart.

Feb 09 11:17:22 node4 systemd[1]: Stopped OpenStack Neutron Linux Bridge Agent.

Feb 09 11:17:22 node4 systemd[1]: start request repeated too quickly for neutron-linuxbridge-agent.service

Feb 09 11:17:22 node4 systemd[1]: Failed to start OpenStack Neutron Linux Bridge Agent.

Feb 09 11:17:22 node4 systemd[1]: Unit neutron-linuxbridge-agent.service entered failed state.

Feb 09 11:17:22 node4 systemd[1]: neutron-linuxbridge-agent.service failed.

Check the journal:

Feb 09 11:06:18 node4 systemd[1]: Starting OpenStack Neutron Linux Bridge Agent…

-- Subject: Unit neutron-linuxbridge-agent.service has begun start-up

-- Defined-By: systemd

-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel

--

-- Unit neutron-linuxbridge-agent.service has begun starting up.

Feb 09 11:06:18 node4 neutron-enable-bridge-firewall.sh[4773]: net.bridge.bridge-nf-call-iptables = 1

Feb 09 11:06:18 node4 neutron-enable-bridge-firewall.sh[4773]: net.bridge.bridge-nf-call-ip6tables = 1

Feb 09 11:06:18 node4 systemd[1]: Started OpenStack Neutron Linux Bridge Agent.

-- Subject: Unit neutron-linuxbridge-agent.service has finished start-up

-- Defined-By: systemd

-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel

--

-- Unit neutron-linuxbridge-agent.service has finished starting up.

--

-- The start-up result is done.

Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: Traceback (most recent call last):

Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: File "/usr/bin/neutron-linuxbridge-agent", line 10, in

Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: sys.exit(main())

Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: File "/usr/lib/python2.7/site-packages/neutron/cmd/eventlet/plugins/linuxbridge_neutron_agent.py", line 21, in main

Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: agent_main.main()

Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py", line 985, in main

Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: common_config.init(sys.argv[1:])

Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: File "/usr/lib/python2.7/site-packages/neutron/common/config.py", line 78, in init

Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: **kwargs)

Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2502, in __call__

Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: else sys.argv[1:])

Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 3166, in _parse_cli_opts

Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: return self._parse_config_files()

Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 3202, in _parse_config_files

Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: self._oparser.parse_args(self._args, namespace)

Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2330, in parse_args

Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: return super(_CachedArgumentParser, self).parse_args(args, namespace)

Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: File "/usr/lib64/python2.7/argparse.py", line 1688, in parse_args

Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: args, argv = self.parse_known_args(args, namespace)

Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: File "/usr/lib64/python2.7/argparse.py", line 1720, in parse_known_args

Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: namespace, args = self._parse_known_args(args, namespace)

Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: File "/usr/lib64/python2.7/argparse.py", line 1926, in _parse_known_args

Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: start_index = consume_optional(start_index)

Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: File "/usr/lib64/python2.7/argparse.py", line 1866, in consume_optional

Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: take_action(action, args, option_string)

Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: File "/usr/lib64/python2.7/argparse.py", line 1794, in take_action

Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: action(self, namespace, argument_values, option_string)

Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 1695, in __call__

Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: ConfigParser._parse_file(values, namespace)

Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 1950, in _parse_file

Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: raise ConfigFileParseError(pe.filename, str(pe))

Feb 09 11:06:20 node4 neutron-linuxbridge-agent[4779]: oslo_config.cfg.ConfigFileParseError: Failed to parse /etc/neutron/plugins/ml2/linuxbridge_agent.ini: at /etc/neutron/plugins/ml2/linuxbridge_agent.ini:1, Invalid section (must end with ]): '[DEFAULT'

Feb 09 11:06:20 node4 systemd[1]: neutron-linuxbridge-agent.service: main process exited, code=exited, status=1/FAILURE

Feb 09 11:06:20 node4 systemd[1]: Unit neutron-linuxbridge-agent.service entered failed state.

Feb 09 11:06:20 node4 systemd[1]: neutron-linuxbridge-agent.service failed.

Feb 09 11:06:20 node4 systemd[1]: neutron-linuxbridge-agent.service holdoff time over, scheduling restart.

Feb 09 11:06:20 node4 systemd[1]: Stopped OpenStack Neutron Linux Bridge Agent.

-- Subject: Unit neutron-linuxbridge-agent.service has finished shutting down

-- Defined-By: systemd

-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel

--

-- Unit neutron-linuxbridge-agent.service has finished shutting down.

The culprit is the first line of /etc/neutron/plugins/ml2/linuxbridge_agent.ini: a slip while editing the file had left '[DEFAULT' where '[DEFAULT]' belongs. Change [DEFAULT back to [DEFAULT].
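A quick way to catch this class of typo before restarting is to search for section headers that never close their bracket (a hedged one-liner, not part of the original procedure):

# grep -n '^\[[^]]*$' /etc/neutron/plugins/ml2/linuxbridge_agent.ini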

Restart the service:

[root@node4 ~]# systemctl reset-failed neutron-linuxbridge-agent.service

[root@node4 ~]# systemctl start neutron-linuxbridge-agent.service

This time the journal looks healthy:

Feb 09 11:27:14 node4 systemd[1]: Starting OpenStack Neutron Linux Bridge Agent…

-- Subject: Unit neutron-linuxbridge-agent.service has begun start-up

-- Defined-By: systemd

-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel

--

-- Unit neutron-linuxbridge-agent.service has begun starting up.

Feb 09 11:27:14 node4 neutron-enable-bridge-firewall.sh[9710]: net.bridge.bridge-nf-call-iptables = 1

Feb 09 11:27:14 node4 neutron-enable-bridge-firewall.sh[9710]: net.bridge.bridge-nf-call-ip6tables = 1

Feb 09 11:27:14 node4 systemd[1]: Started OpenStack Neutron Linux Bridge Agent.

-- Subject: Unit neutron-linuxbridge-agent.service has finished start-up

-- Defined-By: systemd

-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel

--

-- Unit neutron-linuxbridge-agent.service has finished starting up.

--

-- The start-up result is done.

Feb 09 11:27:14 node4 polkitd[3346]: Unregistered Authentication Agent for unix-process:9704:132919 (system bus name :1.52, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)

Feb 09 11:27:17 node4 sudo[9738]:  neutron : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/neutron-rootwrap-daemon /etc/neutron/rootwrap.conf

Feb 09 11:27:17 node4 systemd[1]: Started Session c1 of user root.

-- Subject: Unit session-c1.scope has finished start-up

-- Defined-By: systemd

-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel

--

-- Unit session-c1.scope has finished starting up.

--

-- The start-up result is done.

Feb 09 11:27:17 node4 sudo[9738]: pam_unix(sudo:session): session opened for user root by (uid=0)

Feb 09 11:30:01 node4 systemd[1]: Started Session 10 of user root.

-- Subject: Unit session-10.scope has finished start-up

-- Defined-By: systemd

-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel

--

-- Unit session-10.scope has finished starting up.

--

-- The start-up result is done.

[root@node1 ~]# openstack network agent list

+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+

| ID                                   | Agent Type         | Host  | Availability Zone | Alive | State | Binary                    |

+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+

| 051ac750-5922-489b-a2ff-9135ffb2103f | Linux bridge agent | node1 | None              | :-)   | UP    | neutron-linuxbridge-agent |

| 2e4ed59b-1f22-4510-9258-fae0ff15e8e7 | DHCP agent         | node1 | nova              | :-)   | UP    | neutron-dhcp-agent        |

| a2778f7c-5ed2-4769-8786-84448d1fe27e | L3 agent           | node1 | nova              | :-)   | UP    | neutron-l3-agent          |

| c75f9b28-7010-4dfd-b646-ff79456f1435 | Linux bridge agent | node4 | None              | :-)   | UP    | neutron-linuxbridge-agent |

| d8f84f14-7f70-4dca-b4c2-0285b7a56166 | Linux bridge agent | node3 | None              | :-)   | UP    | neutron-linuxbridge-agent |

| ffaff574-b3ca-49a3-9c00-78bc7033b642 | Metadata agent     | node1 | None              | :-)   | UP    | neutron-metadata-agent    |

+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+
