Learning OpenStack (Part 4)

I. KVM Virtualization

1. Install KVM:

yum install qemu-kvm qemu-kvm-tools virt-manager libvirt virt-install

/etc/init.d/libvirtd start

2. Create a 5 GB raw disk image for the virtual machine:

qemu-img create -f raw /opt/centos-6.5-x86_64.raw 5G

3. Check the image's size and actual disk usage:

qemu-img info /opt/centos-6.5-x86_64.raw

4. Install and start the virtual machine, specifying 512 MB of RAM, the disk path, the CD-ROM location, and a VNC console:

virt-install --virt-type kvm --name centos-6.6-64 --ram 512 --cdrom=/opt/centos-6.5.iso --disk path=/opt/centos-6.5-x86_64.raw --network network=default --graphics vnc,listen=0.0.0.0 --noautoconsole --os-type=linux --os-variant=rhel6

5. Access the virtual machine over VNC:

172.16.2.210, port 5900        (the first VM defaults to port 5900, the second to 5901, and so on)
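If you are not sure which VNC display a guest is using, virsh can report it (a quick check, assuming the guest created above):

virsh vncdisplay centos-6.6-64   (prints a display such as :0, which maps to port 5900)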

6. List all defined virtual machines:

virsh list --all

7. Start the virtual machine:

virsh start centos-6.6-64

7.1. Connect directly to the virtual machine's console:

virsh console centos-6.6-64

8. View the domain XML file:

vim /etc/libvirt/qemu/centos-6.6-64.xml

virsh edit centos-6.6-64        (edit the XML this way; do not modify the file directly with vim)

9. Define a new virtual machine from an XML file:

virsh define /opt/centos-6.6-64.xml
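One way to obtain such an XML file is to dump it from an existing guest first (a sketch, assuming the guest defined above):

virsh dumpxml centos-6.6-64 > /opt/centos-6.6-64.xml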

10. Virtual machine monitoring command: virt-top

11. Check the virtual bridge status: brctl show

12. Create a bridge and attach eth0 to it:

brctl addbr br0

brctl addif br0 eth0 && ip addr del 172.16.1.210/24 dev eth0 && ifconfig br0 172.16.1.210/24 up

(After creating the bridge, remove the original IP from eth0 and configure that same IP on br0.)
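To make the bridge survive a reboot, the same layout can be written into the CentOS 6 ifcfg files (a minimal sketch, assuming eth0 and the 172.16.1.210/24 address used above):

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BRIDGE=br0

# /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=172.16.1.210
NETMASK=255.255.255.0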

13. Switch the virtual machine's NIC to the bridge just created:

virsh edit centos-6.6-64   (edit the virtual machine's XML)

virsh destroy centos-6.6-64 && virsh start centos-6.6-64    (shut down and restart the VM for the change to take effect)
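Inside the XML, the change is to the <interface> element; pointing it at the bridge looks roughly like this (a sketch, assuming the br0 bridge from step 12):

<interface type='bridge'>
  <source bridge='br0'/>      <!-- bridge created in step 12 -->
  <model type='virtio'/>      <!-- optional; virtio is the usual choice -->
</interface>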

II. OpenStack Cloud Computing and Virtualization (the "I" release, Icehouse)

 

1. Install the OpenStack base environment.

2. Install MySQL and add the required settings to my.cnf.

3. Create the MySQL databases that OpenStack needs.
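A minimal sketch of that database setup, assuming each service uses the service name as both its MySQL user and password (matching the connection strings used later in this document; the keystone credentials here are an assumption):

mysql -u root -p
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';
-- repeat the same pattern for the nova, neutron, and cinder databases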

4. Install the RabbitMQ message queue:

   yum install rabbitmq-server

5. Install the RabbitMQ web management plugin:

   cd /usr/lib/rabbitmq/bin

   ./rabbitmq-plugins enable rabbitmq_management

   /etc/init.d/rabbitmq-server restart

   ./rabbitmq-plugins list  (verify that the plugin was installed correctly)

  http://172.16.1.210:15672  (access the web UI at this address; the default username and password are both guest)

III. Identity Service: Keystone (ports 5000, 35357)

6. Configure the official OpenStack package repository:

7. Install openstack-keystone:

yum install openstack-keystone python-keystoneclient

8. Set up the PKI directory for Keystone:

 keystone-manage pki_setup --keystone-user keystone --keystone-group keystone

 chown -R keystone:keystone /etc/keystone/ssl

 chmod -R o-rwx /etc/keystone/ssl   (remove permissions for other users)

9. Edit the Keystone configuration file:

vim /etc/keystone/keystone.conf 
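A minimal sketch of the settings involved (the ADMIN token matches the OS_SERVICE_TOKEN used in step 12 below; the keystone MySQL credentials are an assumption following the same pattern as the other services):

  1. [DEFAULT]
  2. admin_token=ADMIN
  3. [database]
  4. connection=mysql://keystone:keystone@172.16.1.210/keystone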

10. Initialize the Keystone database schema:

   keystone-manage db_sync  (no errors means success; it is best to confirm with SHOW TABLES in MySQL)

  rm /var/log/keystone/keystone.log  (skipping this causes a permission error when the service starts)

11. Start Keystone:

  /etc/init.d/openstack-keystone start

12. Set the environment variables needed to connect to Keystone:

export OS_SERVICE_TOKEN=ADMIN

export OS_SERVICE_ENDPOINT=http://172.16.1.210:35357/v2.0

13. Initialize Keystone's user data (admin, demo):

Create the admin and demo users, create the admin role, and create the service and admin tenants:

  1. keystone user-create --name=admin --pass=admin --email=admin@example.com

  2. keystone role-create --name=admin

  3. keystone tenant-create --name=admin --description="Admin Tenant"

  4. keystone user-role-add --user=admin --tenant=admin --role=admin

  5. keystone user-role-add --user=admin --role=_member_ --tenant=admin

  6. keystone user-create --name=demo --pass=demo

  7. keystone tenant-create --name=demo --description="demo Tenant"

  8. keystone user-role-add --user=demo --role=_member_ --tenant=demo

  9. keystone tenant-create --name=service

  10. keystone service-create --name=keystone --type=identity

  11. keystone endpoint-create --service-id=$(keystone service-list | awk '/ identity / {print $2}') --publicurl=http://172.16.1.210:5000/v2.0 --internalurl=http://172.16.1.210:5000/v2.0 --adminurl=http://172.16.1.210:35357/v2.0

After creation, run keystone user-list to confirm that both the admin and demo accounts exist.

14. Retrieve a token as the admin user:

unset OS_SERVICE_TOKEN

unset OS_SERVICE_ENDPOINT

keystone --os-username=admin --os-password=admin --os-tenant-name=admin --os-auth-url=http://172.16.1.210:35357/v2.0 token-get

15. Create environment variable files for the admin and demo users:

vim /root/keystone-admin

  1. export OS_TENANT_NAME=admin

  2. export OS_USERNAME=admin

  3. export OS_PASSWORD=admin

  4. export OS_AUTH_URL=http://172.16.1.210:35357/v2.0

vim /root/keystone-demo

  1. export OS_TENANT_NAME=demo

  2. export OS_USERNAME=demo

  3. export OS_PASSWORD=demo

  4. export OS_AUTH_URL=http://172.16.1.210:35357/v2.0
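To use them, source the appropriate file before running client commands, for example:

source /root/keystone-admin
keystone user-list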

IV. Image Service: Glance (ports 9292, 9191)

1. Install Glance:

yum install openstack-glance python-glance python-glanceclient

2. Configure Glance:

vim /etc/glance/glance-api.conf

  1. [DEFAULT]

  2. debug=True

  3. default_store=file

  4. filesystem_store_datadir=/data/glance/images/

  5. log_file=/var/log/glance/api.log

  6. notifier_strategy = rabbit

  7. rabbit_host=172.16.1.210

  8. rabbit_port=5672

  9. rabbit_use_ssl=false

  10. rabbit_userid=guest

  11. rabbit_password=guest

  12. rabbit_virtual_host=/

  13. rabbit_notification_exchange=glance

  14. rabbit_notification_topic=notifications

  15. rabbit_durable_queues=False

  16. [database]

  17. connection=mysql://glance:glance@172.16.1.210/glance

  18. [keystone_authtoken]

  19. auth_host=172.16.1.210

  20. auth_port=35357

  21. auth_protocol=http

  22. admin_tenant_name=service

  23. admin_user=glance

  24. admin_password=glance

  25. [paste_deploy]

  26. flavor=keystone

vim /etc/glance/glance-registry.conf

  1. debug=True

  2. log_file=/var/log/glance/registry.log

  3. connection=mysql://glance:glance@172.16.1.210/glance

  4. [keystone_authtoken]

  5. auth_host=172.16.1.210

  6. auth_port=35357

  7. auth_protocol=http

  8. admin_tenant_name=service

  9. admin_user=glance

  10. admin_password=glance

  11. [paste_deploy]

  12. flavor=keystone

3. Sync the Glance MySQL database:

glance-manage db_sync  (warnings can be ignored)

chown -R glance:glance /var/log/glance

4. Create the glance user in Keystone:

keystone user-create --name=glance --pass=glance  (create the glance user; the password is also glance)

keystone user-role-add --user=glance --tenant=service --role=admin  (add the glance user to the service tenant with the admin role)

5. Register the Glance service and its endpoint URLs in Keystone:

  1. keystone service-create --name=glance --type=image

  2. keystone endpoint-create --service-id=$(keystone service-list|awk '/ image / {print $2}') --publicurl=http://172.16.1.210:9292 --internalurl=http://172.16.1.210:9292 --adminurl=http://172.16.1.210:9292

6. Start Glance:

/etc/init.d/openstack-glance-api start

/etc/init.d/openstack-glance-registry start

7. List the images:

glance image-list

8. Download a test image (CirrOS) and import it:

  1. wget http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img

  2. glance image-create --name "cirros-0.3.2-x86_64" --disk-format qcow2 --container-format bare --is-public True --file cirros-0.3.2-x86_64-disk.img
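To confirm the upload worked, list the images again and check the datadir configured in glance-api.conf (a quick check, assuming the /data/glance/images/ path above):

glance image-list
ls /data/glance/images/   (the uploaded image file should appear here, named by its ID)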

V. Compute Service: Nova (ports 8774, 6080)

1. Install the Nova services on the controller node:

yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient

pip install websockify==0.5.1   (fixes novnc failing to start)

2. Edit the Nova configuration file:

vim /etc/nova/nova.conf

  1. rabbit_host=172.16.1.210

  2. rabbit_port=5672

  3. rabbit_use_ssl=false

  4. rabbit_userid=guest

  5. rabbit_password=guest

  6. rpc_backend=rabbit

  7. my_ip=172.16.1.210

  8. auth_strategy=keystone

  9. network_api_class=nova.network.neutronv2.api.API

  10. linuxnet_interface_driver=nova.network.linux_net.LinuxBridgeInterfaceDriver

  11. neutron_url=http://172.16.1.210:9696

  12. neutron_admin_username=neutron

  13. neutron_admin_password=neutron

  14. neutron_admin_tenant_id=96616014997f4f79b7dbd9e319912154  (the service tenant ID shown by "keystone tenant-list"; see the lookup sketch after this list)

  15. neutron_admin_tenant_name=service

  16. neutron_admin_auth_url=http://172.16.1.210:5000/v2.0

  17. neutron_auth_strategy=keystone

  18. firewall_driver=nova.virt.firewall.NoopFirewallDriver

  19. novncproxy_base_url=http://172.16.1.210:6080/vnc_auto.html

  20. vncserver_listen=0.0.0.0

  21. vncserver_proxyclient_address=172.16.1.210

  22. vnc_enabled=true

  23. vnc_keymap=en-us

  24. connection=mysql://nova:nova@172.16.1.210/nova

  25. auth_host=172.16.1.210

  26. auth_port=35357

  27. auth_protocol=http

  28. auth_uri=http://172.16.1.210:5000

  29. auth_version=v2.0

  30. admin_user=nova

  31. admin_password=nova

  32. admin_tenant_name=service

  33. vif_driver=nova.virt.libvirt.vif.NeutronLinuxBridgeVIFDriver
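A sketch of looking up the service tenant ID referenced by neutron_admin_tenant_id above (the same awk pattern used by the endpoint-create commands in this document):

keystone tenant-list | awk '/ service / {print $2}'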

3. Initialize the Nova database schema:

nova-manage db sync

4. Create the nova user in Keystone:

source /root/keystone-admin

keystone user-create --name=nova --pass=nova

keystone user-role-add --user=nova --tenant=service --role=admin

5. Register the Nova service and its endpoint URLs in Keystone:

  1. source /root/keystone-admin  (use whichever file holds your admin credentials)

  2. keystone service-create --name=nova --type=compute

  3. keystone endpoint-create --service-id=$(keystone service-list|awk '/ compute / {print $2}') --publicurl=http://172.16.1.210:8774/v2/%\(tenant_id\)s --internalurl=http://172.16.1.210:8774/v2/%\(tenant_id\)s --adminurl=http://172.16.1.210:8774/v2/%\(tenant_id\)s

6. Start all the related Nova services:

for i in {api,cert,conductor,consoleauth,novncproxy,scheduler};do service openstack-nova-$i start; done

7. Install the required services on the compute node:

yum install -y qemu-kvm libvirt openstack-nova-compute python-novaclient

yum upgrade device-mapper-libs

8. Copy the Nova configuration file from the controller node to the compute node:

scp /etc/nova/nova.conf 172.16.1.211:/etc/nova/nova.conf

Then change vncserver_proxyclient_address to the compute node's own IP, 172.16.1.211.
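One way to make that change on the compute node (a sketch, assuming the option sits on its own line in the copied file):

sed -i 's/^vncserver_proxyclient_address=.*/vncserver_proxyclient_address=172.16.1.211/' /etc/nova/nova.conf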

9. Start the related services on the compute node:

/etc/init.d/libvirtd start

/etc/init.d/messagebus start

/etc/init.d/openstack-nova-compute start

10. On the controller node, check that the compute node has registered:

nova host-list

VI. Networking Service: Neutron (port 9696)

1. Install the Neutron services on the controller node:

yum install openstack-neutron openstack-neutron-ml2 python-neutronclient openstack-neutron-linuxbridge

2. Edit the Neutron configuration files:

vim /etc/neutron/neutron.conf

  1. state_path = /var/lib/neutron

  2. lock_path = $state_path/lock

  3. core_plugin = ml2

  4. service_plugins = router,firewall,lbaas

  5. api_paste_config = /usr/share/neutron/api-paste.ini

  6. auth_strategy = keystone

  7. rabbit_host = 172.16.1.210

  8. rabbit_password = guest

  9. rabbit_port = 5672

  10. rabbit_userid = guest

  11. rabbit_virtual_host = /

  12. notify_nova_on_port_status_changes = true

  13. notify_nova_on_port_data_changes = true

  14. nova_url = http://172.16.1.210:8774/v2

  15. nova_admin_username = nova

  16. nova_admin_tenant_id = 96616014997f4f79b7dbd9e319912154  (the service tenant ID from "keystone tenant-list", the same value as in nova.conf)

  17. nova_admin_password = nova

  18. nova_admin_auth_url = http://172.16.1.210:35357/v2.0

  19. root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf

  20. auth_host = 172.16.1.210

  21. auth_port = 35357

  22. auth_protocol = http

  23. admin_tenant_name = service

  24. admin_user = neutron

  25. admin_password = neutron

  26. connection = mysql://neutron:neutron@172.16.1.210:3306/neutron

  27. service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

  28. service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default

vim /etc/neutron/plugins/ml2/ml2_conf.ini  (the settings below are for a single flat network)

  1. type_drivers = flat,vlan,gre,vxlan

  2. tenant_network_types = flat,vlan,gre,vxlan

  3. mechanism_drivers = linuxbridge,openvswitch

  4. flat_networks = physnet1

  5. enable_security_group = True

vim /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini    (the settings below are for a single flat network)

  1. network_vlan_ranges = physnet1

  2. physical_interface_mappings = physnet1:eth0  (set according to your NIC)

  3. firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

  4. enable_security_group = True

3. Create the neutron user in Keystone:

  1. keystone user-create --name neutron --pass neutron

  2. keystone user-role-add --user neutron --tenant service --role admin

4. The Neutron-related options in nova.conf also need to be set at this point (they are already included in the nova.conf shown earlier).

5. Register the Neutron service and its endpoint URLs in Keystone:

  1. keystone service-create --name neutron --type network

  2. keystone endpoint-create --service-id=$(keystone service-list |awk '/ network / {print $2}') --publicurl=http://172.16.1.210:9696 --internalurl=http://172.16.1.210:9696 --adminurl=http://172.16.1.210:9696

6. First start neutron-server manually to check for errors (on success it listens on port 9696):

neutron-server --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini  --config-file=/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini

7. Modify the Neutron init scripts:

vim /etc/init.d/neutron-server

vim /etc/init.d/neutron-linuxbridge-agent              (make the same change in both files)

  1. configs=(

  2. "/etc/neutron/neutron.conf" \

  3. "/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini" \

  4. "/etc/neutron/plugins/ml2/ml2_conf.ini" \

  5. )

8. Start the Neutron services:

/etc/init.d/neutron-server start

/etc/init.d/neutron-linuxbridge-agent start

9. Check the Neutron agent connections:

neutron agent-list

10. Install the Neutron services on the compute node:

yum install openstack-neutron openstack-neutron-ml2 python-neutronclient openstack-neutron-linuxbridge

11. Copy the configuration files from the controller node to the compute node:

  1. scp /etc/init.d/neutron-* 172.16.1.211:/etc/init.d/

  2. scp /etc/neutron/neutron.conf 172.16.1.211:/etc/neutron/neutron.conf

  3. scp /etc/neutron/plugins/ml2/ml2_conf.ini 172.16.1.211:/etc/neutron/plugins/ml2/ml2_conf.ini

  4. scp /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini 172.16.1.211:/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini

12. Start the Neutron agent on the compute node:

/etc/init.d/neutron-linuxbridge-agent start

VII. Web Dashboard Service (port 80)

1. Install the dashboard services:

yum install httpd mod_wsgi memcached python-memcached openstack-dashboard

2. Edit the dashboard configuration file:

vim /etc/openstack-dashboard/local_settings    (change the following settings)

  1. ALLOWED_HOSTS = ['horizon.example.com', 'localhost','172.16.1.210'] (allowed hostnames/addresses)

  2. CACHES = {

  3. 'default': {

  4. 'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',

  5. 'LOCATION' : '127.0.0.1:11211',

  6. }

  7. }

  8. OPENSTACK_HOST = "172.16.1.210"

3. Start the dashboard services:

/etc/init.d/memcached start

/etc/init.d/httpd start

4. Log in to the dashboard:

http://172.16.1.210/dashboard/

Default username/password: admin/admin

VIII. Debugging OpenStack

1. Create a single flat network for the demo tenant:

neutron net-create --tenant-id 9d18b0a337064af386cc0d599dd172fd flat_net --shared --provider:network_type flat --provider:physical_network physnet1

The --tenant-id is the demo tenant's ID from keystone tenant-list, flat_net is the name of the new network, --shared makes it a shared network, --provider:network_type sets the network type, and --provider:physical_network must match the physical network name (physnet1) in the Neutron plugin configuration.

2. View the created network:

neutron net-list

3. Create a subnet in the web UI:

Admin -> Networks -> click the network -> Create Subnet; for a flat network, use the same subnet as eth0.
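The same subnet can also be created from the CLI instead of the web UI (a sketch; the CIDR, gateway, and allocation pool below are example values on the eth0 network and should be adjusted to your environment):

neutron subnet-create flat_net 172.16.1.0/24 --name flat_subnet --gateway 172.16.1.1 --allocation-pool start=172.16.1.100,end=172.16.1.200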

4. Virtual machine creation flow chart (figure omitted).

5. Installation notes:

1. If instances cannot ping external networks after installation, check whether the NIC has promiscuous mode enabled (see the sketch after this list).

2. Force an instance's state back to "active":

nova reset-state 6986b3f8-be2c-4931-b3b9-90d8077210b6 --active

3. Enabling the "host aggregates" feature requires a configuration change:

/etc/nova/nova.conf

scheduler_default_filters=AvailabilityZoneFilter,RetryFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter

4. Add host db4 to the gigold-2 host aggregate:

nova aggregate-add-host gigold-2 db4

nova aggregate-remove-host gigold-2 db4  (this removes it)

5. Disable the default virbr0 virtual bridge (see the sketch below).
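A sketch covering notes 1 and 5 above (eth0 is assumed to be the bridged NIC; "default" is the stock libvirt network behind virbr0):

ip link set eth0 promisc on              (enable promiscuous mode on the bridged NIC)
virsh net-destroy default                (stop the default libvirt network, which owns virbr0)
virsh net-autostart default --disable    (keep it from starting again on boot)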


#####################################################################################################

I. Block Storage Service: Cinder (Controller Node)

1. Install Cinder:

yum install openstack-cinder python-cinderclient

2. Edit the configuration file:

vim /etc/cinder/cinder.conf

  1. rabbit_host=172.16.1.210

  2. rabbit_port=5672

  3. rabbit_use_ssl=false

  4. rabbit_userid=guest

  5. rabbit_password=guest

  6. rpc_backend=rabbit

  7. my_ip=172.16.1.210

  8. glance_host=$my_ip

  9. auth_strategy=keystone

  10. connection=mysql://cinder:cinder@172.16.1.210/cinder

  11. auth_host=172.16.1.210

  12. auth_port=35357

  13. auth_protocol=http

  14. auth_uri=http://172.16.1.210:5000

  15. identity_uri=http://172.16.1.210:35357/

  16. auth_version=v2.0

  17. admin_user=cinder

  18. admin_password=cinder

  19. admin_tenant_name=service

3. Initialize the Cinder database:

cinder-manage db sync

4. Create the cinder user in Keystone:

keystone user-create --name=cinder --pass=cinder

keystone user-role-add --user=cinder --tenant=service --role=admin

5. Register the Cinder services and their endpoint URLs in Keystone:

  1. keystone service-create --name=cinder --type=volume

  2. keystone endpoint-create --service-id=$(keystone service-list|awk '/ cinder / {print $2}') --publicurl=http://172.16.1.210:8776/v1/%\(tenant_id\)s --internalurl=http://172.16.1.210:8776/v1/%\(tenant_id\)s --adminurl=http://172.16.1.210:8776/v1/%\(tenant_id\)s

  3. keystone service-create --name=cinderv2 --type=volumev2

  4. keystone endpoint-create --service-id=$(keystone service-list|awk '/ cinderv2 / {print $2}') --publicurl=http://172.16.1.210:8776/v2/%\(tenant_id\)s --internalurl=http://172.16.1.210:8776/v2/%\(tenant_id\)s --adminurl=http://172.16.1.210:8776/v2/%\(tenant_id\)s

6. Start the Cinder services:

/etc/init.d/openstack-cinder-api start

/etc/init.d/openstack-cinder-scheduler start

7. Check which Cinder services (volume backends) are registered:

cinder service-list

II. Cinder Storage Node (LVM)

1. Add a disk on the storage node and initialize it as an LVM physical volume and volume group:

pvcreate /dev/sdb

vgcreate cinder-volumes /dev/sdb

2. Edit the LVM configuration file:

vim /etc/lvm/lvm.conf

  1. # Use anchors if you want to be really specific

  2. filter = [ "a/sda1/","a/sdb/","r/.*/" ]

3. Install the iSCSI target service:

yum install scsi-target-utils

4. Edit its configuration file:

vim /etc/tgt/targets.conf

include /etc/cinder/volumes/*  (add this line)

5. Install the Cinder service:

yum install openstack-cinder

6. Copy the configuration file from the controller node to the storage node:

scp /etc/cinder/cinder.conf 172.16.1.211:/etc/cinder/cinder.conf

7. Edit the configuration file:

vim /etc/cinder/cinder.conf

  1. my_ip=172.16.1.211

  2. glance_host=172.16.1.210

  3. iscsi_ip_address=$my_ip

  4. volume_backend_name=iSCSI-Storage

  5. iscsi_helper=tgtadm

  6. volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver

8. Start the related services:

/etc/init.d/tgtd start

/etc/init.d/openstack-cinder-volume start

9. Create a Cinder iSCSI volume type (to make it easy to select the backend):

  1. cinder type-create iSCSI

  2. cinder type-key iSCSI set volume_backend_name=iSCSI-Storage

  3. (Run these commands on the controller node; "iSCSI-Storage" is the backend name set in the configuration file.)
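Once the type exists, it can be used when creating a volume, for example (the volume name and the 1 GB size are arbitrary):

cinder create --volume-type iSCSI --display-name test-vol01 1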

10. Volumes can now be managed from the web UI.

III. Cinder Storage Node (NFS)

1. Create an NFS share (details not covered here; see the sketch below):
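A minimal sketch of such an export, assuming the share lives at /data/nfs on 172.16.1.210 (the address and path used in /etc/cinder/nfs_shares below):

vim /etc/exports
  1. /data/nfs 172.16.1.0/24(rw,no_root_squash)

exportfs -r
/etc/init.d/nfs start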

2. Edit the Cinder configuration file:

vim /etc/cinder/cinder.conf

  1. volume_backend_name=Nfs-Storage

  2. nfs_shares_config=/etc/cinder/nfs_shares

  3. nfs_mount_point_base=$state_path/mnt

  4. volume_driver=cinder.volume.drivers.nfs.NfsDriver

vim /etc/cinder/nfs_shares

  1. 172.16.1.210:/data/nfs

  2. (the NFS server address and export path)

3. Start the Cinder volume service:

/etc/init.d/openstack-cinder-volume start

4. Create a Cinder NFS volume type:

  1. cinder type-create NFS

  2. cinder type-key NFS set volume_backend_name=Nfs-Storage

5. Volumes can now be managed from the web UI.

IV. Cinder Storage Node (GlusterFS)

1. Install the GlusterFS service (on both storage nodes):

  1. wget http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.14/CentOS/glusterfs-epel.repo

  2. mv glusterfs-epel.repo /etc/yum.repos.d/

  3. yum install glusterfs-server

2. Start glusterd on both machines:

/etc/init.d/glusterd start

3. Create a GlusterFS volume:

  1. mkdir /data/glusterfs/exp1  (run on both machines)

  2. gluster peer probe 172.16.1.211  (add the peer node; run on 172.16.1.210)

  3. gluster volume create cinder-volume01 replica 2 172.16.1.210:/data/glusterfs/exp1 172.16.1.211:/data/glusterfs/exp1 force  (create the cinder-volume01 volume)

  4. gluster vol start cinder-volume01  (start the volume)

  5. gluster vol info  (check the volume's status)

4. Edit the Cinder configuration files:

vim /etc/cinder/glusterfs_shares   (does not exist by default; create it)

172.16.1.210:/cinder-volume01

vim /etc/cinder/cinder.conf

  1. volume_backend_name=GLS-Storage

  2. glusterfs_shares_config=/etc/cinder/glusterfs_shares

  3. volume_driver=cinder.volume.drivers.glusterfs.GlusterfsDriver

5. Create a Cinder GlusterFS volume type (see the sketch below):
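A sketch following the same pattern as the iSCSI and NFS types above (the type name GlusterFS is arbitrary; GLS-Storage matches the backend name set in cinder.conf):

  1. cinder type-create GlusterFS
  2. cinder type-key GlusterFS set volume_backend_name=GLS-Storage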

6. Volumes can now be created from the web UI.

####################################################################################

I. Load Balancing Service: LBaaS

1. Enable the LBaaS options in the dashboard (in /etc/openstack-dashboard/local_settings):

  1. 'enable_lb': True,

  2. 'enable_firewall': True,

  3. 'enable_quotas': True,

  4. 'enable_vpn': True,

  5. Change these values from False to True (mind the capitalization).

2. Restart the dashboard service:

/etc/init.d/httpd restart

3. Install the HAProxy service:

yum install haproxy

4. Edit the Neutron LBaaS agent configuration file:

vim /etc/neutron/lbaas_agent.ini

  1. interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver

  2. device_driver = neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver

  3. user_group = haproxy

5. Install network namespace support:

ip netns list   (if this command runs without errors, namespaces are already supported and no upgrade is needed)

yum update iproute

6. Modify the LBaaS agent init script:

vim /etc/init.d/neutron-lbaas-agent

  1. configs=(

  2. "/etc/neutron/neutron.conf" \

  3. "/etc/neutron/lbaas_agent.ini" \

  4. )

7. Start the LBaaS agent:

/etc/init.d/neutron-lbaas-agent start

8. Load balancers can now be added from the web UI (or from the CLI, as sketched below).
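For reference, the same can be done with the LBaaS v1 CLI (a sketch; the pool and VIP names, the member address, and the subnet ID placeholder are hypothetical and must match your own flat_net subnet):

neutron lb-pool-create --name web-pool --lb-method ROUND_ROBIN --protocol HTTP --subnet-id <flat_net-subnet-id>
neutron lb-member-create --address 172.16.1.101 --protocol-port 80 web-pool
neutron lb-vip-create --name web-vip --protocol-port 80 --protocol HTTP --subnet-id <flat_net-subnet-id> web-pool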