OpenStack
Main service components
This lab runs inside virtual machines with two nodes, controller and compute1. Each node is configured with two IP addresses: one subnet can reach the Internet, the other is used for management. Only a single NIC is used here, and only the core components are installed: keystone, glance, nova, and neutron.
Password name Description
Database password (no variable used) Root password for the database
ADMIN_PASS Password of user admin
CEILOMETER_DBPASS Database password for the Telemetry service
CEILOMETER_PASS Password of Telemetry service user ceilometer
CINDER_DBPASS Database password for the Block Storage service
CINDER_PASS Password of Block Storage service user cinder
DASH_DBPASS Database password for the dashboard
DEMO_PASS Password of user demo
GLANCE_DBPASS Database password for Image service
GLANCE_PASS Password of Image service user glance
HEAT_DBPASS Database password for the Orchestration service
HEAT_DOMAIN_PASS Password of Orchestration domain
HEAT_PASS Password of Orchestration service user heat
KEYSTONE_DBPASS Database password of Identity service
NEUTRON_DBPASS Database password for the Networking service
NEUTRON_PASS Password of Networking service user neutron
NOVA_DBPASS Database password for Compute service
NOVA_PASS Password of Compute service user nova
RABBIT_PASS Password of user guest of RabbitMQ
SWIFT_PASS Password of Object Storage service user swift
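Before starting, it helps to generate a strong random value for each of these passwords instead of reusing the placeholder names. A minimal sketch using openssl (available on a stock CentOS 7 install); the variable list below just mirrors the table above:

```shell
# Print a random 20-character hex password for each service.
# Substitute these values into the configuration files as you go.
for VAR in ADMIN_PASS KEYSTONE_DBPASS GLANCE_DBPASS GLANCE_PASS \
           NOVA_DBPASS NOVA_PASS NEUTRON_DBPASS NEUTRON_PASS RABBIT_PASS; do
    printf '%s=%s\n' "$VAR" "$(openssl rand -hex 10)"
done
```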
Disable the network management tool
Disable the firewall and SELinux (both nodes)
[root@controller ~]# systemctl stop iptables
Failed to stop iptables.service: Unit iptables.service not loaded.
[root@controller ~]# systemctl stop firewalld
Failed to stop firewalld.service: Unit firewalld.service not loaded.
[root@controller ~]# setenforce 0
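Note that `setenforce 0` only lasts until the next reboot. To keep SELinux permissive permanently, also change the mode in /etc/selinux/config; a sketch (guarded so it is a no-op on systems without that file):

```shell
# Make the SELinux change persistent across reboots by switching
# SELINUX=enforcing to SELINUX=permissive in the config file.
CONF=/etc/selinux/config
if [ -f "$CONF" ]; then
    sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' "$CONF"
    grep '^SELINUX=' "$CONF" || true
fi
```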
Add IP addresses (controller)
[root@controller ~]# vim /etc/sysconfig/network-scripts/ifcfg-eno16777736
# Modify/add the following lines
BOOTPROTO=none
ONBOOT=yes
#### First IP address, for external service ####
IPADDR0=192.168.100.130
NETMASK0=255.255.255.0
GATEWAY0=192.168.100.2
#### Second IP address, for management ####
IPADDR1=10.0.0.11
NETMASK1=255.255.255.0
Add IP addresses (compute1)
[root@compute1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eno16777736
# Modify/add the following lines
BOOTPROTO=none
ONBOOT=yes
#### First IP address, for external service ####
IPADDR0=192.168.100.131
NETMASK0=255.255.255.0
GATEWAY0=192.168.100.2
#### Second IP address, for management ####
IPADDR1=10.0.0.31
NETMASK1=255.255.255.0
Configure /etc/hosts (both nodes)
[root@controller ~]# echo "10.0.0.11 controller" >> /etc/hosts
[root@controller ~]# echo "10.0.0.31 compute1" >> /etc/hosts
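The `echo >>` lines above append a duplicate entry every time they are re-run. An idempotent variant (`add_host` is a hypothetical helper written for this sketch, not part of any OpenStack tooling):

```shell
# Append each mapping only if it is not already present, so the
# snippet is safe to re-run on both nodes. HOSTS can be pointed
# at a scratch file for testing.
HOSTS=${HOSTS:-/etc/hosts}
add_host() {
    grep -qE "^$1[[:space:]]" "$HOSTS" || printf '%s %s\n' "$1" "$2" >> "$HOSTS"
}
if [ -w "$HOSTS" ]; then
    add_host 10.0.0.11 controller
    add_host 10.0.0.31 compute1
fi
```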
On CentOS 7 the time service has changed to chrony.
Server side (controller)
[root@controller ~]# yum install -y chrony
[root@controller ~]# vim /etc/chrony.conf
# Add the following lines
allow 192.168.100.0/24
allow 10.0.0.0/24
[root@controller ~]# systemctl enable chronyd.service
[root@controller ~]# systemctl start chronyd.service
Client side (compute1)
[root@compute1 ~]# yum install -y chrony
[root@compute1 ~]# vim /etc/chrony.conf
# Comment out the default NTP servers
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
# Add the following line
server controller iburst
[root@compute1 ~]# systemctl enable chronyd.service
[root@compute1 ~]# systemctl start chronyd.service
Verify
[root@controller ~]# chronyc sources
210 Number of sources = 4
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^? 202.118.1.130 0 8 0 10y +0ns[ +0ns] +/- 0ns
^? news.neu.edu.cn 0 8 0 10y +0ns[ +0ns] +/- 0ns
^? dns1.synet.edu.cn 0 8 0 10y +0ns[ +0ns] +/- 0ns
^* time5.aliyun.com 2 6 377 16 -71us[ -243us] +/- 24ms
[root@compute1 ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* controller 3 6 17 42 +1644ns[ -117us] +/- 26ms
First, disable the system's original EPEL repository. Then install the repositories provided for OpenStack:
# yum install -y centos-release-openstack-liberty
# yum install -y https://rdoproject.org/repos/openstack-liberty/rdo-release-liberty.rpm
Install python-openstackclient
# yum install -y python-openstackclient
(Optional) If yum runs into problems during installation, switch to the Aliyun mirrors
[root@controller yum.repos.d]# vim CentOS-OpenStack-liberty.repo
[centos-openstack-liberty]
name=CentOS-7 - OpenStack liberty
baseurl=http://mirrors.aliyun.com/centos/7/cloud/$basearch/openstack-liberty/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Cloud
[centos-openstack-liberty-test]
name=CentOS-7 - OpenStack liberty Testing
baseurl=http://buildlogs.centos.org/centos/7/cloud/$basearch/openstack-liberty/
gpgcheck=0
enabled=0
[root@controller yum.repos.d]# cat rdo-release.repo
[openstack-liberty]
name=OpenStack Liberty Repository
baseurl=http://mirrors.aliyun.com/centos/7/cloud/x86_64/openstack-liberty/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Cloud
Upgrade the system
# yum upgrade
If SELinux was not disabled earlier, install the openstack-selinux package, which manages SELinux for OpenStack automatically.
# yum install -y openstack-selinux
Most OpenStack services use an SQL database to store their data; MySQL and PostgreSQL are both supported, and the usual practice is to install the database on the controller node. Here the database root password is set to rootroot.
[root@controller ~]# yum install mariadb mariadb-server MySQL-python
[root@controller ~]# vim /etc/my.cnf.d/mariadb_openstack.cnf
[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
[root@controller ~]# systemctl enable mariadb.service
Created symlink from /etc/systemd/system/multi-user.target.wants/mariadb.service to /usr/lib/systemd/system/mariadb.service.
[root@controller ~]# systemctl start mariadb.service
A NoSQL database is only needed when the Telemetry service is deployed; it is not deployed here.
OpenStack uses a message queue to coordinate operations and state information between services; the usual practice is to install it on the controller node. OpenStack supports several message queue services, including RabbitMQ, Qpid, and ZeroMQ, but most OpenStack service modules only support one particular message queue service. RabbitMQ is chosen here because every OpenStack module supports it.
# yum install rabbitmq-server
# systemctl enable rabbitmq-server.service
# systemctl start rabbitmq-server.service
Add the openstack user:
# rabbitmqctl add_user openstack RABBIT_PASS
Creating user "openstack" ...
Grant the openstack user configure, write, and read permissions (the three ".*" arguments below are the configure, write, and read patterns, in that order):
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
Keystone is installed on the controller node. To improve performance, Apache handles the web requests and memcached stores the token information.
Create a database for keystone
$ mysql -u root -p
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
Install the packages
# yum install openstack-keystone httpd mod_wsgi \
memcached python-memcached
# systemctl enable memcached.service
# systemctl start memcached.service
Configure keystone
Note: the default configuration can differ between keystone versions.
# vi /etc/keystone/keystone.conf
[DEFAULT]
admin_token = ADMIN_TOKEN
verbose = True
[database]
connection = mysql://keystone:KEYSTONE_DBPASS@controller/keystone
[memcache]
servers = localhost:11211
[token]
provider = uuid
driver = memcache
[revoke]
driver = sql
Initialize the database
# su -s /bin/sh -c "keystone-manage db_sync" keystone
Configure the HTTP server
Set the server name
# vim /etc/httpd/conf/httpd.conf
ServerName controller
Add the keystone virtual hosts
# vim /etc/httpd/conf.d/wsgi-keystone.conf
Listen 5000
Listen 35357
<VirtualHost *:5000>
WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-public
WSGIScriptAlias / /usr/bin/keystone-wsgi-public
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
<IfVersion >= 2.4>
ErrorLogFormat "%{cu}t %M"
</IfVersion>
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
</VirtualHost>
<VirtualHost *:35357>
WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-admin
WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
<IfVersion >= 2.4>
ErrorLogFormat "%{cu}t %M"
</IfVersion>
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
</VirtualHost>
Start the HTTP server
# systemctl enable httpd.service
# systemctl start httpd.service
Verify
[root@controller ~]# ss -ntl | grep -E "5000|35357"
LISTEN 0 128 :::35357 :::*
LISTEN 0 128 :::5000 :::*
Configure the environment variables
[root@controller ~]# vim admin.rc
export OS_TOKEN=ADMIN_TOKEN
export OS_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
[root@controller ~]# source admin.rc
Register the service
[root@controller ~]# openstack service create \
--name keystone --description "OpenStack Identity" identity
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Identity |
| enabled | True |
| id | c9c8ca22d2a54c9fa1f3c77e1af7037d |
| name | keystone |
| type | identity |
+-------------+----------------------------------+
Register the API endpoints
OpenStack provides three kinds of API endpoint for every service: admin, public, and internal.
[root@controller ~]# openstack endpoint create --region RegionOne \
identity public http://controller:5000/v2.0
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 866c0c3f786c4f8c8c34d47c00ef2851 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | c9c8ca22d2a54c9fa1f3c77e1af7037d |
| service_name | keystone |
| service_type | identity |
| url | http://controller:5000/v2.0 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
identity internal http://controller:5000/v2.0
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | e6893e1876ac4eca9e0de360c8ee71cc |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | c9c8ca22d2a54c9fa1f3c77e1af7037d |
| service_name | keystone |
| service_type | identity |
| url | http://controller:5000/v2.0 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
identity admin http://controller:35357/v2.0
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | b16d1443007b4d0cb126a354ec70c0f5 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | c9c8ca22d2a54c9fa1f3c77e1af7037d |
| service_name | keystone |
| service_type | identity |
| url | http://controller:35357/v2.0 |
+--------------+----------------------------------+
Create the admin project
[root@controller ~]# openstack project create --domain default \
--description "Admin Project" admin
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Admin Project |
| domain_id | default |
| enabled | True |
| id | 2e7ff30adaa74c1eacbfb6568e76a70c |
| is_domain | False |
| name | admin |
| parent_id | None |
+-------------+----------------------------------+
Create the admin user
The password entered here is openstack
[root@controller ~]# openstack user create --domain default \
--password-prompt admin
User Password:
Repeat User Password:
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 28927d1a28b34c09bf13413907a57b76 |
| name | admin |
+-----------+----------------------------------+
Create the admin role
[root@controller ~]# openstack role create admin
+-------+----------------------------------+
| Field | Value |
+-------+----------------------------------+
| id | 62d1f3434ed746d580d771de68c0e459 |
| name | admin |
+-------+----------------------------------+
Link the admin project, the admin role, and the admin user together:
[root@controller ~]# openstack role add --project admin --user admin admin
Create the service project
[root@controller ~]# openstack project create --domain default \
--description "Service Project" service
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Service Project |
| domain_id | default |
| enabled | True |
| id | a794f7f642264b718023387f306f8d04 |
| is_domain | False |
| name | service |
| parent_id | None |
+-------------+----------------------------------+
Create the demo project
[root@controller ~]# openstack project create --domain default \
--description "Demo Project" demo
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Demo Project |
| domain_id | default |
| enabled | True |
| id | 9dad05492aa2462d83cbcb60fa4234c5 |
| is_domain | False |
| name | demo |
| parent_id | None |
+-------------+----------------------------------+
Create the demo user
The password here is also set to openstack
[root@controller ~]# openstack user create --domain default \
--password-prompt demo
User Password:
Repeat User Password:
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 1fa35f1441b64169a9c82176f6ca3b43 |
| name | demo |
+-----------+----------------------------------+
Create the user role
[root@controller ~]# openstack role create user
+-------+----------------------------------+
| Field | Value |
+-------+----------------------------------+
| id | d41c2832666240aca1a020cb4753baf2 |
| name | user |
+-------+----------------------------------+
Link the demo project, the user role, and the demo user together:
[root@controller ~]# openstack role add --project demo --user demo user
For security reasons, adjust the configuration: edit /usr/share/keystone/keystone-dist-paste.ini and remove admin_token_auth from the [pipeline:public_api], [pipeline:admin_api], and [pipeline:api_v3] sections.
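This manual edit can also be scripted; a sketch with sed that strips the filter name from every pipeline line and keeps a .bak backup (guarded so it is a no-op when the file is absent):

```shell
# Remove the admin_token_auth filter from the paste pipelines.
PASTE=/usr/share/keystone/keystone-dist-paste.ini
if [ -f "$PASTE" ]; then
    sed -i.bak 's/ admin_token_auth//' "$PASTE"
fi
```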
To verify, temporarily unset the environment variables
[root@controller ~]# unset OS_TOKEN OS_URL
Request a token as the admin user
[root@controller ~]# openstack --os-auth-url http://controller:35357/v3 \
--os-project-domain-id default --os-user-domain-id default \
--os-project-name admin --os-username admin --os-auth-type password \
token issue
Password:
+------------+----------------------------------+
| Field | Value |
+------------+----------------------------------+
| expires | 2016-08-29T01:48:02.212203Z |
| id | ebe202fdc5c84af180e89f1a9f67f7af |
| project_id | 2e7ff30adaa74c1eacbfb6568e76a70c |
| user_id | 28927d1a28b34c09bf13413907a57b76 |
+------------+----------------------------------+
As the verification in the previous section shows, openstack commands take many parameters, and typing all of them for every command would be tedious. The fix is environment variables: once the relevant variables are set, the corresponding parameters can be omitted from the command line.
Create the environment script for the admin user
[root@controller ~]# vim admin-openrc.sh
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
Create the environment script for the demo user
[root@controller ~]# vim demo-openrc.sh
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=demo
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=openstack
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
Use the scripts
[root@controller ~]# source admin-openrc.sh
[root@controller ~]# openstack token issue
+------------+----------------------------------+
| Field | Value |
+------------+----------------------------------+
| expires | 2016-08-29T02:02:19.471758Z |
| id | 91890941cacd4e849197aeaff18d13fe |
| project_id | 2e7ff30adaa74c1eacbfb6568e76a70c |
| user_id | 28927d1a28b34c09bf13413907a57b76 |
+------------+----------------------------------+
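After sourcing one of these scripts, the loaded credentials can be inspected at any time; everything the openstack client reads from the environment starts with the OS_ prefix:

```shell
# Show every OpenStack-related variable in the current environment.
# Prints nothing if no credentials file has been sourced yet.
env | grep '^OS_' | sort
```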
Glance provides users with discovery, registration, and retrieval of virtual machine images. By default, images are stored under the /var/lib/glance/images/ directory.
Prepare the database
[root@controller ~]# mysql -u root -p
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
Use admin privileges
[root@controller ~]# source admin-openrc.sh
Create the glance user
The password is set to openstack
[root@controller ~]# openstack user create --domain default --password-prompt glance
User Password:
Repeat User Password:
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | addd4b136dd848709a8510a7a51880c4 |
| name | glance |
+-----------+----------------------------------+
Link the service project, the admin role, and the glance user together:
[root@controller ~]# openstack role add --project service --user glance admin
Register a service named image
[root@controller ~]# openstack service create --name glance \
--description "OpenStack Image service" image
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Image service |
| enabled | True |
| id | fbe45ebea79b49ef8dcf7ddb126e2da9 |
| name | glance |
| type | image |
+-------------+----------------------------------+
Register the API endpoints
[root@controller ~]# openstack endpoint create --region RegionOne \
image public http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 8f3c456c8a6b4697a79b649637438682 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | fbe45ebea79b49ef8dcf7ddb126e2da9 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
image internal http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | df2c31ab52c9447da812040c4feff203 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | fbe45ebea79b49ef8dcf7ddb126e2da9 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
image admin http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | da36d20ff8964595b874112921d72eff |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | fbe45ebea79b49ef8dcf7ddb126e2da9 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
Install the packages
[root@controller ~]# yum install openstack-glance python-glance python-glanceclient
Edit /etc/glance/glance-api.conf
[DEFAULT]
notification_driver = noop
verbose = True
[database]
connection = mysql://glance:GLANCE_DBPASS@controller/glance
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = openstack
[paste_deploy]
flavor = keystone
[glance_store]
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
Edit /etc/glance/glance-registry.conf
[DEFAULT]
notification_driver = noop
verbose = True
[database]
connection = mysql://glance:GLANCE_DBPASS@controller/glance
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = openstack
[paste_deploy]
flavor = keystone
Populate the database
[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance
No handlers could be found for logger "oslo_config.cfg"
This message can be safely ignored
# systemctl enable openstack-glance-api.service \
openstack-glance-registry.service
# systemctl start openstack-glance-api.service \
openstack-glance-registry.service
We use a very small system image to verify that glance has been deployed successfully.
Update the environment variable scripts
# echo "export OS_IMAGE_API_VERSION=2" \
| tee -a admin-openrc.sh demo-openrc.sh
Use admin privileges
# source admin-openrc.sh
Download the image
# wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
If the image above is unreachable, this one can be used instead:
https://launchpadlibrarian.net/83305348/cirros-0.3.0-x86_64-disk.img
Upload the image to glance
[root@controller ~]# glance image-create --name "cirros" \
--file cirros-0.3.4-x86_64-disk.img \
--disk-format qcow2 --container-format bare \
--visibility public --progress
[=============================>] 100%
+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| checksum | b085f55ed9f8dde416520d901b23ac4d |
| container_format | bare |
| created_at | 2016-08-29T02:45:11Z |
| disk_format | qcow2 |
| id | 2569f637-f41c-4747-8f53-fa6a687840c7 |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| owner | 2e7ff30adaa74c1eacbfb6568e76a70c |
| protected | False |
| size | 16384 |
| status | active |
| tags | [] |
| updated_at | 2016-08-29T02:45:12Z |
| virtual_size | None |
| visibility | public |
+------------------+--------------------------------------+
List the uploaded images
[root@controller ~]# glance image-list
+--------------------------------------+--------+
| ID | Name |
+--------------------------------------+--------+
| 2569f637-f41c-4747-8f53-fa6a687840c7 | cirros |
+--------------------------------------+--------+
This part covers deploying nova on the controller node.
Create the database
[root@controller ~]# mysql -u root -p
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
Use admin privileges
[root@controller ~]# source admin-openrc.sh
Create the nova user
The password here is set to openstack
[root@controller ~]# openstack user create --domain default --password-prompt nova
User Password:
Repeat User Password:
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | d51a6e818f4746028b67bbe8d04c2436 |
| name | nova |
+-----------+----------------------------------+
Assign the admin role
[root@controller ~]# openstack role add --project service --user nova admin
Create the service
[root@controller ~]# openstack service create --name nova \
--description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Compute |
| enabled | True |
| id | 982dd382eacc434cafb2f18626e40e47 |
| name | nova |
| type | compute |
+-------------+----------------------------------+
Register the API endpoints
[root@controller ~]# openstack endpoint create --region RegionOne \
compute public http://controller:8774/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | cc57d697e613465980060e3fa3499908 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 982dd382eacc434cafb2f18626e40e47 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
compute internal http://controller:8774/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | 04f6ad7243e74f86b315a95e210a9c96 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 982dd382eacc434cafb2f18626e40e47 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
compute admin http://controller:8774/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | ebced03a21d94259801222520503033c |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 982dd382eacc434cafb2f18626e40e47 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
Install and configure the components
[root@controller ~]# yum install openstack-nova-api openstack-nova-cert \
openstack-nova-conductor openstack-nova-console \
openstack-nova-novncproxy openstack-nova-scheduler \
python-novaclient
Edit /etc/nova/nova.conf
[DEFAULT]
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.0.0.11
enabled_apis=osapi_compute,metadata
verbose = True
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[database]
connection = mysql://nova:NOVA_DBPASS@controller/nova
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = openstack #NOVA_PASS
[vnc]
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
[glance]
host = controller
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
Populate the database
[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova
No handlers could be found for logger "oslo_config.cfg"
Finish the installation
# systemctl enable openstack-nova-api.service \
openstack-nova-cert.service openstack-nova-consoleauth.service \
openstack-nova-scheduler.service openstack-nova-conductor.service \
openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service \
openstack-nova-cert.service openstack-nova-consoleauth.service \
openstack-nova-scheduler.service openstack-nova-conductor.service \
openstack-nova-novncproxy.service
This part covers deploying nova on the compute node (compute1).
Install
[root@compute1 ~]# yum install openstack-nova-compute sysfsutils
Edit /etc/nova/nova.conf
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.0.0.31
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
verbose = True
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = openstack
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
[glance]
host = controller
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
Check whether the server supports hardware virtualization
[root@compute1 ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
1
If the number shown is 0, hardware virtualization is not supported; either enable it on the host, or fall back to qemu as follows:
[root@compute1 ~]# vim /etc/nova/nova.conf
[libvirt]
virt_type = qemu
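The choice can also be made automatically: fall back to qemu only when the CPU exposes no vmx/svm flags. The openstack-config tool (from the openstack-utils package) is assumed here for editing the ini section; editing [libvirt] by hand as above works just as well:

```shell
# If no hardware virtualization flags are present, switch nova to qemu.
# openstack-config is an assumption (openstack-utils package); the
# edit is skipped when the tool is not installed.
if [ "$(grep -cE '(vmx|svm)' /proc/cpuinfo)" -eq 0 ] \
   && command -v openstack-config >/dev/null; then
    openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
fi
```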
Finish the installation
[root@compute1 ~]# systemctl enable libvirtd.service openstack-nova-compute.service
[root@compute1 ~]# systemctl start libvirtd.service openstack-nova-compute.service
Verify
Fetch the environment variable scripts
[root@compute1 ~]# scp controller:~/*openrc.sh .
root@controller's password:
admin-openrc.sh 100% 289 0.3KB/s 00:00
demo-openrc.sh 100% 285 0.3KB/s 00:00
Load the environment variables
[root@compute1 ~]# source admin-openrc.sh
List the nova service components
[root@compute1 ~]# nova service-list
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| 1 | nova-conductor | controller | internal | enabled | up | 2016-08-29T06:29:36.000000 | - |
| 2 | nova-consoleauth | controller | internal | enabled | up | 2016-08-29T06:29:36.000000 | - |
| 3 | nova-cert | controller | internal | enabled | up | 2016-08-29T06:29:37.000000 | - |
| 4 | nova-scheduler | controller | internal | enabled | up | 2016-08-29T06:29:37.000000 | - |
| 5 | nova-compute | compute1 | nova | enabled | up | 2016-08-29T06:29:41.000000 | - |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
List the API endpoints (WARNING-level messages can be ignored)
[root@compute1 ~]# nova endpoints
WARNING: nova has no endpoint in ! Available endpoints for this service:
+-----------+------------------------------------------------------------+
| nova | Value |
+-----------+------------------------------------------------------------+
| id | 04f6ad7243e74f86b315a95e210a9c96 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| url | http://controller:8774/v2/2e7ff30adaa74c1eacbfb6568e76a70c |
+-----------+------------------------------------------------------------+
+-----------+------------------------------------------------------------+
| nova | Value |
+-----------+------------------------------------------------------------+
| id | cc57d697e613465980060e3fa3499908 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| url | http://controller:8774/v2/2e7ff30adaa74c1eacbfb6568e76a70c |
+-----------+------------------------------------------------------------+
+-----------+------------------------------------------------------------+
| nova | Value |
+-----------+------------------------------------------------------------+
| id | ebced03a21d94259801222520503033c |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| url | http://controller:8774/v2/2e7ff30adaa74c1eacbfb6568e76a70c |
+-----------+------------------------------------------------------------+
WARNING: keystone has no endpoint in ! Available endpoints for this service:
+-----------+----------------------------------+
| keystone | Value |
+-----------+----------------------------------+
| id | 866c0c3f786c4f8c8c34d47c00ef2851 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| url | http://controller:5000/v2.0 |
+-----------+----------------------------------+
+-----------+----------------------------------+
| keystone | Value |
+-----------+----------------------------------+
| id | b16d1443007b4d0cb126a354ec70c0f5 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| url | http://controller:35357/v2.0 |
+-----------+----------------------------------+
+-----------+----------------------------------+
| keystone | Value |
+-----------+----------------------------------+
| id | e6893e1876ac4eca9e0de360c8ee71cc |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| url | http://controller:5000/v2.0 |
+-----------+----------------------------------+
WARNING: glance has no endpoint in ! Available endpoints for this service:
+-----------+----------------------------------+
| glance | Value |
+-----------+----------------------------------+
| id | 8f3c456c8a6b4697a79b649637438682 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| url | http://controller:9292 |
+-----------+----------------------------------+
+-----------+----------------------------------+
| glance | Value |
+-----------+----------------------------------+
| id | da36d20ff8964595b874112921d72eff |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| url | http://controller:9292 |
+-----------+----------------------------------+
+-----------+----------------------------------+
| glance | Value |
+-----------+----------------------------------+
| id | df2c31ab52c9447da812040c4feff203 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| url | http://controller:9292 |
+-----------+----------------------------------+
List the images
[root@compute1 ~]# nova image-list
+--------------------------------------+--------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+--------+--------+--------+
| 2569f637-f41c-4747-8f53-fa6a687840c7 | cirros | ACTIVE | |
+--------------------------------------+--------+--------+--------+
Create the database on the controller node
# mysql -u root -p
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
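The statements above can also be staged non-interactively. A sketch (the mysql client is assumed; `NEUTRON_DBPASS` is left as the placeholder used throughout this guide) that writes the SQL to a file first so it can be reviewed before execution:

```shell
# Stage the SQL in a file for review; NEUTRON_DBPASS is a placeholder.
cat > /tmp/neutron-db.sql <<'EOF'
CREATE DATABASE IF NOT EXISTS neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
EOF
# Then feed it to the server (prompts for the database root password):
# mysql -u root -p < /tmp/neutron-db.sql
```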
Source the admin credentials
[root@controller ~]# source admin-openrc.sh
Create the service credentials
Create the neutron user (here the password is set to openstack)
[root@controller ~]# openstack user create --domain default --password-prompt neutron
User Password:
Repeat User Password:
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 26c2d483816443c69ebd68a2a0f7661f |
| name | neutron |
+-----------+----------------------------------+
Add the admin role to the neutron user
[root@controller ~]# openstack role add --project service --user neutron admin
Register the neutron service
[root@controller ~]# openstack service create --name neutron \
--description "OpenStack Networking" network
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Networking |
| enabled | True |
| id | 7fe502d073214afda58b7b250ae9e962 |
| name | neutron |
| type | network |
+-------------+----------------------------------+
Create the API endpoints
[root@controller ~]# openstack endpoint create --region RegionOne \
network public http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | b3c540fa5b54492cba3dd1f79b3bd51c |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 7fe502d073214afda58b7b250ae9e962 |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
network internal http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | fa2a872165b54ce3ad453a733dbafb7b |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 7fe502d073214afda58b7b250ae9e962 |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
network admin http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 64b2d00beda044099dfa90bc0feac9b5 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 7fe502d073214afda58b7b250ae9e962 |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
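Since the three endpoint creates above differ only in the interface name, they can be generated in a loop. This sketch only prints the commands; drop the `echo` (or pipe the output to `sh`) to run them for real, with `admin-openrc.sh` sourced:

```shell
# Print the three "openstack endpoint create" commands for one service/URL pair.
gen_endpoint_cmds() {
  local service="$1" url="$2" iface
  for iface in public internal admin; do
    echo "openstack endpoint create --region RegionOne $service $iface $url"
  done
}
gen_endpoint_cmds network http://controller:9696
```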
Configure networking
Depending on the deployment scenario, there are two network configuration options:
1: Provider networks  # the simplest option; the installation steps below are based on this type
2: Self-service networks
Install the required components
[root@controller ~]# yum install openstack-neutron openstack-neutron-ml2 \
openstack-neutron-linuxbridge python-neutronclient ebtables ipset
Configure the neutron server component
The server component configuration covers the database, authentication, the message queue, topology change notifications, and the plug-in
Edit /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins =
rpc_backend = rabbit
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller:8774/v2
verbose = True
[database]
connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = openstack
[nova]
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = nova
password = openstack
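With this many options spread across sections, it is easy to miss one. A small hypothetical helper can grep each expected `key = value` line out of a file after editing (it ignores INI section boundaries, so it is only a quick sanity check):

```shell
# check_opts FILE "key = value"...: report any expected option lines missing from FILE.
check_opts() {
  local conf="$1" opt missing=0
  shift
  for opt in "$@"; do
    grep -qF "$opt" "$conf" || { echo "MISSING: $opt"; missing=1; }
  done
  return $missing
}
# Example against the file edited above:
# check_opts /etc/neutron/neutron.conf "core_plugin = ml2" "auth_strategy = keystone"
```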
Configure the ML2 plug-in
Edit /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
# Note: once ML2 is enabled, removing values from type_drivers can corrupt the database
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
[ml2_type_flat]
flat_networks = public
[securitygroup]
enable_ipset = True
Configure the Linux bridge agent
Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = public:eno16777736
[vxlan]
enable_vxlan = False
[agent]
prevent_arp_spoofing = True
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Configure the DHCP agent
Edit /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
verbose = True
Configure the metadata agent
Edit /etc/neutron/metadata_agent.ini
[DEFAULT]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_region = RegionOne
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = openstack
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET
verbose = True
Configure nova so that compute nodes can use the network
Edit /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = neutron
password = openstack
service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET
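METADATA_SECRET is a placeholder; the same value must appear both in metadata_agent.ini and in the [neutron] section of nova.conf. One common way to generate a suitable random value, assuming openssl is available:

```shell
# Generate a 20-character hex secret to substitute for METADATA_SECRET in both files.
METADATA_SECRET=$(openssl rand -hex 10)
echo "$METADATA_SECRET"
```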
Finalize the installation
Create the plug-in symbolic link
[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Populate the database
[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Restart the nova-api service
[root@controller ~]# systemctl restart openstack-nova-api.service
Start the services and enable them at boot
[root@controller ~]# systemctl enable neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
[root@controller ~]# systemctl start neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
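After starting the services, it is worth confirming that they are all active. A sketch of a helper that relies only on `systemctl is-active`:

```shell
# check_units UNIT...: print the state of each unit; return nonzero if any is not active.
check_units() {
  local u failed=0
  for u in "$@"; do
    if systemctl is-active --quiet "$u" 2>/dev/null; then
      echo "active: $u"
    else
      echo "NOT active: $u"
      failed=1
    fi
  done
  return $failed
}
# Example:
# check_units neutron-server neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent
```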
Install the components on the compute node
# yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset
Configure the common components
The common networking configuration covers authentication, the message queue, and the plug-in
Edit /etc/neutron/neutron.conf
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
verbose = True
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[database]
# Comment out every option in this section; the compute node does not access the database directly
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = openstack
Configure networking options
As on the controller node, there are two configuration options depending on the network type; the compute node must match the controller:
1: Provider networks  # our choice
2: Self-service networks
Configure the Linux bridge agent
Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = public:eno16777736
[vxlan]
enable_vxlan = False
[agent]
prevent_arp_spoofing = True
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Configure the compute node to use the network
Edit /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = neutron
password = openstack
Finalize the installation
Restart the compute service
[root@compute1 ~]# systemctl restart openstack-nova-compute.service
Start the Linux bridge agent and enable it at boot
[root@compute1 ~]# systemctl enable neutron-linuxbridge-agent.service
[root@compute1 ~]# systemctl start neutron-linuxbridge-agent.service
Run the following verification commands on the controller node
[root@controller ~]# source admin-openrc.sh
[root@controller ~]# neutron ext-list
+-----------------------+--------------------------+
| alias | name |
+-----------------------+--------------------------+
| flavors | Neutron Service Flavors |
| security-group | security-group |
| dns-integration | DNS Integration |
| net-mtu | Network MTU |
| port-security | Port Security |
| binding | Port Binding |
| provider | Provider Network |
| agent | agent |
| quotas | Quota management support |
| subnet_allocation | Subnet Allocation |
| dhcp_agent_scheduler | DHCP Agent Scheduler |
| rbac-policies | RBAC Policies |
| external-net | Neutron external network |
| multi-provider | Multi Provider Network |
| allowed-address-pairs | Allowed Address Pairs |
| extra_dhcp_opt | Neutron Extra DHCP opts |
+-----------------------+--------------------------+
[root@controller ~]# neutron agent-list
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| id | agent_type | host | alive | admin_state_up | binary |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| 14707d27-e2ff-4444-9653-3082877e3e6e | DHCP agent | controller | :-) | True | neutron-dhcp-agent |
| 6b3da4d5-d162-4756-8b01-a61000401140 | Linux bridge agent | compute1 | :-) | True | neutron-linuxbridge-agent |
| 80b833cd-4733-4da7-8f6b-09a00408a0e2 | Metadata agent | controller | :-) | True | neutron-metadata-agent |
| 83489ddc-36a1-46b9-94dc-afd2e36694be | Linux bridge agent | controller | :-) | True | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
The agent list above should show four agents: three on the controller node and one on compute1.
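To check agent health without eyeballing the table, you can count the `:-)` markers in the output. A quick sketch that assumes the table format shown above:

```shell
# count_alive: count alive-agent rows (marked ":-)") in "neutron agent-list" output.
count_alive() { grep -c ':-)'; }
# Example: neutron agent-list | count_alive   # expect 4 in this deployment
```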
The dashboard is a web-based management interface for OpenStack that interacts with the services through their APIs. It is usually installed on the controller node.
Install the package
[root@controller ~]# yum install openstack-dashboard
Edit /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '127.0.0.1:11211',
}
}
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
"identity": 3,
"volume": 2,
}
OPENSTACK_NEUTRON_NETWORK = {
...
'enable_router': False,
'enable_quotas': False,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_lb': False,
'enable_firewall': False,
'enable_vpn': False,
'enable_fip_topology_check': False,
}
# Time zone setting
TIME_ZONE = "UTC"
Finalize the installation
[root@controller ~]# systemctl enable httpd.service memcached.service
[root@controller ~]# systemctl restart httpd.service memcached.service
Verify
Open http://controller/dashboard in a browser and log in with:
Domain: default
User: admin or demo
9.1.1 Source the admin credentials
[root@controller ~]# source admin-openrc.sh
9.1.2 Create a shared network
[root@controller ~]# neutron net-create public --shared --provider:physical_network public \
--provider:network_type flat
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| id | 5be95c56-3ce8-4f97-84dd-283652e7c995 |
| mtu | 0 |
| name | public |
| port_security_enabled | True |
| provider:network_type | flat |
| provider:physical_network | public |
| provider:segmentation_id | |
| router:external | False |
| shared | True |
| status | ACTIVE |
| subnets | |
| tenant_id | 2e7ff30adaa74c1eacbfb6568e76a70c |
+---------------------------+--------------------------------------+
--shared allows all projects to use this network
9.1.3 Create a subnet
[root@controller ~]# neutron subnet-create public 192.168.100.0/24 --name public \
--allocation-pool start=192.168.100.50,end=192.168.100.99 \
--dns-nameserver 211.162.66.66 --gateway 192.168.100.2
Created a new subnet:
+-------------------+------------------------------------------------------+
| Field | Value |
+-------------------+------------------------------------------------------+
| allocation_pools | {"start": "192.168.100.50", "end": "192.168.100.99"} |
| cidr | 192.168.100.0/24 |
| dns_nameservers | 211.162.66.66 |
| enable_dhcp | True |
| gateway_ip | 192.168.100.2 |
| host_routes | |
| id | 14982f32-9365-4931-ad55-be6b82361ae4 |
| ip_version | 4 |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | public |
| network_id | 5be95c56-3ce8-4f97-84dd-283652e7c995 |
| subnetpool_id | |
| tenant_id | 2e7ff30adaa74c1eacbfb6568e76a70c |
+-------------------+------------------------------------------------------+
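The allocation pool must lie inside the subnet's CIDR, or neutron will reject it. A small sketch for double-checking IPv4 addresses against a CIDR before running subnet-create (pure shell arithmetic, no external tools):

```shell
# ip2int: convert a dotted-quad IPv4 address to an integer.
ip2int() { local IFS=.; set -- $1; echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 )); }

# in_cidr CIDR ADDR: succeed if ADDR falls inside CIDR.
in_cidr() {
  local net="${1%/*}" bits="${1#*/}" addr="$2"
  local mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  test $(( $(ip2int "$addr") & mask )) -eq $(( $(ip2int "$net") & mask ))
}

in_cidr 192.168.100.0/24 192.168.100.50 && echo "pool start is inside the CIDR"
in_cidr 192.168.100.0/24 192.168.100.99 && echo "pool end is inside the CIDR"
```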
9.2.1 Source the demo credentials
[root@controller ~]# source demo-openrc.sh
9.2.2 Generate a key pair
If you already have a key pair, you can skip regenerating one with ssh-keygen
[root@controller ~]# ssh-keygen -q -N ""
Enter file in which to save the key (/root/.ssh/id_rsa):
[root@controller ~]# nova keypair-add --pub-key ~/.ssh/id_rsa.pub mykey
9.2.3 List the available key pairs
[root@controller ~]# nova keypair-list
+-------+-------------------------------------------------+
| Name | Fingerprint |
+-------+-------------------------------------------------+
| mykey | f1:78:45:e2:a8:70:dc:cd:53:c1:69:04:26:df:97:96 |
+-------+-------------------------------------------------+
By default, the default security group applies to all instances, and its firewall rules deny all remote access. Typically we allow at least ICMP and SSH traffic.
9.3.1 Permit ICMP
[root@controller ~]# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp | -1 | -1 | 0.0.0.0/0 | |
+-------------+-----------+---------+-----------+--------------+
9.3.2 Permit SSH
[root@controller ~]# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp | 22 | 22 | 0.0.0.0/0 | |
+-------------+-----------+---------+-----------+--------------+
9.4.1 Determine the components and parameters required to launch an instance
Source the demo credentials
[root@controller ~]# source demo-openrc.sh
List the predefined instance flavors
[root@controller ~]# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
| 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
| 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
| 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
| 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
Here we will choose m1.tiny
List the available images
[root@controller ~]# nova image-list
+--------------------------------------+--------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+--------+--------+--------+
| 2569f637-f41c-4747-8f53-fa6a687840c7 | cirros | ACTIVE | |
+--------------------------------------+--------+--------+--------+
List the available networks
[root@controller ~]# neutron net-list
+--------------------------------------+--------+-------------------------------------------------------+
| id | name | subnets |
+--------------------------------------+--------+-------------------------------------------------------+
| 5be95c56-3ce8-4f97-84dd-283652e7c995 | public | 14982f32-9365-4931-ad55-be6b82361ae4 192.168.100.0/24 |
+--------------------------------------+--------+-------------------------------------------------------+
This is the public network we created earlier; note that the network must be referenced by its id, not its name.
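Rather than copying the UUID by hand, the id column can be extracted from the table. This sketch assumes the table layout printed by neutron net-list above:

```shell
# net_id_of NAME: read a "neutron net-list" table on stdin and print the id of network NAME.
net_id_of() { awk -v n="$1" '$4 == n { print $2 }'; }
# Example:
# NET_ID=$(neutron net-list | net_id_of public)
# nova boot --flavor m1.tiny --image cirros --nic net-id="$NET_ID" ...
```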
9.4.2 Launch the instance
[root@controller ~]# nova boot --flavor m1.tiny --image cirros --nic net-id=5be95c56-3ce8-4f97-84dd-283652e7c995 \
--security-group default --key-name mykey public-instance
+--------------------------------------+-----------------------------------------------+
| Property | Value |
+--------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | Lps33tUD5Rwz |
| config_drive | |
| created | 2016-08-30T05:03:21Z |
| flavor | m1.tiny (1) |
| hostId | |
| id | faad8a82-acf6-454a-80ea-366ce42986c4 |
| image | cirros (2569f637-f41c-4747-8f53-fa6a687840c7) |
| key_name | mykey |
| metadata | {} |
| name | public-instance |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| security_groups | default |
| status | BUILD |
| tenant_id | 9dad05492aa2462d83cbcb60fa4234c5 |
| updated | 2016-08-30T05:03:21Z |
| user_id | 1fa35f1441b64169a9c82176f6ca3b43 |
+--------------------------------------+-----------------------------------------------+
The 5be95c56-3ce8-4f97-84dd-283652e7c995 in the command is the network ID obtained earlier from neutron net-list
9.4.3 Check the instance status
[root@controller ~]# nova list
+--------------------------------------+-----------------+--------+------------+-------------+-----------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-----------------+--------+------------+-------------+-----------------------+
| faad8a82-acf6-454a-80ea-366ce42986c4 | public-instance | ACTIVE | - | Running | public=192.168.100.51 |
+--------------------------------------+-----------------+--------+------------+-------------+-----------------------+
Once the instance finishes building, its status changes from BUILD to ACTIVE
9.5.1 Access the console via VNC in a browser
[root@controller ~]# nova get-vnc-console public-instance novnc
+-------+---------------------------------------------------------------------------------+
| Type | Url |
+-------+---------------------------------------------------------------------------------+
| novnc | http://controller:6080/vnc_auto.html?token=54f74d29-43bd-4928-96a7-28ae19617ae2 |
+-------+---------------------------------------------------------------------------------+
If the host running the browser cannot resolve controller, replace it with the controller's IP address
9.5.2 SSH login test
[root@controller ~]# ssh cirros@192.168.100.52
The authenticity of host '192.168.100.52 (192.168.100.52)' can't be established.
RSA key fingerprint is 43:5c:b8:69:46:d4:70:ef:7e:79:8b:b2:4a:11:02:e6.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.100.52' (RSA) to the list of known hosts.
cirros@192.168.100.52's password:
$
$ ls
$ pwd
/home/cirros
$ cat /etc/issue
login as 'cirros' user. default password: 'cubswin:)'. use 'sudo' for root.
$
The default user is cirros, with password cubswin:)
===================
1. Official image downloads: http://docs.openstack.org/image-guide/obtain-images.html
2. Snapshots: taking a snapshot shuts the instance down, then generates an image from the snapshot; the instance is not started again automatically afterward.
3. Soft vs. hard reboot: a soft reboot asks the operating system to restart normally; a hard reboot cuts power and then boots the system again.