OpenStack Deployment (Part 2)

Check the service status

Check that the compute node and controller node services are in the up state.

Check that the network agents report a state of True. Every compute node here also acts as a neutron client (agent).
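
A rough sketch of the commands typically used for these checks, assuming the admin credentials have already been sourced:

# nova services on the controller and compute nodes should show State = up
openstack compute service list
# neutron agents on every node should show Alive = True
openstack network agent list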

Look through each page of the dashboard.

Modify quotas

Clicking Launch on an image creates a cloud instance from that image; the image is already assigned to the instance, so you do not need to pick a boot source yourself.

Create multiple instances at once

Create and use a user

Successfully log in with the member user that was just created.

Under Identity, the member user only sees the Projects menu; it has no menu access to Users, Groups, or Roles. Instance permissions do not seem to be reduced, though.

It can also only view the projects it belongs to.

Only the user ID and request ID are shown; the account name is not displayed. The member account successfully creates the instance below.

The log here seems to show the same messages as when we created and entered a VM directly with the kvm commands earlier.

Disable a user

A session that is already logged in can still browse, but after logging out and trying to log back in, it reports invalid credentials and can no longer log in.
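
A minimal sketch of doing this from the CLI; the user name is just a placeholder:

# disable the member user so that new logins are rejected
openstack user set --disable <member_user_name>
# re-enable it later if needed
openstack user set --enable <member_user_name>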

Add an IP address pool

View the existing IP address pool

The IP allocation pool is shown below, along with the gateway, DNS, CIDR, and other information.

Under the admin Networks page, open the subnet list and edit the subnet.

As shown below, a new IP allocation range has been added.

Let's connect to the database.
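
A sketch of the connection, assuming the MariaDB root account configured during deployment:

# connect to the neutron database on the controller
mysql -u root -p neutron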

MariaDB [neutron]> show tables;
+-----------------------------------------+
| Tables_in_neutron |
+-----------------------------------------+
| address_scopes |
| agents |
| alembic_version |
| allowedaddresspairs |
| arista_provisioned_nets |
| arista_provisioned_tenants |
| arista_provisioned_vms |
| auto_allocated_topologies |
| bgp_peers |
| bgp_speaker_dragent_bindings |
| bgp_speaker_network_bindings |
| bgp_speaker_peer_bindings |
| bgp_speakers |
| brocadenetworks |
| brocadeports |
| cisco_csr_identifier_map |
| cisco_hosting_devices |
| cisco_ml2_apic_contracts |
| cisco_ml2_apic_host_links |
| cisco_ml2_apic_names |
| cisco_ml2_n1kv_network_bindings |
| cisco_ml2_n1kv_network_profiles |
| cisco_ml2_n1kv_policy_profiles |
| cisco_ml2_n1kv_port_bindings |
| cisco_ml2_n1kv_profile_bindings |
| cisco_ml2_n1kv_vlan_allocations |
| cisco_ml2_n1kv_vxlan_allocations |
| cisco_ml2_nexus_nve |
| cisco_ml2_nexusport_bindings |
| cisco_port_mappings |
| cisco_router_mappings |
| consistencyhashes |
| default_security_group |
| dnsnameservers |
| dvr_host_macs |
| externalnetworks |
| extradhcpopts |
| firewall_policies |
| firewall_rules |
| firewalls |
| flavors |
| flavorserviceprofilebindings |
| floatingipdnses |
| floatingips |
| ha_router_agent_port_bindings |
| ha_router_networks |
| ha_router_vrid_allocations |
| healthmonitors |
| ikepolicies |
| ipallocationpools |
| ipallocations |
| ipamallocationpools |
| ipamallocations |
| ipamsubnets |
| ipsec_site_connections |
| ipsecpeercidrs |
| ipsecpolicies |
| lsn |
| lsn_port |
| maclearningstates |
| members |
| meteringlabelrules |
| meteringlabels |
| ml2_brocadenetworks |
| ml2_brocadeports |
| ml2_distributed_port_bindings |
| ml2_flat_allocations |
| ml2_geneve_allocations |
| ml2_geneve_endpoints |
| ml2_gre_allocations |
| ml2_gre_endpoints |
| ml2_nexus_vxlan_allocations |
| ml2_nexus_vxlan_mcast_groups |
| ml2_port_binding_levels |
| ml2_port_bindings |
| ml2_ucsm_port_profiles |
| ml2_vlan_allocations |
| ml2_vxlan_allocations |
| ml2_vxlan_endpoints |
| multi_provider_networks |
| networkconnections |
| networkdhcpagentbindings |
| networkdnsdomains |
| networkgatewaydevicereferences |
| networkgatewaydevices |
| networkgateways |
| networkqueuemappings |
| networkrbacs |
| networks |
| networksecuritybindings |
| networksegments |
| neutron_nsx_network_mappings |
| neutron_nsx_port_mappings |
| neutron_nsx_router_mappings |
| neutron_nsx_security_group_mappings |
| nexthops |
| nsxv_edge_dhcp_static_bindings |
| nsxv_edge_vnic_bindings |
| nsxv_firewall_rule_bindings |
| nsxv_internal_edges |
| nsxv_internal_networks |
| nsxv_port_index_mappings |
| nsxv_port_vnic_mappings |
| nsxv_router_bindings |
| nsxv_router_ext_attributes |
| nsxv_rule_mappings |
| nsxv_security_group_section_mappings |
| nsxv_spoofguard_policy_network_mappings |
| nsxv_tz_network_bindings |
| nsxv_vdr_dhcp_bindings |
| nuage_net_partition_router_mapping |
| nuage_net_partitions |
| nuage_provider_net_bindings |
| nuage_subnet_l2dom_mapping |
| poolloadbalanceragentbindings |
| poolmonitorassociations |
| pools |
| poolstatisticss |
| portbindingports |
| portdnses |
| portqueuemappings |
| ports |
| portsecuritybindings |
| providerresourceassociations |
| provisioningblocks |
| qos_bandwidth_limit_rules |
| qos_dscp_marking_rules |
| qos_minimum_bandwidth_rules |
| qos_network_policy_bindings |
| qos_policies |
| qos_port_policy_bindings |
| qospolicyrbacs |
| qosqueues |
| quotas |
| quotausages |
| reservations |
| resourcedeltas |
| router_extra_attributes |
| routerl3agentbindings |
| routerports |
| routerroutes |
| routerrules |
| routers |
| securitygroupportbindings |
| securitygrouprules |
| securitygroups |
| segmenthostmappings |
| serviceprofiles |
| sessionpersistences |
| standardattributes |
| subnet_service_types |
| subnetpoolprefixes |
| subnetpools |
| subnetroutes |
| subnets |
| subports |
| tags |
| trunks |
| tz_network_bindings |
| vcns_router_bindings |
| vips |
| vpnservices |
+-----------------------------------------+
162 rows in set (0.00 sec)

MariaDB [neutron]>

The neutron tables

We can look at the IP allocation pool tables.

Check which IPs are currently in use.

What we see in the database matches the IP addresses shown at the bottom of the page. The .100 IP is not displayed there, but it has already been allocated. When the mcw-test1 instance that failed to create is deleted, we will see whether that IP gets released.
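
A hedged example of the queries involved; the table names come from the listing above, while the column names are from memory and may differ slightly between releases:

-- the configured allocation ranges of each subnet
select * from ipallocationpools;
-- the IPs currently allocated to ports
select ip_address, port_id, subnet_id from ipallocations;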

Database queries for deleting an instance and releasing its IP

After deleting the instance, let's check whether the IP is released back to the pool; note that even a running instance can be deleted here.

Looking again, the .111 IP is no longer in the allocated-IP table, so it should now be available to newly created instances.

Prepare a new node

Change the hostname and IP.

Log in and check the environment.

Deploy the new node

Reference: https://www.cnblogs.com/machangwei-8/p/17368098.html

Copy the self-built yum repository from compute node 1 to the new node, compute node 2.

In the nova configuration, my_ip must be changed to this node's own IP, here 10.0.0.43; the rest follows the earlier steps.

Because the compute node's configuration refers to the controller by hostname, we need to add name resolution for that hostname (the controller itself does not need it). Without the resolution record, starting the nova compute service just hangs, presumably because it cannot find the host.
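
For example, an /etc/hosts entry along these lines (10.0.0.41 is the controller IP in this environment; adjust as needed):

# on compute2, make the controller hostname resolvable
echo "10.0.0.41 controller" >> /etc/hosts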

Install the nova compute service, modify the configuration, and start it.

Checking from the controller, compute node 2 is already up.

Install the client (the neutron agent), modify the configuration, and start it.

We can see that compute node 2 has been added.

On the dashboard, under Hypervisors, compute node 2 is now visible.

Let's first look at the default image storage location on the controller: images live in the path below, and the file name is the image ID.

As shown below, the image ID.

Now let's see where instance disks are stored on the compute node; it is the location below.

This node has four instances, and the directory above contains four corresponding directories.

Each of those directories is named after the instance's UUID.
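
A quick way to confirm this on the compute node (the path is the one used throughout this article):

# each sub-directory under the instances path is named after an instance UUID
ls /var/lib/nova/instances/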

We pick the instance below.

Then look at the files under it; in my case there are only three.

Other people's deployments show an extra config file there; mine does not have it.

console.log is the console output recorded while the VM boots.

We can see that our instance's disk is 1 GB.

But on the compute node, this instance's disk file shows as only about 3 MB.

We can see that it is a qcow image file with a backing file, i.e. a base image. COW is the copy-on-write mechanism. That is why the VM's disk file is so small: the instances share a base image, and only the blocks that differ are copied into their own disk files, which saves space.

We can view the disk details: name, format, virtual size 1G, actual size 3M, and the backing file. The qcow2-format disk is small and also supports encryption; the raw format below has neither encryption nor compression.
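
Those details come from qemu-img; a sketch of the commands, with placeholders for the IDs:

# format, virtual size, disk size and backing file of the instance disk
qemu-img info /var/lib/nova/instances/<instance_id>/disk
# compare with the base image cached under the _base directory
qemu-img info /var/lib/nova/instances/_base/<base_image_file>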

The disk.info file records the location and format of the disk used by this VM.

Let's look at the configuration file below.

We can see that nova-related key/value pairs have been added to its metadata.

Now let's look at the network:

Below there is a bridge NIC bridged onto ens33. In effect, all the VMs are plugged into one switch.

Next, let's see how mykey gets placed inside the VM.

From the compute node, we can see that we cannot log in to it directly.

From the controller node, we can connect to this instance, and without a password.

How is mykey used inside the instance so that the controller can log in to the cloud instance without a password? This is where metadata comes in. Metadata gives users a way to configure a cloud instance when it is created.

The principle is that when the instance boots, it can query the metadata service and save the results inside the instance.
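
From inside an instance this is just an HTTP request; a minimal sketch using the EC2-style paths that appear in the console log and commands below:

# list the available metadata items
curl http://169.254.169.254/2009-04-04/meta-data/
# fetch the public key that ends up in authorized_keys
curl http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key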

Let's look at the console log of the mcw-test2 instance on compute node 1. We can see it checking an address, and further down the request succeeds and an instance ID is obtained.

[root@compute1 ~]# cat /var/lib/nova/instances/d36c277b-91e9-456f-8a2d-29deb7399fb1/console.log
[ 0.000000] Initializing cgroup subsys cpuset
[ 0.000000] Initializing cgroup subsys cpu
[ 0.000000] Linux version 3.2.0-80-virtual (buildd@batsu) (gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #116-Ubuntu SMP Mon Mar 23 17:28:52 UTC 2015 (Ubuntu 3.2.0-80.116-virtual 3.2.68)
[ 0.000000] Command line: LABEL=cirros-rootfs ro console=tty1 console=ttyS0
[ 0.000000] KERNEL supported cpus:
[ 0.000000] Intel GenuineIntel
[ 0.000000] AMD AuthenticAMD
[ 0.000000] Centaur CentaurHauls
[ 0.000000] BIOS-provided physical RAM map:
[ 0.000000] BIOS-e820: 0000000000000000 - 000000000009fc00 (usable)
[ 0.000000] BIOS-e820: 000000000009fc00 - 00000000000a0000 (reserved)
[ 0.000000] BIOS-e820: 00000000000f0000 - 0000000000100000 (reserved)
[ 0.000000] BIOS-e820: 0000000000100000 - 0000000003fdc000 (usable)
[ 0.000000] BIOS-e820: 0000000003fdc000 - 0000000004000000 (reserved)
[ 0.000000] BIOS-e820: 00000000fffc0000 - 0000000100000000 (reserved)
[ 0.000000] NX (Execute Disable) protection: active
[ 0.000000] SMBIOS 2.8 present.
[ 0.000000] No AGP bridge found
[ 0.000000] last_pfn = 0x3fdc max_arch_pfn = 0x400000000
[ 0.000000] x86 PAT enabled: cpu 0, old 0x7040600070406, new 0x7010600070106
[ 0.000000] found SMP MP-table at [ffff8800000f63a0] f63a0
[ 0.000000] init_memory_mapping: 0000000000000000-0000000003fdc000
[ 0.000000] RAMDISK: 03c6d000 - 03fcc000
[ 0.000000] ACPI: RSDP 00000000000f6170 00014 (v00 BOCHS )
[ 0.000000] ACPI: RSDT 0000000003fe14f7 0002C (v01 BOCHS BXPCRSDT 00000001 BXPC 00000001)
[ 0.000000] ACPI: FACP 0000000003fe140b 00074 (v01 BOCHS BXPCFACP 00000001 BXPC 00000001)
[ 0.000000] ACPI: DSDT 0000000003fe0040 013CB (v01 BOCHS BXPCDSDT 00000001 BXPC 00000001)
[ 0.000000] ACPI: FACS 0000000003fe0000 00040
[ 0.000000] ACPI: APIC 0000000003fe147f 00078 (v01 BOCHS BXPCAPIC 00000001 BXPC 00000001)
[ 0.000000] No NUMA configuration found
[ 0.000000] Faking a node at 0000000000000000-0000000003fdc000
[ 0.000000] Initmem setup node 0 0000000000000000-0000000003fdc000
[ 0.000000] NODE_DATA [0000000003fd4000 - 0000000003fd8fff]
[ 0.000000] Zone PFN ranges:
[ 0.000000] DMA 0x00000010 -> 0x00001000
[ 0.000000] DMA32 0x00001000 -> 0x00100000
[ 0.000000] Normal empty
[ 0.000000] Movable zone start PFN for each node
[ 0.000000] early_node_map[2] active PFN ranges
[ 0.000000] 0: 0x00000010 -> 0x0000009f
[ 0.000000] 0: 0x00000100 -> 0x00003fdc
[ 0.000000] ACPI: PM-Timer IO Port: 0x608
[ 0.000000] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
[ 0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
[ 0.000000] ACPI: IOAPIC (id[0x00] address[0xfec00000] gsi_base[0])
[ 0.000000] IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
[ 0.000000] Using ACPI (MADT) for SMP configuration information
[ 0.000000] SMP: Allowing 1 CPUs, 0 hotplug CPUs
[ 0.000000] PM: Registered nosave memory: 000000000009f000 - 00000000000a0000
[ 0.000000] PM: Registered nosave memory: 00000000000a0000 - 00000000000f0000
[ 0.000000] PM: Registered nosave memory: 00000000000f0000 - 0000000000100000
[ 0.000000] Allocating PCI resources starting at 4000000 (gap: 4000000:fbfc0000)
[ 0.000000] Booting paravirtualized kernel on bare hardware
[ 0.000000] setup_percpu: NR_CPUS:64 nr_cpumask_bits:64 nr_cpu_ids:1 nr_node_ids:1
[ 0.000000] PERCPU: Embedded 27 pages/cpu @ffff880003a00000 s78848 r8192 d23552 u2097152
[ 0.000000] Built 1 zonelists in Node order, mobility grouping on. Total pages: 15974
[ 0.000000] Policy zone: DMA32
[ 0.000000] Kernel command line: LABEL=cirros-rootfs ro console=tty1 console=ttyS0
[ 0.000000] PID hash table entries: 256 (order: -1, 2048 bytes)
[ 0.000000] Checking aperture…
[ 0.000000] No AGP bridge found
[ 0.000000] Memory: 43560k/65392k available (6576k kernel code, 452k absent, 21380k reserved, 6620k data, 928k init)
[ 0.000000] SLUB: Genslabs=15, HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1
[ 0.000000] Hierarchical RCU implementation.
[ 0.000000] RCU dyntick-idle grace-period acceleration is enabled.
[ 0.000000] NR_IRQS:4352 nr_irqs:256 16
[ 0.000000] Console: colour VGA+ 80x25
[ 0.000000] console [tty1] enabled
[ 0.000000] console [ttyS0] enabled
[ 0.000000] allocated 1048576 bytes of page_cgroup
[ 0.000000] please try 'cgroup_disable=memory' option if you don't want memory cgroups
[ 0.000000] Fast TSC calibration using PIT
[ 0.000000] Detected 2803.189 MHz processor.
[ 0.012870] Calibrating delay loop (skipped), value calculated using timer frequency.. 5606.37 BogoMIPS (lpj=11212756)
[ 0.014705] pid_max: default: 32768 minimum: 301
[ 0.017065] Security Framework initialized
[ 0.030630] AppArmor: AppArmor initialized
[ 0.032052] Yama: becoming mindful.
[ 0.049330] Dentry cache hash table entries: 8192 (order: 4, 65536 bytes)
[ 0.056003] Inode-cache hash table entries: 4096 (order: 3, 32768 bytes)
[ 0.056496] Mount-cache hash table entries: 256
[ 0.062044] Initializing cgroup subsys cpuacct
[ 0.062627] Initializing cgroup subsys memory
[ 0.063925] Initializing cgroup subsys devices
[ 0.064259] Initializing cgroup subsys freezer
[ 0.064504] Initializing cgroup subsys blkio
[ 0.064718] Initializing cgroup subsys perf_event
[ 0.066972] mce: CPU supports 10 MCE banks
[ 0.068003] SMP alternatives: switching to UP code
[ 0.147209] Freeing SMP alternatives: 24k freed
[ 0.148008] ACPI: Core revision 20110623
[ 0.160222] ftrace: allocating 26610 entries in 105 pages
[ 0.172010] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
[ 0.192011] CPU0: AMD QEMU Virtual CPU version 2.5+ stepping 03
[ 0.192011] APIC calibration not consistent with PM-Timer: 672ms instead of 100ms
[ 0.192011] APIC delta adjusted to PM-Timer: 6249988 (42019727)
[ 0.192011] Performance Events: Broken PMU hardware detected, using software events only.
[ 0.192011] NMI watchdog disabled (cpu0): hardware events not enabled
[ 0.192692] Brought up 1 CPUs
[ 0.192978] Total of 1 processors activated (5606.37 BogoMIPS).
[ 0.213541] devtmpfs: initialized
[ 0.216012] EVM: security.selinux
[ 0.216012] EVM: security.SMACK64
[ 0.216012] EVM: security.capability
[ 0.220013] print_constraints: dummy:
[ 0.220013] RTC time: 12:29:32, date: 05/07/23
[ 0.220013] NET: Registered protocol family 16
[ 0.220013] ACPI: bus type pci registered
[ 0.220013] PCI: Using configuration type 1 for base access
[ 0.224013] bio: create slab at 0
[ 0.224013] ACPI: Added _OSI(Module Device)
[ 0.224013] ACPI: Added _OSI(Processor Device)
[ 0.224013] ACPI: Added _OSI(3.0 _SCP Extensions)
[ 0.224013] ACPI: Added _OSI(Processor Aggregator Device)
[ 0.246668] ACPI: Interpreter enabled
[ 0.247632] ACPI: (supports S0 S5)
[ 0.248014] ACPI: Using IOAPIC for interrupt routing
[ 0.280016] ACPI: No dock devices found.
[ 0.280016] HEST: Table not found.
[ 0.280016] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[ 0.280016] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
[ 0.283490] pci_root PNP0A03:00: host bridge window [io 0x0000-0x0cf7]
[ 0.284017] pci_root PNP0A03:00: host bridge window [io 0x0d00-0xffff]
[ 0.284017] pci_root PNP0A03:00: host bridge window [mem 0x000a0000-0x000bffff]
[ 0.284017] pci_root PNP0A03:00: host bridge window [mem 0x04000000-0xfebfffff]
[ 0.284017] pci_root PNP0A03:00: host bridge window [mem 0x100000000-0x17fffffff]
[ 0.288017] pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
[ 0.288017] pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
[ 0.368022] pci0000:00: Unable to request _OSC control (_OSC support mask: 0x1e)
[ 0.372022] ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11)
[ 0.372022] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11)
[ 0.372022] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11)
[ 0.372022] ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11)
[ 0.372022] ACPI: PCI Interrupt Link [LNKS] (IRQs *9)
[ 0.373292] vgaarb: device added: PCI:0000:00:02.0,decodes=io+mem,owns=io+mem,locks=none
[ 0.373779] vgaarb: loaded
[ 0.373932] vgaarb: bridge control possible 0000:00:02.0
[ 0.376010] i2c-core: driver [aat2870] using legacy suspend method
[ 0.376022] i2c-core: driver [aat2870] using legacy resume method
[ 0.376022] SCSI subsystem initialized
[ 0.380023] usbcore: registered new interface driver usbfs
[ 0.380023] usbcore: registered new interface driver hub
[ 0.380023] usbcore: registered new device driver usb
[ 0.380023] PCI: Using ACPI for IRQ routing
[ 0.381603] NetLabel: Initializing
[ 0.381853] NetLabel: domain hash size = 128
[ 0.382038] NetLabel: protocols = UNLABELED CIPSOv4
[ 0.385772] NetLabel: unlabeled traffic allowed by default
[ 0.420025] AppArmor: AppArmor Filesystem Enabled
[ 0.420025] pnp: PnP ACPI init
[ 0.420025] ACPI: bus type pnp registered
[ 0.432026] pnp: PnP ACPI: found 9 devices
[ 0.432026] ACPI: ACPI bus type pnp unregistered
[ 0.447211] Switching to clocksource acpi_pm
[ 0.452932] NET: Registered protocol family 2
[ 0.473118] IP route cache hash table entries: 512 (order: 0, 4096 bytes)
[ 0.478884] TCP established hash table entries: 2048 (order: 3, 32768 bytes)
[ 0.478884] TCP bind hash table entries: 2048 (order: 3, 32768 bytes)
[ 0.478884] TCP: Hash tables configured (established 2048 bind 2048)
[ 0.478884] TCP reno registered
[ 0.478884] UDP hash table entries: 128 (order: 0, 4096 bytes)
[ 0.478884] UDP-Lite hash table entries: 128 (order: 0, 4096 bytes)
[ 0.478884] NET: Registered protocol family 1
[ 0.478884] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
[ 0.478884] pci 0000:00:01.0: PIIX3: Enabling Passive Release
[ 0.478884] pci 0000:00:01.0: Activating ISA DMA hang workarounds
[ 0.478884] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 11
[ 0.478884] pci 0000:00:01.2: PCI INT D -> Link[LNKD] -> GSI 11 (level, high) -> IRQ 11
[ 0.478884] pci 0000:00:01.2: PCI INT D disabled
[ 0.478884] audit: initializing netlink socket (disabled)
[ 0.478884] type=2000 audit(1683462570.472:1): initialized
[ 0.569202] Trying to unpack rootfs image as initramfs…
[ 0.651427] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[ 0.783279] VFS: Disk quotas dquot_6.5.2
[ 0.785139] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[ 0.840429] fuse init (API version 7.17)
[ 0.842209] msgmni has been set to 85
[ 0.945138] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253)
[ 0.945174] io scheduler noop registered
[ 0.945174] io scheduler deadline registered (default)
[ 0.945174] io scheduler cfq registered
[ 0.945174] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
[ 0.945174] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
[ 0.945174] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
[ 0.945174] ACPI: Power Button [PWRF]
[ 0.945174] ERST: Table is not found!
[ 0.945174] GHES: HEST is not enabled!
[ 1.017900] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 10
[ 1.018225] virtio-pci 0000:00:03.0: PCI INT A -> Link[LNKC] -> GSI 10 (level, high) -> IRQ 10
[ 1.020848] virtio-pci 0000:00:04.0: PCI INT A -> Link[LNKD] -> GSI 11 (level, high) -> IRQ 11
[ 1.020848] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 10
[ 1.020848] virtio-pci 0000:00:05.0: PCI INT A -> Link[LNKA] -> GSI 10 (level, high) -> IRQ 10
[ 1.097276] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled
[ 1.100800] serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
[ 1.251928] 00:05: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
[ 1.285849] Linux agpgart interface v0.103
[ 1.316791] brd: module loaded
[ 1.341018] loop: module loaded
[ 1.367229] vda: vda1
[ 1.434025] scsi0 : ata_piix
[ 1.437245] scsi1 : ata_piix
[ 1.437761] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0a0 irq 14
[ 1.438150] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0a8 irq 15
[ 1.442626] Fixed MDIO Bus: probed
[ 1.442935] tun: Universal TUN/TAP device driver, 1.6
[ 1.443099] tun: (C) 1999-2004 Max Krasnyansky maxk@qualcomm.com
[ 1.464252] PPP generic driver version 2.4.2
[ 1.526723] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[ 1.527170] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
[ 1.527472] uhci_hcd: USB Universal Host Controller Interface driver
[ 1.527912] uhci_hcd 0000:00:01.2: PCI INT D -> Link[LNKD] -> GSI 11 (level, high) -> IRQ 11
[ 1.528581] uhci_hcd 0000:00:01.2: UHCI Host Controller
[ 1.528581] uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
[ 1.528581] uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c040
[ 1.535599] hub 1-0:1.0: USB hub found
[ 1.535599] hub 1-0:1.0: 2 ports detected
[ 1.535599] usbcore: registered new interface driver libusual
[ 1.535599] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
[ 1.535599] serio: i8042 KBD port at 0x60,0x64 irq 1
[ 1.535599] serio: i8042 AUX port at 0x60,0x64 irq 12
[ 1.560853] mousedev: PS/2 mouse device common for all mice
[ 1.561961] Refined TSC clocksource calibration: 2803.208 MHz.
[ 1.562693] Switching to clocksource tsc
[ 1.589842] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
[ 1.626561] rtc_cmos 00:01: RTC can wake from S4
[ 1.626561] rtc_cmos 00:01: rtc core: registered rtc_cmos as rtc0
[ 1.626561] rtc0: alarms up to one day, y3k, 114 bytes nvram
[ 1.685670] device-mapper: uevent: version 1.0.3
[ 1.761383] device-mapper: ioctl: 4.22.0-ioctl (2011-10-19) initialised: dm-devel@redhat.com
[ 1.763572] cpuidle: using governor ladder
[ 1.782544] cpuidle: using governor menu
[ 1.782747] EFI Variables Facility v0.08 2004-May-17
[ 1.786370] TCP cubic registered
[ 1.786370] NET: Registered protocol family 10
[ 1.814276] NET: Registered protocol family 17
[ 1.814776] Registering the dns_resolver key type
[ 1.888590] registered taskstats version 1
[ 1.943185] Freeing initrd memory: 3452k freed
[ 2.067922] usb 1-1: new full-speed USB device number 2 using uhci_hcd
[ 2.268360] Magic number: 7:716:480
[ 2.269094] rtc_cmos 00:01: setting system clock to 2023-05-07 12:29:36 UTC (1683462576)
[ 2.269094] powernow-k8: Processor cpuid 6d3 not supported
[ 2.269094] BIOS EDD facility v0.16 2004-Jun-25, 0 devices found
[ 2.269094] EDD information not available.
[ 2.287879] Freeing unused kernel memory: 928k freed
[ 2.287879] Write protecting the kernel read-only data: 12288k
[ 2.362175] Freeing unused kernel memory: 1596k freed
[ 2.390371] Freeing unused kernel memory: 1184k freed

info: initramfs: up at 2.60
NOCHANGE: partition 1 is size 2072385. it cannot be grown
info: initramfs loading root from /dev/vda1
info: /etc/init.d/rc.sysinit: up at 5.68
info: container: none
Starting logging: OK
modprobe: module virtio_blk not found in modules.dep
modprobe: module virtio_net not found in modules.dep
WARN: /etc/rc3.d/S10-load-modules failed
Initializing random number generator… done.
Starting acpid: OK
cirros-ds 'local' up at 9.25
no results found for mode=local. up 9.80. searched: nocloud configdrive ec2
Starting network…
udhcpc (v1.20.1) started
Sending discover…
Sending select for 10.0.0.104…
Lease of 10.0.0.104 obtained, lease time 86400
route: SIOCADDRT: File exists
WARN: failed: route add -net "0.0.0.0/0" gw "10.0.0.254"
cirros-ds 'net' up at 12.09
checking http://169.254.169.254/2009-04-04/instance-id
failed 1/20: up 12.61. request failed
failed 2/20: up 25.33. request failed
failed 3/20: up 37.68. request failed
failed 4/20: up 50.04. request failed
successful after 5/20 tries: up 62.28. iid=i-00000002
failed to get http://169.254.169.254/2009-04-04/user-data
warning: no ec2 metadata for user-data
found datasource (ec2, net)
cirros-apply-net already run per instance
check-version already run per instance
Top of dropbear init script
Starting dropbear sshd: remove-dropbear-host-keys already run per instance
WARN: generating key of type ecdsa failed!
OK
userdata already run per instance
=== system information ===
Platform: RDO OpenStack Compute
Container: none
Arch: x86_64
CPU(s): 1 @ 2803.189 MHz
Cores/Sockets/Threads: 1/1/1
Virt-type: AMD-V
RAM Size: 49MB
Disks:
NAME MAJ:MIN SIZE LABEL MOUNTPOINT
vda 253:0 1073741824
vda1 253:1 1061061120 cirros-rootfs /
=== sshd host keys ===
-----BEGIN SSH HOST KEY KEYS-----
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgwDRjv4xdk+LE/wd3EsR6o9CB9Nl8VUbIQcmOCxyuyw3TUOXQ+wqUXJtBnepqtNEtL+uQLUQGZYT2UZcsRl+FAymMRAQZFPZjZ4hW5ObEZV8vZG3mh7fGg9m08PB3anSKrzigyMF+zDOFl5x2xRIBIUyVpvyOkbaXTuuPfcc+VzVC59v root@mcw-test2
ssh-dss AAAAB3NzaC1kc3MAAACBANQSxThGvfRUc+A7Ozs4Ps32kRqwbz1N64fuP2sMZl9wNAC45nSFSTMWDEX3LTHLb9Ct4UONmZ7xIIGYj+kAjVUC3bbmddlF2ogU7b6bzAJI85Gcred1JTxYh6p6nvkT4+aUv51nEpJhgF+cwwQy/CeVo/7hszfuT3Yv+mWQMk4vAAAAFQDOKd/ZvUWfHHLr0TrCz65+BlheEwAAAIAOnvNYTqPfh2Hwm2vr18BPqOzqBdHP2pcWP8g5qHApWttF9XBzwUiS35ydPxOL8lXMwQxTkjfvS/kBSGog6405dNkzMZ0thV3/xi8XBZ5YXpRgMEUbM/PoqkEyvuN0xT6F0dc2d1HycX0U3VsoxAVDN3XuNHHkIqkoUx59YM77RAAAAIBX76GEOfoRGL+Xb84nDDCgoAaifDX3WTyzynPvpjKCTp8HbjnLfB37799lVQ1C/LZYjvIrVJf+r+QznhQw1IqAqmiYuVhIDX9I1twl9reghpwSAvhRfM2ZoH31mURcMh0uP4O4dGuA71T3PDMhaAMZPzZrAvBpNC11ILHS0m/n8A== root@mcw-test2
-----END SSH HOST KEY KEYS-----
=== network info ===
if-info: lo,up,127.0.0.1,8,::1
if-info: eth0,up,10.0.0.104,24,fe80::f816:3eff:febd:a625
ip-route:default via 10.0.0.254 dev eth0
ip-route:10.0.0.0/24 dev eth0 src 10.0.0.104
ip-route:169.254.169.254 via 10.0.0.100 dev eth0
=== datasource: ec2 net ===
instance-id: i-00000002
name: N/A
availability-zone: nova
local-hostname: mcw-test2.novalocal
launch-index: 0
=== cirros: current=0.3.5 latest=0.6.1 uptime=83.20 ===
____ ____ ____
/ __/ __ ____ ____ / __ \/ __/
/ /__ / // __// __// /_/ /\ \
\___//_//_/ /_/ \____/___/
http://cirros-cloud.net

login as 'cirros' user. default password: 'cubswin:)'. use 'sudo' for root.
mcw-test2 login: [18782.313707] Clocksource tsc unstable (delta = 9373947535 ns)
[44483.430035] hrtimer: interrupt took 14386086 ns

login as 'cirros' user. default password: 'cubswin:)'. use 'sudo' for root.
mcw-test2 login:

login as 'cirros' user. default password: 'cubswin:)'. use 'sudo' for root.
mcw-test2 login: [root@compute1 ~]#

The full console output of mcw-test2

We can see that the controller can connect without a password to the VM on compute node 1 because an authorized_keys file has already been generated inside the cloud instance, containing the controller's public key.

Using the request address from the console log above, we connect (password-free) from the controller to the VM on the compute node and request that address. We can see that some data can be retrieved from inside the instance, including the controller's public key.

Where does this key come from? It comes from mykey, the key pair we uploaded earlier.

The first command should be the one used to generate the controller's key pair non-interactively.

Then we run the keypair create command, pointing it at the controller's public key file and naming the key mykey. This mykey is exposed through the metadata API: when a VM is created with this key specified, the public key file is placed into the metadata service. The VM queries the API, retrieves the public key, and writes it into its own authorized_keys file. In this way the public key of a particular host, here the controller, is handed to the VM, and as long as the network is reachable that host can log in to the VM without a password.
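
A sketch of those two commands (the standard OpenStack CLI sequence; the file path assumes the controller's root account):

# generate the controller's key pair non-interactively (skip if it already exists)
ssh-keygen -q -N ""
# register the public key in OpenStack under the name mykey
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey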

Let's also look at creating an instance from the command line; it too takes a key pair.

openstack server create --flavor <flavor_name> --image <image_name> --nic net-id=<net_id> --security-group <security_group_id> --availability-zone <zone_name>[:<host_name>] --key-name <keypair_name> <instance_name>

The execution is shown below, with mykey specified as the key pair.

On the cloud dashboard's Key Pairs page it also displays the public key, which is our controller node's public key, because the create command took a public-key parameter pointing at the controller's public key file.

Continuing, from a VM on the compute node we request the public key from the metadata service.

Why is this address reachable at all? Even after redeploying from scratch, the IP appears to be fixed, and so is the endpoint: curl http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key

We connect to the mcw-test2 instance again. Looking at its routes, requests to 169.254.169.254 go through the gateway 10.0.0.100, and from inside the instance that gateway is reachable.

Under the network's Ports we can find that gateway, 10.0.0.100, and see that the device attached to this IP is the DHCP port.

On the controller we can see the DHCP process running. But the controller's IP is 10.0.0.41, bridged onto ens33, which is different from the 10.0.0.100 shown above. How can that be?

This is because network namespaces are used here (Kubernetes uses network namespaces as well). On the controller we list the network namespaces, then exec into one by its ID and run commands; there we can see the gateway 10.0.0.100 from above, and the metadata IP 169.254.169.254 that the VM can reach is also present.

When we request the metadata endpoint from inside the VM, something has to be serving it on port 80.

Inside this network namespace we look for the service on port 80 and find it is a python process. Using its PID we can locate the same process both inside the namespace and on the controller itself, and it is indeed the metadata service, listening on port 80.
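
A sketch of the namespace commands referred to here; the DHCP namespace is normally named qdhcp-<network_id>, which is an assumption you should confirm with the list command:

# list the network namespaces on the controller
ip netns list
# show the addresses inside the DHCP namespace: 10.0.0.100 and 169.254.169.254
ip netns exec qdhcp-<network_id> ip addr
# find what is listening on port 80 inside the namespace (the metadata proxy)
ip netns exec qdhcp-<network_id> netstat -lntp | grep ':80'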

So if we want to inject something into a VM when creating it, we can add it to the metadata and have a script inside the VM request it and save it locally.

Later on, requests to the metadata service started returning a 500 error.

Looking at the metadata service's log, we can see the 500 gateway error. You can also simply ps for the process directly on the controller and then locate its log file.

An error shows up:

[root@controller ~]# systemctl status neutron-l3-agent.service
● neutron-l3-agent.service - OpenStack Neutron Layer 3 Agent
Loaded: loaded (/usr/lib/systemd/system/neutron-l3-agent.service; disabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Sat 2023-05-13 00:57:05 CST; 3min 17s ago
Process: 34086 ExecStart=/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file /var/log/neutron/l3-agent.log (code=exited, status=1/FAILURE)
Main PID: 34086 (code=exited, status=1/FAILURE)

May 13 00:57:05 controller systemd[1]: neutron-l3-agent.service: main process exited, code=exited, status=1/FAILURE
May 13 00:57:05 controller systemd[1]: Unit neutron-l3-agent.service entered failed state.
May 13 00:57:05 controller systemd[1]: neutron-l3-agent.service failed.
May 13 00:57:05 controller systemd[1]: neutron-l3-agent.service holdoff time over, scheduling restart.
May 13 00:57:05 controller systemd[1]: start request repeated too quickly for neutron-l3-agent.service
May 13 00:57:05 controller systemd[1]: Failed to start OpenStack Neutron Layer 3 Agent.
May 13 00:57:05 controller systemd[1]: Unit neutron-l3-agent.service entered failed state.
May 13 00:57:05 controller systemd[1]: neutron-l3-agent.service failed.
[root@controller ~]# tail /var/log/neutron/l3-agent.log
2023-05-13 00:56:59.895 34051 ERROR neutron.agent.l3.agent [-] An interface driver must be specified
2023-05-13 00:57:01.595 34064 INFO neutron.common.config [-] Logging enabled!
2023-05-13 00:57:01.595 34064 INFO neutron.common.config [-] /usr/bin/neutron-l3-agent version 10.0.7
2023-05-13 00:57:01.607 34064 ERROR neutron.agent.l3.agent [-] An interface driver must be specified
2023-05-13 00:57:03.334 34075 INFO neutron.common.config [-] Logging enabled!
2023-05-13 00:57:03.335 34075 INFO neutron.common.config [-] /usr/bin/neutron-l3-agent version 10.0.7
2023-05-13 00:57:03.346 34075 ERROR neutron.agent.l3.agent [-] An interface driver must be specified
2023-05-13 00:57:05.330 34086 INFO neutron.common.config [-] Logging enabled!
2023-05-13 00:57:05.331 34086 INFO neutron.common.config [-] /usr/bin/neutron-l3-agent version 10.0.7
2023-05-13 00:57:05.347 34086 ERROR neutron.agent.l3.agent [-] An interface driver must be specified
[root@controller ~]#

Modifying the configuration and restarting did not take effect:

[root@controller ~]# grep interface_driver /etc/neutron/l3_agent.ini
#interface_driver =
interface_driver = linuxbridge
[root@controller ~]#
[root@controller ~]# systemctl start neutron-l3-agent.service

With no other option, we restart the main neutron-server service:

[root@controller ~]# systemctl restart neutron-server.service
[root@controller ~]#

After the restart, metadata requests from inside the VM work again.

From the metadata and other information we can see that the instance got its IP from DHCP. Our VMs should not rely on a dynamically assigned IP; they should use the designated IP.

We can see that eth0 uses a dynamic IP, and we should make it static. How? When the instance is created, a startup script requests the assigned IP from the metadata service, writes that IP into the NIC configuration file, and switches the NIC to a static IP. That way the IP no longer changes when the instance reboots.

Official docs: https://docs.openstack.org/image-guide/

You can download ready-made images, or build one by hand yourself.

Upload the image downloaded from the CentOS website to the compute1 server.

1. First, create a disk to serve as the system disk, set here to 10 GB.

qemu-img create -f qcow2  /root/mcw/centos.qcow2  10G

2. Create the VM

Specify the disk created above and the installation ISO to use.

virt-install --virt-type kvm --name centos --ram 1024 \
--disk /root/mcw/centos.qcow2,format=qcow2 \
--network network=default \
--graphics vnc,listen=0.0.0.0 --noautoconsole \
--os-type=linux --os-variant=centos7.0 \
--location=/root/mcw/CentOS-7-x86_64-DVD-1708.iso

The OpenStack compute nodes do not have the default virbr0 bridge that a KVM installation creates, and we will not create that bridge or point the VM at the compute node's existing bridge here, to avoid breaking the OpenStack environment.

Instead, we power on mcw11, the host where KVM was installed earlier, and build the OpenStack image there.

For the installer language, let's use Chinese this time.

Here we only need a single root partition.

For the NIC, the installer seems to default to eth0; we leave it disabled for now and configure it later.

Adjust the disk layout above; we have no /boot partition here, and the disk selection seems slightly off. Then start the installation and set the root password to 123456.

3. Start the VM

4. Configure it and install common software

The default NIC configuration is as follows.

We change it to start on boot, delete the UUID, IPv6 and similar entries, and keep only the following few lines.
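
Roughly what the trimmed-down file might look like (a sketch; at this point the NIC still uses DHCP, and the init.sh script below later rewrites it to a static address):

# /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=dhcp
NAME=eth0
DEVICE=eth0
ONBOOT=yes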

Stop and disable the service, then modify the DNS configuration.

Then do some standard post-install initialization and tuning.

Let's first look at the init.sh script we will use.

#!/bin/bash

set_key(){
    if [ ! -d /root/.ssh ]; then
        mkdir -p /root/.ssh
        chmod 700 /root/.ssh
    fi
    for ((i=1;i<=5;i++));do
        if [ ! -f /root/.ssh/authorized_keys ];then
            curl -f http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key > /tmp/metadata-key 2>/dev/null
            if [ $? -eq 0 ];then
                cat /tmp/metadata-key >> /root/.ssh/authorized_keys
                chmod 0600 /root/.ssh/authorized_keys
                restorecon /root/.ssh/authorized_keys
                rm -f /tmp/metadata-key
                echo "Successfully retrieved public key from instance metadata"
                echo "*****************"
                echo "AUTHORIZED KEYS"
                echo "*****************"
                cat /root/.ssh/authorized_keys
                echo "*****************"
            fi
        fi
    done
}

set_hostname(){
PRE_HOSTNAME=$(curl -s http://169.254.169.254/latest/meta-data/hostname)
DOMAIN_NAME=$(echo $PRE_HOSTNAME | awk -F '.' '{print $1}')
hostnamectl set-hostname `echo ${DOMAIN_NAME}.example.com`
}

set_static_ip(){
PRE_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
NET_FILE="/etc/sysconfig/network-scripts/ifcfg-eth0"
echo "TYPE=Ethernet" > $NET_FILE
echo "BOOTPROTO=static" >> $NET_FILE
echo "NAME=eth0" >> $NET_FILE
echo "DEVICE=eth0" >> $NET_FILE
echo "ONBOOT=yes" >> $NET_FILE
echo "IPADDR=${PRE_IP}" >> $NET_FILE
echo "NETMASK=255.255.255.0" >> $NET_FILE
echo "GATEWAY=192.168.56.2" >> $NET_FILE
}

main(){
set_key;
set_hostname;
set_static_ip;
/bin/cp /tmp/rc.local /etc/rc.d/rc.local
reboot
}

main

We connect to the mcw-test2 instance and try the request.

Copy the script written on the host machine into the image VM, and remove that cp line.

Then have the boot-time startup file run the script, and give rc.local execute permission.
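
Something along these lines; the script location /tmp/init.sh is only an assumption for illustration, use wherever you actually copied it:

# run the init script at boot and make rc.local executable
echo "/bin/bash /tmp/init.sh" >> /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local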

I also add an extra file inside the VM as a custom marker.

We copy the configured VM image file to the OpenStack controller node.

Once it is on the controller, we upload the image into OpenStack.

openstack image create "CentOS-7-x86_64" --file /opt/centos.qcow2 --disk-format \
qcow2 --container-format bare --public

[root@controller ~]# ls /opt/
centos.qcow2 cirros-0.3.5-x86_64-disk.img repo
[root@controller ~]# cd /opt/
[root@controller opt]#
[root@controller opt]# openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 6cfe6502-36f0-4155-ae4e-a84cb910049a | cirros | active |
+--------------------------------------+--------+--------+
[root@controller opt]# openstack image create "CentOS-7-x86_64" --file /opt/centos.qcow2 --disk-format \

qcow2 --container-format bare --public
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| checksum | 71dbce3ab4fc7aa720fdc5d3aff321ef |
| container_format | bare |
| created_at | 2023-05-13T05:51:48Z |
| disk_format | qcow2 |
| file | /v2/images/8fec0b5d-4953-4323-adbc-ba6815c9c476/file |
| id | 8fec0b5d-4953-4323-adbc-ba6815c9c476 |
| min_disk | 0 |
| min_ram | 0 |
| name | CentOS-7-x86_64 |
| owner | b29c52befb8448378d99086df5053737 |
| protected | False |
| schema | /v2/schemas/image |
| size | 1429209088 |
| status | active |
| tags | |
| updated_at | 2023-05-13T05:52:57Z |
| virtual_size | None |
| visibility | public |
+------------------+------------------------------------------------------+
[root@controller opt]# openstack image list
+--------------------------------------+-----------------+--------+
| ID | Name | Status |
+--------------------------------------+-----------------+--------+
| 8fec0b5d-4953-4323-adbc-ba6815c9c476 | CentOS-7-x86_64 | active |
| 6cfe6502-36f0-4155-ae4e-a84cb910049a | cirros | active |
+--------------------------------------+-----------------+--------+
[root@controller opt]#

The existing flavors have too little memory to boot the image we built. We need the admin user to create a new flavor; a sketch of the command follows.
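
A sketch of creating such a flavor as admin; the name and sizes are assumptions chosen to match the 10 GB disk and 1024 MB RAM used when building the image:

# a flavor large enough to boot the hand-built CentOS image
openstack flavor create --vcpus 1 --ram 1024 --disk 10 m1.centos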

It can be used in the admin project.

We log in as the ordinary user to use it.

The image is visible under the ordinary user.

Error: Instance "newImageTest" failed to perform the requested operation; the instance is in an error state: Please try again later [Error: Host 'compute2' is not mapped to any cell].

[root@controller opt]# nova-manage cell_v2 discover_hosts --verbose
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell 'cell1': ac223cf4-f307-45bb-a9fc-71001b026dde
Found 2 computes in cell: ac223cf4-f307-45bb-a9fc-71001b026dde
Checking host mapping for compute host 'compute1': 093a16a7-e3bd-4734-bcf3-6dfbbdfa599a
Checking host mapping for compute host 'compute2': 395217a4-1605-4455-bea0-de597d50ef41
Creating host mapping for compute host 'compute2': 395217a4-1605-4455-bea0-de597d50ef41
[root@controller opt]# openstack compute service list --service nova-compute
+----+--------------+----------+------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+--------------+----------+------+---------+-------+----------------------------+
| 6 | nova-compute | compute1 | nova | enabled | up | 2023-05-13T06:29:07.000000 |
| 7 | nova-compute | compute2 | nova | enabled | up | 2023-05-13T06:29:11.000000 |
+----+--------------+----------+------+---------+-------+----------------------------+
[root@controller opt]#

Because this is the first time this image is used as a system disk, the compute node downloads the image into its local directory and then converts its format.

Under this instance's directory, the files below are created: using the image downloaded above as the backing image, the instance's own disk file, disk, is created.

The gateway in the init.sh script was not changed.

Fix for the instance hanging at "Booting from Hard Disk" on startup:

Add the following configuration:

[root@compute2 instances]# vim /etc/nova/nova.conf

[libvirt]
cpu_mode = none
virt_type=qemu

Restart the service:

[root@compute2 instances]# systemctl restart openstack-nova-compute.service

The image above does not work; we adjust it and upload a new one.

Error: Instance "imageTest" failed to perform the requested operation; the instance is in an error state: Please try again later [Error: No valid host was found. There are not enough hosts available.].

Creation failed. We will come back later to look at how to free up capacity.

After deleting some instances, creation works again; the cause here was probably the quota/resource limit.

The instance was scheduled onto compute node 1, which is copying the new image (the one we uploaded to OpenStack earlier). The first time a compute node uses an image, instance creation is slow, because the image must first be copied to the compute node and then used as the backing image for the instance.

Next comes the format conversion; we can see the conversion process running.

Something is wrong.

It seems this component needs to be installed, and our image does not have it.

Giving up on this for now.

openstack endpoint create --region RegionOne \
volume public http://192.168.56.11:8776/v1/%\(tenant_id\)s

openstack endpoint create --region RegionOne \
volume internal http://192.168.56.11:8776/v1/%\(tenant_id\)s

openstack endpoint create --region RegionOne \
volume admin http://192.168.56.11:8776/v1/%\(tenant_id\)s

openstack endpoint create --region RegionOne \
volumev2 public http://192.168.56.11:8776/v2/%\(tenant_id\)s

openstack endpoint create --region RegionOne \
volumev2 internal http://192.168.56.11:8776/v2/%\(tenant_id\)s

openstack endpoint create --region RegionOne \
volumev2 admin http://192.168.56.11:8776/v2/%\(tenant_id\)s

1. Use local disks
2. System on local disks + cloud volumes (data disks): iSCSI, NFS, GlusterFS, Ceph
cinder type-create NFS
cinder type-create ISCSI
cinder type-key NFS set volume_backend_name=NFS-Storage
cinder type-key ISCSI set volume_backend_name=ISCSI-Storage

1. Steps for a cinder storage backend

1. Prepare the storage.
2. Install cinder-volume.
3. vim /etc/cinder/cinder.conf
[xxx]
volume_driver=xxx
xxxxx
xxxxx
xxxxx
volume_backend_name=xxx-Storage
Start cinder-volume.
4. Create a type:
cinder type-create xxx
5. Associate the type:
cinder type-key xxx set volume_backend_name=xxx-Storage

[swz@swz ~]$ ifconfig
eth0: flags=4163 mtu 1500
inet 10.26.44.92 netmask 255.255.254.0 broadcast 10.26.45.255
ether 00:16:3e:00:5f:67 txqueuelen 1000 (Ethernet)
RX packets 62348 bytes 21135150 (20.1 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 96817 bytes 7229139 (6.8 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

eth1: flags=4163 mtu 1500
inet 47.90.51.58 netmask 255.255.254.0 broadcast 47.90.51.255
ether 00:16:3e:00:b5:6d txqueuelen 1000 (Ethernet)
RX packets 375570 bytes 44939268 (42.8 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 362047 bytes 24380509 (23.2 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73 mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 0 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 c

[swz@swz ~]$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 47.90.51.247 0.0.0.0 UG 0 0 0 eth1
10.0.0.0 10.26.45.247 255.0.0.0 UG 0 0 0 eth0
10.26.44.0 0.0.0.0 255.255.254.0 U 0 0 0 eth0
47.90.50.0 0.0.0.0 255.255.254.0 U 0 0 0 eth1
100.64.0.0 10.26.45.247 255.192.0.0 UG 0 0 0 eth0
169.254.0.0 0.0.0.0 255.255.0.0 U 1002 0 0 eth0
169.254.0.0 0.0.0.0 255.255.0.0 U 1003 0 0 eth1
172.16.0.0 10.26.45.247 255.240.0.0 UG 0 0 0 eth0

openstack network create --share \
--provider-physical-network internal \
--provider-network-type flat internal

openstack subnet create --network internal \
--allocation-pool start=192.168.57.100,end=192.168.57.250 \
--dns-nameserver 192.168.56.2 --gateway 192.168.57.2 \
--subnet-range 192.168.57.0/24 internal

=======demo
openstack network create selfservice

openstack subnet create --network selfservice \
--dns-nameserver 192.168.56.2 --gateway 172.16.1.1 \
--subnet-range 172.16.1.0/24 selfservice

========admin
neutron net-update public --router:external

=======demo
openstack router create router
neutron router-interface-add router selfservice
neutron router-gateway-set router public

Homework:

  1. Live migration
  2. Use GlusterFS/Ceph as the Cinder backend storage.

Cold migration

  1. Without shared storage: live migration.
  2. With shared storage

Recommendations for small production setups:

Architecture 1: controller + compute nodes (FLAT network, local disks). Pros: simple, efficient, low management overhead; plus high availability.

For testing:
Architecture 2: controller + compute nodes + shared storage (iSCSI, GlusterFS, Ceph). Two variants: the system disk also runs on the shared storage, or only data volumes use it.
Pros: self-service management of VM disks.
Cons: 1. the storage network is a bottleneck (10GbE needed); 2. single point of failure.

Architecture 3: controller + compute nodes + storage nodes + live migration.
All machines mount the storage node.

Large-scale production:
Architecture 4: controller + compute nodes + VLAN. Pros: used by many companies, stable. Cons: the network is inflexible.
Use a layer-3 switch as the gateway; Neutron uses VLAN; plus GlusterFS.

Architecture 5: controller + compute nodes + VXLAN. Pros: flexible, supports floating IPs. Cons: depends on the L3 agent; high availability and performance issues need to be solved.

The controller can be made highly available:
MySQL cluster: PXC
RabbitMQ cluster
Other services: active/standby or clustered.

LinuxBridge vs OpenvSwitch
Old and stable, merged into the kernel early vs. newer, more features, more flexible, QoS.

1. OpenStack upgrades.
2. Running OpenStack as a public cloud:
compute team, network team, storage team, billing team, operations team, security team,
business team (filing/compliance, tickets, xx, xx).

1. Complete the following steps to create the database:

  • Use the database access client to connect to the database server as the root user:

    $ mysql -u root -p
    • Create the cinder database:

      CREATE DATABASE cinder;

    • Grant proper access to the cinder database:

      GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
      IDENTIFIED BY 'CINDER_DBPASS';
      GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
      IDENTIFIED BY 'CINDER_DBPASS';

      Replace CINDER_DBPASS with a suitable password.

    • Exit the database access client.

  1. Source the admin credentials to gain access to admin-only CLI commands:

    $ . admin-openrc

  2. To create the service credentials, complete these steps:

    • Create a cinder user:

      $ openstack user create --domain default --password-prompt cinder
      User Password:123456
      Repeat User Password:
      +-----------+----------------------------------+
      | Field | Value |
      +-----------+----------------------------------+
      | domain_id | e0353a670a9e496da891347c589539e9 |
      | enabled | True |
      | id | bb279f8ffc444637af38811a5e1f0562 |
      | name | cinder |
      +-----------+----------------------------------+

    • Add the admin role to the cinder user:

      $ openstack role add --project service --user cinder admin

  1. Install the packages:

    # yum install openstack-cinder

  2. Edit /etc/cinder/cinder.conf and complete the following actions:

    • In the [database] section, configure database access:

      [database]

      connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

      Replace CINDER_DBPASS with the password you chose for the Block Storage database.

    • In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure RabbitMQ message queue access:

      [DEFAULT]

      rpc_backend = rabbit

      [oslo_messaging_rabbit]

      rabbit_host = controller
      rabbit_userid = openstack
      rabbit_password = RABBIT_PASS

      Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.

    • In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:

      [DEFAULT]

      auth_strategy = keystone

      [keystone_authtoken]

      auth_uri = http://controller:5000
      auth_url = http://controller:35357
      memcached_servers = controller:11211
      auth_type = password
      project_domain_name = default
      user_domain_name = default
      project_name = service
      username = cinder
      password = CINDER_PASS

      Replace CINDER_PASS with the password you chose for the cinder user in the Identity service.

      Note

      Comment out or remove any other options in the [keystone_authtoken] section.

    • In the [DEFAULT] section, configure my_ip to use the management interface IP address of the controller node:

      [DEFAULT]

      my_ip = 10.0.0.11

    • In the [oslo_concurrency] section, configure the lock path:

      [oslo_concurrency]

      lock_path = /var/lib/cinder/tmp

  3. Populate the Block Storage database:

    # su -s /bin/sh -c "cinder-manage db sync" cinder

Configure Compute to use Block Storage

  • Edit the /etc/nova/nova.conf file and add the following to it:

    [cinder]
    os_region_name = RegionOne

Finalize installation

Finally, create the cinder and cinderv2 service entities:

$ openstack service create --name cinder \
--description "OpenStack Block Storage" volume
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | ab3bbbef780845a1a283490d281e7fda |
| name | cinder |
| type | volume |
+-------------+----------------------------------+

$ openstack service create --name cinderv2 \
--description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | eb9fd245bdbc414695952e93f29fe3ac |
| name | cinderv2 |
| type | volumev2 |
+-------------+----------------------------------+

The order here differs a bit from the official guide.

Create the Block Storage service API endpoints:

$ openstack endpoint create --region RegionOne \
volume public http://controller:8776/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | 03fa2c90153546c295bf30ca86b1344b |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ab3bbbef780845a1a283490d281e7fda |
| service_name | cinder |
| service_type | volume |
| url | http://controller:8776/v1/%(tenant_id)s |
+--------------+-----------------------------------------+

$ openstack endpoint create --region RegionOne \
volume internal http://controller:8776/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | 94f684395d1b41068c70e4ecb11364b2 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ab3bbbef780845a1a283490d281e7fda |
| service_name | cinder |
| service_type | volume |
| url | http://controller:8776/v1/%(tenant_id)s |
+--------------+-----------------------------------------+

$ openstack endpoint create --region RegionOne \
volume admin http://controller:8776/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | 4511c28a0f9840c78bacb25f10f62c98 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ab3bbbef780845a1a283490d281e7fda |
| service_name | cinder |
| service_type | volume |
| url | http://controller:8776/v1/%(tenant_id)s |
+--------------+-----------------------------------------+

$ openstack endpoint create --region RegionOne \
volumev2 public http://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | 513e73819e14460fb904163f41ef3759 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | eb9fd245bdbc414695952e93f29fe3ac |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(tenant_id)s |
+--------------+-----------------------------------------+

$ openstack endpoint create --region RegionOne \
volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | 6436a8a23d014cfdb69c586eff146a32 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | eb9fd245bdbc414695952e93f29fe3ac |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(tenant_id)s |
+--------------+-----------------------------------------+

$ openstack endpoint create --region RegionOne \
volumev2 admin http://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | e652cf84dd334f359ae9b045a2c91d96 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | eb9fd245bdbc414695952e93f29fe3ac |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(tenant_id)s |
+--------------+-----------------------------------------+

Note

The Block Storage service requires endpoints for each service entity.

https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/cinder-storage-install.html

I install this on the controller here, using the controller as the cinder storage node.

Install the supporting utility packages:

  • Install the LVM packages:

    # yum install lvm2

  • Start the LVM metadata service and configure it to start when the system boots:

    # systemctl enable lvm2-lvmetad.service

    systemctl start lvm2-lvmetad.service

Shut the node down and add a new disk.

We can see the newly added disk, sdb.

  1. Create the LVM physical volume /dev/sdb:

    # pvcreate /dev/sdb
    Physical volume "/dev/sdb" successfully created

  2. Create the LVM volume group cinder-volumes (the VG name must be exactly this):

    # vgcreate cinder-volumes /dev/sdb
    Volume group "cinder-volumes" successfully created

    The Block Storage service creates logical volumes in this volume group.

[root@controller ~]# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created.
[root@controller ~]# vgcreate cinder-volumes /dev/sdb
Volume group "cinder-volumes" successfully created
[root@controller ~]# vgdisplay
--- Volume group ---
VG Name cinder-volumes
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size <20.00 GiB
PE Size 4.00 MiB
Total PE 5119
Alloc PE / Size 0 / 0
Free PE / Size 5119 / <20.00 GiB
VG UUID AqHQr0-QFEh-t7s6-T7nu-xUFo-POX8-JTuGIQ

--- Volume group ---
VG Name centos
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size <19.00 GiB
PE Size 4.00 MiB
Total PE 4863
Alloc PE / Size 4863 / <19.00 GiB
Free PE / Size 0 / 0
VG UUID Z1pOFF-O1om-NCXk-1obF-dnyN-eoJV-56CCg1

[root@controller ~]#

Only instances can access the Block Storage volume group. However, the underlying operating system manages these devices and associates them with the volumes. By default, the LVM volume-scanning tool scans the /dev directory for block devices that contain volumes. If projects use LVM on their volumes, the scanning tool detects these volumes and tries to cache them, which can cause a variety of problems on both the underlying operating system and the project volumes. You must reconfigure LVM to scan only the devices that contain the cinder-volumes volume group. Edit the /etc/lvm/lvm.conf file and complete the following actions:

  • In the devices section, add a filter that accepts the /dev/sdb device and rejects all other devices:

    devices {

    filter = [ "a/sdb/", "r/.*/"]

    Each item in the filter array begins with a for accept or r for reject, followed by a regular expression for the device name. The array must end with r/.*/ to reject any remaining devices. You can use the vgs -vvvv command to test the filters.

    Warning

    If your storage node uses LVM on the operating system disk, you must also add the associated device to the filter. For example, if the /dev/sda device contains the operating system:

    filter = [ "a/sda/", "a/sdb/", "r/.*/"]
    Similarly, if your compute nodes use LVM on the operating system disk, you must also modify the filter in the /etc/lvm/lvm.conf file on those nodes to include the operating system disk. For example, if the /dev/sda device contains the operating system:

    filter = [ "a/sda/", "r/.*/"]

For the node we are deploying here, we should use the one below:

filter = [ "a/sda/", "a/sdb/", "r/.*/"]

vim /etc/lvm/lvm.conf

Install and configure the components

Finalize installation

  • Start the Block Storage volume service and its dependencies, and configure them to start when the system boots:

    # systemctl enable openstack-cinder-volume.service target.service

    systemctl start openstack-cinder-volume.service target.service

There is no [lvm] section in the config, so we add one manually, roughly as sketched below.
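
A sketch of what gets added to /etc/cinder/cinder.conf on the controller; it matches the [lvm] section shown later when volume_backend_name is added:

[DEFAULT]
# enable the lvm backend defined below
enabled_backends = lvm

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm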

Check the service status:

[root@controller ~]# openstack volume service list
+------------------+----------------+------+---------+-------+----------------------------+
| Binary | Host | Zone | Status | State | Updated At |
+------------------+----------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller | nova | enabled | up | 2023-05-13T15:54:14.000000 |
| cinder-volume | controller@lvm | nova | enabled | up | 2023-05-13T15:53:22.000000 |
+------------------+----------------+------+---------+-------+----------------------------+
[root@controller ~]#  

We can see a new menu, Volumes, has appeared on the dashboard.

Create a 1 GB volume.

It fails with an error.

Its password was wrong; fix it in cinder.conf, it should be 123456.

Then restart the services:

[root@controller ~]#
[root@controller ~]# systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service
[root@controller ~]# systemctl restart openstack-cinder-volume.service target.service
[root@controller ~]#

Now when we create a volume again, it is created successfully and shows as Available.

Manage Attachments

Select an instance

It errors out.

I switched to a different user and a different instance, and then the attach succeeded.

We connect to that instance.

After connecting to the instance, we can see an extra disk.

We format the disk, mount it inside the instance, and write some data to it; a sketch of those steps is shown below.
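
A minimal sketch of those steps from inside the guest, assuming the new volume shows up as /dev/vdb (check with lsblk or dmesg first) and that ext4 tools are available in the image:

# format the attached volume, mount it and write a test file
sudo mkfs.ext4 /dev/vdb
sudo mkdir -p /mnt/data
sudo mount /dev/vdb /mnt/data
echo "hello from the cinder volume" | sudo tee /mnt/data/test.txt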

If this instance has problems, or for any other reason, and we still need this data disk, we can attach it to another cloud instance with the data preserved. First we unmount it inside the cloud instance that is currently using it.

Find the volume on the cloud dashboard.

Click Detach Volume.

The volume is now in the Available state again.

We attach it to another instance.

The volume is now attached to the mcw-test2 instance. Earlier, attaching to mcw-test2 as the ordinary user machangwei failed, probably a permissions issue; we will look into that later.

We connect to the mcw-test2 VM.

Inside the instance we check the volume we just attached, i.e. the disk added to the instance. Since this disk was already formatted on the other instance and has data on it, there is no need to format it again; just mount it and the data previously saved on the disk is visible.

We now have one system disk and one data disk.

Extending a volume:

Let's look at this volume.

It is 1 GB.

While it is in use, there seems to be no Extend Volume button.

Unmount the volume.

Detach the volume.

Once the volume is no longer in use, the Extend Volume option appears; it is currently 1 GB.

Set the new volume size to 2 GB.

The extension succeeds: a 2 GB volume in the Available state.

Running the command in the terminal again, we can see the volume size is now 2 GB.

We are currently using iSCSI. For production the latter two options (GlusterFS/Ceph) are recommended; for testing the former two are fine.

openstack wiki:    https://wiki.openstack.org/wiki/Main_Page

We will now use compute node 2 as another storage node and deploy the storage-node components there.

Official reference: https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/cinder-storage-install.html

My notes: https://www.cnblogs.com/machangwei-8/p/17392643.html#_label6

In the official guide, since we are not using LVM, we can skip the LVM steps above; the targetcli part below is not needed either, as it seems to be used only for iSCSI.

Install the packages:

[root@compute2 ~]#  yum install openstack-cinder  python-keystone

[root@compute2 ~]# yum install -y nfs-utils rpcbind

Configure and start the NFS server on compute node 2:

[root@compute2 ~]# mkdir -p /data/nfs
[root@compute2 ~]# vim /etc/exports
[root@compute2 ~]# tail /etc/exports
/data/nfs *(rw,sync,no_root_squash)
[root@compute2 ~]# systemctl start rpcbind
[root@compute2 ~]# systemctl start nfs
[root@compute2 ~]#

Copy the cinder configuration already set up on the controller over to the compute node; compute node 2, acting as a storage node, needs to run some of the cinder services.

[root@compute2 ~]# cp /etc/cinder/cinder.conf /etc/cinder/cinder.confbak
[root@compute2 ~]# scp -rp controller://etc/cinder/cinder.conf /etc/cinder/
root@controller's password:
cinder.conf 100% 171KB 9.0MB/s 00:00
[root@compute2 ~]#

We are now using NFS rather than LVM, so the LVM configuration should be removed here.

First check the driver path: when we specify the NFS driver in the cinder configuration, the driver class is referenced by this path.

Then add the NFS configuration: create the shares file and specify the mount point base.

[root@compute2 ~]# vim /etc/cinder/cinder.conf
[root@compute2 ~]# tail -4 /etc/cinder/cinder.conf
[nfs]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/nfs_shares
nfs_mount_point_base = $state_path/mnt
[root@compute2 ~]# vim /etc/cinder/nfs_shares
[root@compute2 ~]# ip a|grep 0.43
inet 10.0.0.43/24 brd 10.0.0.255 scope global brq2fe697b2-ca
[root@compute2 ~]# cat /etc/cinder/nfs_shares
10.0.0.43:/data/nfs
[root@compute2 ~]#

[root@compute2 ~]# showmount -e 10.0.0.43
Export list for 10.0.0.43:
/data/nfs *
[root@compute2 ~]#

Change the setting below to nfs; previously the lvm backend was enabled (see the sketch below).
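
That setting is the enabled_backends option in the [DEFAULT] section; a sketch of the change on compute2:

[DEFAULT]
# the copied controller config enabled the lvm backend; switch it to the nfs section below
enabled_backends = nfs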

Grant the proper ownership and permissions on the shares file, roughly as sketched below.
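
Something like the following, mirroring the usual guidance for the NFS backend (the exact mode and ownership may vary):

# let the cinder volume service read the shares file
chown root:cinder /etc/cinder/nfs_shares
chmod 0640 /etc/cinder/nfs_shares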

Then start the service:

[root@compute2 ~]# systemctl enable openstack-cinder-volume.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-volume.service to /usr/lib/systemd/system/openstack-cinder-volume.service.
[root@compute2 ~]# systemctl start openstack-cinder-volume.service
[root@compute2 ~]#

Checking the services from the controller, there is now one lvm backend and one nfs backend. Can they be used as they are? Not yet: we need to create volume types first.

[root@controller ~]# openstack volume service list
+------------------+----------------+------+---------+-------+----------------------------+
| Binary | Host | Zone | Status | State | Updated At |
+------------------+----------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller | nova | enabled | up | 2023-05-14T04:58:00.000000 |
| cinder-volume | controller@lvm | nova | enabled | up | 2023-05-14T04:58:02.000000 |
| cinder-volume | compute2@nfs | nova | enabled | up | 2023-05-14T04:58:03.000000 |
+------------------+----------------+------+---------+-------+----------------------------+
[root@controller ~]#

As shown below, a type must be chosen when creating a volume. We now have two backends, lvm and nfs, on the controller and on compute node 2 respectively. So we need to create volume types and bind each type to its backend storage node; then when a volume is created, it is built on the storage node matching the chosen volume type.

After creating the volume types, how do we bind a type to a backend storage node? We have not yet added the volume_backend_name option on the storage nodes; it declares which type of volumes a storage node can create.

On the controller (acting as a storage node), add the option, set the name, and restart the volume service:

[root@controller ~]# vim /etc/cinder/cinder.conf
[root@controller ~]# tail -6 /etc/cinder/cinder.conf
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
volume_backend_name = ISCSI-Storage
[root@controller ~]# systemctl restart openstack-cinder-volume.service
[root@controller ~]#

On compute node 2 (the other storage node), add the option, set the name, and restart the volume service:

[root@compute2 ~]# vim /etc/cinder/cinder.conf
[root@compute2 ~]# tail -5 /etc/cinder/cinder.conf
[nfs]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/nfs_shares
nfs_mount_point_base = $state_path/mnt
volume_backend_name = NFS-Storage
[root@compute2 ~]# systemctl restart openstack-cinder-volume.service
[root@compute2 ~]#

Having configured volume_backend_name in cinder, we now create the two volume types on the controller and bind them to the storage nodes' backend names; that is all that is needed.

[root@controller ~]# openstack volume service list
+------------------+----------------+------+---------+-------+----------------------------+
| Binary | Host | Zone | Status | State | Updated At |
+------------------+----------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller | nova | enabled | up | 2023-05-14T05:31:31.000000 |
| cinder-volume | controller@lvm | nova | enabled | up | 2023-05-14T05:31:25.000000 |
| cinder-volume | compute2@nfs | nova | enabled | up | 2023-05-14T05:31:33.000000 |
+------------------+----------------+------+---------+-------+----------------------------+
[root@controller ~]#
[root@controller ~]# cinder type-create NFS
+--------------------------------------+------+-------------+-----------+
| ID | Name | Description | Is_Public |
+--------------------------------------+------+-------------+-----------+
| 37a44b36-4439-42f8-9e93-11509bfd041d | NFS | - | True |
+--------------------------------------+------+-------------+-----------+
[root@controller ~]# cinder type-create ISCSI
+--------------------------------------+-------+-------------+-----------+
| ID | Name | Description | Is_Public |
+--------------------------------------+-------+-------------+-----------+
| 24f0c2d6-125b-4b4c-938c-e5d998c5a863 | ISCSI | - | True |
+--------------------------------------+-------+-------------+-----------+
[root@controller ~]#
[root@controller ~]# cinder type-key NFS set volume_backend_name=NFS-Storage
[root@controller ~]# cinder type-key ISCSI set volume_backend_name=ISCSI-Storage
[root@controller ~]#
[root@controller ~]# tail -2 /etc/cinder/cinder.conf
iscsi_helper = lioadm
volume_backend_name = ISCSI-Storage
[root@controller ~]#

Logging in as the ordinary user, we can see that both volume types are now available when creating a volume.

We create two volumes, one iSCSI and one NFS. Both are created successfully.

The iSCSI one:

The NFS one:

And on the NFS storage node, the share directory has been mounted automatically.

The NFS volume attaches to an instance and works normally.

Summary:

1. Steps for a cinder storage backend

1. Prepare the storage.
2. Install cinder-volume.
3. vim /etc/cinder/cinder.conf
[xxx]
volume_driver=xxx
xxxxx
xxxxx
xxxxx
volume_backend_name=xxx-Storage
Start cinder-volume.
4. Create a type:
cinder type-create xxx
5. Associate the type:
cinder type-key xxx set volume_backend_name=xxx-Storage

We want to give instances one external network and one internal network.

We modify the network (controller) node as follows; the host has two physical NICs, one external and one internal.

Modify the configuration and add the NIC:

[root@controller ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[root@controller ~]# grep interface /etc/neutron/plugins/ml2/linuxbridge_agent.ini
physical_interface_mappings = provider:ens33,internal:ens34
[root@controller ~]# ip a s ens34
3: ens34: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:5c:d3:41 brd ff:ff:ff:ff:ff:ff
[root@controller ~]# ifup ens34
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/4)
[root@controller ~]# ip a s ens34
3: ens34: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:5c:d3:41 brd ff:ff:ff:ff:ff:ff
inet 172.168.1.41/24 brd 172.168.1.255 scope global ens34
valid_lft forever preferred_lft forever
inet6 fe80::c7c4:97e9:a77b:a70b/64 scope link
valid_lft forever preferred_lft forever
[root@controller ~]#

Now look at the ML2 configuration. Previously only one flat network name was listed there; since we added a new mapping above, we also add the new name to it (see the sketch below).
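A sketch of the relevant part of /etc/neutron/plugins/ml2/ml2_conf.ini after this change, assuming the existing entry was provider and the new flat network name is internal (matching the mapping above):

[ml2_type_flat]
flat_networks = provider,internal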

Then restart the Neutron services:

[root@controller ~]# systemctl restart neutron-server.service
[root@controller ~]# systemctl restart neutron-linuxbridge-agent.service

Bring up the internal NIC on compute nodes 1 and 2 as well, and make sure they can reach the controller's internal IP.

[root@compute1 _base]# ifup ens34
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/10)
[root@compute1 _base]# ping 172.168.1.41
PING 172.168.1.41 (172.168.1.41) 56(84) bytes of data.
64 bytes from 172.168.1.41: icmp_seq=1 ttl=64 time=0.773 ms
64 bytes from 172.168.1.41: icmp_seq=2 ttl=64 time=0.395 ms
^C
--- 172.168.1.41 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.395/0.584/0.773/0.189 ms
[root@compute1 _base]#

The compute nodes are modified in the same way, except that only one file needs to change:

[root@compute1 _base]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[root@compute1 _base]# grep interface /etc/neutron/plugins/ml2/linuxbridge_agent.ini
physical_interface_mappings = provider:ens33,internal:ens34
[root@compute1 _base]# ls /etc/neutron/plugins/ml2/
linuxbridge_agent.ini linuxbridge_agent.inibak
[root@compute1 _base]# systemctl restart neutron-linuxbridge-agent.service
[root@compute1 _base]#

Now that we have added the new network configuration above, let's first create a network.

We change these settings to the values for the network we want to add.

In the command below, the first internal is the physical network mapping name (the one written in the config file); the second internal is the name given to the Neutron network being created. Here the network name is kept the same as the mapping name.

neutron net-create --shared --provider:physical_network internal \
--provider:network_type flat internal
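The neutron CLI prints a deprecation warning (visible in the output below), so for reference the equivalent command with a sufficiently recent python-openstackclient would be roughly:

openstack network create --share --provider-network-type flat \
--provider-physical-network internal internal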

Create the network:

[root@controller ~]# openstack network list
+--------------------------------------+------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+------+--------------------------------------+
| 2fe697b2-ca93-453f-b0dd-726c7708fc99 | WAN | 730d0674-13c0-4af1-b3fb-e2741bd7a414 |
+--------------------------------------+------+--------------------------------------+
[root@controller ~]# neutron net-create --shared --provider:physical_network internal \
--provider:network_type flat internal
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2023-05-14T06:44:18Z |
| description | |
| id | a92ccad9-5319-4564-a164-b364f2b56c3c |
| ipv4_address_scope | |
| ipv6_address_scope | |
| mtu | 1500 |
| name | internal |
| port_security_enabled | True |
| project_id | b29c52befb8448378d99086df5053737 |
| provider:network_type | flat |
| provider:physical_network | internal |
| provider:segmentation_id | |
| revision_number | 3 |
| router:external | False |
| shared | True |
| status | ACTIVE |
| subnets | |
| tags | |
| tenant_id | b29c52befb8448378d99086df5053737 |
| updated_at | 2023-05-14T06:44:18Z |
+---------------------------+--------------------------------------+
[root@controller ~]# openstack network list
+--------------------------------------+----------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+----------+--------------------------------------+
| 2fe697b2-ca93-453f-b0dd-726c7708fc99 | WAN | 730d0674-13c0-4af1-b3fb-e2741bd7a414 |
| a92ccad9-5319-4564-a164-b364f2b56c3c | internal | |
+--------------------------------------+----------+--------------------------------------+
[root@controller ~]#

Create a subnet on the network:

$ neutron subnet-create --name provider \
--allocation-pool start=START_IP_ADDRESS,end=END_IP_ADDRESS \
--dns-nameserver DNS_RESOLVER --gateway PROVIDER_NETWORK_GATEWAY \
provider PROVIDER_NETWORK_CIDR

Create a subnet for the internal network:

[root@controller ~]# openstack network list
+--------------------------------------+----------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+----------+--------------------------------------+
| 2fe697b2-ca93-453f-b0dd-726c7708fc99 | WAN | 730d0674-13c0-4af1-b3fb-e2741bd7a414 |
| a92ccad9-5319-4564-a164-b364f2b56c3c | internal | |
+--------------------------------------+----------+--------------------------------------+
[root@controller ~]#
[root@controller ~]# neutron subnet-create --name internal \
--allocation-pool start=172.168.1.200,end=172.168.1.250 \
--dns-nameserver 172.168.1.254 --gateway 172.168.1.254 \
internal 172.168.1.0/24
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
Created a new subnet:
+-------------------+----------------------------------------------------+
| Field | Value |
+-------------------+----------------------------------------------------+
| allocation_pools | {"start": "172.168.1.200", "end": "172.168.1.250"} |
| cidr | 172.168.1.0/24 |
| created_at | 2023-05-14T06:52:31Z |
| description | |
| dns_nameservers | 172.168.1.254 |
| enable_dhcp | True |
| gateway_ip | 172.168.1.254 |
| host_routes | |
| id | 0b550ad4-b852-4de5-8b1a-80c764c46f3c |
| ip_version | 4 |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | internal |
| network_id | a92ccad9-5319-4564-a164-b364f2b56c3c |
| project_id | b29c52befb8448378d99086df5053737 |
| revision_number | 2 |
| service_types | |
| subnetpool_id | |
| tags | |
| tenant_id | b29c52befb8448378d99086df5053737 |
| updated_at | 2023-05-14T06:52:31Z |
+-------------------+----------------------------------------------------+
[root@controller ~]# openstack network list
+--------------------------------------+----------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+----------+--------------------------------------+
| 2fe697b2-ca93-453f-b0dd-726c7708fc99 | WAN | 730d0674-13c0-4af1-b3fb-e2741bd7a414 |
| a92ccad9-5319-4564-a164-b364f2b56c3c | internal | 0b550ad4-b852-4de5-8b1a-80c764c46f3c |
+--------------------------------------+----------+--------------------------------------+
[root@controller ~]#
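For reference, the equivalent subnet creation with a recent python-openstackclient would be roughly:

openstack subnet create --network internal \
--allocation-pool start=172.168.1.200,end=172.168.1.250 \
--dns-nameserver 172.168.1.254 --gateway 172.168.1.254 \
--subnet-range 172.168.1.0/24 internal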

Dashboard operations:

Viewing networks in the dashboard, we can now see the newly added internal network.

The network topology view also shows the internal network.

We connect to an existing instance.

After connecting, it still shows only one NIC.

Hard-reboot the instance.

For now, do the hard reboot as admin.

Rebooted.

After reconnecting, nothing has changed.

So we create a new instance, attaching the external network first and then the internal one.

We can see it has two NICs.

Click this option here; it was left at the default before, and as a result the image name above did not even show up, which looks like a problem.

For the shuangwangka2 instance, I attach the internal NIC first.

eth1 seems to have a problem.

Not sure what is going on; leaving it for now.

https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/neutron-controller-install.html

Controller node:

Configure the server component

  • Edit the ``/etc/neutron/neutron.conf`` file and complete the following actions:

    • In the [database] section, configure database access:

      [database]

      connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

      Replace NEUTRON_DBPASS with the password you chose for the database.

    • In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2) plug-in, the router service, and overlapping IP addresses:

      [DEFAULT]

      core_plugin = ml2
      service_plugins = router
      allow_overlapping_ips = True

    • In the ``[DEFAULT]`` and ``[oslo_messaging_rabbit]`` sections, configure RabbitMQ message queue access:

      [DEFAULT]

      rpc_backend = rabbit

      [oslo_messaging_rabbit]

      rabbit_host = controller
      rabbit_userid = openstack
      rabbit_password = RABBIT_PASS

      Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.

    • In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure Identity service access:

      [DEFAULT]

      auth_strategy = keystone

      [keystone_authtoken]

      auth_uri = http://controller:5000
      auth_url = http://controller:35357
      memcached_servers = controller:11211
      auth_type = password
      project_domain_name = default
      user_domain_name = default
      project_name = service
      username = neutron
      password = NEUTRON_PASS

      Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.

      Note

      Comment out or remove any other options in the [keystone_authtoken] section.

    • In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to notify Compute of network topology changes:

      [DEFAULT]

      notify_nova_on_port_status_changes = True
      notify_nova_on_port_data_changes = True

      [nova]

      auth_url = http://controller:35357
      auth_type = password
      project_domain_name = default
      user_domain_name = default
      region_name = RegionOne
      project_name = service
      username = nova
      password = NOVA_PASS

      Replace NOVA_PASS with the password you chose for the nova user in the Identity service.

    • In the [oslo_concurrency] section, configure the lock path:

      [oslo_concurrency]

      lock_path = /var/lib/neutron/tmp

The other settings above do not need changing; we only adjust what was previously empty.

We set one option and add another, as follows (see the sketch below):

vim /etc/neutron/neutron.conf
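A sketch of the [DEFAULT] options being set in /etc/neutron/neutron.conf, assuming the values from the guide above (the actual change is only shown in a screenshot in the original):

[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True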

Configure the Modular Layer 2 (ML2) plug-in

The ML2 plug-in uses the Linux bridge mechanism to build layer-2 virtual networking infrastructure for instances.

  • Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the following actions:

    • In the ``[ml2]`` section, enable flat, VLAN, and VXLAN networks:

      [ml2]

      type_drivers = flat,vlan,vxlan

    • In the ``[ml2]`` section, enable VXLAN self-service (private) networks:

      [ml2]

      tenant_network_types = vxlan

    • In the ``[ml2]`` section, enable the Linux bridge and layer-2 population mechanisms:

      [ml2]

      mechanism_drivers = linuxbridge,l2population

      Warning

      After you configure the ML2 plug-in, removing values from the ``type_drivers`` option can lead to database inconsistency.

      Note

      The Linux bridge agent only supports VXLAN overlay networks.

    • In the ``[ml2]`` section, enable the port security extension driver:

      [ml2]

      extension_drivers = port_security

    • In the ``[ml2_type_flat]`` section, configure the provider virtual network as a flat network:

      [ml2_type_flat]

      flat_networks = provider

    • In the ``[ml2_type_vxlan]`` section, configure the VXLAN network identifier range for self-service networks:

      [ml2_type_vxlan]

      vni_ranges = 1:1000

    • In the ``[securitygroup]`` section, enable ipset to improve the efficiency of security group rules:

      [securitygroup]

      enable_ipset = True

Before the change:

[DEFAULT]
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population

vim /etc/neutron/plugins/ml2/ml2_conf.ini

After the change, vxlan is added; the available type drivers also include gre and geneve.

Add vxlan to the tenant network types as well,

and add one more driver to the option below.

Our configuration now looks as follows, and we keep modifying it according to the guide above.

After the change (see the sketch below):
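A sketch of the resulting ml2_conf.ini sections, assuming the guide values above are used and flat_networks keeps the internal mapping added earlier:

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider,internal

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = True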

Configure the Linux bridge agent

The Linux bridge agent builds layer-2 virtual networking infrastructure for instances and handles security groups.

  • Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and complete the following actions:

    • In the ``[linux_bridge]`` section, map the provider virtual network to the provider physical network interface:

      [linux_bridge]
      physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

      Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying provider physical network interface. See the environment-networking section of the guide for more information.

    • In the ``[vxlan]`` section, enable VXLAN overlay networks, configure the IP address of the physical network interface that handles overlay networks, and enable layer-2 population:

      [vxlan]
      enable_vxlan = True
      local_ip = OVERLAY_INTERFACE_IP_ADDRESS
      l2_population = True

      Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the underlying physical network interface that handles overlay networks. The example architecture uses the management interface to tunnel traffic to the other nodes, so replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the management IP address of the controller node. See the environment-networking section of the guide for more information.

    • In the ``[securitygroup]`` section, enable security groups and configure the Linux bridge iptables firewall driver:

      [securitygroup]

      enable_security_group = True
      firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Before the change:

After the change (a sketch follows):
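A sketch of the resulting linuxbridge_agent.ini on the controller, reusing the interface mappings shown earlier; MANAGEMENT_IP is a placeholder for the controller's management address, which is not reproduced here:

[linux_bridge]
physical_interface_mappings = provider:ens33,internal:ens34

[vxlan]
enable_vxlan = True
local_ip = MANAGEMENT_IP
l2_population = True

[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver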

Configure the layer-3 agent

The layer-3 (L3) agent provides routing and NAT services for self-service virtual networks.

  • Edit the ``/etc/neutron/l3_agent.ini`` file and complete the following actions:

    • In the ``[DEFAULT]`` section, configure the Linux bridge interface driver and the external network bridge:

      [DEFAULT]

      interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
      external_network_bridge =

      Note

      The ``external_network_bridge`` option intentionally lacks a value, which enables multiple external networks on a single agent.

Comment out one line and change it to the setting below.

Configure the DHCP agent

The DHCP agent provides DHCP services for virtual networks.

  • Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following actions:

    • In the ``[DEFAULT]`` section, configure the Linux bridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata so instances on provider networks can access metadata over the network:

      [DEFAULT]

      interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
      dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
      enable_isolated_metadata = True

This one we did not seem to touch.

Compute node before the change; see https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/neutron-compute-install-option2.html

After the change:

The configuration above was wrong; it was missing an 'r'.

Restart the network services. Controller node: systemctl restart neutron-server.service neutron-linuxbridge-agent.service neutron-l3-agent.service

Compute nodes: systemctl restart neutron-linuxbridge-agent.service
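After the restarts, it is worth confirming that every agent still reports as alive; a minimal check with the existing CLI:

neutron agent-list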

Create the self-service network

Create a router

Self-service (private) networks connect to provider networks through a virtual router, typically with bidirectional NAT. Each router has at least one interface on a self-service network and a gateway interface on a provider network.

The provider network must include the ``router:external`` option so that routers can attach to it and reach external networks such as the Internet. The ``admin`` user or another privileged user must include this option when the network is created, or it can be added later. In this environment we set the ``public`` provider network to ``router:external``.

Add ``router:external`` to the ``provider`` network.

Create the router.

Add an interface on the self-service subnet to the router:

Set a gateway on the provider network on the router:
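These four steps are performed in the dashboard here; a rough CLI equivalent following the Mitaka guide, where WAN is this environment's external network and router1 and SELFSERVICE_SUBNET are hypothetical names:

neutron net-update WAN --router:external
neutron router-create router1
neutron router-interface-add router1 SELFSERVICE_SUBNET
neutron router-gateway-set router1 WAN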

In the network topology, the part in the middle is what we just created; it is the network used as a VPC.

Let's adjust the dashboard configuration; back it up first.

[root@compute1 _base]# vim /etc/openstack-dashboard/local_settings
[root@compute1 _base]# cp /etc/openstack-dashboard/local_settings /etc/openstack-dashboard/local_settingsbak2
[root@compute1 _base]#

Before the change:

We set all of these to True. Some of them we do not use yet, but they do not cause errors; previously, before VXLAN was enabled, some of them reportedly would error out as unsupported.

Then restart the service:

[root@compute1 _base]# systemctl restart httpd.service

In the dashboard, create an instance on the VXLAN network and check its network information.

Choose No for creating a new volume.

For the network, select the private (self-service) network.

We can see the three networks; instances created on each look different here. This one was created on the VXLAN network, so it seems multiple network types can coexist on the platform.

The .5 address is the IP used by the instance we just created.

So how do we log in to an instance on this VXLAN network?

Tried from the controller node; the subnet is not reachable from there.

The current approach is to pick the instance,

find its host and its instance name,

then go to that host and log in to the instance from there. We can see it has exactly that IP, so the login is correct.

We create a second instance.

From instance 1, the network to instance 2 is reachable.

Floating IPs

Looking further, instances on the VXLAN network do have the option to associate a floating IP.

In the menus, once the VXLAN network exists, the floating IP menu appears as well; the router menu also seems to have been added at that point.

First look at the router.

The .110 external gateway sits on the WAN network; when the VXLAN network setup was created, one of the parameters specified that network.

Associate a floating IP.

The floating IP is associated successfully.
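For reference, the same association could be done from the CLI with a recent python-openstackclient; a sketch with the hypothetical server name my-vxlan-vm (the first command prints the allocated address):

openstack floating ip create WAN
openstack server add floating ip my-vxlan-vm FLOATING_IP_ADDRESS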

Using the floating IP, we can SSH in directly from the controller node without a password, and the instance can reach the Internet, because the floating IP is on the externally connected network.

The DNS server appears to be Google's, and baidu resolves to multiple IPs.

Looking at it again: the .5 instance connects through the router to WAN, which serves as the public external network.
