Greenplum is a relational database designed for data warehouse workloads. Thanks to its well-designed architecture, it offers clear advantages in data storage, high concurrency, high availability, linear scalability, response speed, ease of use, and cost-effectiveness. Greenplum is a distributed database built on PostgreSQL. It adopts a shared-nothing architecture: each node controls its own host, operating system, memory, and storage, and nothing is shared between nodes.
Essentially, Greenplum is a relational database cluster: a logical database composed of several independent database instances. Unlike Oracle RAC, this cluster adopts an MPP (Massively Parallel Processing) architecture. In contrast to single-instance relational databases such as MySQL and Oracle, Greenplum can be thought of as a distributed relational database.
For more information about Greenplum, visit https://greenplum.org/
https://github.com/greenplum-db/gpdb/releases/tag/6.1.0
- Disable the firewall (on all machines)
iptables (CentOS 6.x)
Stop: service iptables stop
Disable permanently: chkconfig iptables off
- Check firewalld (CentOS 7.x)
Stop: systemctl stop firewalld
Disable permanently: systemctl disable firewalld
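To confirm the firewall really is off on every machine (standard CentOS commands; the output wording varies by release):
Check (CentOS 7.x): systemctl status firewalld # should report inactive (dead)
Check (CentOS 6.x): service iptables status # should report that the firewall is not running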
[root@mdw ~]# vi /etc/selinux/config
Make sure SELINUX=disabled
This prepares the way for the GP nodes to communicate with each other later.
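Editing /etc/selinux/config only takes effect after a reboot. To avoid an immediate reboot, SELinux can also be switched to permissive mode for the current session (standard commands):
[root@mdw ~]# setenforce 0 # permissive until the next reboot
[root@mdw ~]# getenforce # verify: prints Permissive now, Disabled after reboot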
Change each host's hostname. The generally recommended naming convention is: project_gp_role
Master: dis_gp_mdw
Standby Master: dis_gp_smdw
Segment Hosts: dis_gp_sdw1, dis_gp_sdw2, and so on
If the Standby Master is co-located on a segment host, name it dis_gp_sdw3_smdw.
[root@mdw ~]# vi /etc/hosts
Add each machine's IP and hostname, and make sure /etc/hosts on every machine contains the following entries:
192.168.xxx.xxx gp-mdw
192.168.xxx.xxx gp-sdw1
192.168.xxx.xxx gp-sdw2
192.168.xxx.xxx gp-sdw3-smdw
CentOS 7.x: vi /etc/hostname
CentOS 6.x: vi /etc/sysconfig/network
Reboot the machine after making the change.
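On CentOS 7.x the hostname can also be changed without editing the file or rebooting; hostnamectl is standard systemd tooling, shown here with this guide's example name:
[root@mdw ~]# hostnamectl set-hostname gp-mdw
[root@mdw ~]# hostname # verify the new name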
vi /etc/sysctl.conf
kernel.shmall = 197951838 # echo $(expr $(getconf _PHYS_PAGES) / 2)
kernel.shmmax = 810810728448 # echo $(expr $(getconf _PHYS_PAGES) / 2 \* $(getconf PAGE_SIZE))
kernel.shmmni = 4096
vm.overcommit_memory = 2
vm.overcommit_ratio = 75 # vm.overcommit_ratio = (RAM - 0.026 * gp_vmem_rq) / RAM; gp_vmem_rq = ((SWAP + RAM) - (7.5GB + 0.05 * RAM)) / 1.7
net.ipv4.ip_local_port_range = 10000 65535
kernel.sem = 500 2048000 200 4096
kernel.sysrq = 1
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.msgmni = 2048
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.conf.all.arp_filter = 1
net.core.netdev_max_backlog = 10000
net.core.rmem_max = 2097152
net.core.wmem_max = 2097152
vm.swappiness = 10
vm.zone_reclaim_mode = 0
vm.dirty_expire_centisecs = 500
vm.dirty_writeback_centisecs = 100
# vm.min_free_kbytes = 487119 # awk 'BEGIN {OFMT = "%.0f";} /MemTotal/ {print "vm.min_free_kbytes =", $2 * .03;}' /proc/meminfo
# For hosts with more than 64GB of RAM, add the following 4 lines:
vm.dirty_background_ratio = 0
vm.dirty_ratio = 0
vm.dirty_background_bytes = 1610612736 # 1.5GB
vm.dirty_bytes = 4294967296 # 4GB
# For hosts with less than 64GB of RAM, remove dirty_background_bytes and dirty_bytes and add the following 2 lines:
vm.dirty_background_ratio = 3
vm.dirty_ratio = 10
# vm.min_free_kbytes may be set on systems with more than 64GB of RAM; it is rarely configured otherwise.
# vm.min_free_kbytes ensures that PF_MEMALLOC allocations by network and storage drivers succeed. This is especially important on systems with large amounts of memory, where the default is usually too low. The value can be computed with awk; the recommendation is 3% of physical memory:
# awk 'BEGIN {OFMT = "%.0f";} /MemTotal/ {print "vm.min_free_kbytes =", $2 * .03;}' /proc/meminfo >> /etc/sysctl.conf
# Do not set vm.min_free_kbytes higher than 5% of system memory; doing so may cause out-of-memory conditions.
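After editing /etc/sysctl.conf, the settings can be loaded and spot-checked without a reboot (standard sysctl usage):
[root@mdw ~]# sysctl -p # load settings from /etc/sysctl.conf
[root@mdw ~]# sysctl kernel.shmall kernel.shmmax vm.overcommit_memory # read back a few values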
file-max: the maximum number of file handles a process can have open concurrently; this directly limits the maximum number of concurrent connections.
tcp_tw_reuse: when set to 1, sockets in TIME-WAIT state may be reused for new TCP connections. This matters for servers, which always accumulate many TIME-WAIT connections.
tcp_keepalive_time: how often TCP sends keepalive messages when keepalive is enabled. The default is 7200 seconds, meaning the kernel only probes after a connection has been idle for 2 hours. Setting it lower cleans up dead connections faster.
tcp_fin_timeout: the maximum time a socket stays in FIN-WAIT-2 state when the server closes the connection.
tcp_max_tw_buckets: the maximum number of TIME_WAIT sockets the system allows; beyond this number, TIME_WAIT sockets are immediately cleared and a warning is printed. The default is 180000; too many TIME_WAIT sockets slow down a web server.
tcp_max_syn_backlog: the maximum queue length for SYN requests during the TCP three-way handshake. The default is 1024; raising it prevents Linux from dropping client connection requests when the server (e.g. Nginx) is too busy to accept new connections in time.
ip_local_port_range: the range of local ports used for UDP and TCP connections.
net.ipv4.tcp_rmem: the minimum, default, and maximum sizes of the TCP receive buffer (used for the TCP receive sliding window).
net.ipv4.tcp_wmem: the minimum, default, and maximum sizes of the TCP send buffer (used for the TCP send sliding window).
netdev_max_backlog: when the NIC receives packets faster than the kernel can process them, a queue holds the excess; this parameter is the maximum length of that queue.
rmem_default: the default size of the kernel socket receive buffer.
wmem_default: the default size of the kernel socket send buffer.
rmem_max: the maximum size of the kernel socket receive buffer.
wmem_max: the maximum size of the kernel socket send buffer.
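Any of these parameters can also be inspected at runtime through /proc/sys (dots in the name become slashes), for example:
[root@mdw ~]# cat /proc/sys/net/ipv4/tcp_max_syn_backlog
[root@mdw ~]# cat /proc/sys/net/core/netdev_max_backlog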
Add the following parameters to /etc/security/limits.conf:
vi /etc/security/limits.conf
* soft nofile 524288
* hard nofile 524288
* soft nproc 131072
* hard nproc 131072
The asterisk "*" means all users.
nproc is the maximum number of processes.
nofile is the maximum number of open files.
RHEL / CentOS 6.X
Modify the nproc value in /etc/security/limits.d/90-nproc.conf:
[root@mdw ~]# vi /etc/security/limits.d/90-nproc.conf
Make sure it contains: * soft nproc 131072
RHEL / CentOS 7.X
Modify the nproc value in /etc/security/limits.d/20-nproc.conf:
[root@mdw ~]# vi /etc/security/limits.d/20-nproc.conf
Make sure it contains: * soft nproc 131072
The ulimit -u command shows the maximum number of processes available per user (max user processes); verify that it returns 131072.
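A quick way to verify both limits (ulimit is a shell builtin; the new values apply after logging in again):
[root@mdw ~]# ulimit -n # max open files, expect 524288
[root@mdw ~]# ulimit -u # max user processes, expect 131072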
[root@mdw greenplum-db]# echo $LANG
en_US.UTF-8
If it is zh_CN.UTF-8, modify:
CentOS 6.X: /etc/sysconfig/i18n
CentOS 7.X: /etc/locale.conf
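On CentOS 7.x the locale can also be set with localectl instead of editing the file directly (standard systemd tooling):
[root@mdw ~]# localectl set-locale LANG=en_US.UTF-8
[root@mdw ~]# localectl status # verify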
Greenplum management utilities such as gpexpand, gpinitsystem, and gpaddmirrors use SSH connections to perform their tasks. In larger Greenplum clusters, the number of SSH connections these utilities open may exceed the host's maximum threshold for unauthenticated connections. When this happens you will see the error: ssh_exchange_identification: Connection closed by remote host.
To avoid this, update the MaxStartups and MaxSessions parameters in /etc/ssh/sshd_config (or /etc/sshd_config):
vi /etc/ssh/sshd_config
MaxStartups 300:30:1000
Restart sshd for the parameters to take effect:
service sshd restart
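To confirm the new values are active, dump the effective sshd configuration (sshd -T prints the running settings; run as root):
[root@mdw ~]# sshd -T | grep -i -E 'maxstartups|maxsessions'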
To keep time consistent across the cluster, first edit /etc/ntp.conf on the master server and point it at your data center's NTP server. If there is none, correct the master server's time manually, then edit /etc/ntp.conf on the other nodes so that they follow the master server's time.
[root@mdw ~]# vi /etc/ntp.conf
Add at the top of the server entries.
On the master:
Delete the default server 1-4 entries and replace them with server xxx; ask your company's IT staff for the NTP server's IP. If there is none, use:
server 1.cn.pool.ntp.org
On the segments:
server mdw prefer # prefer the master node
server smdw # then the standby node; if there is no standby, this can point at the data center's NTP server
[root@mdw ~]# service ntpd restart # restart the NTP service after the change
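Once ntpd is running, peer status can be checked on any node (ntpq ships with the ntp package):
[root@mdw ~]# ntpq -p # the peer marked with * is the current sync source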
Create the gpadmin user on every node; it is used to manage and run the GP cluster.
[root@mdw ~]# groupadd gpadmin
[root@mdw ~]# useradd gpadmin -g gpadmin -s /bin/bash
[root@mdw ~]# passwd gpadmin
Password: gpadmin
[root@mdw ~]# yum install -y zip unzip openssh-clients ed ntp net-tools perl perl-devel perl-ExtUtils* mlocate lrzsz parted apr apr-util bzip2 krb5-devel libevent libyaml rsync
Run the installer; by default it installs to /usr/local/.
[root@mdw ~]# rpm -ivh greenplum-db-6.12.1-rhel7-x86_64.rpm
After installation, greenplum-db-6.12.1 and its symlink greenplum-db appear under /usr/local.
Because of permission issues, it is recommended to move the installation under /home/gpadmin. Run the following:
1. Go to the installation's parent directory
cd /usr/local
2. Move the installation directory to /home/gpadmin
mv greenplum-db-6.12.1 /home/gpadmin
3. Remove the symlink
/bin/rm -r greenplum-db
4. Create a new symlink under /home/gpadmin
ln -s /home/gpadmin/greenplum-db-6.12.1 /home/gpadmin/greenplum-db
5. Edit greenplum_path.sh (important) [may be unnecessary for greenplum-db-6.12.1]
vi /home/gpadmin/greenplum-db/greenplum_path.sh
Change GPHOME=/usr/local/greenplum-db-6.12.1
to
GPHOME=/home/gpadmin/greenplum-db
6. Change ownership of the files to gpadmin
cd /home
chown -R gpadmin:gpadmin /home/gpadmin
Generate SSH keys
Starting with GP 6.x, gpssh-exkeys no longer generates keys automatically, so they must be generated by hand.
cd /home/gpadmin/greenplum-db
[root@mdw greenplum-db]# ssh-keygen -t rsa
Ignore the prompts and press Enter to accept the defaults.
[root@mdw greenplum-db]# ssh-copy-id gp-sdw1
[root@mdw greenplum-db]# ssh-copy-id gp-sdw2
[root@mdw greenplum-db]# ssh-copy-id gp-sdw3-smdw
vi all_host
Add all hostnames to the file:
gp-mdw
gp-sdw1
gp-sdw2
gp-sdw3-smdw
[root@mdw greenplum-db]# source /home/gpadmin/greenplum-db/greenplum_path.sh
[root@mdw greenplum-db]# gpssh-exkeys -f all_host
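A quick smoke test that the key exchange worked: run a trivial command on every host through gpssh; it should complete without any password prompts:
[root@mdw greenplum-db]# gpssh -f all_host -e 'hostname'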
Set up passwordless login for the gpadmin user
[root@mdw greenplum-db-6.12.1]# su - gpadmin
[gpadmin@mdw ~]$ source /home/gpadmin/greenplum-db/greenplum_path.sh
[gpadmin@mdw ~]$ ssh-keygen -t rsa
[gpadmin@mdw greenplum-db]$ ssh-copy-id gp-sdw1
[gpadmin@mdw greenplum-db]$ ssh-copy-id gp-sdw2
[gpadmin@mdw greenplum-db]$ ssh-copy-id gp-sdw3-smdw
[gpadmin@mdw greenplum-db]$ mkdir gpconfigs
[gpadmin@mdw greenplum-db]$ cd gpconfigs
[gpadmin@mdw greenplum-db]$ vi all_hosts
Add all host hostnames:
gp-mdw
gp-sdw1
gp-sdw2
gp-sdw3-smdw
[gpadmin@mdw ~]$ gpssh-exkeys -f /home/gpadmin/gpconfigs/all_hosts
[gpadmin@mdw greenplum-db]$ vi /home/gpadmin/gpconfigs/seg_hosts
Add all data node hostnames:
gp-sdw1
gp-sdw2
gp-sdw3-smdw
Add the GP installation directory and environment settings to the user's environment variables.
vi .bashrc
source /home/gpadmin/greenplum-db/greenplum_path.sh
Example:
[gpadmin@mdw gpconfigs]$ exit
[root@mdw ~]# source /home/gpadmin/greenplum-db/greenplum_path.sh
[root@mdw ~]# gpscp -f /home/gpadmin/gpconfigs/seg_hosts /etc/hosts root@=:/etc/hosts
[root@mdw ~]# gpscp -f /home/gpadmin/gpconfigs/seg_hosts /etc/security/limits.conf root@=:/etc/security/limits.conf
[root@mdw ~]# gpscp -f /home/gpadmin/gpconfigs/seg_hosts /etc/sysctl.conf root@=:/etc/sysctl.conf
[root@mdw ~]# gpscp -f /home/gpadmin/gpconfigs/seg_hosts /etc/security/limits.d/20-nproc.conf root@=:/etc/security/limits.d/20-nproc.conf
[root@mdw ~]# gpssh -f /home/gpadmin/gpconfigs/seg_hosts -e 'sysctl -p'
[root@mdw ~]# gpssh -f /home/gpadmin/gpconfigs/seg_hosts -e 'reboot'
3.5.1 Simulating the gpseginstall script (run as the gpadmin user)
GP 6.x has no gpseginstall command; the following reproduces its main steps to deploy the Greenplum segments.
Run as the gpadmin user:
[root@gp-mdw gpadmin]# su - gpadmin
[gpadmin@gp-mdw ~]$ cd /home/gpadmin
[gpadmin@gp-mdw ~]$ tar -cf gp6.tar greenplum-db-6.12.1/
[gpadmin@gp-mdw ~]$ vi /home/gpadmin/gpconfigs/gpseginstall_hosts
Add:
gp-sdw1
gp-sdw2
gp-sdw3-smdw
[gpadmin@gp-mdw ~]$ gpscp -f /home/gpadmin/gpconfigs/gpseginstall_hosts gp6.tar gpadmin@=:/home/gpadmin
[gpadmin@mdw gpconfigs]$ gpssh -f /home/gpadmin/gpconfigs/gpseginstall_hosts
tar -xf gp6.tar
ln -s greenplum-db-6.12.1 greenplum-db
exit
[gpadmin@mdw gpconfigs]$ exit
[root@mdw greenplum-db-6.12.1]# su - gpadmin
[gpadmin@mdw ~]$ cd gpconfigs
[gpadmin@mdw gpconfigs]$ vi seg_hosts
Add all segment hostnames to the file:
gp-sdw1
gp-sdw2
gp-sdw3-smdw
[gpadmin@mdw gpconfigs]$ gpscp -f /home/gpadmin/gpconfigs/seg_hosts /home/gpadmin/.bashrc gpadmin@=:/home/gpadmin/.bashrc
1. Create the master data directory
mkdir -p /data/master
chown -R gpadmin:gpadmin /data
source /home/gpadmin/greenplum-db/greenplum_path.sh
If there is a standby node, also run the following two commands (adjust the hostname gp-sdw3-smdw as appropriate):
gpssh -h gp-sdw3-smdw -e 'mkdir -p /data/master'
gpssh -h gp-sdw3-smdw -e 'chown -R gpadmin:gpadmin /data'
2. Create the segment data directories
source /home/gpadmin/greenplum-db/greenplum_path.sh
gpssh -f /home/gpadmin/gpconfigs/seg_hosts -e 'mkdir -p /data/p1'
gpssh -f /home/gpadmin/gpconfigs/seg_hosts -e 'mkdir -p /data/p2'
gpssh -f /home/gpadmin/gpconfigs/seg_hosts -e 'mkdir -p /data/m1'
gpssh -f /home/gpadmin/gpconfigs/seg_hosts -e 'mkdir -p /data/m2'
gpssh -f /home/gpadmin/gpconfigs/seg_hosts -e 'chown -R gpadmin:gpadmin /data'
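Equivalently, the directory creation can be collapsed into a single gpssh pass, since mkdir -p accepts several paths at once:
gpssh -f /home/gpadmin/gpconfigs/seg_hosts -e 'mkdir -p /data/p1 /data/p2 /data/m1 /data/m2'
gpssh -f /home/gpadmin/gpconfigs/seg_hosts -e 'chown -R gpadmin:gpadmin /data'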
Copy the configuration file template
su - gpadmin
cp $GPHOME/docs/cli_help/gpconfigs/gpinitsystem_config /home/gpadmin/gpconfigs/gpinitsystem_config
vi /home/gpadmin/gpconfigs/gpinitsystem_config
Note: when specifying PORT_BASE, review the port range set by the net.ipv4.ip_local_port_range parameter in /etc/sysctl.conf; segment ports should not fall inside that ephemeral range.
The main parameters to modify:
# primary data directories
declare -a DATA_DIRECTORY=(/data/p1 /data/p2)
# hostname of the master node
MASTER_HOSTNAME=gp-mdw
# master data directory
MASTER_DIRECTORY=/data/master
# uncomment the mirror port line (enables mirrors)
MIRROR_PORT_BASE=7000
# uncomment the mirror data directory line (enables mirrors)
declare -a MIRROR_DATA_DIRECTORY=(/data/m1 /data/m2)
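For reference, a consolidated gpinitsystem_config after these edits might look as follows; the parameter names come from the shipped template, and the values not discussed above are template defaults that should be reviewed for your environment:
ARRAY_NAME="Greenplum Data Platform"
SEG_PREFIX=gpseg
PORT_BASE=6000
declare -a DATA_DIRECTORY=(/data/p1 /data/p2)
MASTER_HOSTNAME=gp-mdw
MASTER_DIRECTORY=/data/master
MASTER_PORT=5432
TRUSTED_SHELL=ssh
ENCODING=UNICODE
MIRROR_PORT_BASE=7000
declare -a MIRROR_DATA_DIRECTORY=(/data/m1 /data/m2)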
Run the script:
gpinitsystem -c /home/gpadmin/gpconfigs/gpinitsystem_config --locale=C -h /home/gpadmin/gpconfigs/gpseginstall_hosts --mirror-mode=spread
Note: spread refers to the spread mirroring strategy, which is only allowed when the number of hosts is greater than the number of segment instances per host.
If --mirror-mode is not specified, the default group strategy is used. With spread, when there is more than one segment instance per host, a host failure does not land all of that host's mirrors on a single other machine, which would otherwise become a performance bottleneck.
Log output from a successful installation:
.
.
.
.
server shutting down
20201220:12:06:59:017597 gpstop:gp-mdw:gpadmin-[INFO]:-Attempting forceful termination of any leftover master process
20201220:12:06:59:017597 gpstop:gp-mdw:gpadmin-[INFO]:-Terminating processes for segment /data/master/gpseg-1
20201220:12:06:59:017597 gpstop:gp-mdw:gpadmin-[ERROR]:-Failed to kill processes for segment /data/master/gpseg-1: ([Errno 3] No such process)
20201220:12:07:00:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Starting gpstart with args: -a -l /home/gpadmin/gpAdminLogs -d /data/master/gpseg-1
20201220:12:07:00:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Gathering information and validating the environment...
20201220:12:07:00:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 6.1.0 build commit:6788ca8c13b2bd6e8976ccffea07313cbab30560'
20201220:12:07:00:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Greenplum Catalog Version: '301908232'
20201220:12:07:00:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Starting Master instance in admin mode
20201220:12:07:01:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20201220:12:07:01:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20201220:12:07:01:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Setting new master era
20201220:12:07:01:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Master Started...
20201220:12:07:01:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Shutting down master
20201220:12:07:02:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Commencing parallel segment instance startup, please wait...
..
20201220:12:07:04:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Process results...
20201220:12:07:04:017622 gpstart:gp-mdw:gpadmin-[INFO]:-----------------------------------------------------
20201220:12:07:04:017622 gpstart:gp-mdw:gpadmin-[INFO]:- Successful segment starts = 6
20201220:12:07:04:017622 gpstart:gp-mdw:gpadmin-[INFO]:- Failed segment starts = 0
20201220:12:07:04:017622 gpstart:gp-mdw:gpadmin-[INFO]:- Skipped segment starts (segments are marked down in configuration) = 0
20201220:12:07:04:017622 gpstart:gp-mdw:gpadmin-[INFO]:-----------------------------------------------------
20201220:12:07:04:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Successfully started 6 of 6 segment instances
20201220:12:07:04:017622 gpstart:gp-mdw:gpadmin-[INFO]:-----------------------------------------------------
20201220:12:07:04:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Starting Master instance gp-mdw directory /data/master/gpseg-1
20201220:12:07:05:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Command pg_ctl reports Master gp-mdw instance active
20201220:12:07:05:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Connecting to dbname='template1' connect_timeout=15
20201220:12:07:05:017622 gpstart:gp-mdw:gpadmin-[INFO]:-No standby master configured. skipping...
20201220:12:07:05:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Database successfully started
20201220:12:07:05:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Completed restart of Greenplum instance in production mode
20201220:12:07:06:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Commencing parallel build of mirror segment instances
20201220:12:07:06:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Spawning parallel processes batch [1], please wait...
......
20201220:12:07:07:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Waiting for parallel processes batch [1], please wait...
.................................................................
20201220:12:08:13:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:------------------------------------------------
20201220:12:08:13:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Parallel process exit status
20201220:12:08:13:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:------------------------------------------------
20201220:12:08:13:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Total processes marked as completed = 6
20201220:12:08:13:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Total processes marked as killed = 0
20201220:12:08:13:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Total processes marked as failed = 0
20201220:12:08:13:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:------------------------------------------------
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Scanning utility log file for any warning messages
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Log file scan check passed
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Greenplum Database instance successfully created
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-------------------------------------------------------
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-To complete the environment configuration, please
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-update gpadmin .bashrc file with the following
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-1. Ensure that the greenplum_path.sh file is sourced
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-2. Add "export MASTER_DATA_DIRECTORY=/data/master/gpseg-1"
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:- to access the Greenplum scripts for this instance:
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:- or, use -d /data/master/gpseg-1 option for the Greenplum scripts
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:- Example gpstate -d /data/master/gpseg-1
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Script log file = /home/gpadmin/gpAdminLogs/gpinitsystem_20201220.log
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-To remove instance, run gpdeletesystem utility
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-To initialize a Standby Master Segment for this Greenplum instance
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Review options for gpinitstandby
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-------------------------------------------------------
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-The Master /data/master/gpseg-1/pg_hba.conf post gpinitsystem
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-has been configured to allow all hosts within this new
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-array to intercommunicate. Any hosts external to this
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-new array must be explicitly added to this file
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Refer to the Greenplum Admin support guide which is
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-located in the /home/gpadmin/greenplum-db/docs directory
20201220:12:08:15:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-------------------------------------------------------
If the installation fails partway through, the log prompts you to run bash /home/gpadmin/gpAdminLogs/backout_gpinitsystem_gpadmin_* to back out the changes; just run that script. For example:
20191218:20:39:53:011405 gpinitsystem:mdw:gpadmin-[FATAL]:-Unknown host gpzq-sh-mb: ping: unknown host gpzq-sh-mb
unknown host Script Exiting!
20191218:20:39:53:011405 gpinitsystem:mdw:gpadmin-[WARN]:-Script has left Greenplum Database in an incomplete state
20191218:20:39:53:011405 gpinitsystem:mdw:gpadmin-[WARN]:-Run command bash
/home/gpadmin/gpAdminLogs/backout_gpinitsystem_gpadmin_20191218_203938 to remove these changes
20191218:20:39:53:011405 gpinitsystem:mdw:gpadmin-[INFO]:-Start Function BACKOUT_COMMAND
20191218:20:39:53:011405 gpinitsystem:mdw:gpadmin-[INFO]:-End Function BACKOUT_COMMAND
[gpadmin@mdw gpAdminLogs]$ ls
backout_gpinitsystem_gpadmin_20191218_203938 gpinitsystem_20191218.log
[gpadmin@mdw gpAdminLogs]$ bash backout_gpinitsystem_gpadmin_20191218_203938
Stopping Master instance
waiting for server to shut down.... done
server stopped
Removing Master log file
Removing Master lock files
Removing Master data directory files
If things are still not fully cleaned up after running the script, execute the following and then reinstall:
pg_ctl -D /data/master/gpseg-1 stop
rm -f /tmp/.s.PGSQL.5432 /tmp/.s.PGSQL.5432.lock
On the master node:
rm -rf /data/master/gpseg*
On all data nodes:
rm -rf /data/p1/gpseg*
rm -rf /data/p2/gpseg*
Edit the gpadmin user's environment variables and add (important):
export MASTER_DATA_DIRECTORY=/data/master/gpseg-1
In addition, the following are usually added (optional):
export PGPORT=5432 # adjust to your environment
export PGUSER=gpadmin # adjust to your environment
export PGDATABASE=gpdw # adjust to your environment
The line source /home/gpadmin/greenplum-db/greenplum_path.sh was already added earlier, so the remaining step is:
vi .bashrc
export MASTER_DATA_DIRECTORY=/data/master/gpseg-1
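Apply the change to the current shell and verify (standard bash):
[gpadmin@mdw ~]$ source ~/.bashrc
[gpadmin@mdw ~]$ echo $MASTER_DATA_DIRECTORY # should print /data/master/gpseg-1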
Log in to GP with psql and set a password (as the gpadmin user)
psql -h hostname -p port -d database -U user -W
-h is followed by the hostname of the master or a segment
-p is followed by the port of the master or a segment
-d is followed by the database name
-W forces a password prompt. These parameters can be configured in the user's environment variables; on Linux, the gpadmin user can log in without a password.
psql -h 127.0.0.1 -p 5432 -d database -U gpadmin
Example of logging in with psql and setting the gpadmin user's password:
psql -d postgres
alter user gpadmin encrypted password 'gpadmin';
su gpadmin
psql -p 5432
Change the database password:
alter role gpadmin with password '123456';
Quit:
\q
List the databases:
\l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+---------+----------+---------+-------+---------------------
postgres | gpadmin | UTF8 | C | C |
template0 | gpadmin | UTF8 | C | C | =c/gpadmin +
| | | | | gpadmin=CTc/gpadmin
template1 | gpadmin | UTF8 | C | C | =c/gpadmin +
| | | | | gpadmin=CTc/gpadmin
(3 rows)
A brief introduction
Client authentication is controlled by a configuration file, usually named pg_hba.conf, stored in the database cluster's data directory. HBA stands for "host-based authentication". When initdb initializes the data directory, it installs a default pg_hba.conf file, though the authentication configuration file can also be placed elsewhere.
Configure pg_hba.conf
vi /data/master/gpseg-1/pg_hba.conf
Add a new line: host all all 0.0.0.0/0 md5
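Changes to pg_hba.conf do not require a restart; reloading the configuration is enough. gpstop -u signals the running system to re-read pg_hba.conf (and runtime postgresql.conf parameters):
[gpadmin@gp-mdw ~]$ gpstop -u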
Initialize the standby master:
[gpadmin@gp-mdw ~]$ gpinitstandby -s gp-sdw3-smdw
20210311:19:25:38:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Validating environment and parameters for standby initialization...
20210311:19:25:38:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Checking for data directory /data/master/gpseg-1 on gp-sdw3-smdw
20210311:19:25:39:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:------------------------------------------------------
20210311:19:25:39:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Greenplum standby master initialization parameters
20210311:19:25:39:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:------------------------------------------------------
20210311:19:25:39:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Greenplum master hostname = gp-mdw
20210311:19:25:39:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Greenplum master data directory = /data/master/gpseg-1
20210311:19:25:39:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Greenplum master port = 5432
20210311:19:25:39:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Greenplum standby master hostname = gp-sdw3-smdw
20210311:19:25:39:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Greenplum standby master port = 5432
20210311:19:25:39:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Greenplum standby master data directory = /data/master/gpseg-1
20210311:19:25:39:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Greenplum update system catalog = On
Do you want to continue with standby master initialization? Yy|Nn (default=N):
> y
20210311:19:25:42:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Syncing Greenplum Database extensions to standby
20210311:19:25:43:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-The packages on gp-sdw3-smdw are consistent.
20210311:19:25:43:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Adding standby master to catalog...
20210311:19:25:43:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Database catalog updated successfully.
20210311:19:25:43:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Updating pg_hba.conf file...
20210311:19:25:45:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-pg_hba.conf files updated successfully.
20210311:19:25:49:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Starting standby master
20210311:19:25:49:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Checking if standby master is running on host: gp-sdw3-smdw in directory: /data/master/gpseg-1
20210311:19:25:53:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Cleaning up pg_hba.conf backup files...
20210311:19:25:54:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Backup files of pg_hba.conf cleaned up successfully.
20210311:19:25:54:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Successfully created standby master on gp-sdw3-smdw
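After initialization, the standby's state can be checked from the master at any time; gpstate -f shows the standby master details, including the WAL sender/receiver status:
[gpadmin@gp-mdw ~]$ gpstate -f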