redhat5.5_11gR2_RAC installation: this post records the steps of a RAC install. The most tedious part is the up-front configuration; once you reach the GUI stages (runInstaller, asmca, dbca) the rest is easy.
--Check the hostname--
[root@node1 Server]# hostname
node1
--Change the hostname; this walkthrough assumes two nodes, node1 and node2--
[root@node1 Server]#vi /etc/sysconfig/network
[root@node1 Server]#hostname xxx
--Edit the hosts file. Keep the hostnames and entries consistent, and do not put a hostname on the 127.0.0.1/::1 lines. Each machine needs two NICs, one bound to the public IP and one to the private IP; the virtual IPs and SCAN IP are brought up automatically once grid is installed--
[root@node1 Server]# cat /etc/hosts
# Do not remove the following line, or various programs
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
#public ip
192.168.100.101 node1
192.168.100.102 node2
#virtual ip
192.168.100.201 node1-vip
192.168.100.202 node2-vip
#private ip
10.10.10.1 node1-priv
10.10.10.2 node2-priv
#scan ip
192.168.100.250 racscan
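A quick sanity check that the public and private names resolve and respond (a minimal sketch; repeat it on node2):
[root@node1 Server]# for h in node1 node2 node1-priv node2-priv; do ping -c 1 $h; done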
--Create the grid user (owns Grid Infrastructure) and the oracle user (owns the database), plus the oinstall, dba, oper, asmadmin, asmoper and asmdba groups--
--A single oracle user also works in my testing, but it invites confusion, because the grid home and the database home contain commands with the same names--
[root@node1 Server]#groupadd -g 1000 oinstall
[root@node1 Server]#groupadd -g 1001 dba
[root@node1 Server]#groupadd -g 1002 oper
[root@node1 Server]#groupadd -g 1003 asmadmin
[root@node1 Server]#groupadd -g 1004 asmoper
[root@node1 Server]#groupadd -g 1005 asmdba
[root@node1 Server]#useradd -u 1000 -g oinstall -G dba,asmdba,oper oracle
[root@node1 Server]#passwd oracle
[root@node1 Server]#useradd -u 1001 -g oinstall -G dba,asmadmin,asmoper,asmdba,oper grid
[root@node1 Server]#passwd grid
[oracle@node1 ~]$ id
uid=1000(oracle) gid=1000(oinstall) groups=1000(oinstall),1001(dba),1002(oper),1005(asmdba)
[grid@node1 ~]$ id
uid=1001(grid) gid=1000(oinstall) groups=1000(oinstall),1001(dba),1002(oper),1003(asmadmin),1004(asmoper),1005(asmdba)
--Create the required directories--
[root@node1 Server]# mkdir -p /u01/ora11g/oracle
[root@node1 Server]# mkdir -p /u01/ora11g/grid
[root@node1 Server]# mkdir -p /opt/ora11g
[root@node1 Server]# chown -R oracle:oinstall /u01/ora11g
[root@node1 Server]# chown -R grid:oinstall /u01/ora11g/grid
[root@node1 Server]# chown -R grid:oinstall /opt/ora11g
[root@node1 Server]# chmod 777 /u01
--Stop the OS ntp service (time sync); 11gR2 ships its own time-sync service (CTSS). Also move the ntp config file aside, otherwise GRID will keep looking for ntp--
[root@node1 /]# service ntpd stop
[root@node1 /]# chkconfig ntpd off
[root@node1 /]# chkconfig sendmail off
[root@node1 /]# mv /etc/ntp.conf /etc/ntp.conf.bk
--Install the required packages; watch the 32-bit vs 64-bit split (these days it is almost always 64-bit)--
[root@node1 Server]# pwd
/media/RHEL_5.5 i386 DVD/Server
[root@node1 Server]# rpm -ivh compat-gcc-34-3.4.6-4.i386.rpm gcc-4.1.2-48.el5.i386.rpm gcc-c++-4.1.2-48.el5.i386.rpm glibc-devel-2.5-49.i386.rpm libstdc++-devel-4.1.2-48.el5.i386.rpm libXp-1.0.0-8.1.el5.i386.rpm openmotif22-2.2.3-18.i386.rpm elfutils-libelf-devel-0.137-3.el5.i386.rpm elfutils-libelf-devel-static-0.137-3.el5.i386.rpm sysstat-7.0.2-3.el5.i386.rpm libaio-devel-0.3.106-5.i386.rpm libgomp-4.4.0-6.el5.i386.rpm glibc-headers-2.5-49.i386.rpm kernel-headers-2.6.18-194.el5.i386.rpm
[root@localhost Server]# rpm -ivh compat-gcc-34-3.4.6-4.x86_64.rpm gcc-4.1.2-48.el5.x86_64.rpm gcc-c++-4.1.2-48.el5.x86_64.rpm glibc-devel-2.5-49.x86_64.rpm libstdc++-devel-4.1.2-48.el5.x86_64.rpm libXp-1.0.0-8.1.el5.x86_64.rpm openmotif22-2.2.3-18.x86_64.rpm elfutils-libelf-devel-0.137-3.el5.x86_64.rpm elfutils-libelf-devel-static-0.137-3.el5.x86_64.rpm sysstat-7.0.2-3.el5.x86_64.rpm libaio-devel-0.3.106-5.x86_64.rpm libgomp-4.4.0-6.el5.x86_64.rpm glibc-headers-2.5-49.x86_64.rpm kernel-headers-2.6.18-194.el5.x86_64.rpm
--Set kernel parameters, appended at the bottom; adjust them to your own system (a commonly quoted reference set follows the listing)--
[root@node1 Server]# vi /etc/sysctl.conf
kernel.shmall =
kernel.shmmax =
kernel.shmmni =
kernel.sem =
fs.file-max =
fs.aio-max-nr =
net.ipv4.ip_local_port_range =
net.core.rmem_default =
net.core.rmem_max =
net.core.wmem_default =
net.core.wmem_max =
[root@node1 Server]# sysctl -p
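For reference, the values commonly quoted for 11gR2 look like the sketch below (an illustrative baseline, not mandates; kernel.shmmax and kernel.shmall in particular must be sized to your RAM):
kernel.shmall = 2097152
kernel.shmmax = 4294967295
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576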
[root@node1 Server]# vi /etc/security/limits.conf
oracle soft nproc
oracle hard nproc
oracle soft nofile
oracle hard nofile
grid soft nproc
grid hard nproc
grid soft nofile
grid hard nofile
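Typical 11gR2 limits look like this (again only a baseline):
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536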
[root@node1 /]# vi /etc/pam.d/login
session required pam_limits.so
[root@node1 Server]# vi /etc/selinux/config
SELINUX=disabled
(takes effect after a reboot)
--Size tmpfs; /dev/shm caps how much memory (MEMORY_TARGET) the database can configure--
[root@node1 Server]# vi /etc/fstab
tmpfs /dev/shm tmpfs defaults,size=30G 0 0
[root@node1 Server]# mount -o remount /dev/shm
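You can confirm the remount took effect with df (the figures will differ per system):
[root@node1 Server]# df -h /dev/shm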
--Environment variables for the two users. For the grid user, do not make ORACLE_HOME a subdirectory of ORACLE_BASE, or the installer fails with INS-32026. For the oracle user add ORACLE_UNQNAME, which lets emctl start dbconsole bring up EM. Add NLS_LANG and NLS_DATE_FORMAT as needed, and remember to change the SIDs on each node--
export NLS_LANG="SIMPLIFIED CHINESE_CHINA.ZHS16GBK"    (or a UTF8 character set instead of ZHS16GBK, as needed)
export NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS'
[grid@node1 ~]$ vi .bash_profile
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=+ASM1
export ORACLE_BASE=/opt/ora11g
export ORACLE_HOME=/u01/ora11g/grid/product/11.2.0
export PATH=$ORACLE_HOME/bin:/usr/sbin:$PATH
if [ $USER = "oracle" ] || [ $USER = "grid" ];then
if [ $SHELL = "/bin/ksh" ];then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
umask 022
fi
[oracle@node1 ~]$ vi .bash_profile
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=racdb1
export ORACLE_TERM=xterm
export ORACLE_BASE=/u01/ora11g
export ORACLE_HOME=$ORACLE_BASE/oracle/product/11.2.0
export PATH=$ORACLE_HOME/bin:/usr/sbin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export ORACLE_UNQNAME=racdb
if [ $USER = "oracle" ] || [ $USER = "grid" ];then
if [ $SHELL = "/bin/ksh" ];then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
umask 022
fi
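On node2 the two profiles are identical apart from the SIDs, i.e.:
export ORACLE_SID=+ASM2    # grid user on node2
export ORACLE_SID=racdb2   # oracle user on node2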
--Inspect the shared disks; they are not partitioned yet, and both nodes must see the same devices. An 11g RAC normally uses three sets of shared disks: one for grid (OCR/voting), one for data, one for recovery--
[root@node1 ~]# fdisk -l
Disk /dev/sda: 21.4 GB
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *                                           Linux
/dev/sda2                                               Linux swap / Solaris
/dev/sda3                                               Linux
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdc: 1073 MB, 1073741824 bytes
Disk /dev/sdc doesn't contain a valid partition table
Disk /dev/sdd doesn't contain a valid partition table
Disk /dev/sde: 10.7 GB, 10737418240 bytes
Disk /dev/sde doesn't contain a valid partition table
Disk /dev/sdf: 10.7 GB
Disk /dev/sdf doesn't contain a valid partition table
----------------------------The following approach creates the ASM disks with ASMLib---------------------------------------------------------------------
--Install the ASMLib packages, choosing 32-bit or 64-bit to match the OS. You can also skip ASMLib and create the ASM disks with the udev that ships with the system (I could not get udev working on RedHat 5.5, but it worked on OEL 6.4)--
[root@node2 redhat]# uname -a
Linux node2 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:43 EDT 2010 i686 i686 i386 GNU/Linux
[root@node2 redhat]# ls
oracleasm-2.6.-.el5-2.0.-.el5.i686.rpm
oracleasmlib-2.0.-.el5.i386.rpm
oracleasm-support-2.1.-.el5.i386.rpm
[root@node1 x86_64]# ls
oracleasm-2.6.-.el5-2.0.-.el5.x86_64.rpm
oracleasmlib-2.0.-.el5.x86_64.rpm
oracleasm-support-2.1.-.el5.x86_64.rpm
[root@node2 redhat]# rpm -ivh *
--Partition sdb through sdf; for each disk the fdisk key sequence is: n, p, 1, Enter, Enter, w--
[root@node1 ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4):
First cylinder (1-130, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-130, default 130):
Using default value 130
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
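Since the identical keystrokes repeat for sdb through sdf, a non-interactive loop can drive fdisk instead (a sketch; verify the device letters on your own system before running it):
[root@node1 ~]# for d in b c d e f; do printf "n\np\n1\n\n\nw\n" | fdisk /dev/sd$d; done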
--Check the disks again; partitioning is complete and both nodes must match--
[root@node1 ~]# fdisk -l
Disk /dev/sda: 21.4 GB
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *                                           Linux
/dev/sda2                                               Linux swap / Solaris
/dev/sda3                                               Linux
Disk /dev/sdb
/dev/sdb1                                               Linux
Disk /dev/sdc
/dev/sdc1                                               Linux
Disk /dev/sdd
/dev/sdd1                                               Linux
Disk /dev/sde: 10.7 GB
/dev/sde1                                               Linux
Disk /dev/sdf: 10.7 GB
/dev/sdf1                                               Linux
--Check the device files on node1--
[root@node1 ~]# ll /dev/sd*
brw-r----- 1 root disk 8, 0 2013-03-28 /dev/sda
brw-r----- 1 root disk 8, 1 03-28 09:16 /dev/sda1
brw-r----- 1 root disk 8, 2 2013-03-28 /dev/sda2
brw-r----- 1 root disk 8, 3 03-28 09:16 /dev/sda3
brw-r----- 1 root disk 8, 16 03-28 09:24 /dev/sdb
brw-r----- 1 root disk 8, 17 03-28 09:24 /dev/sdb1
brw-r----- 1 root disk 8, 32 03-28 09:24 /dev/sdc
brw-r----- 1 root disk 8, 33 03-28 09:25 /dev/sdc1
brw-r----- 1 root disk 8, 48 03-28 09:26 /dev/sdd
brw-r----- 1 root disk 8, 49 03-28 09:26 /dev/sdd1
brw-r----- 1 root disk 8, 64 03-28 11:27 /dev/sde
brw-r----- 1 root disk 8, 65 03-28 11:27 /dev/sde1
brw-r----- 1 root disk 8, 80 03-28 11:29 /dev/sdf
brw-r----- 1 root disk 8, 81 03-28 11:29 /dev/sdf1
--node2 does not see all of the newly created partitions; reboot it (an alternative with partprobe is sketched after the second listing)--
[root@node2 ~]# ll /dev/sd*
brw-r----- 1 root disk 8, 0 2013-03-28 /dev/sda
brw-r----- 1 root disk 8, 1 03-28 09:16 /dev/sda1
brw-r----- 1 root disk 8, 2 2013-03-28 /dev/sda2
brw-r----- 1 root disk 8, 3 03-28 09:16 /dev/sda3
brw-r----- 1 root disk 8, 16 2013-03-28 /dev/sdb
brw-r----- 1 root disk 8, 32 2013-03-28 /dev/sdc
brw-r----- 1 root disk 8, 48 2013-03-28 /dev/sdd
brw-r----- 1 root disk 8, 64 2013-03-28 /dev/sde
brw-r----- 1 root disk 8, 80 2013-03-28 /dev/sdf
[root@node2 ~]# init 6
[root@node2 ~]# ll /dev/sd*
brw-r----- 1 root disk 8, 0 2013-03-28 /dev/sda
brw-r----- 1 root disk 8, 1 03-28 11:40 /dev/sda1
brw-r----- 1 root disk 8, 2 2013-03-28 /dev/sda2
brw-r----- 1 root disk 8, 3 03-28 11:40 /dev/sda3
brw-r----- 1 root disk 8, 16 2013-03-28 /dev/sdb
brw-r----- 1 root disk 8, 17 2013-03-28 /dev/sdb1
brw-r----- 1 root disk 8, 32 2013-03-28 /dev/sdc
brw-r----- 1 root disk 8, 33 2013-03-28 /dev/sdc1
brw-r----- 1 root disk 8, 48 2013-03-28 /dev/sdd
brw-r----- 1 root disk 8, 49 2013-03-28 /dev/sdd1
brw-r----- 1 root disk 8, 64 2013-03-28 /dev/sde
brw-r----- 1 root disk 8, 65 2013-03-28 /dev/sde1
brw-r----- 1 root disk 8, 80 2013-03-28 /dev/sdf
brw-r----- 1 root disk 8, 81 2013-03-28 /dev/sdf1
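Instead of a reboot, partprobe can usually re-read the partition tables in place (a sketch; run it against each shared disk):
[root@node2 ~]# partprobe /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf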
--On node1, configure oracleasm and create the ASM disks--
[root@node1 ~]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: oinstall
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@node1 ~]# /etc/init.d/oracleasm createdisk CRS1 /dev/sdb1
Marking disk "CRS1" as an ASM disk: [ OK ]
[root@node1 ~]# /etc/init.d/oracleasm createdisk CRS2 /dev/sdc1
Marking disk "CRS2" as an ASM disk: [ OK ]
[root@node1 ~]# /etc/init.d/oracleasm createdisk CRS3 /dev/sdd1
Marking disk "CRS3" as an ASM disk: [ OK ]
[root@node1 ~]# /etc/init.d/oracleasm createdisk DATA1 /dev/sde1
Marking disk "DATA1" as an ASM disk: [ OK ]
[root@node1 ~]# /etc/init.d/oracleasm createdisk DATA2 /dev/sdf1
Marking disk "DATA2" as an ASM disk: [ OK ]
[root@node1 ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@node1 ~]# /etc/init.d/oracleasm listdisks
CRS1
CRS2
CRS3
DATA1
DATA2
--On node2, configure oracleasm and scan for the ASM disks--
[root@node2 ~]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: oinstall
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@node2 ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@node2 ~]# /etc/init.d/oracleasm listdisks
CRS1
CRS2
CRS3
DATA1
DATA2
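To double-check a label, the same init script can query it (sample; the exact wording varies by oracleasm-support version):
[root@node2 ~]# /etc/init.d/oracleasm querydisk CRS1
Disk "CRS1" is a valid ASM disk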
--------------The following approach creates the ASM disks with udev; I did not find a working udev setup on RedHat 5.5, so OEL 6.4 is used for the example------------------
First confirm that the udev package is installed:
[root@node6 ~]# rpm -qa|grep udev
udev-095-14.21.el5
Obtain each block device's unique identifier with scsi_id (verified on OEL 6.4):
scsi_id -g -u /dev/sd*
On Oracle Linux 5 the older syntax applies, e.g. scsi_id -g -u -s /block/sdb (this did not work for me on RedHat 5); it returns an identifier such as:
SATA_VBOX_HARDDISK_VBd306dbe0-df3367e3_
Create and configure the udev rules file:
[root@rac1 rules.d]# touch /etc/udev/rules.d/99-oracle-asmdevices.rules
Add content like the following:
KERNEL=="sd?1",BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u /dev/$name",RESULT=="1ATA_VBOX_HARDDISK_VB83552343-28d5a489",NAME="asm-disk1", OWNER="oracle", GROUP="dba",MODE="0660"
KERNEL=="sd?1",BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u /dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VBb96d5ecb-4eae1d96",NAME="asm-disk2", OWNER="oracle", GROUP="dba",MODE="0660"
KERNEL=="sd?1",BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u /dev/$name",RESULT=="1ATA_VBOX_HARDDISK_VBfd7bba6c-b91fba70",NAME="asm-disk3", OWNER="oracle", GROUP="dba",MODE="0660"
KERNEL=="sd?1",BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u /dev/$name",RESULT=="1ATA_VBOX_HARDDISK_VB3239ed0d-db15bbec",NAME="asm-disk4", OWNER="oracle", GROUP="dba",MODE="0660"
--Set up passwordless SSH between the nodes for both the oracle and grid users, and run ssh node1/node2 date in every direction so first-connection prompts do not trip up runcluvfy--
--In 10g RAC this had to be configured manually; in 11g RAC it is optional, because the grid and db GUI installers can configure it automatically. It is still worth doing by hand so the pre-check script can run--
[oracle@node1 ~]$ mkdir .ssh
[oracle@node1 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
e3::9f:fa:da::a9:cb:c1:::a2:6d:9d:9d:5f oracle@node1
[oracle@node1 ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
0b:bf:0e:7f:1c:c7:e6::9c::d4::ec:f7:3f:b0 oracle@node1
[oracle@node1 ~]$ cd .ssh
[oracle@node1 .ssh]$ ls
id_dsa id_dsa.pub id_rsa id_rsa.pub
[oracle@node2 ~]$ mkdir .ssh
[oracle@node2 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
:6c:3f:6f::ec:6f:e8:cc:1a:::0b:cf:f5: oracle@node2
[oracle@node2 ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
8f:e9::9e:d3::4f::a4:c6:5d:ff:ce:::4f oracle@node2
[oracle@node1 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[oracle@node1 ~]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[oracle@node1 ~]$ ssh node2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[oracle@node1 ~]$ ssh node2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[oracle@node1 ~]$ scp ~/.ssh/authorized_keys node2:~/.ssh/authorized_keys
[oracle@node1 ~]$ ssh node2 date
[oracle@node1 ~]$ ssh node1 date
[oracle@node2 ~]$ ssh node1 date
[oracle@node2 ~]$ ssh node2 date
[grid@node1 ~]$ mkdir .ssh
[grid@node1 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/grid/.ssh/id_rsa.
Your public key has been saved in /home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
ce:e0::b0:4c:f2::7d:b9:e5:::::: grid@node1
[grid@node1 ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/grid/.ssh/id_dsa.
Your public key has been saved in /home/grid/.ssh/id_dsa.pub.
The key fingerprint is:
:::1a::ff:c3:9f:a1:b1:d2:d9:d9::: grid@node1
[grid@node2 ~]$ mkdir .ssh
[grid@node2 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/grid/.ssh/id_rsa.
Your public key has been saved in /home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
:d7:d5:e8::5a:c6::c4::::9b:5f:f9: grid@node2
[grid@node2 ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/grid/.ssh/id_dsa.
Your public key has been saved in /home/grid/.ssh/id_dsa.pub.
The key fingerprint is:
9f:d7:3e:cf:e4:ff::df:c5:8c:::6b:e2::1b grid@node2
[grid@node1 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[grid@node1 ~]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[grid@node1 ~]$ ssh node2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[grid@node1 ~]$ ssh node2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[grid@node1 ~]$ scp ~/.ssh/authorized_keys node2:~/.ssh/authorized_keys
[grid@node1 ~]$ ssh node2 date
[grid@node1 ~]$ ssh node1 date
[grid@node2 ~]$ ssh node1 date
[grid@node2 ~]$ ssh node2 date
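The same date test can be looped over every name a user will connect through, which also primes known_hosts in one pass (a sketch; run as both oracle and grid on each node):
[grid@node1 ~]$ for h in node1 node2 node1-priv node2-priv; do ssh $h date; done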
--Install and verify the cvuqdisk package on both nodes--
Install the cvuqdisk OS package on both Oracle RAC nodes.
Without cvuqdisk, the Cluster Verification Utility cannot discover shared disks, and whenever it runs (manually, or automatically at the end of the Oracle Grid Infrastructure installation) you get the error "Package cvuqdisk not installed". Use the cvuqdisk RPM that matches your hardware architecture (e.g. x86_64 or i386).
The cvuqdisk RPM ships in the rpm directory of the Oracle Grid Infrastructure installation media.
Set the CVUQDISK_GRP environment variable to the group that is to own cvuqdisk (oinstall here):
[root@node1 rpm]# pwd
/oracle-tools/oracle11gR2_linux/grid/rpm
[root@node1 rpm]# ll
total 12
-rwxr-xr-x 1 root root 8233 2011-09-22 cvuqdisk-1.0.9-1.rpm
[root@node1 rpm]# rpm -ivh *
Preparing...                ########################################### [100%]
Using default group oinstall to install package
   1:cvuqdisk               ########################################### [100%]
[grid@node1 ~]$ export CVUQDISK_GRP=oinstall
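Verify the package is in place on both nodes:
[grid@node1 ~]$ rpm -q cvuqdisk
cvuqdisk-1.0.9-1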
--Run the pre-install check; it ends with a DNS-related failure that can be ignored. Then comes the GUI portion: install grid as the grid user, install the database software as the oracle user, create the ASM disk groups as the grid user, and finally create the database with dbca as the oracle user--
[grid@node1 grid]$ ./runcluvfy.sh stage -pre crsinst -n node1,node2 -fixup -verbose >> /opt/ora11g/check
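With the output redirected to a file, the failures are easy to pull out afterwards (a quick filter; adjust the patterns to taste):
[grid@node1 grid]$ grep -iE "failed|error" /opt/ora11g/check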