
Test Environment

Test environment: VMware® Workstation 15 Pro virtual machines
Operating system: CentOS Linux release 7.6.1810 (Core)
Kernel: 3.10.0-957.el7.x86_64
Architecture: x86_64
Software version: Oracle 11.2.0.4.0

Node name   CPU        Memory  Disk   Public IP        Private IP
orclrac1    2C 2-core  3GB     30GB   192.168.32.139   192.168.49.192
orclrac2    2C 2-core  3GB     30GB   192.168.32.140   192.168.49.193

Deployment Planning

Software Installation Planning

Database configuration parameter   Value
ORACLE_BASE                        /u01/app/oracle
ORACLE_HOME                        /u01/app/oracle/product/11.2.0/db_1
DB_NAME                            ORCLRAC
ORACLE_SID                         orclrac1, orclrac2
TNS_ADMIN                          $ORACLE_HOME/network/admin
Oracle admin account passwords     sys/oracle, system/oracle
Database storage                   ASM
Archive log mode                   Enabled

System Users and Groups Planning

Group name  GID  Description                           Nodes
oinstall    501  Oracle inventory and software owner   rac1, rac2
asmadmin    504  Oracle ASM administration group       rac1, rac2
asmdba      506  ASM database administrator group      rac1, rac2
dba         502  Database administrator group          rac1, rac2

Disk Storage Planning

Storage component        Size   Disk group  Count
Data files               10GB   DATA        3 disks
Cluster registry (OCR)   2GB    OCR         3 disks
Archived data files      20GB   FRA         1 disk
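As a quick sanity check, the plan above implies the following total shared-storage footprint across seven LUNs (simple arithmetic; the same seven disks appear later in the fdisk and LUN listings):

```shell
# Total shared storage implied by the plan:
# 3 x 10GB (DATA) + 3 x 2GB (OCR) + 1 x 20GB (FRA)
total=$((3 * 10 + 3 * 2 + 1 * 20))
echo "${total}GB total shared storage across 7 disks"
```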


Pre-deployment Preparation

Disable the firewall and SELinux, and turn off NTP time synchronization (both RAC nodes)

[root@orclrac1 ~]# systemctl status firewalld
firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)

[root@orclrac1 ~]# getenforce
Disabled

[root@orclrac1 soft]# systemctl status ntpd
[root@orclrac1 soft]# mv /etc/ntp.conf /etc/ntp.conf.bak

Package Preparation (both RAC nodes)

[root@orclrac1 ~]# rpm -q binutils compat-libcap1 compat-libstdc++-33 gcc gcc-c++ glibc glibc-devel libgcc libstdc++ libstdc++-devel libaio libaio-devel make sysstat xorg-x11-apps elfutils-libelf-devel
[root@orclrac1 ~]# yum -y install compat-libcap1 compat-libstdc++-33 gcc-c++ libstdc++-devel libaio-devel xorg-x11-apps elfutils-libelf-devel
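If you prefer to script the check, here is a minimal sketch of the compare-and-report logic. The installed list below is a made-up sample for illustration; on a real node populate it from `rpm -qa --qf '%{NAME} '` instead:

```shell
# Report which required RPMs are missing, given a space-separated list of
# installed package names (simulated here, not a real rpm query).
required="binutils compat-libcap1 gcc glibc libaio make sysstat"
installed="binutils gcc glibc make"   # sample only; query rpm on a real node
missing=""
for pkg in $required; do
  case " $installed " in
    *" $pkg "*) ;;                    # already installed
    *) missing="$missing $pkg" ;;     # hand this list to: yum -y install
  esac
done
echo "missing:$missing"
```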

Upload the Software Packages

[root@orclrac1 ~]# mkdir /soft
[root@orclrac1 ~]# cd /soft/
# Upload the packages to the server with rz or any other method
[root@orclrac1 soft]# ll
p13390677_112040_Linux-x86-64_1of7.zip
p13390677_112040_Linux-x86-64_2of7.zip
p13390677_112040_Linux-x86-64_3of7.zip
pdksh-5.2.14-37.el5_8.1.x86_64.rpm
[root@orclrac1 soft]# rpm -ivh pdksh-5.2.14-37.el5_8.1.x86_64.rpm

Install the pdksh package on node orclrac2 as well; note the hostname in each prompt.

[root@orclrac2 ~]# mkdir /soft
[root@orclrac2 ~]# cd /soft/
[root@orclrac1 soft]# scp pdksh-5.2.14-37.el5_8.1.x86_64.rpm 192.168.32.140:/soft/
[root@orclrac2 soft]# rpm -ivh pdksh-5.2.14-37.el5_8.1.x86_64.rpm

Create System Users and Groups (both RAC nodes)

[root@orclrac1 soft]# /usr/sbin/groupadd -g 501 oinstall
[root@orclrac1 soft]# /usr/sbin/groupadd -g 502 dba
[root@orclrac1 soft]# /usr/sbin/groupadd -g 504 asmadmin
[root@orclrac1 soft]# /usr/sbin/groupadd -g 506 asmdba
[root@orclrac1 soft]# /usr/sbin/groupadd -g 507 asmoper
# Create the accounts and assign them to the corresponding groups
[root@orclrac1 soft]# /usr/sbin/useradd -u 501 -g oinstall -G dba,asmadmin,asmdba,asmoper grid
[root@orclrac1 soft]# /usr/sbin/useradd -u 502 -g oinstall -G dba,asmdba oracle
[root@orclrac1 soft]# passwd grid
[root@orclrac1 soft]# passwd oracle
[root@orclrac1 soft]# id grid
uid=501(grid) gid=501(oinstall) groups=501(oinstall),502(dba),504(asmadmin),506(asmdba),507(asmoper)
[root@orclrac1 soft]# id oracle
uid=502(oracle) gid=501(oinstall) groups=501(oinstall),502(dba),506(asmdba)

Configure User Environment Variables (both RAC nodes)

grid

# Note: ORACLE_SID differs between the two nodes: +ASM1 and +ASM2
[root@orclrac1 soft]# su - grid
[grid@orclrac1 ~]$ vim .bash_profile
export LANG=en_US.utf8
export LANGUAGE=en_US.utf8
export ORACLE_SID=+ASM1
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid_home
export ORACLE_UNQNAME=+ASM

PATH=$PATH:$HOME/.local/bin:$HOME/bin:$ORACLE_HOME/bin
export PATH
[grid@orclrac1 ~]$ source .bash_profile

oracle

# Note: ORACLE_SID differs between the two nodes: orclrac1 and orclrac2
[root@orclrac1 soft]# su - oracle
[oracle@orclrac1 ~]$ vim .bash_profile
umask 022
export LANG=en_US.utf8
export LANGUAGE=en_US.utf8
export ORACLE_SID=orclrac1
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
export ORACLE_UNQNAME=orcl
export NLS_DATE_FORMAT='yyyy-mm-dd hh24:mi:ss'
alias sqlplus='rlwrap sqlplus'
alias rman='rlwrap rman'

PATH=$PATH:$HOME/.local/bin:$HOME/bin:$ORACLE_HOME/bin

export PATH
[oracle@orclrac1 ~]$ source .bash_profile
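Since only the SIDs differ between the two nodes, one way to keep a single profile for both is to derive them from the hostname. This is a sketch under the assumption of the hostnames used in this lab (orclrac1/orclrac2); the hostname is hard-coded here for demonstration:

```shell
# Derive per-node SIDs from the short hostname so the same .bash_profile
# could be copied to both nodes. Hard-coded for the demo; on a real node
# use: host=$(hostname -s)
host=orclrac2
case "$host" in
  orclrac1) ORACLE_SID=orclrac1; GRID_SID=+ASM1 ;;
  orclrac2) ORACLE_SID=orclrac2; GRID_SID=+ASM2 ;;
  *) echo "unexpected host: $host" >&2 ;;
esac
echo "ORACLE_SID=$ORACLE_SID GRID_SID=$GRID_SID"
```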

Create Software Directories (both RAC nodes)

[root@orclrac1 soft]# mkdir -p /u01/app/oraInventory
[root@orclrac1 soft]# chown -R grid:oinstall /u01/app/oraInventory
[root@orclrac1 soft]# chmod -R 775 /u01/app/oraInventory
[root@orclrac1 soft]# mkdir -p /u01/app/11.2.0/grid_home
[root@orclrac1 soft]# mkdir -p /u01/app/grid/
[root@orclrac1 soft]# chown -R grid:oinstall /u01/app/11.2.0/grid_home
[root@orclrac1 soft]# chown -R grid:oinstall /u01/app/grid/
[root@orclrac1 soft]# chmod -R 775 /u01/app/11.2.0/grid_home
[root@orclrac1 soft]# chmod -R 775 /u01/app/grid/
[root@orclrac1 soft]# mkdir -p /u01/app/oracle
[root@orclrac1 soft]# chown -R oracle:oinstall /u01/app/oracle
[root@orclrac1 soft]# chmod -R 775 /u01/app/oracle
[root@orclrac1 soft]# mkdir -p /u01/app/oracle/product/11.2.0/db_1
[root@orclrac1 soft]# chown -R oracle:oinstall /u01/app/oracle/product/11.2.0/db_1
[root@orclrac1 soft]# chmod -R 775 /u01/app/oracle/product/11.2.0/db_1
# Switch to each user and verify cd $ORACLE_BASE and cd $ORACLE_HOME
[root@orclrac1 soft]# su - grid
Last login: Mon Jul 19 15:41:07 CST 2021 on pts/1
[grid@orclrac1 ~]$ cd $ORACLE_BASE
[grid@orclrac1 grid]$ pwd
/u01/app/grid
[grid@orclrac1 grid]$ cd $ORACLE_HOME
[grid@orclrac1 grid_home]$ pwd
/u01/app/11.2.0/grid_home
[grid@orclrac1 grid_home]$ exit
logout
[root@orclrac1 soft]# su - oracle
Last login: Mon Jul 19 15:45:06 CST 2021 on pts/1
[oracle@orclrac1 ~]$ cd $ORACLE_BASE
[oracle@orclrac1 oracle]$ pwd
/u01/app/oracle
[oracle@orclrac1 oracle]$ cd $ORACLE_HOME
[oracle@orclrac1 db_1]$ pwd
/u01/app/oracle/product/11.2.0/db_1
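The mkdir/chown/chmod sequence above can also be expressed as a small loop over path/owner pairs. This sketch only echoes the commands (a dry run); remove the echo wrapper to actually apply it as root:

```shell
# Dry run: print the preparation command for each path/owner pair
# (same directories and permissions as the transcript above).
out=$(while read -r dir owner; do
  echo "mkdir -p $dir && chown -R $owner:oinstall $dir && chmod -R 775 $dir"
done <<'EOF'
/u01/app/oraInventory grid
/u01/app/11.2.0/grid_home grid
/u01/app/grid grid
/u01/app/oracle oracle
/u01/app/oracle/product/11.2.0/db_1 oracle
EOF
)
printf '%s\n' "$out"
```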

Adjust System Parameters (both RAC nodes)

[root@orclrac1 soft]# vim /etc/sysctl.conf
kernel.shmall = 2097152
kernel.shmmax = 1934714880
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
kernel.panic_on_oops = 1
[root@orclrac1 soft]# /sbin/sysctl -p
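For reference, kernel.shmmax is commonly sized relative to physical RAM. The half-of-RAM rule sketched below is a general convention, not taken from this document (the 1934714880 above was chosen for this particular VM), and shmall is shmmax expressed in pages:

```shell
# Sketch of a common sizing rule: shmmax ~ half of RAM, shmall = shmmax
# in 4 KB pages. The 3 GB figure matches this lab's VMs.
mem_bytes=$((3 * 1024 * 1024 * 1024))
page_size=4096
shmmax=$((mem_bytes / 2))
shmall=$((shmmax / page_size))
echo "kernel.shmmax = $shmmax"
echo "kernel.shmall = $shmall"
```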

[root@orclrac1 soft]# vim /etc/security/limits.conf
# Append the following
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536

[root@orclrac1 soft]# vim /etc/pam.d/login
# Append the following
session required pam_limits.so

Edit the /etc/hosts File (both RAC nodes)

[root@orclrac1 soft]# vim /etc/hosts
# rac-public-ip
192.168.32.139 orclrac1
192.168.32.140 orclrac2

# rac-vip
192.168.32.23 orclrac1-vip
192.168.32.13 orclrac2-vip

# rac-private-ip
192.168.49.192 orclrac1-priv
192.168.49.193 orclrac2-priv

# rac-scan
192.168.32.14 orclrac-scan
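To avoid hand-editing the file identically on both nodes, the block above can be generated from the address plan. This generator is only an illustration using the same values as above; on a real node its output would be appended to /etc/hosts:

```shell
# Emit the RAC /etc/hosts entries from the planned addresses.
hosts=$(
  for n in 1 2; do echo "192.168.32.$((138 + n)) orclrac$n"; done
  echo "192.168.32.23 orclrac1-vip"
  echo "192.168.32.13 orclrac2-vip"
  for n in 1 2; do echo "192.168.49.$((191 + n)) orclrac$n-priv"; done
  echo "192.168.32.14 orclrac-scan"
)
printf '%s\n' "$hosts"
```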

Configure User Equivalence (Passwordless SSH, both RAC nodes)

[root@orclrac1 soft]# su - grid
[grid@orclrac1 ~]$ ssh-keygen -t rsa
[grid@orclrac1 ~]$ ssh-keygen -t dsa
[root@orclrac1 ~]# su - oracle
[oracle@orclrac1 ~]$ ssh-keygen -t rsa
[oracle@orclrac1 ~]$ ssh-keygen -t dsa

Run on node 1

[grid@orclrac1 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[grid@orclrac1 ~]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[grid@orclrac1 ~]$ ssh orclrac2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[grid@orclrac1 ~]$ ssh orclrac2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[grid@orclrac1 ~]$ scp /home/grid/.ssh/authorized_keys orclrac2:~/.ssh/authorized_keys

[oracle@orclrac1 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[oracle@orclrac1 ~]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[oracle@orclrac1 ~]$ ssh orclrac2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[oracle@orclrac1 ~]$ ssh orclrac2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[oracle@orclrac1 ~]$ scp /home/oracle/.ssh/authorized_keys orclrac2:~/.ssh/authorized_keys
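The concatenation steps above append blindly, so running them twice duplicates keys. A sketch of a duplicate-safe merge follows, simulated with made-up key files in a temp directory so it is self-contained; on real nodes the second file would arrive over ssh as shown above:

```shell
# Merge public keys into authorized_keys without duplicates via sort -u.
d=$(mktemp -d)
printf 'ssh-rsa AAAA1 grid@orclrac1\n' > "$d/id_rsa.pub"      # fake local key
printf 'ssh-rsa AAAA2 grid@orclrac2\n' > "$d/remote_rsa.pub"  # fake remote key
# The local key is fed twice on purpose: sort -u keeps a single copy.
cat "$d/id_rsa.pub" "$d/remote_rsa.pub" "$d/id_rsa.pub" \
  | sort -u > "$d/authorized_keys"
n=$(wc -l < "$d/authorized_keys")
echo "$n unique keys"
rm -rf "$d"
```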

Verify user equivalence

[grid@orclrac1 ~]$ ssh orclrac1 date
Mon Jul 19 16:48:32 CST 2021
[grid@orclrac1 ~]$ ssh orclrac2 date
Mon Jul 19 16:48:35 CST 2021
[oracle@orclrac1 ~]$ ssh orclrac1 date
Mon Jul 19 16:48:58 CST 2021
[oracle@orclrac1 ~]$ ssh orclrac2 date
Mon Jul 19 16:49:00 CST 2021

[grid@orclrac2 ~]$ ssh orclrac1 date
Mon Jul 19 16:49:21 CST 2021
[grid@orclrac2 ~]$ ssh orclrac2 date
Mon Jul 19 16:49:24 CST 2021
[oracle@orclrac2 ~]$ ssh orclrac1 date
Mon Jul 19 16:49:49 CST 2021
[oracle@orclrac2 ~]$ ssh orclrac2 date
Mon Jul 19 16:49:50 CST 2021

Build the iSCSI Shared Storage

The shared-storage server is set up on node 1 (orclrac1).

Install the management tools

[root@orclrac1 soft]# yum install -y targetcli

Start the iSCSI target service

[root@orclrac1 soft]# systemctl start target.service
[root@orclrac1 soft]# systemctl enable target.service

Check the attached disks

[root@orclrac1 soft]# fdisk -l

Disk /dev/sda: 32.2 GB, 32212254720 bytes, 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000af22d

Device Boot Start End Blocks Id System
/dev/sda1 * 2048 616447 307200 83 Linux
/dev/sda2 616448 6907903 3145728 82 Linux swap / Solaris
/dev/sda3 6907904 62914559 28003328 83 Linux

Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdc: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdd: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sde: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdf: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdg: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdh: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Configure the server to share the disks with orclrac1 and orclrac2

[root@orclrac1 soft]# rpm -q epel-release
package epel-release is not installed
[root@orclrac1 soft]# yum -y install epel-release
[root@orclrac1 soft]# yum --enablerepo=epel -y install scsi-target-utils libxslt

Edit targets.conf and append the following at the end of the file

[root@orclrac1 soft]# vim /etc/tgt/targets.conf
<target iqn.2021-07.com.oracle:rac>

  backing-store /dev/sdb
  backing-store /dev/sdc
  backing-store /dev/sdd
  backing-store /dev/sde
  backing-store /dev/sdf
  backing-store /dev/sdg
  backing-store /dev/sdh
  initiator-address 192.168.49.0/24
  write-cache off

</target>
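The seven backing-store lines follow a simple pattern, so they can be generated rather than typed. The output matches the stanza above and would be pasted inside the <target> block:

```shell
# Generate the backing-store lines for /dev/sdb .. /dev/sdh.
stanza=$(for d in b c d e f g h; do echo "  backing-store /dev/sd$d"; done)
printf '%s\n' "$stanza"
```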

Start tgtd

[root@orclrac1 soft]# /bin/systemctl restart tgtd.service
[root@orclrac1 soft]# systemctl restart target.service
[root@orclrac1 soft]# systemctl enable tgtd
[root@orclrac1 soft]# tgtadm --lld iscsi --mode target --op show
Target 1: iqn.2021-07.com.oracle:rac

System information:
    Driver: iscsi
    State: ready
I_T nexus information:
    I_T nexus: 5
        Initiator: iqn.1994-05.com.redhat:e49b77a92f94 alias: rac1
        Connection: 0
            IP Address: 192.168.49.192
    I_T nexus: 8
        Initiator: iqn.1994-05.com.redhat:e49b77a92f94 alias: rac2
        Connection: 0
            IP Address: 192.168.49.193
LUN information:
    LUN: 0
        Type: controller
        SCSI ID: IET     00010000
        SCSI SN: beaf10
        Size: 0 MB, Block size: 1
        Online: Yes
        Removable media: No
        Prevent removal: No
        Readonly: No
        SWP: No
        Thin-provisioning: No
        Backing store type: null
        Backing store path: None
        Backing store flags: 
    LUN: 1
        Type: disk
        SCSI ID: IET     00010001
        SCSI SN: beaf11
        Size: 10737 MB, Block size: 512
        Online: Yes
        Removable media: No
        Prevent removal: No
        Readonly: No
        SWP: No
        Thin-provisioning: No
        Backing store type: rdwr
        Backing store path: /dev/sdb
        Backing store flags: 
    LUN: 2
        Type: disk
        SCSI ID: IET     00010002
        SCSI SN: beaf12
        Size: 10737 MB, Block size: 512
        Online: Yes
        Removable media: No
        Prevent removal: No
        Readonly: No
        SWP: No
        Thin-provisioning: No
        Backing store type: rdwr
        Backing store path: /dev/sdc
        Backing store flags: 
    LUN: 3
        Type: disk
        SCSI ID: IET     00010003
        SCSI SN: beaf13
        Size: 10737 MB, Block size: 512
        Online: Yes
        Removable media: No
        Prevent removal: No
        Readonly: No
        SWP: No
        Thin-provisioning: No
        Backing store type: rdwr
        Backing store path: /dev/sdd
        Backing store flags: 
    LUN: 4
        Type: disk
        SCSI ID: IET     00010004
        SCSI SN: beaf14
        Size: 2147 MB, Block size: 512
        Online: Yes
        Removable media: No
        Prevent removal: No
        Readonly: No
        SWP: No
        Thin-provisioning: No
        Backing store type: rdwr
        Backing store path: /dev/sde
        Backing store flags: 
    LUN: 5
        Type: disk
        SCSI ID: IET     00010005
        SCSI SN: beaf15
        Size: 2147 MB, Block size: 512
        Online: Yes
        Removable media: No
        Prevent removal: No
        Readonly: No
        SWP: No
        Thin-provisioning: No
        Backing store type: rdwr
        Backing store path: /dev/sdf
        Backing store flags: 
    LUN: 6
        Type: disk
        SCSI ID: IET     00010006
        SCSI SN: beaf16
        Size: 2147 MB, Block size: 512
        Online: Yes
        Removable media: No
        Prevent removal: No
        Readonly: No
        SWP: No
        Thin-provisioning: No
        Backing store type: rdwr
        Backing store path: /dev/sdg
        Backing store flags: 
    LUN: 7
        Type: disk
        SCSI ID: IET     00010007
        SCSI SN: beaf17
        Size: 21475 MB, Block size: 512
        Online: Yes
        Removable media: No
        Prevent removal: No
        Readonly: No
        SWP: No
        Thin-provisioning: No
        Backing store type: rdwr
        Backing store path: /dev/sdh
        Backing store flags: 
Account information:
ACL information:
    192.168.49.0/24

Configure the Client (both RAC nodes)

Install the iSCSI client software, iscsi-initiator-utils

[root@orclrac1 soft]# yum -y install iscsi-initiator-utils
[root@orclrac1 soft]# rpm -qa | grep iscsi
iscsi-initiator-utils-iscsiuio-6.2.0.874-20.el7_9.x86_64
libiscsi-1.9.0-7.el7.x86_64
libvirt-daemon-driver-storage-iscsi-4.5.0-10.el7.x86_64
iscsi-initiator-utils-6.2.0.874-20.el7_9.x86_64

Restart the client

[root@orclrac1 soft]# systemctl restart iscsid.service

Configure initiatorname.iscsi

[root@orclrac1 soft]# vim /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2021-07.com.oracle:rac

Start iscsi

[root@orclrac1 soft]# systemctl restart iscsi
[root@orclrac1 soft]# systemctl enable iscsi.service

Register the shared storage

# Discover which targets are exported on port 3260
[root@orclrac1 soft]# iscsiadm -m discovery -t sendtargets -p 192.168.49.192:3260
192.168.49.192:3260,1 iqn.2021-07.com.oracle:rac
[root@orclrac1 soft]# iscsiadm -m node -T iqn.2021-07.com.oracle:rac -p 192.168.49.192:3260
# Log in to the shared storage
[root@orclrac1 soft]# iscsiadm -m node -T iqn.2021-07.com.oracle:rac -p 192.168.49.192:3260 -l
Logging in to [iface: default, target: iqn.2021-07.com.oracle:rac, portal: 192.168.49.192,3260] (multiple)
Login to [iface: default, target: iqn.2021-07.com.oracle:rac, portal: 192.168.49.192,3260] successful.
# Rescan the partition tables to detect the shared disks
[root@orclrac1 soft]# partprobe
[root@orclrac1 soft]# fdisk -l

Map the Disks with udev

Here we do not use raw devices; we use block devices instead, so no partitioning is required.

Obtain the WWIDs

# On the VM the WWIDs can only be obtained with the command below; the wwn-* entries are device identifiers, so take care to distinguish them. The mapped disks are sdi through sdo because orclrac1 also acts as the shared-storage server.
[root@orclrac1 ~]# ls -l /dev/disk/by-id
total 0
lrwxrwxrwx 1 root root 9 Jul 19 14:20 ata-VMware_Virtual_IDE_CDROM_Drive_10000000000000000001 -> ../../sr0
lrwxrwxrwx 1 root root 9 Jul 19 17:35 scsi-360000000000000000e00000000010001 -> ../../sdi
lrwxrwxrwx 1 root root 9 Jul 19 17:35 scsi-360000000000000000e00000000010002 -> ../../sdj
lrwxrwxrwx 1 root root 9 Jul 19 17:35 scsi-360000000000000000e00000000010003 -> ../../sdk
lrwxrwxrwx 1 root root 9 Jul 19 17:35 scsi-360000000000000000e00000000010004 -> ../../sdl
lrwxrwxrwx 1 root root 9 Jul 19 17:35 scsi-360000000000000000e00000000010005 -> ../../sdm
lrwxrwxrwx 1 root root 9 Jul 19 17:35 scsi-360000000000000000e00000000010006 -> ../../sdn
lrwxrwxrwx 1 root root 9 Jul 19 17:35 scsi-360000000000000000e00000000010007 -> ../../sdo
lrwxrwxrwx 1 root root 9 Jul 19 17:35 wwn-0x60000000000000000e00000000010001 -> ../../sdi
lrwxrwxrwx 1 root root 9 Jul 19 17:35 wwn-0x60000000000000000e00000000010002 -> ../../sdj
lrwxrwxrwx 1 root root 9 Jul 19 17:35 wwn-0x60000000000000000e00000000010003 -> ../../sdk
lrwxrwxrwx 1 root root 9 Jul 19 17:35 wwn-0x60000000000000000e00000000010004 -> ../../sdl
lrwxrwxrwx 1 root root 9 Jul 19 17:35 wwn-0x60000000000000000e00000000010005 -> ../../sdm
lrwxrwxrwx 1 root root 9 Jul 19 17:35 wwn-0x60000000000000000e00000000010006 -> ../../sdn
lrwxrwxrwx 1 root root 9 Jul 19 17:35 wwn-0x60000000000000000e00000000010007 -> ../../sdo

[root@orclrac1 ~]# cd /etc/udev/rules.d/
[root@orclrac1 rules.d]# vim 99-ASM.rules
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="360000000000000000e00000000010001", SYMLINK+="asm-diskb", OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="360000000000000000e00000000010002", SYMLINK+="asm-diskc", OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="360000000000000000e00000000010003", SYMLINK+="asm-diskd", OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="360000000000000000e00000000010004", SYMLINK+="asm-diske", OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="360000000000000000e00000000010005", SYMLINK+="asm-diskf", OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="360000000000000000e00000000010006", SYMLINK+="asm-diskg", OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="360000000000000000e00000000010007", SYMLINK+="asm-diskh", OWNER="grid", GROUP="asmadmin", MODE="0660"
# Note: use $name for unpartitioned disks, $parent for partitions
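Because the seven rules differ only in the final WWID digit and the symlink letter, they can be generated instead of hand-edited. This sketch reproduces the RESULT/SYMLINK pairs above; verify the generated WWIDs against /dev/disk/by-id on your own system before using it:

```shell
# Generate the 99-ASM.rules entries: WWIDs ...010001..010007 map to
# asm-diskb..asm-diskh, as listed earlier.
rules=$(
  i=0
  for ltr in b c d e f g h; do
    i=$((i + 1))
    printf 'KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="360000000000000000e0000000001000%s", SYMLINK+="asm-disk%s", OWNER="grid", GROUP="asmadmin", MODE="0660"\n' "$i" "$ltr"
  done
)
printf '%s\n' "$rules"
```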

Note: the disk-rescan operations below should not be used in a production environment.

[root@orclrac1 rules.d]# udevadm trigger --type=devices --action=change
[root@orclrac1 rules.d]# systemctl restart systemd-udevd.service
[root@orclrac1 rules.d]# udevadm control --reload-rules
[root@orclrac1 rules.d]# udevadm trigger

[root@orclrac1 rules.d]# ll /dev/asm*
lrwxrwxrwx 1 root root 3 Jul 20 10:32 /dev/asm-diskb -> sdi
lrwxrwxrwx 1 root root 3 Jul 20 10:32 /dev/asm-diskc -> sdj
lrwxrwxrwx 1 root root 3 Jul 20 10:32 /dev/asm-diskd -> sdk
lrwxrwxrwx 1 root root 3 Jul 20 10:32 /dev/asm-diske -> sdl
lrwxrwxrwx 1 root root 3 Jul 20 10:32 /dev/asm-diskf -> sdm
lrwxrwxrwx 1 root root 3 Jul 20 10:32 /dev/asm-diskg -> sdn
lrwxrwxrwx 1 root root 3 Jul 20 10:32 /dev/asm-diskh -> sdo
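A quick way to confirm the mapping took effect is to check that all seven symlinks exist. The sketch below simulates /dev in a temp directory so it is self-contained; on a real node point dir at /dev and drop the ln lines:

```shell
# Verify that asm-diskb..asm-diskh all exist as symlinks.
dir=$(mktemp -d)
for ltr in b c d e f g h; do ln -s "sd$ltr" "$dir/asm-disk$ltr"; done  # simulate /dev
missing=0
for ltr in b c d e f g h; do
  [ -L "$dir/asm-disk$ltr" ] || missing=$((missing + 1))
done
echo "missing symlinks: $missing"
rm -rf "$dir"
```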

Replicate the mapping to orclrac2

[root@orclrac1 rules.d]# scp 99-ASM.rules orclrac2:/etc/udev/rules.d/
[root@orclrac2 rules.d]# udevadm trigger --type=devices --action=change
[root@orclrac2 rules.d]# systemctl restart systemd-udevd.service
[root@orclrac2 rules.d]# udevadm control --reload-rules
[root@orclrac2 rules.d]# udevadm trigger
[root@orclrac2 rules.d]# ll /dev/asm*
lrwxrwxrwx. 1 root root 3 Jul 20 10:37 /dev/asm-diskb -> sdb
lrwxrwxrwx. 1 root root 3 Jul 20 10:37 /dev/asm-diskc -> sdc
lrwxrwxrwx. 1 root root 3 Jul 20 10:37 /dev/asm-diskd -> sdd
lrwxrwxrwx. 1 root root 3 Jul 20 10:37 /dev/asm-diske -> sde
lrwxrwxrwx. 1 root root 3 Jul 20 10:37 /dev/asm-diskf -> sdf
lrwxrwxrwx. 1 root root 3 Jul 20 10:37 /dev/asm-diskg -> sdg
lrwxrwxrwx. 1 root root 3 Jul 20 10:37 /dev/asm-diskh -> sdh

Install the Clusterware (Grid Infrastructure)

[root@orclrac1 soft]# mv p13390677_112040_Linux-x86-64_3of7.zip /home/grid/
[root@orclrac1 soft]# chown grid:oinstall /home/grid/p13390677_112040_Linux-x86-64_3of7.zip
[root@orclrac1 soft]# su - grid
[grid@orclrac1 ~]$ unzip p13390677_112040_Linux-x86-64_3of7.zip
[grid@orclrac1 ~]$ cd grid/
# Work around the GUI installer dialogs failing to open or rendering too small
[grid@orclrac1 grid]$ ./runInstaller -jreLoc /etc/alternatives/jre_1.8.0

(Grid Infrastructure installer wizard screenshots omitted)
Based on the prerequisite-check messages, I adjusted the flagged kernel parameter values and installed the missing packages with yum.
The cvuqdisk package actually ships with the grid installation media: it is in the rpm directory under the unpacked grid directory; just install it from there.

cvuqdisk must be installed on both RAC nodes.

The two remaining warnings, one for DNS and the other for the ASM check, can be ignored for now.


When running the root.sh script, it can hang on the following line:
[client(42459)]CRS-2101:The OLR was formatted using version 3
This is a known Oracle bug: while root.sh executes, it creates a file named npohasd under /tmp/.oracle that only the root user has permission on, which prevents the ohasd process from starting.
Once ohasd can access that file, simply re-run the root.sh script and it will complete.
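The original post documented the fix in a screenshot that is not preserved. A widely circulated workaround for this npohasd/ohasd symptom (stated here as an assumption, not recovered from the image; the pipe is usually found under /var/tmp/.oracle) is to keep the named pipe open from a second root shell while root.sh runs. Shown as a dry run that only prints the command:

```shell
# Dry run: the command commonly quoted for the npohasd/ohasd hang.
# Do not run blindly; it must execute (repeatedly) while root.sh is
# still in progress on the node.
cmd="/bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1"
echo "$cmd"
```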

The [INS-20802] prompt appears (click OK) because SCAN is configured in the hosts file and DNS resolution is not enabled; it does not affect normal RAC operation.

The cluster is now up and running.

Create the ASM Disk Groups

This task creates three ASM disk groups: OCR, DATA, and FRA. DATA holds the database files and FRA holds the flashback/archive files.

Launch the ASM disk group creation wizard with asmca

Compared with creating the DATA disk group above, the difference here is that I adjusted the redundancy level.


Install the Oracle Database Software

[root@orclrac1 ~]# cd /soft/
[root@orclrac1 soft]# mv p13390677_112040_Linux-x86-64_* /home/oracle/
[root@orclrac1 soft]# ll /home/oracle/
total 2487200
-rw-r--r-- 1 root root 1395582860 Jun 22 13:50 p13390677_112040_Linux-x86-64_1of7.zip
-rw-r--r-- 1 root root 1151304589 Jun 22 13:48 p13390677_112040_Linux-x86-64_2of7.zip
[root@orclrac1 soft]# ll -d /home/oracle
drwx------ 6 oracle oinstall 248 Jul 22 10:25 /home/oracle
[root@orclrac1 soft]# chown oracle:oinstall /home/oracle/*
[root@orclrac1 soft]# ll /home/oracle/
total 2487200
-rw-r--r-- 1 oracle oinstall 1395582860 Jun 22 13:50 p13390677_112040_Linux-x86-64_1of7.zip
-rw-r--r-- 1 oracle oinstall 1151304589 Jun 22 13:48 p13390677_112040_Linux-x86-64_2of7.zip
[root@orclrac1 soft]# su - oracle
[oracle@orclrac1 ~]$ unzip p13390677_112040_Linux-x86-64_1of7.zip
[oracle@orclrac1 ~]$ unzip p13390677_112040_Linux-x86-64_2of7.zip
[oracle@orclrac1 ~]$ ll
total 2487200
drwxr-xr-x 7 oracle oinstall 136 Aug 27 2013 database
-rw-r--r-- 1 oracle oinstall 1395582860 Jun 22 13:50 p13390677_112040_Linux-x86-64_1of7.zip
-rw-r--r-- 1 oracle oinstall 1151304589 Jun 22 13:48 p13390677_112040_Linux-x86-64_2of7.zip
[oracle@orclrac1 ~]$ cd database/
[oracle@orclrac1 database]$ ./runInstaller
(Oracle Database installer wizard screenshots omitted)

Create the Database

[oracle@orclrac1 database]$ dbca
(dbca database creation wizard screenshots omitted)

With that, the cluster build is complete!



Author: 瑞耀东芳 (DBA)