Building a 2-node Oracle 19c RAC on Oracle Enterprise Linux (OEL) 7.6 (Part 1: Environment Preparation)
Because of its length, the RAC installation is split into the following separate articles:
1. Environment preparation (this article)
2. GRID software installation
3. DB software installation
4. Database creation (to be published)
This article records the environment-preparation stage of building a 2-node 19c RAC on Oracle Enterprise Linux (OEL) 7.6. As in earlier releases, the oracle-database-preinstall-19c-1.0-1.el7.x86_64.rpm package is used to set the kernel parameters and resource limits and to create the oracle user; this step is required on both nodes.
[root@SL010A-IVDB02 u01]# yum install oracle-database-preinstall-19c-1.0-1.el7.x86_64.rpm
Loaded plugins: langpacks, ulninfo
Examining oracle-database-preinstall-19c-1.0-1.el7.x86_64.rpm: oracle-database-preinstall-19c-1.0-1.el7.x86_64
Marking oracle-database-preinstall-19c-1.0-1.el7.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package oracle-database-preinstall-19c.x86_64 0:1.0-1.el7 will be installed
--> Processing Dependency: ksh for package: oracle-database-preinstall-19c-1.0-1.el7.x86_64
--> Processing Dependency: libaio-devel for package: oracle-database-preinstall-19c-1.0-1.el7.x86_64
--> Running transaction check
---> Package ksh.x86_64 0:20120801-139.0.1.el7 will be installed
---> Package libaio-devel.x86_64 0:0.3.109-13.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

===============================================================================================================
 Package                          Arch      Version                 Repository                            Size
===============================================================================================================
Installing:
 oracle-database-preinstall-19c   x86_64    1.0-1.el7               /oracle-database-preinstall-19c-1.0-1.el7.x86_64    55 k
Installing for dependencies:
 ksh                              x86_64    20120801-139.0.1.el7    centos-local-yum                     883 k
 libaio-devel                     x86_64    0.3.109-13.el7          centos-local-yum                      12 k

Transaction Summary
===============================================================================================================
Install  1 Package (+2 Dependent packages)

Total size: 950 k
Total download size: 895 k
Installed size: 3.2 M
Is this ok [y/d/N]: y
Downloading packages:
---------------------------------------------------------------------------------------------------------------
Total                                                                        3.1 MB/s | 895 kB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : ksh-20120801-139.0.1.el7.x86_64                                 1/3
  Installing : libaio-devel-0.3.109-13.el7.x86_64                              2/3
  Installing : oracle-database-preinstall-19c-1.0-1.el7.x86_64                 3/3
  Verifying  : libaio-devel-0.3.109-13.el7.x86_64                              1/3
  Verifying  : ksh-20120801-139.0.1.el7.x86_64                                 2/3
  Verifying  : oracle-database-preinstall-19c-1.0-1.el7.x86_64                 3/3

Installed:
  oracle-database-preinstall-19c.x86_64 0:1.0-1.el7

Dependency Installed:
  ksh.x86_64 0:20120801-139.0.1.el7        libaio-devel.x86_64 0:0.3.109-13.el7

Complete!
Although the oracle-database-preinstall-19c rpm creates the oracle user and its groups — sufficient for a single-instance database — it creates neither the extra groups RAC requires nor the grid user, so both must be added manually. After the adjustment, the group membership of oracle and grid looks like this:
[root@SL010A-IVDB02 u01]# id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54324(backupdba),54325(dgdba),54326(kmdba),54330(racdba),54332(asmdba)
[root@SL010A-IVDB02 u01]# useradd -u 54322 -g oinstall -G dba,oper,asmadmin,asmdba,asmoper,racdba grid
[root@SL010A-IVDB02 u01]# id grid
uid=54322(grid) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54330(racdba),54331(asmadmin),54332(asmdba),54333(asmoper)
Set passwords for the oracle and grid users.
[root@SL010A-IVDB02 u01]# passwd oracle
Changing password for user oracle.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[root@SL010A-IVDB02 u01]# passwd grid
Changing password for user grid.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
SELinux must be disabled on both servers.
[root@SL010A-IVDB01 ~]# cat /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
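The change can also be scripted. The sketch below works on a temporary copy for illustration — on the real hosts point CFG at /etc/selinux/config, and additionally run setenforce 0 to stop enforcement without waiting for a reboot:

```shell
# Work on a throwaway copy for the demo; on the real host use
# CFG=/etc/selinux/config and run this as root.
CFG=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$CFG"   # sample content

# Flip the mode to disabled (takes effect after the next reboot)
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$CFG"
grep '^SELINUX=' "$CFG"   # prints SELINUX=disabled

# on the real host also run: setenforce 0   (turns enforcement off immediately)
```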
The firewall must be disabled on both servers as well.
[root@SL010A-IVDB01 ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@SL010A-IVDB01 ~]# systemctl stop firewalld
The oracle-database-preinstall-19c rpm also configures the kernel parameters automatically, so they normally need no further changes:
[root@SL010A-IVDB01 limits.d]# cat /etc/sysctl.conf
# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).

# oracle-database-preinstall-19c setting for fs.file-max is 6815744
fs.file-max = 6815744

# oracle-database-preinstall-19c setting for kernel.sem is '250 32000 100 128'
kernel.sem = 250 32000 100 128

# oracle-database-preinstall-19c setting for kernel.shmmni is 4096
kernel.shmmni = 4096

# oracle-database-preinstall-19c setting for kernel.shmall is 1073741824 on x86_64
kernel.shmall = 1073741824

# oracle-database-preinstall-19c setting for kernel.shmmax is 4398046511104 on x86_64
kernel.shmmax = 4398046511104

# oracle-database-preinstall-19c setting for kernel.panic_on_oops is 1 per Orabug 19212317
kernel.panic_on_oops = 1

# oracle-database-preinstall-19c setting for net.core.rmem_default is 262144
net.core.rmem_default = 262144

# oracle-database-preinstall-19c setting for net.core.rmem_max is 4194304
net.core.rmem_max = 4194304

# oracle-database-preinstall-19c setting for net.core.wmem_default is 262144
net.core.wmem_default = 262144

# oracle-database-preinstall-19c setting for net.core.wmem_max is 1048576
net.core.wmem_max = 1048576

# oracle-database-preinstall-19c setting for net.ipv4.conf.all.rp_filter is 2
net.ipv4.conf.all.rp_filter = 2

# oracle-database-preinstall-19c setting for net.ipv4.conf.default.rp_filter is 2
net.ipv4.conf.default.rp_filter = 2

# oracle-database-preinstall-19c setting for fs.aio-max-nr is 1048576
fs.aio-max-nr = 1048576

# oracle-database-preinstall-19c setting for net.ipv4.ip_local_port_range is 9000 65500
net.ipv4.ip_local_port_range = 9000 65500
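A quick way to confirm the settings took effect is to read a few of them back straight from /proc/sys (no root needed); the printed numbers should match the sysctl.conf entries above:

```shell
# Spot-check three of the rpm-managed kernel parameters by reading the
# live values from /proc/sys; keys are written path-style and printed
# in the usual dotted sysctl form.
for k in fs/file-max kernel/shmmni fs/aio-max-nr; do
    printf '%s = %s\n' "$(echo "$k" | tr / .)" "$(cat "/proc/sys/$k")"
done
```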
The rpm likewise sets the oracle user's resource limits, but the grid user's limits must be appended manually, again on both nodes:
[root@SL010A-IVDB01 limits.d]# cat oracle-database-preinstall-19c.conf
grid soft nofile 1024
grid hard nofile 65536
grid soft nproc 16384
grid hard nproc 16384
grid soft stack 10240
grid hard stack 32768
grid hard memlock 134217728
grid soft memlock 134217728

# oracle-database-preinstall-19c setting for nofile soft limit is 1024
oracle   soft   nofile    1024

# oracle-database-preinstall-19c setting for nofile hard limit is 65536
oracle   hard   nofile    65536

# oracle-database-preinstall-19c setting for nproc soft limit is 16384
# refer orabug15971421 for more info.
oracle   soft   nproc    16384

# oracle-database-preinstall-19c setting for nproc hard limit is 16384
oracle   hard   nproc    16384

# oracle-database-preinstall-19c setting for stack soft limit is 10240KB
oracle   soft   stack    10240

# oracle-database-preinstall-19c setting for stack hard limit is 32768KB
oracle   hard   stack    32768

# oracle-database-preinstall-19c setting for memlock hard limit is maximum of 128GB on x86_64 or 3GB on x86 OR 90 % of RAM
oracle   hard   memlock    134217728

# oracle-database-preinstall-19c setting for memlock soft limit is maximum of 128GB on x86_64 or 3GB on x86 OR 90% of RAM
oracle   soft   memlock    134217728
Configure the /etc/hosts file on both nodes.
[root@SL0101A-IVDB01 19c]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
##public ip
10.0.2.96        sl010a-ivdb01
10.0.2.98        sl010a-ivdb02
##vip
10.0.2.75        sl010a-ivdb01-vip
10.0.2.76        sl010a-ivdb02-vip
##private ip
192.168.100.96   sl010a-ivdb01-pri
192.168.100.98   sl010a-ivdb02-pri
##scan ip
10.0.2.74        scan
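After editing the file, a short loop can confirm that every name resolves (the hostnames are the ones from the hosts file above; any name printed as "missing" needs fixing):

```shell
# Resolution check for the RAC hostnames; relies on getent consulting
# /etc/hosts via nsswitch, so it sees exactly what the clusterware will see.
for h in sl010a-ivdb01 sl010a-ivdb02 sl010a-ivdb01-vip sl010a-ivdb02-vip \
         sl010a-ivdb01-pri sl010a-ivdb02-pri scan; do
    getent hosts "$h" > /dev/null || echo "missing: $h"
done
```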
Both servers use bonded dual NICs; set up the bonding and test NIC failover. Once configured, the interfaces look like this:
[root@SL010A-IVDB02 u01]# ifconfig
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
        inet6 fe80::224e:a422:6672:56e2  prefixlen 64  scopeid 0x20<link>
        ether 98:be:94:35:74:80  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 10  bytes 1180 (1.1 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eno1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 98:be:94:35:74:80  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 10  bytes 1180 (1.1 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eno2: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 98:be:94:35:74:80  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

enp0s29f0u2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 9a:be:94:35:f4:83  txqueuelen 1000  (Ethernet)
        RX packets 1077  bytes 70075 (68.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens2f0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        ether 90:e2:ba:6a:66:c8  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
        device memory 0xc0700000-c077ffff

ens2f1: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        ether 90:e2:ba:6a:66:c9  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
        device memory 0xc0780000-c07fffff

ens3f0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.2.98  netmask 255.255.255.0  broadcast 10.0.2.255
        inet6 fe80::e5bf:ef44:c11b:ad0  prefixlen 64  scopeid 0x20<link>
        ether 90:e2:ba:8c:5c:e8  txqueuelen 1000  (Ethernet)
        RX packets 23158  bytes 1472551 (1.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3378  bytes 1419406 (1.3 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens3f1: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        ether 90:e2:ba:8c:5c:e9  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 336  bytes 31332 (30.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 336  bytes 31332 (30.5 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:63:fe:75  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
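For reference, a bond on OEL 7 is built from ifcfg files roughly like the sketch below. The mode=active-backup and miimon values are assumptions chosen to match the failover test described above, not values captured from this environment; the master here carries no IP because bond0 serves as a slaved interconnect device.

```
# /etc/sysconfig/network-scripts/ifcfg-bond0 — master (sketch)
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=active-backup miimon=100"
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eno1 — one of the two slaves (sketch)
DEVICE=eno1
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```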
Both servers bind the shared disks with multipathing software; this setup uses the Linux-native multipath tool. The configuration can be done and tested on one node first, then the multipath configuration file copied to the other node. For space reasons the detailed configuration is not recorded here — a separate article on this blog covers it (see the article index or the site search). After binding, the devices are:
crs3   (36006016086003500b49583f23803e911) dm-7  DGC     ,RAID 10
crs2   (360060160860035002689ec0a3903e911) dm-1  DGC     ,RAID 10
crs1   (360060160860035006c1c231e3903e911) dm-12 DGC     ,RAID 10
data9  (3600601608600350086c452a93903e911) dm-2  DGC     ,RAID 10
data8  (36006016086003500ba73dad43903e911) dm-8  DGC     ,RAID 10
data10 (36006016086003500b46155863903e911) dm-10 DGC     ,RAID 10
data7  (36006016086003500482ebee93903e911) dm-3  DGC     ,RAID 10
data6  (360060160860035000eed47083a03e911) dm-9  DGC     ,RAID 10
data5  (360060160860035001819932c3a03e911) dm-11 DGC     ,RAID 10
data4  (36006016086003500f0a963ac3a03e911) dm-6  DGC     ,RAID 5
data3  (36006016086003500d2b6fbca3a03e911) dm-5  DGC     ,RAID 5
data2  (36006016086003500ea9890223b03e911) dm-4  DGC     ,RAID 5
data1  (36006016086003500c22ae6353b03e911) dm-0  DGC     ,RAID 5
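Friendly names such as crs1 or data1 come from alias stanzas in /etc/multipath.conf. A minimal sketch, using crs1's WWID from the listing above (one multipath block per LUN, repeated for each device):

```
# /etc/multipath.conf — alias stanza sketch
multipaths {
    multipath {
        wwid   360060160860035006c1c231e3903e911
        alias  crs1
    }
}
```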
Both servers need udev rules to set ownership and permissions on the ASM disks.
[root@SL0101A-IVDB01 network-scripts]# cd /etc/udev/rules.d/
[root@SL0101A-IVDB01 rules.d]# vi 99-asm.rules
KERNEL=="dm-0", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="dm-1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="dm-2", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="dm-3", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="dm-4", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="dm-5", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="dm-6", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="dm-7", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="dm-8", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="dm-9", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="dm-10", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="dm-11", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="dm-12", OWNER="grid", GROUP="asmadmin", MODE="0660"
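The thirteen rules differ only in the device number, so they can be generated with a short loop — the sketch below writes to a temporary file; on the real hosts the target is /etc/udev/rules.d/99-asm.rules. Note that dm-N names are not guaranteed to stay stable across reboots; matching on the multipath WWID or alias instead is a common, more robust alternative.

```shell
# Generate one rule per dm device. RULES points at a temp file for the demo;
# use RULES=/etc/udev/rules.d/99-asm.rules (as root) on the real hosts.
RULES=$(mktemp)
i=0
while [ "$i" -le 12 ]; do
    echo "KERNEL==\"dm-$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\"" >> "$RULES"
    i=$((i + 1))
done
wc -l < "$RULES"    # 13 rules written
```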
Reload the rules and re-trigger udev (or reboot).
[root@SL0101A-IVDB01 rules.d]# udevadm trigger --type=devices --action=change
[root@SL010A-IVDB02 rules.d]# udevadm control --reload-rules
[root@SL010A-IVDB02 rules.d]# ll /dev/dm-*
brw-rw---- 1 grid asmadmin 252,  0 May 17 10:20 /dev/dm-0
brw-rw---- 1 grid asmadmin 252,  1 May 17 10:20 /dev/dm-1
brw-rw---- 1 grid asmadmin 252, 10 May 17 10:20 /dev/dm-10
brw-rw---- 1 grid asmadmin 252, 11 May 17 10:20 /dev/dm-11
brw-rw---- 1 grid asmadmin 252, 12 May 17 10:20 /dev/dm-12
brw-rw---- 1 grid asmadmin 252,  2 May 17 10:20 /dev/dm-2
brw-rw---- 1 grid asmadmin 252,  3 May 17 10:20 /dev/dm-3
brw-rw---- 1 grid asmadmin 252,  4 May 17 10:20 /dev/dm-4
brw-rw---- 1 grid asmadmin 252,  5 May 17 10:20 /dev/dm-5
brw-rw---- 1 grid asmadmin 252,  6 May 17 10:20 /dev/dm-6
brw-rw---- 1 grid asmadmin 252,  7 May 17 10:20 /dev/dm-7
brw-rw---- 1 grid asmadmin 252,  8 May 17 10:20 /dev/dm-8
brw-rw---- 1 grid asmadmin 252,  9 May 17 10:20 /dev/dm-9
Create the grid and oracle installation directories on both servers.
[root@SL010A-IVDB02 network-scripts]# mkdir -p /u01/app/oracle/product/19c/dbhome_1
[root@SL010A-IVDB02 network-scripts]# mkdir -p /u01/grid/product/19c/gridhome_1
[root@SL010A-IVDB02 network-scripts]# mkdir -p /u01/gridbase
[root@SL010A-IVDB02 network-scripts]# chown -R oracle.oinstall /u01/
[root@SL010A-IVDB02 network-scripts]# chown -R grid.oinstall /u01/grid*
Set the environment variables for the grid and oracle users on both servers. Only one node's settings are recorded here; apart from ORACLE_SID, the other node is identical.
[root@SL010A-IVDB02 network-scripts]# su - grid
Last login: Thu May 16 15:16:31 CST 2019 on pts/1
[grid@SL010A-IVDB02 ~]$ cat .bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/.local/bin:$HOME/bin

export PATH
export ORACLE_SID=+ASM1
export ORACLE_BASE=/u01/gridbase
export ORACLE_HOME=/u01/grid/product/19c/gridhome_1
export PATH=.:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH:$HOME/bin
umask 022
[grid@SL010A-IVDB02 ~]$ exit
logout
[root@SL010A-IVDB02 network-scripts]# su - oracle
Last login: Thu May 16 15:08:39 CST 2019 on pts/1
-bash: /home/oracle: Is a directory
[oracle@SL010A-IVDB02 ~]$ cat .bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/.local/bin:$HOME/bin

export PATH
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/19c/dbhome_1
export ORACLE_SID=ivldb2
export PATH=.:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH:$HOME/bin
export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
umask 022
Copy the grid and oracle installation files into their respective ORACLE_HOME directories and unzip them there.
[root@SL0101A-IVDB01 19c]# mv LINUX.X64_193000_grid_home.zip /u01/grid/product/19c/gridhome_1/
[root@SL0101A-IVDB01 19c]# mv LINUX.X64_193000_db_home.zip /u01/app/oracle/product/19c/dbhome_1
[grid@SL0101A-IVDB01 gridhome_1]# unzip LINUX.X64_193000_grid_home.zip
[oracle@SL0101A-IVDB01 dbhome_1]# unzip LINUX.X64_193000_db_home.zip
Both servers need the cvuqdisk-1.0.10-1.rpm package. It is not on the Linux installation media; it ships with the grid software, under the cv/rpm directory of the unzipped grid home.
[root@SL010A-IVDB01 Packages]# cd /u01/grid/product/19c/gridhome_1/cv/rpm
[root@SL010A-IVDB01 rpm]# ls
cvuqdisk-1.0.10-1.rpm
[root@SL010A-IVDB01 rpm]# yum install cvuqdisk-1.0.10-1.rpm
Loaded plugins: langpacks, ulninfo
Examining cvuqdisk-1.0.10-1.rpm: cvuqdisk-1.0.10-1.x86_64
Marking cvuqdisk-1.0.10-1.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package cvuqdisk.x86_64 0:1.0.10-1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

==============================================================================================================
 Package            Arch          Version            Repository                                          Size
==============================================================================================================
Installing:
 cvuqdisk           x86_64        1.0.10-1           /cvuqdisk-1.0.10-1                                  22 k

Transaction Summary
==============================================================================================================
Install  1 Package

Total size: 22 k
Installed size: 22 k
Is this ok [y/d/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Using default group oinstall to install package
  Installing : cvuqdisk-1.0.10-1.x86_64                                        1/1
  Verifying  : cvuqdisk-1.0.10-1.x86_64                                        1/1

Installed:
  cvuqdisk.x86_64 0:1.0.10-1

Complete!
Since node two has no copy of the grid installation media, copy the rpm over and install it there.
[root@SL010A-IVDB01 rpm]# scp cvuqdisk-1.0.10-1.rpm 10.0.2.98:/u01/
root@10.0.2.98's password:
cvuqdisk-1.0.10-1.rpm
[root@SL010A-IVDB02 u01]# rpm -ivh cvuqdisk-1.0.10-1.rpm
Preparing...                          ################################# [100%]
Using default group oinstall to install package
Updating / installing...
   1:cvuqdisk-1.0.10-1                ################################# [100%]
At this point the environment preparation is complete, and the GRID software can be installed next.
[Next] Building a 2-node Oracle 19c RAC on Oracle Enterprise Linux (OEL) 7.6 (Part 2: GRID Software Installation)