Step By Step: Install and setup Oracle 10g R2 RAC on Oracle Enterprise Linux 5.5 (32 bit).
By Bhavin Hingu
This document shows the step-by-step process of installing and setting up a 3-node Oracle 10g R2 RAC cluster. The setup uses Openfiler as the shared storage solution.
Hardware Used in setting up 3-node 10g R2 RAC using iSCSI SAN (Openfiler):
Total Machines: 4 (3 RAC nodes + 1 NAS server)
Network Switches: 3 (for the Public, Private, and Shared Storage networks)
Extra Network Adapters: 7 (6 for the RAC nodes, 2 per node, plus 1 for the storage server)
Network Cables: 10 (9 for the RAC nodes, 3 per node, plus 1 for the shared storage)
External USB HD: 1 (1 TB)
Machines Specifications:
DELL OPTIPLEX GX620
CPU: Intel 3800MHz
RAM: 4084MB
HD: 250GB
DVD, 10/100 NIC, 8 MB VRAM
Network Adapter Specifications:
Linksys EG1032 Instant Gigabit Network Adapter
Network Switch Specifications:
D-Link 24-Port Rackmountable Gigabit Switch
Network Cables Specifications:
25-Foot Cat6 Snagless Patch Cable – (Blue, Black and Grey)
Software Used for the 3-node RAC Setup using NAS (Openfiler):
NAS Storage Solution: Openfiler 2.3 (2.6.26.8-1.0.11.smp.pae.gcc3.4.x86.i686)
Operating System: Oracle Enterprise Linux 5.5 (2.6.18-194.el5PAE)
Clusterware: Oracle Clusterware 10g R2 (10.2.0.1)
Oracle RAC: Oracle RDBMS 10g R2 (10.2.0.1)
3-Node RAC Setup
Operating System: Oracle Enterprise Linux 5.5 (2.6.18-194.el5PAE):
Server: All the RAC Nodes
Oracle Clusterware:
Server: All the RAC Nodes
ORACLE_BASE: /u01/app/oracle
ORACLE_HOME: /u01/app/oracle/crs
Owner: oracle (Primary Group: oinstall, Secondary Group: dba)
Permissions: 775
OCR/Voting Disk Storage Type: Raw Devices
Oracle Inventory Location: /u01/app/oraInventory
Oracle Database Software (RAC 10.2.0.1) for ASM_HOME:
Server: All the RAC Nodes
ORACLE_BASE: /u01/app/oracle
ORACLE_HOME: /u01/app/oracle/asm
Owner: oracle (Primary Group: oinstall, Secondary Group: dba)
Permissions: 775
Oracle Inventory Location: /u01/app/oraInventory
Listener: LISTENER (TCP:1521)
Oracle Database Software (RAC 10.2.0.1) for DB_HOME:
Server: All the RAC Nodes
ORACLE_BASE: /u01/app/oracle
ORACLE_HOME: /u01/app/oracle/db
Owner: oracle (Primary Group: oinstall, Secondary Group: dba)
Permissions: 775
Oracle Inventory Location: /u01/app/oraInventory
Database Name: labdb
Listener: LAB_LISTENER (TCP:1530)
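The three Oracle homes above share the same owner, groups, and 775 permissions. A minimal sketch of the commands that would create them, assuming the values in the tables above (run as root on every RAC node; this is my own sketch, not a listing from the guide):

```shell
# Sketch only, using the owner/group/path values listed above (run as root).
groupadd oinstall                     # primary group of the software owner
groupadd dba                          # secondary group
useradd -g oinstall -G dba oracle     # Oracle software owner

mkdir -p /u01/app/oracle/crs \
         /u01/app/oracle/asm \
         /u01/app/oracle/db \
         /u01/app/oraInventory
chown -R oracle:oinstall /u01/app
chmod -R 775 /u01/app
```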
Openfiler 2.3:
Server: a single dedicated server acting as the NAS.
OS: Openfiler 2.3 (2.6.26.8-1.0.11.smp.pae.gcc3.4.x86.i686).
3-Node RAC Architecture:
Cluster Name: lab
Public Network: 192.168.2.0/eth2
Private network (cluster Interconnect): 192.168.0.0/eth0
Private network (Storage Network): 192.168.1.0/eth1
Machine Public IP Private IP VIP Storage IP
RAC Node1 192.168.2.1 192.168.0.1 192.168.2.51 192.168.1.1
RAC Node2 192.168.2.2 192.168.0.2 192.168.2.52 192.168.1.2
RAC Node3 192.168.2.3 192.168.0.3 192.168.2.53 192.168.1.3
Storage N/A N/A N/A 192.168.1.101
Machine Public Name Private Name VIP Name
RAC Node1 node1.hingu.net node1-prv node1-vip.hingu.net
RAC Node2 node2.hingu.net node2-prv node2-vip.hingu.net
RAC Node3 node3.hingu.net node3-prv node3-vip.hingu.net
Storage nas-server N/A N/A
This setup is divided into the two main categories below:
Pre-installation tasks
Installation of Oracle 10g R2 Clusterware (10.2.0.1), 10g R2 RAC for ASM_HOME, 10g R2 RAC for DB_HOME, and creation of the RAC database
Pre-installation tasks:
Server Hardware requirements
Hardware Used in this exercise to setup 3-Node RAC
Software Requirements
3-Node 10g R2 RAC Architecture/Setup
Installation of Oracle Enterprise Linux 5
Installation of Openfiler 2.3
Linux Package Requirement
Network Setup
Creating Oracle Software owners/Groups/Permissions/HOMEs
Installation of cvuqdisk Package
Setup Oracle Software Owner’s Environment
Setting up SSH equivalency for Oracle Software Owners
Configure Shared Storage iSCSI disks using Openfiler
Configure the iSCSI disk Devices for OCR and Voting Disks
Configure the iSCSI disk Devices for Oracle ASM with ASMLib
Server Hardware Requirements:
Each node in the cluster must meet the requirements below:
At least 1024 x 768 display resolution, so that OUI displays correctly
1 GB of space in the /tmp directory
5.5 GB of space for the Oracle Clusterware home
At least 2.5 GB of RAM and equivalent swap space (for a 32-bit installation, as in my case)
All the RAC nodes must share the same instruction-set architecture. For a testing RAC setup, it is possible to mix servers with 32-bit Intel and 32-bit AMD CPUs and with different memory sizes and CPU speeds.
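The requirements above can be checked quickly on each node before starting. A few non-destructive commands for this (my own sketch, not from the original guide):

```shell
# Quick pre-flight checks for the per-node requirements above.
grep MemTotal  /proc/meminfo        # want roughly 2.5 GB or more of RAM
grep SwapTotal /proc/meminfo        # swap should be at least equivalent to RAM
df -h /tmp                          # want at least 1 GB free in /tmp
df -h /u01 2>/dev/null || df -h /   # 5.5 GB needed for the Clusterware home
uname -i                            # must report the same value on all RAC nodes
```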
Installation of OEL5.5 (On All the RAC Nodes):
The selections below were made during the installation of OEL 5.5 on node 1 (node1.hingu.net). The same process was followed to install OEL 5.5 on the remaining RAC nodes, with the hostname/IP information chosen appropriately for each node from the architecture diagram.
Insert Installation Media #1:
Testing the CD Media: Skip
Language: English
Key Board: U.S. English
Partition Option: “Remove all Partitions on selected drives and create default layout”
Boot Loader: “The GRUB boot loader will be installed on /dev/sda”
Network Devices:
Active on Boot Devices IPV4.Netmask IPV6/Prefix
Yes eth0 192.168.0.1/255.255.255.0 Auto
Yes eth1 192.168.1.1/255.255.255.0 Auto
Yes eth2 192.168.2.1/255.255.255.0 Auto
Hostname: set manually to node1.hingu.net
Ignore both the Warning Messages at this point
Region: America/New York
System Clock Uses UTC (checked)
Root Password Enter the root password
Additional Tasks on top of the default installation: checked “Software Development” and “Web Server”
Customize Now (Selected)
(Below is the extra selection on top of the default selected packages)
Applications Authoring and Publishing (checked)
Development Development Libraries
libstdc++44-devel
Development Java Development
Development Legacy Software Development
Servers Checked All the servers
Servers Legacy Network Server
bootparamd, rsh-server, rusers, rusers-server, telnet-server
Servers Network Servers
dhcp, dhcpv6, dnsmasq, ypserv
Servers Servers Configuration Tools
Checked All
Base System Administration Tools
Checked All
Base System Base
device-mapper-multipath, iscsi-initiator-utils,
Base System Legacy Software Support
openmotif22
Base System System Tools
OpenIPMI-gui, lsscsi, oracle*, sysstat, tsclient
Post Installation Steps:
(1) Yes to License Agreement
(2) Disable the firewall
(3) Disable SELinux
(4) Disable kdump
(5) Set the clock
(6) Finish
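For reference, the firewall and SELinux steps can also be carried out from the command line after the first boot. A hedged sketch of the equivalent commands (run as root; these are my own commands rather than the guide's):

```shell
# Disable the firewall (equivalent of step 2); run as root.
service iptables stop
chkconfig iptables off

# Disable SELinux (equivalent of step 3); takes effect on the next reboot.
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
setenforce 0 2>/dev/null   # also turn it off for the running session
```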
Installation of openfiler 2.3
Version: Openfiler V 2.3 (downloaded from here)
This Install guide was followed to install Openfiler with below values of Hostname and IP.
HOSTNAME: nas-server
Network:
NAS IP: 192.168.1.101
NETMASK: 255.255.255.0
Post installation Steps:
Disabled the firewall using system-config-securitylevel-tui
Changed the password of the openfiler user (the default is “password”)
Connected to the nas-server using the https://192.168.1.101:446/ link
Registered the cluster nodes in the “Network Access Configuration” section under the “System” tab
Enabled all the services shown under the “Services” tab
System Setup Screen
Minimum Required RPMs for OEL 5.5 (All the 3 RAC Nodes):
binutils-2.17.50.0.6-2.el5
compat-libstdc++-33-3.2.3-61
elfutils-libelf-0.125-3.el5
elfutils-libelf-devel-0.125
gcc-4.1.1-52
gcc-c++-4.1.1-52
glibc-2.5-12
glibc-common-2.5-12
glibc-devel-2.5-12
glibc-headers-2.5-12
libaio-0.3.106
libaio-devel-0.3.106
libgcc-4.1.1-52
libstdc++-4.1.1
libstdc++-devel-4.1.1-52.el5
make-3.81-1.1
sysstat-7.0.0
unixODBC-2.2.11
unixODBC-devel-2.2.11
libXp-1.0.0-8
The command below verifies whether the specified RPMs are installed. Any missing RPMs can be installed from the OEL media pack:
rpm -q binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers libaio libaio-devel \
libgcc libstdc++ libstdc++-devel make sysstat unixODBC unixODBC-devel libXp
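To list only the missing packages rather than scanning the `rpm -q` output by eye, the same query can be wrapped in a small loop (a helper of my own, not from the guide):

```shell
# Print only the packages from the required set that rpm reports as missing.
for p in binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
         gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers \
         libaio libaio-devel libgcc libstdc++ libstdc++-devel \
         make sysstat unixODBC unixODBC-devel libXp; do
  rpm -q "$p" >/dev/null 2>&1 || echo "MISSING: $p"
done
```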
I had to install the extra RPMs below:
oracleasmlib: available here (the one for RHEL-compatible systems)
cvuqdisk: available on the Clusterware media (under the rpm folder)
[root@node1 ~]# rpm -ivh numactl-devel-0.9.8-11.el5.i386.rpm
warning: numactl-devel-0.9.8-11.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing... ########################################### [100%]
1:numactl-devel ########################################### [100%]
[root@node1 ~]#
[root@node1 rpms]# rpm -ivh oracleasmlib-2.0.4-1.el5.i386.rpm
warning: oracleasmlib-2.0.4-1.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing... ########################################### [100%]
1:oracleasmlib ########################################### [100%]
Network Configuration for RAC Nodes/NAS Server:
Public IPs and VIPs are resolved by DNS, while the private IPs for the cluster interconnect are resolved through /etc/hosts. The hostname, along with the public/private and NAS networks, was configured at OEL installation time. The final network configuration files are listed below.
(a) hostname:
For Node node1:
[root@node1 ~]# hostname node1.hingu.net
node1.hingu.net: /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=node1.hingu.net
For Node node2:
[root@node2 ~]# hostname node2.hingu.net
node2.hingu.net: /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=node2.hingu.net
For Node node3:
[root@node3 ~]# hostname node3.hingu.net
node3.hingu.net: /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=node3.hingu.net
For Node nas-server:
[root@nas-server ~]# hostname nas-server
nas-server: /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=nas-server
(b) Private Network for Cluster Interconnect:
node1.hingu.net: /etc/sysconfig/network-scripts/ifcfg-eth0
# Linksys Gigabit Network Adapter
DEVICE=eth0
BOOTPROTO=static
BROADCAST=192.168.0.255
HWADDR=00:22:6B:BF:4E:60
IPADDR=192.168.0.1
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.0.0
ONBOOT=yes
node2.hingu.net: /etc/sysconfig/network-scripts/ifcfg-eth0
# Linksys Gigabit Network Adapter
DEVICE=eth0
BOOTPROTO=static
BROADCAST=192.168.0.255
HWADDR=00:22:6B:BF:4E:4B
IPADDR=192.168.0.2
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.0.0
ONBOOT=yes
node3.hingu.net: /etc/sysconfig/network-scripts/ifcfg-eth0
# Linksys Gigabit Network Adapter
DEVICE=eth0
BOOTPROTO=static
BROADCAST=192.168.0.255
HWADDR=00:22:6B:BF:4E:49
IPADDR=192.168.0.3
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.0.0
ONBOOT=yes
(c) Public Network:
node1.hingu.net: /etc/sysconfig/network-scripts/ifcfg-eth2
# Broadcom Corporation NetXtreme BCM5751 Gigabit Ethernet PCI Express
DEVICE=eth2
BOOTPROTO=static
BROADCAST=192.168.2.255
HWADDR=00:18:8B:04:6A:62
IPADDR=192.168.2.1
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.2.0
ONBOOT=yes
node2.hingu.net: /etc/sysconfig/network-scripts/ifcfg-eth2
# Broadcom Corporation NetXtreme BCM5751 Gigabit Ethernet PCI Express
DEVICE=eth2
BOOTPROTO=static
BROADCAST=192.168.2.255
HWADDR=00:18:8B:24:F8:58
IPADDR=192.168.2.2
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.2.0
ONBOOT=yes
node3.hingu.net: /etc/sysconfig/network-scripts/ifcfg-eth2
# Broadcom Corporation NetXtreme BCM5751 Gigabit Ethernet PCI Express
DEVICE=eth2
BOOTPROTO=static
BROADCAST=192.168.2.255
HWADDR=00:19:B9:0C:E6:EF
IPADDR=192.168.2.3
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.2.0
ONBOOT=yes
(d) Private Network for Shared Storage:
node1.hingu.net: /etc/sysconfig/network-scripts/ifcfg-eth1
# Linksys Gigabit Network Adapter
DEVICE=eth1
BOOTPROTO=static
BROADCAST=192.168.1.255
HWADDR=00:22:6B:BF:4E:60
IPADDR=192.168.1.1
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.1.0
ONBOOT=yes
node2.hingu.net: /etc/sysconfig/network-scripts/ifcfg-eth1
# Linksys Gigabit Network Adapter
DEVICE=eth1
BOOTPROTO=static
BROADCAST=192.168.1.255
HWADDR=00:22:6B:BF:45:13
IPADDR=192.168.1.2
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.1.0
ONBOOT=yes
node3.hingu.net: /etc/sysconfig/network-scripts/ifcfg-eth1
# Linksys Gigabit Network Adapter
DEVICE=eth1
BOOTPROTO=static
BROADCAST=192.168.1.255
HWADDR=00:22:6B:BF:4E:48
IPADDR=192.168.1.3
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.1.0
ONBOOT=yes
nas-server.hingu.net: /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
BROADCAST=192.168.1.255
HWADDR=00:22:6B:BF:43:D6
IPADDR=192.168.1.101
NETMASK=255.255.255.0
NETWORK=192.168.1.0
ONBOOT=yes
TYPE=Ethernet
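Once all the ifcfg-* files are in place, the interfaces can be bounced and the three networks sanity-checked. A hedged sketch, shown for node1 (run as root; addresses are taken from the architecture table above, the commands are my own):

```shell
# Re-read the ifcfg-* files, then verify each network from node1 (run as root).
service network restart

ifconfig eth0 | grep "inet addr"   # expect 192.168.0.1 (cluster interconnect)
ifconfig eth1 | grep "inet addr"   # expect 192.168.1.1 (storage network)
ifconfig eth2 | grep "inet addr"   # expect 192.168.2.1 (public network)

ping -c 2 192.168.1.101            # storage server reachable?
ping -c 2 192.168.0.2              # node2 reachable over the interconnect?
```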
(e) /etc/hosts files:
node1.hingu.net: /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
##=======================================
# Public Network
##=======================================
192.168.2.1 node1.hingu.net node1
192.168.2.2 node2.hingu.net node2
192.168.2.3 node3.hingu.net node3
##=======================================
# VIPs
##=======================================
192.168.2.51 node1-vip.hingu.net node1-vip
192.168.2.52 node2-vip.hingu.net node2-vip
192.168.2.53 node3-vip.hingu.net node3-vip
##=======================================
# Private Network for Cluster Interconnect
##=======================================
192.168.0.1 node1-prv
192.168.0.2 node2-prv
192.168.0.3 node3-prv
##=======================================
##=======================================
node2.hingu.net: /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
##=======================================
# Public Network
##=======================================
192.168.2.1 node1.hingu.net node1
192.168.2.2 node2.hingu.net node2
192.168.2.3 node3.hingu.net node3
##=======================================
# VIPs
##=======================================
192.168.2.51 node1-vip.hingu.net node1-vip
192.168.2.52 node2-vip.hingu.net node2-vip
192.168.2.53 node3-vip.hingu.net node3-vip
##=======================================
# Private Network for Cluster Interconnect
##=======================================
192.168.0.1 node1-prv
192.168.0.2 node2-prv
192.168.0.3 node3-prv
##=======================================
##=======================================
node3.hingu.net: /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
##=======================================
# Public Network
##=======================================
192.168.2.1 node1.hingu.net node1
192.168.2.2 node2.hingu.net node2
192.168.2.3 node3.hingu.net node3
##=======================================
# VIPs
##=======================================
192.168.2.51 node1-vip.hingu.net node1-vip
192.168.2.52 node2-vip.hingu.net node2-vip
192.168.2.53 node3-vip.hingu.net node3-vip
##=======================================
# Private Network for Cluster Interconnect
##=======================================
192.168.0.1