Oracle RAC 11g Install on OEL5

You don't always have to have separate oracle and grid users – some examples seem to indicate a single user. Don't use root.

For this case I am using three separate servers: two RAC node boxes and one NFS server, plus possibly a separate server for DNS (or the NFS server doubling as the DNS server on a secondary NIC).

Network Configuration

The Linux tool system-config-network can be run from the command line in PuTTY, tunneling X to a Windows client machine using Xming.

On rac1 under DNS:

Hostname = rac1.localdomain

On rac1 under Devices:

eth0 static IP address = [a static IP address]
eth0 subnet mask is the same as the Windows XP box (255.255.255.0)
eth0 default gateway = [modem IP]

eth1 another static IP address

Add appropriate entries to /etc/hosts, something like this, exposing only private IP addresses in the hosts file (the rest goes into DNS):

#this depends on the specific host
#127.0.0.1      nas    nas.localdomain        localhost.localdomain localhost
#127.0.0.1      rac1    rac1.localdomain        localhost.localdomain localhost
#127.0.0.1      rac2     rac2.localdomain        localhost.localdomain localhost
#127.0.0.1      rac3    rac3.localdomain        localhost.localdomain localhost
::1            localhost6.localdomain6 localhost6

#scan is down and inactive
192.168.1.101   nas.localdomain        nas
#192.168.1.105   scan.localdomain        scan
#192.168.1.106  scan.localdomain        scan
#192.168.1.107  scan.localdomain        scan

#rac1-vips set up as down and inactive
192.168.1.111   rac1.localdomain        rac1
192.168.1.112   rac1-priv.localdomain   rac1-priv
192.168.1.113   rac1-vip.localdomain    rac1-vip

#rac2-vips set up as down and inactive
192.168.1.121   rac2.localdomain        rac2
192.168.1.122   rac2-priv.localdomain   rac2-priv
192.168.1.123   rac2-vip.localdomain    rac2-vip

#rac3-vips set up as down and inactive
192.168.1.131   rac3.localdomain        rac3
192.168.1.132   rac3-priv.localdomain   rac3-priv
192.168.1.133   rac3-vip.localdomain    rac3-vip

And don't forget the c:\windows\system32\drivers\etc\hosts file on my Windows XP client machine. Also, when I talk to my servers from my Windows client I will be talking to the -priv addresses until DNS is set up.

Use system-config-network to set up the NICs on each server:

RAC1: eth2 must be inactive and down (rac1-vip)
RAC2: eth2 must be inactive and down (rac2-vip)
NAS: eth1 must be inactive and down (scan)

Save and quit from the Network tool in Linux and run this:

service network restart

Now go to the other RAC servers, the NFS server and the DNS server, and do the same as on RAC1 using the appropriate hostnames and IP addresses. I have added names and IP addresses in the hosts file above, which could be changed.

Check this on all servers:

[root@rac1 ~]# cat /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=rac1.localdomain

And check the configuration of each NIC on each server (system-config-network can be run to bring up the X Windows networking setup tool even in PuTTY through X11 forwarding):

cat /etc/sysconfig/network-scripts/ifcfg-eth0
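
The exact contents vary per NIC; a typical static configuration for eth0 on RAC1 would look something like the sketch below (HWADDR and the gateway are placeholders; the address and netmask follow the hosts file plan above):

DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.111
NETMASK=255.255.255.0
GATEWAY=<modem IP>
# HWADDR=<MAC address of the NIC>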

Next we have to sort out the RPMs, which includes bind, required for DNS.

Getting Up to Date with Linux rpm Packages

The next step is to get the Linux RPMs up to date by executing the following set of commands against the Server subdirectory under /media on each CD (or DVD if you have one). bind is required for DNS, and I am being explicit about versions because there are multiple versions on some of the OEL5 and OEL6 media I have:

cd /media/<cdrom-or otherwise>/Server
rpm -Uvh binutils-2.*
rpm -Uvh compat-libstdc++-33*
rpm -Uvh elfutils-libelf-0.*
rpm -Uvh elfutils-libelf-devel-*
rpm -Uvh gcc-4.*
rpm -Uvh gcc-c++-4.*
rpm -Uvh glibc-2.*
rpm -Uvh glibc-common-2.*
rpm -Uvh glibc-devel-2.*
rpm -Uvh glibc-headers-2.*
rpm -Uvh ksh-2*
rpm -Uvh libaio-0.*
rpm -Uvh libaio-devel-0.*
rpm -Uvh libgcc-4.*
rpm -Uvh libstdc++-4.*
rpm -Uvh libstdc++-devel-4.*
rpm -Uvh make-3.*
rpm -Uvh sysstat-7.*
rpm -Uvh unixODBC-2.*
rpm -Uvh unixODBC-devel-2.*
cd /
eject

And don't forget to do the rpm installations on all node servers; they are probably not needed on the NFS server but I did it anyway. It's a good idea to copy the rpm files to a separate area so you can install them from the hard drive on machines beyond the initial RAC server.

Verify that all the required packages are properly installed; querying them by name will report anything missing:

rpm -q binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers ksh libaio libaio-devel libgcc libstdc++ libstdc++-devel make sysstat unixODBC unixODBC-devel

Alternatively, use yum to install the packages online:

yum install -y binutils-2.*
yum install -y compat-libstdc++-33*
yum install -y elfutils-libelf-0.*
yum install -y elfutils-libelf-devel-*
yum install -y gcc-4.*
yum install -y gcc-c++-4.*
yum install -y glibc-2.*
yum install -y glibc-common-2.*
yum install -y glibc-devel-2.*
yum install -y glibc-headers-2.*
yum install -y ksh-2*
yum install -y libaio-0.*
yum install -y libaio-devel-0.*
yum install -y libgcc-4.*
yum install -y libstdc++-4.*
yum install -y libstdc++-devel-4.*
yum install -y make-3.*
yum install -y sysstat-7.*
yum install -y unixODBC-2.*
yum install -y unixODBC-devel-2.*

If Not Using DNS

Run system-config-network and remove all DNS screen entries other than hostname.localdomain, then restart networking on all servers:

service network restart

Check the /etc/resolv.conf file and clear out any entries:

cat /etc/resolv.conf

Make sure all DNS processing is stopped on all servers:

service nscd status
service named status

If they are running, stop them:

service nscd stop
service named stop

Using the GNS (Grid Naming Service) to Configure SCAN (OPTIONAL for this Configuration)

When Using DNS (NAS only)

Using a single IP address as the SCAN in the hosts file is a workaround for grid installation that allows DNS configuration to be postponed.

Leave DNS settings in system-config-network for now (they depend on the routers, etc).

bind is required on the DNS server ONLY (NAS):

rpm -Uvh ypbind-1.19-12.el5.i386.rpm
rpm -Uvh bind-libs-9.3.6-4.P1.el5_4.2.i386.rpm
rpm -Uvh bind-9.3.6-4.P1.el5_4.2.i386.rpm 
rpm -Uvh bind-utils-9.3.6-4.P1.el5_4.2.i386.rpm
rpm -Uvh bind-chroot-9.3.6-4.P1.el5_4.2.i386.rpm         
rpm -Uvh bind-sdb-9.3.6-4.P1.el5_4.2.i386.rpm
rpm -Uvh bind-devel-9.3.6-4.P1.el5_4.2.i386.rpm          
rpm -Uvh bind-libbind-devel-9.3.6-4.P1.el5_4.2.i386.rpm
rpm -Uvh system-config-bind-4.0.3-4.0.1.el5.noarch.rpm

One easy way to start this process is to run the system-config-bind tool, which you should be able to run through X and Xming onto your client machine from PuTTY; it can be run on the Linux console if need be.

Edit the contents of /var/named/chroot/etc/named.conf:

[root@rac1 named]# vi /var/named/chroot/etc/named.conf

options {
        directory "/var/named";
        dump-file "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        listen-on port 53 { 127.0.0.1; 192.168.1.150; };
        /*
         * If there is a firewall between you and nameservers you want
         * to talk to, you might need to uncomment the query-source
         * directive below. Previous versions of BIND always asked
         * questions using port 53, but BIND 8.1 uses an unprivileged
         * port by default.
         */
        // query-source address * port 53;
};
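
If the zone definitions are not already present (system-config-bind normally adds them), named.conf also needs a zone stanza pointing at the zone file; a minimal sketch, assuming the file name used in the next step:

zone "localdomain" IN {
        type master;
        file "localdomain.zone";
        allow-update { none; };
};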

Add all hostnames and related ip addresses to /var/named/chroot/var/named/localdomain.zone:

[root@rac1 named]# vi /var/named/chroot/var/named/localdomain.zone

$TTL    86400
@               IN SOA  localhost root.localhost (
                                42      ; serial (d. adams)
                                3H      ; refresh
                                15M     ; retry
                                1W      ; expiry
                                1D )    ; minimum
                IN NS   localhost
localhost       IN A    127.0.0.1
nas             IN A    192.168.1.146
rac-scan        IN A    192.168.1.150
rac1            IN A    192.168.1.148
rac1-priv       IN A    192.168.1.151
rac1-vip        IN A    192.168.1.152
rac2            IN A    192.168.1.149
rac2-priv       IN A    192.168.1.155
rac2-vip        IN A    192.168.1.156

Then start up DNS:

service nscd stop
service named stop
service named start
chkconfig named on
service nscd start

Add the following to the /etc/resolv.conf file (not the one under /var/named/chroot), commenting out any internet nameserver entries:

nameserver 192.168.1.150
search localdomain

Now let’s test it:

nslookup rac-scan
nslookup rac-scan.localdomain
nslookup nas
nslookup nas.localdomain
nslookup rac1
nslookup rac1-priv
nslookup rac1-vip
nslookup rac1.localdomain
nslookup rac1-priv.localdomain
nslookup rac1-vip.localdomain
nslookup rac2
nslookup rac2-priv
nslookup rac2-vip
nslookup rac2.localdomain
nslookup rac2-priv.localdomain
nslookup rac2-vip.localdomain
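
If DNS is working, each lookup should come back from the new nameserver with the address defined in the zone file; the SCAN lookup, for example, should produce output roughly like this (addresses as configured above):

# nslookup rac-scan
Server:         192.168.1.150
Address:        192.168.1.150#53

Name:   rac-scan.localdomain
Address: 192.168.1.150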

Setting Virtual Memory Space for Linux (RAC1 & RAC2)

Temp (virtual memory) space in Linux has to be at least 1.5GB for Oracle's Automatic Memory Management feature to work during installation (Red Hat installs with a 1GB default):

umount tmpfs
mount -t tmpfs shmfs -o size=1500m /dev/shm

Don't forget to resize tmpfs properly on all nodes and the NFS server. Also note that the above mount command only resizes the tmp space until the next restart, so it's best to edit /etc/fstab to retain the 1.5GB setting, replacing the defaults option with size=1500m as follows:

#tmpfs                   /dev/shm                tmpfs   defaults        0 0
tmpfs                   /dev/shm                 tmpfs   size=1500m     0 0
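
A quick way to confirm the new size has taken effect (both after the mount command and after a restart with the fstab change in place):

df -h /dev/shm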

Linux Kernel Parameters for Oracle (RAC1 & RAC2)

These settings will be checked (and flagged) during installation, so we might as well do them now on both nodes and the NFS server (it won't do any harm and Oracle datafiles will be stored there). Edit /etc/sysctl.conf:

fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
# semaphores: semmsl, semmns, semopm, semmni
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

To apply the above kernel parameters do this (any errors should be obvious, and you don't want to restart if something is wrong in this configuration file):

/sbin/sysctl -p

Linux Security, Oracle and PAM

Add the following lines to the /etc/security/limits.conf file:

oracle              soft    nproc   2047
oracle              hard    nproc   16384
oracle              soft    nofile  1024
oracle              hard    nofile  65536
grid                soft    nproc   2047
grid                hard    nproc   16384
grid                soft    nofile  1024
grid                hard    nofile  65536

Add this to the /etc/pam.d/login configuration file:

session    required     pam_limits.so
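
Once the PAM change is in place, new login sessions for oracle and grid should pick up the limits; a quick check from root (the values should match limits.conf above):

su - grid -c 'ulimit -Sn; ulimit -Hn; ulimit -Su; ulimit -Hu'
su - oracle -c 'ulimit -Sn; ulimit -Hn; ulimit -Su; ulimit -Hu'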

Disable SELinux altogether by setting the following in the /etc/selinux/config file (change enforcing or permissive to disabled); this can also be done through the Security Level GUI in Linux:

SELINUX=disabled

Check that the firewall is stopped:

/etc/rc.d/init.d/iptables status

And turn the firewall off at boot so that UDP and ICMP traffic between the nodes is not rejected:

chkconfig iptables off

If NTP (Network Time Protocol) is not used then Oracle will use the Oracle Cluster Time Synchronization Service (ctssd) to synchronize the times of the RAC nodes; so deconfigure NTP as follows:

service ntpd stop
chkconfig ntpd off
mv /etc/ntp.conf /etc/ntp.conf.org
rm /var/run/ntpd.pid

Oracle Linux Groups and Users

Oracle requires specific groups and users for installation of the Oracle software:

/usr/sbin/groupadd -g 1010 oinstall
/usr/sbin/groupadd -g 1020 asmadmin
/usr/sbin/groupadd -g 1021 asmdba
/usr/sbin/groupadd -g 1030 asmoper
/usr/sbin/groupadd -g 1031 dba
/usr/sbin/groupadd -g 1032 oper
useradd -u 1101 -g oinstall -G dba,asmdba oracle
useradd -u 1100 -g oinstall -G dba,asmadmin,asmdba,asmoper grid
passwd oracle
passwd grid

Don’t forget all nodes and the NFS server.
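
A quick check on each server that the users ended up in the intended groups:

id oracle
id grid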

Now set up the grid user's .bash_profile. Log in as grid:

su - grid

and edit ~/.bash_profile so it contains something like this (substitute the node number for <N>):
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
export ORACLE_HOSTNAME=rac<N>.localdomain
export ORACLE_BASE=/u01/app
export ORACLE_HOME=$ORACLE_BASE/grid/11.2.0
PATH=$PATH:$HOME/bin
export PATH
umask 022
export EDITOR=vi
set -o vi
alias rm='rm -i'

And test it by sourcing it:

. .bash_profile

Shared Disks Configuration

Run system-config-lvm from the command line to sort out the disk partitions, formats and allocations visually. Setting up partitions and formatting on unused free disk space might require use of the parted command.

Log in as root on NAS and create the shared directories:

NAS:

mkdir -p /cluster
chown -R grid:oinstall /cluster
chmod -R 775 /cluster
mkdir -p /oradata
chown -R oracle:oinstall /oradata
chmod -R 775 /oradata

Add this to the /etc/exports file on NAS to export the new shared areas as separate shares that other machines will be able to work with:

NAS:

/cluster        *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/oradata        *(rw,sync,no_wdelay,insecure_locks,no_root_squash)

Now enable and restart NFS on NAS so that the shares are accessible:

NAS:

chkconfig nfs on
service nfs restart
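
From one of the RAC nodes the exports can be confirmed with showmount (part of nfs-utils):

showmount -e nas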

Create an Area to Install grid Into on RAC1 and RAC2 (not the database area; the oracle area is different)

Log in as root and create the directory structure for the grid software:

mkdir -p /u01/app/grid
mkdir -p /u01/app/11.2.0/grid
chown -R grid:oinstall /u01
chmod -R 775 /u01

When installing grid, set the Oracle base in the runInstaller to /u01/app/grid, and for oracle use /u01/app/oracle (the base directories have to be different).

Later, when installing the database software, create its area and change ownership of /u01 to oracle:

mkdir -p /u01/app/11.2.0/oracle
chmod 775 /u01/app/11.2.0/oracle
chown -R oracle:oinstall /u01
chmod -R 775 /u01

Now log in to both RAC1 and RAC2 as root and create working directories on RAC1 and RAC2 (not on NAS). These working directories will be mounted onto the NFS shares from both nodes and will therefore be shared (not every part of the Oracle installation on each node is shared, hence the separate shares):

RAC1 & RAC2:

mkdir -p /u01/cluster
chown -R grid:oinstall /u01/cluster
chmod -R 775 /u01/cluster
mkdir -p /u01/oradata
chown -R grid:oinstall /u01/oradata
chmod -R 775 /u01/oradata

Add the following lines to the /etc/fstab file on RAC1 and RAC2 (not on NAS) so that they mount the NFS shares on restart:

RAC1 & RAC2:

nas:/cluster /u01/cluster nfs rw,hard,wsize=32768,rsize=32768,timeo=300,actimeo=0 0 0
nas:/oradata /u01/oradata nfs rw,hard,wsize=32768,rsize=32768,timeo=300,actimeo=0 0 0

The two lines above produce a warning during verification after the grid install; the mount options suggested by the verifier gave a Linux error on mounting, so I left them as they are.

Mount the NFS shares on RAC1 and RAC2 now (changing /etc/fstab only mounts the shares at the next Linux restart):

RAC1 & RAC2:

mount /u01/cluster
mount /u01/oradata
chown -R grid:oinstall /u01/cluster
chown -R oracle:oinstall /u01/oradata
chmod -R 775 /u01
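
To confirm both shares are mounted with the intended options:

mount | grep /u01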

Create the shared OCR configuration and voting disk locations in the shared storage area; run on RAC1 and test on RAC2 to ensure accessibility. The commented-out commands below gave an error at this stage (the ocr and vdsk files under /oradata/rac-scan are created later, during the grid install), so for now just touch the mount points to confirm the shares are writable from both nodes:

RAC1 & RAC2:

#touch /oradata/rac-scan/ocr
#touch /oradata/rac-scan/vdsk
touch /u01/cluster
touch /u01/oradata

On the RAC1 node: Download and Change Ownership of Oracle Software for grid Install

Connected as root again:

mkdir -p /oinstall
chown -R grid:oinstall /oinstall
chmod -R 775 /oinstall

Download the grid (clusterware) and database software into /oinstall and unzip:

su - grid
unzip linux_11gR2_grid.zip

runInstaller for grid

On your Windows client machine run Xming (an X server for Windows) – or just build your servers in a virtualized environment. Then log in as root from PuTTY on the Windows client, with X11 forwarding enabled in PuTTY, and do this:

Later on, in the advanced install, create the external redundancy locations as /oradata/rac-scan/ocr and /oradata/rac-scan/vdsk. I found that changing any part of this path gave me errors later on in the installation scripts.

Also, when specifying the installation location in the installer, the Oracle base and software locations must be different due to ownership and permission requirements: the Oracle base is /u01/app/grid and the software location is /u01/app/11.2.0/grid.

xclock

You should get xclock running on your Windows client machine screen. Now do this:

[root@rac1 ~]# su - grid
[oracle@rac1 ~]$ xclock
Xlib: connection to "localhost:11.0" refused by server
Xlib: PuTTY X11 proxy: wrong authentication protocol attempted
Error: Can't open display: localhost:11.0
[oracle@rac1 ~]$

The above problem is an X11 authorization issue and is resolved by going to root, listing the X magic cookies, and adding them to the grid user in Linux.
If you can run the Oracle install software under the root user, don't – that will create huge problems for you down the line with Oracle RAC.
Do the following as root:

[root@rac1 ~]# xauth list
rac1.localdomain/unix:12  MIT-MAGIC-COOKIE-1  8fa8670cb0820da76e5f5d41e093b5b4
rac1.localdomain/unix:10  MIT-MAGIC-COOKIE-1  a166f879e16ebdc950964a910bbaeaf9
rac1.localdomain/unix:11  MIT-MAGIC-COOKIE-1  908bae9579979edcded14256ddd5f24e

And now, logged in as grid, authorize the grid user by adding the cookies:

xauth add rac1.localdomain/unix:12  MIT-MAGIC-COOKIE-1  8fa8670cb0820da76e5f5d41e093b5b4
xauth add rac1.localdomain/unix:10  MIT-MAGIC-COOKIE-1  a166f879e16ebdc950964a910bbaeaf9
xauth add rac1.localdomain/unix:11  MIT-MAGIC-COOKIE-1  908bae9579979edcded14256ddd5f24e
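
Still logged in as grid, verify the cookies are in place and retry the X test – xclock should now appear on the client:

xauth list
xclock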

Start Here in a Virtualized Environment

su - grid

Now go to /oinstall/grid and execute the installer:

./runInstaller

On the Specify Cluster Configuration screen I add my second node and my NFS file server (the NFS server will be removed shortly).
Set up SSH connectivity for the grid user in the runInstaller tool, then come back and remove the NFS server. Then run this in a shell:

/oinstall/grid/runcluvfy.sh stage -pre crsinst -n rac1,rac2,nas -verbose > /oinstall/grid/cluvfy.out
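
The output file can then be scanned for anything that needs fixing before continuing, for example:

grep -i failed /oinstall/grid/cluvfy.out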

And set the Oracle base in the runInstaller to /u01/app/grid for grid, and /u01/app/oracle for oracle (when we get to the database install).

Clusterware Installation

This is a separate process from the database software and needs a separate username (not the oracle database user).

Use the configuration screen to set up SSH connectivity for all servers, including the NFS and DNS servers. I even added a VIP address
for my NAS server just to automate the SSH connectivity. SSH can be set up manually, but using the tool is easier.

A useful reference for this part of the install:

http://oracleabout.blogspot.com/2012/08/install-11203-grid-infrastructure-and.html

oracle User .bash_profile Settings


mkdir -p /oradata/rac-scan
chown -R grid:oinstall /oradata
chmod -R 775 /oradata
mkdir /oracle
chown -R oracle:oinstall /oracle
chmod -R 775 /oracle

Log into the oracle user on the RAC1 and RAC2 nodes and edit .bash_profile like this (this is the profile I prefer; I generally don't bother with anything other than bash unless I find that something else is used):

RAC1 (change for RAC2):

# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

## Oracle Setup ##
umask 022
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1
#export ORACLE_SID=rac1
export ORACLE_HOSTNAME=rac1.localdomain
export ORACLE_UNQNAME=rac

#export JAVA_HOME=$ORACLE_HOME/jdk
#export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
#export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

#export TNS_ADMIN=$ORACLE_HOME/network/admin
#export PS1="[\u@\$ORACLE_SID \W]\\$ "
#export PATH=$HOME/bin:$ORACLE_HOME/bin:$JAVA_HOME/bin:/usr/sbin:$PATH

export EDITOR=vi
set -o vi

export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_TERM=xterm
if [ $USER = "oracle" ]; then
    ulimit -u 16384 -n 65536
fi

alias tns='cd $ORACLE_HOME/network/admin'
alias dbs='cd $ORACLE_HOME/dbs'
alias home='cd $ORACLE_HOME'
alias base='cd $ORACLE_BASE'
alias flash='cd $ORACLE_BASE/flash_recovery_area'
alias oem='cd $ORACLE_HOME/sysman/config'
alias rm='rm -i'

Source the profile by doing this on all nodes:

. .bash_profile

The .bash_profile for the NAS server is different because it's just a file server and doesn't run an Oracle instance, even though it will contain database files:

# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

## Oracle Setup ##
umask 022

export PS1="[\u@$HOST \W]\\$ "
export PATH=$HOME/bin:/usr/sbin:$PATH

export EDITOR=vi
set -o vi

export TMP=/tmp
export TMPDIR=$TMP
if [ $USER = "oracle" ]; then
    ulimit -u 16384 -n 65536
fi

alias rm='rm -i'

And don’t forget this on NAS:

. .bash_profile

Installing Oracle Software

When installing grid the Oracle base in the runInstaller was set to /u01/app/grid; for oracle use /u01/app/oracle (the base directories have to be different).

This is where you get to run the Oracle software installer from the linux_11gR1_database_1013.zip file, which gets unzipped into the /oinstall/database directory. All the selections are similar to a single instance installation – just don't forget to add the other nodes when the node selection screen appears. There is also a simple root.sh script to execute on both nodes at the end.

The Oracle Home goes into /u01/app/oracle/product/11.2.0/dbhome_1.

Like the clusterware installer, the database installer is executed from the primary RAC1 node and not on all nodes in the cluster – the software installs on multiple nodes with a single execution.

Create a DBCA Database

  1. Select all the nodes in the cluster.
  2. Set global database name to rac.world.
  3. Set SID prefix to rac.
  4. I selected a cluster file system (uses the shared NAS/NFS drive).
  5. You will have to change the database datafiles default path to /u01/oradata.
  6. Undo datafiles for each node cannot be separated to local storage for each instance at this stage.
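
Once DBCA finishes, the state of the new cluster database can be checked from either node with srvctl (database name rac as chosen above):

srvctl status database -d rac
srvctl config database -d rac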

TNS Configuration

Like most of a RAC installation, this is automated common sense once all the *NIX groundwork is squared away. The listener and TNS configuration files are generated automatically, as follows:

# listener.ora Network Configuration File: /u01/app/oracle/product/11.1.0/db_1/network/admin/listener.ora
# Generated by Oracle configuration tools.

LISTENER_RAC2 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip)(PORT = 1521)(IP = FIRST))
      )
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.6)(PORT = 1521)(IP = FIRST))
      )
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
      )
    )
  )

LISTENER_RAC1 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521)(IP = FIRST))
      )
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.5)(PORT = 1521)(IP = FIRST))
      )
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
      )
    )
  )

And for the tnsnames.ora file:

# tnsnames.ora Network Configuration File: /u01/app/oracle/product/11.1.0/db_1/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

RAC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = rac.world)
    )
  )

LISTENERS_RAC =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip)(PORT = 1521))
  )

RAC2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = rac.world)
      (INSTANCE_NAME = rac2)
    )
  )

RAC1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = rac.world)
      (INSTANCE_NAME = rac1)
    )
  )

And you should be able to connect to RAC, RAC1 and RAC2. I found that a full cold restart on all three servers made it all work happily as well.
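
A quick test of the aliases from any machine that has this tnsnames.ora (tnsping ships with the Oracle client and server installs):

tnsping rac
tnsping rac1
tnsping rac2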

Before removing the database and Oracle software, use the following command for a clean shutdown of the entire rac:

srvctl stop database -d RAC

You can stop each instance individually but the srvctl utility is part of the clusterware installation and stops the entire rac from the clusterware layer.
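
The matching commands to check status and bring everything back up are:

srvctl status database -d RAC
srvctl start database -d RAC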

Configuring ASM

To configure ASM I have to go all the way back to the Oracle software installation and install ASM, as I omitted it on my first pass of installing the Oracle software. My installation of the Oracle software did not include ASM creation, so this goes back to the clusterware installation; I therefore also uninstall the Oracle database software using the Oracle deinstallation option (I have not as yet deinstalled the clusterware software). ASM is configured with the clusterware install (makes sense), and clusterware cannot re-install into an existing home, so the clusterware must be removed and then installed again. Stop the clusterware layer with this command (logged in as root):

crsctl stop crs

Creating a Database on ASM

ASM configuration is built in at the Oracle software installation stage or in the DBCA. The ASMLib packages come from Oracle's ASMLib download pages: find the correct version of Linux and download the Library and Tools in addition to the Drivers for the kernel.

When I check the version of my installed kernel I get this:

[root@nas asm]# uname -r
2.6.18-194.el5

Three ASM packages are required:

rpm -Uvh oracleasm-support-2.1.3-1.el5.i386.rpm
rpm -Uvh oracleasm-2.6.18-194.el5-2.0.5-1.el5.i686.rpm
rpm -Uvh oracleasmlib-2.0.4-1.el5.i386.rpm

If, like me, you installed the wrong packages or too many, use this command to find them:

rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n"| grep oracleasm | sort

And remove incorrectly installed packages with:

rpm -ev [package_name]

The required package versions depend on the output of the uname -r command.

Next you need to configure ASM on Linux startup with the following command:

/etc/init.d/oracleasm configure

You should get output something like this:

Default user to own the driver interface []: oracle
Default group to own the driver interface []: oinstall
Start Oracle ASM library driver on boot (y / n) [n]: y
Scan for Oracle ASM disks on boot (y / n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [OK]
Scanning the system for Oracle ASMLib disks: [OK]
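
With the driver configured, shared disks are marked for ASM with createdisk on one node and then picked up on the other nodes; a sketch, with the device name as a placeholder for whatever partition is being given to ASM:

/etc/init.d/oracleasm createdisk DISK1 /dev/<partition>
/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm listdisks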

ASM needs extra groups for the OSASM and OSDBA roles (skip this if asmadmin and asmdba were already created earlier):

/usr/sbin/groupadd -g 504 asmadmin
/usr/sbin/groupadd -g 506 asmdba

Omit Enterprise Manager when creating the database.

In the Event Of Deinstall

Do not use the separately downloaded deinstall zip tool – it gave me nothing but trouble!

On RAC1 navigate to the deinstall directory under the Software Location created by the grid infrastructure installation:

cd /u01/app/11.2.0/grid/deinstall
./deinstall

Follow all the instructions and commands on each node in sequence.

Post Database Creation

To find the value for the ORACLE_UNQNAME environment variable:

SELECT name, db_unique_name FROM v$database;
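
The query returns the unique name (rac in this build), which is what goes into the ORACLE_UNQNAME export in the oracle user's .bash_profile shown earlier, for example:

export ORACLE_UNQNAME=rac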