Oracle Application Server 10g (9.0.4) Cold Failover Cluster Infrastructure On Red Hat Enterprise Linux 3 (RHEL3)
In this article I'll describe the installation of Oracle Application Server 10g (9.0.4) Cold Failover Cluster Infrastructure on Red Hat Enterprise Linux 3 (RHEL3). The article assumes you've performed the standard advanced server installation including the development tools, and have set up shared disks and a cluster service associated with the shared disks.
At the time of writing this method is only documented for 9iAS on Sun Cluster, with amendments for Linux; see 9ias_cfc.pdf.
- Download Software
- Unpack Files
- Hosts File
- Set Kernel Parameters
- Setup
- Infrastructure Installation
- Post Infrastructure Installation
- Starting and Stopping Services
Download Software
Download the following software.
- Oracle Application Server 10g (9.0.4) Software
- Patch 3006854
- Cold Failover Cluster Infrastructure Support Files
Unpack Files
First unzip the files.
gunzip ias904_linux_disk1.cpio.gz
gunzip ias904_linux_disk2.cpio.gz
gunzip ias904_linux_disk3.cpio.gz
gunzip ias904_linux_disk4.cpio.gz
Next unpack the contents of the files.
cpio -idmv < ias904_linux_disk1.cpio
cpio -idmv < ias904_linux_disk2.cpio
cpio -idmv < ias904_linux_disk3.cpio
cpio -idmv < ias904_linux_disk4.cpio
You should now have four directories (Disk1, Disk2, Disk3 & Disk4) containing installation files.
Hosts File
The "/etc/hosts" file on each node must contain qualified names for all servers, a cluster alias and the heartbeat IP addresses.
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost.localdomain    localhost
192.168.1.11    infra01.mydomain.com     infra01
192.168.1.12    infra02.mydomain.com     infra02
192.168.1.10    clust01.mydomain.com     clust01
#
# Heartbeat IPs
#
192.168.0.10    infra01-int.mydomain.com  infra01-int
192.168.0.11    infra02-int.mydomain.com  infra02-int
Set Kernel Parameters
Add the following lines to the "/etc/sysctl.conf" file on each node.
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 142
# semaphores: semmsl, semmns, semopm, semmni
kernel.sem = 256 32000 100 142
fs.file-max = 131072
net.ipv4.ip_local_port_range = 20000 65000
kernel.msgmni = 2878
kernel.msgmax = 8192
kernel.msgmnb = 65535
Run the following command to change the current kernel parameters.
/sbin/sysctl -p
Add the following lines to the "/etc/security/limits.conf" file on each node.
*    soft    nproc     16384
*    hard    nproc     16384
*    soft    nofile    65536
*    hard    nofile    65536
Add the following line to the "/etc/pam.d/login" file on each node, if it does not already exist.
session required pam_limits.so
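To confirm the limits are being applied, open a new login session and check them. The expected values below come from the limits.conf settings above; note that ksh uses "-p" for the process limit where bash uses "-u".

```shell
# In a *new* login session (pam_limits only applies at login),
# the shell limits should reflect the limits.conf values above.
ulimit -n    # open file descriptors - expect 65536
ulimit -u    # max user processes    - expect 16384 (use "ulimit -p" in ksh)
```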
The "/etc/services" file should be modified to remove any references to ldap and ldaps. These references will prevent the installer from using the default ldap and ldaps ports (389 and 636 respectively).
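One way to do this is to comment the entries out rather than delete them. The commands below are just one way to match the ldap and ldaps lines; a backup is taken first, and the edit needs to be run as root.

```shell
# Comment out the ldap (389) and ldaps (636) entries in /etc/services
# so the installer is free to claim the default ports. The original
# file is preserved so the change can be reversed.
cp /etc/services /etc/services.orig
sed 's/^\(ldaps\{0,1\}[[:space:]]\)/#\1/' /etc/services.orig > /etc/services
```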
Ensure that the appropriate DNS entries are present for the nodes and the cluster alias. In addition, make sure the IP address of the cluster alias (the logical IP address) is associated with the appropriate cluster service. To check this, do the following.
- Run the redhat-config-cluster command or run it from the menu (System Settings -> Server Settings -> Cluster).
- Highlight the relevant service and click the "Properties" button.
- The logical IP address is displayed in the "Service Information" dialog.
If no logical IP address is associated with the service, do the following.
- Run the redhat-config-cluster command or run it from the menu (System Settings -> Server Settings -> Cluster).
- Highlight the relevant service and click the "Disable" button.
- From the menu select "Cluster -> Configure".
- Click the "Services" tab, highlight the relevant service and click the "Add Child" button.
- Select the "Add Service IP Address" option and click the "OK" button.
- Enter the relevant cluster alias (logical) network details and click the "OK" button.
- Return to the main dialog, saving any config changes along the way.
- Highlight the relevant service and click the "Enable" button.
- Ping the cluster alias to test it is working correctly.
Setup
Install the following packages on each node.
# From RedHat AS3 Disk 2
cd /mnt/cdrom/RedHat/RPMS
rpm -Uvh setarch-1.3-1.i386.rpm
rpm -Uvh sysstat-4.0.7-4.i386.rpm

# From RedHat AS3 Disk 3
cd /mnt/cdrom/RedHat/RPMS
rpm -Uvh openmotif21-2.1.30-8.i386.rpm
rpm -Uvh ORBit-0.5.17-10.4.i386.rpm
rpm -Uvh libpng10-1.0.13-8.i386.rpm
rpm -Uvh gnome-libs-1.4.1.2.90-34.1.i386.rpm
rpm -Uvh compat-glibc-7.x-2.2.4.32.5.i386.rpm \
         compat-gcc-7.3-2.96.122.i386.rpm \
         compat-gcc-c++-7.3-2.96.122.i386.rpm \
         compat-libstdc++-7.3-2.96.122.i386.rpm \
         compat-libstdc++-devel-7.3-2.96.122.i386.rpm
Make gcc296 and g++296 the default compilers by creating the following symbolic links on each node.
mv /usr/bin/gcc /usr/bin/gcc323
mv /usr/bin/g++ /usr/bin/g++323
ln -s /usr/bin/gcc296 /usr/bin/gcc
ln -s /usr/bin/g++296 /usr/bin/g++
Install the 3006854 patch on each node.
unzip p3006854_9204_LINUX.zip
cd 3006854
sh rhel3_pre_install.sh
Install the CFC Infrastructure support files.
tar -xvf CFCInfra_linux.tar
cd setup
chown root libloghost.so.1
chgrp root libloghost.so.1
chmod 4755 libloghost.so.1
cp libloghost.so.1 /usr/lib
The following will test if the cluster support is working.
$ hostname
infra01.mydomain.com
$ LD_PRELOAD=libloghost.so.1; export LD_PRELOAD
$ LHOSTNAME=clust01.mydomain.com; export LHOSTNAME
$ hostname
clust01.mydomain.com
Do not proceed until the cluster alias is being returned by the hostname command.
Create the new groups and users on each node, making sure the IDs of the groups and users match on each node.
groupadd oinstall -g 500
groupadd dba -g 501
groupadd oper -g 502

useradd -g oinstall -G dba -s /bin/ksh -u 400 oracle
passwd oracle
Create the directories in which the Oracle software will be installed. This step need only be performed on the node currently running the cluster service, as it is the only one that can currently see the shared disks.
mkdir -p /u01/app/oracle/product/904_infra
chown -R oracle.oinstall /u01
Login as root and issue the following command.
xhost +<machine-name>
Login as the oracle user and add the following lines at the end of the ".profile" file on each node.
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR

ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/904_infra; export ORACLE_HOME
ORACLE_TERM=xterm; export ORACLE_TERM

PATH=/usr/sbin:$ORACLE_HOME/bin:$PATH; export PATH
PATH=$PATH:$ORACLE_HOME/dcm/bin:$ORACLE_HOME/opmn/bin; export PATH
PATH=$PATH:$ORACLE_HOME/Apache/Apache/bin; export PATH

if [ $USER = "oracle" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 16384
  else
    ulimit -u 16384 -n 16384
  fi
fi

PS1="`hostname`> "
set -o emacs
set filec
Infrastructure Installation
Log in as the oracle user on the node running the cluster service. If you are using X emulation, set the DISPLAY environment variable.
DISPLAY=<machine-name>:0.0; export DISPLAY
Start the Oracle Universal Installer (OUI) by issuing the following command in the Disk1 directory.
./runInstaller
During the installation select the appropriate ORACLE_HOME for infrastructure (904_infra) and select the infrastructure installation and proceed as normal.
Post Infrastructure Installation
On completion of the infrastructure installation connect to the Enterprise Manager Website (http://clust01.mydomain.com:1810) using the username "ias_admin" and the password you assigned during the installation. If EM is not available start it with the "emctl start iasconsole" command. Check that all the components are started.
The "$ORACLE_HOME/opmn/conf/opmn.xml" should have the following entries added under the <process-type> tag for all OC4J components.
<environment>
   <variable id="DISPLAY" value="clust01.mydomain.com:0.0"/>
   <variable id="LD_LIBRARY_PATH" value="/u01/app/oracle/product/904_infra/lib"/>
   <variable id="LD_PRELOAD" value="libloghost.so.1"/>
   <variable id="LHOSTNAME" value="clust01.mydomain.com"/>
</environment>
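For context, a sketch of where the block sits in the opmn.xml file follows. The process-type id and module-id values shown here are illustrative only and will differ between installations.

```xml
<!-- Illustrative fragment only: the surrounding ias-instance and
     ias-component elements are omitted, and the id values will vary. -->
<process-type id="home" module-id="OC4J">
   <environment>
      <variable id="DISPLAY" value="clust01.mydomain.com:0.0"/>
      <variable id="LD_LIBRARY_PATH" value="/u01/app/oracle/product/904_infra/lib"/>
      <variable id="LD_PRELOAD" value="libloghost.so.1"/>
      <variable id="LHOSTNAME" value="clust01.mydomain.com"/>
   </environment>
</process-type>
```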
Starting and Stopping Services
The cluster support files contain scripts to start and stop the infrastructure. The following are copies of the files that have been modified to support AS10g.
Only the start and stop scripts should need adjustment with the relevant paths, cluster aliases and passwords. The infrastructure can then be started and stopped by running the start and stop scripts as the root user.
To switch the infrastructure between nodes do the following on the node currently running the infrastructure.
- Stop the infrastructure by running the "stop" script as the root user.
- Run the redhat-config-cluster command or run it from the menu (System Settings -> Server Settings -> Cluster).
- Highlight the relevant service and click the "Properties" button.
- Select the node you wish to switch to in the "Member" dropdown and wait until the service has switched.
Then on the node you are switching to do the following.
- Start the infrastructure by running the "start" script as the root user.
- Check everything is running OK by logging into Enterprise Manager Console.
Hope this helps. Regards Tim...