Oracle Database 10g (10.1.0.2) RAC Installation on Tru64 5.1b
This article is intended as a brief guide to installing Oracle Database 10g (10.1.0.2) Real Application Clusters (RAC) on Tru64 5.1b. It assumes that the servers are using a cluster file system and all hardware cluster setup has been completed.
- Download Software
- Unpack Files
- Set Kernel Parameters
- Setup
- CRS Installation
- DB Installation
- Post Installation
- Client Network Configuration
Download Software
Download the Alpha Java SDK v1.4.2-3.
Download the Oracle installation files from otn.oracle.com.
Unpack Files
First unzip the files.
gunzip ship_10g_crs_disk1.cpio.z
gunzip ship_10g_db_disk1.cpio.z
gunzip ship_10g_db_disk2.cpio.z
Next unpack the contents of the files.
mkdir crs
mv ship_10g_crs_disk1.cpio crs
cd crs
cpio -idmv < ship_10g_crs_disk1.cpio
cd ..
cpio -idmv < ship_10g_db_disk1.cpio
cpio -idmv < ship_10g_db_disk2.cpio
You should now have two directories (Disk1 and Disk2) containing the database installation files, along with a "crs" directory containing a Disk1 directory for the cluster ready services software.
Set Kernel Parameters
For a complete list of kernel settings read Configure Kernel Subsystem Attributes on Tru64 UNIX.
The current settings can be viewed using the following commands.
/sbin/sysconfig -q ipc
/sbin/sysconfig -q proc
If any settings need to be altered, place them in a stanza file (sys_tuning.stanza in this example) so they can be merged into the "/etc/sysconfigtab" file. The typical alterations are as follows.
vm:
    new_wire_method = 1                         # Set to 0 if <= 5.1b PK2
    vm_bigpg_enabled = 0

ipc:
    shm_max = 4278190080                        # (4 GB less 16MB)
    shm_min = 1
    shm_mni = 256
    shm_seg = 256

proc:
    max_per_proc_stack_size = 33554432
    per_proc_stack_size = 8388608
    max_per_proc_data_size = 335544320
    per_proc_data_size = 335544320
    max_per_proc_address_space = 1073741824     # or RAM (whichever is greater)
    per_proc_address_space = 1073741824

rdg:
    msg_size = 32768
    max_objs = 5120
    max_async_req = 256
    max_sessions = 500
    rdg_max_auto_msg_wires = 0
    rdg_auto_msg_wires = 0

inet:
    udp_sendspace = 65536
    udp_recvspace = 65536

rt:
    aio_task_max_num = 8193

vfs:
    fifo_do_adaptive = 0
Check Metalink Note: 169706.1 to confirm the current best settings.
With the file created it can be applied by issuing the following command on each node.
/sbin/sysconfigdb -m -f sys_tuning.stanza
Once the file is applied, check for and remove any duplicate or conflicting entries (like vm_swap_eager) in the "/etc/sysconfigtab" file. The server must be rebooted for the new settings to take effect, which can be done using the following command.
shutdown -r now
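The duplicate check can be scripted rather than done by eye. The sketch below is not from the original procedure: it lists any attribute name that appears more than once in a sysconfigtab-style stanza file. A scratch copy is used here so the example is safe to run; on a real node point CONFIG_FILE at /etc/sysconfigtab instead.

```shell
# Hypothetical helper: report attribute names appearing more than once in a
# sysconfigtab-style stanza file. A scratch copy is built for illustration;
# on a real node set CONFIG_FILE=/etc/sysconfigtab instead.
CONFIG_FILE=/tmp/sysconfigtab_copy
printf 'vm:\n vm_swap_eager = 1\n vm_swap_eager = 0\n shm_max = 1\n' > "$CONFIG_FILE"

# Attribute lines have the form "name = value"; strip whitespace from the
# name, then print names that occur more than once.
grep '=' "$CONFIG_FILE" | awk -F= '{gsub(/[ \t]/, "", $1); print $1}' | sort | uniq -d
```

Any name the pipeline prints (vm_swap_eager in the sample data above) should be resolved down to a single entry before rebooting.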
The performance of gettimeofday() calls made by Oracle can be improved by issuing the following commands as root on each node.
mknod /dev/timedev c 15 0
chmod 664 /dev/timedev
Setup
Install the Java development kit.
mkdir /tmp/java142
cp java142-3.tar /tmp/java142/
cd /tmp/java142
tar -xvf java142-3.tar
setld -l /tmp/java142
Choose the option to load all subsets.
Create the new groups and users. Since the environment is clustered this only needs to be done on a single node.
# addgroup
Enter a new group name or <Return> to exit: oinstall
Enter a new group number [200]: <CR>
Group oinstall was added to the /etc/group file.

# addgroup
Enter a new group name or <Return> to exit: dba
Enter a new group number [201]: <CR>
Group dba was added to the /etc/group file.

# adduser
Enter a login name for the new user (for example, john): oracle
Enter a UID for (oracle) [101]: 400
Enter a full name for (oracle): Oracle software owner & admin
Enter a login group for (oracle) [users]: oinstall
Enter another group that (oracle) should be a member of. (<Return> only if none): dba
Enter another group that (oracle) should be a member of. (<Return> only if none):
Enter a parent directory for (oracle) [/usr/users]: <CR>

The shells are:

/usr/bin/sh     /usr/bin/ksh     /bin/sh     /bin/ksh
/usr/bin/csh    /usr/bin/posix/sh            /bin/csh

Enter a login shell for (oracle) [/bin/sh]: /bin/ksh
Adding new user...
You must enter a new password for (oracle).
Changing password for oracle.
New password:
Retype new password:
Hashed database not in use, only /etc/passwd text file updated.
Finished adding user account for (oracle).
Create additional directories for RAC configuration files.
mkdir /var/opt/oracle
chown oracle:oinstall /var/opt/oracle
chmod 755 /var/opt/oracle

mkdir /usr/local/bin
chmod 755 /usr/local/bin
Append the following line to the ".profile" file in the root user's home directory.
PATH=$PATH:/usr/local/bin; export PATH
Append the following line to the "/etc/services" file.
listener 1521/tcp # Oracle
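If this setup is scripted, the append can be guarded so that rerunning the script does not duplicate the entry. A sketch, using a scratch copy of the file for illustration (on a real node SERVICES would be /etc/services):

```shell
# Guarded append: only add the listener entry if it is not already present.
# SERVICES is a scratch copy here; on a real node use /etc/services.
SERVICES=/tmp/services_copy
touch "$SERVICES"

grep -q '^listener[[:space:]]' "$SERVICES" || \
  printf 'listener\t1521/tcp\t\t# Oracle\n' >> "$SERVICES"
```

Running the guarded append a second time leaves the file unchanged.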
Change the ownership of some filesystem directories.
chown -R oracle:oinstall /u01
chown -R oracle:oinstall /u02
Append the following lines to the ".profile" file in the oracle user's home directory (/usr/users/oracle).
TMP=/u01/tmp; export TMP
TMPDIR=$TMP; export TMPDIR

# Oracle RAC
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
CRS_HOME=$ORACLE_BASE/product/10.1.0/crs; export CRS_HOME
DB10G_HOME=$ORACLE_BASE/product/10.1.0/db_1; export DB10G_HOME

BASE_PATH=.:$PATH:/usr/sbin:/usr/ccs/bin:/usr/bin:/usr/bin/X11:/usr/local/bin
BASE_PATH=$BASE_PATH:/usr/opt/java142:/usr/opt/java142/bin; export BASE_PATH

ORACLE_HOME=$DB10G_HOME; export ORACLE_HOME
PATH=$BASE_PATH:$ORACLE_HOME/bin:$CRS_HOME/bin:$ORACLE_HOME/Apache/Apache/bin; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH

member=`clu_get_info|grep "this member"|awk '{print $6}'`; export member
ORACLE_SID=db10g$member; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
SRVM_SHARED_CONFIG=/u01/db10g_configuration; export SRVM_SHARED_CONFIG

DISPLAY=server01:0.0; export DISPLAY

stty erase "^H"
set -o emacs
set filec
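The ORACLE_SID line works by parsing the member number out of the clu_get_info output, so each node derives its own instance name (db10g1, db10g2, ...) from a single shared profile. The sketch below mimics that parsing using a captured sample line; the exact wording of the clu_get_info output is an assumption here, and on a live node the command itself would be piped through the same grep and awk.

```shell
# Sample clu_get_info output line (assumed format, for illustration only);
# on a real node this text would come from: clu_get_info | grep "this member"
CLU_LINE="Memberid of this member = 1"

# Field 6 of the matching line is the member number.
member=`echo "$CLU_LINE" | grep "this member" | awk '{print $6}'`
ORACLE_SID=db10g$member

echo $ORACLE_SID
```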
Make the Oracle directories.
mkdir /u01/tmp
mkdir -p /u01/app/oracle/product/10.1.0/crs
mkdir -p /u01/app/oracle/product/10.1.0/db_1
mkdir -p /u01/oradata/db10g
mkdir -p /u02/oradata/db10g
As we are using a cluster file system, we don't need to configure individual raw partitions. We simply create the shared configuration and voting files.
touch /u01/db10g_configuration
touch /u01/css_voting
Log in as root and issue the following command.
xhost +<machine-name>
CRS Installation
Log in as the oracle user. If you are using X emulation, set the DISPLAY environment variable.
DISPLAY=<machine-name>:0.0; export DISPLAY
Start the Oracle Universal Installer (OUI) by issuing the following command in the "./crs/Disk1" directory.
./runInstaller
Enter "crs" as the home name and the "/u01/app/oracle/product/10.1.0/crs" as the ORACLE_HOME then proceed with the installation entering the appropriate answers when requested. Be sure to run the $ORACLE_HOME/root.sh file on each node before proceeding with the configuration part of the installation.
DB Installation
Log in as the oracle user. If you are using X emulation, set the DISPLAY environment variable.
DISPLAY=<machine-name>:0.0; export DISPLAY
Start the Oracle Universal Installer (OUI) by issuing the following command in the Disk1 directory.
./runInstaller
With the exception of the cluster node selection screen the installation follows the usual format. Once the installation is complete run the root.sh on all nodes in the cluster when prompted.
You may wish to make some files member specific, although this is not necessary. The mkcdsl command creates context dependent symbolic links (CDSLs), so that a single shared path is actually a link to a member-specific directory.
mkcdsl -ac /u01/app/oracle/product/10.1.0/db_1/network/admin
mkcdsl -ac /u01/app/oracle/product/10.1.0/db_1/network/agent
mkcdsl -ac /u01/app/oracle/product/10.1.0/db_1/network/log
If for some reason you need to remove a CDSL, simply remove the link directory and remove it from the inventory.
rm /u01/app/oracle/product/10.1.0/db_1/network/agent
mkcdsl -i /u01/app/oracle/product/10.1.0/db_1/network/agent
The physical member-specific directories will still be present, so the original data can be copied back into the correct location before the member-specific directories are cleaned up manually.
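Under the covers a CDSL is an ordinary symbolic link whose target contains the literal token "{memb}", which the cluster file system resolves to the current member at runtime. The sketch below mimics that layout in a scratch directory so the idea can be seen off-cluster; real CDSLs are created with mkcdsl, and the paths here are illustrative only.

```shell
# Build a mock CDSL layout in a scratch directory. On Tru64 the real
# member-specific trees live under /cluster/members/{memb}.
DEMO=/tmp/cdsl_demo
rm -rf $DEMO
mkdir -p $DEMO/cluster/members/member1/admin

# The link target contains the literal token {memb}; TruCluster resolves it
# per member, so every node sees its own directory through the same path.
ln -s "$DEMO/cluster/members/{memb}/admin" $DEMO/admin

ls -l $DEMO/admin
```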
If you create an instance during the installation, the network files are created automatically. If not, you must create them manually. First create the "$ORACLE_HOME/network/admin/listener.ora" file. This can be done using the Oracle Net Configuration Assistant (netca), or by adding the file manually on each node, like the following.
LISTENER_SERVER01 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = server01-v)(PORT = 1521))
      )
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 123.123.123.1)(PORT = 1521))
      )
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
      )
    )
  )

LISTENER_SERVER02 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = server02-v)(PORT = 1521))
      )
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 123.123.123.2)(PORT = 1521))
      )
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
      )
    )
  )
Each node should have a copy of the "tnsnames.ora" file like the following.
db10g =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = server01-v)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = server02-v)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = db10g.ORACLE-BASE.COM)
    )
  )

db10g1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = server01-v)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = db10g.ORACLE-BASE.COM)
      (INSTANCE_NAME = db10g1)
    )
  )

db10g2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = server02-v)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = db10g.ORACLE-BASE.COM)
      (INSTANCE_NAME = db10g2)
    )
  )

LISTENERS_db10g =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = server01-v)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = server02-v)(PORT = 1521))
  )
The LISTENERS_db10g entry is referenced by the SID.LOCAL_LISTENER and SID.REMOTE_LISTENER parameters in the spfile.
Run the following commands on all nodes to start the GSD daemon and the listener.
$CRS_HOME/bin/gsdctl start
$ORACLE_HOME/bin/lsnrctl start
Starting the gsd daemon will take longer on subsequent nodes due to the extra inter-node communication.
Start the DBCA and select the cluster option. When prompted, select all nodes in the cluster and proceed with the installation as normal. If the network configuration is not correct, the DBCA will produce errors towards the end of the instance creation, but will complete successfully, leaving the instances down. If the network configuration is amended, the instances will start successfully.
If the DBCA produces "ORA-12547: TNS:lost contact" errors when attempting to connect to an instance, relink Oracle using the following commands and restart the DBCA.
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk numa_off
make -f ins_rdbms.mk ioracle
Once the database instances are built they must be added to the cluster configuration file to allow them to be managed by the srvctl utility. If this is the first clustered database on the server you must initialize the configuration.
srvconfig -init
Once this is done you can add and remove databases and their respective clustered instances using the srvctl
utility.
# Add two-node cluster to config
srvctl add database -d db10g -o /u01/app/oracle/product/10.1.0/db_1
srvctl add instance -d db10g -i db10g1 -n server01
srvctl add instance -d db10g -i db10g2 -n server02

# Remove instance from cluster
srvctl remove instance -d db10g -i db10g2

# Start and stop cluster
srvctl start database -d db10g
srvctl stop database -d db10g

# Start and stop individual instance
srvctl start instance -d db10g -i db10g1
srvctl stop instance -d db10g -i db10g1

# Get status of whole database or specific instance
srvctl status database -d db10g
srvctl status instance -d db10g -i db10g1

# Get current database config
srvctl config database -d db10g
You can also get information about the current cluster configuration from any open instance by issuing the following queries.
CONN sys/password@db10g AS SYSDBA

SELECT instance_number, instance_name FROM v$instance;

SELECT * FROM v$active_instances;
Post Installation
Create a file called "/sbin/init.d/oracle" containing the following.
#!/bin/sh
#
# Change the values of CRS_HOME and ORACLE_HOME to be correct
# for your installation.
CRS_HOME=/u01/app/oracle/product/10.1.0/crs
ORACLE_HOME=/u01/app/oracle/product/10.1.0/db_1
PATH=${PATH}:$CRS_HOME/bin:$ORACLE_HOME/bin:/usr/sbin
HOST=`hostname`
#
# Change the value of ORACLE to the login name of the
# oracle owner at your site.
#
ORACLE=oracle

export ORACLE_HOME PATH
#
if [ ! "$2" = "ORA_DB" ] ; then
  rsh $HOST -l $ORACLE /sbin/init.d/oracle $1 ORA_DB
  exit
fi
#
LOG=$ORACLE_HOME/startup.log
touch $LOG
chmod a+r $LOG
#
member=`clu_get_info|grep "this member"|awk '{print $6}'`
#
case $1 in
'start')
  echo "$0: starting up" >> $LOG
  date >> $LOG

  # Start Oracle Net
  if [ -f $ORACLE_HOME/bin/tnslsnr ] ; then
    echo "starting Oracle Net listener"
    $ORACLE_HOME/bin/lsnrctl start >> $LOG 2>&1
  fi

  echo "starting Oracle databases"
  # Amended for RAC
  $CRS_HOME/bin/gsdctl start
  $ORACLE_HOME/bin/srvctl start instance -d db10g -i db10g$member >> $LOG 2>&1
  ;;
'stop')
  echo "$0: shutting down" >> $LOG
  date >> $LOG

  # Stop Oracle Net
  if [ -f $ORACLE_HOME/bin/tnslsnr ] ; then
    echo "stopping Oracle Net listener"
    $ORACLE_HOME/bin/lsnrctl stop >> $LOG 2>&1
  fi

  echo "stopping Oracle databases"
  # Amended for RAC
  $ORACLE_HOME/bin/srvctl stop instance -d db10g -i db10g$member >> $LOG 2>&1
  $CRS_HOME/bin/gsdctl stop
  ;;
*)
  echo "usage: $0 {start|stop}"
  exit
  ;;
esac
#
exit
Use chmod to set the privileges to 750.
chmod 750 /sbin/init.d/oracle
Add the following entries to the ".rhosts" file in the oracle user's home directory to allow root login access to the account.
<cluster-alias> root
<node1-name> root
<node2-name> root
Link the file into the startup and shutdown directories using the following commands.
ln -s /sbin/init.d/oracle /sbin/rc3.d/S99oracle
ln -s /sbin/init.d/oracle /sbin/rc0.d/K01oracle
Add the root user to the dba and oinstall groups (/etc/group) to allow the script to function correctly. The relevant instances should now start and stop automatically with the server.
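A quick way to confirm the group change is to grep the group file for root in each group. This sketch uses a sample copy of the file so it can be run anywhere; on a real node GROUP_FILE would be /etc/group and the printf line would be omitted.

```shell
# Check that root appears in the dba and oinstall groups. GROUP_FILE is a
# sample copy built for illustration; on a real node use /etc/group.
GROUP_FILE=/tmp/group_copy
printf 'oinstall:*:200:oracle,root\ndba:*:201:oracle,root\n' > "$GROUP_FILE"

for g in dba oinstall; do
  grep "^$g:" "$GROUP_FILE" | grep -q 'root' && echo "$g: root is a member"
done
```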
Comment from Pascal Oegerli at www.edorex.ch - When using a Gigabit Ethernet card as the cluster interconnect instead of Memory Channel, the RDG protocol does not work. Therefore, you should relink Oracle to use the ipc_udp protocol instead of RDG.
make -f ins_rdbms.mk rac_on ipc_udp ioracle
Client Network Configuration
Client applications can take advantage of RAC's load balancing and failover features by adjusting their network parameters. A typical "tnsnames.ora" entry might look like the following.
db10g =
  (DESCRIPTION=
    (LOAD_BALANCE=ON)
    (FAILOVER=ON)
    (ADDRESS=(PROTOCOL=TCP)(HOST=server01)(PORT=1521))
    (ADDRESS=(PROTOCOL=TCP)(HOST=server02)(PORT=1521))
    (CONNECT_DATA=
      (SERVICE_NAME=db10g)
      (FAILOVER_MODE=
        (TYPE=SELECT)
        (METHOD=BASIC)
        (RETRIES=20)
        (DELAY=15)
      )
    )
  )
Depending on your business needs, you may wish to partition your processing across different nodes of the RAC by directing each type of workload to a specific node, like the following.
db10g_oltp =
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=TCP)(HOST=server01)(PORT=1521))
    (CONNECT_DATA=(SERVICE_NAME=db10g))
  )

db10g_reports =
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=TCP)(HOST=server02)(PORT=1521))
    (CONNECT_DATA=(SERVICE_NAME=db10g))
  )
For further information see:
- Installation Guide for UNIX Systems
- Oracle 9i Real Application Cluster (RAC) on Compaq Tru64 UNIX
- Oracle9i Real Application Clusters Setup and Configuration Release 2 (9.2)
- Cluster Wide CRON Jobs On Tru64
Hope this helps. Regards Tim...