Patching : Apply a Grid Infrastructure (GI) Release Update (RU) to New ORACLE_HOMEs (Out-Of-Place Patching)
This article gives an example of applying a Grid Infrastructure (GI) Release Update (RU) to new ORACLE_HOMEs for a Real Application Clusters (RAC) installation. This is known as out-of-place patching.
You should always check the patch notes before doing any patching. It's always possible some changes have been introduced that make the process differ from that presented here.
- Assumptions
- Environment
- Apply the Patch
- Datapatch on Closed PDBs
- Clean Up
- Check the Patch History
- Rollback the Patch
- Pros and Cons
Related articles.
Assumptions
This article makes some assumptions.
- We have an existing Oracle 19c or 21c RAC installation.
- We have a backup of the database and all ORACLE_HOMEs.
- We've downloaded the relevant OPatch and patch files for this quarter, as listed here.
Environment
Set up the environment. This includes the OPatch and patch file names, and the paths. Notice how OPatch has been added to the PATH environment variable. Remember to reset these if switching between users.
export SOFTWARE_DIR=/u01/software
export ORACLE_BASE=/u01/app/oracle

# 19c
export OLD_GRID_HOME=/u01/app/19.0.0/grid
export NEW_GRID_HOME=/u01/app/19.16.0/grid
export OLD_DB_HOME=${ORACLE_BASE}/product/19.0.0/dbhome_1
export NEW_DB_HOME=${ORACLE_BASE}/product/19.16.0/dbhome_1
export OPATCH_FILE="p6880880_190000_Linux-x86-64.zip"
export PATCH_FILE="p34130714_190000_Linux-x86-64.zip"
export PATCH_TOP=${SOFTWARE_DIR}/34130714

# 21c
export OLD_GRID_HOME=/u01/app/21.0.0/grid
export NEW_GRID_HOME=/u01/app/21.7.0/grid
export OLD_DB_HOME=${ORACLE_BASE}/product/21.0.0/dbhome_1
export NEW_DB_HOME=${ORACLE_BASE}/product/21.7.0/dbhome_1
export OPATCH_FILE="p6880880_210000_Linux-x86-64.zip"
export PATCH_FILE="p34155589_210000_Linux-x86-64.zip"
export PATCH_TOP=${SOFTWARE_DIR}/34155589

export ORACLE_HOME=${OLD_GRID_HOME}
export PATH=${ORACLE_HOME}/OPatch:${PATH}
Apply the Patch
Issue the following commands as the grid owner user, unless otherwise stated.
Keep a copy of the existing OPatch, and unzip the latest version of OPatch on all nodes of the cluster. You may have to do this as the root user for the grid home, but make sure the ownership of the resulting OPatch directory matches the original ownership once unzipped.
cd ${OLD_GRID_HOME}
mv OPatch OPatch.$(date +%Y-%m-%d)
unzip -oq ${SOFTWARE_DIR}/${OPATCH_FILE}

cd ${OLD_DB_HOME}
mv OPatch OPatch.$(date +%Y-%m-%d)
unzip -oq ${SOFTWARE_DIR}/${OPATCH_FILE}
Unzip the GI release update patch software on all nodes of the cluster.
cd ${SOFTWARE_DIR}
unzip -oq ${PATCH_FILE}
Check for patch conflicts by running the following commands as the grid owner. The patch numbers will vary depending on the GI release update you are using.
# 19c
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ${PATCH_TOP}/34133642
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ${PATCH_TOP}/34160635
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ${PATCH_TOP}/34139601
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ${PATCH_TOP}/34318175
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ${PATCH_TOP}/33575402

# 21c
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ${PATCH_TOP}/34160444
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ${PATCH_TOP}/34172227
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ${PATCH_TOP}/34172231
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ${PATCH_TOP}/34174046
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ${PATCH_TOP}/34320616
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ${PATCH_TOP}/34327014
Check there is space to complete the patching. Create a file called "/tmp/patch_list_gihome.txt" containing the list of patches, then run the space check as the grid owner. The patch numbers will vary depending on the GI release update you are using.
# 19c
cat > /tmp/patch_list_gihome.txt <<EOF
${PATCH_TOP}/34133642
${PATCH_TOP}/34160635
${PATCH_TOP}/34139601
${PATCH_TOP}/34318175
${PATCH_TOP}/33575402
EOF

# 21c
cat > /tmp/patch_list_gihome.txt <<EOF
${PATCH_TOP}/34160444
${PATCH_TOP}/34172227
${PATCH_TOP}/34172231
${PATCH_TOP}/34174046
${PATCH_TOP}/34320616
${PATCH_TOP}/34327014
EOF

opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_gihome.txt
It should report the message 'Prereq "checkSystemSpace" passed.' If not, make more free space available before continuing.
Create a "clone.properties" file in the patch directory containing the mapping between the old homes and the new homes.
cd ${PATCH_TOP}

cat > clone.properties <<EOF
${OLD_GRID_HOME}=${NEW_GRID_HOME}
${OLD_DB_HOME}=${NEW_DB_HOME}
EOF
Before running the patch we can optionally run an analyze, which checks the patch can be applied cleanly, but doesn't change anything.
cd ${PATCH_TOP}
opatchauto apply -analyze
Run the patch as the root user on the first node of the cluster.
cd ${PATCH_TOP}
opatchauto apply ${PATCH_TOP} -outofplace -silent ./clone.properties
If there are any errors, we can correct the issue and resume the patch using the following command.
opatchauto resume
Assuming the patching completes without errors, run the patch on the remaining nodes of the cluster. The remaining nodes can be patched at the same time. Only the first node must be patched on its own. From an availability perspective, it's better to patch them one at a time, so we only have one node out of action at any one time.
opatchauto apply ${PATCH_TOP} -outofplace -silent ./clone.properties
Once complete, make sure all the services are running as expected.
${NEW_GRID_HOME}/bin/crsctl stat res -t
At this point your patching should be complete.
Check the PDBs are running as expected on all nodes.
sqlplus / as sysdba

show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
SQL>
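For an extra sanity check, we can confirm datapatch recorded the release update against the database by querying the patch registry. This is a sketch of the sort of query we might use; the columns shown are those present in recent releases.

```
-- Confirm the RU has been recorded by datapatch.
select patch_id, action, status, description
from   dba_registry_sqlpatch
order  by action_time;
```

The most recent row should show an APPLY action with a SUCCESS status for the new release update.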
Datapatch on Closed PDBs
Under normal circumstances this step should not be necessary, as it is run automatically as part of opatchauto. If you have some PDBs that are closed, or in mounted mode, you may have to apply datapatch to them separately.
Perform the following action on a single node as the database owner. Make sure we are pointing to the database home.
export ORACLE_SID=cdbrac1

# Apply
export ORACLE_HOME=${NEW_DB_HOME}
# Rollback
#export ORACLE_HOME=${OLD_DB_HOME}

export PATH=${ORACLE_HOME}/bin:$PATH
Make sure all pluggable databases are open and run datapatch.
sqlplus / as sysdba <<EOF
alter pluggable database all open;
exit;
EOF

cd $ORACLE_HOME/OPatch
./datapatch -verbose
Recompile any invalid objects.
$ORACLE_HOME/perl/bin/perl \
  -I$ORACLE_HOME/perl/lib \
  -I$ORACLE_HOME/rdbms/admin \
  $ORACLE_HOME/rdbms/admin/catcon.pl \
  -l /tmp/ \
  -b postpatch_${ORACLE_SID}_recompile \
  -C 'PDB$SEED' \
  $ORACLE_HOME/rdbms/admin/utlrp.sql
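If we want to confirm the recompilation worked, a quick count of invalid objects per container might look like this, run from the root container as a privileged user.

```
-- Count any remaining invalid objects in each container.
select con_id, count(*) as invalid_count
from   cdb_objects
where  status = 'INVALID'
group  by con_id
order  by con_id;
```

Ideally this returns no rows; a handful of invalid objects may be acceptable if they were invalid before patching.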
Check the PDBs on all nodes to make sure they are running as expected.
sqlplus / as sysdba

show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
SQL>
If any are down or marked as mounted, it's worth checking the PDB_PLUG_IN_VIOLATIONS view, which may tell us why. Typically we can just restart them to fix the status.
alter pluggable database pdb1 close;
alter pluggable database pdb1 open;

show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
SQL>
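As a sketch, a query for any unresolved issues reported against the PDBs might look like this.

```
-- List unresolved plug-in violations for all PDBs.
select name, cause, type, message, status
from   pdb_plug_in_violations
where  status <> 'RESOLVED'
order  by time;
```

Any rows of type ERROR need investigating; WARNING rows are often informational.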
Clean Up
Clean up the patch software.
cd ${SOFTWARE_DIR}
rm -Rf ${PATCH_TOP}
rm -f ${OPATCH_FILE}
rm -f ${PATCH_FILE}
rm -f PatchSearch.xml
Check the Patch History
We can check the patch history by running the following command against the relevant home on each node.
opatch lsinventory
Rollback the Patch
Run the following command on each node of the cluster as the root user.
cd ${PATCH_TOP}

export ORACLE_HOME=${NEW_GRID_HOME}
export PATH=${ORACLE_HOME}/OPatch:${PATH}

opatchauto rollback -switch-clone
Once complete, make sure all the services are running as expected.
${OLD_GRID_HOME}/bin/crsctl stat res -t
Check the PDBs on all nodes to make sure they are running as expected.
sqlplus / as sysdba

show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
SQL>
If any are down or marked as mounted, it's worth checking the PDB_PLUG_IN_VIOLATIONS view, which may tell us why. Typically we can just restart them to fix the status.
sqlplus / as sysdba

alter pluggable database pdb1 close;
alter pluggable database pdb1 open;

show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
SQL>
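If we want to double-check the SQL-level rollback, we can query the patch registry again. This is a sketch; after a successful rollback we would expect the most recent row for the release update to show a ROLLBACK action with a SUCCESS status.

```
-- Check the most recent datapatch actions after the rollback.
select patch_id, action, status, action_time
from   dba_registry_sqlpatch
order  by action_time;
```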
Pros and Cons
Pros:
- By patching to new ORACLE_HOMEs, the instances are offline for a shorter period of time. Remember, we are patching one node at a time, so the database remains available for the whole of the operation.
- If we do need to roll back the patch, the rollback operation is quicker, as the original ORACLE_HOMEs are still present.
Cons:
- We need additional disk space for the new ORACLE_HOMEs.
- If we have any additional environment files or scripts containing paths that include the ORACLE_HOME, they will need to be adjusted.
- We will need to clean up the old ORACLE_HOMEs at some point.
For more information see:
- Critical Patch Updates, Security Alerts and Bulletins
- Patching : Apply a Grid Infrastructure (GI) Release Update (RU) to Existing ORACLE_HOMEs
Hope this helps. Regards Tim...