Patching : Apply a Grid Infrastructure (GI) Release Update (RU) to Existing ORACLE_HOMEs
This article gives an example of applying a Grid Infrastructure (GI) Release Update (RU) to existing ORACLE_HOMEs for a Real Application Clusters (RAC) installation.
You should always check the patch notes before doing any patching. It's always possible some changes have been introduced that make the process differ from that presented here.
- Assumptions
- Environment
- Apply the Patch
- Datapatch on Closed PDBs
- Clean Up
- Check the Patch History
- Rollback the Patch
- Pros and Cons
Related articles.
Assumptions
This article makes some assumptions.
- We have an existing Oracle 19c or 21c RAC installation.
- We have a backup of the database and all ORACLE_HOMEs. We are applying the patch to the existing ORACLE_HOME, so we need a way to fall back if something goes wrong that can't be fixed by rolling back the patch.
- We've downloaded the relevant OPatch and patch files for this quarter, as listed here.
Environment
Set up the environment. This includes the OPatch and patch file names, and the paths. Notice how OPatch has been added to the PATH environment variable. Remember to reset these if switching between users.
export SOFTWARE_DIR=/u01/software
export ORACLE_BASE=/u01/app/oracle

# 19c
export GRID_HOME=/u01/app/19.0.0/grid
export DB_HOME=${ORACLE_BASE}/product/19.0.0/dbhome_1
export OPATCH_FILE="p6880880_190000_Linux-x86-64.zip"
export PATCH_FILE="p34130714_190000_Linux-x86-64.zip"
export PATCH_TOP=${SOFTWARE_DIR}/34130714

# 21c
export GRID_HOME=/u01/app/21.0.0/grid
export DB_HOME=${ORACLE_BASE}/product/21.0.0/dbhome_1
export OPATCH_FILE="p6880880_210000_Linux-x86-64.zip"
export PATCH_FILE="p34155589_210000_Linux-x86-64.zip"
export PATCH_TOP=${SOFTWARE_DIR}/34155589

export ORACLE_HOME=${GRID_HOME}
export PATH=${ORACLE_HOME}/OPatch:${PATH}
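As a quick sanity check, not part of the patch instructions themselves, we can confirm the environment resolves as expected before going any further. Always compare the reported OPatch version against the minimum stated in the patch readme.

```shell
# Confirm the opatch on the PATH is the one under the grid home,
# and report its version.
which opatch
opatch version

# Check the home directories actually exist.
ls -d ${GRID_HOME} ${DB_HOME}
```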
Apply the Patch
Issue the following commands as the grid owner user, unless otherwise stated.
Keep a copy of the existing OPatch, and unzip the latest version of OPatch on all nodes of the cluster. You may have to do this as the root user for the grid home, but make sure the ownership of the resulting OPatch directory matches the original ownership once unzipped.
cd ${GRID_HOME}
mv OPatch OPatch.`date +"%Y"-"%m"-"%d"`
unzip -oq ${SOFTWARE_DIR}/${OPATCH_FILE}

cd ${DB_HOME}
mv OPatch OPatch.`date +"%Y"-"%m"-"%d"`
unzip -oq ${SOFTWARE_DIR}/${OPATCH_FILE}
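If root was needed to unzip into the grid home, the ownership can be restored afterwards. The "grid:oinstall" owner and group below are assumptions based on a typical role-separated install, so adjust them to match your environment.

```shell
# As root: give the new OPatch directory back to the grid software owner.
# "grid:oinstall" is an assumed owner/group - confirm the correct values
# by checking the backup directory created above:
#   ls -ld ${GRID_HOME}/OPatch.*
chown -R grid:oinstall ${GRID_HOME}/OPatch
```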
Unzip the GI release update patch software on all nodes of the cluster.
cd ${SOFTWARE_DIR}
unzip -oq ${PATCH_FILE}
Check for patch conflicts by running the following commands as the grid owner. The patch numbers will vary depending on the GI release update you are using.
# 19c
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ${PATCH_TOP}/34133642
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ${PATCH_TOP}/34160635
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ${PATCH_TOP}/34139601
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ${PATCH_TOP}/34318175
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ${PATCH_TOP}/33575402

# 21c
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ${PATCH_TOP}/34160444
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ${PATCH_TOP}/34172227
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ${PATCH_TOP}/34172231
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ${PATCH_TOP}/34174046
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ${PATCH_TOP}/34320616
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ${PATCH_TOP}/34327014
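Since each check only differs by the patch sub-directory, the same thing can be scripted with a loop. This is just a convenience sketch, using the 19c patch numbers from above.

```shell
# Patch sub-directories bundled in the 19c GI RU used in this article.
patches="34133642 34160635 34139601 34318175 33575402"

for p in ${patches}; do
  echo "Checking ${PATCH_TOP}/${p}"
  # Only attempt the check if opatch is actually on the PATH.
  if command -v opatch >/dev/null 2>&1; then
    opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir "${PATCH_TOP}/${p}"
  fi
done
```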
Check there is space to complete the patching. Create a file called "/tmp/patch_list_gihome.txt" containing the list of patches, then run the space check as the grid owner. The patch numbers will vary depending on the GI release update you are using.
# 19c
cat > /tmp/patch_list_gihome.txt <<EOF
${PATCH_TOP}/34133642
${PATCH_TOP}/34160635
${PATCH_TOP}/34139601
${PATCH_TOP}/34318175
${PATCH_TOP}/33575402
EOF

# 21c
cat > /tmp/patch_list_gihome.txt <<EOF
${PATCH_TOP}/34160444
${PATCH_TOP}/34172227
${PATCH_TOP}/34172231
${PATCH_TOP}/34174046
${PATCH_TOP}/34320616
${PATCH_TOP}/34327014
EOF

opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_gihome.txt
It should report the "Prereq "checkSystemSpace" passed." message. If not, make more free space available.
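If the space check fails, a quick look at the relevant filesystems shows where the pressure is.

```shell
# See how much space is free on the filesystems hosting the homes
# and the staging areas.
df -h ${GRID_HOME} ${DB_HOME} ${SOFTWARE_DIR} /tmp
```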
Run the patch as the root user on the first node of the cluster. We could also add the "-inplace" flag, but this is not necessary.
opatchauto apply ${PATCH_TOP}
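Optionally, before the real apply on each node, opatchauto can perform a read-only analysis to preview what it will do. Run it as root, just like the apply itself.

```shell
# Read-only dry run: reports what would be patched, and any problems,
# without changing anything.
opatchauto apply ${PATCH_TOP} -analyze
```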
Assuming this completes without errors, run the patch on the remaining nodes of the cluster. The remaining nodes can be patched at the same time. Only the first node must be patched on its own.
opatchauto apply ${PATCH_TOP}
Once complete, make sure all the services are running as expected.
${GRID_HOME}/bin/crsctl stat res -t
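In addition to the resource listing, the overall health of the clusterware stack can be confirmed across nodes.

```shell
# Check the CRS, CSS and EVM daemons on all cluster nodes.
${GRID_HOME}/bin/crsctl check cluster -all
```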
At this point your patching should be complete.
Check the PDBs are running as expected on all nodes.
sqlplus / as sysdba

show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO

SQL>
Datapatch on Closed PDBs
Under normal circumstances this step should not be necessary, as it is run automatically as part of opatchauto. If you have some PDBs that are closed, or in mounted mode, you may have to apply datapatch to them separately.
Perform the following action on a single node as the database owner. Make sure we are pointing to the database home.
export ORACLE_SID=cdbrac1
export ORACLE_HOME=${DB_HOME}
export PATH=${ORACLE_HOME}/bin:$PATH
Make sure all pluggable databases are open and run datapatch.
sqlplus / as sysdba <<EOF
alter pluggable database all open;
exit;
EOF

cd $ORACLE_HOME/OPatch
./datapatch -verbose
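The outcome of datapatch can be verified from the DBA_REGISTRY_SQLPATCH view, which records each SQL patch action and its status.

```shell
sqlplus / as sysdba <<EOF
-- Show the SQL patch actions and their status. We expect the
-- latest entries to show a status of SUCCESS.
column action format a10
column status format a12
column description format a60
select patch_id, action, status, description
from   dba_registry_sqlpatch
order  by action_time;
exit;
EOF
```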
Recompile any invalid objects.
$ORACLE_HOME/perl/bin/perl \
  -I$ORACLE_HOME/perl/lib \
  -I$ORACLE_HOME/rdbms/admin \
  $ORACLE_HOME/rdbms/admin/catcon.pl \
  -l /tmp/ \
  -b postpatch_${ORACLE_SID}_recompile \
  -C 'PDB$SEED' \
  $ORACLE_HOME/rdbms/admin/utlrp.sql
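We can confirm the recompilation worked by listing any remaining invalid objects. Ideally this returns no rows; anything still invalid after utlrp.sql is worth investigating.

```shell
sqlplus / as sysdba <<EOF
-- List objects still marked invalid after the recompile.
column owner format a20
column object_name format a30
select owner, object_name, object_type
from   dba_objects
where  status = 'INVALID'
order  by owner, object_name;
exit;
EOF
```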
Check the PDBs on all nodes to make sure they are running as expected.
sqlplus / as sysdba

show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO

SQL>
If any are down or marked as mounted, it's worth checking the PDB_PLUG_IN_VIOLATIONS view, which may tell us why. Typically we can just restart them to fix the status.
alter pluggable database pdb1 close;
alter pluggable database pdb1 open;

show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO

SQL>
Clean Up
Clean up the patch software.
cd ${SOFTWARE_DIR}
rm -Rf ${PATCH_TOP}
rm -Rf ${OPATCH_FILE}
rm -Rf ${PATCH_FILE}
rm -Rf PatchSearch.xml
Check the Patch History
We can check the patch history by running the following command against the relevant home on each node.
opatch lsinventory
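For a more compact view, OPatch can list just the installed patch IDs and descriptions rather than the full inventory report.

```shell
# One line per installed patch, rather than the full inventory report.
opatch lspatches
```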
Rollback the Patch
Run the following command on each node of the cluster as the root user.
opatchauto rollback ${PATCH_TOP}
Once complete, make sure all the services are running as expected.
${GRID_HOME}/bin/crsctl stat res -t
Check the PDBs on all nodes to make sure they are running as expected.
sqlplus / as sysdba

show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO

SQL>
If any are down or marked as mounted, it's worth checking the PDB_PLUG_IN_VIOLATIONS view, which may tell us why. Typically we can just restart them to fix the status.
sqlplus / as sysdba

alter pluggable database pdb1 close;
alter pluggable database pdb1 open;

show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO

SQL>
Pros and Cons
Pros:
- We don't have an additional ORACLE_HOME, so this reduces the space needed to complete the patching.
- Since the ORACLE_HOME isn't changing, we don't need to worry about altering any configuration files to point at the new home.
Cons:
- Patching the existing ORACLE_HOME requires more downtime, as the services must be offline while the home is being patched. This is offset by the fact that the RAC nodes are patched one at a time, so the database remains up for the whole of the operation.
- If something goes wrong that can't be fixed by rolling back the patch, we may have to recover the ORACLE_HOME from a backup to restore the original running instance.
For more information see:
- Critical Patch Updates, Security Alerts and Bulletins
- Patching : Apply a Grid Infrastructure (GI) Release Update (RU) to New ORACLE_HOMEs (Out-Of-Place Patching)
Hope this helps. Regards Tim...