If this option is selected, the environment will be analyzed for suitability of the patch on each home without affecting the home. The patch will not be applied or rolled back and targets will not be shut down.
If opatchauto apply is run and encounters an individual patch within a patch set that cannot be installed, that patch will be skipped and OPatchAuto will continue with the installation of the next patch in the sequence.
If opatchauto apply is run and encounters an individual patch that is identical (same patch ID and Unique Patch Identifier (UPI)) to a patch already installed in the product home, OPatchAuto performs the following, based on specific patch conditions:
This analyze option simulates an OPatchAuto apply session by running all prerequisite checks, when possible, without making changes to the system (either bits or configurations). Because the analyze command does not modify the system, it will perform the following checks:
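As a sketch, an analyze-only session could look like the following. The staging path is an assumption, and the `run` helper only prints each command, so the block is safe to execute anywhere; drop the helper to run it for real as root.

```shell
# Dry-run helper: prints the command instead of executing it
run() { echo "+ $*"; }

PATCH_TOP=/u01/patches/ru_dir   # assumed RU staging directory

# Analyze only: prerequisite checks, no bits or configuration changed
run opatchauto apply "$PATCH_TOP" -analyze
```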
In this Document
  Purpose
  Details
    Concepts
    Preparation for 'opatchauto apply'
      Central Inventory
      Oracle Home Inventory (Local Inventory)
      Other Aspects
    Preparation for 'opatchauto resume'
      OPatchauto Session
      GI/RDBMS Status
    Top Issues
      Issue #1: Central Inventory's oui-patch.xml problem was fixed, then the first opatchauto resume fails with 'corrupted. PatchObject constructor: Input file "... config/actions" or "... config/inventory" does not exist' regarding the Grid Home
      Issue #2: opatchauto apply fails with java.io.FileNotFoundException: /ContentsXML/oui-patch.xml (Permission denied)
      Issue #3: opatchauto apply fails with //custom/scripts/prepatch.sh: Permission denied, and the file has permission rw-r--r--
      Issue #4: After the GI RU is successfully applied on all nodes, the cluster upgrade state is [ROLLING PATCH], not [NORMAL], even with the GI stack up and running on all nodes
      Issue #5: OPATCHAUTO-72050 and 'Topology creation failed' during opatchauto apply
      Issue #6: opatchauto apply fails with 'CRS-1159: The cluster cannot be set to rolling patch mode because Oracle Clusterware is not active on at least one remote node.' and CLSRSC-430
      Issue #7: OPATCHAUTO-72030: Cannot execute in rolling mode, as CRS home is shared
      Issue #8: Copy Action: Destination File "/perl/bin/perl" is not writeable
    Questions & Answers
      Why is "This command doesn't support System Patch" reported when attempting to run opatch [ command ]?
      How can I monitor file-copying progress during apply?
      To work around a failure, can I copy the oneoffs/ folder from another node's Local Inventory?
  References
Set up the environment. This includes the OPatch and patch file names, and the paths. Notice how OPatch has been added to the PATH environment variable. Remember to reset these if switching between users.
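A minimal environment setup for the grid owner might look like this; every path here is an assumption, so adjust it to your system. Note that OPatch goes first on the PATH.

```shell
# Assumed grid home and patch staging area; adjust for your environment
export ORACLE_HOME=/u01/app/19.0.0/grid
export PATCH_TOP=/u01/patches/ru_dir

# Put OPatch at the front of the PATH; re-run these after switching users
export PATH=$ORACLE_HOME/OPatch:$PATH
```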
Keep a copy of the existing OPatch, and unzip the latest version of OPatch on all nodes of the cluster. You may have to do this as the root user for the grid home, but make sure the ownership of the resulting OPatch directory matches the original ownership once unzipped.
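The swap can be sketched as follows. This version works against a scratch directory so it is safe to run as-is; point ORACLE_HOME at the real grid home (on every node) when doing it for real. The OPatch zip name follows the usual patch 6880880 naming but is an assumption here.

```shell
# Scratch stand-in for the grid home so this sketch is harmless to execute
ORACLE_HOME=$(mktemp -d)
mkdir -p "$ORACLE_HOME/OPatch"                        # pretend this is the old OPatch

# Keep a copy of the existing OPatch before replacing it
mv "$ORACLE_HOME/OPatch" "$ORACLE_HOME/OPatch.save"

# Then unzip the latest OPatch and restore ownership (shown, not executed here):
#   unzip -oq /tmp/p6880880_190000_Linux-x86-64.zip -d "$ORACLE_HOME"
#   chown -R grid:oinstall "$ORACLE_HOME/OPatch"
```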
Check there is space to complete the patching. Create a file called "/tmp/patch_list_gihome.txt" containing the list of patches, then run the space check as the grid owner. The patch numbers will vary depending on the GI release update you are using.
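The space check could be sketched like this. The sub-patch paths below are placeholders; use the directories inside the RU you actually downloaded.

```shell
# Build the patch list file (placeholder paths; one sub-patch per line)
cat > /tmp/patch_list_gihome.txt <<'EOF'
/u01/patches/ru_dir/patch_one
/u01/patches/ru_dir/patch_two
EOF

# Then, as the grid owner, run the space check (shown, not executed here):
#   $ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_gihome.txt
```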
Assuming the patching completes without errors, run the patch on the remaining nodes of the cluster. The remaining nodes can be patched at the same time. Only the first node must be patched on its own. From an availability perspective, it's better to patch them one at a time, so we only have one node out of action at any one time.
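One node at a time, the remaining-node loop might look like this. Node names and the staging path are assumptions, and the `run` helper only prints the commands so the sketch is safe to execute.

```shell
# Dry-run helper; drop it to execute for real
run() { echo "+ $*"; }

PATCH_TOP=/u01/patches/ru_dir   # assumed staging path

# Patch the remaining nodes sequentially, as root on each node
for node in rac2 rac3; do
  run ssh root@"$node" "opatchauto apply $PATCH_TOP"
done
```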
Under normal circumstances this step should not be necessary as it is run automatically as part of opatchauto. If you have some PDBs that are closed, or in mounted mode, you may have to apply datapatch to them separately.
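A sketch of the manual datapatch step, assuming a 19c database home path; the `run` helper only prints the command.

```shell
# Dry-run helper; drop it to execute for real
run() { echo "+ $*"; }

ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1   # assumed DB home

# Open any closed or mounted PDBs first, e.g. from SQL*Plus:
#   alter pluggable database all open;
run "$ORACLE_HOME/OPatch/datapatch" -verbose
```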
You could of course use oplan, which is bundled at $ORACLE_HOME/OPatch/oplan/oplan, to generate a much more detailed plan for the patch application. For a first glance at the activity, opatchauto -generateSteps seems quite useful.
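As a sketch, both could be invoked as below; the `run` helper only prints the commands, the paths are assumptions, and the exact oplan sub-command syntax may differ on your version, so check its help output first.

```shell
# Dry-run helper; drop it to execute for real
run() { echo "+ $*"; }

ORACLE_HOME=/u01/app/19.0.0/grid   # assumed grid home
PATCH_TOP=/u01/patches/ru_dir      # assumed RU staging directory

# Print the step-by-step plan opatchauto would follow
run opatchauto apply "$PATCH_TOP" -generateSteps

# oplan's equivalent (sub-command syntax is an assumption)
run "$ORACLE_HOME/OPatch/oplan/oplan" generateApplySteps "$PATCH_TOP"
```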
In general, when we invoke opatchauto it will patch both the GI stack and the database software stack. Since we have specified -oh, it will apply the PSU only to the specified home.
First, let me warn you that there is no single solution, so you may need to read on and identify the area(s) that apply to you. In addition, some of the topics affect only environments where you patch in-place, while others apply to Multitenant only. And of course, you may easily be affected by more than one issue.
What is the difference between in-place and out-of-place patching? With in-place patching I mean applying a patch bundle or a patch into the existing home. This is the classic way to patch Grid Infrastructure. With out-of-place patching I mean installing the new base release into a new home and applying all the necessary patches, be it RUs, the OJVM PSU, one-offs, or merges. Then you stop your database instance in home_old, start it in home_new, and once you have done this on all available nodes, you invoke datapatch to apply the necessary changes to the database(s).
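The out-of-place switch described above can be sketched as follows. The home path and the database name `orcl` are assumptions, and the `run` helper only prints the commands.

```shell
# Dry-run helper; drop it to execute for real
run() { echo "+ $*"; }

NEW_HOME=/u01/app/oracle/product/19.16.0/dbhome_1   # assumed home_new

# Stop the database in home_old, point it at home_new, start it again
run srvctl stop database -db orcl
run srvctl modify database -db orcl -oraclehome "$NEW_HOME"
run srvctl start database -db orcl

# Once all nodes run out of home_new, apply the SQL changes
run "$NEW_HOME/OPatch/datapatch" -verbose
```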
And as of now, there is no way to tell opatch to keep only the n-1 version of patch bundles (the way I configure my Linux environment to keep only the current kernel and the previous one, but not the ones from 2019).
Looking forward to the blog posts. I think it is an area with opportunity for lots of improvements. Based on Doc ID 2853839.1 it seems that out-of-place patching is recommended for GI, but in-place is recommended for the database home. And for the database home we then have the OJVM patch. In the case of RAC I would like to patch all the components (GI, RDBMS, OJVM) while affecting services on each node only once, not several times. And one of the problems with out-of-place patching is the extra work required after patching (for example, changing the new home directory in Enterprise Manager for the affected targets).
Hi Mike,
thanks a lot.
Thinking about this approach, I became quite unsure about one thing:
We installed 19.10 and lots of one-offs.
Is there a quick way to find out if all one-offs are included in 19.16?
My first thought was to check which bug IDs are fixed by each one-off. The next step would be to check whether 19.16 covers them.
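That comparison can be sketched like this. The bug numbers are placeholders, and in real life the fixed list would come from `opatch lsinventory -bugs_fixed` run against the 19.16 home.

```shell
# Placeholder for: $ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | awk '{print $1}'
bugs_fixed_in_ru=$(printf '30000001\n30000002\n')

# Bug IDs fixed by the installed one-offs (placeholders)
for bug in 30000001 30000003; do
  echo "$bugs_fixed_in_ru" | grep -qw "$bug" || echo "NOT covered by RU: $bug"
done
```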
Is there a better and more reliable way to check? Or should we open an SR just to be sure?
Regards
Christian
After a successful installation of Grid Infrastructure and the RDBMS (12.2.0.1.0), we ran into a bug regarding ACFS and the kernel version (the bug ID is 25078431). After downloading the patch that corrects this bug, the README of the patch simply points to a document that explains how to apply patches in version 11.2 (which I found absurd). After following that document, I got an error saying that the command is deprecated. So I tried the "opatchauto apply" command, which I've used many times on 12.1 databases. But I keep getting the following error (see command below):
This is a classic case where the patching failed because a few executables/files from the HOME were still active. You can verify this in the standard opatchauto logging directory, cfgtoollogs, for the failed patch.
opatchauto is a really powerful tool which even lets you resume patching after it has crashed mid-way for any reason, such as a server crash, a reboot, or even a manual CTRL+C. The other two regular options are rollback and version.
The 19.12 patch cycle was out last week for both Database and Grid Infrastructure. Since then, I've seen some customers complaining about an issue with the tomcat patch when trying to patch their environment with the latest OPatch version:
You must use the OPatch utility version 12.2.0.1.25 or later to apply this patch. Oracle recommends that you use the latest released OPatch version for 12.2 which is available for download from My Oracle Support patch 6880880 by selecting ARU link for the 12.2.0.1.0 OPatch release. It is recommended that you download the OPatch utility and the patch in a shared location to be able to access them from any node in the cluster for the patch application on each node.
PS: Note that I always recommend OOP (Out-of-Place) patching, as you reduce downtime and also the risk of issues. However, here I will use in-place to show what to do when you get stuck in the middle with one of your nodes down.
So, as you can see, I haven't had any issues. After researching a bit more, I found out this was an issue with "opatchauto resume", not with "opatchauto apply", and more specifically with OPatch version ".25", the one required by this GI.
So let's forcibly introduce an error in a new environment and try "opatchauto resume" again to simulate this issue. What I did here was remove the execute flag from the java binary that will be shipped to the GRID_HOME, so java will fail to execute after the node is patched:
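The setup can be reproduced on a throwaway file, as below; in the actual test the execute flag was removed from the java binary staged for the GRID_HOME (the real target path is not shown here).

```shell
# Throwaway file standing in for the staged java binary
f=$(mktemp)

# rw-r--r-- : no execute bit, so attempting to run it would fail
chmod 0644 "$f"
ls -l "$f"
```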
However, applying a patch manually in GI is not that simple. You may need to stop your CRS / unlock grid home / do it in multiple nodes / etc. There is even a MOS note for it. In step 5 of note below, you have all the details:
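A rough sketch of that manual flow is below. Paths and the patch directory are assumptions, the `run` helper only prints the commands, and the MOS note remains the authoritative sequence.

```shell
# Dry-run helper; drop it to execute for real
run() { echo "+ $*"; }

GRID_HOME=/u01/app/19.0.0/grid   # assumed grid home

# As root: stop the stack and unlock the grid home
run "$GRID_HOME/crs/install/rootcrs.sh" -prepatch

# As the grid owner: apply the patch with plain opatch
run opatch apply -oh "$GRID_HOME" -local /u01/patches/ru_dir/patch_one

# As root: relock the home and restart the stack
run "$GRID_HOME/crs/install/rootcrs.sh" -postpatch
```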
And it succeeds. That is it. Here are my 50 cents with a third option to solve a stuck GI patch apply. Just keep in mind that if a DB is involved, you must also start the database using "srvctl start database".
I recently came across a new (to me) error when trying to upgrade Grid Infrastructure in my lab, a 2-node 18.6 RAC cluster. To upgrade Grid Infrastructure directly to 19c with the latest Release Update 19.4, I downloaded the 19.3 base release and tried to apply RU 19.4 before launching the upgrade: