I wrote a bunch of angry tweets yesterday about the 18.7 patch, which probably weren’t very coherent. I just thought I would summarise them here.
Before we start, I think it’s worth saying I try to keep things as vanilla as possible, so we rarely have problems with patches. I guess this is why I launched into these so quickly, and was surprised by how much hassle I had. So far I’ve only been doing the 18.7 patches on servers running OL6 (hence not upgraded to 19c yet). I have no idea at this point if patches for 19c and the lingering 12.1 instances will have problems or not (see updates below).
Issue 1

A few days ago I had upgraded a development database from 12.1.0.2 to 18.6. The upgrade went fine. No worries. Yesterday I patched that same instance from 18.6 to 18.7. The datapatch run complained about invalid objects, specifically some of our packages (not Oracle packages) that were invalid, because database links they referenced were not working.
The schema in question contains some archived tables and code, and the servers these DB links reference no longer exist. In an ideal world all this would be cleaned up, but that would take development time that we don’t have. Welcome to the real world.
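For reference, objects like this are easy to spot before you start. The following is just a sketch of the sort of checks involved; the filters are my own choices, and MY_LINK is a placeholder database link name, not one from the actual system.

```sql
-- List invalid objects in the current container, i.e. the things
-- datapatch is likely to complain about.
SELECT owner, object_name, object_type
FROM   dba_objects
WHERE  status = 'INVALID'
AND    owner NOT IN ('SYS', 'SYSTEM')
ORDER BY owner, object_name;

-- A database link can be tested with a trivial remote query.
-- If the remote server no longer exists, this will error out.
SELECT 1 FROM dual@my_link;
```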
At the end of the datapatch run the PDB was left in restricted mode, so I did the following.
- Granted restricted session to the user.
- Proxied in.
- Dropped the invalid objects in question.
- Revoked restricted session from the user.
- Ran datapatch again.
At the end of that I had a functioning instance.
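The steps above look something like this in SQL. This is a minimal sketch, assuming a PDB called MY_PDB, a schema called APP_USER and an invalid package called APP_PKG; all of those names are placeholders, not the real ones.

```sql
-- Switch to the PDB that datapatch left in restricted mode.
ALTER SESSION SET CONTAINER = my_pdb;

-- The PDB is in restricted mode, so the user needs this
-- privilege before we can proxy in as them.
GRANT RESTRICTED SESSION TO app_user;

-- Proxy in as the user, then drop the offending invalid objects.
DROP PACKAGE app_user.app_pkg;

-- Tidy up afterwards.
REVOKE RESTRICTED SESSION FROM app_user;
```

After that, datapatch can be run again from the OS as usual ($ORACLE_HOME/OPatch/datapatch -verbose).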
That sounds relatively smooth and controlled, but of course there was lots of checking log files, Googling for bugs and all that stuff before I eventually just dropped the objects and moved on.
Issue 2 (Issue 1b)
The second issue was a kind-of rerun of the first. This time it was a direct upgrade from 12.1.0.2 to 18.7. This instance was similar to the previous one, as both were clones of the same production instance. Unfortunately, I was doing this upgrade at the same time as the previous patch, so I couldn't apply the lesson learned from it.
The upgrade went fine until datapatch ran, then boom. Same problem as before. Same resolution too, so I got through this pretty quickly.
If I had completed the patch before starting this upgrade, I might have got through it with a preemptive cleanup, but maybe not. See Issue 3.
Issue 3

Having got through two development databases and learned the lessons, I did a preemptive cleanup of the offending objects on the UAT instance, which was also based on a clone of the same production database as the previous two.
Having done the clean-up, I started the upgrade from 12.1.0.2 to 18.7 feeling pretty confident.
The upgrade went fine, but when it ran datapatch it failed with an issue that looked very much like the one described in MOS note Doc ID 2054286.1.
I say “looked very much like”, rather than “was this”, because that issue relates to patching multiple instances on the same machine at the same time. This was datapatch running against a single instance on this machine. The symptom looked the same…
The resolution for this was to run datapatch again. It just ran through fine. No drama.
I’m not looking forward to upgrading the production instance to 18.7. All these databases were clones of it, so I suspect I’m going to have some drama. Luckily the project can tolerate some downtime.
I’m also curious how the 12.1 and 19c patches are going to play out… (see updates below).
So if you were following my various rants on Twitter yesterday and wondered why I was so angry, this is why. Keep in mind also, these were “background tasks” I threw in on top of the rest of the stuff I was meant to be doing yesterday.
I hope this was just a problem with my instances or me, not indicative of the quality of the patch. It is interesting that 18.6 was happy with it, but 18.7 was a nightmare…
Good luck everyone, and remember, don’t give me a job. I can’t be trusted… 🙂
Update 1: Sven-Olaf Hilmer wrote on Twitter.
“I have a problem with 12.2.0.1.190716DBBP. RMAN crashes on all patched DBs. Rollback to 190416 solves the problem. SR filed. 11.2, 12.1.0.2, 18, 19 are not affected.”
Update 2: Anil Vejendla wrote on Twitter.
“We are facing restricted mode issue for 12.2 after applying July patch and even after rollback also issue still persists.”