Oracle Database 18.7 Patch : Summary

I wrote a bunch of angry tweets yesterday about the 18.7 patch, which probably weren’t very coherent. I just thought I would summarise them here.

Before we start, I think it’s worth saying I try to keep things as vanilla as possible, so we rarely have problems with patches. I guess this is why I launched into these so quickly, and was surprised by how much hassle I had. So far I’ve only been doing the 18.7 patches on servers running OL6 (hence not upgraded to 19c yet). I have no idea at this point if patches for 19c and the lingering 12.1 instances will have problems or not (see updates below).

Issue 1

A few days ago I had upgraded a development database from an earlier release to 18.6. The upgrade went fine. No worries. Yesterday I patched that same instance from 18.6 to 18.7. The datapatch run complained about invalid objects, specifically some of our packages (not Oracle packages) that were invalid because the database links they referenced were not working.

The schema in question contains some archived tables and code, and the servers these DB links reference no longer exist. In an ideal world all this would be cleaned up, but that would take development time that we don’t have. Welcome to the real world.
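If you want to check for this situation up front, a couple of dictionary queries will show what datapatch is going to trip over. This is just a sketch using the standard `DBA_OBJECTS` and `DBA_DB_LINKS` views; the owners and names will obviously differ on your system.

```sql
-- Invalid objects that may upset datapatch.
SELECT owner, object_name, object_type
FROM   dba_objects
WHERE  status = 'INVALID'
ORDER BY owner, object_name;

-- Database links in the instance, to spot any pointing at servers
-- that no longer exist.
SELECT owner, db_link, host
FROM   dba_db_links;
```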

At the end of the datapatch run the PDB was left in restricted mode, so I did the following.

  • Granted restricted session to the user.
  • Proxied in.
  • Dropped the invalid objects in question.
  • Revoked restricted session from the user.
  • Ran datapatch again.

At the end of that I had a functioning instance.
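In SQL*Plus terms, the steps above were roughly the following. This is a sketch only: `app_user`, `admin_user`, `broken_pkg` and the `pdb1` service are placeholders, not the real names from this system, and it assumes proxy authentication is already configured for the admin user.

```sql
-- 1) As a privileged user, allow the schema owner into the restricted PDB.
GRANT RESTRICTED SESSION TO app_user;

-- 2) Proxy in as the schema owner and drop the invalid objects that
--    reference the dead database links.
CONNECT admin_user[app_user]/admin_password@pdb1
DROP PACKAGE BODY broken_pkg;
DROP PACKAGE broken_pkg;

-- 3) Reconnect as the privileged user and tidy up.
CONNECT admin_user/admin_password@pdb1
REVOKE RESTRICTED SESSION FROM app_user;

-- 4) Then re-run datapatch from the OS:
--    $ORACLE_HOME/OPatch/datapatch -verbose
```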

That sounds like it was relatively smooth and controlled, but of course there was lots of checking log files, Googling for bugs and all that stuff before I eventually just dropped the objects and moved on.

Issue 2 (Issue 1b)

The second issue was a kind-of rerun of the first. This time it was an upgrade directly to 18.7. This instance was similar to the previous one, as they were both clones of the same production instance. Unfortunately, I was doing this at the same time as the previous patch, so I couldn’t apply the lesson learned from it.

The upgrade went fine until datapatch ran, then boom. Same problem as before. Also the same resolution, so I got through this pretty quickly.

If I had completed the patch before starting this upgrade, I might have got through this with the preemptive cleanup, but maybe not. See Issue 3.

Issue 3

Having got through two development databases and learned the lessons, I did a preemptive cleanup of the offending objects from the UAT instance. It too was based on a clone of the same production database as the previous two.

Having done the clean-up, I started the upgrade to 18.7 feeling pretty confident.

The upgrade went fine, but when it ran datapatch it failed with an issue that looked very much like that described in this MOS note Doc ID 2054286.1.

I say “looked very much like”, rather than “was this”, because that issue relates to patching multiple instances on the same machine at the same time. This was datapatch running against a single instance on this machine. The symptom looked the same…

The resolution for this was to run datapatch again. It just ran through fine. No drama.
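For reference, the re-run was just the standard datapatch invocation (the usual tool location in an 18c home; paths assume a default install):

```shell
# Re-run datapatch against the instance; -verbose shows per-patch progress.
cd $ORACLE_HOME/OPatch
./datapatch -verbose
```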


I’m not looking forward to upgrading the production instance to 18.7. All these databases were clones of it, so I suspect I’m going to have some drama. Luckily the project can tolerate some downtime.

I’m also curious how the 12.1 and 19c patches are going to play out… (see updates below).

So if you were following my various rants on Twitter yesterday and wondered why I was so angry, this is why. Keep in mind also, these were “background tasks” I threw in on top of the rest of the stuff I was meant to be doing yesterday.

I hope this was just a problem with my instances or me, not indicative of the quality of the patch. It is interesting that 18.6 was happy with it, but 18.7 was a nightmare…

Good luck everyone, and remember, don’t give me a job. I can’t be trusted… 🙂



Update 1: Sven-Olaf Hilmer wrote on Twitter.

“I have a problem with RMAN crashes on all patched DBs. Rollback to 190416 solves the problem. SR filed. 11.2, 18, 19 are not affected.”

Update 2: Anil Vejendla wrote on Twitter.

“We are facing restricted mode issue for 12.2 after applying July patch and even after rollback also issue still persists.”

Author: Tim...

DBA, Developer, Author, Trainer.

7 thoughts on “Oracle Database 18.7 Patch : Summary”

  1. If I may ask, what are the best practices in place at your site for archiving database objects? That’s part of core database capacity planning, both for the existing environment and for future DBAs who may assume the role.

  2. It seems like these days people don’t delete and archive. I’ve worked on loads of projects where everyone is focusing on building, not removing. It’s quite frustrating.

    If you are using version control, then delete unused objects. You can always retrieve them from your version control if you need to.



  3. I also had problems installing the July patch, because of non-working dblinks. The fix for that is:
    Find the file “bug_27303287_apply.sql” in rdbms/admin and replace:
    end loop;
    with a begin … exception … end section:
    when others then
    end loop;
    then launch datapatch again. That solved the problem.
    Possibly the 18+ and 19+ patches have the same problem.

  4. Hi Tim,
    Just hit the same issue. MOS Note is handy and has a workaround similar to the one posted by Victor:
    ORA-04045 ORA-04042 ORA-00604 Error After Applying July 2019 RU Patches (Doc ID 2569427.1)

  5. Hi Tim –
    Wow, timely blog – I’m in the middle of upgrading our Dev environment to 18.7, and got invalid objects for XDB and the pieces we use with ArcGIS. After much searching, I found the same to be true – best to recreate the objects.
    Thanks as always for the post.
