Database Patching Revisited : Take off and nuke the entire site from orbit…

I was reading a post by Pete Finnigan the other day.

I put out a tweet mentioning it, and linking to one of my old posts on the subject too.

This started a bit of a debate on Twitter about how people patch their databases. In this post I want to touch on a few points that came out of Pete’s post and some of the other Twitter comments.

You have to have a plan!

An extremely important point made by Pete was that you have to have a plan. That doesn’t have to be the same for everyone, and there may be compromises due to constraints in your company, but that doesn’t stop you making a plan. Your plan might be:

  • We will start a new round of patching immediately when a new one-off patch is released, and every quarter with the security announcements. I can’t see how this is possible.
  • We will patch every quarter with the security announcements. That’s what my company does.
  • We will patch once per (six months, year etc.)

Hopefully your plan will not be:

  • We will never patch and person X will take the blame when we have a problem.

Release Updates (RUs) or Release Update Revisions (RURs)

Database quarterly patches are classified as release updates (RUs) and release update revisions (RURs). First let’s explain what they are.

  • Release Updates (RUs) : These are like the old proactive bundle patches. They contain bug fixes, security fixes and limited new features. Let’s call that “extra stuff”. In 19c the blockchain tables and immutable tables features were introduced in RUs. Backporting and new features can introduce new risks.
  • Release Update Revisions (RURs) : These are just bug fixes and security fixes. In theory these are safer than RUs as less new stuff is introduced, but… See below.

So at first glance you are saying to yourself, “I want the safest option, so I’ll go for RURs.” The problem is RURs aren’t like the old security patches that you could continue applying forever. Ultimately you have to include all the “extra stuff” from the previous RUs, but you get the option of doing it later. This page in the documentation explains things quite well.

This table from that link is quite useful, showing you what version you will be on during a quarterly patching cycle.
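The real table is on the linked page, but the numbering scheme looks something like this (illustrative 19c version numbers, so check the documentation for the current ones).

  Quarter     RU       RUR-1     RUR-2
  Jan 2021    19.10    19.9.1    19.8.2
  Apr 2021    19.11    19.10.1   19.9.2
  Jul 2021    19.12    19.11.1   19.10.2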

What does this mean?

  • If you patch using the RUs, you are going to the latest and greatest each quarter.
  • If you use RUR-1, you are constantly 1 quarter behind on the RU “extra stuff”, but you add in the missing bug fixes and security fixes using the RUR-1 patch.
  • If you use RUR-2, you are constantly 2 quarters behind on the RU “extra stuff”, but you add in the missing bug fixes and security fixes using the RUR-2 patch.

In all cases you have the latest bug fixes and security fixes. You are just delaying getting the “extra bits”. So at first glance it seems like you might as well go with the RUs. The issue is some of the RUs are a bit buggy. If you go for the RUR-1 or RUR-2 there is a chance the bugs introduced in the base RU have been fixed in the subsequent RURs for that RU. So we could say this.

  • RUs: Oracle have zero time to identify and fix the bugs they’ve introduced in the RU.
  • RUR-1: Oracle have 3 months to find and fix the bugs they’ve introduced in the base RU.
  • RUR-2: Oracle have 6 months to find and fix the bugs they’ve introduced in the base RU.

I tend to stick with the RUs, although I am considering changing. Ilmar Kerm said he’s found RUs too buggy and tends to stick with the RUR-1 approach. I guess a more conservative approach would be to stick with the RUR-2 approach.

Your experience of the RUs versus the RURs will depend on what features you use, what extra stuff Oracle decide to include in the RU, and what they break by including that extra stuff. The biggest problem I hit was 19.10 breaking hot-cloning of PDBs, which was kind-of important. If I had used the RUR-1 approach I would never have seen that issue. Different people using different features see different bugs.
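Whichever approach you choose, it’s worth being able to see what has actually been applied to a database. As a quick sanity check (a minimal sketch; the exact columns in DBA_REGISTRY_SQLPATCH vary a little between versions):

  -- List the RUs, RURs and one-off patches applied to this database
  -- by datapatch, in the order they were applied.
  SELECT patch_id, patch_type, action, status, description
  FROM   dba_registry_sqlpatch
  ORDER  BY action_time;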

How good is your testing?

The biggest factor in the decision of which approach to take is probably the quality of your testing.

  • If your testing of applications against new patches is good, you can probably stick with the RUs. If the RU fails testing, go with the RUR-1 that quarter.
  • If you just work on the “generally considered safe” approach, meaning you apply the patches and don’t do any testing, maybe you should be using the RUR-1 or RUR-2 approach!
  • The ultra-conservative approach would be to stick with the RUR-2 approach.

Just patch!

Regardless of which approach you take, you’ve got to have a plan, and you should be patching. I know some of you don’t care about patching, and you are fools. I know some of you would like to patch, but your companies are dinosaurs. All I can say to you is keep trying.

In my current company we never used to patch. I spent years sending out quarterly reports summarising all the vulnerabilities in our systems and still nothing. Eventually a few other people jumped on the bandwagon, we had a couple of embarrassing issues, and the constant threat of GDPR gave us some more leverage. Now we have a quarterly patching schedule for all our databases and middle tier servers. We are not perfect, but it can be done.

Even now, we still have questions like, “can we miss out this quarter?”, but we push back very hard against this. One quarter becomes two, becomes three, becomes never.

New patches on the 20th July (see here). Good luck everyone!

Cheers

Tim…

PS. If you are not patching externally facing WebLogic servers you might as well close your company now. You have already given all your data away. Good luck with that GDPR fine…

Database Upgrades : It’s been a long road getting from there to here

Please play the Star Trek – Enterprise theme while you are reading this post. 🙂

I’ve mentioned database upgrades a few times over the last year or more. Like many others, we are pushing hard to get everything upgraded to 19c. Over the last couple of weeks a bunch more systems got upgraded, and we are now looking like this.

The remaining 11.2 and 12.1 databases are all in various stages of migration/upgrade. I would not curse us by giving a deadline for the final databases, but hopefully soon!

The reason for mentioning that theme song is it starts with the words, “It’s been a long road getting from there to here”, and that is exactly how it feels.

Many of the database upgrades are technically simple, but the projects surrounding them are soul destroying. Getting all the relevant people to agree and provide the necessary resources can be really painful. This is especially true for “mature” projects, where the, “if it ain’t broke, don’t fix it”, mentality is strong. I wrote about the problems with that mentality here.

I’m not going to give you any blinding insights into how to do your database upgrades, because every upgrade is potentially unique, as I discussed here.

We always go for the multitenant architecture (CDB/PDB) unless there is a compelling reason not to. I think we only have one non-CDB installation of 19c because of a vendor issue. None of our other 3rd party applications have had a problem with using PDBs, provided we’ve made sure they connect with the service, not a SID. We don’t use the USE_SID_AS_SERVICE_listener_name parameter. I would rather find and fix the connection issues than rely on this sticking plaster fix.
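If you’re not sure what services are available to connect to, a quick check like this helps (a minimal sketch; the connect string uses hypothetical host and service names):

  -- List the services registered in the current PDB, so application
  -- connect strings can reference a service rather than a SID.
  SELECT name FROM v$services;

  -- An EZConnect string using a service name looks like this:
  --   sqlplus app_user@//db-server:1521/my_pdb_service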

I know I’ve said some of these things before, but they are worth repeating.

  • Oracle 19c is the current long term release, so it’s going to have support for a longer time than an innovation release.
  • Oracle 21c is an innovation release. Even when the on-prem version does drop, you probably shouldn’t use it for your main systems unless you are happy with the short support lifespan.
  • I recently heard there won’t be an Oracle 22c, so the next release after Oracle 21c will be Oracle 23c, which is currently slated to be the next long term release.

In short, get all your databases to Oracle 19c, and you should probably stick there until Oracle 23c is released, unless you have a compelling case for going to Oracle 21c.

Cheers

Tim…

Video : DDL_LOCK_TIMEOUT : DDL With the WAIT Option

In today’s video we demonstrate the DDL_LOCK_TIMEOUT feature, introduced in 11g.

The video is based on this article.
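If you’ve not seen the feature before, this is the essence of it (a minimal sketch, assuming a table called EMP with some DML running against it):

  -- By default, DDL fails immediately (ORA-00054) if it can't get the
  -- necessary locks. This makes DDL in the current session wait for up
  -- to 30 seconds for blocking transactions to complete before giving up.
  ALTER SESSION SET ddl_lock_timeout = 30;

  ALTER TABLE emp ADD (flag VARCHAR2(1));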

The star of today’s video is Bertrand Drouvot, who’s sporting a particularly impressive beard here. 🙂

Cheers

Tim…

Video : TRUNCATE TABLE … CASCADE

In today’s video we demonstrate the TRUNCATE TABLE … CASCADE feature added in Oracle 12.1.

The video is based on this article.
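The short version, if you don’t want to watch (a minimal sketch; the CASCADE option only works when the foreign keys involved are defined with ON DELETE CASCADE):

  CREATE TABLE parent (
    id NUMBER CONSTRAINT parent_pk PRIMARY KEY
  );

  CREATE TABLE child (
    id        NUMBER CONSTRAINT child_pk PRIMARY KEY,
    parent_id NUMBER CONSTRAINT child_parent_fk
              REFERENCES parent (id) ON DELETE CASCADE
  );

  -- Truncate the parent and, thanks to the ON DELETE CASCADE foreign
  -- key, the child table is truncated too.
  TRUNCATE TABLE parent CASCADE;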

The star of today’s video is Øyvind Isene, who reluctantly took a break from his coffee to let me film this clip. 🙂

Cheers

Tim…

Don’t be a moron. Install APEX in all your Oracle databases!

If you come from an Oracle background you know what Application Express (APEX) is. You know it’s an awesome low-code development environment, and you know it can turn DBAs and PL/SQL developers into productive members of society in no time at all. 🙂

Even if you don’t want to use APEX, you should install APEX anyway, because it comes with some really handy PL/SQL packages. These come to mind.

APEX_WEB_SERVICE : I’ve written my own APIs in the past, but I never use them anymore. I recommend people make web service callouts from the database using the APEX_WEB_SERVICE package. You can use it for REST and SOAP calls, and it works equally well for JSON and XML web services.
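A basic GET callout looks like this (a minimal sketch with a hypothetical URL; you’ll also need the relevant network ACLs in place):

  DECLARE
    l_response CLOB;
  BEGIN
    -- Make a REST call and return the response body as a CLOB.
    l_response := apex_web_service.make_rest_request(
                    p_url         => 'https://api.example.com/things',
                    p_http_method => 'GET');
    dbms_output.put_line(substr(l_response, 1, 200));
  END;
  /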

APEX_JSON : Oracle included some JSON functionality in Oracle 12.1, but they only completed the implementation in Oracle 12.2. If you are working with a database version older than Oracle 12.2 and you care about JSON, you need the APEX_JSON package.
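For example (a minimal sketch):

  DECLARE
    l_json CLOB := '{"employee":{"name":"SCOTT","deptno":20}}';
  BEGIN
    -- Parse the document, then pull out values by path.
    apex_json.parse(l_json);
    dbms_output.put_line(apex_json.get_varchar2(p_path => 'employee.name'));
    dbms_output.put_line(apex_json.get_number(p_path => 'employee.deptno'));
  END;
  /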

APEX_DATA_PARSER : This package allows you to convert CSV, JSON, XML and XLSX data into rows and columns. It’s really simple to use!
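For example, parsing a CSV file held in a BLOB column (a minimal sketch with hypothetical table and column names):

  -- Each line of the file comes back as a row with generic COL001..COLnnn
  -- columns, plus the line number.
  SELECT p.line_number, p.col001, p.col002, p.col003
  FROM   my_files f,
         TABLE(apex_data_parser.parse(
                 p_content   => f.file_content,
                 p_file_name => f.file_name)) p
  WHERE  f.file_name = 'emp.csv';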

APEX_ZIP : You can do some simple gzip/gunzip stuff with the UTL_COMPRESS package, but the APEX_ZIP package allows you to zip up multiple files into an archive, or extract files from an existing archive.
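Building an archive is as simple as this (a minimal sketch; real content would come from files or LOB columns, not string literals):

  DECLARE
    l_zip BLOB;
  BEGIN
    -- Add a couple of files to a new archive.
    apex_zip.add_file(
      p_zipped_blob => l_zip,
      p_file_name   => 'dir1/file1.txt',
      p_content     => to_blob(utl_raw.cast_to_raw('File 1 contents')));

    apex_zip.add_file(
      p_zipped_blob => l_zip,
      p_file_name   => 'dir1/file2.txt',
      p_content     => to_blob(utl_raw.cast_to_raw('File 2 contents')));

    -- Complete the archive. l_zip now holds a zip ready to store or send.
    apex_zip.finish(p_zipped_blob => l_zip);
  END;
  /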

APEX_MAIL : This package gives you an API over the APEX mail functionality. It’s more flexible than the UTL_MAIL package, and is much simpler than writing your own APIs using the UTL_SMTP package.
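For example (a minimal sketch with hypothetical workspace and addresses; the instance mail settings must be configured, and on recent APEX versions you set the workspace context first when calling it outside APEX):

  BEGIN
    apex_util.set_workspace(p_workspace => 'MY_WORKSPACE');

    apex_mail.send(
      p_to   => 'someone@example.com',
      p_from => 'db-server@example.com',
      p_subj => 'Test message',
      p_body => 'This is a test message.');

    -- Push the mail queue now, rather than waiting for the background job.
    apex_mail.push_queue;
  END;
  /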

APEX_STRING : I always used to use the STRING_TO_TABLE and TABLE_TO_STRING functions in the APEX_UTIL package. Those have now been moved to the APEX_STRING package. You might prefer to use the SPLIT and JOIN functions instead. There is also a bunch of other string handling stuff worth checking out.
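For example (a minimal sketch):

  DECLARE
    l_tab apex_t_varchar2;
  BEGIN
    -- Split a delimited string into a table of values.
    l_tab := apex_string.split('one:two:three', ':');
    dbms_output.put_line('count=' || l_tab.count);

    -- Join it back together with a different separator.
    dbms_output.put_line(apex_string.join(l_tab, ','));
  END;
  /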

Every release brings more goodies for PL/SQL programmers, regardless of whether you want to use APEX or not!

So do yourself a favour. Install APEX into all your Oracle databases and make your PL/SQL developers happy.

Cheers

Tim…

PS. I don’t actually think you are a moron if you don’t install APEX. If you take the titles of blog posts seriously when they are clearly clickbait, maybe you are a moron…

Update: There seems to be a little confusion in some social media comments. Installing APEX into a database doesn’t mean APEX is available for people to develop applications. You can only use APEX proper if you have a gateway (ORDS, EPG, mod_plsql) fronting it. Without a gateway, APEX doesn’t work, and someone can’t take it upon themselves to run a gateway somewhere else without knowing all the relevant passwords, so installing APEX and using it are not the same thing. Installing it does make the built-in packages available for normal PL/SQL development, which is really handy. Hence this post.

Oracle Database Upgrades : One Size Does Not Fit All

If you follow me on Twitter, you’ll know I’ve been doing a lot of upgrades recently. Whenever I mention upgrades, someone comes back with a comment asking me to describe in detail what I’ve done. This always makes me nervous, as every upgrade is potentially unique. What you have to do depends on a number of factors.

  • The database version you are upgrading from, and the version you are upgrading to.
  • The options you have installed, especially if some are deprecated or desupported in the new release.
  • The topology of your system. If you are running single instance on a regular file system, you’ve got a lot less to do compared to someone working with RAC on ASM with a Data Guard standby.
  • Are you taking the opportunity to move to new kit and/or base operating system? From OL6 to OL7/8 for example.
  • Are you changing OS completely? Maybe you’ve finally made the decision to upgrade from AIX/HP-UX/Solaris to Oracle Linux. 🙂
  • Are you planning to convert from non-CDB to the multitenant architecture? You should be!
  • How big is the database you are upgrading? I’ve done some small ones with data pump rather than doing a regular upgrade.
  • What is your tolerance for downtime? If you have a reasonable downtime window, you can turn everything off and upgrade it, which is a lot simpler than trying to keep the lights on.
  • Are there any vendor restrictions that alter how you approach the upgrade?

All of these things, and more I can’t think of off the top of my head, have to be factored in when planning an upgrade, and this is why I say every upgrade is potentially unique.

The types of upgrades I’ve done recently fall into the following general groups, with varying numbers in each group. A number of them have included a move to new kit, because they were running on Oracle Linux 6.

  • 11.2.0.4 to 19c non-CDB (Vendor support issues)
  • 11.2.0.4 to 19c PDB
  • 12.1.0.2 non-CDB to 19c PDB
  • 18c PDB to 19c PDB
  • 18c PDB to 19c PDB using Data Pump
  • 11.2.0.4 to 12.1.0.2 as stepping stone to 19c

The last one may seem odd to people, but this is due to an application dependency issue. The full process for this is as follows.

  • Upgrade database from 11.2.0.4 to 12.1.0.2.
  • Upgrade application to a version that supports both 12.1.0.2 and 19c.
  • Migrate to new kit.
  • Upgrade database from 12.1.0.2 to 19c.
  • Upgrade application to latest version.

What we are trying to do is make everything as “one size fits all” as possible, to make future upgrades, or moves to the cloud, easier, but that’s not always possible due to other constraints.

I do have a couple of upgrade articles on the site, but they are intentionally basic and I never intend to write anything more detailed about upgrades, because it’s impossible to write something that will satisfy every possibility.

So in summary, there is no one size fits all solution to upgrades unless you have already commoditized all your systems, like the cloud providers do. If you are working with a load of on-prem systems, some of which you have inherited from others, each upgrade will be a voyage of discovery, so don’t ask me for a detailed breakdown of what I did, because I’m just going to say no. There is a reason why there is a great big upgrade manual released with every version of the database!

Cheers

Tim…

Video : Partial Indexes for Partitioned Tables

In today’s video we demonstrate partial indexes for partitioned tables, introduced in Oracle database 12.1.

The video is based on this article.
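The basic idea looks like this (a minimal sketch with hypothetical table and index names):

  -- Decide at the partition level whether indexing should happen.
  CREATE TABLE orders (
    id         NUMBER,
    created_on DATE
  )
  PARTITION BY RANGE (created_on)
  (
    PARTITION orders_2020 VALUES LESS THAN (TO_DATE('01-JAN-2021','DD-MON-YYYY')) INDEXING OFF,
    PARTITION orders_2021 VALUES LESS THAN (TO_DATE('01-JAN-2022','DD-MON-YYYY')) INDEXING ON
  );

  -- A local partial index only builds usable index partitions for the
  -- table partitions marked INDEXING ON.
  CREATE INDEX orders_created_on_idx ON orders (created_on)
    LOCAL INDEXING PARTIAL;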

The star of today’s video is Cary Millsap, who is being led astray by a very naughty James Morle.

Cheers

Tim…

Video : JSON_TRANSFORM in Oracle Database 21c

In today’s video we demonstrate the JSON_TRANSFORM function, introduced in Oracle database 21c.

This video is based on the following article, which has a lot more information and examples than the video does.
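To give you a flavour (a minimal sketch, assuming a table T1 with a JSON column JSON_DATA):

  -- Modify a JSON document in place: set one attribute and remove
  -- another in a single operation.
  UPDATE t1
  SET    json_data = JSON_TRANSFORM(
                       json_data,
                       SET    '$.status' = 'ACTIVE',
                       REMOVE '$.old_attribute');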

I’ve written a bunch of stuff on JSON and ORDS, and there’s a YouTube playlist.

The star of today’s video is Tom Kyte. Here are two funny (to me) stories about my interactions with Tom.

On my first visit to OpenWorld in 2006, Tom walked up to me and introduced himself. I was blown away he even knew who I was. It was a massive ego trip. On my next visit to OpenWorld in 2007, I walked up to Tom and said casually, “Hi Tom”. He looked at me, looked down at my conference badge, looked at me again and said, “Hi Tim”. Needless to say, my ego was placed firmly back where it belonged. I still laugh when I think about it now.

At a later OpenWorld I was sitting at a table and someone was fanboying me. Tom came along and asked if he could sit at the same table. I said sure. My fanboy turned and said in a rather snarky tone, “And who are you?” Tom replied, “Hi. I’m Tom Kyte”. Fanboy’s face immediately dropped and at that exact moment in time I became invisible, and fanboy began to fanboy Tom. It was priceless!

Good times!

Cheers

Tim…

Video : Multitenant : Disk I/O (IOPS, MBPS) Resource Management for Pluggable Databases (PDBs)

In today’s video we’ll discuss how Resource Manager allows us to manage the disk I/O (IOPS, MBPS) usage in PDBs. This can be useful to stop a small number of PDBs using all disk IOPS and/or bandwidth on the server.

The video is based on the following article.
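The mechanics are simple (a minimal sketch with example values; a value of 0 means no limit):

  -- Connected to the PDB we want to throttle.
  ALTER SESSION SET CONTAINER = pdb1;

  -- Cap the I/O operations per second and the throughput for this PDB.
  ALTER SYSTEM SET max_iops = 10000 SCOPE = BOTH;
  ALTER SYSTEM SET max_mbps = 500 SCOPE = BOTH;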

You might also find these useful.

The star of today’s video is my dad Graham Wood, who is now living a life of leisure. Unfortunately the travel restrictions mean I won’t be able to visit him this year. 🙁

Cheers

Tim…

Oracle REST Data Services (ORDS) : Database APIs – First Steps

In my never-ending quest for automation, I finally got round to looking at the Oracle REST Data Services (ORDS) Database APIs.

These have been around for some time, but I was testing them for the first time using ORDS version 20.2, so I was basing my tests on that version of the documentation, and more importantly version 20 of the APIs.

There are several sets of APIs, and they don’t have the same dependencies or authentication methods. It’s not that big a deal once you know what’s going on, but it confused the hell out of me for a while, and the documentation doesn’t give you much of a steer for some of this.

PDB Lifecycle Management

My first tests were of the PDB Lifecycle Management endpoints. I enable all the relevant features in my normal installation, but there was one big road block. I always install ORDS in the PDB, and this feature only works if ORDS is installed in the root container. This makes sense as the management of PDBs is done at the root container level, but I prefer not to put anything in the root container if I can help it. I uninstalled and reinstalled ORDS so I could give it a go. This resulted in this article.

The PDB Lifecycle Management functionality seemed better suited to a self-contained article, as it is only available from a CDB installation, has its own authentication setup and only has a small number of endpoints. The available APIs are kind-of basic, but they could still be useful. It will be interesting to see if this expands to fit all the possible requirements for a PDB, which are now pretty large. I suspect not.

Most of the other stuff

Next up was “most of the other stuff”. There are too many endpoints to go into any level of detail in a single article, so I figured this should focus on the setup to use most of the other endpoints.

There are two methods of authentication discussed. The first is the default administrator approach, which is good because it hides the database credentials from the user making the API calls. Instead they use application server credentials mapped to the “System Administrator” role. This is similar to that used by the PDB Lifecycle Management feature, except that uses the “SQL Administrator” role, and the ORDS properties are different.

The other approach is to use an ORDS enabled schema. This will be very familiar to people already using ORDS, but it comes with one big disadvantage compared to the previous method. For this functionality you have to expose the database credentials of the ORDS enabled schema to the person calling the API. Normally we would not expose these, instead using another form of authentication (Basic, OAUTH2 etc.) to allow the user to gain access. Even then, the ORDS enabled schema would normally be a weak user that only has access to the specific objects we want it to interact with, but in this case it’s a DBA user, so it makes me nervous. Using the default administrator method the caller is constrained to some extent by the APIs, but with the database credentials they have everything, provided they have direct access to the database server. It’s probably insignificant when you consider the amount of damage someone could do with the APIs alone, but I feel myself wincing a little when putting DBA credentials into an HTTPS call.
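For reference, REST enabling the schema is the standard ORDS call (a minimal sketch with a hypothetical schema name; in this scenario it’s a privileged user, which is exactly what makes me nervous):

  BEGIN
    ords.enable_schema(
      p_enabled             => TRUE,
      p_schema              => 'DBA_USER',
      p_url_mapping_type    => 'BASE_PATH',
      p_url_mapping_pattern => 'dba_user',
      p_auto_rest_auth      => TRUE);
    COMMIT;
  END;
  /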

For me as a DBA/Developer I would see myself as the person using these APIs to develop something, whether that was an automation, or an application. If this were to be handed over to a developer to do the work, these security questions may be a much bigger issue.

Having read that, you are probably thinking, “Just use the default administrator method then.” I would, but some APIs don’t work with that method. Some only work with the ORDS enabled schema method of authentication, while others only work with the default administrator method. What’s more, I don’t see any reference to this in the documentation. The API doc doesn’t even mention the default administrator approach, and the setup doc doesn’t mention the limitations on any of the approaches except PDB Lifecycle Management. As a result, I think you will need to use a mix of the authentication methods if you plan to use a variety of functionality.

The good thing is they can all live side-by-side. At one point I was testing with a CDB installation of ORDS with credentials for PDB Lifecycle Management, default administrator and ORDS enabled schema authentication all configured at the same time. No problem. It’s just confusing when endpoints fail and you have to “trial and error” your way through them. It would be nice if there was a grid of which groups of endpoints need which type of authentication.

Now I am a noob, so maybe I’ve missed the point here, but I spent a long time trying out variations, and this seems like the way it is. If someone can educate me about why I am wrong I will willingly amend the articles, and this blog post. 🙂

Thoughts and what next?

At this point I’ve just been finding my feet, and I’m not sure what I will do next. There are some endpoints that interest me, so I might do separate articles on those, and refer back to the setup in the above articles. Then again, it may feel like just regurgitating the API documentation, so I may not. It’s worth taking a look at the available endpoints, broken down into these main sections.

  • Clusterware CLIs
  • Data Dictionary
  • Environment
  • Fleet Patching and Provisioning
  • General
  • Monitoring
  • Performance
  • Pluggable Database Lifecycle Management

Some will require additional setup, but many will not.

From the look of it, the vast majority of the endpoints are for reporting purposes. There are far fewer that actually allow you to manipulate the contents of the database. You can always write your own services for that, or use REST Enabled SQL to do it I guess. The question will be, can I get enough value out of these APIs as they stand to warrant the investment in time? I’m not sure at this point.

Cheers

Tim…

PS. If you were watching my Twitter feed over the weekend and wondered what bit of tech I gave up on, it was this. I’m very stubborn though, so I came back…