Over the years I’ve extolled the virtues of Oracle Application Express (APEX) because of the ease of development. I think low code tools are a massive boon to productivity. Of course there are some tasks that need alternative tools, but for many scenarios low code tools are awesome.
Something else I find really appealing about APEX is the ease of upgrades. I’m not talking about how easy it is to apply the upgrade itself, because updating Java and Tomcat versions on a server is really easy too. I mean how simple it is from a wider perspective.
I was the first person in my company to use APEX. I used it to write some utility type applications, when it was still “forbidden”. Some of these applications were written over a decade ago, and they are still working fine. In that time we’ve had regular APEX upgrades, and they’ve just kept going. No refactoring. No drama.
Of course, they aren’t using all of the new features that were added in subsequent releases, but the important thing is all that development investment was not impacted by staying on the latest APEX release and patch set. In comparison, updating some of our other platforms and frameworks is a nightmare, requiring substantial development effort and testing.
So it’s not just about improving productivity during the development phase. It’s also about the reduction in the total cost of ownership (from a development perspective) over the lifespan of the application.
Just thought I would share that thought, as I upgrade & patch some production systems… 🙂
Upgrading a database is not about the technical side of things. It’s about the planning that is required. I can upgrade a database in a few minutes, but the project to upgrade all the environments for a specific application can take months to complete. In this post I want to discuss some of the issues we are discussing at the moment regarding our future Oracle 23c upgrades.
What support dates are relevant?
Here are the support dates for the Oracle database.
To upgrade directly to 23c we must be running Oracle 19c or 21c.
All our databases are on 19c, so this puts us in a good position. It took a lot of pain and effort to get us to 19c, but it was worth it!
PDB or non-CDB architecture?
The non-CDB architecture was deprecated in Oracle 12.1, but it has remained supported up to, and including, 19c. So Oracle 23c will be the first long term release where the non-CDB architecture is not an option. If you've not got up to speed on pluggable databases, you'd better get started soon! (Multitenant Articles)
With one exception, we have PDBs across the board, so there is nothing new for us here. It sometimes felt like I was swimming against the tide by pushing PDBs so hard over the years, but it all seems worth it now.
What OS are you running on?
I’m going to conveniently forget that anything other than RHEL/OL exist, because other operating systems don’t exist for me in the context of running Oracle databases.
It took us a long time to migrate from OL6 to OL7. The majority of our Oracle databases are currently still running on OL7, which is fast approaching end of life. Since Oracle 23c will not be supported on OL7, we are going to need to migrate to a newer operating system. I wrote about my scepticism around in-place RHEL/OL upgrades (here), so that leaves us with two choices.
Move our existing databases to a new OS now, then upgrade to 23c later.
Wait for the 23c upgrade, and do a big-bang OS migration and database upgrade.
What’s stopping us from doing the first option now? Nothing really. We could migrate our 19c databases to OL8 servers. It would be nicer to migrate them to OL9, but it is not supported for 19c yet. I recently wrote a rant about product certifications on Oracle Linux 9, which resulted in this response from Oracle.
“Oracle Database product management has confirmed that when Oracle Database 23c ships, it will be certified for both OL8 and OL9. Also, Oracle Database 19c will be certified on OL9 before end of 2023.”
That’s really good news, as it gives us more options when we move forward.
Will Oracle Linux exist in the future? Yes!
Just as I thought I had got my head around the sequence of events, RHEL dropped the bombshell about how they would distribute their source in future (see here). This raised concerns about whether RHEL clones such as Oracle Linux, Rocky Linux and AlmaLinux could even exist in future.
Without knowing the future of our main operating system, we were questioning what to deploy on new servers. Do we continue with OL or switch to RHEL? Rocky and Alma released some “don’t panic” messages, but Oracle were very quiet. I wasn’t surprised by that, because Oracle don’t say anything until it has passed through legal, but as a customer it was a very nervy time.
A couple of days ago we got a statement from Oracle (here), with a firm commitment to the future of Oracle Linux. I immediately spoke to our system admins and said OL8/OL9 is back on the table. Phew!
I have my own opinions on the RHEL vs clones situation, but as a customer it’s not about politics. I just need to know the OS we are using is still going to exist, and will be supported for our databases. In that respect the statement from Oracle was very welcome!
Do you need a hardware refresh?
If you are running on physical kit, you need to check your maintenance agreements. You may need a hardware refresh to keep everything up to date.
We run everything on virtual machines (VMs), so the hardware changes to the clusters have no impact on our VMs. We’ve had at least one hardware refresh during the lifespan of some of our database VMs.
We use a lot of third party applications, and some of the vendors are really slow to certify their applications on new versions of the database and operating systems.
Ultimately we will make a choice on destination versions and timings based on application vendor support.
Manual or AutoUpgrade?
In Oracle 23c manual upgrades are deprecated (but still supported). I was late to the party with AutoUpgrade, but now I’ve used it I will never do manual upgrades again. We will definitely be using AutoUpgrade for our 23c upgrades!
If you are new to AutoUpgrade I have some examples of using it when I was doing 21c upgrades (see here). That should help you get started.
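To give a feel for why AutoUpgrade is so much nicer than the manual route, a minimal run looks something like this. The config file keys and the -mode flags are real AutoUpgrade parameters, but the paths and SID below are hypothetical, so adjust them for your environment.

```shell
# Minimal AutoUpgrade config file (the Oracle homes and SID are hypothetical).
cat > /tmp/19c_upgrade.cfg <<EOF
global.autoupg_log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade
upg1.source_home=/u01/app/oracle/product/12.2.0.1/dbhome_1
upg1.target_home=/u01/app/oracle/product/19.0.0/dbhome_1
upg1.sid=orcl
EOF

# Analyze first: checks the database for upgrade blockers without changing anything.
java -jar autoupgrade.jar -config /tmp/19c_upgrade.cfg -mode analyze

# When the analysis is clean, deploy runs the full upgrade,
# including the pre-upgrade and post-upgrade fixups.
java -jar autoupgrade.jar -config /tmp/19c_upgrade.cfg -mode deploy
```

The analyze/deploy split is what makes it easy to fold into a bigger project plan, as the analyze step can be run well in advance of the downtime window.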
What are you going to test?
Testing is always a big stumbling block for us. We are not very far down the path of automated testing, which means we need bodies to complete testing. The availability of testing resource is always an issue. There are times of the year when it is extremely unlikely people will be made available, so planning this resource is really important.
So what’s the plan?
It’s always a balancing act around support for the OS, database and application vendors. Ultimately each project will have to be dealt with on a case by case basis, as the allocation of testing resources and potential disruption to the business have to be factored in. Everything is open to change, but…
Our default stance is we will upgrade to Oracle 23c on OL9. We will build new OL9 servers and install 23c on them, then use AutoUpgrade to migrate and upgrade the databases. For some of our internal developments I feel this could happen relatively quickly (kiss of death).
Application vendor support is often a sticking point for us, and timing will have to factor in the OL7 end of life. If support for 19c on OL9 comes in time, we may migrate our 19c databases to OL9, while we wait for a vendor to support Oracle 23c. Alternatively we could pay for extended support for OL7, and do the OS and database in one go once the application vendor is happy.
I realise this has been a bit of a ramble, but I just wanted to write it down to get things straight in my own head. 🙂
PS. I have some technical posts on upgrading to 23c that will be released once the on-prem version of 23c goes GA.
The RHEL distribution is still really popular in the enterprise Linux market. The stats sometimes look worse because the users are split across both RHEL and all of its many clones, but it still represents a massive chunk of the enterprise market. With that in mind, why are RHEL upgrades still so unreliable?
My history of OS upgrades
Over the years I’ve done loads of operating system upgrades. In “recent” times I’ve done Windows upgrades (8 -> 8.1 -> 10 -> 11) and loads of Intel macOS upgrades with minimal drama. That’s not the case with Linux.
About a decade ago I was using Fedup to upgrade between versions of Fedora. That was later replaced by the DNF upgrade process. More recently I've tried my hand at using Leapp to upgrade Oracle Linux. I'm no expert, but these upgrades always feel really janky. You never really know what you are going to get until the upgrade is complete. The mileage varies depending on the configuration of the server, and what software has been installed on it. In some cases you have a system that is running fine. In some cases you have a running server, but a bunch of the configuration has to be redone to make your application work. In some cases you have bricked your server. Not exactly confidence building.
So what is the alternative?
My overall feel has always been don’t upgrade! Build new kit, migrate across to it, then ditch the old kit.
In the past we would typically make that process fit with the lifespan of the physical kit, but now that we use virtual machines we could potentially have a VM that outlives the physical kit many times over, so the natural migration point has gone, and a reliable in-place upgrade process is more desirable.
Migrating to a new server comes with a whole bunch of overheads. It's a lot more work than an in-place OS upgrade.
Infrastructure as Code
If you have true infrastructure as code, you can build new systems really quickly, including all the VMs, networking and firewalls, which just leaves you with the data migration to worry about. If you are not in that position, it’s a nightmare of service tickets and endless waiting.
Is it too much to ask for such a dominant operating system to have reliable upgrades?
I would love to know your experiences of OS upgrades for RHEL and its clones. Have I just been really unlucky with Leapp, or are upgrades really as bad as I think?
Update 1: I ran a poll on Twitter. There weren’t many responses, but at least I can see I’m not completely alone. 🙂
Update 2: There are some interesting responses in the comments. Well worth you giving them a look.
It’s over a year since Oracle Linux 9 was released and we still haven’t seen any certifications of Oracle products on it. What is going on?
Check out the really important update at the end of the post.
Setting the scene
Here are the current support dates for Oracle Linux.
So unless you want to start paying for extended support, you need to be getting rid of OL7 soon. The problem is, you can only move to OL8, because OL9 is not supported for Oracle 19c yet.
So if you are trying to do the right thing by migrating off OL7, you are being forced on to an old version of the OS, which in turn reduces the amount of time you can run before the next OS move.
The next long term release of the Oracle database is Oracle 23c. It’s still in beta, so we don’t know what it will be supported on yet, but 23c Free is only available for RHEL8/OL8 at this time.
So your choices are:
Migrate on to OL8, which solves your OS support issue, and will allow you to upgrade to 23c when it is released.
Wait for 23c to be released and hope it is supported on OL9. If it is, do a big bang migration to OL9 and 23c, hoping you can get it all done before the OL7 support deadline.
I don’t like either of these choices.
What do I want?
Apart from unlimited wishes, I want:
A definitive statement about whether Oracle 19c will ever be supported on OL9. If it will be, when is that likely to happen? My guess is this will never happen, but I want an official statement ASAP.
A definitive statement about support for Oracle 23c on OL9. Will it be supported when 23c goes GA?
I guess it would also be nice if RHEL/OL upgrades actually worked reliably, but that is the subject for another rant!
I think like most people, I just want some clarity. It’s hard to plan when you don’t have any of the vital information. The thought of having to move to OL8 rather than OL9 is kind-of depressing… 😞
Update from Oracle: “Oracle Database product management has confirmed that when Oracle Database 23c ships, it will be certified for both OL8 and OL9. Also, Oracle Database 19c will be certified on OL9 before end of 2023.”
Update: Oracle have now certified Oracle 19c on Oracle Linux 9, as discussed here.
I’ve mentioned database upgrades a few times over the last year or more. Like many others, we are pushing hard to get everything upgraded to 19c. Over the last couple of weeks a bunch more systems got upgraded, and we are now looking like this.
The remaining 11.2 and 12.1 databases are all in various stages of migration/upgrade. I would not curse us by giving a deadline for the final databases, but hopefully soon!
The reason for mentioning the Star Trek: Enterprise theme song is it starts with the words, "It's been a long road getting from there to here", and that is exactly how it feels.
Many of the database upgrades are technically simple, but the projects surrounding them are soul destroying. Getting all the relevant people to agree and provide the necessary resources can be really painful. This is especially true for “mature” projects, where the, “if it ain’t broke, don’t fix it”, mentality is strong. I wrote about the problems with that mentality here.
We always go for the multitenant architecture (CDB/PDB) unless there is a compelling reason not to. I think we only have one non-CDB installation of 19c because of a vendor issue. None of our other 3rd party applications have had a problem with using PDBs, provided we’ve made sure they connect with the service, not a SID. We don’t use the USE_SID_AS_SERVICE_listener_name parameter. I would rather find and fix the connection issues than rely on this sticking plaster fix.
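For anyone wondering what that looks like in practice, here's a pair of hypothetical tnsnames.ora entries. The host and names are made up; the point is that PDB connections need a SERVICE_NAME in CONNECT_DATA, because a SID identifies the instance, not the pluggable database.

```
# Old style, SID-based entry. This points at the instance,
# so it cannot target a PDB.
MYAPP_SID =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = db-server1)(PORT = 1521))
    (CONNECT_DATA = (SID = cdb1))
  )

# Service-based entry. This is what PDB connections should use.
MYAPP =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = db-server1)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = pdb1))
  )
```

Hunting down the handful of connect strings that still use a SID is usually a small job, and far better than carrying the USE_SID_AS_SERVICE_listener_name workaround forever.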
I know I've said some of these things before, but they are worth repeating.
Oracle 19c is the current long term release, so it’s going to have support for a longer time than an innovation release.
Oracle 21c is an innovation release. Even when the on-prem version does drop, you probably shouldn’t use it for your main systems unless you are happy with the short support lifespan.
I recently heard there won’t be an Oracle 22c, so the next release after Oracle 21c will be Oracle 23c, which is currently slated to be the next long term release.
In short, get all your databases to Oracle 19c, and you should probably stick there until Oracle 23c is released, unless you have a compelling case for going to Oracle 21c.
If you follow me on Twitter, you’ll know I’ve been doing a lot of upgrades recently. Whenever I mention upgrades, someone comes back with a comment asking me to describe in detail what I’ve done. This always makes me nervous, as every upgrade is potentially unique. What you have to do depends on a number of factors.
The database version you are upgrading from, and the version you are upgrading to.
The options you have installed, especially if some are deprecated or desupported in the new release.
The topology of your system. If you are running single instance on a regular file system, you’ve got a lot less to do compared to someone working with RAC on ASM with a Data Guard standby.
Are you taking the opportunity to move to new kit and/or base operating system? From OL6 to OL7/8 for example.
Are you changing OS completely? Maybe you’ve finally made the decision to upgrade from AIX/HP-UX/Solaris to Oracle Linux. 🙂
Are you planning to convert from non-CDB to the multitenant architecture? You should be!
How big is the database you are upgrading? I’ve done some small ones with data pump rather than doing a regular upgrade.
What is your tolerance for downtime? If you have a reasonable downtime window, you can turn everything off and upgrade it, which is a lot simpler than trying to keep the lights on.
Are there any vendor restrictions that alter how you approach the upgrade?
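On the data pump point above, for a small database the move can be a simple full export on the old system and import into a freshly created database on the new one. The connect strings, directory object and file names here are hypothetical.

```shell
# On the old system: full export.
# DP_DIR is a pre-created directory object pointing at a dump area.
expdp system@old_db full=y directory=DP_DIR \
  dumpfile=full_exp.dmp logfile=full_exp.log

# Copy the dump file to the new server, then import into the
# pre-created 19c database (or PDB).
impdp system@new_pdb full=y directory=DP_DIR \
  dumpfile=full_exp.dmp logfile=full_imp.log
```

The attraction is you get a clean dictionary on the new version, at the cost of more downtime than an in-place upgrade for anything but small databases.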
All of these things, and more I can’t think of off the top of my head, have to be factored in when planning an upgrade, and this is why I say every upgrade is potentially unique.
The types of upgrades I’ve done recently fall into the following general groups, with varying numbers in each group. A number of them have included a move to new kit, because they were running on Oracle Linux 6.
11.2.0.4 to 19c non-CDB (Vendor support issues)
12.1.0.2 to 19c PDB
12.2.0.1 non-CDB to 19c PDB
18c PDB to 19c PDB
18c PDB to 19c PDB using Data Pump
11.2.0.2 to 11.2.0.4 as stepping stone to 19c
The last one may seem odd to people, but this is due to an application dependency issue. The full process for this is as follows.
Upgrade database from 11.2.0.2 to 11.2.0.4.
Upgrade application to a version that supports both 11.2.0.4 and 19c.
Migrate to new kit.
Upgrade database from 11.2.0.4 to 19c.
Upgrade application to latest version.
What we are trying to do is make everything as “one size fits all” as possible, to make future upgrades, or moves to the cloud, easier, but that’s not always possible due to other constraints.
I do have a couple of upgrade articles on the site, but they are intentionally basic and I never intend to write anything more detailed about upgrades, because it’s impossible to write something that will satisfy every possibility.
So in summary, there is no one size fits all solution to upgrades unless you have already commoditized all your systems, like the cloud providers do. If you are working with a load of on-prem systems, some of which you have inherited from others, each upgrade will be a voyage of discovery, so don’t ask me for a detailed breakdown of what I did, because I’m just going to say no. There is a reason why there is a great big upgrade manual released with every version of the database!
Do you remember when everyone disabled SSLv3 on their websites?
Do you remember how loads of people running Oracle database version 11.2.0.3 and lower cried because all their database callouts failed?
Do you remember how they were all forced to patch to 11.2.0.4 or 12.1 to get support for TLS?
Do you remember thinking, I’ll never let something like that happen again?
I’m so sick of saying this. I know I sound like a broken record, but it’s like I’m living in the movie Groundhog Day.
There is no such thing as standing still in tech. It’s like swimming upstream in a river. It takes work to remain stationary. The minute you stop for a rest you are actually moving backwards. I’m sure your next response is,
“But Tim, if it ain’t broke, don’t fix it!”
The minute you stop patching and upgrading, your application is already broken. Yesterday you had an up-to-date system. Today you don’t. You have stopped, but the world around you continued to move on, and sometimes what they do will have a direct impact on you.
The security folks have been complaining about TLSv1.0 and TLSv1.1 for ages, but we are now in the position where the world and their dog are switching off those protocols, and the "we don't need no stinking patches or upgrades" brigade are pissing and moaning again.
You knew this was going to happen. You had plenty of warning. It is your fault things are now failing. The bad decisions you made have led you to this point, so stop blaming other people. IT IS YOUR FAULT!
Where do you go from here?
First things first, start planning your patch cycles and upgrade cycles. That isn’t a “one time and done” plan. That is from now until forever. You’ve got to keep your server operating systems and software up to date.
If you can’t cope with that, then move to a cloud service that will patch your shit for you!
I know upgrades aren't necessarily a quick fix, as they need some planning, so you will need some sticking plasters to get you through the immediate issues. Things to consider are:
Your load balancers and/or reverse proxies can hide some of your crap from the outside world. You can support TLSv1.2+ between the client and the reverse proxy, then drop down to a less secure protocol between your reverse proxy and your servers.
You can do a similar thing with database callouts to the outside world. Use an internal proxy between you and the external resource. The connection between your proxy and the outside world will speak on TLSv1.2+, but the callout from the database to your proxy will speak using a protocol your database can cope with.
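As a sketch of the reverse proxy approach, here's a minimal Nginx config fragment. The server name, backend address and certificate paths are all hypothetical. The public side only speaks TLSv1.2+, while the hop to the legacy backend uses whatever that server can manage.

```
server {
    listen 443 ssl;
    server_name app.example.com;

    # Modern protocols on the public side only.
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_certificate     /etc/nginx/certs/app.crt;
    ssl_certificate_key /etc/nginx/certs/app.key;

    location / {
        # The hop to the legacy backend is internal, so it can
        # negotiate whatever protocol the old server supports.
        proxy_pass https://backend1.internal:8443;
    }
}
```

The same idea works in reverse for database callouts: the database talks to an internal proxy using an old protocol, and the proxy talks to the outside world using TLSv1.2+.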
These are not “fixes”. They are crappy sticking-plaster solutions to hide your incompetence. You need to fix your weak infrastructure, but these will buy you some time…
I don’t really care if you think you have a compelling counter argument, because I’m still going to scream “WRONG” at you. If you don’t think patching and upgrades are important, please quit your tech job and go be incompetent somewhere else. Have a nice life and don’t let the door hit you on the ass on your way out!
PS. You know this is going to happen again soon, when the world decides that anything less than TLSv1.3 is evil.
Saturday’s post about 19c generated a lot of feedback on a number of my social media networks. To generalise, you could probably split the responses into two camps.
Of course, this discussion applies equally to other technologies, including the middle tier and development frameworks. It can also be applied to people’s attitudes to patching, as well as to upgrades.
We don’t want to upgrade.
I totally get why people think this is a viable option. Such arguments might include:
Stability is more important than new features. With new versions come new bugs and new instabilities.
We don’t have time/resources to test our application against the new version.
The cost (in time and resources) of upgrading is not worth the pay-off of being at the new version. It’s cheaper to pay for extended support than upgrade.
The version we are using has all the features we need, so upgrading is a waste of time.
Our customers don’t care about new versions.
We do all our business logic in the middle tier, so the DB is just a bit bucket. New features are irrelevant.
We want to upgrade to the latest versions.
This is closer to my position, with a few important caveats.
You have to have time to learn the new version, so you can get the most out of it, and not fall foul of new features that do things you might not expect.
You need to test your application properly against it, to make sure the new version doesn’t break anything. This seems to be the biggest sticking point for companies that haven’t invested in automated testing. A lacklustre approach to testing your application will often result in a disastrous upgrade.
So why would you bother? This is where I do my totally biased sales pitch. 🙂
New versions contain new features, some of which actually work. 🙂 There are headline new features that are just marketing bumf and irrelevant to me, but there are also some really useful things, which make life easier for you and your company. Look what’s happened just for online operations in the last few versions. I know some of these features have saved loads of downtime for people.
I tend to think development new features drive change more than DBA new features, because they outwardly affect more people in the company. For people who do development in the database, the last few releases have included a lot of really useful things. Let’s look at just one of them that is dear to my heart.
If you know what is in the new releases and you don’t find anything compelling, then a choice to stay where you are is fine. We don’t all want the same things, and different opinions are fine. If you are just sticking with what you’ve got because you can’t be bothered to learn the new stuff, and you are content to do the same thing you’ve always done forever, I think you are selling yourself and your company short. Just my opinion though.
I’ve already declared my bias towards upgrades. Why? Because many of the problems I come across actually come from not staying up to date with versions and/or patches. The rest of the industry keeps moving, and somehow DBAs want to be totally static. I don’t understand that. Caution is good. Static is bad IMHO! 🙂
In a previous post I mentioned the updates to my Vagrant builds to include this version, as well as updates of Tomcat and Java. I’ve subsequently done the updates for APEX 18.2 on Docker too. If you are interested you can see them here.
In addition to this we’ve rolled APEX 18.2 out at work. We already had some installations of APEX 18.1, but many were stuck on version 5.1.4 because of time constraints. Now everything is up to APEX 18.2. We still have a range of database versions (11.2, 12.1, 12.2 and soon to be 18c) at work, and it’s worked fine on all of them.
I spied a couple of people asking about the upgrade process. There’s no difference to previous versions. In the past, if one of the first two numbers change you do a regular install. If it’s not one of those major version changes you download the patch from MOS and apply it. Since this is a major version number change, I installed it in the normal way and everything was fine. I’m not sure how this will work going forward, as I suspect all releases will start to use the new version format, so does that mean every release from now on will be an “install”, not a “patch”? (see update) Someone has probably discussed this already and I missed it. 🙂
I only have one little gripe about the upgrades, which is I have to run an ORDS Validate once it’s complete to make sure ORDS is working fine. It would be really nice if APEX could fix whatever gets broken in ORDS, so I don’t have to do it. It’s just one less step to do… 🙂
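For reference, for the ORDS versions of that era the validate step was just this, assuming ords.war has already been configured and is in the current directory:

```shell
# Re-validate the ORDS metadata in the database after an APEX upgrade.
java -jar ords.war validate
```

It's a small step, but it's one more thing to remember in every environment, which is why I'd rather the APEX upgrade handled it.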
Update: The subject of install vs. patch was raised at OpenWorld 2018. It sounds like the current plan is to get rid of patches and take the install approach each time. The APEX team are working on reducing the downtime associated with the upgrades…
As I said in a recent post, you know you are meant to, but you don’t. Why not?
The reasons will vary a little depending on the tech you are using, but I’ll divide this answer into two specific parts. The patch/upgrade process itself and testing.
The Patch/Upgrade Process
I’ve lived through the bad old days of Oracle patching and upgrades and it was pretty horrific. In comparison things are a lot better these days, but they are still not what they should be in my opinion. I can script patches and upgrades, but I shouldn’t have to. I’m sure this will get some negative feedback, but I think people need to stop navel gazing and see how simple some other products are to deal with. I’ll stop there…
That said, I don't think patches and upgrades are actually the problem. Of course you have to be careful about limiting downtime, but much of this is predictable and can be mitigated.
One of the big problems is the lack of standardisation within a company. When every system is unique, automating a patch or upgrade procedure can become problematic. You have to include too much logic in the automation, which can make the automation a burden. What the cloud has taught us is you should try to standardise as much as possible. When most things are the same, scripting and automation get a lot easier. How do you guarantee things conform to a standard? You automate the initial build process. 🙂
So if you automate your build process, you actually make automating your patch/upgrade process easier too. 🙂
The app layer is a lot simpler than the database layer, because it’s far easier to throw away and replace an application layer, which is what people aim to do nowadays.
Testing is usually the killer part of the patch/upgrade process. I can patch/upgrade anything without too much drama, but getting someone to test it and agree to moving it forward is a nightmare. Spending time to test a patch is always going to lose out in the war for attention if there is a new spangly widget or screen needed in the application.
This is where automation can come to the rescue. If you have automated testing not only can you can move applications through the development pipeline quicker, but you can also progress infrastructure changes, such as patches and upgrades, much quicker too, as there will be a greater confidence in the outcome of the process.
Patching and upgrades can't be considered in isolation where automation is concerned. It doesn't matter how quickly and reliably you can patch a database or app server if nobody is ever going to validate it is safe to progress to the next level.
I'm not saying don't automate patching and upgrades, you definitely should. What I'm saying is it might not deliver on the promise of improved roll-out speed, as a chain is only as strong as its weakest link. If testing is the limiting factor in your organisation, all you are doing by speeding up your link in the chain is adding to the testing burden down the line.
Having said all that, at least you will know your stuff is going to work and you can spend your time focusing on other stuff, like maybe helping people sort out their automated testing… 🙂