Planning our next Oracle database upgrades. A customer perspective…

Upgrading a database is not really about the technical side of things. It’s about the planning that is required. I can upgrade a database in a few minutes, but the project to upgrade all the environments for a specific application can take months to complete. In this post I want to discuss some of the issues we are considering at the moment regarding our future Oracle 23c upgrades.

What support dates are relevant?

Here are the support dates for the Oracle database.

Release  Premier Support (PS) Ends    Free Extended Support (ES) Ends
19c      April 30 2024                April 30 2025
21c      April 30 2024                N/A

Release Schedule of Current Database Releases (Doc ID 742060.1)

Here are the current support dates for Oracle Linux.

Release  GA Date   Premier Support Ends   Extended Support Ends
OL7      Jul 2014  Jul 2024               Jun 2026
OL8      Jul 2019  Jul 2029               Jul 2031
OL9      Jun 2022  Jun 2032               Jun 2034

Lifetime Support Policy: Coverage for Oracle Open Source Service Offerings

What database versions are we currently running?

To upgrade directly to 23c we must be running Oracle 19c or 21c.

All our databases are on 19c, so this puts us in a good position. It took a lot of pain and effort to get us to 19c, but it was worth it!
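If you want to double-check where you stand, the version is easy to confirm from SQL*Plus. A minimal example; the output shown is purely illustrative:

  SQL> select banner_full from v$version;

  BANNER_FULL
  --------------------------------------------------------------------------
  Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
  Version 19.19.0.0.0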

PDB or non-CDB architecture?

The non-CDB architecture was deprecated in 12.1.0.2, but it has remained supported up to, and including, 19c. So Oracle 23c will be the first long term release where the non-CDB architecture is not an option. If you’ve not got up to speed on pluggable databases, you better get started soon! (Multitenant Articles)

With one exception, we have PDBs across the board, so there is nothing new for us here. It sometimes felt like I was swimming against the tide by pushing PDBs so hard over the years, but it all seems worth it now.
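If you’re not sure which architecture a given database is using, it’s a quick check from SQL*Plus. The database name below is made up; CDB = YES means it’s a container database:

  SQL> select name, cdb from v$database;

  NAME      CDB
  --------- ---
  CDB1      YES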

What OS are you running on?

I’m going to conveniently forget that anything other than RHEL/OL exists, because other operating systems don’t exist for me in the context of running Oracle databases.

It took us a long time to migrate from OL6 to OL7. The majority of our Oracle databases are currently still running on OL7, which is fast approaching end of life. Since Oracle 23c will not be supported on OL7, we are going to need to migrate to a newer operating system. I wrote about my scepticism around in-place RHEL/OL upgrades (here), so that leaves us two choices.

  • Move our existing databases to a new OS now, then upgrade to 23c later.
  • Wait for the 23c upgrade, and do a big-bang OS migration and database upgrade.

What’s stopping us from doing the first option now? Nothing really. We could migrate our 19c databases to OL8 servers. It would be nicer to migrate them to OL9, but it is not supported for 19c yet. I recently wrote a rant about product certifications on Oracle Linux 9, which resulted in this response from Oracle.

“Oracle Database product management has confirmed that when Oracle Database 23c ships, it will be certified for both OL8 and OL9. Also, Oracle Database 19c will be certified on OL9 before end of 2023.”

That’s really good news, as it gives us more options when we move forward.

Will Oracle Linux exist in the future? Yes!

Just as I thought I had got my head around the sequence of events, RHEL dropped the bombshell about how they would distribute their source in future (see here). This raised concerns about whether RHEL clones such as Oracle Linux, Rocky Linux and AlmaLinux could even exist in the future.

Without knowing the future of our main operating system, we were questioning what to deploy on new servers. Do we continue with OL or switch to RHEL? Rocky and Alma released some “don’t panic” messages, but Oracle were very quiet. I wasn’t surprised by that, because Oracle don’t say anything until it has passed through legal, but as a customer it was a very nervy time.

A couple of days ago we got a statement from Oracle (here), with a firm commitment to the future of Oracle Linux. I immediately spoke to our system admins and said OL8/OL9 is back on the table. Phew!

I have my own opinions on the RHEL vs clones situation, but as a customer it’s not about politics. I just need to know the OS we are using is still going to exist, and will be supported for our databases. In that respect the statement from Oracle was very welcome!

Do you need a hardware refresh?

If you are running on physical kit, you need to check your maintenance agreements. You may need a hardware refresh to keep everything up to date.

We run everything on virtual machines (VMs), so the hardware changes to the clusters have no impact on our VMs. We’ve had at least one hardware refresh during the lifespan of some of our database VMs.

Thanks to Ludovico Caldara for mentioning this point.

What versions do vendors support?

We use a lot of third party applications, and some of the vendors are really slow to certify their applications on new versions of the database and operating systems.

Ultimately we will make a choice on destination versions and timings based on application vendor support.

Manual or AutoUpgrade?

In Oracle 23c manual upgrades are deprecated (but still supported). I was late to the party with AutoUpgrade, but now I’ve used it I will never do manual upgrades again. We will definitely be using AutoUpgrade for our 23c upgrades!

If you are new to AutoUpgrade I have some examples of using it when I was doing 21c upgrades (see here). That should help you get started.
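For a flavour of what’s involved, here is a minimal sketch of an AutoUpgrade config file and run. The paths and SID are placeholders for illustration, so adjust them to suit your environment and check the documentation for the definitive parameters.

  $ cat /u01/config/23c_upgrade.cfg
  global.autoupg_log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade
  upg1.source_home=/u01/app/oracle/product/19.0.0/dbhome_1
  upg1.target_home=/u01/app/oracle/product/23.0.0/dbhome_1
  upg1.sid=cdb1

  # Run the pre-upgrade checks first, then deploy when you are happy.
  $ java -jar autoupgrade.jar -config /u01/config/23c_upgrade.cfg -mode analyze
  $ java -jar autoupgrade.jar -config /u01/config/23c_upgrade.cfg -mode deploy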

What are you going to test?

Testing is always a big stumbling block for us. We are not very far down the path of automated testing, which means we need bodies to complete testing. The availability of testing resource is always an issue. There are times of the year when it is extremely unlikely people will be made available, so planning this resource is really important.

So what’s the plan?

It’s always a balancing act around support for the OS, database and application vendors. Ultimately each project will have to be dealt with on a case by case basis, as the allocation of testing resources and potential disruption to the business have to be factored in. Everything is open to change, but…

  • Our default stance is we will upgrade to Oracle 23c on OL9. We will build new OL9 servers and install 23c on them, then use AutoUpgrade to migrate and upgrade the databases. For some of our internal developments I feel this could happen relatively quickly (kiss of death).
  • Application vendor support is often a sticking point for us, and timing will have to factor in the OL7 end of life. If support for 19c on OL9 comes in time, we may migrate our 19c databases to OL9, while we wait for a vendor to support Oracle 23c. Alternatively we could pay for extended support for OL7, and do the OS and database in one go once the application vendor is happy.

I realise this has been a bit of a ramble, but I just wanted to write it down to get things straight in my own head. 🙂

Cheers

Tim…

PS. I have some technical posts on upgrading to 23c that will be released once the on-prem version of 23c goes GA.

Deprecated and Desupported Features in Oracle Database 23c

Every time there is a new database release on the horizon it’s worth looking at the deprecated and desupported features in that release, so you can start planning for the future. Here is the full list from the documentation.

Behavior Changes, Deprecated and Desupported Features for Oracle Database

I’m going to comment on a few things that stand out for me. You might find other things more interesting…

Deprecated

DBUA and Manual Upgrade Deprecation : About time! From 21c onward AutoUpgrade is the preferred upgrade approach. Signalling the deprecation of the other approaches is welcome in my opinion. If you’ve never used AutoUpgrade you can see some examples here.

Oracle Persistent Memory Deprecation : Intel killing Optane was the writing on the wall for Oracle Persistent Memory Database (PMEM) and Oracle Memory Speed (OMS) File System. This is not really a surprise.

Deprecation of the mkstore Command-Line Utility : Not a major thing, but I will probably need to revisit a handful of articles to do some small edits. As pointed out by Piotr Wrzosek, the mkstore utility is used for credentials when using a secure external password store. I’m guessing this will be baked into another utility like orapki going forward, but we will see. (see update 2)
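For context, this is the sort of thing mkstore is used for today. A minimal sketch, with a made-up wallet location and TNS alias:

  # Create a wallet, then add and list a credential for a TNS alias.
  $ mkstore -wrl /u01/app/oracle/wallet -create
  $ mkstore -wrl /u01/app/oracle/wallet -createCredential my_alias scott
  $ mkstore -wrl /u01/app/oracle/wallet -listCredential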

DBMS_RESULT_CACHE Function Name Deprecations : I love this move. References to “black lists” are changed to “block lists”. I personally try to use “allow list” and “block list” instead of “white list” and “black list” in conversation. Regardless of any other motivation, I think they are more descriptive.

Desupported

Non-CDB Architecture : This was deprecated in 12.1.0.2 and desupported in 21c. I’m listing it here because 23c is the first long term release where this is desupported. Most people won’t have progressed past 19c, and may have resisted the multitenant architecture. You can’t resist any longer. I’ve written loads about pluggable databases here. Please get up to speed with it.

Original Export Utility (EXP) Desupported : For some reason this feels like a “WOW” moment, but in reality I can’t remember the last time I used imp/exp. The IMP utility is clearly still supported to allow direct upgrades from older releases. Can you believe it’s about 18 years since Data Pump was introduced? 🙂

Oracle Enterprise Manager Database Express (EM Express) Desupported : This feature never really hit home with me, so I’m not sorry to see it gone. For most stuff you can just use SQL Developer which replicates the functionality. Of course, if you have Cloud Control, you wouldn’t be using the express feature anyway.

Transport Layer Security versions 1.0 and 1.1 Desupported : Great! If you do still need to make database callouts to old services that don’t support TLSv1.2 or above, just put a load balancer or reverse proxy in front of them and you are sorted. We usually do that anyway to ease certificate management for database callouts. See here.

Traditional Auditing Desupported : Cool. I prefer unified audit policies anyway, and it’s been the preferred method since 12.1, so it’s hardly a surprise. I mentioned this here.
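If you’ve still got traditional auditing in place, the unified equivalent is simple enough. A minimal sketch, with a made-up policy name:

  -- Create a unified audit policy and enable it for failed logons.
  CREATE AUDIT POLICY logon_failures_pol ACTIONS LOGON;
  AUDIT POLICY logon_failures_pol WHENEVER NOT SUCCESSFUL;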

Desupport of 32-Bit Oracle Database Clients : I can’t remember the last time I used a 32-bit client or server, so this doesn’t faze me.

Remember

Deprecated is not desupported. You can continue to use deprecated features, but you should be looking to move away from them before they are desupported in a future version.

The desupported stuff shouldn’t come as a big surprise as most things have been deprecated for some time. In some cases over many releases.

Make sure you check the full list for yourself, as there might be something important you need to think about.

Cheers

Tim…

Update 1: As mentioned in Mike Dietrich’s blog post (here) the public docs are currently for Oracle Database 23c Free, so the final on-prem release may include some changes. Keep your eyes open. πŸ™‚

Update 2: Martin Bach confirmed my assumption that the credentials functionality would be included in a later version of orapki, as mentioned in this post.

Oracle Database 23c Free Developer-Release : Getting Started…

You may have noticed the flurry of posts about the new Oracle Database 23c Free Developer-Release.

In summary Oracle 23c Free is the replacement for what would have been Oracle 23c XE, but it is a developer release, so it’s not the final form of Oracle 23c Free. We should get an updated version of 23c Free once the main Oracle Database 23c release becomes GA.

Where do I get it?

If you want to install it from the RPM you can download it from here.

There are also a VirtualBox appliance and a Docker image available from Oracle, so you don’t actually have to install it if you don’t want to.

How do I install it?

You have a few options here.

  • Install documentation here.
  • My installation article here.
  • My Vagrant build here.
  • VirtualBox appliance from Oracle here.
  • Docker image from Oracle here.

What about your articles?

I have a 23c page on the website ready to post Oracle 23c articles here.

I’ve written a bunch of articles against the 23c beta 1 release, but I’ve not published any of them yet because of the beta program NDA. I’m going to work through them against the 23c Free developer-release, and anything that I’m allowed to publish I will. Some of the articles will have to be held back until the GA release of 23c, as they are not covered by this release.

Basically, if it is documented in Oracle Database 23c Free, I can write about it. If not, I’m still under NDA, so I will release those articles later.

Documentation

The documentation is available here.

It is not the full 23c documentation set, as this is not the full release of the product.

You will notice it focusses on the application development side of things. There is no RAC and Data Guard stuff, so those subjects are off limits for now.

Bug Reports

There is a community forum for reporting bugs here.

What next?

Have fun! Remember, this is not the final 23c release, so this is meant as a way to get used to some of the new development features in 23c, while we wait for the full release.

Remember, if you see any problems, please shout out about them! You can report bugs here.

Cheers

Tim…

Update Oracle Database Time Zone Files (Poll Results Discussed)

In case you didn’t know, countries occasionally change their time zones, or alter the way they handle daylight saving time (DST). To let the database know about these changes we have to apply a new database time zone file. The updated files have been shipped with upgrades and patches since 11gR2, but applying them to the database has always been a manual operation.
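You can check which time zone file version a database is currently using with a simple query. The output shown is just an example:

  SQL> select * from v$timezone_file;

  FILENAME                VERSION     CON_ID
  -------------------- ---------- ----------
  timezlrg_32.dat              32          0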

With the recent switch to daylight saving time in the UK, I decided to post this question on Twitter yesterday.

How often do you update your Oracle database time zone files?

Less than 6% of people update their time zone files on a regular schedule. Nearly 45% only do the updates after a database upgrade, and nearly 50% never do it at all.

I can’t say I’m surprised by the results. In terms of the reasoning for these responses, I’ll reference some of the comments on Twitter.

Regular Schedule

“Every ru patch, also thanks to 19.18 it is included now and with out of place upgrade and autoupgrade, i dont do it anymore 🙂 all automatic.”

Mustafa KALAYCI

If you are using AutoUpgrade to patch to a new Oracle Home, then applying updated time zone files is really easy. Before 19.18 it’s just a single entry “timezone_upg=yes” in the AutoUpgrade config file. From 19.18 onward the update of the time zone file is the default action (see here).
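So for older release updates it’s a one-line addition to the AutoUpgrade config file, and from 19.18 onward you only need an entry if you want to opt out:

  # Pre-19.18: explicitly request the time zone file update.
  upg1.timezone_upg=yes
  # 19.18 onward: updating is the default. Opt out with timezone_upg=no.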

So interestingly, there may be some people who don’t know they are applying an update of their time zone file, who actually are now…

After Upgrades

This feels like the natural time to do it for me, and it seems many other people feel the same.

As mentioned previously, AutoUpgrade makes it simple. From 21c onward AutoUpgrade is the main upgrade approach, even for those that have resisted using it for previous versions, so this question goes away from an upgrade perspective.

We can specifically tell it not to perform the action using “timezone_upg=no”, but I’m guessing most people will just go with the default action.

Never

“NEVER. As an American-only company with very little need for time-specific data, quite unnecessary. Horrible design with no rollbacks and headaches w/data pump. Just not worth it if possible to avoid”

Taylor

I totally understand this response. Many of us work with systems that are limited to our own country. Assuming our country doesn’t alter its own daylight saving time rules, using an old time zone file is unlikely to cause an issue.

When you consider the number of people that run *very old* versions of Oracle, you can see that using old versions of the time zone file doesn’t present a major issue in these circumstances.

With reference to the data pump issue, I’ve experienced this, and it was also picked up in the comments.

“My hypothesis: Most do it when datapump tells they need to do it to get the import file they just received to load”

Connor McDonald

Offline/Online Operation

The point about this being an offline operation was raised.

“Well it is an offline operation, so pretty exceptional thing to do. Only in a rare case where some feature requires the upgrade – like DataPump failing or query over dblink failing.”

Ilmar Kerm

Downtime is never welcome, but it was also pointed out it can be an online operation in 21c.

“Offline will be a thing of the past…

https://docs.oracle.com/en/database/oracle/oracle-database/21/refrn/TIMEZONE_VERSION_UPGRADE_ONLINE.html”

Connor McDonald

Conclusion

It seems like the time zone file version is not high on the list of priorities for most people, providing it is not causing a data pump issue. I totally understand this, and I myself only consider it during database upgrades.

I always like reading these poll results. I know the sample size is small, but it gives you a good idea of how your beliefs compare to those of the wider audience.

If you are interested to know how to manually upgrade your time zone file, you can read about it here.

Cheers

Tim…

Oracle Database Patching (Poll Results Discussed)

Having recently put out a post about database patching, I was interested to know what people out in the world were doing, so I went to Twitter to ask.

As always, the sample size is small and my followers have an Oracle bias, so you can decide how representative you think these numbers are…

Patching Frequency

Here was the first question.

How often do you patch your production Oracle GI/DB installations? (Pick the nearest that applies)

There was a fairly even spread of answers, with about a third of people doing quarterly patching, and a quarter doing six-monthly patching. I feel like both these options are reasonable. About 20% were doing yearly patching, which is starting to sound a little risky to me. The real downer was that over 22% of people never patch their databases. This is interesting when you consider the recent announcement about monthly recommended patches (MRPs).

For those people that never patch, I can think of a few reasons off the top of my head why that might be.

  • Lack of testing resource. I think patch frequency has more to do with testing than any other factor. If you have a lot of databases, the testing resource to get through a patching cycle can be quite considerable. This is why you have to invest some time and money into automated testing.
  • If it ain’t broke, don’t fix it. The problem is, it is broken! How long after your system has been compromised will it be before you notice? How are your customers going to feel when you have a data breach and they find out you haven’t even taken basic steps to protect them? I don’t envy you explaining this…
  • Fear of downtime. I know downtime is a real issue to some companies, but there are several ways to mitigate this, and you have to balance the pros and the cons. I think if most people are honest, they can afford the downtime to patch their systems. They are just using this as an excuse.
  • Patching is risky. I understand that patches can introduce new issues, but that is why there are multiple ways to patch, with some being more conservative from a risk perspective. I think this is just another excuse.
  • Out of support database versions. I think this is a big factor. A lot of people run really old versions of the database that are no longer in support, and are no longer receiving patches. I don’t even think I need to explain why this is a terrible idea. Once again, how are you going to explain this to your customers?
  • Lack of skills. We like to think that every system is looked after by a qualified DBA, but the reality is that is just not true. I get a lot of questions from people who are SQL Server and MySQL DBAs that have been given some Oracle databases to look after, and they freely admit to not having the skills to look after them. Even amongst Oracle DBAs there is a massive variation in skills. Oracle patching has improved over the years, but it is still painful compared to other database engines. Just saying.

Type of Patching

This was the second question.

When patching your production Oracle GI/DB installations, which method do you use?
In-Place = Current ORACLE_HOME
Out-Of-Place = New ORACLE_HOME

This was a fairly even split, with In-Place winning by a small margin. Oracle recommend Out-Of-Place patching, but I think both options are fine if you understand the implications. I discussed these in my previous post.

Conclusion

I think of patch frequency in a similar way to upgrade frequency. If you do it very rarely, it’s really scary, and because nobody remembers what they did last time, there are a bunch of problems that occur, which makes everyone nervous about the next patch/upgrade. There are two ways to respond to this. The first is to delay patching and upgrades as long as possible, which will result in the next big disaster project. The second is to increase your patch/upgrade frequency, so everyone becomes well versed in what they have to do, and it becomes a well oiled machine. You get good at what you do frequently. As you might expect, I prefer the second option. I’ve fought long and hard to get my company into a quarterly patching schedule, and it will only decrease in frequency over my dead body!

Assuming the results of these polls are representative of the wider community, I feel like Oracle need to sit up and take notice. Patching is better than it was, but “less bad” is not the same as “good”. It is still too complicated, and too prone to introducing new issues IMHO!

Cheers

Tim…

Database Patching : It’s a difficult subject

If you came here hoping I was going to say there are valid reasons not to patch, you are out of luck. There is never a valid reason not to patch…

Instead this post is more about the general approach to patching. I’ve spent 22+ years writing about Oracle, including how to install it, but I’ve written practically nothing about how to patch a database. My stock answer is “read the patch notes”, and to be honest that is probably the best thing anyone can do. Although patching is a lot more standardized these days, it’s still worth reading the patch notes in case something unexpected happens. In this post I just want to talk about a few top-level things…

Patching to a new ORACLE_HOME

There are two big reasons for patching to a new ORACLE_HOME, or out-of-place patching.

  1. You can apply the binary patches to the new home while the database is still running in the old home, so you reduce the total amount of downtime.
  2. You have a natural fallback in the event of wanting to revert the patch. You don’t have to wait for the patch rollback to complete.

There are some downsides though.

  1. It requires extra space to hold both the unpatched and patched homes, until you reach a point where you are happy to remove the unpatched home.
  2. If you have any scripts that reference the ORACLE_HOME, they will need to be updated. Hopefully you’ve centralized this into a single environment setup script.
  3. I guess it’s a little more complicated, and the patch notes are not that helpful.
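As a rough sketch, out-of-place patching of a single instance database home might look like this. The paths and patch number are made up for illustration, and I’ve omitted unzipping the software and the database shutdown/startup for brevity:

  # Apply the RU while installing the software into a new home.
  # The database keeps running in the old home during this step.
  $ cd /u01/app/oracle/product/19.0.0/dbhome_2
  $ ./runInstaller -applyRU /u01/patches/35042068

  # During the downtime window: stop the database, switch to the new
  # home, restart, then complete the patch with datapatch.
  $ export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_2
  $ $ORACLE_HOME/OPatch/datapatch -verbose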

So should you follow the recommendation of patching to a new home or not? The answer as always is “it depends”.

The reduction in downtime for a single instance database is good, but if you are running RAC or Data Guard, this isn’t really an issue as the database remains online for most of the patching anyway. Having a quick fallback is great, but once again if you are running RAC or Data Guard this isn’t a big deal.

If you are running without RAC or Data Guard, you have made a decision that you can tolerate a certain level of downtime, so is taking the system down for an hour every quarter that big a deal? I’ve heard of folks who use RAC and/or Data Guard who still bring the whole system offline to patch, so the decision is probably going to be very different for people, depending on their environment and the constraints they are working with.

I hope you’re taking OS and database backups before patching. If something catastrophic happens, such that a rollback of the patch is not possible, you can recover your original home and database from the backups. Clearly this could take a long time, depending on how your backups are done, but the risk of loss is low. So the question is, can you tolerate the additional downtime?

You have to make a decision on the pros and cons of each approach for you, and of course deal with the consequences. If in doubt, go with the recommendation and patch to a new home.

Read-only Oracle homes

Read-only Oracle homes were introduced in 18c (here) as an option, and are the default from Oracle 21c onward. One of the benefits of read-only Oracle homes is they make switching homes so much easier. You haven’t got to worry about copying configuration files between homes, as they are already located outside the home.
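Enabling a read-only Oracle home is a one-off action, done after the software installation but before any databases are created or configured. A minimal sketch:

  # Enable the read-only home (run before configuring any databases).
  $ $ORACLE_HOME/bin/roohctl -enable

  # Config files now live under the Oracle base home, not ORACLE_HOME.
  $ $ORACLE_HOME/bin/orabasehome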

Release Update (RU) or Release Update Revision (RUR)?

You have a choice between patching using a Release Update (RU), or a Release Update Revision (RUR). To put it simply, an RU contains not only the latest security patches and regression fixes, but may also include additional functionality, so the risk of introducing a new bug is higher. An RUR is just the security patches and regression fixes. Unlike the Critical Patch Updates (CPUs) of the past, which ran on endlessly, RURs are tied to specific RUs, so you will end up applying the RUs anyway, just at a later date, when hopefully the bugs have been sorted by the RUR…

The folks at Oracle suggest applying the RUs, which is what I (currently) do. Some in the Oracle community suggest applying RURs is the safer strategy. If you look at the “Known Issues” for each RU, and the list of recommended one-off patches that should be applied after the RU, you can see why some people are nervous of going directly to RUs.

Once again, this comes down to you and your experience of patching with the feature set you use. If you are finding RUs are too problematic, go with the RUR approach. You can always change your mind at any time…

Monthly Recommended Patches (MRPs)

There’s a new kid on the block starting with 19.17 on Linux: Monthly Recommended Patches (MRPs). They replace RURs. There are 6 MRPs per RU, with each MRP containing the RU and the current batch of recommended one-off patches, as documented in MOS Note 555.1.

I’m assuming these are rolling and standby-first patches, but I can’t confirm that yet.

RAC Patching : Rolling Patches

Rolling patches can be applied one node at a time, so there are always database instances running, which means the database remains available for the whole of the patching process.

Release Updates (RUs) and Release Update Revisions (RURs) are always rolling patches, so it makes sense to take advantage of this approach. If you are applying one-off patches, these may not be rolling patches, so always check the patch notes to make sure.
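For GI/DB homes, opatchauto handles the rolling aspect on a per-node basis. A minimal sketch, with a made-up patch location, run as root on one node at a time:

  # Node 1 first. Instances on the other nodes keep running.
  # Repeat on each remaining node once this one is back up.
  $ sudo /u01/app/19.0.0/grid/OPatch/opatchauto apply /u01/patches/35037840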

Even when rolling patches are available, you can still make the decision to take the whole system offline to apply the patches. I’m not sure why you would want to do this, but the option is there for you.

Data Guard : Standby-First Patches

Release Updates (RUs) and Release Update Revisions (RURs) are always standby-first patches. This gives you some flexibility on how you approach patching your system. Here are two scenarios with a two node Data Guard setup, where node 1 is the primary and node 2 is the standby.

Scenario 1 : Switchovers

  • Patch the node 2 binaries (not datapatch) and bring the standby back into recovery mode.
  • Switchover roles, making node 2 the primary and node 1 the standby.
  • Patch the node 1 binaries (not datapatch) and bring the standby back into recovery mode.
  • Run datapatch against node 2 (the primary database).
  • Optionally switchover roles making node 1 the primary database again.

Scenario 2 : No switchovers

  • Patch the node 2 binaries (not datapatch), but don’t start the standby.
  • Patch the node 1 binaries (not datapatch) and start the database.
  • Start the standby on node 2.
  • Run datapatch on node 1 (the primary).

Scenario 1 reduces downtime, as the primary is always running while the standby is having the binaries patched. Scenario 2 is simpler, but has a more extensive downtime as the primary is out of action while the binaries are being patched.
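For scenario 1, the moving parts might look something like this, assuming a Data Guard Broker configuration with databases named prim and stby (both names are made up):

  # Patch the stby binaries, restart it in recovery mode, then
  # switch over from dgmgrl:
  DGMGRL> SWITCHOVER TO stby;

  # Patch the prim binaries, bring it back as the standby, then run
  # datapatch from the new home against the current primary (stby):
  $ $ORACLE_HOME/OPatch/datapatch -verbose

  # Optionally switch back to the original roles:
  DGMGRL> SWITCHOVER TO prim;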

Remember, one-off patches may not be standby-first patches, so you may only have the option of scenario 2 when applying them. You have to read the patch notes.

OJVM Patching : Which approach?

Oracle 21c has simplified the OJVM patching situation. In previous releases the OJVM patches were completely separate. The grid infrastructure (GI) and database patches for 21c include the OJVM patches. For 19c the OJVM patches are still separate.

The separate 19c OJVM patches come with additional restrictions. They are not standby-first patches, and according to the patch notes, they can only be applied as RAC rolling patches if you use out-of-place patching.

Why don’t you write about patching much?

Writing about patching is difficult, because everyone has a unique environment, and their own constraints placed on them by their business. I’ve always avoided writing too much about patching because I know it’s opening myself up for criticism. Whatever you say, someone will always disagree because of their unique situation, or demand yet another patching scenario because of their unique environment. You’re damned if you do, and damned if you don’t.

I’ve recently written a few patching articles for specific scenarios (here). I may add some more, but it’s not going to be a complete list, and don’t expect me to write articles about stuff I don’t use, like Exadata. These are purely meant as inspiration for new people. Ultimately, you need to read the patch notes and decide what is best for you!

Let the cloud do it!

If all this is too much hassle, you do have the option of moving your database to the cloud and letting them worry about patching it. 🙂

Conclusion

Read the patch notes!

Cheers

Tim…

Operating Systems for Oracle Databases (Poll Results Discussed)

I put out some questions on Twitter a couple of days ago, asking about the operating systems people were using for their Oracle database servers.

As with all these polls, we have to discuss some caveats. Most of the people that follow me are from the Oracle community, so that puts a heavy bias on the outcome. The questions relate to Oracle databases, which also influences the results. Someone may choose one distribution to run Oracle workloads, and a different distribution to run non-Oracle workloads. We also have to remember the sample size is small. Despite this, I’m going to discuss the results as if this were a representative sample of people, even though I accept it may not be. 🙂

This was the first question I asked.

Which operating system are you using for your Oracle Databases servers?

You’ll notice I totally forgot to include Windows, which was a shame because it would have been nice to see that. My main focus was to see how many people were still holding on to the traditional UNIX systems. There was a really strong showing for Linux over UNIX, which was hardly surprising. Every year the dominance of Linux is increasing. A few years back a lot of big companies were still using the traditional UNIX systems, but I guess a lot of people have got sick of spending that sort of cash, and some have probably switched to buying Exadata kit instead. I can’t say I’m surprised by this result.

Something I’ve said repeatedly over the years is you should stick to the operating system that is the most popular, as that is the one that is going to get tested the most. There is no point in purposely making yourself a minority IMHO. Having lived through the death of Oracle on Tru64 and HP-UX, I wouldn’t dream of using anything other than Linux now.

This was the next question.

For Linux users, which Linux distro are you using for your Oracle database servers?

Over 65% of the folks picked Oracle Linux, and about 27% picked RHEL. The fact this is a poll about Oracle database servers no doubt added to the skew in this result. Oracle have done a good job of promoting Oracle Linux, and the fact it is free probably helps a lot. I thought Oracle Linux would be the winner here, but I’m not sure I expected it to be by this much. Personally I wouldn’t run on anything other than Oracle Linux by choice. Remember, this is what Exadata uses, and this is what Oracle Cloud uses.

I suspect some of the people that picked “Other” were speaking about non-production systems. Perhaps I should have made it clear I was thinking about production, not test labs…

This was the final question.

For Enterprise Linux users, which version of Oracle Linux and/or RHEL are you using for your Oracle database servers?

It’s good to see that nobody is owning up to OL5/RHEL5. There are still a few things lingering on OL6/RHEL6, but I guess those are probably running old versions of the database.

OL7/RHEL7 is still the most common version, but I guess a lot of this is down to the long lifespan of database servers. I suspect many of these servers were provisioned some time ago. I’m hoping most new deployments are using OL8/RHEL8.

So nothing really that surprising about the outcome of this batch of questions. Pity I didn’t include Windows in the first question. Maybe next time…

Cheers

Tim…

Are you running production databases on the cloud? Poll results discussed.

It can be quite difficult to know if your impression of technology usage is skewed. Your opinion is probably going to depend on a number of factors including what you read, who you follow, and the type of company you work for. For this reason I asked some questions on Twitter the other day, just to gauge the response.

Let me start by saying, this is a small sample size, and most of my followers come from the Oracle community, including a number of Oracle staff. This may skew the results compared to other database engines, and technology stacks. I’m commenting on the results as if this were a representative sample, but you can decide if you think it is…

So this was the first question I asked.

Is your company running production relational databases in the cloud?

We can see there was a fairly even spread of answers.

  • All prod DBs in cloud: Nearly 19% of people picking this option kind of surprised me. I speak to a lot of people, and there always seems to be something they’ve got that doesn’t fit well in the cloud for them. Having this many people saying they’ve managed to make everything fit is interesting.
  • Some prod DBs in cloud: I expected this response to be high and with over 27% it was. When we add this to the previous category, we can see that over 46% of companies have got some or all of their production relational databases in the cloud. That’s a lot.
  • Not yet, but planned: At over 24%, when added to the previous categories, it would seem that over 70% of companies see some perceived value in running their databases in the cloud. Making that initial step can be difficult. I would suggest people try with a greenfield project, so they can test the water.
  • Over my dead body: At 29%, this is a lot of people that have no intention of moving their databases to the cloud at this moment in time. We might get some answers about why from the next question.

This was my second question.

What’s stopping you from moving your databases to the cloud?

Once again, we get a fairly even spread of responses.

  • Legal/Compliance: Over 17% of respondents have hit this brick wall. Depending on your industry and your country, cloud may not be an option for you yet. Cloud providers are constantly opening up data centres around the world, but there are still countries and regions that are not well represented. Added to that, some organisations can’t use public cloud. Most cloud providers have special regions for government or defence systems, but they tend to be focused in certain geographical regions. This is a show stopper, until the appropriate services become available, or some hybrid solution becomes acceptable.
  • Company Culture: At over 30%, this is a road block to lots of things. Any sort of technology disruption involves a change in company culture, and that’s one of the hardest things to achieve. It’s very hard to push this message from the bottom up. Ultimately it needs senior management who understand the need for change and *really* want to make that change. I say *really* because I get the feeling most management like to talk the talk, but very few can walk the walk.
  • Cloud is Expensive: At nearly 29%, this is an interesting one. The answer to the question, “is cloud more expensive?”, is yes and no. 🙂 If you are only looking at headline figures for services, then it can seem quite expensive, but the cloud gives us a number of ways to reduce costs. Reserved instances reduce the cost of compute power. Selecting the correct shape and tier of the service can change costs a lot. Spinning down non-production services when they are not used, and down-scaling production services during off-peak hours can save a lot of money, and these are not things that necessarily result in a saving on-prem. I also get the impression many companies don’t work out their total cost of ownership (TCO) properly. They forget that their on-prem kit requires space, power, lighting, cooling, networking, staffing etc. When they check the price of a service on the cloud, it includes all that, but if you don’t take that into consideration, you are not making a fair comparison. Some things will definitely be cheaper on the cloud. Some things, not so much. 🙂
  • Cloud Sucks: At nearly 23%, this is a big chunk of people. It’s hard to know if they have valid reasons for this sentiment or not. Let’s take it on face value and assume they do. If this were a reflection of the whole industry, it’s going to be interesting to see how these people will be won over by the cloud providers.

The comments resulted in a few interesting things. I’ve responded to some of them here.

  • “Lack of cloud skills.” We all have to start somewhere. I would suggest starting with small proof of concept (POC) projects to test the water.
  • “Unreasonable Oracle licencing restrictions.” In case you don’t know, the core factor doesn’t apply to clouds other than Oracle Cloud, which makes Oracle licensing more expensive on non-Oracle clouds. Of course, everything can be negotiated.
  • “Lack of availability of Cloud experts to assist/advise.” I’m sure there are lots of people that claim they would be able to help, but how many with a proven track record is questionable. 🙂
  • “We have a massive legacy estate to consider.” Certainly, not everything is easy to move to the cloud, and the bigger your estate, the more daunting it is. I’m sure most cloud providers would love to help. 🙂
  • “Latency with fat client applications.” I had this conversation myself when discussing moving some of our SQL Server databases to Azure. It can be a problem!
  • “Seasonal businesses with uncertain money flow may not able to meet the deadlines for subscription payments.” Scaling services correctly could help with this. Scale down services during low periods, and scale up during high periods.
  • “The prime fear is being pulled off from the grid. Undependable internet connections.” Sure. Not every place has dependable networking.
  • “Bandwidth requirements & limited customization possibilities.” Ingress and egress costs vary with cloud providers. It may be that intelligent design of your processes can reduce the amount of data being pushed outside the cloud provider. The cloud is very customisable, so I’m not sure what the issue is here, but I’m sure there are some things that will be problematic for some people.

Overall I think this was an interesting exercise. Even five years ago I would have expected the responses to skew more in favour of on-prem. Barring some huge change in mindset, I would expect the answers to be even more in favour of cloud in another 5 years.

Regardless of your stance, it seems clear that familiarity with cloud services should be on your radar, if it’s not already. Your current company may not be fans of the cloud, but if you change jobs the cloud may be a high priority for your new company.

Cheers

Tim…

PS. I’ve been running my website on AWS since 2016. I started to write about some services on AWS and Azure in 2015. I’ve been playing with Oracle Cloud since its inception in 2016 (I think). Despite all this, I consider myself a dabbler, rather than an expert.

Video : Gradual Database Password Rollover Time

In today’s video we demonstrate gradual database password rollover time, introduced in Oracle database 21c, and backported to 19c.

The video is based on this article.

This is a small, but really useful quality of life feature!
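For a flavour of how it works, the rollover window is set via a profile limit, after which a password change keeps both the old and new passwords valid for the specified period. A minimal sketch, with made-up profile and user names:

  -- Allow old and new passwords to work together for 1 day.
  ALTER PROFILE app_profile LIMIT PASSWORD_ROLLOVER_TIME 1;

  -- During the rollover period both passwords authenticate,
  -- giving you time to update application config.
  ALTER USER app_user IDENTIFIED BY MyNewPassword1;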

The stars of today’s video are the offspring of Jeff Smith. It was cold, dark and miserable when I recorded the video, and this is one of my favourite clips, so I included it to bring a touch of summer! 🙂

Cheers

Tim…

Video : DBMS_CLOUD : External Tables

In today’s video we’ll demonstrate the functionality of the DBMS_CLOUD package, with specific reference to external tables based on objects in a cloud object store.

The video is based on a section in this article.

You may find these useful.
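As a taste of what the video covers, creating an external table over a file in an object store bucket looks something like this. The credential, URI and columns are made up for illustration:

  BEGIN
    DBMS_CLOUD.create_external_table(
      table_name      => 'emp_ext',
      credential_name => 'obj_store_cred',
      file_uri_list   => 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/myns/b/mybucket/o/emp.csv',
      format          => json_object('type' value 'csv', 'skipheaders' value '1'),
      column_list     => 'empno NUMBER, ename VARCHAR2(20)');
  END;
  /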

The star of today’s video is Alex Zaballa, who has got to be one of the most certified people I’ve ever met. 🙂

Cheers

Tim…