UNIX/Linux Time Command : Record elapsed time

In a recent post I mentioned using a scratchpad to record everything I do. As part of that process I try to make regular use of the UNIX/Linux “time” command to record elapsed times of long-running commands.

It’s really simple to use. All you do is put “time” in front of a command, and when the command completes it displays how long it took. In this example I sleep for 10 seconds and use the time command to report the elapsed time. 🙂

$ time sleep 10

real    0m10.002s
user    0m0.001s
sys     0m0.001s
$

Clearly that’s a silly example, but it gives you an idea of how this works. The “real” line is the elapsed wall clock time, while “user” and “sys” show the CPU time consumed in user space and in the kernel respectively.

If you get into the habit of using this with all long running processes, you can get accurate timings for steps. That way, when someone asks you how long something takes you can give them a real answer, rather than making something up and hoping for the best.
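The timing output is written to standard error, so you can capture it to a file and paste it straight into your scratchpad. A minimal sketch, where the script name is just a placeholder:

$ { time /u01/scripts/long_task.sh ; } 2>> ~/scratchpad.txt

Just be aware this also redirects any error output from the command itself, so you may want to keep the two separate for noisy commands.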

Just remember, timings can vary if the load on the system varies between runs. Even so, it’s always nicer to have some real data to inform your decisions going forward, especially when planning for something that will cause disruption in production. 🙂

Cheers

Tim…



Using a scratchpad…

Followers of the blog know I’m a big advocate for writing things down. The main reason I do this is because I want a record of everything I do.

I rarely type a command directly into the command line. I nearly always type it in a scratchpad first. Currently I have 67,250 lines in my work scratchpad and 12,309 lines in my personal scratchpad.

When I say scratchpad, I just mean a text file, which I edit using a text editor. Nothing fancy.

Why do I do this?

Inspiration

Most of my articles and blog posts start life as notes in my personal scratchpad. At work some of my scratchpad notes become more formal documentation, like knowledge base notes and how-to files in Git etc.

I know if I don’t make the notes as I go along, I will forget what I did, and struggle to write the documentation later.

If something makes it as far as being written up, it gets removed from my scratchpads, so what’s in there at the moment are notes that have not made the cut, so to speak. 🙂

One of the reasons I’ve been able to produce content for so many years is there is a constant stream of stuff added to my scratchpads. Of course, some of it is junk, but some of it is not.

If you are struggling with documentation or inspiration, I think taking this approach will really help.

Reflection

One of the things I find really useful about taking notes is it allows me to look back and reflect on what I did to complete something. For example, I might search through my scratchpad to see what happened over the lifetime of a server. I can see all the tickets that were raised, and what firewall rules and configuration changes were required. When I get a similar request, this allows me to estimate the amount of work that needs to be done, and I can see which teams will be involved in the process.

I could search through our ticketing system for much of this information, but I find it a lot easier to keep a record of my actions in a scratchpad, then drill into the tickets if I need more info, which I rarely do.

Rewrites

Much like my articles, if I read back through some notes and they aren’t 100% clear, I often rewrite them. Maybe adding some more text, or a clearer example. This process may result in something graduating into being a separate document, but sometimes it just stays in the scratchpad forever.

Give it a go

If you don’t already do this, give it a go and see how you feel about it. Especially you content creators.

Cheers

Tim…

URGENT : Why you should {almost} never put URGENT in your message

Just a little note about something that rubs me up the wrong way.

I quite often get messages with the word URGENT in the subject or text. I scan through the content, and if it doesn’t seem truly urgent to me, I put it at the bottom of my list of things to do. Why?

You are not the central character in my life

When someone is communicating with me, they are thinking it’s a 1-to-1 interaction. What they forget is that I am working on many different things. As a result, for me it is a 1-to-many relationship.

Just because something is urgent to you, it doesn’t mean it takes priority over the other work I am doing. You don’t know what I’m doing, so you can’t possibly know how your issue sits in my list of priorities. Assuming your needs are more important than the needs of others is really rude.

This is even more annoying when it comes from someone outside of work. If you are not paying me, you have no business sending me an “urgent” request.

Your bad planning is not my emergency

In many cases these “urgent” issues could have been solved well in advance. It’s bad planning that has caused this issue, so I don’t see why it should have a negative impact on my life.

Sometimes there are genuine reasons for something to be classed as an emergency, like P1 incidents, but that’s not what I’m talking about.

There are some people that bounce from one emergency to the next. It soon becomes obvious that these people are just really bad at planning, and as a result are constantly in the weeds, and asking you to help drag them out.

Personal heroics don’t help the company long term

Occasionally you have to dig deep to get through a real emergency, but for the constant stream of self-inflicted emergencies, the only solution is to let things fail so people can see the root cause.

Personal heroics may feel good to you in the short term, but in the long term it is bad for your company and for you. The company needs to know what is failing and do something about it. Relying on a small number of people to pull them out of the weeds is not a long term strategy. Sooner or later this will stop working because the “heroes” will get annoyed and leave, or quiet quit.

What does urgent even mean to you?

I once got a message late on a Friday about an “urgent” issue. I felt sorry for the person in question, so I cancelled my plans, worked on the issue and sent them back the solution. I then got a reply saying, “Great, I’ll have a look at it on Monday”. Needless to say I lost my shit. That clearly was not an urgent issue.

I’m not alone

Over the years I’ve had this conversation many times, and I know I’m not the only person that gets annoyed by messages marked as urgent. I also know I’m not the only person that puts them at the bottom of the to-do list if they are not truly urgent.

Conclusion

As you can see, unnecessarily marking things as urgent is a bad idea, and likely to result in a longer resolution time, so next time you consider adding that little word into your message, just don’t!

Cheers

Tim…

PS. Rant over…

Answering some questions about Vagrant

Someone on YouTube asked me some general questions about my experience of Vagrant, so I thought I would write them down as a blog post.

Could you share the story of when and how you first encountered Vagrant, and how did you feel about it at the time?

I was quite late to the party. In 2017 I was at a VMware workshop in Cork, Ireland. I was sitting in the hotel and Frits Hoogland was showing me his Vagrant build for a test Oracle database. Like most things when they are unfamiliar, it seemed a little complex. He gave me access to his Vagrant repository, but I hardly looked at it. It was on my list of things to do, but there is always so much on my to-do list. When Frits talks you should listen, but unfortunately I failed that mission. 🙂

About a year later a colleague at work asked me what Vagrant was, and I struggled to give a reasonable answer. That evening I Googled it, and tried a couple of really simple builds. As someone with lots of VMs at home it totally blew my mind. From that point on I was hooked. I wrote Vagrant builds for all my test databases, so I could rebuild them whenever I wanted to. I went from never using it, to never shutting up about it overnight.

Now my PC is a lot less bloated. I don’t have to keep loads of VMs for different database versions, RAC and Data Guard etc. If I need something to do a test I just build it from scratch.

Another benefit is it makes live demos feel a lot less stressful. I remember being in my hotel in India, and a few minutes before I was due to start presenting I was having some issues with my demo VM. I just typed “vagrant destroy -f” followed by “vagrant up” and my demo system was rebuilt and I was good to go. Nightmare averted, and no need for loads of backups and snapshots.
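For anyone who hasn’t seen it, that rebuild is just two commands, run from the directory containing the Vagrantfile. The directory name here is made up:

$ cd ~/vagrant/demo_vm
$ vagrant destroy -f
$ vagrant up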

How long does it usually take for you to create the builds that are used to create the database?

The first couple of real builds took some time as there was a learning curve with Vagrant. Fortunately I have a lot of Oracle skills and some basic system administration skills, so it wasn’t too bad. It would have been a lot harder if I was trying to pick up several new skills at the same time.

All my Vagrant builds are based on articles, so I already know what to do. I’m basically tweaking the instructions from my articles to form the Vagrant builds.

The more Vagrant builds you do, the quicker you get at doing them, because you have a repository of previous builds to pull ideas from. I’m now at the point where most new builds are slight variations of a previous build, so they are really quick to write.

How are you using Vagrant at work? I assume that some companies will require us to install the database with its options on a bare metal server, and not with VMs.

I use Vagrant at home. All my writing is based on VMs I’m running at home. Those VMs are built using Vagrant.

Automation at work is a little tricky because there is a separation of duties between virtualization, system administration and DBA teams. Setting up complete automation is quite time-consuming and political. Terraform and Ansible are more commonly used at work, but we are still on that journey. We are less DevOps and more DevHopeful. 🙂

Our cloud automations all use Terraform. Terraform will feel familiar if you’ve used Vagrant, as both are produced by HashiCorp.

The tools you use for automation are not as important as the attitude. Once you get into automation you can switch between tools a lot more easily, because you understand the approach you need to take.

How does using Vagrant help you in serving your customers?

Using Vagrant at home makes it quick to set up new environments, which allows me to learn new stuff faster. I don’t like doing anything at work unless I’ve already tried it at home. I guess me knowing more helps me do my job better, which ultimately benefits the people depending on me at work, so I guess there is an indirect relationship. 🙂

How do you learn Vagrant?

If you want to know more about Vagrant you can start here.

I find the easiest way to learn about Vagrant is to build things. There are loads of builds on the internet to use for inspiration. You can find mine here.
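To give you a flavour of how little is needed to get started, here’s a minimal sketch of a Vagrantfile. The box name and provisioning command are just examples, not taken from any of my builds:

Vagrant.configure("2") do |config|
  # The base box to build the VM from.
  config.vm.box = "oraclelinux/8"

  # VirtualBox-specific resource settings.
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 2048
    vb.cpus   = 2
  end

  # Simple inline shell provisioning.
  config.vm.provision "shell", inline: "dnf install -y git"
end

Put that in a file called “Vagrantfile”, then “vagrant up” builds the VM, “vagrant ssh” connects to it, and “vagrant destroy -f” throws it all away.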

Cheers

Tim…

The case against GUIs (again)…

Recent events have made me think about this post again…

Software Vendors

I can’t explain how much I despise being forced to use a GUI to do something that could be scripted.

If you are a software vendor, please make sure you offer some form of scriptable API to interact with your product, and make sure it’s documented properly. I don’t care how much time and effort you put into your GUI, I don’t want to use it. I want everything in a script that can be checked into Git and automated.

If you are a software vendor that doesn’t provide a scriptable way to interact with your system, you are going to the bottom of my list. Even if I am forced to use your product now, I will switch at the first possible opportunity.

Staff

I’m sure this will ruffle a few feathers, but as I said in the linked article, when I see people using a GUI to perform certain maintenance operations my immediate reaction is that they are wasting time. It is very rare that a manual operation will be as fast and accurate as a scripted operation.

In the past we have hired “experts” to do work for us, and they’ve taken days working with GUIs to accomplish something that could have been scripted and run in much less time. If they are truly experts I would have expected them to have scripts for everything they do anyway.

I realise some consultants are running up chargeable hours by taking the long route, and some are not the experts they claim to be. It is noticed!

Why the rant?

The further down the rabbit hole I go with automation, the less I can stand doing manual operational work. I’m reaching the point where the mere sight of an unnecessary GUI gives me toxic shock…

GUIs have their place, but not for operational tasks IMHO!

Cheers

Tim…

Oracle VirtualBox 7.0.10, Vagrant 2.3.7 and Packer 1.9.2

Oracle VirtualBox 7.0.10

VirtualBox 7.0.10 was released a few days ago and I finally got round to trying it.

The downloads and changelog are in the usual places.

I’ve installed it on my Windows 10 and 11 machines with no drama. For the previous release I had some issues with Windows 10. I had to uninstall then reinstall it to get it to work. For this release a straight upgrade was fine on both Windows versions.

Vagrant 2.3.7

At the same time I noticed Vagrant 2.3.7 had been released. All my test systems are built with Vagrant, so I grabbed it before testing my builds with the new version of VirtualBox.

If you are new to Vagrant and want to learn, you might find this useful.

Once you understand that, I found the best way of learning more was to look at builds done by other people. You can see all my Vagrant builds here.

I’ve already updated the relevant builds to include the latest versions of OpenJDK, Tomcat, ORDS and SQLcl. I’ll go through and add the latest round of Oracle database patches where necessary over the next few days.

Packer 1.9.2

With each new release of VirtualBox I rebuild my Vagrant boxes (Oracle Linux 7, 8 and 9) so they have the latest guest additions. These Vagrant boxes are the base for all my Vagrant builds. The boxes are built using Packer. I had a quick check and noticed Packer 1.9.2 was available, so I picked that up before starting my builds. The new version of the boxes can be seen here.

If you are interested in creating your own Packer builds, you might take inspiration from mine, available here.
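In case you are wondering what a box build looks like from the command line, it boils down to something like this, run from the directory containing the template. The directory and template names are just placeholders:

$ cd ~/packer/ol8
$ packer build ol8.json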

How did it all go?

The new version of Packer worked fine with the new version of VirtualBox, and my boxes were built and uploaded in no time.

From there, all the Vagrant builds I’ve tried worked with no hiccups, so all versions seem to be playing well with each other.

I’ll be doing lots of testing over the next few days. I’ll update here if I notice anything unusual.

What about the VirtualBox GUI?

In the past people have asked me about issues they have had with the VirtualBox GUI after a new release, and my answer has always been the same. I don’t use it. I use Vagrant. As a result, when I say VirtualBox is working fine I am never commenting on the GUI side of VirtualBox. For all I know it could be a disaster, but it wouldn’t affect me. I just do “vagrant up” and wait while the magic happens… 🙂

If you want my advice, try using Vagrant and you will never want to do manual configuration in the GUI again!

Cheers

Tim…

When Overlapping CRON Jobs Attack…

We recently had an issue, which I suspect was caused by overlapping CRON jobs. By that I mean a CRON job had not completed its run by the time it was scheduled to run again.

CRON

If you’ve used UNIX/Linux you’ve probably scheduled a task using CRON. We’ve got loads of CRON jobs on some of our systems. The problem with CRON is it doesn’t care about overlapping jobs. If you schedule something to run every 10 minutes, but the task takes 30 minutes to complete, you will get overlapping runs. In some situations this can degrade performance to the point where each run gets progressively longer, meaning there are more and more overlaps. Eventually things can go bang!

Fortunately there is a really easy solution to this. Just use “flock”.

Let’s say we have a job that runs every 10 minutes.

*/10 * * * * /u01/scripts/my_job.sh > /dev/null 2>&1

We can use flock to protect it by providing a lock file. The job can only run if it can lock the file.

*/10 * * * * /usr/bin/flock -n /tmp/my_job.lockfile /u01/scripts/my_job.sh > /dev/null 2>&1

In one simple move we have prevented overlapping jobs.
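The “-n” flag tells flock to give up immediately if the lock is already held, rather than waiting for it to be released. You can see the behaviour by trying it manually in two sessions:

# Session 1 : Grab the lock and hold it for 60 seconds.
$ flock -n /tmp/my_job.lockfile sleep 60

# Session 2 : While session 1 holds the lock, this fails straight away.
$ flock -n /tmp/my_job.lockfile echo "got the lock"
$ echo $?
1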

Remember, each job will need a separate lock file. In the following example we have three separate scripts, so we need three separate lock files.

*/10 * * * * /usr/bin/flock -n /tmp/my_job1.lockfile /u01/scripts/my_job1.sh > /dev/null 2>&1
*/10 * * * * /usr/bin/flock -n /tmp/my_job2.lockfile /u01/scripts/my_job2.sh > /dev/null 2>&1
*/10 * * * * /usr/bin/flock -n /tmp/my_job3.lockfile /u01/scripts/my_job3.sh > /dev/null 2>&1

Oracle Scheduler (DBMS_SCHEDULER)

The Oracle Scheduler (DBMS_SCHEDULER) doesn’t suffer from overlapping jobs. The previous run must be complete before the next run can happen. If we have a really slow bit of code that takes 30 minutes to run, it is safe to schedule it to run every 10 minutes, even though it may seem a little stupid.

begin
  dbms_scheduler.create_job (
    job_name        => 'slow_job',
    job_type        => 'plsql_block',
    job_action      => 'begin my_30_min_procedure; end;',
    start_date      => systimestamp,
    repeat_interval => 'freq=minutely; interval=10; bysecond=0;',
    enabled         => true);
end;
/
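If you want to check what actually happened, the job run history shows the start time and duration of each run, so you can confirm nothing overlapped. A quick query against the run details, assuming the job above:

column job_name format a10
column run_duration format a15

select job_name, status, actual_start_date, run_duration
from   user_scheduler_job_run_details
where  job_name = 'SLOW_JOB'
order by actual_start_date;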

The Oracle Scheduler also has a bunch of other features that CRON doesn’t have. See here.

Conclusion

I’m not a massive fan of CRON. For many database tasks I think the Oracle Scheduler is far superior. If you are going to use CRON, please use it safely. 🙂

Cheers

Tim…

Planning our next Oracle database upgrades. A customer perspective…

Upgrading a database is not just about the technical side of things. It’s about the planning that is required. I can upgrade a database in a few minutes, but the project to upgrade all the environments for a specific application can take months to complete. In this post I want to discuss some of the issues we are considering at the moment regarding our future Oracle 23c upgrades.

What support dates are relevant?

Here are the support dates for the Oracle database.

Release   Premier Support (PS) Ends   Free Extended Support (ES) Ends
19c       April 30 2024               April 30 2025
21c       April 30 2024               N/A

Release Schedule of Current Database Releases (Doc ID 742060.1)

Here are the current support dates for Oracle Linux.

Release   GA Date    Premier Support Ends   Extended Support Ends
OL7       Jul 2014   Jul 2024               Jun 2026
OL8       Jul 2019   Jul 2029               Jul 2031
OL9       Jun 2022   Jun 2032               Jun 2034

Lifetime Support Policy: Coverage for Oracle Open Source Service Offerings

What database versions are we currently running?

To upgrade directly to 23c we must be running Oracle 19c or 21c.

All our databases are on 19c, so this puts us in a good position. It took a lot of pain and effort to get us to 19c, but it was worth it!

PDB or non-CDB architecture?

The non-CDB architecture was deprecated in 12.1.0.2, but it has remained supported up to, and including, 19c. So Oracle 23c will be the first long term release where the non-CDB architecture is not an option. If you’ve not got up to speed on pluggable databases, you better get started soon! (Multitenant Articles)

With one exception, we have PDBs across the board, so there is nothing new for us here. It sometimes felt like I was swimming against the tide by pushing PDBs so hard over the years, but it all seems worth it now.

What OS are you running on?

I’m going to conveniently forget that anything other than RHEL/OL exist, because other operating systems don’t exist for me in the context of running Oracle databases.

It took us a long time to migrate from OL6 to OL7. The majority of our Oracle databases are currently still running on OL7, which is fast approaching end of life. Since Oracle 23c will not be supported on OL7, we are going to need to migrate to a newer operating system. I wrote about my scepticism around in-place RHEL/OL upgrades (here), so that leaves us two choices.

  • Move our existing databases to a new OS now, then upgrade to 23c later.
  • Wait for the 23c upgrade, and do a big-bang OS migration and database upgrade.

What’s stopping us from doing the first option now? Nothing really. We could migrate our 19c databases to OL8 servers. It would be nicer to migrate them to OL9, but it is not supported for 19c yet. I recently wrote a rant about product certifications on Oracle Linux 9, which resulted in this response from Oracle.

“Oracle Database product management has confirmed that when Oracle Database 23c ships, it will be certified for both OL8 and OL9. Also, Oracle Database 19c will be certified on OL9 before end of 2023.”

That’s really good news, as it gives us more options when we move forward.

Will Oracle Linux exist in the future? Yes!

Just as I thought I had got my head around the sequence of events, RHEL dropped the bombshell about how they would distribute their source in future (see here). This raised concerns about whether RHEL clones such as Oracle Linux, Rocky Linux and AlmaLinux could even exist in the future.

Without knowing the future of our main operating system, we were questioning what to deploy on new servers. Do we continue with OL or switch to RHEL? Rocky and Alma released some “don’t panic” messages, but Oracle were very quiet. I wasn’t surprised by that, because Oracle don’t say anything until it has passed through legal, but as a customer it was a very nervy time.

A couple of days ago we got a statement from Oracle (here), with a firm commitment to the future of Oracle Linux. I immediately spoke to our system admins and said OL8/OL9 is back on the table. Phew!

I have my own opinions on the RHEL vs clones situation, but as a customer it’s not about politics. I just need to know the OS we are using is still going to exist, and will be supported for our databases. In that respect the statement from Oracle was very welcome!

Do you need a hardware refresh?

If you are running on physical kit, you need to check your maintenance agreements. You may need a hardware refresh to keep everything up to date.

We run everything on virtual machines (VMs), so the hardware changes to the clusters have no impact on our VMs. We’ve had at least one hardware refresh during the lifespan of some of our database VMs.

Thanks to Ludovico Caldara for mentioning this point.

What versions do vendors support?

We use a lot of third party applications, and some of the vendors are really slow to certify their applications on new versions of the database and operating systems.

Ultimately we will make a choice on destination versions and timings based on application vendor support.

Manual or AutoUpgrade?

In Oracle 23c manual upgrades are deprecated (but still supported). I was late to the party with AutoUpgrade, but now I’ve used it I will never do manual upgrades again. We will definitely be using AutoUpgrade for our 23c upgrades!

If you are new to AutoUpgrade I have some examples of using it when I was doing 21c upgrades (see here). That should help you get started.
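If you’ve not seen it before, AutoUpgrade is little more than a config file and a couple of commands. Here is a minimal sketch, where the paths, SID and file names are made up for illustration:

$ cat 23c_upgrade.cfg
global.autoupg_log_dir=/u01/app/oracle/autoupgrade
upg1.source_home=/u01/app/oracle/product/19.0.0/dbhome_1
upg1.target_home=/u01/app/oracle/product/23.0.0/dbhome_1
upg1.sid=cdb1

$ # Check for problems first, then do the actual upgrade.
$ java -jar autoupgrade.jar -config 23c_upgrade.cfg -mode analyze
$ java -jar autoupgrade.jar -config 23c_upgrade.cfg -mode deploy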

What are you going to test?

Testing is always a big stumbling block for us. We are not very far down the path of automated testing, which means we need bodies to complete testing. The availability of testing resource is always an issue. There are times of the year when it is extremely unlikely people will be made available, so planning this resource is really important.

So what’s the plan?

It’s always a balancing act around support for the OS, database and application vendors. Ultimately each project will have to be dealt with on a case by case basis, as the allocation of testing resources and potential disruption to the business have to be factored in. Everything is open to change, but…

  • Our default stance is we will upgrade to Oracle 23c on OL9. We will build new OL9 servers and install 23c on them, then use AutoUpgrade to migrate and upgrade the databases. For some of our internal developments I feel this could happen relatively quickly (kiss of death).
  • Application vendor support is often a sticking point for us, and timing will have to factor in the OL7 end of life. If support for 19c on OL9 comes in time, we may migrate our 19c databases to OL9, while we wait for a vendor to support Oracle 23c. Alternatively we could pay for extended support for OL7, and do the OS and database in one go once the application vendor is happy.

I realise this has been a bit of a ramble, but I just wanted to write it down to get things straight in my own head. 🙂

Cheers

Tim…

PS. I have some technical posts on upgrading to 23c that will be released once the on-prem version of 23c goes GA.

The rise and fall of read-only Oracle homes

Cast your mind back to the olden days of Oracle 18c, where one of the new features introduced was read-only Oracle homes. I wrote about it here.

Read-Only Oracle Homes in Oracle Database 18c

What was the problem?

Mixing executables, log files and configuration files is a really bad idea. Configuration files tend to have a long lifespan, while executables change all the time due to patches and upgrades. It can be difficult to find log files when they are spread across multiple subdirectories in the Oracle home, although the Automatic Diagnostic Repository (ADR) solved a lot of those problems for us.

Historically Oracle have had this mixed approach, with directories such as “dbs” and “network”, and some of the subdirectories of “rdbms”, amongst others, sitting under the Oracle home.

The solution?

Read-only Oracle homes solve this problem by splitting out most of the common “problem files” into a separate location, leaving the contents of the Oracle home in a mostly read-only state.

The solution was nice enough, and didn’t require mental gymnastics to understand if you paid a little attention. If you have never tried it, check out the article linked above for the basics.

The Rise

When Oracle 21c was released one of the behaviour changes was that read-only Oracle homes were the default. You could still choose to go read/write, but there was a clear statement of direction. Read-only Oracle homes were the future!

The Fall

I noticed during the 23c beta that the read/write Oracle homes were the default again. I raised a question about it a couple of times. I noticed that Oracle 23c Free used a read/write Oracle home too, but figured that wasn’t a “proper installation”, so whatever.

More recently I was going through the 23c installation guide and I saw this.

“With Oracle Database 23c, an Oracle home is available in read/write mode by default. However, you can choose to configure an Oracle home in read-only mode after you have performed a software-only Oracle Database installation.”

https://docs.oracle.com/en/database/oracle/oracle-database/23/ladbi/about-read-only-oracle-home.html#GUID-D848002A-DBAD-48FA-8467-E849630B8E42

So it looks like we’ve flipped back to read/write Oracle homes by default in 23c. Read-only Oracle homes are still available. Just not by default.
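Assuming it still works the way it did in 18c and 21c, switching a software-only installation to a read-only home is a single command, run before any databases are created. The path is just an example:

$ export ORACLE_HOME=/u01/app/oracle/product/23.0.0/dbhome_1
$ $ORACLE_HOME/bin/roohctl -enable

Once enabled, the “orabasehome” and “orabaseconfig” utilities show where the writable files now live.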

So what?

I prefer the read-only Oracle homes, and of course I can still choose to use them. The difference now is I expect the vast majority of people will use the read/write homes, as people tend to stick with the path of least resistance. So the question is, do I want to turn myself into a minority?

I understand why this is better for backwards compatibility, but I’m a little disappointed. Forcing the change of the default behaviour would have been nice, and better for the product in the long run.

I’ve got plenty of time to consider my options

Ah well. Better to have loved and lost, than to have your eyes gouged out with rusty spoons… 🙂

Cheers

Tim…

PS. I think I may have given some people the impression that read-only Oracle homes are going away. That wasn’t my intention. I’m just talking about the change in default behaviour since Oracle 21c.

Deprecated and Desupported Features in Oracle Database 23c

Every time there is a new database release on the horizon it’s worth looking at the deprecated and desupported features in that release, so you can start planning for the future. Here is the full list from the documentation.

Behavior Changes, Deprecated and Desupported Features for Oracle Database

I’m going to comment on a few things that stand out for me. You might find other things more interesting…

Deprecated

DBUA and Manual Upgrade Deprecation : About time! From 21c onward AutoUpgrade is the preferred upgrade approach. Signalling the deprecation of the other approaches is welcome in my opinion. If you’ve never used AutoUpgrade you can see some examples here.

Oracle Persistent Memory Deprecation : Intel killing Optane was the writing on the wall for Oracle Persistent Memory Database (PMEM) and Oracle Memory Speed (OMS) File System. This is not really a surprise.

Deprecation of the mkstore Command-Line Utility : Not a major thing, but I will probably need to revisit a handful of articles to do some small edits. As pointed out by Piotr Wrzosek, the mkstore utility is used for credentials when using a secure external password store. I’m guessing this will be baked into another utility like orapki going forward, but we will see. (see update 2)

DBMS_RESULT_CACHE Function Name Deprecations : I love this move. References to “black lists” are changed to “block lists”. I personally try to use “allow list” and “block list” instead of “white list” and “black list” in conversation. Regardless of any other motivation, I think they are more descriptive.

Desupported

Non-CDB Architecture : This was deprecated in 12.1.0.2 and desupported in 21c. I’m listing it here because 23c is the first long term release where this is desupported. Most people won’t have progressed past 19c, and may have resisted the multitenant architecture. You can’t resist any longer. I’ve written loads about pluggable databases here. Please get up to speed with it.

Original Export Utility (EXP) Desupported : For some reason this feels like a “WOW” moment, but in reality I can’t remember the last time I used imp/exp. The IMP utility is clearly still supported to allow direct upgrades from older releases. Can you believe it’s about 18 years since Data Pump was introduced? 🙂

Oracle Enterprise Manager Database Express (EM Express) Desupported : This feature never really hit home with me, so I’m not sorry to see it gone. For most stuff you can just use SQL Developer which replicates the functionality. Of course, if you have Cloud Control, you wouldn’t be using the express feature anyway.

Transport Layer Security versions 1.0 and 1.1 Desupported : Great! If you do still need to make database callouts to old services that don’t support TLSv1.2 or above, just put a load balancer or reverse proxy in front of them and you are sorted. We usually do that anyway to ease certificate management for database callouts. See here.

Traditional Auditing Desupported : Cool. I prefer unified audit policies anyway, and it’s been the preferred method since 12.1, so it’s hardly a surprise. I mentioned this here. There’s a quick example of a unified audit policy after this list.

Desupport of 32-Bit Oracle Database Clients : I can’t remember the last time I used a 32-bit client or server, so this doesn’t faze me.
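If you’ve never created a unified audit policy, the syntax is pleasantly simple. A minimal sketch, with a made-up policy name:

create audit policy failed_logon_policy
  actions logon;

audit policy failed_logon_policy whenever not successful;

The resulting audit records are visible in the UNIFIED_AUDIT_TRAIL view.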

Remember

Deprecated is not desupported. You can continue to use deprecated features, but you should be looking to move away from them before they are desupported in a future version.

The desupported stuff shouldn’t come as a big surprise as most things have been deprecated for some time. In some cases over many releases.

Make sure you check the full list for yourself, as there might be something important you need to think about.

Cheers

Tim…

Update 1: As mentioned in Mike Dietrich’s blog post (here) the public docs are currently for Oracle Database 23c Free, so the final on-prem release may include some changes. Keep your eyes open. 🙂

Update 2: Martin Bach confirmed my assumption that the credentials functionality would be included in a later version of orapki, as mentioned in this post.