Fedora 40 and Oracle

Fedora 40 was released over a month ago. Here comes the standard warning.

Here are the usual suspects.

I like messing about with this stuff, as explained in the first link.

I pushed Vagrant builds to my GitHub.

If you want to try these you will need to build a Fedora 40 box. You can do that using Packer. There is an example of that here.

What’s New?

So what’s new with Fedora 40? You can read about it here.



Oracle Enterprise Manager 13.5 Release Update 22 : Certified on Oracle Linux 9 (OL9)

We’ve been pushing out some Oracle 19c databases on Oracle Linux 9 (OL9) since it was certified, see here, but those databases have not been monitored or backed up by Cloud Control, because the Enterprise Manager (EM) 13.5 agent was not certified on OL9. Instead we had reverted to the bad old days of using CRON scripts to do everything.

Since we started running 19c on OL9 I had been regularly searching for the EM certification notice. Last week I noticed MOS Doc ID 2978593.1, which said, “EM Agent 13.5 with Enterprise Manager 13.5 Release Update 22 or higher is certified on RHEL9 and OL9”. Happy days. I subsequently found this post announcing update 22, which I had somehow missed. I was also nudged by someone on Twitter/X to check the certification matrix again.

EM at Work

We have EM 13.5 running on OL8 at work. As soon as I found the agent was certified on OL9 servers I updated our installation to release update 22 and started to try and push out agents to the OL9 servers.

We hit an initial problem, which was that EM uses SSH to push out the agent, and relies on SHA1 during the SSH key exchange. Unfortunately SHA1 is not allowed on our OL9 servers, so that kind-of scuppered things. This is the error we got.

  • Error: SSH connection check failed
  • Cause: Connection to the SSH daemon (sshd) on the target host failed with the following error : KeyExchange signature verification failed for key type=ssh-rsa
  • Recommendation: Ensure that SSH daemon (sshd) on the target host is able to respond to login requests on the provided sshd port 22. Ensure the provided user name and password or SSH keys are correct. Ensure that the property PasswordAuthentication is set to yes in the SSHD configuration file (sshd_config) on the remote host.

To resolve this the system administrators issued the following command as root on the OL9 servers.

update-crypto-policies --set DEFAULT:SHA1

Once that was done the agents pushed out with no problems.
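
SHA1 was disabled on OL9 for good reason, so rather than leaving it enabled permanently, one option would be to relax the policy just for the agent deployment and revert afterwards, since the agent only needs SSH for the initial push. The following is a sketch of standard crypto-policies usage, not the exact commands we ran, so test it on your own systems first.

```
# Check the currently active policy. OL9 ships with DEFAULT, which disables SHA1.
update-crypto-policies --show

# Temporarily re-enable SHA1 so EM can push the agents out over SSH.
update-crypto-policies --set DEFAULT:SHA1

# ... deploy the agents from Cloud Control ...

# Revert to the stock policy once the agents are in place.
update-crypto-policies --set DEFAULT
```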

I’m currently pushing out agents to all our 19c on OL9 servers, and replacing all the CRON scripts with EM backups and monitoring.

This brings OL9 into line with our other database servers. Happy days.

EM 13c Installation on OL9

Although I don’t need it for work, I decided to spend the bank holiday weekend trying to do a fresh installation of 13.5 on OL9. I tried several different ways, with the main two being:

  • Install and configure the base release without the patches.
  • Install with the patches downloaded and applied as part of the installation and configuration.

In both cases everything looked fine until near the end of the process, where the OMS refused to start. Unfortunately I couldn’t find anything obvious in the logs. It takes a long time to run the build, so having it fail near the end is quite frustrating.

At the moment I can’t see any OL9 specific docs, so I can’t tell if I’m missing a vital step. As mentioned in the previous section, there are definite differences between OL9 and OL8, so I would not be surprised if documentation (or a MOS note) is eventually released that includes an obvious gotcha.

As soon as I get it working I’ll release an article and a Vagrant build.



Pipelines and Automation : Switching things up to avoid additional costs

Here’s a little story about something that has happened over the last couple of days. Let’s start with some background.

Building blocks

Deployment pipelines and automations are made up of a number of building blocks including the following.

  • Git repositories
  • Automation servers
  • Build artifact repositories
  • Container registries
  • Software catalogues
  • Terraform state files

The specific tools are not important

I’ve said a number of times that the choice of specific tools is far less important than just getting off your backside and doing something with what you’ve got. You can always replace specific tools later if you need to.

Over time people at our company have bought tools or subscriptions for a specific piece of functionality, and those happen to include other features. This means we have a lot of overlap between tools and subscriptions. Just to emphasise this point, here is a quick list of tools we have available at our company that fulfil the role of these building blocks.

Git repositories.

  • Bitbucket
  • GitHub
  • Local Server (for backups of cloud repos)

Automation servers.

  • TeamCity
  • GitHub Actions
  • Jenkins

Build artifact repositories.

  • Artifactory
  • GitHub Packages

Container registries.

  • Artifactory
  • GitHub Packages
  • Azure Container Registry

On-prem software catalogues.

  • Artifactory
  • HTTP server
  • File stores

Terraform state files.

  • Azure storage
  • Artifactory

Switching to save money

A couple of days ago my boss said company X had hit us with a 23% increase in the price of one of our subscriptions. Day 1 we moved our container registry off their service. Day 2 we moved our artifact repository off their service. We no longer need that subscription, so instead of company X getting an extra 23% from us, they are now going to get nothing…


Not all moves are effortless, but you should really try and engineer pipelines so you can switch tooling at any time. You never know when an external pressure, such as a pricing change, might make you want to change things up quickly. 🙂
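
As a concrete example, moving container images between registries is mostly a matter of re-tagging and pushing, which is part of why such a move can be done in a day. The registry names below are made up for illustration.

```
# Pull the image from the old registry (names are hypothetical).
docker pull old-registry.example.com/myteam/myapp:1.0

# Re-tag it for the new registry.
docker tag old-registry.example.com/myteam/myapp:1.0 \
           new-registry.example.com/myteam/myapp:1.0

# Push it to the new registry, then point the pipelines at the new location.
docker push new-registry.example.com/myteam/myapp:1.0
```

The pipelines themselves only need the registry reference updating, which is easiest if that reference is held in a single variable rather than hard-coded everywhere.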



AI Search and the future of content creation

The recent announcements of GPT-4o by OpenAI and the AI updates from the Google I/O keynote made me want to revisit the topic of how AI search will affect the future of content creation. I’ve already touched on this here, but I think it’s worth another look at the impact of AI search.

The view from the top

I’ve seen a few places talking about the Gartner post predicting a 25% reduction in search engine volume by 2026. This specifically relates to chatbots and virtual agents, but I think this figure could be higher if we separate AI search from traditional search.

Google have been experimenting with Gemini and search results for some time, hoping to offer a better search experience. According to the keynote, that service will become generally available soon. ChatGPT can already be considered a replacement for traditional search. Instead of doing a search and getting links, you just get an answer, which is after all what you are looking for.

Here lies the problem. If AI search presents answers directly, rather than referring you to the source websites, that represents a drop in traffic on the source websites. If there is indeed a 25% drop in traditional search by 2026, that will result in a drop of 25% in revenue for many online content creators.

Why is this a problem?

Professionally produced content will definitely be affected by a 25% reduction in traffic. Those content creators rely on traffic to their sites for their ad revenue. Without this, they can’t pay their workers. I don’t think many companies or people would be happy about a 25% cut in their earnings.

The money from online advertisements has already fallen drastically over the last few years. Speaking from personal experience, for the same volume of traffic I’ve already seen ad revenue drop to about a quarter of what it was a few years back. Assuming that is true for professional content creators who rely on this income, they have already been hit hard, and are now likely to get hit again.

Even for those that don’t make money from publishing content, having a drop of 25% in their readership can be a demotivating factor.

So what?

So some people lose money. So what?

Well, AI typically relies on original source material to provide the information in the first place. If content creators give up, where is that new source content coming from? An endless recycling of AI generated content? That seems like a race to the bottom to me.

I spend a lot of time on YouTube and in recent months I’ve noticed the rise of AI generated content. I click on a video that looks interesting, only to find what sounds like an AI generated script being read by a very generic voice. Lots of words that sound related to the topic, but ultimately nothing of substance, leaving you with the feeling that you just wasted your time. I could easily see this happening to online publishing in general. The signal to noise ratio is likely to get really bad.

And another thing

I’ve focussed mostly on text publishing, as I’m mostly about articles and blog posts. Clearly there are other areas that are going to be massively affected by this.

  • Images : Unless you’ve been living under a rock you will already know about the complaints by people claiming AI image generation has stolen their material or art style. For companies that sell images online, AI image generation means game over for their business.
  • B-roll : When you watch videos on YouTube, you will notice many channels making use of b-roll footage. These are high quality clips inserted into a video to give it a more professional feel. Companies make money selling b-roll clips. That business will pretty much end overnight once the latest video generation technology is widely available. Why buy b-roll footage when you can generate it for free?


Initially I see this as a win for the consumer, as we will be able to get access to information, images and video clips much more easily than we can currently. My concern is the initial progress may be followed by a gradual decline in quality, to the point where everything becomes a soulless dirge.



Oracle VirtualBox 7.0.18, Vagrant 2.4.1 and Packer 1.10.3

Oracle VirtualBox 7.0.18

VirtualBox 7.0.18 has been released.

The downloads and changelog are in the usual places.

I’ve installed it on my Windows 10 and 11 machines. Both gave me issues, which I put down to the new version of VirtualBox, but on further investigation it was actually because of my new Vagrant box builds.

If I used the new version of VirtualBox with the old version of my Vagrant boxes the builds worked fine. If I tried to use one of the newly built Vagrant box versions it failed with this error.

The SSH connection was unexpectedly closed by the remote end. This
usually indicates that SSH within the guest machine was unable to
properly start up. Please boot the VM in GUI mode to check whether
it is booting properly.

I tried using “config.ssh.keep_alive = true” in the Vagrantfile, but that didn’t help. I’m continuing to investigate the issue, but it seems like VirtualBox 7.0.18 is working fine. It’s something in my box builds that is the problem.

Vagrant 2.4.1

Releases of VirtualBox prompt me to check for new versions of Vagrant. The current version is Vagrant 2.4.1. All my test systems are built with Vagrant, so I installed it as well.

If you are new to Vagrant and want to learn, you might find this useful.

Once you understand that, I found the best way of learning more was to look at builds done by other people. You can see all my Vagrant builds here.

I’ll be doing some updates to my Oracle builds over the coming days, so this will get a lot of testing.

Packer 1.10.3

I use Packer to rebuild my Vagrant boxes (Oracle Linux 7, 8 and 9) so they have the latest guest additions. For the reasons mentioned above I’ve not released the new version of the Vagrant boxes yet. You can see the old ones here.

If you are interested in creating your own Packer builds, you might take inspiration from mine, available here.

Once I get to the bottom of the SSH issues on the new builds I’ll update the boxes.

How did it all go?

Most things worked fine. As mentioned there is an issue with my Vagrant box builds, but I’m assuming that is something on my side of things. 🙂

What about the VirtualBox GUI?

Just a quick warning. I do everything using Vagrant, so I rarely look at the VirtualBox GUI. Remember, when I say everything worked fine, I mean for what I use.



Oracle Database 23ai : How it affects me…

Oracle have released Oracle Database 23ai. You can watch the announcement video here, and read the announcement blog post here.

I don’t think I can add much to that, but I just want to talk about how this affects me as a customer and as a content creator.

Customer View

We’ve been waiting for Oracle Database 23 for a very long time. As I mentioned in this post, most of the upgrades I’ve been asked to do in my career are not driven by new features. They are driven by a need to stay in support.

Database upgrades are pretty simple from a technical perspective, but from a project perspective they are a nightmare. It takes ages to get everyone to agree to them, and then an eternity to actually test things before progressing to production. Any delays have a massive impact on this process.

We are in the process of migrating loads of our databases off Oracle Linux 7 and on to Oracle Linux 8 or 9 depending on 3rd party vendor support. We have to go through a whole testing cycle to complete this. If Oracle 23 had been released last year, many of these migrations would have gone directly to the new OS and new database version. You can argue the virtues of doing things separately or as a big bang, but our reality is testing resources are our biggest blocker, so having to test all our systems twice, once for the OS migration and once for the DB migration, represents a problem.

The delay of Oracle 23 on-prem has been a big headache. When I saw the announcement of Oracle Database 23ai I was sure it would include the on-prem version of the database. It does not. That was a bitter disappointment!

Content Creator View

I realise most of the people reading this are not content creators, and these issues are unlikely to affect you, but here goes…

Over the last 18 months I’ve written a bunch of articles. With the release of 23c Free I was able to publish most of them. As part of the Oracle community hype machine we’ve been encouraged to produce as much content about 23c as possible. There are a lot of us that will either have to go back and edit our 23c content, or leave the internet full of content for a version that doesn’t exist.

For the most part 23ai is just 23c with a different badge, so much of this can be done with a search and replace, along with the appropriate redirects. It is a bigger problem for those that have published videos on YouTube, as those need to be redone and republished. There is no quick in-place edit. I’m one of the lucky ones here, as I made the decision to wait for the on-prem release before starting any videos, but some people will have a really painful job to do if they want to keep things current.
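
For text content, the bulk of the 23c to 23ai rename really can be scripted. Here is a minimal sketch using sed, with made-up paths and content purely for illustration.

```shell
# Create a demo directory containing a hypothetical article that mentions 23c.
mkdir -p /tmp/articles_demo
echo "Oracle Database 23c is the latest release." > /tmp/articles_demo/post1.txt

# Replace 23c with 23ai in every file, keeping a .bak copy of each original.
find /tmp/articles_demo -name '*.txt' -exec sed -i.bak 's/23c/23ai/g' {} +

cat /tmp/articles_demo/post1.txt
```

The URL redirects still have to be handled separately in the web server or blogging platform, but the content edits themselves are mechanical.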

What I’m doing now

I’ve started the process.

My 23c index page now redirects to 23ai. It still contains all the 23c articles, but over the coming weeks they will change. All the old URLs will be redirected, so the world won’t be filled with broken links. It’s just going to take some time. If any of you notice any problems, just give me a shout.

I updated my Vagrant builds. The 23c Free on OL8 build is now a 23ai Free on OL8 build. There is also an OL9 build. You can find them here.

The original article about this OL8 build has been amended, and there is a new one for OL9.

The first step on the journey…


So I’m a few steps back compared to where I was before the announcement. I’m still waiting for the on-prem release, and now I’ve got to rework a bunch of existing content…

We are already seeing some backlash against AI in the tech press. I hope the new name doesn’t come back to haunt Oracle.



PS. Of course it’s all my own fault.

When Auditing Attacks

I had a particularly annoying problem with Oracle auditing a few days ago. I thought I would write about it here in case anybody knows a solution, or if anyone at Oracle cares to add the functionality I need. 🙂

The problem

I was asked to audit selects against tables in a particular schema when issued by several users. For the sake of this post let’s assume the following.

  • SCHEMAOWNER : The owner of the tables that are to be audited.
  • USER1, USER2, USER3 : The three users whose select statements are to be audited.

So I decided to write an audit policy.

Option 1 : Audit all selects, regardless of schema

My first thought was to do this.

create audit policy my_select_policy
actions select
when q'~ sys_context('userenv', 'session_user') in ('USER1','USER2','USER3') ~'
evaluate per session;
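
As a side note, creating a unified audit policy does not start any auditing on its own; the policy also has to be enabled. A minimal sketch, using the policy name from the example above:

```
-- Enable the policy. Auditing only starts once this is done.
audit policy my_select_policy;

-- And to stop auditing later.
noaudit policy my_select_policy;
```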

The problem is it produced masses of audit records, most of which were referencing selects against objects owned by other schemas. I have so many records I can’t actually purge the audit trail. I’m having to wait until the next partition is created in a month, so I can drop the current partition and shrink the data files. 🙁

Option 2 : Explicitly list all objects to be audited

So I need to make the policy more granular, which I can do by explicitly referencing all objects I want to audit, like this.

create audit policy my_select_policy
actions select on schemaowner.tab1,
select on schemaowner.tab2,
select on schemaowner.tab3
when q'~ sys_context('userenv', 'session_user') in ('USER1','USER2','USER3') ~'
evaluate per session;

The problem is there are loads of tables in the schema, so doing this is a pain. I could generate the statement using a script, but even then if someone adds a new table the audit policy wouldn’t pick it up.
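
At least the generation script is simple. Something like the following query could produce the "select on" lines from the data dictionary. Treat it as a sketch rather than a tested script, and remember to remove the trailing comma from the final line before pasting the output into the CREATE AUDIT POLICY statement.

```
-- Generate one "select on" line per table in the schema.
select 'select on ' || owner || '.' || table_name || ','
from   dba_tables
where  owner = 'SCHEMAOWNER'
order  by table_name;
```

Of course, this doesn't solve the underlying problem, as the policy would still need regenerating whenever a new table is added.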

What I really want

What I really want is to be able to limit the action to a specific schema. The specific syntax is not important. The result is what matters. Maybe something like this.

-- Use an explicit schema reference.
create audit policy my_select_policy
actions select on schema schemaowner
when q'~ sys_context('userenv', 'session_user') in ('USER1','USER2','USER3') ~'
evaluate per session;


I don’t believe the current syntax for audit policies allows them to be limited by schema, so I’m faced with generating masses of unnecessary audit records, or having to explicitly name every table. 🙁

This all sounded kind-of familiar, and when I did a bit of Googling I found this note by Pete Finnigan. So I’m not alone in finding this frustrating.



What is the end goal for automation and AI?

I’ve been a big proponent of automation over the years. I recognise that automation will cause the loss of some jobs, but I’ve always assumed people would move on to other things. A consistent theme of mine was, why waste humans on mundane work when they could do something of more value?

I recently read The Naked Sun by Isaac Asimov and now I feel rather uneasy about automation and AI…

I don’t want to give too much of the story away, but we are told fairly early that the planet Solaria only has 20,000 humans, and millions of robots. The robots do everything as far as the economy is concerned, and the humans are essentially idle custodians. Rather than having millions of people doing a variety of occupations, there are only the elite sitting at the top of the pile…

As I was reading this I was thinking about the economic elite on our planet and I started to feel some concern. My dream of a post-scarcity society is one where we are all equal, but I can’t imagine there are many people at the top that want to be equal. They would always want to be more than equal. Call me a pessimist, but when I think of Elon Musk calling himself the emperor of Mars, I wonder if a future post-scarcity society will be less like Star Trek, and more like Solaria…

Of course, the AI could just turn into Skynet and kill us all, but I have this feeling that greedy humans using AI and automation against the majority of the population poses the greater threat. Why have “them and us”, if there could just be “us”? 🙁

All of a sudden automation has lost some of its lustre…



HP-UX and Oracle 11g : Don’t let the door hit you on the ass on the way out…

We had a pretty momentous occasion recently. We finally got rid of the last of our HP-UX servers, and with it the last of our Oracle 11g databases and Oracle Forms apps.

We use VMware virtual machines running Oracle Linux for nearly all our databases. We pushed really hard over the last couple of years to get everything migrated to 19c. Despite this, there were two projects that were left behind on HP-UX and Oracle 11g. They were effectively dead projects, from the time before we moved to Oracle Cloud Apps, about 5 years ago. We couldn’t get rid of them because people wanted to look at the historical data, but there was very little appetite to do anything with them…

Finally, over the last year we’ve had an archive project running, which involved moving the databases on to Oracle Linux and upgrading to 19c. That bit was easy. The other bit of the project involved building two new APEX apps to replace a 3rd party application and an old Oracle Forms 11g app. Those went live a few months ago, but there was still a lot of reluctance to actually decommission the old systems.

Recently we finally got the approval to turn off the old systems, and decommission the old HP-UX kit. When the change went through the Change Advisory Board (CAB) it was such a relief…

The kit is now decommissioned, and I’ve just been clearing the agents out of Cloud Control, so it’s finally over. No more HP-UX. No more Oracle 11g. No more Oracle Forms…

Now all we have to do is replace a whole load of Oracle Linux 7 servers, and get ready to upgrade to the next long term release of the database… 🙂



Search Trends and Oracle

Please read the update before jumping to any conclusions. 🙂

I was chatting with some folks the other day and the question was raised of whether we had seen a decline in our website activity. Since the vast majority of my website traffic comes from Google searches, I figured the obvious thing to do was to look at what is happening on Google Trends in terms of search terms. That way we were not focusing on the popularity of a specific site, but on searches in general. That resulted in this rather alarming graph showing a trend over the last 20 years.

I guess we all thought this decline was at the expense of another engine, so our obvious next thought was to compare against the other relational database engines. One of the guys suggested we also compare the word “database” as well, just to see. This is the resulting graph.

It should be noted we tried variations on “sql server” and “postgresql”, but they didn’t affect the results. Feel free to try for yourself. 🙂

So the decline in searches was not restricted to Oracle. You may notice this is related to “All Categories” of search, and the word “oracle” is not specific to the database, so I tried in “Computers & Electronics”, and we got a similar result.

For subsequent searches I switched back to “All Categories”.

If we switch to the last 5 years, things seem to have levelled off somewhat.

So the next question was if NoSQL databases were eating into the relational database market. A straight search looked interesting.

But what happens when we add Oracle back into the mix? Does the rise in popularity of some NoSQL databases account for the drop in searches for relational databases?

It doesn’t look like it. The gains in some NoSQL databases are insignificant compared to the reductions in searches for relational databases.

So we are left with a question. Why is there a drop off in searches for relational databases, when there doesn’t seem to be a corresponding uptick in alternatives?

Update : So what is really going on?

The graphs are not absolute search numbers, but are normalised against the total number of Google searches over time. Back in the day only geeks were Googling, so tech searches made up a comparatively high percentage. Now that everyone is Googling, the proportion of tech searches is much lower compared to the random stuff. Check out the FAQ about Google Trends data. So in our case, the trend over time of an individual term is not as important as the comparison between two search terms over time, as both should be affected in a similar way by the normalisation…

What we can see is over time the searches for different relational engines have got closer. If we are using the number of searches as a proxy for popularity, then it seems it’s a much closer game now than before.



PS. Thanks to the folks that pointed me in the right direction about Google Trends.

PPS. I noticed the switch from 12c to 19c searches over recent years.