Oracle Database : A Bigfile and Shrink Wishlist

A few days ago I mentioned a couple of new features related to bigfile tablespaces in Oracle database 23ai.

Having played with these new features, a couple of things were added to my wish list.

Convert Smallfile to Bigfile Tablespaces

It would be awesome if we could convert smallfile tablespaces to bigfile tablespaces. At the moment we have to create a new bigfile tablespace and move everything to it. It would be nice if we had some type of simple convert option instead.

I don’t know if this would be possible with some internal jiggery-pokery, or if it would just hide the manual process of moving everything from us. Either way it would be cool.

Being able to convert a smallfile tablespace to a bigfile tablespace would give us the option to use the new shrink tablespace functionality, assuming the process of converting didn’t already achieve this for us. 🙂

As much as I like the new bigfile defaults, we don’t create a lot of new databases these days. We are mostly supporting existing databases, so being able to do an easy convert would be really handy.

Smallfile Shrink Tablespace

As much as I love the new shrink tablespace functionality for bigfile tablespaces, I have loads of existing smallfile tablespaces that could benefit from a shrink operation. I know how to manually reclaim free space, as described here, but it is a pain. It would be much better if the shrink tablespace could support smallfile tablespaces as well.
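For anyone who hasn't done it, the manual route is broadly this. It's a hedged sketch, so all the object and file names are made up, and in real life you would check the high water mark of each data file before picking a resize value.

```sql
-- A sketch of the manual approach (all names are made up).
-- 1. Rebuild segments so they move away from the end of the data file.
--    MOVE ONLINE keeps the indexes usable while the move happens.
ALTER TABLE app.big_table MOVE ONLINE;
ALTER INDEX app.big_table_pk REBUILD ONLINE;

-- 2. Resize the data file down to reclaim the freed space.
ALTER DATABASE DATAFILE '/u01/oradata/DB1/users01.dbf' RESIZE 500M;

-- By contrast, the 23ai bigfile functionality is a single statement.
ALTER TABLESPACE users SHRINK SPACE;
```

Multiply the first two statements by every segment sitting at the end of every data file and you can see why I call it a pain.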

Of course, I’m happy to ignore this if the conversion from smallfile to bigfile tablespaces were possible, as that would solve my issue via a different route. 🙂

Why do I care?

I’m not a fan of randomly shrinking things, but we do get incidents that leave us in a position where a big clean up is necessary. I recently wrote a post called When Auditing Attacks, where I mentioned how we accidentally generated loads of auditing records. Having a no-brainer way to clean this stuff up would be so useful!

Cheers

Tim…

The Dunning–Kruger Effect 

I’m starting to feel like the Dunning–Kruger Effect should be mentioned in every piece of media.

What is it?

“The Dunning–Kruger effect is a cognitive bias in which people with limited competence in a particular domain overestimate their abilities.”

Wikipedia

I see examples of this all the time and it drives me crazy.

Uneducated People

We have the stereotypes of drunk and lazy students, but the process of deep-diving a subject teaches you a lot more than just that subject. If education is done well it teaches you how to learn. I must admit my undergraduate degree didn’t really teach me how to learn. That all happened during my PhD.

I think higher education also gives you a different perspective. At each level of education I realized how simple the previous level was. If I had never experienced the next level, I would never have had this realization. It also made me question what more I might be missing.

Aristotle wrote, “The more you know, the more you realize you don’t know.” The converse of this seems to be true also.

You don’t have to go to university to have this perspective, but it’s rare I see it from people who haven’t.

Educated Idiots

Even educated people can fall prey to the Dunning–Kruger effect. Being educated in one subject doesn’t qualify you as an expert in every other subject. You may have the tools to research a new subject area better than someone with limited education, but have you really done the research, or have you just read the headlines?

In the UK an undergraduate degree is 3 years, and a PhD is a minimum of 3 years. Did you really commit 3-6 years to this new subject you are claiming expertise in? I’m not saying every topic of conversation needs that amount of time and rigor, but you should have enough self-awareness to know you have not done the time, so you don’t know what you don’t know.

Despite this, some educated people seem to think their PhD qualifies them to speak on any subject as an expert. You see these people doing the rounds on popular podcasts talking like they are a world-leading expert on something they have no background in. It drives me nuts.

Critical Thinking

Unfortunately it all comes back to critical thinking, which seems to be sadly lacking in many people. I wrote about this here.

Conclusion

Please just engage your brain. If the people speaking are not self-aware enough to realize they are talking crap, at least you can be self-aware enough to fact check their rubbish.

Cheers

Tim…

Fedora 40 and Oracle

Fedora 40 was released over a month ago. Here comes the standard warning.

Here are the usual suspects.

I like messing about with this stuff, as explained in the first link.

I pushed Vagrant builds to my GitHub.

If you want to try these you will need to build a Fedora 40 box. You can do that using Packer. There is an example of that here.
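The basic flow, once you have a Packer template, looks something like this. It's a dry-run sketch, so the template and box names are assumptions rather than my actual build.

```shell
# Dry-run sketch of a Packer box build (file and box names are assumptions).
step1="packer init ."                                        # install the plugins the template declares
step2="packer build fedora40.pkr.hcl"                        # produce the .box file
step3="vagrant box add --name fedora40 builds/fedora40.box"  # register it locally

for step in "$step1" "$step2" "$step3"; do
  echo "would run: $step"
done
```

Remove the echo and run the real commands once you are happy with the template.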

What’s New?

So what’s new with Fedora 40? You can read about it here.

Cheers

Tim…

Oracle Enterprise Manager 13.5 Release Update 22 (13.5.0.22) Certified on Oracle Linux 9 (OL9)

We’ve been pushing out some Oracle 19c databases on Oracle Linux 9 (OL9) since it was certified, see here, but those databases have not been monitored or backed up by Cloud Control, because the Enterprise Manager (EM) 13.5 agent was not certified on OL9. Instead we had reverted to the bad old days of using CRON scripts to do everything.

Since we started running 19c on OL9 I had been regularly searching for the EM certification notice. Last week I noticed MOS Doc ID 2978593.1, which said, “EM Agent 13.5 with Enterprise Manager 13.5 Release Update 22 or higher is certified on RHEL9 and OL9”. Happy days. I subsequently found this post announcing update 22, which I had somehow missed. I was also nudged by someone on Twitter/X to check the certification matrix again.

EM 13.5.0.22 at Work

We have EM 13.5 running on OL8 at work. As soon as I found the agent was certified on OL9 servers I updated our installation to release update 22 and started to try and push out agents to the OL9 servers.

We hit an initial problem. EM pushes the agent out over SSH, and its SSH client still relies on SHA1 (ssh-rsa) signatures. Unfortunately SHA1 is not allowed by default on our OL9 servers, so that kind-of scuppered things. This is the error we got.

  • Error: SSH connection check failed
  • Cause: Connection to the SSH daemon (sshd) on the target host failed with the following error : KeyExchange signature verification failed for key type=ssh-rsa
  • Recommendation: Ensure that SSH daemon (sshd) on the target host is able to respond to login requests on the provided sshd port 22. Ensure the provided user name and password or ssh keys are correct. Ensure that the property PasswordAuthentication is set to yes in the SSHD configuration file (sshd_config) on the remote host.

To resolve this the system administrators issued the following command as root on the OL9 servers.

update-crypto-policies --set DEFAULT:SHA1

Once that was done the agents pushed out with no problems.
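For completeness, you can check the active policy, and revert the change once it is no longer needed, with the related commands. Shown here as a dry-run sketch, since they need root.

```shell
# Dry-run sketch: inspect and later revert the system-wide crypto policy.
# Both commands need to be run as root on the OL9 servers.
show_cmd="update-crypto-policies --show"           # display the active policy
revert_cmd="update-crypto-policies --set DEFAULT"  # drop the SHA1 exception again

for cmd in "$show_cmd" "$revert_cmd"; do
  echo "would run: $cmd"
done
```

Remember, DEFAULT:SHA1 weakens the crypto policy for the whole server, so it's worth reverting it if a future agent release stops needing SHA1.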

I’m currently pushing out agents to all our 19c on OL9 servers, and replacing all the CRON scripts with EM backups and monitoring.

This brings OL9 into line with our other database servers. Happy days.

EM 13c Installation on OL9

Although I don’t need it for work, I decided to spend the bank holiday weekend trying to do a fresh installation of 13.5 on OL9. I tried several different ways, with the main two being the following.

  • Install and configure the base release without the patches.
  • Install with the patches downloaded and applied as part of the installation and configuration.

In both cases everything looked fine until near the end of the process, where the OMS refused to start. Unfortunately I couldn’t find anything obvious in the logs. It takes a long time to run the build, so having it fail near the end is quite frustrating.

At the moment I can’t see any OL9-specific docs, so I can’t tell if I’m missing a vital step. As mentioned in the previous section, there are definite differences between OL9 and OL8, so I would not be surprised if documentation (or a MOS note) is released that includes an obvious gotcha.

As soon as I get it working I’ll release an article and a Vagrant build.

Cheers

Tim…

Pipelines and Automation : Switching things up to avoid additional costs

Here’s a little story about something that has happened over the last couple of days. Let’s start with some background.

Building blocks

Deployment pipelines and automations are made up of a number of building blocks including the following.

  • Git repositories
  • Automation servers
  • Build artifact repositories
  • Container registries
  • Software catalogues
  • Terraform state files

The specific tools are not important

I’ve said a number of times that the choice of specific tools is far less important than just getting off your backside and doing something with what you’ve got. You can always replace specific tools later if you need to.

Over time people at our company have bought tools or subscriptions for a specific piece of functionality, and those happen to include other features. This means we have a lot of overlap between tools and subscriptions. Just to emphasise this point, here is a quick list of tools we have available at our company that fulfil the role of these building blocks.

Git repositories.

  • BitBucket
  • GitHub
  • Local Server (for backups of cloud repos)

Automation servers.

  • Teamcity
  • GitHub Actions
  • Jenkins

Build artifact repositories.

  • Artifactory
  • GitHub Packages

Container registries.

  • Artifactory
  • GitHub Packages
  • Azure Container Registry

On-prem software catalogues.

  • Artifactory
  • HTTP server
  • File stores

Terraform state files.

  • Azure storage
  • Artifactory

Switching to save money

A couple of days ago my boss said company X had hit us with a 23% increase in the price of one of our subscriptions. Day 1 we moved our container registry off their service. Day 2 we moved our artifact repository off their service. We no longer need that subscription, so instead of company X getting an extra 23% from us, they are now going to get nothing…
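The container registry move itself is mostly mechanical. Here is a dry-run sketch of the pattern, where the registry and image names are made up.

```shell
# Dry-run sketch of re-homing container images (all names are made up).
src="old-registry.example.com"
dst="new-registry.example.com"

for image in "app/api:1.4.2" "app/worker:1.4.2"; do
  echo "would run: docker pull ${src}/${image}"
  echo "would run: docker tag ${src}/${image} ${dst}/${image}"
  echo "would run: docker push ${dst}/${image}"
  last="docker push ${dst}/${image}"   # track the final command for reference
done
```

The real work is updating the pipelines that push and pull the images, which is why keeping registry URLs in shared variables, rather than hard-coding them everywhere, pays off.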

Conclusion

Not all moves are effortless, but you should really try and engineer pipelines so you can switch tooling at any time. You never know when an external pressure, such as a pricing change, might make you want to change things up quickly. 🙂

Cheers

Tim…

AI Search and the future of content creation

The recent announcements of GPT-4o by OpenAI and the AI updates from the Google I/O keynote made me want to revisit the topic of how AI search will affect the future of content creation. I’ve already touched on this here, but I think it’s worth revisiting the impact of AI search.

The view from the top

I’ve seen a few places talking about the Gartner post predicting a 25% reduction in search engine volume by 2026. This specifically relates to chatbots and virtual agents, but I think this figure could be higher if we separate AI search from traditional search.

Google have been experimenting with Gemini and search results for some time, hoping to offer a better search experience. According to the keynote, that service will become generally available soon. ChatGPT can already be considered a replacement for traditional search. Instead of doing a search and getting links, you just get an answer, which is after all what you are looking for.

Here lies the problem. If AI search presents answers directly, rather than referring you to the source websites, that represents a drop in traffic on the source websites. If there is indeed a 25% drop in traditional search by 2026, that will result in a drop of 25% in revenue for many online content creators.

Why is this a problem?

Professionally produced content will definitely be affected by a 25% reduction in traffic. Those content creators rely on traffic to their sites for their ad revenue. Without this, they can’t pay their workers. I don’t think many companies or people would be happy about a 25% cut in their earnings.

The money from online advertisements has already fallen drastically over the last few years. Speaking from personal experience, for the same volume of traffic I’ve already seen ad revenue drop to about a quarter of what it was a few years back. Assuming that is true for professional content creators who rely on this income, they have already been hit hard, and now are likely to get hit again.

Even for those that don’t make money from publishing content, having a drop of 25% in their readership can be a demotivating factor.

So what?

So some people lose money. So what?

Well, AI typically relies on original source material to provide the information in the first place. If content creators give up, where is that new source content coming from? An endless recycling of AI generated content? That seems like a race to the bottom to me.

I spend a lot of time on YouTube and in recent months I’ve noticed the rise of AI generated content. I click on a video that looks interesting, only to find what sounds like an AI generated script being read by a very generic voice. Lots of words that sound related to the topic, but ultimately nothing of substance, leaving you with the feeling that you just wasted your time. I could easily see this happening to online publishing in general. The signal to noise ratio is likely to get really bad.

And another thing

I’ve focussed mostly on text publishing, as I’m mostly about articles and blog posts. Clearly there are other areas that are going to be massively affected by this.

  • Images : Unless you’ve been living under a rock you will already know about the complaints by people claiming AI image generation has stolen their material or art style. For companies that sell images online, AI image generation means game over for their business.
  • B-roll : When you watch videos on YouTube, you will notice many channels making use of b-roll footage. High quality clips inserted into their video to give it a more professional feel. Companies make money selling b-roll clips. That business will pretty much end overnight once the latest video generation tools are widely available. Why buy b-roll footage, when you can generate it for free?

Conclusion

Initially I see this as a win for the consumer, as we will be able to get access to information, images and video clips much more easily than we can currently. My concern is the initial progress may be followed by a gradual decline in quality to the point where everything becomes a soulless dirge.

Cheers

Tim…

Oracle VirtualBox 7.0.18, Vagrant 2.4.1 and Packer 1.10.3

Oracle VirtualBox 7.0.18

VirtualBox 7.0.18 has been released.

The downloads and changelog are in the usual places.

I’ve installed it on my Windows 10 and 11 machines. Both gave me issues, which I put down to the new version of VirtualBox, but on further investigation it was actually because of my new Vagrant box builds.

If I used the new version of VirtualBox with the old version of my Vagrant boxes the builds worked fine. If I tried to use one of the newly built Vagrant box versions it failed with this error.

The SSH connection was unexpectedly closed by the remote end. This
usually indicates that SSH within the guest machine was unable to
properly start up. Please boot the VM in GUI mode to check whether
it is booting properly.

I tried using "config.ssh.keep_alive = true" in the Vagrantfile, but that didn’t help. I’m continuing to investigate the issue, but it seems like VirtualBox 7.0.18 itself is working fine. The problem is something in my box builds.
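For reference, this is roughly where that setting lives. It's just a Vagrantfile fragment, and the box name is an example, not one of my published boxes.

```ruby
# Vagrantfile fragment (box name is an example, not one of my published boxes).
Vagrant.configure("2") do |config|
  config.vm.box          = "example/ol8"
  config.ssh.keep_alive  = true   # the setting I tried, which didn't help here
  config.vm.boot_timeout = 600    # give a slow guest longer before Vagrant gives up
end
```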

Vagrant 2.4.1

Releases of VirtualBox prompt me to check for new versions of Vagrant. The current version is Vagrant 2.4.1. All my test systems are built with Vagrant, so I installed it as well.

If you are new to Vagrant and want to learn, you might find this useful.

Once you understand that, I found the best way of learning more was to look at builds done by other people. You can see all my Vagrant builds here.

I’ll be doing some updates to my Oracle builds over the coming days, so this will get a lot of testing.

Packer 1.10.3

I use Packer to rebuild my Vagrant boxes (Oracle Linux 7, 8 and 9) so they have the latest guest additions. For the reasons mentioned above I’ve not released the new version of the Vagrant boxes yet. You can see the old ones here.

If you are interested in creating your own Packer builds, you might take inspiration from mine, available here.

Once I get to the bottom of the SSH issues on the new builds I’ll update the boxes.

How did it all go?

Most things worked fine. As mentioned there is an issue with my Vagrant box builds, but I’m assuming that is something on my side of things. 🙂

What about the VirtualBox GUI?

Just a quick warning. I do everything using Vagrant, so I rarely look at the VirtualBox GUI. Remember, when I say everything worked fine, I mean for what I use.

Cheers

Tim…

Oracle Database 23ai : How it affects me…

Oracle have released Oracle Database 23ai. You can watch the announcement video here, and read the announcement blog post here.

I don’t think I can add much to that, but I just want to talk about how this affects me as a customer and as a content creator.

Customer View

We’ve been waiting for Oracle Database 23 for a very long time. As I mentioned in this post, most of the upgrades I’ve been asked to do in my career are not driven by new features. They are driven by a need to stay in support.

Database upgrades are pretty simple from a technical perspective, but from a project perspective they are a nightmare. It takes ages to get everyone to agree to them, and then an eternity to actually test things before progressing to production. Any delays have a massive impact on this process.

We are in the process of migrating loads of our databases off Oracle Linux 7 and on to Oracle Linux 8 or 9 depending on 3rd party vendor support. We have to go through a whole testing cycle to complete this. If Oracle 23 had been released last year, many of these migrations would have gone directly to the new OS and new database version. You can argue the virtues of doing things separately or as a big bang, but our reality is that testing resources are our biggest blocker, so having to test all our systems twice, once for the OS migration and once for the DB migration, represents a problem.

The delay of Oracle 23 on-prem has been a big headache. When I saw the announcement of Oracle Database 23ai I was sure it would include the on-prem version of the database. It does not. That was a bitter disappointment!

Content Creator View

I realise most of the people reading this are not content creators, and these issues are unlikely to affect you, but here goes…

Over the last 18 months I’ve written a bunch of articles. With the release of 23c Free I was able to publish most of them. As part of the Oracle community hype machine we’ve been encouraged to produce as much content about 23c as possible. There are a lot of us that will either have to go back and edit our 23c content, or leave the internet full of content for a version that doesn’t exist.

For the most part 23ai is just 23c with a different badge, so much of this can be done with a search and replace, along with the appropriate redirects. Where it is a bigger problem is for those that have published videos on YouTube, as those need to be redone and republished. There is no quick in-place edit. I’m one of the lucky ones here, as I made the decision to wait for the on-prem release before starting any videos, but some people will have a really painful job to do if they want to keep things current.
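For the text side, most of the rename really is mechanical. Here's a sketch of the idea on a sample string, though on real content you would want to review each match rather than blindly replacing.

```shell
# The core of the 23c -> 23ai rename, shown on a sample string.
# On real files something like this would do it, after careful review:
#   grep -rlw "23c" articles/ | xargs sed -i "s/23c/23ai/g"
title="Oracle Database 23c New Features"
new_title=$(echo "$title" | sed 's/23c/23ai/g')
echo "$new_title"
```

The redirects for the old URLs are the other half of the job, so the world isn't left full of broken links.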

What I’m doing now

I’ve started the process.

My 23c index page now redirects to 23ai. It still contains all the 23c articles, but over the coming weeks they will change. All the old URLs will be redirected, so the world won’t be filled with broken links. It’s just going to take some time. If any of you notice any problems, just give me a shout.

I updated my Vagrant builds. The 23c Free on OL8 build is now a 23ai Free on OL8 build. There is also an OL9 build. You can find them here.

The original article about this OL8 build has been amended, and there is a new one for OL9.

The first step on the journey…

Overall

So I’m a few steps back compared to where I was before the announcement. I’m still waiting for the on-prem release, and now I’ve got to rework a bunch of existing content…

We are already seeing some backlash against AI in the tech press. I hope the new name doesn’t come back to haunt Oracle.

Cheers

Tim…

PS. Of course it’s all my own fault.

When Auditing Attacks

I had a particularly annoying problem with Oracle auditing a few days ago. I thought I would write about it here in case anybody knows a solution, or if anyone at Oracle cares to add the functionality I need. 🙂

The problem

I was asked to audit selects against tables in a particular schema when issued by several users. For the sake of this post let’s assume the following.

  • SCHEMAOWNER : The owner of the tables that are to be audited.
  • USER1, USER2, USER3 : The three users whose select statements are to be audited.

So I decided to write an audit policy.

Option 1 : Audit all selects, regardless of schema

My first thought was to do this.

create audit policy my_select_policy
actions select
when q'~ sys_context('userenv', 'session_user') in ('USER1','USER2','USER3') ~'
evaluate per session
container=current;

The problem is it produced masses of audit records, most of which were referencing selects against objects owned by other schemas. I have so many records I can’t actually purge the audit trail. I’m having to wait until the next partition is created in a month, so I can drop the current partition and shrink the data files. 🙁
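For anyone wondering, the normal cleanup of the unified audit trail looks something like this sketch. In my case the sheer volume of records made it impractical, hence waiting for the partition switch.

```sql
-- A sketch of the standard unified audit trail purge
-- (run as a suitably privileged user).
BEGIN
  -- Mark everything up to now as archived.
  DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
    audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED,
    last_archive_time => SYSTIMESTAMP);

  -- Purge all records older than that timestamp.
  DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
    audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED,
    use_last_arch_timestamp => TRUE);
END;
/
```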

Option 2 : Explicitly list all objects to be audited

So I need to make the policy more granular, which I can do by explicitly referencing all objects I want to audit, like this.

create audit policy my_select_policy
actions select on schemaowner.tab1,
select on schemaowner.tab2,
select on schemaowner.tab3
when q'~ sys_context('userenv', 'session_user') in ('USER1','USER2','USER3') ~'
evaluate per session
container=current;

The problem is there are loads of tables in the schema, so doing this is a pain. I could generate the statement using a script, but even then if someone adds a new table the audit policy wouldn’t pick it up.
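The generation script itself is trivial, something like this sketch, with the output pasted into the CREATE AUDIT POLICY statement.

```sql
-- A sketch of generating the "select on" lines from the data dictionary.
SELECT 'select on ' || owner || '.' || table_name || ','
FROM   dba_tables
WHERE  owner = 'SCHEMAOWNER'
ORDER  BY table_name;
```

You still have to trim the trailing comma off the last line, and rerun it and recreate the policy every time a table is added, which is exactly the maintenance burden I'd rather avoid.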

What I really want

What I really want is to be able to limit the action to a specific schema. The specific syntax is not important. The result is what matters. Maybe something like this.

-- Use an explicit schema reference.
create audit policy my_select_policy
actions select on schema schemaowner
when q'~ sys_context('userenv', 'session_user') in ('USER1','USER2','USER3') ~'
evaluate per session
container=current;

Thoughts

I don’t believe the current syntax for audit policies allows them to be limited by schema, so I’m faced with generating masses of unnecessary audit records, or having to explicitly name every table. 🙁

This all sounded kind-of familiar, and when I did a bit of Googling I found this note by Pete Finnigan. So I’m not alone in finding this frustrating.

Cheers

Tim…

What is the end goal for automation and AI?

I’ve been a big proponent of automation over the years. I recognise that automation will cause the loss of some jobs, but I’ve always assumed people would move on to other things. A consistent theme of mine was, why waste humans on mundane work when they could do something of more value?

I recently read The Naked Sun by Isaac Asimov and now I feel rather uneasy about automation and AI…

I don’t want to give too much of the story away, but we are told fairly early that the planet Solaria only has 20,000 humans, and millions of robots. The robots do everything as far as the economy is concerned, and the humans are essentially idle custodians. Rather than having millions of people doing a variety of occupations, there are only the elite sitting at the top of the pile…

As I was reading this I was thinking about the economic elite on our planet and I started to feel some concern. My dream of a post-scarcity society is one where we are all equal, but I can’t imagine there are many people at the top that want to be equal. They would always want to be more than equal. Call me a pessimist, but when I think of Elon Musk calling himself the emperor of Mars, I wonder if a future post-scarcity society will be less like Star Trek, and more like Solaria…

Of course, the AI could just turn into Skynet and kill us all, but I have this feeling that greedy humans using AI and automation against the majority of the population poses the greater threat. Why have “them and us”, if there could just be “us”? 🙁

All of a sudden automation has lost some of its lustre…

Cheers

Tim…