Oracle VirtualBox 7.0.20

VirtualBox 7.0.20 has been released.

The downloads and changelog are in the usual places.

I’ve installed it on my Windows 10 and 11 machines with no drama.

Vagrant

There has been no new version of Vagrant since the last VirtualBox release.

If you are new to Vagrant and want to learn, you might find this useful.

Once you understand that, I found the best way of learning more was to look at builds done by other people. You can see all my Vagrant builds here.
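
If you just want a feel for the basic workflow before digging into other people’s builds, it boils down to a handful of commands. The box name below is just a placeholder, so swap in whichever box you want to test with.

vagrant init some-user/some-box   # create a skeleton Vagrantfile for the box
vagrant up                        # create and boot the VM
vagrant ssh                       # connect to the running VM
vagrant halt                      # shut the VM down
vagrant destroy -f                # remove the VM completely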

I’ll be doing some updates to my Oracle builds over the coming days, so this will get a lot of testing.

Cheers

Tim…

A History of Tech Sprawl

Here’s a little story about how things are all the same but different…

Software Sprawl

Let’s cast our minds back to the bad old days, when x86 machines were so underpowered that the thought of using them for a server was almost laughable. In those days the only option for something serious was to use UNIX on kit from one of the “Big Iron” vendors.

The problem was they were very expensive, so we ended up having loads of software installations on a single box. In some cases we had many versions of Oracle installed on a single machine, running databases of many different versions. In some cases those same machines ran middle-tier software too.

It may have been a small number of servers, but it was a software sprawl. To try and add some isolation to the sprawl we may have resorted to things like LPARs or Zones, but we often didn’t.

Physical Server Sprawl

Fast forward a few years and x86 kit became a viable alternative to big iron. In some cases a much better alternative. We replaced our big iron with many smaller machines. This gave us better isolation between services and cleaned up our software sprawl, but now we had a physical server sprawl, made up of loads of underutilized servers. We desperately needed some way to consolidate services to get better utilization of our kit, but keep the isolation we craved.

Virtual Machine (VM) Sprawl

Along comes virtualization to save the day. Clusters of x86 kit running loads of VMs, where each VM served a specific purpose. This gave us the isolation we desired, but allowed us to consolidate and reduce the number of idle servers. Unfortunately the number of VMs grew rapidly. People would fire up a new VM for some piddling little task, and forget that operating system and software licenses were a thing, and that each virtualized OS came with an overhead.

Before we knew it we had invented VM sprawl. If ten VMs are good, twenty must be better, and put all of them on a physical host with 2 CPUs and one hard disk, because that’s going to work just fine! 🙁

Container Sprawl

Eventually we noticed the overhead of VMs was too great, so we switched to containers, which had a much lower overhead. That worked fine, behaving almost like lightweight virtualization, but it wasn’t special enough, so we had to make sure each container did as little as possible. That way we would need 50 of them working together to push out a little “Hello World” app. Managing all those containers was hard work, so we had to introduce new tools to cope with deploying, scaling and managing containerised applications. These tools came with their own overhead of extra containers and complexity.

We patted ourselves on the back, but without knowing it we had invented container sprawl, which was far more complicated than anything we had seen before.

Cloud Sprawl

Managing VM and container sprawl ourselves became too much of a pain. Added to that, the limits of our physical kit were a problem. We couldn’t always fire up what we needed, when we needed it. In came the cloud to rescue us!

All of a sudden we had limitless resources at our fingertips, and management tools to allow us to quickly fire up new environments for developers to work in. Unfortunately, it was a bit too easy to fire up new things, and the myriad of environments built for every developer ended up costing a lot of money to run. We had invented cloud sprawl. We had to create some form of governance to control the cloud sprawl, so it didn’t bankrupt our companies…

What next?

I’m not sure what the next step is, but I’m pretty sure it will result in a new form of sprawl… 🙂

Cheers

Tim…

PS. I know there was a world before UNIX.

PPS. This is just a fun little rant. Don’t take things too seriously!

Life Update : Dude, what’s wrong with your face? (Episode 2)

A couple of years ago I wrote the following post, where I detailed my journey through some skin cancer treatments.

Life Update : Dude, what’s wrong with your face?

Fast forward a couple of years and here we are again. Fortunately I didn’t need anything chopped off this time, but I did need to repeat the treatment on my face.

In case you can’t be bothered to read the first post, this involves putting a chemotherapy cream on your face twice a day for 4 weeks. Anything that is cancerous or pre-cancerous will effectively get burnt off and scab. Nice. This is what I look like after 4 weeks of treatment this time.

2024 Treatment

Of course, this was nowhere near as bad as it was last time. I looked more like Deadpool in these photos. 🙂

2022 Treatment

Last time it was in the middle of the pandemic, so I hardly saw anyone, and I was wearing a mask when I was outside. This time I’ve been out in public, which is kind-of embarrassing. I get some funny looks at the gym. I feel like I want to tell people it’s not contagious. 🙂

So now it is just moisturizing about 4 times a day for a few weeks. Last time it took a couple of weeks for the scabs to drop off, then a couple of months for most of the redness to go. I’m hoping that will be a bit quicker this time, as it’s not as bad. Fingers crossed…

Once it’s healed I’ve got to treat a couple of patches again, but I won’t be doing my whole face, so it shouldn’t be quite so bad.

My next check up will be in a year, so I might be writing another post much sooner…

Cheers

Tim…

PS. I forgot to mention, there is nothing life threatening about this, so it’s not real drama.

Oracle Database : A Bigfile and Shrink Wishlist

A few days ago I mentioned a couple of new features related to bigfile tablespaces in Oracle database 23ai.

Having played with these new features, a couple of things were added to my wish list.

Convert Smallfile to Bigfile Tablespaces

It would be awesome if we could convert smallfile tablespaces to bigfile tablespaces. At the moment we have to create a new bigfile tablespace and move everything to it. It would be nice if we had some type of simple convert option instead.
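
For reference, this is roughly what the manual process looks like today. The tablespace, datafile and object names are made up, and you would need to repeat the moves and rebuilds for every segment in the old tablespace.

sqlplus / as sysdba <<EOF
-- Switch to the relevant PDB.
alter session set container = pdb1;

-- Create the replacement bigfile tablespace.
create bigfile tablespace users_big
  datafile '/u02/oradata/cdb1/pdb1/users_big01.dbf' size 10g autoextend on;

-- Move each segment out of the old smallfile tablespace, then rebuild
-- the associated indexes. Repeat for every table and index.
alter table app.orders move tablespace users_big;
alter index app.orders_pk rebuild tablespace users_big;

-- Once the old tablespace is empty, drop it.
drop tablespace users_small including contents and datafiles;
EOF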

I don’t know if this would be possible with some internal jiggery-pokery, or if it would just hide the manual process of moving everything from us. Either way it would be cool.

Being able to convert a smallfile tablespace to a bigfile tablespace would also give us the option to use the new shrink tablespace functionality, assuming the conversion process didn’t already achieve this for us. 🙂

As much as I like the new bigfile defaults, we don’t create a lot of new databases these days. We are mostly supporting existing databases, so being able to do an easy convert would be really handy.

Smallfile Shrink Tablespace

As much as I love the new shrink tablespace functionality for bigfile tablespaces, I have loads of existing smallfile tablespaces that could benefit from a shrink operation. I know how to manually reclaim free space, as described here, but it is a pain. It would be much better if the shrink tablespace functionality supported smallfile tablespaces as well.
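
Just to show the contrast, here’s a rough sketch of the two approaches. The DBMS_SPACE call is from memory, so check the 23ai documentation for the exact parameter and constant names, and the datafile name and sizes are made up.

sqlplus / as sysdba <<EOF
-- 23ai bigfile tablespace: analyze, then shrink, in a couple of calls.
-- (Parameter and constant names from memory - check the DBMS_SPACE docs.)
exec dbms_space.shrink_tablespace('USERS', shrink_mode => dbms_space.ts_mode_analyze);
exec dbms_space.shrink_tablespace('USERS');

-- Smallfile tablespace today: reorganise segments, then manually resize
-- each datafile down towards its high water mark.
alter database datafile '/u02/oradata/cdb1/pdb1/users01.dbf' resize 5g;
EOF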

Of course, I’m happy to ignore this if the conversion from smallfile to bigfile tablespaces were possible, as that would solve my issue via a different route. 🙂

Why do I care?

I’m not a fan of randomly shrinking things, but we do get incidents that leave us in a position where a big clean up is necessary. I recently wrote a post called When Auditing Attacks, where I mentioned how we accidentally generated loads of auditing records. Having a no-brainer way to clean this stuff up would be so useful!

Cheers

Tim…

The Dunning–Kruger Effect 

I’m starting to feel like the Dunning-Kruger Effect should be mentioned in every piece of media.

What is it?

“The Dunning–Kruger effect is a cognitive bias in which people with limited competence in a particular domain overestimate their abilities.”

Wikipedia

I see examples of this all the time and it drives me crazy.

Uneducated People

We have the stereotypes of drunk and lazy students, but the process of deep-diving a subject teaches you a lot more than just that subject. If education is done well it teaches you how to learn. I must admit my undergraduate degree didn’t really teach me to learn. That all happened during my PhD.

I think higher education also gives you a different perspective. At each level of education I realized how simple the previous level was. If I had never experienced the next level, I would never have had this realization. It also made me question what more I might be missing.

Aristotle wrote, “The more you know, the more you realize you don’t know.” The converse of this seems to be true also.

You don’t have to go to university to have this perspective, but it’s rare I see it from people who haven’t.

Educated Idiots

Even educated people can fall prey to the Dunning–Kruger effect. Being educated in one subject doesn’t qualify you as an expert in every other subject. You may have the tools to research a new subject area better than someone with limited education, but have you really done the research, or have you just read the headlines?

In the UK an undergraduate degree is 3 years, and a PhD is a minimum of 3 years. Did you really commit 3-6 years to this new subject you are claiming expertise in? I’m not saying every topic of conversation needs that amount of time and rigor, but you should have enough self-awareness to know you have not done the time, so you don’t know what you don’t know.

Despite this, some educated people seem to think their PhD qualifies them to speak on any subject as an expert. You see these people doing the rounds on popular podcasts talking like they are a world-leading expert on something they have no background in. It drives me nuts.

Critical Thinking

Unfortunately it all comes back to critical thinking, which seems to be sadly lacking in many people. I wrote about this here.

Conclusion

Please just engage your brain. If the people speaking are not self-aware enough to realize they are talking crap, at least you can be self-aware enough to fact check their rubbish.

Cheers

Tim…

Fedora 40 and Oracle

Fedora 40 was released over a month ago. Here comes the standard warning.

Here are the usual suspects.

I like messing about with this stuff, as explained in the first link.

I pushed Vagrant builds to my GitHub.

If you want to try these you will need to build a Fedora 40 box. You can do that using Packer. There is an example of that here.
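
If you’ve not done it before, the process looks something like this. The directory layout and box file name below are placeholders loosely based on my own builds, so adjust them to match wherever you keep your Packer templates.

cd packer/fedora40
packer init .
packer build .

# Add the resulting box to Vagrant, then use it in one of the builds.
vagrant box add --name fedora40 ./fedora40-virtualbox.box
cd ../../vagrant/fedora40/database
vagrant up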

What’s New?

So what’s new with Fedora 40? You can read about it here.

Cheers

Tim…

Oracle Enterprise Manager 13.5 Release Update 22 (13.5.0.22) Certified on Oracle Linux 9 (OL9)

We’ve been pushing out some Oracle 19c databases on Oracle Linux 9 (OL9) since it was certified, see here, but those databases have not been monitored or backed up by Cloud Control, because the Enterprise Manager (EM) 13.5 agent was not certified on OL9. Instead we had reverted to the bad old days of using CRON scripts to do everything.

Since we started running 19c on OL9 I had been regularly searching for the EM certification notice. Last week I noticed MOS Doc ID 2978593.1, which said, “EM Agent 13.5 with Enterprise Manager 13.5 Release Update 22 or higher is certified on RHEL9 and OL9”. Happy days. I subsequently found this post announcing update 22, which I had somehow missed. I was also nudged by someone on Twitter/X to check the certification matrix again.

EM 13.5.0.22 at Work

We have EM 13.5 running on OL8 at work. As soon as I found the agent was certified on OL9, I updated our installation to release update 22 and started trying to push out agents to the OL9 servers.

We hit an initial problem, which was that EM uses SSH to push out the agent, and uses SHA1 to do it. Unfortunately SHA1 is not allowed on our OL9 servers, so that kind-of scuppered things. This is the error we got.

  • Error: SSH connection check failed
  • Cause: Connection to the SSH daemon (sshd) on the target host failed with the following error : KeyExchange signature verification failed for key type=ssh-rsa
  • Recommendation: Ensure that SSH daemon (sshd) on the target host is able to respond to login requests on the provided sshd port 22. Ensure the provided user name and password or ssh keys are correct. Ensure that the property PasswordAuthentication is set to yes in the SSHD configuration file (sshd_config) on the remote host.

To resolve this the system administrators issued the following command as root on the OL9 servers.

update-crypto-policies --set DEFAULT:SHA1

Once that was done the agents pushed out with no problems.
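
If you need to do something similar, it’s worth knowing what policy you are currently on, and remembering that allowing SHA1 is a security trade-off, so you may want to switch back once the agents are deployed. Something like this, run as root.

# Check the current system-wide crypto policy.
update-crypto-policies --show

# Temporarily allow SHA1 so the agent push can work.
update-crypto-policies --set DEFAULT:SHA1

# Once the agents are deployed, consider reverting to the stricter default.
update-crypto-policies --set DEFAULT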

I’m currently pushing out agents to all our 19c on OL9 servers, and replacing all the CRON scripts with EM backups and monitoring.

This brings OL9 into line with our other database servers. Happy days.

EM 13c Installation on OL9

Although I don’t need it for work, I decided to spend the bank holiday weekend trying to do a fresh installation of 13.5 on OL9. I tried several different ways, with the main two being:

  • Install and configure the base release without the patches.
  • Install with the patches downloaded and applied as part of the installation and configuration.

In both cases everything looked fine until near the end of the process, where the OMS refused to start. Unfortunately I couldn’t find anything obvious in the logs. It takes a long time to run the build, so having it fail near the end is quite frustrating.
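
For anyone hitting something similar, this is roughly where I’ve been looking. The emctl commands are standard, but the environment variable and log path are assumptions based on a default installation layout, so adjust them for your own middleware and instance homes.

# Check whether the OMS and its components think they are up.
$MW_HOME/bin/emctl status oms -details

# Try starting the OMS manually to watch the failure happen.
$MW_HOME/bin/emctl start oms

# Dig through the OMS logs (path assumes a default gc_inst location).
ls -ltr /u01/app/oracle/gc_inst/em/EMGC_OMS1/sysman/log/
tail -100 /u01/app/oracle/gc_inst/em/EMGC_OMS1/sysman/log/emctl.log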

At the moment I can’t see any OL9-specific docs, so I can’t tell if I’m missing out a vital step. As mentioned in the previous section, there are definite differences between OL9 and OL8, so I would not be surprised if documentation (or a MOS note) gets released that points out an obvious gotcha.

As soon as I get it working I’ll release an article and a Vagrant build.

Cheers

Tim…

Pipelines and Automation : Switching things up to avoid additional costs

Here’s a little story about something that has happened over the last couple of days. Let’s start with some background.

Building blocks

Deployment pipelines and automations are made up of a number of building blocks including the following.

  • Git repositories
  • Automation servers
  • Build artifact repositories
  • Container registries
  • Software catalogues
  • Terraform state files

The specific tools are not important

I’ve said a number of times that the choice of specific tools is far less important than just getting off your backside and doing something with what you’ve got. You can always replace specific tools later if you need to.

Over time people at our company have bought tools or subscriptions for a specific piece of functionality, and those happen to include other features. This means we have a lot of overlap between tools and subscriptions. Just to emphasise this point, here is a quick list of tools we have available at our company that fulfil the role of these building blocks.

Git repositories.

  • BitBucket
  • GitHub
  • Local Server (for backups of cloud repos)

Automation servers.

  • TeamCity
  • GitHub Actions
  • Jenkins

Build artifact repositories.

  • Artifactory
  • GitHub Packages

Container registries.

  • Artifactory
  • GitHub Packages
  • Azure Container Registry

On-prem software catalogues.

  • Artifactory
  • HTTP server
  • File stores

Terraform state files.

  • Azure storage
  • Artifactory

Switching to save money

A couple of days ago my boss said company X had hit us with a 23% increase in the price of one of our subscriptions. Day 1 we moved our container registry off their service. Day 2 we moved our artifact repository off their service. We no longer need that subscription, so instead of company X getting an extra 23% from us, they are now going to get nothing…

Conclusion

Not all moves are effortless, but you should really try and engineer pipelines so you can switch tooling at any time. You never know when an external pressure, such as a pricing change, might make you want to change things up quickly. 🙂
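
As a trivial illustration, if things like the registry location only exist in one place, switching providers becomes a configuration change rather than an edit to every pipeline. This is just a hypothetical sketch, not a lift from our actual pipelines.

# The registry host comes from a single variable, so repointing it
# repoints every build that uses this script.
REGISTRY="${CONTAINER_REGISTRY:-registry.example.com}"
IMAGE="${REGISTRY}/my-app:${BUILD_NUMBER:-latest}"

docker build -t "${IMAGE}" .
docker push "${IMAGE}"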

Cheers

Tim…

AI Search and the future of content creation

The recent announcements of GPT-4o by OpenAI and the AI updates from the Google I/O keynote made me want to revisit the topic of how AI search will affect the future of content creation. I’ve already touched on this here, but I think it’s worth revisiting the impact of AI search.

The view from the top

I’ve seen a few places talking about the Gartner post predicting a 25% reduction in search engine volume by 2026. This specifically relates to chatbots and virtual agents, but I think this figure could be higher if we separate AI search from traditional search.

Google have been experimenting with Gemini and search results for some time, hoping to offer a better search experience. According to the keynote, that service will become generally available soon. ChatGPT can already be considered a replacement for traditional search. Instead of doing a search and getting links, you just get an answer, which is after all what you are looking for.

Herein lies the problem. If AI search presents answers directly, rather than referring you to the source websites, traffic to those websites drops. If there is indeed a 25% drop in traditional search by 2026, that will mean a 25% drop in revenue for many online content creators.

Why is this a problem?

Professionally produced content will definitely be affected by a 25% reduction in traffic. Those content creators rely on traffic to their sites for their ad revenue. Without this, they can’t pay their workers. I don’t think many companies or people would be happy about a 25% cut in their earnings.

The money from online advertisements has already fallen drastically over the last few years. Speaking from personal experience, for the same volume of traffic I’ve already seen ad revenue drop to about a quarter of what it was a few years back. Assuming that is true for professional content creators who rely on this income, they have already been hit hard, and are now likely to get hit again.

Even for those that don’t make money from publishing content, having a drop of 25% in their readership can be a demotivating factor.

So what?

So some people lose money. So what?

Well, AI typically relies on original source material to provide the information in the first place. If content creators give up, where is that new source content coming from? An endless recycling of AI generated content? That seems like a race to the bottom to me.

I spend a lot of time on YouTube and in recent months I’ve noticed the rise of AI generated content. I click on a video that looks interesting, only to find what sounds like an AI generated script being read by a very generic voice. Lots of words that sound related to the topic, but ultimately nothing of substance, leaving you with the feeling that you just wasted your time. I could easily see this happening to online publishing in general. The signal to noise ratio is likely to get really bad.

And another thing

I’ve focussed mainly on text publishing, as I’m mostly about articles and blog posts. Clearly there are other areas that are going to be massively affected by this.

  • Images : Unless you’ve been living under a rock you will already know about the complaints by people claiming AI image generation has stolen their material or art style. For companies that sell images online, AI image generation means game over for their business.
  • B-roll : When you watch videos on YouTube, you will notice many channels making use of b-roll footage. High quality clips inserted into their video to give it a more professional feel. Companies make money selling b-roll clips. That business will pretty much end overnight once the latest video generation tools are widely available. Why buy b-roll footage, when you can generate it for free?

Conclusion

Initially I see this as a win for the consumer, as we will be able to get access to information, images and video clips much more easily than we can currently. My concern is that the initial progress may be followed by a gradual decline in quality, to the point where everything becomes a soulless dirge.

Cheers

Tim…

Oracle VirtualBox 7.0.18, Vagrant 2.4.1 and Packer 1.10.3

Oracle VirtualBox 7.0.18

VirtualBox 7.0.18 has been released.

The downloads and changelog are in the usual places.

I’ve installed it on my Windows 10 and 11 machines. Both gave me issues, which I put down to the new version of VirtualBox, but on further investigation it was actually because of my new Vagrant box builds.

If I used the new version of VirtualBox with the old version of my Vagrant boxes the builds worked fine. If I tried to use one of the newly built Vagrant box versions it failed with this error.

The SSH connection was unexpectedly closed by the remote end. This
usually indicates that SSH within the guest machine was unable to
properly start up. Please boot the VM in GUI mode to check whether
it is booting properly.

I tried using “config.ssh.keep_alive = true” in the Vagrantfile, but that didn’t help. I’m continuing to investigate the issue, but it seems like VirtualBox 7.0.18 is working fine. It’s something with my box builds that is the problem.
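
If you hit something similar, these are the sort of things I’ve been using to dig into it. The port and key path assume Vagrant’s defaults for a single VirtualBox machine, so they may differ for your builds.

# Crank up Vagrant logging to see what happens around the SSH handshake.
VAGRANT_LOG=debug vagrant up

# Check what SSH settings Vagrant thinks it should be using for the box.
vagrant ssh-config

# Try connecting manually with verbose output once the VM is running.
ssh -vvv -p 2222 -i .vagrant/machines/default/virtualbox/private_key vagrant@127.0.0.1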

Vagrant 2.4.1

Releases of VirtualBox prompt me to check for new versions of Vagrant. The current version is Vagrant 2.4.1. All my test systems are built with Vagrant, so I installed it as well.

If you are new to Vagrant and want to learn, you might find this useful.

Once you understand that, I found the best way of learning more was to look at builds done by other people. You can see all my Vagrant builds here.

I’ll be doing some updates to my Oracle builds over the coming days, so this will get a lot of testing.

Packer 1.10.3

I use Packer to rebuild my Vagrant boxes (Oracle Linux 7, 8 and 9) so they have the latest guest additions. For the reasons mentioned above I’ve not released the new version of the Vagrant boxes yet. You can see the old ones here.

If you are interested in creating your own Packer builds, you might take inspiration from mine, available here.

Once I get to the bottom of the SSH issues on the new builds I’ll update the boxes.

How did it all go?

Most things worked fine. As mentioned there is an issue with my Vagrant box builds, but I’m assuming that is something on my side of things. 🙂

What about the VirtualBox GUI?

Just a quick warning. I do everything using Vagrant, so I rarely look at the VirtualBox GUI. Remember, when I say everything worked fine, I mean for what I use.

Cheers

Tim…