Oracle Database 19c on Oracle Linux 9 (OL9): Installation Articles and Vagrant Builds

Earlier this year I wrote a rant about the lack of product certifications on Oracle Linux 9 (OL9).

One of the points I made was that we are having to replace our OL7 servers, but were being forced to go to OL8 because Oracle 19c was not certified on OL9, and Oracle 23c on-prem is not yet available.

This blog post by Mike Dietrich changed all that, because 19c is now certified on OL9, provided you are on patch 19.19 or above and on the correct version of UEK or the RHEL kernel. See Mike’s post for details.

Installation Articles

Of course, this triggered some installation articles.

Vagrant Builds

There are database, RAC and Data Guard Vagrant builds here.

Odd Occurrence

I noticed something a little odd when doing these builds using the 19.21 RU patches.

For the database and Data Guard builds I used the DB RU and OJVM combo patch, and I was still forced to fake the distribution using the CV_ASSUME_DISTID environment variable. For the RAC build I used the GI RU and OJVM combo patch, and I didn’t need to fake the distribution.

I went back to the DB build and instead used the GI RU and OJVM combo patch, and I no longer needed to fake the distribution. So it looks like there is something different about the database patches between these two types of RUs that slightly affects the installation process. It’s no big deal, but it might catch you out.
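For reference, this is the kind of thing the builds do when the distribution has to be faked. It’s a minimal sketch rather than the exact script: the CV_ASSUME_DISTID value, the paths and the response file are placeholders, and in my 19.21 tests it was only needed for the DB RU and OJVM combo runs.

# Rough sketch only - the CV_ASSUME_DISTID value, paths and response file are
# placeholders, not the exact ones used in the builds.
export CV_ASSUME_DISTID=OL8    # pretend to be a distribution the installer recognises

cd ${ORACLE_HOME}
./runInstaller -silent -applyRU /u01/software/db_ru_combo \
  -responseFile /u01/software/db_install.rsp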

Oracle 19c is old. Why do you care?

We are in the process of replacing a load of VMs that are currently running OL7, and we want to go to OL9. Prior to this announcement we were going to have to do one of two things.

  • Migrate to 19c on OL8, which would be OK for 23c when it drops, but not ideal as building an OL8 box now seems like a fail.
  • Wait for 23c on-prem to drop and move to 23c on OL9. The problem here is we could run out of time waiting for 23c to arrive.

This announcement gives us a new option.

  • Migrate to 19c on OL9, then upgrade to 23c when the on-prem version drops.

This third option is way better for us!

Remember

There are a couple of things to remember.

  • You need to be on 19c to upgrade to 23c, so getting your 19c database on an OS that is supported for 23c is important. We’ve had confirmation that 23c will be available for OL8 and OL9 on release.
  • The extended support waiver for 19c was increased from 1 year to 2 years. Mike also wrote about this here. That means you get free extended support for 19c until April 30, 2026.

Conclusion

This is massive for us. I’m very happy!

Cheers

Tim…

Video : Vagrant Oracle Real Application Clusters (RAC) Build

In today’s video we’ll discuss how to build a 2-node RAC setup using Vagrant.

This video is based on the OL8 19c RAC build, but it’s similar to the OL7 19c RAC build too. If you don’t have access to the patches from MOS, stick with the OL7 build, as it works with the 19.3 base release. The GitHub repos are listed here.

If you need some more words to read, you can find descriptions of the builds here, as well as a beginners guide to Vagrant.

The video is a talk-through of using the build, not an explanation of each individual step in the build, as that would be a really long video. If you just want a RAC to play with, run the build and you’ll have one. If you want to learn about the steps involved in doing a RAC build, read the scripts that make up the build. Please don’t ask for a GUI step-through of a build or I’ll be forced to ask you to read Why no GUI installations anymore? 🙂

The star of today’s video is Neil Chandler, who seems to be doing a bit of plumbing! 🙂

Cheers

Tim…

Oracle Database 19c RAC On OL8 Using Vagrant

On Sunday 17th May I started the process of putting together a Vagrant build of Oracle 19c RAC on Oracle Linux 8 (OL8.2 + UEK). I figured it would take me about 20 minutes to amend my existing OL7 build, but it took the whole of that Sunday, every evening for the following week, and the whole of the following Saturday and Sunday to complete. There were some late nights, so in terms of hours it was well over 5 days of work. Most of that time would have been completely unnecessary if I wasn’t an idiot.

First things first. The result of that effort was this build on GitHub, with an associated article on my website describing the build in more detail.

For the remainder of this post I want to describe the comedy of errors that went into this creation. These problems were not indicative of issues with the software. These problems were totally down to me being an idiot.

Changes when using OL8

The vast majority of the build remains the same, but there was one change that was necessary when moving from OL7 to OL8, and it became evident pretty quickly. When configuring the shared disks and UDEV, I switched from using partprobe to partx. It did look like partprobe was working, but it chucked out loads of errors, and partx didn’t. I found out about this from Uncle Google.
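As a rough sketch of what that change looks like (the device name below is a placeholder for the shared ASM disks):

# Placeholder device name - repeat for each shared disk.
partx -u /dev/sdb          # re-read the partition table; replaces the old partprobe call
# partprobe /dev/sdb       # the OL7-style call, which threw errors on OL8
udevadm control --reload-rules
udevadm trigger            # re-apply the UDEV rules so the ASM disk permissions are set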

There was also a slight difference in getting the UUID of a regular VirtualBox disk in OL8, but I had already noticed that on single instance builds, so that wasn’t a problem.
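I won’t reproduce my exact notes here, but as a generic sketch, these are the sorts of queries involved (the device names are placeholders, not necessarily the exact commands from the builds):

# Generic ways to query disk identifiers on OL8; /dev/sdb and /dev/sdb1 are placeholders.
blkid /dev/sdb1                                   # partition/filesystem UUID, e.g. for fstab
udevadm info --query=property --name=/dev/sdb     # UDEV properties, including serial IDs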

Where it all started to go wrong

So with those minor changes in place, all the prerequisites built fine and I was ready to start the Grid Infrastructure (GI) installation. I try to do these builds using the stock releases, so people without Oracle Support contracts can try them. In this case that meant using the 19.3 software. This really marked the point where it all started to go wrong. I knew this build was only certified on 19.7, but I continued anyway…

None of the 19.3 installers recognised OL8, so I had to fake the Linux distribution using the following environment variable.

# Pretend to be Oracle Linux 7.6 so the 19.3 installers accept the distribution.
export CV_ASSUME_DISTID=OEL7.6

The 19.3 GI software refused to install, saying there was a problem with passwordless SSH connectivity. I tested it and it all looked good to me. I searched for solutions to this on Google and MOS, checking out everything I could find that seemed relevant. None of the solutions helped.
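For what it’s worth, the manual checks were along these lines (the node names are placeholders for those used in the build):

# Placeholder node names. Each command should return without prompting for a password.
ssh oracle@ol8-rac1 date
ssh oracle@ol8-rac2 date
# The Cluster Verification Utility can run a similar check:
# ./runcluvfy.sh comp admprv -n ol8-rac1,ol8-rac2 -o user_equiv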

For the hell of it I tried older releases of OL8. I had the same issue with OL8.1, but OL8.0 worked fine. I figured this was something to do with OpenSSL or the SSH config, so I searched for more MOS notes and tried everything I could find. After a very long time I reached out to Simon Coter, who came back with an unpublished note (Doc ID 2555697.1), which included a workaround. That solved my problem for the 19.3 GI installation on OL8.2.

The GI configuration step and the DB software-only installation went fine, as they had done on OL8.0 also. Unfortunately the DBCA was failing to create a database, producing errors about passwordless SSH problems. The previous fix wasn’t helping, and I tried every variation I could find, short of totally downgrading OpenSSL.

At this point I pinged Markus Michalewicz, hoping he would be my salvation. In short he confirmed the build did work fine on OL8.2 if I used the 19.7 software, so the writing was on the wall. Then I stumbled on a MOS note (Doc ID 29529394.8) that explained the DBCA problem on OL8. There was no workaround, and it said it was fixed in 19.7. There is always a workaround, even if it means hacking the Linux distribution to death, but the more you do that the less realistic your build is, so at that point I conceded it was not sensible to continue with the 19.3 software.

When I write it down like this it doesn’t seem like a lot, but all this took a long time. I was trying different Vagrant boxes, and eventually built some of my own for OL8.0 and OL8.2, to make sure I knew exactly what was on them. Added to that, many of the tests required full rebuilds, so I was sometimes waiting in excess of an hour to see a success/fail message on the next test. It was soul destroying, and I nearly gave up a few times during the week.

A New Hope

Once I decided to go with the 19.7 patch things moved pretty quickly. I had the 19.3 GI installed and configured and the 19.3 DB software installed, so I added in a script to patch the lot to 19.7, and the database creation worked fine. Job done.
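The patching step itself was nothing unusual. As a rough sketch (the paths and the unzipped patch location are placeholders, and it’s run as root on each node):

# Placeholder paths - run as root on each node, with the 19.7 RU already unzipped.
export PATH=$PATH:/u01/app/19.0.0/grid/OPatch
opatchauto apply /u01/software/grid_ru_19_7    # patches the GI and DB homes on the node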

I cleaned things up a bit, pushed it to GitHub and put together the longer description of the build in the form of an article (see links above).

Pretty soon after I put all this live I got a comment from Abdellatif AG asking why I didn’t just use the “-applyRU” parameter in the Grid and DB software installations to apply the patches as part of the installation. At this point I felt a mix of emotions. I was kind of frustrated with myself for wasting so much time trying to get 19.3 working in the first place, then annoyed at myself for being so blinkered by the existing build that I hadn’t seen the obvious solution regarding the patching. Why build it all then patch it, when you can do it right first time?

The conversion of the build to use the “-applyRU” parameter with the runInstaller and gridSetup.sh calls was really quick, but the testing took a long time, because these builds take in excess of 90 minutes each try. Things pretty much worked first time.
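For anyone who hasn’t used it, the “-applyRU” approach looks something like this. It’s a hedged sketch rather than the exact calls from the build: the paths, response files and patch locations are placeholders.

# Placeholder paths, response files and patch locations.
# Grid Infrastructure: apply the RU as part of the setup.
${GRID_HOME}/gridSetup.sh -silent -applyRU /u01/software/grid_ru \
  -responseFile /u01/software/grid_setup.rsp

# Database software: same idea with runInstaller.
${ORACLE_HOME}/runInstaller -silent -applyRU /u01/software/db_ru \
  -responseFile /u01/software/db_install.rsp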

Now that I was effectively using 19.7 software out of the gate I figured many of the tweaks I had put in place when using the 19.3 software were no longer needed. I started to remove these tweaks and everything was good. By the end of that process I was pretty much left with the OL7-type build I started with. The vast majority of my time over the last week has been unnecessary…

Conclusion

The 19c (19.7) RAC build on OL8.2+UEK6 is pretty straightforward and works without any drama.

This process has shown me how stubborn and blinkered I can be at times. Taking a step back and getting a fresh perspective would have saved me a lot of time in the long run.

Thanks to Simon Coter, Markus Michalewicz and Abdellatif AG who all witnessed my descent into madness.

Cheers

Tim…

PS. There are some extra notes at the end of the article, which include some of the MOS notes I tried along the way. They are unnecessary for the build, but I felt like I should record them.

PPS. The image is how I feel at the end of this process.

Data Guard and RAC on Docker : Perhaps I was wrong?

I’ve talked a lot about Docker and containers over the last few years. With respect to the Oracle database on Docker, I’ve given my opinions in this article.

Over the weekend Sean Scott tweeted the following.

“A while back @oraclebase said Data Guard didn’t make sense on Docker.

For those of us disinterested in the sensible I present #Oracle #DataGuard on #Docker. 19c only for now. Please let me know what’s broken. Enjoy!

https://github.com/oraclesean/DataGuard-docker”

This was in reference to a statement in my article that said the following.

“Oracle database high availability (HA) products are complicated, often involving the coordination of multiple machines/containers and multiple networks. Real Application Clusters (RAC) and Data Guard don’t make sense in the Docker world. In my opinion Oracle database HA is better done without Docker, but remember not every database has the same requirements.”

For the most part I stick by my statement, for the reasons described in my article. Although both Data Guard and RAC will work in Docker, I generally don’t think they make sense.

But…

A few years ago I had a conversation with Seth Miller, who was doing RAC in Docker. In his case it made sense for testing because of his use cases. I discussed this in this post.

For that use case, Seth was right and I was wrong.

What about Data Guard?

For a two-node Data Guard playground I don’t see any major advantages to using two containers in one VM compared to two VMs. The overhead of the extra VM and OS is not significant for this use case. Remember, most of the resources are going to the Oracle instances, not the VM and OS. Also, the VM approach will give you something similar to what you will see in production. It feels like a more natural testing scenario to me.

But Sean’s scenario was not this simple. When I questioned him over the value of this, considering the two VM approach had so little extra overhead, he came back with the following.

“There I’ll disagree. I have a Docker/sharding build I’m working on. 7 databases. Starts in moments. On my laptop. I can’t do 7 VM. No way!”

Now this scenario changes the game significantly. All of a sudden we go from the overhead of one extra VM to the overhead of six extra VMs, which is pretty significant on a laptop. In that case the Docker method probably makes a lot more sense than the VM approach for testing on a laptop.

Once again, I’m wrong and Sean is correct for this use case.

Conclusion

If you are building a two-node RAC or Data Guard playground, I still think the VM approach makes a lot more sense. It’s going to be a lot more like what you use at work, and you don’t have to deal with some of the issues that containers bring with them.

Having said that, if you are looking to build something more extreme, or you are just trying stuff for fun, then Docker may be the right solution for you.

I still don’t see a realistic future for an RDBMS monolith on containers. I don’t care if it’s a single container or a giant Kubernetes cluster. This is not a criticism of the RDBMS or of containers. They are just things from different worlds for different purposes and continuing to treat them differently seems totally fine to me. Having said all that, it doesn’t mean combining the two can’t be useful for some use cases.

Remember, this is just my opinion! 🙂

Cheers

Tim…

PS. As a general point, trying to build your own data service on containers feels like a mistake. I would just use a cloud service that gives you the features, performance and availability you need. Concentrate on your apps.

Oracle Database 19c (19.3) : Installations, RAC, Data Guard etc.

A few weeks ago I put out a post about 19c installations and all that good stuff. That post used the 19.2 release, which was not the official on-prem release of the product. Now that Oracle 19c (19.3) has dropped and is available from here and here, this post is just to say all those builds have been updated to use the 19.3 release. I also noticed the 19c preinstall package is available from yum.oracle.com.
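If you’re building your own boxes rather than using the Vagrant builds, the preinstall package sets up the prerequisites in one shot:

# Run as root on OL7; sets up the kernel parameters, packages and the oracle user/groups.
yum install -y oracle-database-preinstall-19c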

Not surprisingly, I took the Vagrant and Docker builds I did for 19.2, changed the environment variables holding the software zip names, and everything worked just fine. Here are the associated articles, with those minor edits to reflect the version change.
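The change itself is tiny. As a hypothetical illustration (the variable names are made up, not the ones used in the builds, but the 19.3 zip names are the real ones):

# Hypothetical variable names - the builds use their own; only the zip names change.
DB_SOFTWARE=LINUX.X64_193000_db_home.zip
GRID_SOFTWARE=LINUX.X64_193000_grid_home.zip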

I’ve committed a whole bunch of stuff to GitHub.

  • Vagrant build of 19c on OL7 with APEX and ORDS (here).
  • Vagrant build of 19c on Fedora 29 (here).
  • Vagrant hands-off build of 19c RAC on OL7 (here).
  • Vagrant hands-off build of 19c Data Guard on OL7 (here).
  • Docker 19c on OL7 build (here).
  • Docker 19c RPM on OL7 build (here).
  • Docker compose (here) and swarm (here) stacks.

Automation is awesome! 🙂

Cheers

Tim…

Oracle Database 19c : Installations, RAC, Data Guard and Upgrades

I’ve been playing around with Oracle Database 19c on LiveSQL since it was upgraded, and I pretty much thought that would be what I was stuck with until the on-prem release, as I don’t have an Exadata and it’s not on Oracle Cloud DBCS yet. Having seen a bunch of people doing stuff on VMs, I got a bit frustrated and looked on eDelivery, and lo and behold the 19c software is available for download, even if you don’t have an Exadata CSI. I’m sure 18c was restricted during this period…

I’m pretty sure you wouldn’t be supported to use this for anything real (that wasn’t Exadata of course) until the on-prem drop, which will probably be 19.3 if they repeat what happened for 18c, but it does allow you to have a play.

Having a bunch of Vagrant environments for 18c already meant it was pretty easy to test a whole bunch of 19c stuff within a few minutes, as most of the basics are very similar, with just minor changes to package recommendations. As a result I’ve pushed out the following stuff in the last couple of evenings.

Along the way I’ve committed a whole bunch of stuff to GitHub.

  • Vagrant build of 19c on OL7 with APEX and ORDS (here).
  • Vagrant build of 19c on Fedora 29 (here).
  • Vagrant hands-off build of 19c RAC on OL7 (here).
  • Vagrant hands-off build of 19c Data Guard on OL7 (here).
  • Docker 19c on OL7 build (here).
  • Docker compose (here) and swarm (here) stacks.

It should be obvious, but remember this is literally the first time I’ve done this stuff with 19c, so things will change over time. I just wanted to try some stuff out to see what happened, and have some test environments to play with while I’m checking out the new features. Once the real on-prem drop happens I’ll bring these up to date.

If nothing else, this is once again proof of how awesome automation is. A few minor tweaks and boom, there’s a new set of test environments. 🙂

Now I can get back to doing what I was meant to be doing… 🙂

Cheers

Tim…

VirtualBox and Vagrant : New RAC Stuff and Changes

There were a lot of changes in my Vagrant repository on GitHub last week and over the weekend.

First, I got asked a question about 12.2 RAC and I couldn’t be bothered to run through a manual build, so I took my 18c RAC hands-off build and amended it to create a 12.2 RAC hands-off build. Along the way I noticed a couple of hard-coded bits in the 18c build I hadn’t noticed previously, which I altered of course. I also had to move the 18c build to a version-specific sub-directory. I think I’ve altered all references to the location.

I went through some of my individual server builds and updated them to use the latest versions of Tomcat 9, Java 11 and APEX 18.2. All that was pretty straightforward.

On Sunday I was running some tests of the builds on my laptop while I was at my brother’s house, and I noticed I was not pulling packages from the yum repositories properly. I ended up adding “nameserver 8.8.8.8” to pretty much all the “/etc/resolv.conf” files inside the VMs. I’m not sure what has changed, as that hasn’t happened before, so maybe it’s something to do with the networking… Anyway, it fixed everything, so happy days.
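The workaround itself is a one-liner per VM (8.8.8.8 is Google’s public DNS resolver):

# Run inside each VM (or from the provisioning script) as root.
echo "nameserver 8.8.8.8" >> /etc/resolv.conf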

While I was doing these builds I learned something new. I forgot to amend the path to my ASM disks from a UNIX style path “/u05/VirtualBox/shared/ol7_183_rac/…” to a Windows style path. Vagrant didn’t care and just created the location under the C drive as “C:\u05\VirtualBox\shared\ol7_183_rac\”. I’ll have to add a note about that to my “README.md” files.

I’ve still got to update some Docker builds with the latest software. I’ll probably do that over this week…

Cheers

Tim…

 

How To Learn Real Application Clusters (RAC) Administration

In yesterday’s post about Learning, Career Development and Mentoring I mentioned a specific question I had been asked about learning RAC. I get it quite a bit, and after reading the comments from that post I thought I would write something I could use as a reply to that type of question in future. So here it is.

It’s very much an opinion piece, but I would like to know what other RAC admins think. This is very much a starting point. I’m sure the article will change over time as I think about it and people comment on it. 🙂

Cheers

Tim…

Oracle 12c RAC on Oracle Linux 6 and 7 using NFS

Following on from the last post, I’ve brought my NFS RAC stuff up to date also.

I noticed I had not done a RAC install using NFS on Oracle Linux 6, so I threw that in for good measure too. 🙂

Just as a little history to this… I was doing the desktop Oracle RAC thing (using VMware, then VirtualBox) for a while when I started reading some blog posts by Kevin Closson about NFS. At the time, NFS filers were considered the poor relation to SANs, which was obvious, or they wouldn’t be so cheap in comparison, right? In those articles Kevin pointed out that most people’s systems at the time probably weren’t capable of maxing out a decent filer if it was set up correctly. Since NFS is a cluster file system, that got me thinking I should try RAC on it to see how easy it was. That was in the Oracle 10g days. How time flies when you are having fun… 🙂

Cheers

Tim…

Oracle 12cR1 RAC Installation on Windows 2012 Using VirtualBox…

After having a play with Oracle 12c on Windows 8, I decided to give Windows Server 2012 a go. Here is the resulting virtual RAC installation.

As you would expect, much of the process is pretty similar to the 11gR2 RAC installation on Windows 2008.

Windows Server 2012 is a strange beast. The interface is quite similar to Windows 8, which seems strange for a server OS. I’m gradually coming to terms with Windows 8, so I am not so repulsed any more. That’s not to say I think it is the correct thing for Microsoft to do, but the thought of supporting my family on it is not filling me with quite so much dread now.

I really should get round to upgrading my desktop to Fedora 19, but time has been short. 🙂

Cheers

Tim…