Data Guard and RAC on Docker : Perhaps I was wrong?

I’ve talked a lot about Docker and containers over the last few years. With respect to the Oracle database on Docker, I’ve given my opinions in this article.

Over the weekend Sean Scott tweeted the following.

“A while back @oraclebase said Data Guard didn’t make sense on Docker.

For those of us disinterested in the sensible I present #Oracle #DataGuard on #Docker. 19c only for now. Please let me know what’s broken. Enjoy!”

https://github.com/oraclesean/DataGuard-docker

This was in reference to a statement in my article that said the following.

“Oracle database high availability (HA) products are complicated, often involving the coordination of multiple machines/containers and multiple networks. Real Application Clusters (RAC) and Data Guard don’t make sense in the Docker world. In my opinion Oracle database HA is better done without Docker, but remember not every database has the same requirements.”

For the most part I stick by my statement, for the reasons described in my article. Although both Data Guard and RAC will work in Docker, I generally don’t think they make sense.

But…

A few years ago I had a conversation with Seth Miller, who was doing RAC in Docker. In his case it made sense for testing because of his use cases. I discussed this in this post.

For that use case, Seth was right and I was wrong.

What about Data Guard?

For a two-node Data Guard playground I don’t see any major advantage in using two containers in one VM, compared to two VMs. The overhead of the extra VM and OS is not significant for this use case. Remember, most of the resources are going to the Oracle instances, not the VM and OS. Also, the VM approach will give you something similar to what you will see in production. It feels like a more natural testing scenario to me.

But Sean’s scenario was not this simple. When I questioned him about the value of this, considering the two-VM approach had so little extra overhead, he came back with the following.

“There I’ll disagree. I have a Docker/sharding build I’m working on. 7 databases. Starts in moments. On my laptop. I can’t do 7 VM. No way!”

Now this scenario changes the game significantly. All of a sudden we go from the overhead of one extra VM to the overhead of six extra VMs, which is pretty significant on a laptop. At that point the Docker method probably makes a lot more sense than the VM approach for testing that scenario.

Once again, I’m wrong and Sean is correct for this use case.

Conclusion

If you are building a two node RAC or Data Guard playground, I still think the VM approach makes a lot more sense. It’s going to be a lot more like what you use at work, and you don’t have to deal with some of the issues that containers bring with them.

Having said that, if you are looking to build something more extreme, or you are just trying stuff for fun, then Docker may be the right solution for you.

I still don’t see a realistic future for an RDBMS monolith on containers. I don’t care if it’s a single container or a giant Kubernetes cluster. This is not a criticism of the RDBMS or of containers. They are just things from different worlds, built for different purposes, and continuing to treat them differently seems totally fine to me. Having said all that, it doesn’t mean combining the two can’t be useful for some use cases.

Remember, this is just my opinion! 🙂

Cheers

Tim…

PS. As a general point, trying to build your own data service on containers feels like a mistake. I would just use a cloud service that gives you the features, performance and availability you need. Concentrate on your apps.

Oracle REST Data Services (ORDS) 19.4 : A quick life update…

Almost 2 weeks ago I wrote about the release of Oracle REST Data Services (ORDS), SQLcl, SQL Developer and SQL Developer Data Modeler 19.4.

I spent the holidays playing around with ORDS quite a bit, so I came back to work today and pushed it out across all Dev and Test installations.

As I’ve mentioned before, at work we run ORDS on Tomcat inside Docker containers. The build we use is very similar to this one I put on GitHub, but with some extra work-related bits added.

What did I have to do for this update?

Two things:

  • Build a new version of our ORDS Docker image with version 19.4 of the ORDS and SQLcl software.
  • Remove all the containers based on this image and fire up new containers (there’s a rough sketch of both steps below).
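
The sketch below shows roughly what those two steps look like. The image and container names are purely illustrative, not the real work build, and the database connection details are omitted.

# Rebuild the ORDS image with the new ORDS and SQLcl software (names are illustrative).
docker build -t ords:19.4 .

# Replace an existing container with one based on the new image.
# The first startup against each database runs the ORDS upgrade (connection details omitted).
docker rm -f ords-dev1
docker run -dit --name ords-dev1 -p 8080:8080 ords:19.4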

How long did it take to deploy this to all Dev and Test instances?

The build of the new Docker image took about 5 minutes. It’s mostly just unzipping the software. This can be done before we touch any running containers, so there is no downtime associated with this.

The removal and creation of all the containers took about 5 minutes as well. Each container is created in a second, but the first run with a new version of ORDS has to do the ORDS upgrade in the database, which sometimes takes a few minutes. If there were no ORDS upgrade, the containers would start really quickly.

So effectively, in 5 minutes we replaced all the “kit” and ran the ORDS upgrade across everything. I could have done production in that same 5-minute span too, but I’m not allowed to yet. 🙂

Why am I talking about this?

It’s just another example of why containers make more sense than conventional app servers for this type of stuff.

To throw away kit and rebuild it from scratch takes an eternity here. I can do the equivalent with containers in seconds.

Once I’ve tested a new image and proved it works, I can roll that same image out across everything with no worries. If it works against one database, it will work against all the others. That’s the great thing about standardising the approach you take!

And another thing!

I’ve enabled SQL Developer Web on every Dev/Test installation too. Now all I’ve got to do is wait for the right opportunity to use it to save the day when someone is waiting for a firewall change, and act all casual like it’s no big thing! 🙂

So in summary

Containers good! ORDS good!

If you are interested in playing with Docker, you can find more information here.

If you want to learn about ORDS, you can find more information here.

Cheers

Tim…

APEX 19.2 : Vagrant and Docker Builds

I’m sure anyone who cares knows that APEX 19.2 was officially released on Friday. I did an upgrade of one of our development instances straight away and it worked fine. It’s subsequently gone to a bunch of other development instances. I’ll be pushing to get this out to production as quickly as possible.

Over the weekend I worked through a bunch of my GitHub stuff.

Vagrant : I’ve updated all my Vagrant builds to use APEX 19.2 and the latest versions of Tomcat 9 and OpenJDK 11. I was using newer versions of OpenJDK, but it seemed a bit silly, so I switched back to the long-term support release. I tried updating the base box to ‘bento/oracle-7.7’, but it kept giving me timeouts, so I’ve reverted to ‘bento/oracle-7.6’ for the moment.

Docker : Same as above, I’ve updated all my Docker builds to use APEX 19.2 and the latest versions of Tomcat 9 and OpenJDK 11. I noticed oraclelinux:8-slim was behaving a little strangely. I thought it was a PATH issue, but I need to spend some time to understand what is happening. It seems you can’t run basic commands like dnf during the build phase. It’s probably something stupid I’m doing, but for now I’ve switched from oraclelinux:8-slim to oraclelinux:8. Just making that switch made everything work as expected.
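
If you fancy a quick sanity check of the difference, comparing what package managers the two base images actually ship is a reasonable start. One possible explanation is that the slim image only includes microdnf rather than the full dnf, but I haven’t confirmed that yet.

# Check which package managers each base image provides.
docker run --rm oraclelinux:8 bash -c 'command -v dnf microdnf'
docker run --rm oraclelinux:8-slim bash -c 'command -v dnf microdnf'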

My Docker builds within the company have gone through a similar process, so as I’m rolling out APEX 19.2 to the databases, I’m also switching the ORDS containers over to the new images. You gotta love containers!

I guess I’ll be working through all this again when the next versions of ORDS and SQLcl drop. 🙂

Cheers

Tim…

Video : Docker Compose – Defining Multi-Container Applications

In today’s video we’ll take a look at Docker Compose, which allows you to define multi-container applications. In this example we are using the Oracle REST Data Services and Oracle Database 19c images we built on Oracle Linux 8 (oraclelinux:8-slim) in previous videos.
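
If you just want the gist without watching, the whole thing hangs off a single docker-compose.yml file, and a couple of commands manage the stack. The service and image names below are placeholders rather than the exact ones used in the videos.

# A minimal docker-compose.yml wiring a database service and an ORDS service together
# (image names are placeholders for the images built in the previous videos).
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  db:
    image: ol8_19_db
    ports:
      - "1521:1521"
  ords:
    image: ol8_ords
    ports:
      - "8080:8080"
    depends_on:
      - db
EOF

# Start, check and stop the whole multi-container application.
docker-compose up -d
docker-compose ps
docker-compose down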

For those who prefer to read, this is based on the following information.

The star of today’s video is Murali Vallath, who looks incredibly suspicious of my motivation for videoing him. 🙂

Cheers

Tim…

Vagrant Build of AWX on Oracle Linux 7 Using Docker-Compose Method

I may need to do a bunch of scripting related to our load balancers, and I have the choice of using the load balancer API directly, Ansible Core, or the web services exposed by AWX. I wanted to play around with AWX anyway, so that seemed like a good excuse…

The first step was to install AWX. It’s pretty easy, but I must admit to spending a few minutes in a state of confusion until I rebooted my brain and started again. Turning things off and on always works. I’m an Oracle Linux person and “I do Docker”, so the obvious choice was to install it using the Docker-Compose method on Oracle Linux 7 (OL7).

The post includes the basic Docker setup, but if you need something a little more, check out the installation article and video.
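
Very roughly, the Docker-Compose method boils down to cloning the AWX repo and letting the installer playbook do the heavy lifting, something like the sketch below. Treat it as an outline rather than the definitive steps, and check the article and the AWX documentation for the real detail and versions.

# Installer prerequisites (outline only): Ansible, the Python Docker SDK and docker-compose,
# assuming Python 3 and pip are already available.
pip3 install ansible docker docker-compose

# Grab the AWX source and run the installer playbook against the local inventory.
git clone https://github.com/ansible/awx.git
cd awx/installer
ansible-playbook -i inventory install.yml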

If you don’t care about the build and just need AWX up quickly, you can use this Vagrant build that does everything for you, including Docker and AWX on Oracle Linux. 🙂

Cheers

Tim…

Docker Birmingham – September

Yesterday evening I went to my first Docker Birmingham meetup, sponsored by Black Cat Technology Solutions.

I was so tired before the event I was really nervous I would fall asleep halfway through a presentation and start snoring. 🙂 When I got there I was greeted by an array of pizzas. I wanted to eat them so badly, but then I would definitely sleep, so I resisted. 🙂 I spent a bit of time chatting to one of the hosts, Shaun McLernon, before the sessions started.

The agenda had a last-minute change, as one of the speakers was ill, so the first presentation was a lighthearted one by Alistair Hey called “CV Driven Development – Why it’s ok not to be ‘cool’”. He spoke about the things that trigger alarm bells when he’s looking at CVs, and used that as a segue into comparing what’s cool with what just works. A specific case was a comparison between Kubernetes and AWS ECS, weighing up the pros and cons of each. The take-home message was to use the correct tool for the job, where the “correct tool” choice will be influenced by your requirements, skills and what works for your organisation.

Being short of a speaker, a couple of folks stepped up to talk about their projects in a lightning talk style. First up was Marcus Oaten with a talk about an environment built on Docker for testing new architectures for a Drupal application. Essentially, he used Docker to model all the services and layers, so new approaches could be tried out before having to commit to a specific architectural change.

Next up was Dan Webb, speaking about the evolution of the builds used for a PHP environment he was working on, moving from large-ish multi-purpose containers to smaller single-purpose containers with separation of duties and multi-stage builds.

I think the lightning talks worked really well. They triggered a lot of discussion, with people throwing out ideas.

The meetup was really useful. I like the “this is what we are doing” stuff, as it feels a lot more real, and shows the thought process and progression. I’m not sure about the experience level of the other folks, but I’m a Docker newbie, so this sort of thing is more important to me than hearing all about the super-cool stuff I will probably never use. I like hearing that as well, but this stuff is more relevant to me at this stage.

I definitely plan to go again. Thanks to the folks at Black Cat Technology Solutions for sponsoring and organising the event, and to the speakers for stepping up to the plate.

Cheers

Tim…

Video : Docker : Oracle REST Data Services (ORDS) Build

In today’s video we’ll take a look at a simple Docker build for Oracle REST Data Services (ORDS). In this example we’re using Tomcat on Oracle Linux 8 (oraclelinux:8-slim), which is connecting to an Oracle 19c database.

This video is based on the following articles and links.

The star of today’s video is Colm Divilly, of ORDS fame. 🙂

Cheers

Tim…

Video : Docker : Oracle Database Build

Today’s video is a look at a simple Docker build for an Oracle database. In this example we are using Oracle database 19c on Oracle Linux 8 (oraclelinux:8-slim).

You can get an overview of this build in the following article.

You can see my other Docker posts and builds here.

The star of today’s video is “The Why Guy” Jim Czuprynski. 🙂

Cheers

Tim…

Video : Install Docker on Oracle Linux 7 (OL7)

Today’s video is a run through installing the Docker engine on Oracle Linux 7 (OL7).

You can get the commands mentioned in this video from the following article.
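
For those who just want the short version, the install boils down to a handful of commands on OL7, roughly as follows. See the article for the full run-through and the gotchas.

# Enable the add-ons repository, which holds the Docker engine packages on OL7.
sudo yum install -y yum-utils
sudo yum-config-manager --enable ol7_addons

# Install the engine, then enable and start the service.
sudo yum install -y docker-engine
sudo systemctl enable docker
sudo systemctl start docker

# Optionally allow a non-root user to run docker commands.
sudo usermod -aG docker "$USER"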

You can see my other Docker posts and builds here.

The star of today’s video is Robyn Sands, formerly of the Oracle Real World Performance Group, and now something to do with some fruit company… 🙂

Cheers

Tim…

Birmingham Digital & DevOps Meetup : August 2019

Yesterday evening I went along to the Birmingham Digital & DevOps Meetup for the first time. It followed the usual meetup format of quick intro, talk, break, talk then home.

First up was Elton Stoneman from Docker with “Just What Is A “Service Mesh”, And If I Get One Will It Make Everything OK?” The session started by describing the problems associated with communication between the building blocks of a system, and how a service mesh can alleviate some of them. It then moved on to some service mesh demos using Istio. These included examples of altering the routing of traffic to do canary testing and targeting specific groups etc.

Elton was really honest about the learning curve, issues and overhead associated with this sort of setup. One comment I really liked was when he showed a slide containing the following, saying that often people assume there is a progression from left to right.

Meaning people assume you learn Docker, then you need some form of orchestration so you learn Swarm. From there you naturally progress to Kubernetes and once you understand that, you will inevitably move on to a service mesh using something like Istio. Elton’s point was you don’t *have to* continue on this progression. You can step off at any point once you’ve achieved the functionality you need. I think this is a really important point and I can see it reflected in what I do with Docker. We’ve got some things that stop at just using Docker containers, with no orchestration at all. I work on a project that requires some orchestration, so we use Swarm, which is really easy to use. So far I’ve had no reason to go beyond Swarm, and even considering a service mesh is so far down the line for us. I’m not discounting the relevance of these for everyone, but they don’t make sense for me at this point.

It was a really good session and I learned a lot. You can check out Elton’s blog here.

After the break it was James Relph with “Container Security Fundamentals”. This started off with a basic introduction to containers, using that as an entry point to explain how containers can be problematic from a security perspective, and what you can do to reduce the impact. He covered a lot of stuff, some of which I already do, some I know about and some stuff that was new to me. This is not an exhaustive list.

  • Don’t automatically trust images from Docker Hub. Do your due diligence, even when they are from a reputable source.
  • Use your own image repository. He mentioned ECR amongst others. This can be used for your own images, but also base images from Docker Hub, which you have verified.
  • Don’t use “latest”, but use specific tagged versions. Latest gives you all the latest fixes, but all the latest bugs too. You should test and verify before you let images out into your infrastructure.
  • Multi-stage builds to reduce the size of containers and minimise the attack surface. Basically, copy out what you need and leave the crap behind (there’s a trivial sketch of this after the list).
  • Using sidecar containers to provide specific services, allowing your application images to remain more focused. The sidecar images can be maintained by feature experts to make sure they are as secure as possible.
  • Scanning images using Clair, amongst other things, to check for dodgy software. One of the audience mentioned Anchore.
  • Using microVMs like Firecracker to provide additional isolation, whilst retaining the ease of use of containers. I’ve not played with this, but I have tried Kata Containers, which seems to do pretty much the same.

There was a lot in there!

I was a bit nervous going into the event thinking it would all go over my head, and some of it probably did, but it was cool. I got to speak to a few people before the event, during the break and at the end. It seemed like there was quite a mix of people there, from beginners in these areas upward, so I didn’t feel out of place.

A few times I found myself thinking, that’s great, but what do I do about my 3rd party applications? I’ve written before (here) about how 3rd party apps screw everything up. 🙂

Thanks to Elton Stoneman and James Relph for taking the time to come and speak to us. Thanks to the folks from BrumDigitalDevOps for organising the event, and to Capgemini UK for sponsoring the event.

Cheers

Tim…