Vagrant Build of AWX on Oracle Linux 7 Using Docker-Compose Method

I may need to do a bunch of scripting related to our load balancers, and I have the choice of using the API from the servers directly, Ansible Core or the web services exposed by AWX. I wanted to play around with AWX anyway, so that seemed like a good excuse…

First step was to install AWX. It’s pretty easy, but I must admit to spending a few minutes in a state of confusion until I rebooted my brain and started again. Turning things off and on always works. I’m an Oracle Linux person and “I do Docker”, so the obvious choice was to install it using the Docker-Compose method on Oracle Linux 7 (OL7).
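For the record, the Docker-Compose method boils down to something like this. It’s a rough sketch based on my notes, so treat the details as assumptions and check the AWX install documentation for the inventory settings you need to change.

# Assumes Docker, docker-compose and Ansible are already installed and working.
git clone https://github.com/ansible/awx.git
cd awx/installer

# Edit the inventory file to suit your environment (admin_password,
# postgres_data_dir etc.), then run the install playbook.
ansible-playbook -i inventory install.yml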

The post includes the basic Docker setup, but if you need something a little more, check out the installation article and video.

If you don’t care about the build and just need AWX up quickly, you can use this Vagrant build that does everything for you, including Docker and AWX on Oracle Linux. 🙂

Cheers

Tim…

Video : Install Docker on Oracle Linux 7 (OL7)

Today’s video is a run through installing the Docker engine on Oracle Linux 7 (OL7).

You can get the commands mentioned in this video from the following article.
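If you just want the short version, it’s along these lines. This is a sketch from memory, so check the article for the full story.

# On OL7 the Docker engine lives in the ol7_addons repository.
yum install -y yum-utils
yum-config-manager --enable ol7_addons
yum install -y docker-engine

# Start Docker and make sure it starts on reboot.
systemctl enable docker
systemctl start docker

# Optionally, allow a regular user to run Docker commands
# (substitute your own user name here).
usermod -aG docker your_user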

You can see my other Docker posts and builds here.

The star of today’s video is Robyn Sands, formerly of the Oracle Real World Performance Group, and now something to do with some fruit company… 🙂

Cheers

Tim…

Birmingham Digital & DevOps Meetup : August 2019

Yesterday evening I went along to the Birmingham Digital & DevOps Meetup for the first time. It followed the usual meetup format of quick intro, talk, break, talk then home.

First up was Elton Stoneman from Docker with “Just What Is A ‘Service Mesh’, And If I Get One Will It Make Everything OK?” The session started by describing the problems associated with communication between the building blocks of a system, and how a service mesh can alleviate some of them. It then moved on to some service mesh demos using Istio. These included examples of altering the routing of traffic to do canary testing and targeting specific groups etc.

Elton was really honest about the learning curve, issues and overhead associated with this sort of setup. One comment I really liked was when he showed a slide containing the following progression, saying that often people assume there is a progression from left to right.

Docker -> Swarm -> Kubernetes -> Istio

Meaning people assume you learn Docker, then you need some form of orchestration, so you learn Swarm. From there you naturally progress to Kubernetes, and once you understand that, you will inevitably move on to a service mesh using something like Istio. Elton’s point was you don’t *have to* continue along this progression. You can step off at any point once you’ve achieved the functionality you need. I think this is a really important point, and I can see it reflected in what I do with Docker. We’ve got some things that stop at just using Docker containers, with no orchestration at all. I work on a project that requires some orchestration, so we use Swarm, which is really easy to use. So far I’ve had no reason to go beyond Swarm, and a service mesh is a long way down the line for us. I’m not discounting the relevance of these tools for other people, but they don’t make sense for me at this point.
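Part of the reason Swarm is such an easy step is it’s built into Docker. Something like this gets you a three-replica service (the service name and image are hypothetical).

docker swarm init
docker service create --name website --replicas 3 -p 80:80 nginx
docker service ls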

It was a really good session and I learned a lot. You can check out Elton’s blog here.

After the break it was James Relph with “Container Security Fundamentals”. This started off with a basic introduction to containers, using that as an entry point to explain how containers can be problematic from a security perspective, and what you can do to reduce the impact. He covered a lot of stuff, some of which I already do, some I know about, and some that was new to me. This is not an exhaustive list.

  • Don’t automatically trust images from Docker hub. Do your due diligence, even when they are from a reputable source.
  • Use your own image repository. He mentioned ECR amongst others. This can be used for your own images, but also base images from Docker Hub, which you have verified.
  • Don’t use “latest”, but use specific tagged versions. Latest gives you all the latest fixes, but all the latest bugs too. You should test and verify before you let images out into your infrastructure.
  • Multi-stage builds to reduce the size of containers and minimise the attack surface. Basically, copy out what you need and leave the crap behind. There’s a sketch of this after the list.
  • Using sidecar containers to provide specific services, allowing your application images to remain more focused. The sidecar images can be maintained by feature experts to make sure they are as secure as possible.
  • Scanning images using Clair, amongst other things, to check for dodgy software. One of the audience mentioned Anchore.
  • Using microVMs like Firecracker to provide additional isolation, whilst retaining the ease of use of containers. I’ve not played with this, but I have tried Kata Containers, which seems to do pretty much the same.
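
To illustrate the multi-stage build point, here’s a minimal Dockerfile sketch. The Go app and image tags are just assumptions to show the shape of it, not something from the talk.

# Stage 1: build the binary using an image containing the full toolchain.
FROM golang:1.12 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Stage 2: copy just the binary into a slim runtime image, leaving the
# toolchain and source behind.
FROM oraclelinux:8-slim
COPY --from=builder /app /usr/local/bin/app
CMD ["app"]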

There was a lot in there!

I was a bit nervous going into the event, thinking it would all go over my head, and some of it probably did, but it was cool. I got to speak to a few people before the event, during the break and at the end. There seemed to be quite a mix of people there, from beginners in these areas upward, so I didn’t feel out of place.

A few times I found myself thinking, “That’s great, but what do I do about my 3rd party applications?” I’ve written before (here) about how 3rd party apps screw everything up. 🙂

Thanks to Elton Stoneman and James Relph for taking the time to come and speak to us. Thanks to the folks from BrumDigitalDevOps for organising the event, and to Capgemini UK for sponsoring the event.

Cheers

Tim…

Video : Vagrant : Oracle Database Build (19c on OL8)

Today’s video is an example of using Vagrant to perform an Oracle database build.

In this example I was using Oracle 19c on Oracle Linux 8. It also installs APEX 19.1, ORDS 19.2 and SQLcl 19.2, with ORDS running on Tomcat 9 and OpenJDK 12.

If you’re new to Vagrant, there is an introduction video here. There’s also an article if you prefer to read that.

If you want to play around with some of my other Vagrant builds, you can find them here.
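If you’ve not used one of my Vagrant builds before, the flow is roughly this. The repository layout is from memory, so treat the paths as a sketch rather than gospel.

git clone https://github.com/oraclebase/vagrant.git
cd vagrant/database/ol8_19

# Download the database and APEX software into the location described
# in the build's README, then bring up the VM.
vagrant up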

If you want to read about some of the individual pieces that make up this build, you can find them here.

The star of today’s video is Noel Portugal. It’s been far too long since I’ve seen you dude!

Cheers

Tim…

I’m 2% DevOps, 3% agile and 4% automated because of 3rd party apps…

I was having a discussion with my boss about the impact of 3rd party apps on the way we work, and how difficult things are when you have to deal with 3rd party apps, as opposed to just writing your own software.

It’s easier to do things well when you are in control of all the pieces. Most of the examples you see are people writing their own software, typically on new projects. That’s very different to dealing with old projects and 3rd party apps. I’ll give you some examples, without trashing the companies responsible for this.

Example 1

Our student system is provided by a 3rd party. The company in question has a really antiquated way of delivering applications. In recent years they’ve tried to resolve this by writing their own delivery mechanism, made up of some custom software and Jenkins. The problem is, this is just a wrapper over the old process, so it is not the most reliable tool in the world. Someone like me would describe it as putting lipstick on a pig.

In addition to that, you have to use a GUI to perform the operations. At this point there is no API to allow you to script operations, which makes building them into a bigger process really problematic. We have internal development which is gradually moving to something resembling CI/CD, but it will never truly meet that goal, because we have to include manual steps to work around the limitations of the 3rd party software.

I’m sure long-term customers see the new delivery mechanism as a great improvement, but it’s not something you would deliver for a new product. It’s less painful than it was, but not really good.

Example 2

We have a publishing system that is written in Java and runs on Tomcat. It is so close to being hands-off, but there are a couple of problems.

  • When you deploy a new version, it starts in maintenance mode and you need manual interaction to click an OK button a few times on a web-based maintenance screen. I’ve never *not clicked* the OK button, so I just want a “just do it” option that lets it get on with it.
  • When some features are enabled by the power users, the next restart of the application flips you into maintenance mode. We’ve had P1 incidents where a host failure caused a VM to restart on a new host, and because a user had enabled a new feature in the app, the automatic startup stalled in maintenance mode, waiting for me to click the OK button a few times.

There are some other annoyances that impact availability and possible topology as well. There is no way to resolve these because of limitations in the application. All we can do is raise enhancement requests with the vendor.

I could go on with more examples, but I think you get the message.

So what do you do?

It can be quite disheartening when you want to do things well, but you have to keep compromising because of factors outside your control. You have to try not to give up, and just keep plugging away.

  • Don’t make unrealistic comparisons between your environment and others. There’s no point comparing your mixed environment to a software house. I’ve worked in both. They are very different. Take what works. Ditch what doesn’t.
  • Semi-automated processes are better than processes that are 100% manual. Maybe you can use Robotic Process Automation (RPA) to automate what is essentially a manual process.
  • Try to make sure these considerations become part of your procurement process, or you will keep buying crap.
  • Try to be creative and find workarounds, don’t just bury your head in the sand. There’s always *something* you can do to improve things.
  • Even if something is terrible, that doesn’t stop you improving the processes around it.

I guess you should focus on the values, rather than trying to exactly match some prescriptive ideal.

Good luck!

Cheers

Tim…

PS. I’m pretty sure my boss is reading this laughing, as I’m following none of this advice myself, but instead stomping round the place like a thirteen-year-old having a strop because, “Everything is crap!” 🙂

ORDS, SQLcl, SQL Developer 19.2 (Vagrant and Docker Builds)

The folks at Oracle dropped some new presents for us today, including version 19.2 of the following.

  • ORDS
  • SQLcl
  • SQL Developer

I’ve updated my Vagrant builds and ORDS Docker builds with the new versions and everything seems to be working fine so far.

Tomorrow I’ll probably try out some of our development ORDS containers with these releases and see how they work out. They are similar to this build, so I’m sure they will be fine…

Cheers

Tim…

Update: I rolled ORDS 19.2 out to all our Dev/Test environments this morning. We run them all on Docker, so it was really quick and easy. 🙂

Docker : New Builds Using Oracle Linux 8 (oraclelinux:8-slim)

Yesterday I noticed the oraclelinux section on Docker Hub included “oraclelinux:8-slim”, so when I got home I did a quick run through some builds using it.

  • ol8_ords : This build is based on “oraclelinux:8-slim” and includes OpenJDK 12, Tomcat 9, ORDS 19, SQLcl 19 and the APEX 19 images.
  • ol8_19 : This build is based on “oraclelinux:8-slim” and includes the 19c database and APEX 19.
  • ol8_183 : This build is based on “oraclelinux:8-slim” and includes the 18c database and APEX 19.

There are also some new compose files, so I could test database and ORDS containers working together.
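To give you an idea, the shape of the compose files is something like this. The service names, image names and ports are assumptions for illustration, not the exact contents of the real files.

version: "3"
services:
  ol8_19:
    image: ol8_19
    ports:
      - "1521:1521"
  ol8_ords:
    image: ol8_ords
    ports:
      - "8080:8080"
    depends_on:
      - ol8_19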

Everything worked fine, but here come the inevitable warnings and comments.

  • The Oracle database is not certified on Oracle Linux 8 yet, so the database builds are just for playing around, not a recommendation.
  • The database preinstall packages don’t exist yet, so I installed the main required packages with DNF (along the lines of the sketch after this list), but I didn’t do some of the additional manual setup I would normally do, so it’s not a perfect example of an installation. I assume the preinstall packages will eventually be released, and I will substitute them in.
  • The ORDS build is not subject to the same certification restrictions as the database, so as far as I know, I could consider using this, although the build I use for work differs a little from this and is still using Oracle JDK 8 and Tomcat 8.5.
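
For reference, the manual package installation was something like this. Don’t treat it as a verified prerequisite list; the real preinstall package will do the job properly when it arrives.

dnf install -y bc binutils elfutils-libelf elfutils-libelf-devel glibc glibc-devel \
  ksh libaio libaio-devel libgcc libnsl libstdc++ libstdc++-devel libxcrypt \
  libX11 libXau libXi libXrender libXtst make smartmontools sysstat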

If you are interested in playing around with Docker, you can find my articles on it here, and my public builds here.

Cheers

Tim…

Why Automation Matters : Reducing the Cost of Failure

Recently I watched a video called The Future of Faster Enterprises by AWS Enterprise Strategist, Miriam McLemore. I think it’s a really good video, even if you don’t care about AWS or cloud in general. There is a wider message there.

One of the points Miriam raised was “Reducing the cost of failure”, which sparked a conversation between myself and a colleague. When you’re trying to improve the way you work, you are going to have to try new things. Not all of those things are going to work out. The important point is you try them and see if they work for you. If they do, great. If they don’t, you throw them away and move on. Reducing the cost of failure is a really important part of encouraging the culture of experimentation needed for continuous improvement.

Recently I wrote a post called “You have to keep working just to stand still”. Now add to that the work required to move your company forward, and I think you’ll see why any barrier to progress is a problem.

So what factors affect the cost of failure? Here are a few.

  • Lack of automation. If humans are involved in providing infrastructure, it’s going to increase the time it takes to set things up (see lost time), and they will get disgruntled when you ask them to throw it away 2 hours after you’ve got it. You need to be able to build and burn kit rapidly to have any hope of experimenting. This is why the focus on the automation part of flow in DevOps is so important, for both business as usual and experimentation.
  • Bloated waterfall process. If your company expects a detailed plan of action before you so much as fart, you are going to fail. You have to be agile. I’m not using the term agile in the, “I’m too lazy to plan”, sense. I mean proper agile.
  • Time. Your company has to value progress and be willing to allocate time to it. You can’t rely on the fact Beryl and Bert go home every night and no-life their way through learning something new, so the business can reap the benefit of it for free. Yes, that happens, but companies that rely on it will fail.
  • Be accepting of failure. I’m not talking about being happy to be rubbish, or being accepting of failure in well-defined business as usual (BAU) work. I’m talking about being accepting of failure during experimentation. Not everything will work. Not everything will be the right solution for you or your company. You have to be willing to try and fail, or you will fall at the first hurdle.

Check out the rest of the series here.

Cheers

Tim…

My Vagrant Habit

I’ve posted a lot about automation and Vagrant over the last year. It’s got to the point where I now find it quite difficult/annoying to create a VM manually. I hadn’t really noticed this until a couple of days ago…

I wanted to try some stuff out with Fedora 30, which is currently in beta. I had a look and couldn’t find any Vagrant boxes for Fedora 30, so I downloaded the ISO image and started to do a manual creation of a VM. It wasn’t very long before I got really annoyed, because it felt so clumsy, and there were so many silly little things I had to do that Vagrant either does for me, or are really simple to configure with Vagrant. After a few minutes I threw my toys out of the pram and started to read up on creating a Vagrant base box. In all this time I had never created one for myself. Turns out it’s really simple.
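For the record, once the VM itself is prepared (vagrant user, insecure key, guest additions and so on), packaging it is just a couple of commands. The VM name here is hypothetical.

vagrant package --base Fedora30 --output fedora30.box
vagrant box add fedora30 fedora30.box
vagrant box list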

Once I had that in my Vagrant box list, I could quickly bang out a number of tests. Happy days…

So what did this teach me? It seems I’ve become totally and utterly intolerant of doing anything manually! 🙂

Cheers

Tim…

Why Automation Matters : Consistent Test Environments

I’ve already made this point in a previous post, but I thought it was worth mentioning in a little more detail.

One of the neat things about automation is it gives you the ability to quickly build/replace test environments, so you know you have a consistent starting point. This is especially important for automated testing (unit, integration etc.), but it also applies to your learning experience.

I’m currently learning about a bunch of Oracle 18c new features. Some of those features are limited to engineered systems and Oracle Database Cloud Services. Not to worry, there is a little hack that gets you round some of those restrictions for testing purposes.

-- Fake the presence of an engineered system. This is an unsupported
-- hidden parameter, so it's for testing purposes only.
alter system set "_exadata_feature_on"=true scope=spfile;
-- Restart for the change to take effect.
shutdown immediate;
startup;

In some cases I’m enabling extended data types. I’m also building additional test instances, and multiple test users, each requiring different levels of privileges.

So I finish learning about feature X and I move on to learning about feature Y. What am I bringing along with me for the ride? What problems will I run into, or not run into, as a result of the hacks I put in place for the previous test? I have no way of knowing.

This is where automation comes in really useful. You can quickly burn and build your test environment and start with a clean slate. This can also be really useful for checking your understanding, by rerunning your tests on clean kit. Did you really remember to write down everything you did? 🙂
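With a Vagrant build, the burn and build amounts to something like this, run from the build’s directory.

vagrant destroy -f
vagrant up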

It’s kind-of obvious I know, but it’s really surprising how often I’m rebuilding my testing kit these days. I’m literally talking about multiple times a day, just when I’m messing about with stuff. Earlier in the week someone asked me a question about RAC builds, and I did the following in a 3-hour period, while I was doing other stuff. 🙂

  • Oracle 18c RAC build on a Windows 10 host.
  • Oracle 12.2 RAC build on a Windows 10 host.
  • Oracle 18c RAC build on an Oracle Linux 7.6 host.
  • Oracle 18c RAC build on a macOS Mojave host.

There’s no way I could have contemplated that without automation.

When you are learning new stuff, the last thing you need to worry about is being thrown off target by crap left over from previous tests, so just start again with a clean slate!

Check out the rest of the series here.

Cheers

Tim…

PS. I know sometimes you can learn interesting stuff by making mistakes, like finding out that feature X and feature Y are incompatible, but I think you should approach those sorts of tests in a controlled and conscious manner. Learning the basics first is far more important in my opinion.