Video : Ansible Playbooks : Lists and Loops

In today’s video we demonstrate how to use lists and loops in Ansible Playbooks.

The video is based on the following article.

You might find some useful stuff here.

The star of today’s video is Øyvind Isene, who took a break during coffee time to let me film this clip.

Cheers

Tim…

Video : Ansible Playbooks : Introduction

In today’s video we’ll introduce Ansible Playbooks.

There’s quite a lot to cover, so this is only one of several videos covering playbooks. The video is based on the following article.

You might find some useful stuff here.

The star of today’s video is Bailey, who has been known to associate with Connor McDonald at times…

Cheers

Tim…

Video : Ansible : Installation and Configuration

In today’s video we’ll cover the installation and configuration of Ansible.

The video is based on the following article, and will be the first in a series of videos.

You might find some useful stuff here.

The star of today’s video is Neil Chandler, who took time out of his busy bathroom schedule to record this clip.

Cheers

Tim…

How are you provisioning your databases on-prem and in the cloud? Poll results discussed.

Following on from my previous post, I wanted to discuss the results of the polls regarding database provisioning.

This was the first question I asked.

How are you provisioning your databases on-prem and in the cloud?

A couple of years ago I stopped putting GUI installation articles on my website. They look pretty and seem to get a lot of views, but I thought posting them was wrong because I never use GUI installations. Posting them felt like I was sending the wrong message. I wrote about that here. This was one of the reasons I led with this question. I was pleased with the results of the poll.

  • Using GUI: I understand some people don’t want to take a step backwards to move forwards, but at nearly 20%, this number is still too high IMHO. You can’t check button clicks into version control. I’m sure some smart arse will tell me you can if you use Robotic Process Automation (RPA) to click them. 🙂
  • Shell scripts: If I’m honest I thought 34% was on the low side. I was expecting more people to be running silent installations from shell scripts, but the number is lower because of the next option. If people have made a big investment in writing robust shell scripts with good error handling, I can understand the reluctance to move away from them. Ansible and Terraform are nice, but they are not magic. 🙂
  • Ansible/Terraform/Other: This was actually the surprise of the bunch. I wasn’t expecting this number to be so high, but I was pleasantly surprised. The previous post showed lots of people running their databases in the cloud, which has no doubt helped to drive the uptake of automation tools like Ansible and Terraform. Happy days!

Spurred on by a question from Jasmin Fluri, I asked the following question to drill down a little more.

For people using Ansible and/or Terraform, how automated is your process?

This was also a pleasant surprise.

  • We run it manually: I was expecting this to be way ahead of the pack, but at nearly 38% I was wrong, which made me happy. I have no problem if people are running Ansible or Terraform manually. A pipeline is just a bunch of building blocks threaded together. The fact people have taken the time to build these blocks is great. Threading them together is nice, but it’s not the “be all and end all”. The important bit is the definitions of the systems are in code!
  • Automated pipeline: Over 33% made me happy. My assumption was this would be lower, but I was wrong, and I’m glad.
  • Terrahawks was a cartoon: The people who picked this were wrong! Terrahawks was a kids TV show using puppets, not animation. I’m really surprised nobody noticed this. The community let me down! 😉 If we discount this response from the mix, it makes the other two responses close to 50:50, which is cool.

On a bit of a tangent, I wanted to know how dominant Git was these days.

What Version Control System (VCS) are you using for your database scripts and code?

  • Git – On Prem: I knew Git would dominate, but I wasn’t sure if people would be hosting their repositories on-prem or in the cloud. With a response of over 30%, that means nearly half of the Git users were hosting their repositories on-prem, which was higher than I expected.
  • Git – Cloud-based: I expected this to dominate, so 37% was a little lower than I expected. Only a little over half of the Git users were using cloud-based repositories. We use cloud-based Git repositories, but we always keep a backup on-prem. Just in case.
  • Other VCS – Not Git: I expected this to have a reasonable showing as VCS software like Subversion used to be really popular, so I knew things would linger. Nearly 19% isn’t bad. I don’t think there is anything wrong with using something other than Git, but Git has become so pervasive in tooling it probably makes sense to take the plunge.
  • VCS is for wimps: I’m hoping nearly 13% of the respondents were picking this to wind me up, but I suspect they weren’t. If you are not currently using version control, please start!

Version control is at the heart of automation, DevOps, Infrastructure as Code (IaC) and all that funky business, so if people can just get that right they have taken the first step on the journey.

So overall this makes very pleasant reading. Lots of people are provisioning databases using some form of scripting, rather than GUIs, and a bunch of people are automating that provisioning. This is what I wanted to hear.

Cheers

Tim…

PS. You know the caveats. These are small sample sizes. My audience has an Oracle bias. I’m no expert at automation, DevOps and the cloud. Just a keen dabbler.

Fear of a Robot Planet

I’ve been on a retro-kick recently, and I’ve been listening to some Isaac Asimov books on Audible.

During the Robot series there is a constant theme of some people distrusting robots. Despite being brought up on a diet of films where robots turn bad, I’ve never really shared this feeling.

Anyway, I thought I would test my brother’s family by showing them this video by Boston Dynamics.

My nephews were intrigued, but not totally fazed by it. One said something along the lines of, “it moves like a man in a suit, but it’s clearly not”. My sister-in-law had a very negative reaction. She was wincing and saying it was horrible. It seems these robots are already “too human” for her tastes.

So in this regard, the predictions made by Isaac Asimov when he started to write about robots in the 1940s were spot on. It’s going to take a lot of adjustment for some humans to feel comfortable around humanoid robots.

I, for one, welcome our killer robot overlords! 🙂

Cheers

Tim…

PS. When I was thinking about the title of this post I had the song Kool Thing by Sonic Youth going through my head, because of Kim Gordon saying, “fear of a female planet”. I guess I could have been thinking about Fear of a Black Planet by Public Enemy.

Why Automation Matters : Dealing With Vulnerabilities

The recent Log4j issues have highlighted another win for automation, from a couple of different angles.

Which Servers Are Vulnerable?

There are a couple of ways to determine this. I guess the most obvious is to scan the servers and see which ones show up as vulnerable, but depending on your server real estate, this could take a long time.

An alternative is to manage your software centrally and track which servers have downloaded and installed vulnerable software. This was mentioned by a colleague in a meeting recently…

My team uses Artifactory as a central store for a lot of our base software, like:

  • Oracle Database and patches
  • WebLogic and patches
  • SQLcl
  • ORDS
  • Java
  • Tomcat

In addition the developers use Artifactory to store their build artifacts. Once the problem software is identified, you could use a tool like Artifactory to determine which servers contained vulnerable software. That would be kind-of handy…

This isn’t directly related to automation, as you could use a similar centralised software library for manual work, but if you are doing manual builds there’s more of a tendency to do one-off things that don’t follow the normal procedure, so you are more likely to get false negatives. If builds are automated, there is less chance you will “acquire” software from somewhere other than the central library.
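
To make that idea a bit more concrete, here’s a minimal Python sketch of cross-referencing an export of download records from a central software library against a list of known-vulnerable versions. The downloads.json file, its layout and the version list are all invented for the example, not a feature of any particular product.

  # Hypothetical sketch: find servers that pulled a vulnerable artifact from the
  # central software library. The downloads.json export, its layout and the list
  # of vulnerable versions are invented for this example.
  import json

  VULNERABLE = {"log4j-core": {"2.14.1", "2.15.0", "2.16.0"}}

  with open("downloads.json") as f:
      # Expected layout: [{"server": "app01", "artifact": "log4j-core", "version": "2.14.1"}, ...]
      downloads = json.load(f)

  at_risk = sorted({d["server"]
                    for d in downloads
                    if d["version"] in VULNERABLE.get(d["artifact"], set())})

  print("Servers to check:", ", ".join(at_risk) or "none")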

Fixing Vulnerable Software

If you use CI/CD, it’s a much simpler job to swap in a new version of a library or package, retest your software and deploy it. If your automated testing has good coverage, it may be as simple as a commit to your source control. The quick succession of Log4j releases we’ve seen recently would have very little impact on your teams.
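
As a toy illustration of why it can be “just a commit”, here’s a sketch of the sort of check an automated pipeline might run: fail the build if a pinned dependency is below a minimum safe version. The pins.txt file, its format and the version numbers are all made up for the example.

  # Toy CI guard: fail the build if a pinned dependency is older than the minimum
  # safe version. The pins.txt file, its format and the versions are just examples.
  MINIMUM_SAFE = {"log4j-core": (2, 17, 1)}

  def parse(version):
      return tuple(int(x) for x in version.split("."))

  with open("pins.txt") as f:
      # Expected lines like "log4j-core==2.17.1"
      pins = dict(line.strip().split("==") for line in f if "==" in line)

  for name, minimum in MINIMUM_SAFE.items():
      if name in pins and parse(pins[name]) < minimum:
          raise SystemExit(f"{name} {pins[name]} is below the minimum safe version")

  print("All pinned versions look OK")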

If you are working with containers, the deployment process would involve a build of a new image, then replacing all containers with new ones based on the new image. Pretty simple stuff.

If you are working in a more traditional virtual machine or physical setup, then having automated patching and deployments would give you similar benefits, even though it may feel more clunky…

Conclusion

Whichever way you play it, the adoption of automation is going to improve your reaction time when things like this happen again in the future, and make no mistake, they will happen again!

I wrote a series of posts about automation here.

Cheers

Tim…

Performance/Flow : Focusing on the Constraint

Over a decade ago Cary Millsap was doing talks at Oracle conferences called “Thinking Clearly About Performance”. One of the points he discussed was identifying the big bottlenecks and dealing with those, rather than focusing on the little things that weren’t causing a problem. For example, if a task is made up of two operations where one takes 60 seconds to complete and the other one takes 1 second to complete, which one will give the most benefit if optimized?
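
To put rough numbers on that, here’s a trivial sketch using the figures from the example above (the improvements are just ones I’ve picked to make the point).

  # The 60 second + 1 second task from the example above.
  slow_op, fast_op = 60, 1
  total = slow_op + fast_op                      # 61 seconds

  # Halving the slow operation saves 30 seconds...
  print(total, "->", slow_op / 2 + fast_op)      # 61 -> 31.0

  # ...while removing the fast operation entirely saves only 1 second.
  print(total, "->", slow_op + 0)                # 61 -> 60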

This is the same issue when we are looking at automation to improve flow in DevOps. There are a whole bunch of things we might consider automating, but it makes sense to try and fix the things that are causing the biggest problem first, as they will give the best return. In DevOps and Lean terms that means focusing on the constraint: the weakest link in the chain (see the Theory of Constraints).

Lost in Automation

The reason I mention this is I think it’s really easy to get lost during automation. We often focus on what we can automate, rather than what needs automating. With my DBA hat on it’s clear my focus will be on the automation of provisioning and patching databases and application servers, but how important is that for the company?

  • If the developers want to “build & burn” environments, including databases, for their CI/CD pipelines, then automation of database and application server provisioning is really important as it might happen multiple times a day for automated testing.
  • If the developers use a more traditional dev, test, prod approach to environments, then the speed of provisioning new systems may be a lot less important to the overall performance of the company.

In both cases the automation gives benefits, but in the first case the benefits are much greater. Even then, is this the constraint? Maybe the problem is it takes 14 days to get approval to run the automation? 🙂

It’s sometimes hard for techies to have a good idea of where they fit in the value chain. We are often so focused on what we do that we don’t have a clue about the bigger picture.

Before we launch into automation, we need to focus on where the big problems are. Deal with the constraints first. That might mean stopping what you’re doing and going to help another team…

Don’t Automate Bad Process

We need to streamline processes before automating them. It’s a really bad idea to automate bad processes, because they will become embedded for life. It’s hard enough to get rid of bad processes because the “it’s what we’ve always done” inertia is difficult to overcome. If we add automation around that bad process we will never get rid of it, because now people will complain we are breaking the automation if we alter the process.

Another thing Cary talked about was removing redundant steps. You can’t make something faster than not doing it in the first place. 🙂 It’s surprising how much useless crap becomes embedded in processes as they evolve over the years.

The process of continuous improvement involves all aspects of the business. We have to be willing to revise our processes to make sure they are optimal, and build our automation around those optimised processes.

I’m not advocating compulsive tuning disorder. We’ve got to be sensible about this stuff.

Know When to Cut Your Losses

The vast majority of examples of automation and DevOps are focussed on software delivery in software development focused companies. It can be very frustrating listening to people harp on about this stuff when you work in a mixed environment with a load of 3rd party applications that will never be automated because they can’t be. They can literally break every rule you have in place and you are still stuck with them because the business relies on them.

You have to know when to cut your losses and move on. There will be some systems that will remain manual and crappy for as long as they are in your company. You can still try and automate around them, and maybe end up in a semi-automated state, but don’t waste your time trying to get to 1000 deployments a day. 🙂

I wrote about this in a post called I’m 2% DevOps, 3% agile and 4% automated because of 3rd party apps.

Try to Understand the Bigger Picture

I think it makes sense for us all to try and get a better understanding of the bigger picture. It can be frustrating when you’ve put in a lot of work to automate something and nobody cares, because it wasn’t perceived as a problem in the big picture. I’m not suggesting we all have to be business analysts and system architects, but it’s important we know enough about the big picture so we can direct our efforts to get the maximum results.

I wrote a series of posts about automation here.

Cheers

Tim…

VirtualBox 6.1.20 & Vagrant 2.2.15

VirtualBox 6.1.20 has been released.

The downloads and changelog are in the usual locations.

While I was playing around with VirtualBox I noticed Vagrant 2.2.15 has been released. You can download it here.

I’ve installed both of those on Windows 10, macOS Big Sur and Oracle Linux 7 hosts. So far so good.

With the release of the Oracle patches I’ll be doing a lot of Vagrant and Docker builds in the coming days, so I should get to exercise this pretty well.

I’ll also do the Packer builds of my Vagrant boxes with the new versions of the guest additions. They take a while to upload, so they should appear on Vagrant Cloud in the next couple of days.

Happy upgrading!

Cheers

Tim…

Video : Vagrant Oracle Real Application Clusters (RAC) Build

In today’s video we’ll discuss how to build a 2-node RAC setup using Vagrant.

This video is based on the OL8 19c RAC build, but it’s similar to the OL7 19c RAC build too. If you don’t have access to the patches from MOS, stick with the OL7 build, as it will work with the 19.3 base release. The GitHub repos are listed here.

If you need some more words to read, you can find descriptions of the builds here, as well as a beginners guide to Vagrant.

The video is a talk-through of using the build, not an explanation of each individual step in the build, as that would be a really long video. If you just want a RAC to play with, run the build and you’ll have one. If you want to learn about the steps involved in doing a RAC build, read the scripts that make up the build. Please don’t ask for a GUI step-through of a build or I’ll be forced to ask you to read Why no GUI installations anymore? 🙂

The star of today’s video is Neil Chandler, who seems to be doing a bit of plumbing! 🙂

Cheers

Tim…

Oracle REST Data Services (ORDS) : Database APIs – First Steps

In my never-ending quest for automation, I finally got round to looking at the Oracle REST Data Services (ORDS) Database APIs.

These have been around for some time, but I was testing them for the first time using ORDS version 20.2, so I was basing my tests on that version of the documentation, and more importantly version 20 of the APIs.

There are several sets of APIs, and they don’t have the same dependencies or authentication methods. It’s not that big a deal once you know what’s going on, but it confused the hell out of me for a while, and the documentation doesn’t give you much of a steer for some of this.

PDB Lifecycle Management

My first tests were of the PDB Lifecycle Management endpoints. I enable all the relevant features in my normal installation, but there was one big road block. I always install ORDS in the PDB, and this feature only works if ORDS is installed in the root container. This makes sense as the management of PDBs is done at the root container level, but I prefer not to put anything in the root container if I can help it. I uninstalled and reinstalled ORDS so I could give it a go. This resulted in this article.

The PDB Lifecycle Management functionality seemed better suited to a self-contained article, as it is only available from a CDB installation, has its own authentication setup and only has a small number of endpoints. The available APIs are kind-of basic, but they could still be useful. It will be interesting to see if this expands to fit all the possible requirements for a PDB, which are now pretty large. I suspect not.
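
To give a flavour of what calling these endpoints looks like, here’s a minimal Python sketch. The host, port and credentials are placeholders, and the URL follows the database API path as I understand it from the documentation, so check it against your own ORDS version.

  # Minimal sketch: list PDBs via the PDB lifecycle endpoints of a CDB-level ORDS install.
  # Host, port and credentials are placeholders; confirm the path against the ORDS docs.
  import requests

  BASE = "https://ords.example.com:8443/ords/_/db-api/stable"
  AUTH = ("pdbadmin", "SecretPassword1")   # user mapped to the "SQL Administrator" role

  response = requests.get(f"{BASE}/database/pdbs/", auth=AUTH)
  response.raise_for_status()

  for pdb in response.json().get("items", []):
      print(pdb)                           # each item describes one PDB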

Most of the other stuff

Next up was “most of the other stuff”. There are too many endpoints to go into any level of detail in a single article, so I figured this should focus on the setup to use most of the other endpoints.

There are two methods of authentication discussed. The first is the default administrator approach, which is good because it hides the database credentials from the user making the API calls. Instead they use application server credentials mapped to the “System Administrator” role. This is similar to that used by the PDB Lifecycle Management feature, except that uses the “SQL Administrator” role, and the ORDS properties are different.

The other approach is to use an ORDS enabled schema. This will be very familiar to people already using ORDS, but it comes with one big disadvantage compared to the previous method. For this functionality you have to expose the database credentials of the ORDS enabled schema to the person calling the API. Normally we would not expose these, instead using another form of authentication (Basic, OAUTH2 etc.) to allow the user to gain access. Even then the ORDS enabled schema would be a weak user that only has access to the specific objects we want it to interact with, but in this case it’s a DBA user, so it makes me nervous. Using the default administrator method the caller is constrained to some extent by the APIs, but with the database credentials they have everything if they have direct access to the database server. It’s probably insignificant when you consider the amount of damage someone could do with the APIs alone, but I feel myself wincing a little when putting DBA credentials into an HTTPS call.
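
For comparison, a call using the ORDS enabled schema method might look something like the sketch below. The host, schema alias, endpoint and credentials are all placeholders, and notice the database credentials going straight into the request, which is exactly the exposure I’m describing.

  # Sketch of the ORDS enabled schema style of call. The schema alias, endpoint and
  # credentials are placeholders; pick a real endpoint from the API documentation.
  import requests

  BASE = "https://ords.example.com:8443/ords/hr/_/db-api/stable"   # "hr" = ORDS enabled schema alias
  AUTH = ("hr", "HrPassword1")                                     # the schema's database credentials

  response = requests.get(f"{BASE}/database/datafiles/", auth=AUTH)
  response.raise_for_status()
  print(response.json())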

For me as a DBA/Developer I would see myself as the person using these APIs to develop something, whether that was an automation, or an application. If this were to be handed over to a developer to do the work, these security questions may be a much bigger issue.

Having read that, you are probably thinking, just use the default administrator method then. I would, only some APIs don’t work with that method. Some seem to only work with the ORDS enabled schema method for authentication, while others only work with the default administrator method. What’s more, I don’t see any reference to this in the documentation. The API doc doesn’t even mention the default administrator approach, and the setup doc doesn’t mention the limitations on any of the approaches except the PDB lifecycle management. As a result, I think you will need to use a mix of the authentication methods if you plan to use a variety of functionality.

The good thing is they can all live side-by-side. At one point I was testing with a CDB installation of ORDS with credentials for PDB Lifecycle Management, default administrator and ORDS enabled schema authentication all configured at the same time. No problem. It’s just confusing when endpoints fail and you have to “trial and error” your way through them. It would be nice if there was a grid of which groups of endpoints need which type of authentication.

Now I am a noob, so maybe I’ve missed the point here, but I spent a long time trying out variations, and this seems like the way it is. If someone can educate me about why I am wrong I will willingly amend the articles, and this blog post. 🙂

Thoughts and what next?

At this point I’ve just been finding my feet, and I’m not sure what I will do next. There are some endpoints that interest me, so I might do separate articles on those, and refer back to the setup in the above articles. Then again, it may feel like just regurgitating the API documentation, so I may not. It’s worth taking a look at the available endpoints, broken down into these main sections.

  • Clusterware CLIs
  • Data Dictionary
  • Environment
  • Fleet Patching and Provisioning
  • General
  • Monitoring
  • Performance
  • Pluggable Database Lifecycle Management

Some will require additional setup, but many will not.

From the look of it, the vast majority of the endpoints are for reporting purposes. There are far fewer that actually allow you to manipulate the contents of the database. You can always write your own services for that, or use REST Enabled SQL to do it I guess. The question will be, can I get enough value out of these APIs as they stand to warrant the investment in time? I’m not sure at this point.

Cheers

Tim…

PS. If you were watching my twitter feed over the weekend and wondered what bit of tech I gave up on, it was this. I’m very stubborn though, so I came back…