Video : Vagrant Oracle Real Application Clusters (RAC) Build

In today’s video we’ll discuss how to build a 2-node RAC setup using Vagrant.

This video is based on the OL8 19c RAC build, but the process is much the same for the OL7 19c RAC build. If you don’t have access to the patches from MOS, stick with the OL7 build, as it will work with the 19.3 base release. The GitHub repos are listed here.

If you need some more words to read, you can find descriptions of the builds here, as well as a beginner’s guide to Vagrant.

The video is a talk-through of using the build, not an explanation of each individual step in the build, as that would be a really long video. If you just want a RAC to play with, run the build and you’ll have one. If you want to learn about the steps involved in doing a RAC build, read the scripts that make up the build. Please don’t ask for a GUI step-through of a build, or I’ll be forced to ask you to read Why no GUI installations anymore? 🙂
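
If you’ve not tried one of these multi-node builds before, the basic flow is something like this. It’s just a sketch from memory, so the directory names and ordering may not match the repo exactly. The README in the repo is the thing to follow.

# Rough outline of running the build (names and order from memory, check the README).
git clone https://github.com/oraclebase/vagrant.git
cd vagrant/rac/ol8_19

# Drop the required Oracle software (and patches for the OL8 build) into the
# shared software directory, then bring up the boxes. The node that performs
# the GI and database installation comes up last.
cd dns;   vagrant up; cd ..
cd node2; vagrant up; cd ..
cd node1; vagrant up; cd ..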

The star of today’s video is Neil Chandler, who seems to be doing a bit of plumbing! 🙂

Cheers

Tim…

Oracle REST Data Services (ORDS) : Database APIs – First Steps

In my never-ending quest for automation, I finally got round to looking at the Oracle REST Data Services (ORDS) Database APIs.

These have been around for some time, but I was testing them for the first time using ORDS version 20.2, so I was basing my tests on that version of the documentation, and more importantly version 20 of the APIs.

There are several sets of APIs, and they don’t all have the same dependencies or authentication methods. It’s not that big a deal once you know what’s going on, but it confused the hell out of me for a while, and the documentation doesn’t give you much of a steer on some of this.

PDB Lifecycle Management

My first tests were of the PDB Lifecycle Management endpoints. I enable all the relevant features in my normal installation, but there was one big roadblock. I always install ORDS in the PDB, and this feature only works if ORDS is installed in the root container. This makes sense, as the management of PDBs is done at the root container level, but I prefer not to put anything in the root container if I can help it. I uninstalled and reinstalled ORDS so I could give it a go. This resulted in this article.

The PDB Lifecycle Management functionality seemed better suited to a self-contained article, as it is only available from a CDB installation, has its own authentication setup and only has a small number of endpoints. The available APIs are kind-of basic, but they could still be useful. It will be interesting to see if this expands to cover the full range of operations you might want for a PDB, which is now a pretty long list. I suspect not.
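
To give you a flavour, listing the PDBs in the CDB looks something like this. The port, user and password are just from my test kit, so treat it as a sketch rather than gospel.

# List the PDBs in the CDB. Assumes ORDS is installed in the root container,
# running in standalone mode on port 8443, and "dbapi_user" is a made-up
# user mapped to the "SQL Administrator" role.
curl -s -k -X GET -u dbapi_user:MyPassword1 \
     https://localhost:8443/ords/_/db-api/stable/database/pdbs/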

Most of the other stuff

Next up was “most of the other stuff”. There are too many endpoints to go into any level of detail in a single article, so I figured this article should focus on the setup needed to use most of the other endpoints.

There are two methods of authentication discussed. The first is the default administrator approach, which is good because it hides the database credentials from the user making the API calls. Instead they use application server credentials mapped to the “System Administrator” role. This is similar to the approach used by the PDB Lifecycle Management feature, except that uses the “SQL Administrator” role, and the ORDS properties are different.
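
Setting up that default administrator boils down to enabling the database APIs and creating a credentials file user with the right role. It’s something along these lines, but the property name and commands are from memory, so check the documentation for your ORDS version.

# Enable the database APIs, then create a user with the "System Administrator"
# role in the ORDS credentials file. You are prompted for the password.
java -jar ords.war set-property database.api.enabled true
java -jar ords.war user dbapi_admin "System Administrator"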

The other approach is to use an ORDS enabled schema. This will be very familiar to people already using ORDS, but it comes with one big disadvantage compared to the previous method. For this functionality you have to expose the database credentials of the ORDS enabled schema to the person calling the API. Normally we would not expose these, instead using another form of authentication (Basic, OAUTH2 etc.) to allow the user to gain access. Even then, the ORDS enabled schema would normally be a weak user that only has access to the specific objects we want it to interact with, but in this case it’s a DBA user, so it makes me nervous. Using the default administrator method the caller is constrained to some extent by the APIs, but with the database credentials they have everything, if they also have direct access to the database server. It’s probably insignificant when you consider the amount of damage someone could do with the APIs alone, but I feel myself wincing a little when putting DBA credentials into an HTTPS call.
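
For comparison, a call using an ORDS enabled schema looks something like this. The schema name and password are made up, and the path pattern is how I remember it, so check the setup article for the real thing.

# Call a database API endpoint through an ORDS enabled schema. The schema's
# own database credentials (here a made-up DBA user) go straight into the call.
curl -s -k -X GET -u dba_schema:MyPassword1 \
     https://localhost:8443/ords/dba_schema/_/db-api/stable/{endpoint}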

As a DBA/developer, I see myself as the person using these APIs to develop something, whether that’s an automation or an application. If this work were to be handed over to a developer, these security questions might be a much bigger issue.

Having read that, you are probably thinking, just use the default administrator method then. I would, but some APIs don’t work with that method. Some only seem to work with the ORDS enabled schema method of authentication, while others only work with the default administrator method. What’s more, I don’t see any reference to this in the documentation. The API doc doesn’t even mention the default administrator approach, and the setup doc doesn’t mention the limitations of any of the approaches except PDB Lifecycle Management. As a result, I think you will need to use a mix of the authentication methods if you plan to use a variety of the functionality.

The good thing is they can all live side by side. At one point I was testing a CDB installation of ORDS with credentials for PDB Lifecycle Management, default administrator and ORDS enabled schema authentication all configured at the same time. No problem. It’s just confusing when endpoints fail and you have to “trial and error” your way through them. It would be nice if there were a grid showing which groups of endpoints need which type of authentication.

Now I am a noob, so maybe I’ve missed the point here, but I spent a long time trying out variations, and this seems like the way it is. If someone can educate me about why I am wrong I will willingly amend the articles, and this blog post. 🙂

Thoughts and what next?

At this point I’ve just been finding my feet, and I’m not sure what I will do next. There are some endpoints that interest me, so I might do separate articles on those, and refer back to the setup in the above articles. Then again, it may feel like just regurgitating the API documentation, so I may not. It’s worth taking a look at the available endpoints, broken down into these main sections.

  • Clusterware CLIs
  • Data Dictionary
  • Environment
  • Fleet Patching and Provisioning
  • General
  • Monitoring
  • Performance
  • Pluggable Database Lifecycle Management

Some will require additional setup, but many will not.

From the look of it, the vast majority of the endpoints are for reporting purposes. There are far fewer that actually allow you to manipulate the contents of the database. You can always write your own services for that, or use REST Enabled SQL to do it, I guess. The question will be, can I get enough value out of these APIs as they stand to warrant the investment in time? I’m not sure at this point.
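
As an aside, if you do want to push changes in, rather than just read stuff out, REST Enabled SQL is about as simple as it gets. Here’s a minimal sketch, assuming a made-up ORDS enabled schema with REST Enabled SQL switched on.

# Run an ad-hoc statement against an ORDS enabled schema ("testuser1" is made up)
# using REST Enabled SQL. The result comes back as JSON.
curl -s -k -X POST -u testuser1:MyPassword1 \
     -H "Content-Type: application/sql" \
     -d 'select count(*) from user_tables' \
     https://localhost:8443/ords/testuser1/_/sql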

Cheers

Tim…

PS. If you were watching my Twitter feed over the weekend and wondered what bit of tech I gave up on, it was this. I’m very stubborn though, so I came back…

Packer by HashiCorp : Second Steps?

In a previous post I mentioned my first steps with Packer by HashiCorp. This is a brief update to that post.

I’ve created a new box called “oracle-7” for Oracle Linux 7 + UEK. This will track the latest OL7 spin. You can find it on Vagrant Cloud here.
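
If you just want to kick the tyres of the box, something like this is enough. I’m assuming it’s published under my “oraclebase” account on Vagrant Cloud, like the OL8 box.

# Spin up a throwaway VM from the new box.
mkdir ol7-test && cd ol7-test
vagrant init oraclebase/oracle-7
vagrant up
vagrant ssh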

I’ve altered all my OL7 Vagrant builds to use this box now.

You will see a new sub-directory called “ol7” under the “packer” directory. This contains the Packer build for this new image.

Cheers

Tim…

Packer by HashiCorp : First Steps

A few days ago I wrote about some Vagrant Box Drama I was having. Martin Bach replied saying I should build my own Vagrant boxes. I’ve built Vagrant boxes manually before, as shown here.

The manual process is just boring, so I’ve tended to use other people’s Vagrant boxes, like “bento/oracle-8”, but then you are at the mercy of what they decide to include/exclude in their box. Martin replied again saying,

“Actually I thought the same until I finally managed to get around automating the whole lots with Packer and Ansible. Works like a dream now and with minimum effort”

Martin Bach

So that kind-of shamed me into taking a look at Packer. 🙂

I’d seen Packer before, but had not really spent any time playing with it, because I didn’t plan on being in the business of maintaining Vagrant box images. Recent events made me revisit that decision a little.

So over the weekend I spent some time playing with Packer. Packer can build all sorts of images, including Vagrant boxes (VirtualBox, VMware, Hyper-V etc.) and images for Cloud providers such as AWS, Azure and Oracle Cloud. I focused on trying to build a Vagrant box for Oracle Linux 8.2 + UEK, and only for a VirtualBox provider, as that’s what I needed.

The Packer docs are “functional”, but not that useful in my opinion. I got a lot more value from Google and digging around other people’s GitHub builds. As usual, you never find quite what you’re looking for, but there are pieces of interest, and ideas you can play with. I was kind-of hoping I could fork someone else’s repository and go from there, but it didn’t work out that way…

It was surprisingly easy to get something up and running. The biggest issue is time. You are doing a Kickstart installation for each test. Even for minimal installations that takes a while to complete, before you get to the point where you are testing your new “tweak”. If you can muscle your way through the boredom, you quickly get to something kind-of useful.

Eventually I got to something I was happy with and tested a bunch of my Vagrant builds against it, and it all seemed fine, so I then uploaded it to Vagrant Cloud.
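
The working loop itself is pretty simple, and boils down to something like this. The template and box names are just placeholders for whatever your build uses.

# Check the template, then do the build. "ol8.json" and the output box
# path are placeholders for whatever your build produces.
packer validate ol8.json
packer build ol8.json

# Add the resulting box locally and test some Vagrant builds against it
# before pushing anything to Vagrant Cloud.
vagrant box add --name oracle-8-test output/ol8.box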

I’ve already made some changes and uploaded a new version. 🙂

You will see a couple of older manually built boxes of mine under oraclebase. I’ll probably end up deleting those as they are possibly confusing, and definitely not maintained.

I’ve also altered all my OL8 Vagrant builds to use this box now.

You will also see a new sub-directory called “packer”. I think you can guess what’s in there. If I start to do more with this I may move it to its own repository, but for now this is fine.

I’m not really sure what else I will do with Packer from here. I will probably do an Oracle Linux 7 build, which will be very similar to what I already have. This first image is pretty large, as I’ve not paid much attention to reducing its size. I’ve looked at what some other builds do, and I’m not sure I agree with some of the stuff they remove. I’m sure I will alter my opinion on this over time.

I’m making no promises about these boxes, the same way I make no promises about any of my GitHub stuff. It’s stuff I’m playing around with, and I will mostly try to keep it up to date, but I’m not an expert and it’s not my job to maintain this. It’s just something that is useful for me, and if you like it, great. If not, there are lots of other places to look for inspiration. 🙂

Cheers

Tim…

Automating SQL and PL/SQL Deployments using Liquibase

You’ll have heard me banging on about automation, but one subject that’s been conspicuous by its absence is the automation of SQL and PL/SQL deployments…

I had heard of some products that might work for me, like Flyway and Liquibase, but couldn’t really make up my mind or find the time to start learning them. Next thing I knew, SQLcl got Liquibase built in, so I figured that was probably the decision made for me in terms of product. This also coincided with discussions about making a deployment pipeline for APEX applications, which kind-of focused me. It’s sometimes hard to find the time to learn something when there is not a pressing demand for it…

Despite thinking I would probably end up using the SQLcl implementation, I started playing with the regular Liquibase client first. Kind of like starting at the grass roots. If you are working in a mixed environment, you might prefer to use the regular client, as it will work with multiple engines.

Once I had found my feet with that, I essentially rewrote the article to use the SQLcl implementation of Liquibase. If you are focused on Oracle, I think this is better than using the standard client.
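
To give you an idea of the difference, here’s roughly what running an update looks like with each. The connection details and changelog name are made up, and the SQLcl flags have changed between versions, so check the “lb help” output for your version rather than trusting my memory.

# Regular Liquibase client : everything driven from the command line
# (or a liquibase.properties file). Connection details are made up.
liquibase --changeLogFile=master.changelog.xml \
          --url=jdbc:oracle:thin:@localhost:1521/pdb1 \
          --username=testuser1 --password=MyPassword1 \
          update

# SQLcl : connect as the schema, then use the built-in "lb" command.
# Exact flags vary between SQLcl versions, so check "lb help".
sql testuser1/MyPassword1@localhost:1521/pdb1
SQL> lb update -changelog master.changelog.xml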

Both these articles were written more than 3 months ago, but I was holding back on publishing them for a couple of reasons.

  1. I’m pretty new to this, and I realise some of the ways I’m suggesting to use them don’t fall in line with the way I guess many Liquibase users would want to use them. I’m not trying to make out I know better, but I do know what will suit me. I don’t like defining all objects as XML, and the formatted SQL changelogs don’t feel like a natural way to work. I want the developer to do their job in their normal way as much as possible. That means using DDL, DML and PL/SQL scripts.
  2. I thought there was a bug in one aspect of the SQLcl implementation, but thanks to Jeff Smith, I found out it was a problem between my keyboard and seat. 🙂

With a little cajoling from Jeff, I finally released them last night, then found a bunch of typos that quickly got corrected. Why are those never visible until you hit the publish button? 🙂

The biggest shock for most people will probably be that it’s not magic! I’m semi-joking there, but I figure a lot of people assume these products solve your problems, but they don’t. Both Flyway and Liquibase provide a tool set to help you, but ultimately you are going to need to modify the way you work. If you are doing random-ass stuff with no discipline, automation is never going to work for you, regardless of products. If you are prepared to work with some discipline, then tools like Liquibase can help you build the type of automated deployment pipelines you see all the time with other languages and tech stacks.

The ultimate goal is to be able to progress code through environments in a sensible way, making sure all environments are left in the same state, and allow someone to do that promotion of code without having to give them loads of passwords etc. You would probably want a commit in a branch of your source control to trigger this.

So looking back to the APEX deployments, we might think of something like this.

  • A developer finishes their work and exports the current application using APEXExport. It doesn’t have to be that tool, but humans have a way of screwing things up, so having a guaranteed export mechanism makes sense.
  • Code gets checked into your source control. This includes any DDL, DML, packages, and of course the APEX application script.
  • A new changelog is created for the piece of work, referencing any necessary scripts (DDL, DML, packages) as well as the APEX application script, in the correct order. That new changelog is then included in the master changelog, and both are committed to source control.
  • That commit of the changelog, or more likely a merge into a branch, triggers the deployment automation.
  • A build agent pulls down the latest source, which will probably include stuff from multiple repositories, then applies it with Liquibase, using the changelog to tell it what to do (see the sketch after this list).
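
As a simplified sketch, that last step might boil down to something like this. The repository, changelog and connection details are placeholders, and in reality the password would come from the CI tool’s secret store, not something you manage by hand.

#!/bin/bash
# Simplified build agent step. Repo, changelog and connection details are
# placeholders; credentials should come from the CI tool's secret store.
set -e

git clone https://github.com/your-org/your-db-repo.git
cd your-db-repo

liquibase --changeLogFile=master.changelog.xml \
          --url=jdbc:oracle:thin:@db-server:1521/pdb1 \
          --username=deploy_user \
          --password="${DEPLOY_PASSWORD}" \
          update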

That sounds pretty simple, but depending on your company and how you work, that might be kind-of hard.

  • The master changelog effectively serialises the application of changes to the database. That has to be managed carefully. If stuff is done out of order, or clashes with another developer, that has to be managed. It’s not always a simple process.
  • You will need something to react to commits and merges in source control. In my company we use TeamCity, and I’ve also used GitLab Pipelines to do this type of thing, but if you don’t have any background in these automation tools, then that part of the automation is going to be a learning curve.
  • We also have to consider how we handle actions from privileged accounts. Not all changes in the database are done using the same user.

Probably the biggest factor is the level of commitment you need as a team. It’s a culture change and everyone has to be on board with this. One person manually pushing their stuff into an environment can break all your hard work.

I’m toying with the idea of doing a series of posts to demonstrate such a pipeline, but it’s kind-of difficult to know how to pitch it without making it too specific, or too long and boring. 🙂

Cheers

Tim…

DevOps : Why do you focus on Flow and Automation?

A few weeks ago I did a DevOps “Lunch & Learn” talk inside my company. I’m not trying to claim I’m “Billy DevOps”, but I need our company to move in that direction or we will die. While I was preparing for that talk I did some Googling for the common complaints about DevOps talks and training courses, hoping to avoid them, and what I got back was a bunch of people complaining about the heavy focus on “The Principle of Flow”, specifically the automation piece of that.

Take a look at any conference agenda and the DevOps talks are mostly focused on automation, whether that’s builds (Ansible, Terraform, Vagrant, Cloud) or Continuous Integration/Deployment (CI/CD). Automation is certainly a part of DevOps, but DevOps isn’t just automation. So why do people focus on the automation aspect of DevOps so much?

I started off my talk by saying something like this,

“I Googled the common complaints about DevOps talks and training, and most people complained about too much focus on the Principle of Flow and automation. I’m going to do the same thing!”

So why did I come to this conclusion? Well, there are a few reasons.

The principle of feedback relies on you getting that feedback and doing something with it, like improving applications or processes etc. I realise I’m being simplistic, but how can you do anything with that feedback unless you have flow sorted? If it takes you weeks/months/years to effect basic changes because of bad flow, the feedback becomes almost irrelevant. It only serves to demotivate you, as you identify all the problems, with no way of fixing them.

The principle of continual learning and experimentation relies on flow being sorted. If you can’t quickly and reliably build kit and deploy apps to it, how do you expect to be able to experiment and learn new things? I discussed this point in a post called Why Automation Matters : Reducing the Cost of Failure.

It feels like the majority of people I speak to don’t have basic automation sorted yet. In public they talk a good talk, but behind the scenes the processes in their company suck just as bad as ours. Either that, or they have one aspect of automation sorted, which they talk about all the time, forgetting to mention the other manual processes that persist.

Let’s not forget that most of the people I see talking about DevOps come from a technical background, and I suspect are probably more interested in the automation aspects of DevOps than the process side of things. Also, in their current roles they have the ability to influence automation more than they do process change, so they are going to focus on a fight they have a chance of winning.

I think it’s important to emphasise to people that automation isn’t the be-all and end-all of DevOps, but that doesn’t stop it being the fun bit for me. 🙂

Check out the rest of the series here.

Cheers

Tim…

Video : Vagrant : Oracle Database Build (19c on OL8)

Today’s video is an example of using Vagrant to perform an Oracle database build.

In this example I was using Oracle 19c on Oracle Linux 8. It also installs APEX 19.1, ORDS 19.2, SQLcl 19.2, with ORDS running on Tomcat 9 and OpenJDK 12.

If you’re new to Vagrant, there is an introduction video here. There’s also an article if you prefer to read that.

If you want to play around with some of my other Vagrant builds, you can find them here.

If you want to read about some of the individual pieces that make up this build, you can find them here.

The star of today’s video is Noel Portugal. It’s been far too long since I’ve seen you, dude!

Cheers

Tim…

I’m 2% DevOps, 3% agile and 4% automated because of 3rd party apps…

I was having a discussion with my boss about the impact of 3rd party apps on the way we work, and how much harder things are when you have to deal with them, as opposed to just writing your own software.

It’s easier to do things well when you are in control of all the pieces. Most of the examples you see are people writing their own software, typically on new projects. That’s very different to dealing with old projects and 3rd party apps. I’ll give you some examples, without trashing the companies responsible for this.

Example 1

Our student system is provided by a 3rd party. The company in question has a really antiquated way of delivering applications. In recent years they’ve tried to resolve this by writing their own delivery mechanism, made up of some custom software and Jenkins. The problem is, this is just a wrapper over the old process, so it is not the most reliable tool in the world. Someone like me would describe it as putting lipstick on a pig.

In addition to that, you have to use a GUI to perform the operations. At this point there is no API to allow you to script operations, which makes building them into a bigger process really problematic. We have internal development which is gradually moving to something resembling CI/CD, but it will never truly meet that goal, because we have to manage some things manually due to the limitations of the 3rd party software.

I’m sure long-term customers see the new delivery mechanism as a great improvement, but it’s not something you would deliver for a new product. It’s less painful than it was, but not really good.

Example 2

We have a publishing system that is written in Java and runs on Tomcat. It is so close to being hands-off, but there are a couple of problems.

  • When you deploy a new version, it starts in maintenance mode and you need manual interaction to click an OK button a few times on a web-based maintenance screen. I’ve never “not clicked” the OK button, so I just want a “just do it” option, so I can let it get on with it.
  • When some features are enabled by the power users, the next restart of the application flips you into maintenance mode. We’ve had P1 incidents where a host failure caused the VM to start on a new host, and because a user had enabled a new feature in the app, the automatic startup stalled, waiting for me to click the OK button a few times.

There are some other annoyances as well, which impact availability and the possible topology. There is no way to resolve these because of limitations in the application. All we can do is raise enhancement requests with the vendor.

I could go on with more examples, but I think you get the message.

So what do you do?

It can be quite disheartening when you want to do things well, but you have to keep compromising because of factors outside your control. You have to try not to give up, and just keep plugging away.

  • Don’t make unrealistic comparisons between your environment and others. There’s no point comparing your mixed environment to a software house. I’ve worked in both. They are very different. Take what works. Ditch what doesn’t.
  • Semi-automated processes are better than processes that are 100% manual. Maybe you can use Robotic Process Automation (RPA) to automate what is essentially a manual process.
  • Try to make sure these considerations become part of your procurement process, or you will keep buying crap.
  • Try to be creative and find workarounds, don’t just bury your head in the sand. There’s always *something* you can do to improve things.
  • Even if something is terrible, that doesn’t stop you improving the processes around it.

I guess you should focus on the values, rather than trying to exactly match some prescriptive ideal.

Good luck!

I wrote a series of posts about automation here.

Cheers

Tim…

PS. I’m pretty sure my boss is reading this laughing, as I’m following none of this advice myself, but instead stomping round the place like a thirteen-year-old having a strop because, “Everything is crap!” 🙂

Why Automation Matters : Reducing the Cost of Failure

Recently I watched a video called The Future of Faster Enterprises by AWS Enterprise Strategist, Miriam McLemore. I think it’s a really good video, even if you don’t care about AWS or cloud in general. There is a wider message there.

One of the points Miriam raised was “Reducing the cost of failure”, which sparked a conversation between myself and a colleague. When you’re trying to improve the way you work, you are going to have to try new things. Not all of those things are going to work out. The important point is that you try them and see if they work for you. If they do, great. If they don’t, you throw them away and move on. Reducing the cost of failure is a really important part of encouraging the culture of experimentation needed for continuous improvement.

Recently I wrote a post called you have to keep working just to stand still. Now add to that the work required to move your company forward and I think you’ll see why any barrier to progress is a problem.

So what factors affect the cost of failure? Here are a few.

  • Lack of automation. If humans are involved in providing infrastructure, it’s going to increase the time it takes to set things up (see lost time), and they will get disgruntled when you ask them to throw it away 2 hours after you’ve got it. You need to be able to build and burn kit rapidly to have any hope of experimenting. This is why the focus on the automation part of flow in DevOps is so important, for both business as usual and experimentation.
  • Bloated waterfall process. If your company expects a detailed plan of action before you so much as fart, you are going to fail. You have to be agile. I’m not using the term agile in the, “I’m too lazy to plan”, sense. I mean proper agile.
  • Time. Your company has to value progress and be willing to allocate time to it. You can’t rely on the fact Beryl and Bert go home every night and no-life their way through learning something new, so the business can reap the benefit of it for free. Yes that happens, but companies that rely on it will fail.
  • Be accepting of failure. I’m not talking about being happy to be rubbish. I’m not talking about being accepting of failure in well defined business as usual (BAU) work. I’m talking about being accepting of failure during experimentation. Not everything will work. Not everything will be the right solution for you or your company. You have to be willing to try and fail or you will fall at the first hurdle.

Check out the rest of the series here.

Cheers

Tim…

My Vagrant Habit

I’ve posted a lot about automation and Vagrant over the last year. It’s got to the point where I find it quite difficult/annoying to create a VM manually these days. I hadn’t really noticed this until a couple of days ago…

I wanted to try some stuff out with Fedora 30, which is currently in beta. I had a look and couldn’t find any Vagrant boxes for Fedora 30, so I downloaded the ISO image and started to create a VM manually. It wasn’t very long before I got really annoyed, because it felt so clumsy, and there were so many silly little things I had to do that Vagrant either does for me, or makes really simple to configure. After a few minutes I threw my toys out of the pram and started to read up on creating a Vagrant base box. In all this time I had never created one for myself. Turns out it’s really simple.
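
For anyone who has never done it, packaging a manually built VirtualBox VM as a base box is something along these lines. The VM and box names are just what I happened to use.

# Package the manually built VirtualBox VM (mine was called "Fedora30")
# as a base box, then add it to the local box list.
vagrant package --base Fedora30 --output fedora30.box
vagrant box add fedora30 fedora30.box

# From then on it's business as usual.
vagrant init fedora30
vagrant up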

Once I had that in my Vagrant box list, I could quickly bang out a number of tests. Happy days…

So what did this teach me? It seems I’ve become totally and utterly intolerant of doing anything manually! 🙂

Cheers

Tim…