Fear of a Robot Planet

I’ve been on a retro-kick recently, and I’ve been listening to some Isaac Asimov books on Audible.

Throughout the Robot series there is a constant theme of some people distrusting robots. Despite being brought up on a diet of films where robots turn bad, I’ve never really shared this feeling.

Anyway, I thought I would test my brother’s family by showing them this video by Boston Dynamics.

My nephews were intrigued, but not totally fazed by it. One said something to the effect of, it moves like a man in a suit, but it’s clearly not. My sister-in-law had a very negative reaction. She was wincing and saying it was horrible. It seems these robots are already “too human” for her tastes.

So in this regard, the predictions Isaac Asimov made when he started writing about robots in the 1940s were spot on. It’s going to take a lot of adjustment for some humans to feel comfortable around humanoid robots.

I, for one, welcome our killer robot overlords! 🙂

Cheers

Tim…

PS. When I was thinking about the title of this post I had the song Kool Thing by Sonic Youth going through my head, because of Kim Gordon saying, “fear of a female planet”. I guess I could have been thinking about Fear of a Black Planet by Public Enemy.

Why Automation Matters : It’s Not New and Scary!

It’s easy to think of automation as new and scary. Sorry for stating the obvious, but while automation may be new to you, or new to your company, plenty of people have been doing this stuff for a long time. I’m going to illustrate this with some stories from my past…

Automated Deployments

In 2003 I worked for a parcel delivery company that was replacing all its old systems with a Java application running against an Oracle back end. The build process was automated using Ant scripts, which were initiated by a tool called Ant Hill. Once developers committed their code to version control (I think we used CVS at the time) it was available to be included in the nightly builds, which were deployed automatically by Ant Hill. Now I’m not going to make out this was a full CI/CD pipeline implementation, but this was 19 years ago, and how many companies are still struggling to do automated builds now?

Automated Installations

Back at my first Oracle OpenWorld in 2006 I went to a session by Dell, who were able to deploy a 16-node Oracle RAC cluster just by plugging in the physical kit. They used PXE network installations, which included their own custom RPM that performed the Oracle RAC installation and configuration silently. The guy talking about the technical stuff was Werner Puschitz, who was a legend in the Oracle on Linux space back in the day. I wrote about this session here. This was 16 years ago and they were doing things that many companies still can’t do today.

I can’t remember when the Oracle Universal Installer (OUI) first allowed silent installations, but I’m pretty sure I used them for the first time in Oracle 9i, so that’s somewhere around 2001. I have an article about this functionality here. I think Oracle 9.2 in 2002 was the first time the Database Configuration Assistant (DBCA) allowed silent database creation, but before the DBCA we always used to create databases manually using scripts anyway, so silent database creation in one form or another has been possible for well over 20 years. You can read about DBCA silent mode here. Build scripts for Oracle are as old as the hills, so there is nothing new to say here. The funny thing is, back in the day Oracle was often criticised for not having enough GUI tools, and nowadays nobody wants GUI tools. 🙂
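To put that in concrete terms, a silent database creation with DBCA still looks something like this today. This is just a sketch; the exact parameters vary a little between Oracle versions, and all the names, paths and passwords here are placeholders.

```
# Silent (no GUI) database creation with DBCA. Parameter names vary slightly
# between Oracle versions, and all values here are placeholders.
dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbName testdb \
  -sid testdb \
  -sysPassword MySysPassword1 \
  -systemPassword MySystemPassword1 \
  -datafileDestination /u02/oradata \
  -storageType FS \
  -characterSet AL32UTF8 \
  -totalMemory 2048
```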

Sorry, but if you are building stuff manually with GUIs, it kind-of means you’re a noob. If consultants are building things manually for you, they are wasting your time and need to be called out on it. At a minimum you need build scripts, even if you can’t fully automate the whole process. A deliverable on any project should be the build scripts, not a 100-page Word document with screenshots.

Random – Off Topic

While writing this post I thought of a recent conversation with a friend. He was showing me videos of his automated warehouse. It had automated guided vehicles (AGVs) zipping around the warehouse picking up products to ship. It was all new and exciting to him. We were laughing because in 1996 I was renting a room in his house, and my job at the time was writing software for automated warehouses using Oracle on the back end. It wasn’t even a new thing 26 years ago. One of the projects I worked on was upgrading an existing automated warehouse that had already been in operation for about 10 years, with AGVs and automated cranes.

New is a matter of perception.

Final Thoughts

I’m not saying all this stuff in an attempt to make out I’m some kind of automation or DevOps thought leader. If you read my blog, you know all about me. I’m just trying to show that many of us have a long history in automation, even if we can’t check all the boxes for the latest buzzwords. Automation is not new and scary. It’s been part of the day-to-day job for a long time. In some cases we are using newer tools to tidy up things that were either already automated, or at least semi-automated. If someone is presenting this stuff like it’s some brave new world bullshit, they are really trying to pull the wool over your eyes. It should be an evolution of what you were already trying to do…

I wrote a series of posts about automation here.

Cheers

Tim…

Why Automation Matters : Why You Will Fail!

The biggest problem you are likely to encounter with any type of change is people!

People don’t want to change, even if they say they do. You would think an industry that is based on constant innovation would be filled with people who are desperate to move forward, but that’s not true. Most people like the steady state. They want to come to work today and do exactly what they did yesterday.

Automation itself is not that difficult. The difficult part is the culture change required. There is a reason why new startup companies can innovate so rapidly. They are staffed by a small number of highly motivated people, who are all excited by the thought of doing something new and different. The larger and more established a company becomes, the harder it is to innovate. There are too many people who are happy to make do. Too many layers of management who, despite what they say in meetings, ultimately don’t want the disruption caused by change. Too many people who want to be part of the process, but spend most of their time focussing on “why not” and (sometimes unknowingly) sabotaging things, rather than getting stuck in. Too many people who suck the life out of you.

It’s exhausting, and that’s one of the worst things about this. It’s easy to take someone who is highly motivated and grind them down to the point where there is no more fight left in them, and they become a new recruit to the stationary crowd.

I’ve been around long enough to know this is a repeating cycle. When I started working in tech I encountered people telling me why relational databases were rubbish. Why virtualization was rubbish. Why NoSQL was rubbish. More recently it’s why Agile is rubbish. Why containers are rubbish. Why cloud is rubbish. Why CI/CD is rubbish. Why DevOps is rubbish. The list goes on…

I’m not saying everything “new” is good and everything old is trash. I’m just saying you have to give things a proper go before you make these judgements. Decide what is the right tool for the job in question. Something might genuinely not be right for you, but that doesn’t mean it is crap for everyone. It also doesn’t mean it won’t be right for you on the next project. And be honest! If you don’t want to do something, say you don’t want to do it. Don’t position yourself as an advocate, then piss on everyone’s parade!

I’m convinced companies that don’t focus on automation will die. If you have people trying to move your company forward, please support them, or at least get out of their way. They don’t need another hurdle to jump over!

I wrote a series of posts about automation here.

Cheers

Tim…

Why Automation Matters : Dealing With Vulnerabilities

The recent Log4j issues have highlighted another win for automation, from a couple of different angles.

Which Servers Are Vulnerable?

There are a couple of ways to determine this. I guess the most obvious is to scan the servers and see which ones show up as vulnerable, but depending on the size of your server estate, this could take a long time.
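As a rough sketch of the scanning approach, assuming you can run commands across your servers with SSH or a configuration management tool, a first pass for Log4j might be as crude as this. The paths are placeholders, and jars nested inside other archives won’t show up this way, so treat “no hits” with a little suspicion.

```
# Crude first pass: look for Log4j core jars on the local file system.
find / -xdev -type f -name "log4j-core-*.jar" 2>/dev/null

# Check the version recorded in a jar's manifest.
unzip -p /path/to/log4j-core-2.14.1.jar META-INF/MANIFEST.MF | grep -i "Implementation-Version"
```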

An alternative is to manage your software centrally and track which servers have downloaded and installed vulnerable software. This was mentioned by a colleague in a meeting recently…

My team uses Artifactory as a central store for a lot of our base software, like:

  • Oracle Database and patches
  • WebLogic and patches
  • SQLcl
  • ORDS
  • Java
  • Tomcat

In addition, the developers use Artifactory to store their build artifacts. Once the problem software is identified, you could use a tool like Artifactory to determine which servers contain vulnerable software. That would be kind-of handy…

This isn’t directly related to automation, as you could use a similar centralised software library for manual work, but if you are doing manual builds there’s more of a tendency to do one-off things that don’t follow the normal procedure, so you are more likely to get false negatives. If builds are automated, there is less chance you will “acquire” software from somewhere other than the central library.

Fixing Vulnerable Software

If you use CI/CD, it’s a much simpler job to swap in a new version of a library or package, retest your software and deploy it. If your automated testing has good coverage, it may be as simple as a commit to your source control. The quick succession of Log4j releases we’ve seen recently would have very little impact on your teams.
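As a hypothetical example for a Maven-based Java project, the fix itself might be as small as this. The versions plugin goal shown here is real, but the project layout, version numbers and pipeline trigger are assumptions.

```
# Pin log4j-core to a fixed version, then let the pipeline build, test and deploy.
mvn versions:use-dep-version \
    -Dincludes=org.apache.logging.log4j:log4j-core \
    -DdepVersion=2.17.1 -DforceVersion=true

git commit -am "Bump log4j-core to 2.17.1" && git push
```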

If you are working with containers, the deployment process would involve a build of a new image, then replacing all containers with new ones based on the new image. Pretty simple stuff.
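A minimal sketch of that, assuming a Docker image built from your own Dockerfile and a Kubernetes deployment; the registry, image and deployment names are all placeholders.

```
# Rebuild the image containing the patched library and push it.
docker build -t registry.example.com/my-app:1.0.1 .
docker push registry.example.com/my-app:1.0.1

# Replace the running containers with ones based on the new image.
kubectl set image deployment/my-app my-app=registry.example.com/my-app:1.0.1
kubectl rollout status deployment/my-app
```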

If you are working in a more traditional virtual machine or physical setup, then having automated patching and deployments would give you similar benefits, even though it may feel more clunky…

Conclusion

Whichever way you play it, the adoption of automation is going to improve your reaction time when things like this happen again in the future, and make no mistake, they will happen again!

I wrote a series of posts about automation here.

Cheers

Tim…

Continuous Delivery : Is there a problem with the sales pitch?

I saw Simon Haslam point to this very short video of Charity Majors speaking about Continuous Delivery.

This answer to why companies are not using continuous delivery is perfect!

“It’s a failure of will, and a failure of courage on the part of our technical leadership.”

I love this answer, because this is exactly how I feel about it!

Seeing that got me thinking about why technical leadership is so disengaged from continuous integration/continuous delivery (CI/CD), and I started to wonder if it was actually a problem with the sales pitch.

Have you ever been in a discussion where you provide compelling evidence for your stance, then say one stupid thing, which allows people with the opposing view to jump all over you, and effectively ignore all the stuff you said previously? Been there! Done that! I wonder if the same thing is happening during the CI/CD sales pitch.

When people write or speak about this stuff, they will often bring up things that provide an instant get-out for people. Let’s imagine I am trying to convince someone that CD is the way forward. I might say things like,

  • Automation means it’s not dependent on a specific person being there to complete the deployment.
  • We can eliminate human error from the delivery process.
  • It makes delivery more reliable, as we have a well tested and proven process.
  • That proven reliability makes both us and our customers more confident that deployments will be successful, so it reduces the fear, uncertainty and doubt that often surround deployments.
  • As a result of all of the above, it makes the delivery process quicker and more efficient.

That all sounds great, and surely seals the deal, but then we hit them with this.

  • Amazon does 23,000 production deployments a day!

And now you’ve lost your audience. The room of people who are scared of change, and will look for any reason to justify their stagnation, will likely go through this thought process.

  • Amazon use CI/CD to get 23,000 production deployments a day.
  • We don’t need to do 23,000 production deployments a day.
  • Therefore we don’t need CI/CD.

I know this sounds stupid, but I’m convinced it happens.

I’ve read a bunch of stuff over the years and I’m invested in this subject, but I still find myself irritated by some of the things I read because they focus on the end result, rather than the core values that got them to that end result. Statements like, “Amazon does 23,000 production deployments a day” or “this is what Twitter does”, are unhelpful to say the least. I feel like the core values should be consistent between companies, even if the end result is very different.

This is just a thought and I could be talking complete crap, but I’m going to try and declutter myself of all this hype bullshit and try to focus on the core values of stuff, and hopefully stop giving people a reason to ignore me…

Cheers

Tim…

VirtualBox 6.1.26, Vagrant 2.2.18 and Packer 1.7.4

Hot on the heels of VirtualBox 6.1.24 we get version 6.1.26. Let’s be honest, you knew it was coming, right? 🙂

The downloads and changelog are in the usual places.

When I shut down some VMs before the VirtualBox upgrade, I noticed Vagrant 2.2.18 had been released. Downloads here.

I’ll need to rebuild my Vagrant boxes again, so I thought I should check if there was a new Packer version. Sure enough, Packer 1.7.4 was available. Downloads here. It came out just over a week ago, but I hadn’t noticed.

They are all installed now, so I’ve just got to start doing some Vagrant box builds. Happy days… 🙂
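If you want a quick sanity check that everything has been picked up, the usual version commands do the job.

```
VBoxManage --version   # 6.1.26...
vagrant --version      # Vagrant 2.2.18
packer --version       # 1.7.4
```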

Cheers

Tim…

Update: I used Packer to rebuild my OL7 and OL8 vagrant boxes. They are now uploaded to Vagrant Cloud.

Performance/Flow : Focusing on the Constraint

Over a decade ago Cary Millsap was presenting a talk at Oracle conferences called “Thinking Clearly About Performance”. One of the points he discussed was identifying the big bottlenecks and dealing with those, rather than focusing on the little things that weren’t causing a problem. For example, if a task is made up of two operations, where one takes 60 seconds to complete and the other takes 1 second to complete, which one will give the most benefit if optimized? Halving the 60 second operation cuts the total from 61 seconds to 31 seconds, while removing the 1 second operation entirely only gets you to 60 seconds, so the big operation is clearly where the effort should go.

This is the same issue when we are looking at automation to improve flow in DevOps. There are a whole bunch of things we might consider automating, but it makes sense to try and fix the things that are causing the biggest problems first, as they will give the best return. In DevOps and Lean terms that means focusing on the constraint, the weakest link in the chain (see the Theory of Constraints).

Lost in Automation

The reason I mention this is that I think it’s really easy to get lost during automation. We often focus on what we can automate, rather than what needs automating. With my DBA hat on it’s clear my focus will be on automating the provisioning and patching of databases and application servers, but how important is that for the company?

  • If the developers want to “build & burn” environments, including databases, for their CI/CD pipelines, then automation of database and application server provisioning is really important as it might happen multiple times a day for automated testing.
  • If the developers use a more traditional dev, test, prod approach to environments, then the speed of provisioning new systems may be a lot less important to the overall performance of the company.

In both cases the automation gives benefits, but in the first case the benefits are much greater. Even then, is this the constraint? Maybe the problem is it takes 14 days to get approval to run the automation? 🙂
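To make the “build & burn” case concrete, here’s a minimal sketch of a pipeline step that spins up a throwaway database for a test run, then throws it away. It assumes Docker and the gvenzl/oracle-xe image, which is just one of many ways to do it.

```
# Spin up a throwaway database for an automated test run.
docker run -d --name test-db -p 1521:1521 \
  -e ORACLE_PASSWORD=MyTestPassword1 \
  gvenzl/oracle-xe:21-slim

# ... run the test suite against it, then burn it.
docker rm -f test-db
```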

It’s sometimes hard for techies to have a good idea of where they fit in the value chain. We are often so focused on what we do that we don’t have a clue about the bigger picture.

Before we launch into automation, we need to focus on where the big problems are. Deal with the constraints first. That might mean stopping what you’re doing and going to help another team…

Don’t Automate Bad Process

We need to streamline processes before automating them. It’s a really bad idea to automate bad processes, because they will become embedded for life. It’s hard enough to get rid of bad processes because the “it’s what we’ve always done” inertia is difficult to overcome. If we add automation around a bad process we will never get rid of it, because now people will complain we are breaking the automation if we alter the process.

Another thing Cary talked about was removing redundant steps. You can’t make something faster than not doing it in the first place. 🙂 It’s surprising how much useless crap becomes embedded in processes as they evolve over the years.

The process of continuous improvement involves all aspects of the business. We have to be willing to revise our processes to make sure they are optimal, and build our automation around those optimised processes.

I’m not advocating compulsive tuning disorder. We’ve got to be sensible about this stuff.

Know When to Cut Your Losses

The vast majority of examples of automation and DevOps are focused on software delivery in companies whose business is software development. It can be very frustrating listening to people harp on about this stuff when you work in a mixed environment with a load of 3rd party applications that will never be automated, because they can’t be. They can literally break every rule you have in place and you are still stuck with them because the business relies on them.

You have to know when to cut your losses and move on. There will be some systems that will remain manual and crappy for as long as they are in your company. You can still try to automate around them, and maybe end up in a semi-automated state, but forget wasting your time trying to get to 1000 deployments a day. 🙂

I wrote about this in a post called I’m 2% DevOps, 3% agile and 4% automated because of 3rd party apps.

Try to Understand the Bigger Picture

I think it makes sense for us all to try and get a better understanding of the bigger picture. It can be frustrating when you’ve put in a lot of work to automate something and nobody cares, because it wasn’t perceived as a problem in the big picture. I’m not suggesting we all have to be business analysts and system architects, but it’s important we know enough about the big picture so we can direct our efforts to get the maximum results.

I wrote a series of posts about automation here.

Cheers

Tim…

Oracle REST Data Services (ORDS) : Database APIs – First Steps

In my never ending quest for automation, I finally got round to looking at the Oracle REST Data Services (ORDS) Database APIs.

These have been around for some time, but I was testing them for the first time using ORDS version 20.2, so I was basing my tests on that version of the documentation, and more importantly version 20 of the APIs.

There are several sets of APIs, and they don’t have the same dependencies or authentication methods. It’s not that big a deal once you know what’s going on, but it confused the hell out of me for a while, and the documentation doesn’t give you much of a steer for some of this.

PDB Lifecycle Management

My first tests were of the PDB Lifecycle Management endpoints. I enable all the relevant features in my normal installation, but there was one big roadblock. I always install ORDS in the PDB, and this feature only works if ORDS is installed in the root container. This makes sense, as the management of PDBs is done at the root container level, but I prefer not to put anything in the root container if I can help it. I uninstalled and reinstalled ORDS so I could give it a go. This resulted in this article.

The PDB Lifecycle Management functionality seemed better suited to a self-contained article, as it is only available from a CDB installation, has its own authentication setup and only has a small number of endpoints. The available APIs are kind-of basic, but they could still be useful. It will be interesting to see if this expands to fit all the possible requirements for a PDB, which are now pretty large. I suspect not.
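As a taste of what the endpoints look like, listing the PDBs in a CDB is a single call. This is just a sketch; the host, port and credentials are placeholders, the URL prefix depends on how ORDS is configured, and it assumes the PDB lifecycle feature and its “SQL Administrator” user have been set up as described in the article.

```
# List the PDBs in the CDB using the PDB lifecycle management endpoints.
curl -ks -u pdbadmin:MyPassword1 \
  "https://ords-host:8443/ords/_/db-api/stable/database/pdbs/"
```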

Most of the other stuff

Next up was “most of the other stuff”. There are too many endpoints to go into any level of detail in a single article, so I figured this article should focus on the setup needed to use most of the other endpoints.

There are two methods of authentication discussed. The first is the default administrator approach, which is good because it hides the database credentials from the user making the API calls. Instead they use application server credentials mapped to the “System Administrator” role. This is similar to the approach used by the PDB Lifecycle Management feature, except that uses the “SQL Administrator” role, and the ORDS properties are different.

The other approach is to use an ORDS enabled schema. This will be very familiar to people already using ORDS, but it comes with one big disadvantage compared to the previous method. For this functionality you have to expose the database credentials of the ORDS enabled schema to the person calling the API. Normally we would not expose these, instead using another form of authentication (Basic, OAuth2 etc.) to allow the user to gain access. Even then the ORDS enabled schema would be a weak user that only has access to the specific objects we want it to interact with, but in this case it’s a DBA user, so it makes me nervous. Using the default administrator method the caller is constrained to some extent by the APIs, but with the database credentials they have everything, provided they have direct access to the database server. It’s probably insignificant when you consider the amount of damage someone could do with the APIs alone, but I still find myself wincing a little when putting DBA credentials into an HTTPS call.
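To make that concrete, here’s a rough sketch of the two call styles. The hosts, aliases, credentials and the endpoint path are all placeholders, and the exact URL prefix depends on how your ORDS database pools are mapped.

```
# Default administrator approach: application server credentials mapped to the
# "System Administrator" role. No database credentials are exposed.
curl -ks -u ords_admin:MyPassword1 \
  "https://ords-host:8443/ords/pdb1/_/db-api/stable/<endpoint-path>"

# ORDS enabled schema approach: the database credentials of the
# ORDS enabled (DBA) schema are passed with the call.
curl -ks -u dba_schema:MyDbPassword1 \
  "https://ords-host:8443/ords/pdb1/dba_schema/_/db-api/stable/<endpoint-path>"
```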

As a DBA/developer, I see myself as the person who would use these APIs to develop something, whether that’s an automation or an application. If this were to be handed over to a developer to do the work, these security questions might be a much bigger issue.

Having read that, you are probably thinking, just use the default administrator method then. I would, but some APIs don’t work with that method. Some seem to work only with the ORDS enabled schema method of authentication, while others work only with the default administrator method. What’s more, I don’t see any reference to this in the documentation. The API doc doesn’t even mention the default administrator approach, and the setup doc doesn’t mention the limitations of any of the approaches except PDB Lifecycle Management. As a result, I think you will need to use a mix of the authentication methods if you plan to use a variety of functionality.

The good thing is they can all live side-by-side. At one point I was testing with a CDB installation of ORDS with credentials for PDB Lifecycle Management, default administrator and ORDS enabled schema authentication all configured at the same time. No problem. It’s just confusing when endpoints fail and you have to “trial and error” your way through them. It would be nice if there was a grid of which groups of endpoints need which type of authentication.

Now I am a noob, so maybe I’ve missed the point here, but I spent a long time trying out variations, and this seems like the way it is. If someone can educate me about why I am wrong I will willingly amend the articles, and this blog post. 🙂

Thoughts and what next?

At this point I’ve just been finding my feet, and I’m not sure what I will do next. There are some endpoints that interest me, so I might do separate articles on those, and refer back to the setup in the above articles. Then again, it may feel like just regurgitating the API documentation, so I may not. It’s worth taking a look at the available endpoints, broken down into these main sections.

  • Clusterware CLIs
  • Data Dictionary
  • Environment
  • Fleet Patching and Provisioning
  • General
  • Monitoring
  • Performance
  • Pluggable Database Lifecycle Management

Some will require additional setup, but many will not.

From the look of it, the vast majority of the endpoints are for reporting purposes. There are far fewer that actually allow you to manipulate the contents of the database. You can always write your own services for that, or use REST Enabled SQL to do it, I guess. The question will be, can I get enough value out of these APIs as they stand to warrant the investment in time? I’m not sure at this point.
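For reference, a REST Enabled SQL call is just an HTTP POST. This is a sketch, with the schema alias, credentials and URL as placeholders, and it assumes the schema is REST enabled with REST Enabled SQL switched on.

```
# Run an ad-hoc statement against a REST enabled schema using REST Enabled SQL.
curl -ks -u testuser1:MyPassword1 \
  -H "Content-Type: application/sql" \
  -d 'select table_name from user_tables;' \
  "https://ords-host:8443/ords/testuser1/_/sql"
```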

Cheers

Tim…

PS. If you were watching my Twitter feed over the weekend and wondered what bit of tech I gave up on, it was this. I’m very stubborn though, so I came back…

Packer by HashiCorp : First Steps

A few days ago I wrote about some Vagrant Box Drama I was having. Martin Bach replied saying I should build my own Vagrant boxes. I’ve built Vagrant boxes manually before, as shown here.

The manual process is just boring, so I’ve tended to use other people’s Vagrant boxes, like “bento/oracle-8”, but then you are at the mercy of what they decide to include/exclude in their box. Martin replied again saying,

“Actually I thought the same until I finally managed to get around automating the whole lots with Packer and Ansible. Works like a dream now and with minimum effort”

Martin Bach

So that kind-of shamed me into taking a look at Packer. 🙂

I’d seen Packer before, but had not really spent any time playing with it, because I didn’t plan on being in the business of maintaining Vagrant box images. Recent events made me revisit that decision a little.

So over the weekend I spent some time playing with Packer. Packer can build all sorts of images, including Vagrant boxes (VirtualBox, VMware, Hyper-V etc.) and images for Cloud providers such as AWS, Azure and Oracle Cloud. I focused on trying to build a Vagrant box for Oracle Linux 8.2 + UEK, and only for a VirtualBox provider, as that’s what I needed.

The Packer docs are “functional”, but not that useful in my opinion. I got a lot more value from Google and digging around other people’s GitHub builds. As usual, you never find quite what you’re looking for, but there are pieces of interest, and ideas you can play with. I was kind-of hoping I could fork someone else’s repository and go from there, but it didn’t work out that way…

It was surprisingly easy to get something up and running. The biggest issue is time. You are doing a Kickstart installation for each test. Even for minimal installations that takes a while to complete, before you get to the point where you are testing your new “tweak”. If you can muscle your way through the boredom, you quickly get to something kind-of useful.
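The iteration loop itself is only a couple of commands, whether you use the older JSON templates or the newer HCL ones. The template name here is made up.

```
# Check the template, then kick off the build (Kickstart install, provisioning,
# export to a Vagrant .box file).
packer validate ol8-virtualbox.json
packer build ol8-virtualbox.json
```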

Eventually I got to something I was happy with and tested a bunch of my Vagrant builds against it, and it all seemed fine, so I then uploaded it to Vagrant Cloud.
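The upload can be scripted too. A rough sketch using the Vagrant Cloud CLI; the box name, version and file name are placeholders, and you need to be logged in to Vagrant Cloud first.

```
# Publish the newly built box to Vagrant Cloud for the virtualbox provider.
vagrant cloud auth login
vagrant cloud publish oraclebase/oracle-8 1.0.0 virtualbox ol8.box --release
```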

I’ve already made some changes and uploaded a new version. 🙂

You will see a couple of older manually built boxes of mine under oraclebase. I’ll probably end up deleting those as they are possibly confusing, and definitely not maintained.

I’ve also altered all my OL8 Vagrant builds to use this box now.

You will also see a new sub-directory called “packer”. I think you can guess what’s in there. If I start to do more with this I may move it to its own repository, but for now this is fine.

I’m not really sure what else I will do with Packer from here. I will probably do an Oracle Linux 7 build, which will be very similar to what I already have. This first image is pretty large, as I’ve not paid much attention to reducing its size. I’ve looked at what some other builds do, and I’m not sure I agree with some of the stuff they remove. I’m sure I will alter my opinion on this over time.

I’m making no promises about these boxes, the same way I make no promises about any of my GitHub stuff. It’s stuff I’m playing around with, and I will mostly try to keep it up to date, but I’m not an expert and it’s not my job to maintain this. It’s just something that is useful for me, and if you like it, great. If not, there are lots of other places to look for inspiration. 🙂

Cheers

Tim…

Automating SQL and PL/SQL Deployments using Liquibase

You’ll have heard me barking on about automation, but one subject that’s been conspicuous by its absence is the automation of SQL and PL/SQL deployments…

I had heard of some products that might work for me, like Flyway and Liquibase, but couldn’t really make up my mind or find the time to start learning them. Next thing I knew, SQLcl got Liquibase built in, so I figured that was probably the decision made for me in terms of product. This also coincided with discussions about making a deployment pipeline for APEX applications, which kind-of focused me. It’s sometimes hard to find the time to learn something when there is not a pressing demand for it…

Despite thinking I would probably be using the SQLcl implementation, I started playing with the regular Liquibase client first. Kind of like starting at the grass roots. If you are working in a mixed environment, you might prefer to use the regular client, as it will work with multiple engines.

Once I had found my feet with that, I essentially rewrote the article to use the SQLcl implementation of Liquibase. If you are focused on Oracle, I think this is better than using the standard client.
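To give a flavour of the difference, here’s a sketch of applying a changelog with each client. The connection details and file names are placeholders, and the exact “lb” syntax has changed between SQLcl releases, so check the help for your version.

```
# Standard Liquibase client: works against many database engines.
liquibase --changeLogFile=master.changelog.xml \
          --url=jdbc:oracle:thin:@//db-host:1521/pdb1 \
          --username=testuser1 --password=MyPassword1 \
          update

# SQLcl: connect as the schema owner and use the built-in Liquibase support.
sql testuser1/MyPassword1@//db-host:1521/pdb1 <<EOF
lb update -changelog master.changelog.xml
exit
EOF
```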

Both these articles were written more than 3 months ago, but I held back on publishing them for a couple of reasons.

  1. I’m pretty new to this, and I realise some of the ways I’m suggesting to use them do not fall in line with the way I guess many Liquibase users would want to use them. I’m not trying to make out I know better, but I do know what will suit me. I don’t like defining all objects as XML and the Formatted SQL Changelogs don’t look like a natural way to work. I want the developer to do their job in their normal way as much as possible. That means using DDL, DML and PL/SQL scripts.
  2. I thought there was a bug in one aspect of the SQLcl implementation, but thanks to Jeff Smith, I found out it was a problem between my keyboard and seat. 🙂

With a little cajoling from Jeff, I finally released them last night, then found a bunch of typos that quickly got corrected. Why are those never visible until you hit the publish button? 🙂

The biggest shock for most people will probably be that it’s not magic! I’m semi-joking there, but I figure a lot of people assume these products solve your problems, and they don’t. Both Flyway and Liquibase provide a tool set to help you, but ultimately you are going to need to modify the way you work. If you are doing random-ass stuff with no discipline, automation is never going to work for you, regardless of products. If you are prepared to work with some discipline, then tools like Liquibase can help you build the type of automated deployment pipelines you see all the time with other languages and tech stacks.

The ultimate goal is to be able to progress code through environments in a sensible way, making sure all environments are left in the same state, and allow someone to do that promotion of code without having to give them loads of passwords etc. You would probably want a commit in a branch of your source control to trigger this.

So looking back to the APEX deployments, we might think of something like this.

  • A developer finishes their work and exports the current application using APEXExport. It doesn’t have to be that tool, but humans have a way of screwing things up, so having a guaranteed export mechanism makes sense.
  • Code gets checked into your source control. This includes any DDL, DML, packages, and of course the APEX application script.
  • A new changelog is created for the piece of work, referencing any necessary scripts (DDL, DML, packages) as well as the APEX script, in the correct order. That new changelog is then included in the master changelog, and these are committed to source control.
  • That commit of the changelog, or more likely a merge into a branch triggers the deployment automation.
  • A build agent pulls down the latest source, which will probably include stuff from multiple repositories, then applies it with Liquibase, using the changelog to tell it what to do.
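As a sketch of what that build agent step might actually run; the repository, connection details and credential handling are all placeholders, and in reality the password would come from the CI tool’s secret store rather than an environment variable you manage by hand.

```
#!/bin/bash
# Example build agent step: pull the latest changelogs and apply them.
set -e

git clone --depth 1 https://git.example.com/team/db-changelogs.git
cd db-changelogs

# Apply anything in the master changelog that hasn't yet been run against this database.
liquibase --changeLogFile=master.changelog.xml \
          --url=jdbc:oracle:thin:@//db-host:1521/pdb1 \
          --username=deploy_user --password="${DEPLOY_PASSWORD}" \
          update
```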

That sounds pretty simple, but depending on your company and how you work, that might be kind-of hard.

  • The master changelog effectively serialises the application of changes to the database. That has to be managed carefully. If stuff is done out of order, or clashes with another developer’s work, it has to be resolved. It’s not always a simple process.
  • You will need something to react to commits and merges in source control. In my company we use TeamCity, and I’ve also used GitLab Pipelines to do this type of thing, but if you don’t have any background in these automation tools, then that part of the automation is going to be a learning curve.
  • We also have to consider how we handle actions from privileged accounts. Not all changes in the database are done using the same user.

Probably the biggest factor is the level of commitment you need as a team. It’s a culture change and everyone has to be on board with this. One person manually pushing their stuff into an environment can break all your hard work.

I’m toying with the idea of doing a series of posts to demonstrate such a pipeline, but it’s kind-of difficult to know how to pitch it without making it too specific, or too long and boring. 🙂

Cheers

Tim…