Packer by HashiCorp : Second Steps?

In a previous post I mentioned my first steps with Packer by HashiCorp. This is a brief update to that post.

I’ve created a new box called “oracle-7” for Oracle Linux 7 + UEK. This will track the latest OL7 spin. You can find it on Vagrant Cloud here.

I’ve altered all my OL7 Vagrant builds to use this box now.

You will see a new sub-directory called “ol7” under the “packer” directory. This contains the Packer build for this new image.



Do you know where your installation media is?

This was inspired by a Twitter comment and subsequent DMs, but I’m not going to name names. You know who you are… 🙂

Let me ask you some questions.

  • Do you have an archive of all your OS installation media?
  • Do you have an archive of all your software versions?
  • Do you have an archive of the patches you’ve downloaded over the years, along with any supporting tools?

If you answer “No” to any of those questions, you probably need to rethink your approach to managing your software.


You never know when there will be a catastrophic event and you will need to rebuild something. If you don’t have the exact software, you might not be able to get your system up and running again.

Don’t even get me started on build automation and/or documentation…

But I only use the latest software, so I can download it again!

I’m tempted to scream, “Liar!”

Every company I’ve worked for over the last 25+ years has had a mix of products, including some out of support old crap they try not to talk about. If they say they don’t, they are either a new startup, or they are lying.

But I can contact the vendor and get the media!

Can you? Are you sure? In the past I've had to open service requests to get old versions of the Oracle database software, and I've never been told no yet, but that's a big risk to take. There is nothing to stop a vendor hitting the delete key and making it impossible for you to get a copy of that software from them in future.

This is especially important if you are running old versions of products that are out of support.

When should I purge my archive?

I’m tempted to say never, but let’s put a few ground rules in place.

  • A piece of software can only be removed if it is not used in your company anymore.
  • That includes offsite backups that might need the software if a rebuild were needed to allow you to restore/recover the backup. Some places keep old backups for several years, so this could be a long time.
  • For vendors, only when you can 100% guarantee the last of your customers has stopped using that version of the software. 100% guarantee. That probably means never.

Vendors: But we can rebuild that version using our build process!

Shut up. You need to keep all your build artefacts. You can’t guarantee that several years later your build process will be able to build exactly what you need. I know you kid yourself you can, but I think you are probably wrong. Just keep the bloody build artefacts.

What do I do?

I’m sure what I do is not perfect, but it’s pretty good. At work I have all the software we use. For each product version there is a directory containing the base installation media, along with sub-directories for all the patches we’ve downloaded, which includes any supporting tools. In the case of Oracle database software that will include the latest version of OPatch and tools like the PreUpgrade.jar etc.
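To make that concrete, the layout looks something like this. The product and version names are just examples to show the shape of it, not a prescription.

```text
software/
└── oracle/
    └── database/
        └── 19c/
            ├── media/       <- base installation media, as downloaded
            ├── patches/     <- every patch we've applied, by patch number
            │   └── OPatch/  <- the OPatch versions needed to apply them
            └── tools/       <- supporting tools, e.g. preupgrade.jar
```

The point is each product version is self-contained, so a rebuild years from now doesn't depend on a vendor download being available.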


It is your responsibility to keep hold of all your installation media and patches. If you don’t and a vendor won’t/can’t give you a download, you only have yourself to blame.

I don’t agree with vendors ever deleting old versions of their software, but you have to protect yourself against them potentially doing that.



PS. Don’t ask me to send you old copies of stuff. It is illegal to do that, and I may not have what you are looking for anyway…

PPS. I reserve the right to post a, “I’ve messed up”, post next week when something happens and I don’t have the software. But at least I’ve tried… 🙂

PPPS. Just to prove nothing is ever truly original these days, see Jon Adams’ post on a similar subject here. 🙂

Packer by HashiCorp : First Steps

A few days ago I wrote about some Vagrant Box Drama I was having. Martin Bach replied saying I should build my own Vagrant boxes. I’ve built Vagrant boxes manually before, as shown here.

The manual process is just boring, so I’ve tended to use other people’s Vagrant boxes, like “bento/oracle-8”, but then you are at the mercy of what they decide to include/exclude in their box. Martin replied again saying,

“Actually I thought the same until I finally managed to get around automating the whole lots with Packer and Ansible. Works like a dream now and with minimum effort”

Martin Bach

So that kind-of shamed me into taking a look at Packer. 🙂

I’d seen Packer before, but had not really spent any time playing with it, because I didn’t plan on being in the business of maintaining Vagrant box images. Recent events made me revisit that decision a little.

So over the weekend I spent some time playing with Packer. Packer can build all sorts of images, including Vagrant boxes (VirtualBox, VMware, Hyper-V etc.) and images for Cloud providers such as AWS, Azure and Oracle Cloud. I focused on trying to build a Vagrant box for Oracle Linux 8.2 + UEK, and only for a VirtualBox provider, as that’s what I needed.

The Packer docs are “functional”, but not that useful in my opinion. I got a lot more value from Google and digging around other people’s GitHub builds. As usual, you never find quite what you’re looking for, but there are pieces of interest, and ideas you can play with. I was kind-of hoping I could fork someone else’s repository and go from there, but it didn’t work out that way…

It was surprisingly easy to get something up and running. The biggest issue is time. You are doing a Kickstart installation for each test. Even for minimal installations that takes a while to complete, before you get to the point where you are testing your new “tweak”. If you can muscle your way through the boredom, you quickly get to something kind-of useful.
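If you're curious what one of these builds looks like, a minimal Packer template for a VirtualBox Kickstart build is roughly this shape. This is a sketch only. The ISO path, checksum and script names are placeholders, not my actual build, which is in the GitHub repository.

```json
{
  "builders": [
    {
      "type": "virtualbox-iso",
      "guest_os_type": "Oracle_64",
      "iso_url": "iso/OracleLinux-R8-U2-x86_64-dvd.iso",
      "iso_checksum": "sha256:PUT_REAL_CHECKSUM_HERE",
      "http_directory": "http",
      "boot_command": [
        "<tab> inst.text inst.ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ks.cfg<enter><wait>"
      ],
      "ssh_username": "vagrant",
      "ssh_password": "vagrant",
      "ssh_timeout": "30m",
      "shutdown_command": "echo 'vagrant' | sudo -S shutdown -P now"
    }
  ],
  "provisioners": [
    { "type": "shell", "scripts": ["scripts/setup.sh"] }
  ],
  "post-processors": [
    { "type": "vagrant", "output": "ol8.box" }
  ]
}
```

Packer serves the "http" directory to the VM during the build, which is how the Kickstart file gets picked up by the boot command. The provisioner scripts and the vagrant post-processor are where most of the tweaking (and waiting) happens.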

Eventually I got to something I was happy with and tested a bunch of my Vagrant builds against it, and it all seemed fine, so I then uploaded it to Vagrant Cloud.

I’ve already made some changes and uploaded a new version. 🙂

You will see a couple of older manually built boxes of mine under oraclebase. I’ll probably end up deleting those as they are possibly confusing, and definitely not maintained.

I’ve also altered all my OL8 Vagrant builds to use this box now.

You will also see a new sub-directory called “packer”. I think you can guess what’s in there. If I start to do more with this I may move it to its own repository, but for now this is fine.

I’m not really sure what else I will do with Packer from here. I will probably do an Oracle Linux 7 build, which will be very similar to what I already have. This first image is pretty large, as I’ve not paid much attention to reducing its size. I’ve looked at what some other builds do, and I’m not sure I agree with some of the stuff they remove. I’m sure I will alter my opinion on this over time.

I’m making no promises about these boxes, the same way I make no promises about any of my GitHub stuff. It’s stuff I’m playing around with, and I will mostly try to keep it up to date, but I’m not an expert and it’s not my job to maintain this. It’s just something that is useful for me, and if you like it, great. If not, there are lots of other places to look for inspiration. 🙂



Why I don’t want my presentations recorded!

I was on Twitter a couple of days ago and I mentioned my preference not to be recorded when I’m presenting. That sparked a few questions, so I said I would write a blog post about it. Here it is.

This is a bit of a stream of consciousness, so forgive me if I ramble.

The impact on me!

The primary reason I don’t like being recorded is it has a big impact on me.

I’ve said many times, presenting is not natural for me. I’m very nervous about doing it. I have to do a lot of preparation before an event to try to make it look casual, and almost conversational. It takes a rather large toll on me personally, invading every part of my life for weeks before it happens, and pretty much ruining the day(s) immediately before the event. In my head it’s going to be a complete disaster, and the public humiliation is going to be that much worse because I’m an Oracle ACE and Groundbreaker Ambassador, so I must clearly think I’m the shit, yet I can’t even present and don’t have a clue what I’m talking about. Classic impostor syndrome stuff.

That’s “normal” for me and conferences, which is why I nearly always get a post-conference crash, because of the relief it’s over. But it goes into overdrive if I know the session is going to be recorded, because in my head there will be a permanent record of my screw up.

I have been recorded before, but fortunately not on the sessions where I’ve screwed up… Yet… I don’t think… Recently I’ve decided that I will probably pull out of any event where I’m being recorded, as I can’t keep putting myself through that anymore.

There are other people that will happily fill the conference slot, so me not being there is no big deal.

Editorial control

When I write an article, I constantly go back and revisit things. If my opinion changes, I learn something new, or just don’t like the way I explained something I will rewrite it. I have full control of the content.

When I record a YouTube video I edit it, making sure it contains what I want it to contain. YouTube won’t let you do much in the way of editing a video once it’s posted, but you can make minor changes to the timeline. Even so, if something really annoyed me I could delete it, re-edit it and post it again. Yes I would lose all the views and comments, but ultimately I can do that if I want.

When a user group records a presentation, you no longer have any control of that content. If your opinion changes, or it contains some really dumb stuff, it is there for life. I know nothing is lost on the internet, but at least I should be able to control the “current version” of the content.

I very rarely write for other publications. I like to keep control of my content, so I can decide what to do with it. A lot of this is a throw-back to the previous point about my insecurities, but that’s how I feel about it, and why should I have to compromise about my content?

It’s my content!

Following on from the previous point, it is my content. I wrote it. I rehearsed it. I presented it. And most importantly, I wasn’t being paid to present it! Why should a user group now have control of that content?

Karen López (@datachick) recently posted a really interesting tweet.

“What would you think about an organization who held an event and you spoke at it for free. You signed an agreement to allow distribution to attendees, but they are now selling your content as part of a subscription that you are getting no compensation for?”


I’m not saying this is what user groups are planning, but it’s certainly something some might try, now that times are getting harder than usual.

I’m sorry if this sounds really selfish, but I think I’m doing enough for the community and user groups, without giving them additional product to sell. I know a lot of user groups find finance difficult, but in the current online-model, the financial situation is very different. There aren’t any buildings to hire and people to feed.

The audience matters!

My presentation style varies depending on the audience.

If I present in the UK I tend to speak faster and swear a bit. Similar with Australia. When I present in other countries I tend to tone down my language, as some places are really uptight about expletives.

In some countries where English is a second or third language, I slow down a lot and remove some content from the session, because I know there will be a larger number of people who will struggle to keep up. Maybe I’ll miss out a couple of anecdotes, so I can speak more slowly. If there is live translation I have to go a lot slower.

I remember seeing one recording of me presenting with live translation and I sounded really odd, as I was having to present so slowly for the live translation to work. It was kind-of shocking for me to see it back, and I would prefer people not see that version of the talk, as it doesn’t represent me. It’s “adjusted me” to suit the circumstance.

Other things…

OK. Let’s assume other speakers are not self-obsessed control freaks like me for a second…

It’s possible some people would prefer to be selective about what gets recorded. For example, the first time I do a talk I really don’t know how it will turn out. That’s different to the 10th time I give the same talk. For a new talk I doubt I would feel happy about it being recorded, even if I were generally cool with the concept. I may feel better about recording a talk I have done a few times, having had time to adjust and improve it. I think of this like comedians, who go on tour and constantly change their material based on how it works with the audience. At the end of a tour they record their special, only using the best bits. Then it’s time to start preparing for the next tour. I suspect many comedians would be annoyed at being recorded on the first day of a tour. Same idea…

I think recording sessions could be off-putting for new speakers. When you are new to the game there is enough to worry about, without having to think about this too. Maybe other people aren’t as “sensitive” as me, but maybe they are.

I don’t like to be in pictures and videos. It’s just not my thing. I rarely put myself into my videos on YouTube. I’m sure there would be other speakers who would prefer to be judged by what they say, rather than how they look.

I used to be concerned that if someone recorded my session and put it on YouTube, nobody would come to my future sessions on the same subject. I actually don’t think this is a real problem. It seems the audience for blog posts, videos and conferences is still quite different. Yes, there is some crossover, but there is also a large group of people that gravitate to their preferred medium and stick with it.

But what about…

Look, I really do know what the counter arguments to this are.

  • Some people can’t get to your session because of an agenda clash, and they would like to watch it later.
  • This gives the user group members a resource they can look back at to remind themselves what you said.
  • This is a resource for existing user group members who couldn’t make it to the event.
  • For paid events, the attendees are paying money, so they have the right to have access to recordings. (but remember, the speakers are not being paid!)

I know all this and more. I am sorry if people don’t like my view on this. I really am, and I’m happy not to be selected to speak at an event. It really doesn’t bother me. Feel free to pick someone else that fits into your business model. That is fine by me. It really is.


Maybe I’m the only person that feels this way. Maybe other people feel the same, but don’t feel they have a loud enough voice to make a big deal out of it.

At the end of the day, it’s my content and I should have the right to decide if I’m happy about it being recorded or not. I believe conferences should make recording optional, and I’ll opt out. If people believe recording should be mandatory, that’s totally fine. It’s just unlikely I will be involved.

I’m sorry if you don’t like my opinion, but that’s how I feel at this point and it’s my choice. My attitude may change in future. It may not. Either way, it’s still my choice!



Update: This is not because of any recent conferences. Just thought I better add that in case someone thought it was. I’ve been asking events not to record me for a while now and it’s not been a drama. In a recent message for a conference later in the year I was asked to explicitly confirm my acceptance of recording and publishing rights, which is why I mentioned it on Twitter, which then prompted the discussion. Sorry to any recent events if you thought you were the catalyst for this. You weren’t. Love you! 🙂

PS. I expected a lot more criticism, and I didn’t expect how many people would respond (through various channels) to say they also don’t like being recorded. It’s nice to know I’m not alone in my paranoia. 🙂

Docker Birmingham March 2020

Last night was Docker Birmingham March 2020. It clashed with the Midlands Microsoft 365 and Azure User Group for the second time, so it was Docker Birmingham’s turn this time. 🙂

These events start with food and I was looking longingly at the pizzas, but I know enough about myself to know it would make me sleepy, so I distanced myself from them until later.

First up was Richard Horridge with “A Brief History of Containers”. As the name suggests this was a history lesson, but it started much further back than most do when discussing this subject. With punched cards in fact. Fortunately I never had the “pleasure” of those, but I did find myself thinking, “Oh yeah, I’ve used that!”, about a bunch of stuff mentioned. That’s it. I’m now part of ancient history. I think it’s good for some of the younger folks to understand the history of this stuff, and the shift from the system administration focus of the past to the application focus of the present.

Next up was Matt Todd with “Say Yes! To K8s and Docker”. Let me start by saying I like Swarm. It feels almost like a dirty statement these days, but I do. Matt started in pretty much the same way. He gave a quick pros vs. cons between Swarm and Kubernetes, then launched into the main body of the talk, which was trying to find a convenient way to learn about Kubernetes on your laptop without needing to install a separate hypervisor. So basically how to run Kubernetes in Docker. He did a comparison between the following.

He picked K3s as his preferred solution.
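For anyone wanting to try the same thing, the K3s-in-Docker route is usually done with k3d, a wrapper that runs the K3s server as a Docker container. The commands below are a sketch from memory rather than Matt's exact demo, and the k3d command syntax has changed between versions, so check the k3d docs before relying on them.

```shell
# Create a single-node cluster; the K3s server runs inside a Docker container,
# so there is no separate hypervisor to install.
k3d cluster create demo

# k3d writes the kubeconfig for you, so kubectl works straight away.
kubectl get nodes

# Throw the whole cluster away when you're done playing.
k3d cluster delete demo
```

The appeal is obvious: a disposable Kubernetes cluster on your laptop in seconds, using nothing but the Docker install you already have.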

Along the way he also mentioned these tools to help visualize what’s going on inside a Kubernetes cluster, which helped him as he was learning.

  • Octant. Kind of like Portainer for Kubernetes.
  • K9s. He described it as looking like htop for Kubernetes. 

Of course, the obvious question was, “Why not Minikube?”, and that came down to his preference of not having to install another hypervisor. It was an interesting take on the subject, and mentioning Octant certainly got my attention.

So once again, I noobed my way through another event. Thanks to the speakers for taking their time to come and educate us, and to the sponsor Black Cat Technology Solutions for the venue, food and drinks. See you all soon!



Birmingham Digital & DevOps Meetup – March 2020

On Tuesday evening I was at the Birmingham Digital & DevOps Meetup – March 2020 event, which had four speakers this time.

First up was Mike Bookham from Rancher Labs with “Rancher Labs – Kubernetes”. The session demonstrated how to set up a Kubernetes cluster using RKE (Rancher Kubernetes Engine). The tool looked pretty straightforward, and Rancher got referenced a few times during this event and the one the next day, so there seems to be some love for them as a company out there.

Next up was Dave Whyte from Auto Trader with “Under the bonnet at Auto Trader”. He did a talk about how Auto Trader use Google Kubernetes Engine (GKE) and Istio for a bunch of their microservices. They do the hundreds of production deployments a day that you come to expect from microservice folks, but the main “Wow!” moment for me was the diagnostics and observability they’ve got. It was amazing. I was just sitting there thinking, there is no way on earth we could do this! Very… Wow! Many of the points are covered in this video.

After the break it was Patricia McMahon from Generation with “AWS re/Start – Resourcing Cloud Teams”. The session was about the work they are doing re-skilling long term unemployed young people as AWS Cloud Engineers, and of course getting them into jobs. I love this sort of stuff. My background was a bit different, but I entered the technology industry via a retraining course after my PhD in cabbage sex. The course I did was available for all age groups, not just young people, but it was a similar thing. I hope they continue to do great work. If you are looking for fresh, enthusiastic and diverse talent, I’m sure Patricia and Generation would love to hear from you!

Last up was Toby Pettit from CapGemini with “Multilingual, Multi-Cloud Apps – A Reality Check”. His abstract said, “All I wanted to do is run any language on any cloud with state and with no servers to maintain. Of course it also needs to be highly available, observable, maintainable, recoverable and all of the other “ables”. How hard can it be?” Well it turns out the answer is bloody hard! I don’t even know where to begin with this. It was Alpha this product and Beta that product. Of course Kubernetes and Istio were in there along with OpenFaaS and loads of other stuff. He showed a demo of a workload being split between AWS, Azure and Google Public Cloud, so it “worked”, but by his own admission this was a POC, not something you could go to production with. Interesting, but crazy mind-blowing. 🙂

Thanks to all the speakers for coming along and making it a great event. Thanks also to CapGemini for sponsoring the event!



Shadow IT : Low-code solutions can help!

I recently had a bit of a rant on email about the current state of Shadow IT at work. Typically, we don’t know it is happening until something goes wrong, then we’re called in to help and can’t, mostly because we don’t have the resources to do it. My rant went something like this…

“This is shadow IT.

Shadow IT is happening because we are not able to cope with the requirements from the business, so they do it themselves.

We need to stop being so precious about tool-sets and use low-code solutions to give the business the solutions to their problems. This allows us to develop them quicker, and in some cases, let them develop their own safely.”

We are not a software house. We are not the sort of company that can take our existing staff and reasonably launch into microservices this, or functions that. In addition to all the big projects and 3rd party apps we deal with, we also need to provide solutions to small issues, and do it fast.

Like many other companies we have massive amounts of shadow IT, where people have business processes relying on spreadsheets or Access databases that most of us in IT don’t know exist. As I mentioned in the quote above, this is happening because we are failing! We are not able to respond to their demands. Why?

For the most part we make the wrong decisions about technology stacks for this type of work. We just need simple solutions to simple problems, that are quick and easy to produce, and more importantly easy to maintain.

What tool are you suggesting? The *only* thing we have in our company that is truly up to date at this time, and has remained so since it was introduced into the company, is APEX. It also happens to be a low-code declarative development solution, that most of our staff could pick up in a few days. The *only* tool we have that allows us to quickly deliver solutions is APEX. So why are we not using it, or some other tool like it? IMHO because of bad decisions!

You’re an Oracle guy, and you are just trying to push the Oracle stack aren’t you? No. Give me something else that does a similar job of low-code declarative development and I will gladly suggest that goes in the list too. I’ve heard good things about Power Apps for this type of stuff. If that serves the purpose better, I’ll quite happily suggest we go in that direction. Whatever the tool is, it must be something very productive, which doesn’t require a massive learning curve, and that also gives us the possibility of allowing the business to develop things for themselves, in a citizen developer type of way.

It should be noted, we are wedded to Oracle for the foreseeable future because of other reasons, so the “Oracle lock-in” argument isn’t valid for us anyway.

So you’re saying all the other development stuff is a waste of time? No. In addition to the big and “sexy” stuff, there are loads of simple requirements that need simple solutions. We need to be able to get these out of the door quickly, and stop the business doing stuff that will cause problems down the line. If they are going to do something for themselves, I would rather it was done with a tool like APEX, that we can look after centrally. I don’t want to be worrying if Beryl and Bert are taking regular backups of their desktops…

Are you saying APEX is only good for this little stuff? No! I’m saying it does this stuff really well, so why are we using languages, frameworks and infrastructure that makes our life harder and slower for these quick-fire requirements? Like I said, it’s not about the specific tool. It’s what the tool allows us to achieve that’s important.

What would you do if you could call the shots? I would take a couple of people and task them with working through the backlog of these little requirements using a low-code tool. It might be APEX. It might be something else. The important thing is we could quickly make a positive impact on the way the company does things, and maybe reduce the need for some of the shadow IT. It would be really nice to feel like we are helping to win the war on this, but we won’t until we change our attitude in relation to this type of request.

So you think you can solve the problem of shadow IT? No. This will always happen. What I’m talking about is trying to minimise it, rather than being the major cause of it.



MobaXterm 20.0 and KeePass 2.44

And in other news about things I’ve missed recently…

MobaXterm 20.0 was released a couple of days ago. It looks like they’ve switched across to the yearly naming like many other companies. 🙂

The downloads and changelog are in the usual places.

If you are working on Windows and spend a lot of time in shells for connections to Linux boxes, you need this in your life!

KeePass 2.44 was released nearly a month ago.

The downloads and changelog are in the usual places.

You can read about how I use KeePass and KeePassXC on my Windows, Mac and Android devices here.

Happy days!



Midlands Microsoft 365 and Azure User Group – February 2020

Last night I went to the Midlands Microsoft 365 and Azure User Group. It was co-organised by Urfaan Azhar and Lee Thatcher from Pure Technology Group, and Adrian Newton from my company.

This event clashed with the Cloud Native Computing Foundation meetup. If the clash continues I’ll probably have to alternate between the events.

First up was Penny Coventry with “Power Automate AKA Flow Introduction”. The session started with an overview of various “Power” products, before focusing on some of the Power Automate features. This included a demo of building an automation flow. I’ve seen Amy Simpson-Grange demonstrating UIPath and one of my colleagues Paul demonstrating LeapWorks, and as you would expect, there are a lot of similarities between these automation tools. I don’t know if I’ll get to do any of this, but I do find it interesting. I’ll probably wait for my colleague Natalie to learn it, then bug her to explain stuff to me, so I can act like I know what I’m doing. 🙂

After far too much pizza and a doughnut (diet starts tomorrow) it was time for Tom Gough with “Azure Machine Learning with Power BI”. The session started with an overview of some of the Artificial Intelligence (AI) and Machine Learning (ML) services on Azure. The mention of data preparation and data cleansing was quite interesting, as people don’t really say a lot about this. You could be forgiven for thinking this piece just magically happens. There was a demo of using Power BI desktop to prepare some data containing user comments, connect to Cognitive Services and pull out some key phrases from the data, and presenting it in some custom visualisations. One of my colleagues has used this to do sentiment analysis on responses to a chat bot running in the Azure Bot Service. Pretty interesting stuff, and he tells me it’s very easy to get some basic examples working.

It seems every event comes with some more signs that this stuff is gradually creeping into our company. I’m not sure if I will be part of this world, but it’s certainly interesting to see.

Thanks to everyone who turned up to support the event, the speakers, and the sponsor Pure Technology Group. See you at the next event.



PS. Apologies to Richard Harrison, who had to endure me asking questions for ages, while he froze to death. Bring some masking tape or a restraining order the next time you come. 🙂

VirtualBox 6.1.2

About a month after the release of VirtualBox 6.1 we get the release of VirtualBox 6.1.2, a maintenance release.

The downloads and changelog are in the usual places.

So far I’ve only tried it on a Windows 10 host at work, but it looks fine.

Remember, if you use Vagrant 2.2.6 and this is your first time using VirtualBox 6.1.x you will need to do a couple of config changes to Vagrant, as discussed in this post by Simon Coter. I’m sure once Vagrant 2.2.7 is released this will no longer be necessary.
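As I remember it from Simon's post, the workaround boils down to teaching Vagrant's VirtualBox plugin that version 6.1 exists, because 2.2.6 predates it. Inside the installed vagrant gem you add a 6.1 driver that simply reuses the 6.0 behaviour, then register it. The file names and paths below are from memory, so treat this as a rough sketch and check Simon's post for the exact steps.

```ruby
# plugins/providers/virtualbox/driver/version_6_1.rb (new file)
# A VirtualBox 6.1 driver that just inherits the 6.0 behaviour.
module VagrantPlugins
  module ProviderVirtualBox
    module Driver
      class Version_6_1 < Version_6_0
        def initialize(uuid)
          super
          @logger = Log4r::Logger.new("vagrant::provider::virtualbox_6_1")
        end
      end
    end
  end
end

# Then register the new driver in the plugin's version maps
# (plugin.rb and driver/meta.rb), along the lines of:
#   "6.1" => Version_6_1,
```

Once Vagrant itself ships 6.1 support, this hackery can be deleted.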

Happy upgrading! 🙂



Update: Once I got home I installed VirtualBox 6.1.2 on Windows 10, Oracle Linux 7 and macOS Catalina hosts. It worked fine. 🙂