Docker Birmingham March 2020

Last night was Docker Birmingham March 2020. It clashed with the Midlands Microsoft 365 and Azure User Group for the second time, so on this occasion it was Docker Birmingham’s turn. 🙂

These events start with food, and I was looking longingly at the pizzas, but I know enough about myself to know they would make me sleepy, so I distanced myself from them until later.

First up was Richard Horridge with “A Brief History of Containers”. As the name suggests this was a history lesson, but it started much further back than most do when discussing this subject. With punched cards in fact. Fortunately I never had the “pleasure” of those, but I did find myself thinking, “Oh yeah, I’ve used that!”, about a bunch of stuff mentioned. That’s it. I’m now part of ancient history. I think it’s good for some of the younger folks to understand the history of some of this stuff, and the shift from the system administration focus of the past to the application focus of the present.

Next up was Matt Todd with “Say Yes! To K8s and Docker”. Let me start by saying I like Swarm. It feels almost like a dirty statement these days, but I do. Matt started in pretty much the same way. He gave a quick pros and cons comparison of Swarm and Kubernetes, then launched into the main body of the talk, which was about finding a convenient way to learn Kubernetes on your laptop without needing to install a separate hypervisor. So basically, how to run Kubernetes in Docker. He did a comparison of the various options for doing this.

He picked K3s as his preferred solution.

Along the way he also mentioned these tools to help visualize what’s going on inside a Kubernetes cluster, which helped him as he was learning.

  • Octant. Kind of like Portainer for Kubernetes.
  • K9s. He described it as looking like htop for Kubernetes.

Of course, the obvious question was, “Why not Minikube?”, and that came down to his preference of not having to install another hypervisor. It was an interesting take on the subject, and mentioning Octant certainly got my attention.

So once again, I noobed my way through another event. Thanks to the speakers for taking the time to come and educate us, and to the sponsor Black Cat Technology Solutions for the venue, food and drinks. See you all soon!

Cheers

Tim…

Birmingham Digital & DevOps Meetup – March 2020

On Tuesday evening I was at the Birmingham Digital & DevOps Meetup – March 2020 event, which had four speakers this time.

First up was Mike Bookham from Rancher Labs with “Rancher Labs – Kubernetes”. The session demonstrated how to set up a Kubernetes cluster using RKE (Rancher Kubernetes Engine). The tool looked pretty straightforward, and Rancher got referenced a few times during this event and the one the next day, so there seems to be some love for them as a company out there.

Next up was Dave Whyte from Auto Trader with “Under the bonnet at Auto Trader”. He did a talk about how Auto Trader use Google Kubernetes Engine (GKE) and Istio for a bunch of their microservices. They do the hundreds of production deployments a day that you’ve come to expect from microservice folks, but the main “Wow!” moment for me was the diagnostics and observability they’ve got. It was amazing. I was just sitting there thinking, there is no way on earth we could do this! Very… Wow! Many of the points are covered in this video.

After the break it was Patricia McMahon from Generation with “AWS re/Start – Resourcing Cloud Teams”. The session was about the work they are doing re-skilling long-term unemployed young people as AWS Cloud Engineers, and of course getting them into jobs. I love this sort of stuff. My background was a bit different, but I entered the technology industry via a retraining course after my PhD in cabbage sex. The course I did was available for all age groups, not just young people, but it was a similar thing. I hope they continue to do great work. If you are looking for fresh, enthusiastic and diverse talent, I’m sure Patricia and Generation would love to hear from you!

Last up was Toby Pettit from CapGemini with “Multilingual, Multi-Cloud Apps – A Reality Check”. His abstract said, “All I wanted to do is run any language on any cloud with state and with no servers to maintain. Of course it also needs to be highly available, observable, maintainable, recoverable and all of the other “ables”. How hard can it be?” Well, it turns out the answer is bloody hard! I don’t even know where to begin with this. It was Alpha this product and Beta that product. Of course Kubernetes and Istio were in there along with OpenFaaS and loads of other stuff. He showed a demo of a workload being split between AWS, Azure and Google Public Cloud, so it “worked”, but by his own admission this was a POC, not something you could go to production with. Interesting, but crazy mind-blowing. 🙂

Thanks to all the speakers for coming along and making it a great event. Thanks also to CapGemini for sponsoring the event!

Cheers

Tim…

Shadow IT : Low-code solutions can help!

I recently had a bit of a rant on email about the current state of Shadow IT at work. Typically, we don’t know it is happening until something goes wrong, then we’re called in to help and can’t, mostly because we don’t have the resources to do it. My rant went something like this…

“This is shadow IT.

Shadow IT is happening because we are not able to cope with the requirements from the business, so they do it themselves.

We need to stop being so precious about tool-sets and use low-code solutions to give the business the solutions to their problems. This allows us to develop them quicker, and in some cases, let them develop their own safely.”

We are not a software house. We are not the sort of company that can take our existing staff and reasonably launch into microservices this, or functions that. In addition to all the big projects and 3rd party apps we deal with, we also need to provide solutions to small issues, and do it fast.

Like many other companies, we have massive amounts of shadow IT, where people have business processes relying on spreadsheets or Access databases that most of us in IT don’t know exist. As I mentioned in the quote above, this is happening because we are failing! We are not able to respond to their demands. Why?

For the most part we make the wrong decisions about technology stacks for this type of work. We just need simple solutions to simple problems that are quick and easy to produce, and more importantly, easy to maintain.

What tool are you suggesting? The *only* thing we have in our company that is truly up to date at this time, and has remained so since it was introduced into the company, is APEX. It also happens to be a low-code declarative development solution that most of our staff could pick up in a few days. The *only* tool we have that allows us to quickly deliver solutions is APEX. So why are we not using it, or some other tool like it? IMHO because of bad decisions!

You’re an Oracle guy, and you are just trying to push the Oracle stack, aren’t you? No. Give me something else that does a similar job of low-code declarative development and I will gladly suggest that goes in the list too. I’ve heard good things about Power Apps for this type of stuff. If that serves the purpose better, I’ll quite happily suggest we go in that direction. Whatever the tool is, it must be something very productive, which doesn’t require a massive learning curve, and which gives us the possibility of allowing the business to develop for themselves, in a citizen developer type of way.

It should be noted we are wedded to Oracle for the foreseeable future for other reasons, so the “Oracle lock-in” argument isn’t valid for us anyway.

So you’re saying all the other development stuff is a waste of time? No. In addition to the big and “sexy” stuff, there are loads of simple requirements that need simple solutions. We need to be able to get these out of the door quickly, and stop the business doing stuff that will cause problems down the line. If they are going to do something for themselves, I would rather it was done with a tool like APEX, that we can look after centrally. I don’t want to be worrying if Beryl and Bert are taking regular backups of their desktops…

Are you saying APEX is only good for this little stuff? No! I’m saying it does this stuff really well, so why are we using languages, frameworks and infrastructure that make our lives harder and slower for these quick-fire requirements? Like I said, it’s not about the specific tool. It’s what the tool allows us to achieve that’s important.

What would you do if you could call the shots? I would take a couple of people and task them with working through the backlog of these little requirements using a low-code tool. It might be APEX. It might be something else. The important thing is we could quickly make a positive impact on the way the company does things, and maybe reduce the need for some of the shadow IT. It would be really nice to feel like we are helping to win the war on this, but we won’t until we change our attitude in relation to this type of request.

So you think you can solve the problem of shadow IT? No. This will always happen. What I’m talking about is trying to minimise it, rather than being the major cause of it.

Cheers

Tim…

MobaXterm 20.0 and KeePass 2.44

And in other news about things I’ve missed recently…

MobaXterm 20.0 was released a couple of days ago. It looks like they’ve switched across to the yearly naming like many other companies. 🙂

The downloads and changelog are in the usual places.

If you are working on Windows and spend a lot of time in shells for connections to Linux boxes, you need this in your life!

KeePass 2.44 was released nearly a month ago.

The downloads and changelog are in the usual places.

You can read about how I use KeePass and KeePassXC on my Windows, Mac and Android devices here.

Happy days!

Cheers

Tim…

Midlands Microsoft 365 and Azure User Group – February 2020

Last night I went to the Midlands Microsoft 365 and Azure User Group. It was co-organised by Urfaan Azhar and Lee Thatcher from Pure Technology Group, and Adrian Newton from my company.

This event clashed with the Cloud Native Computing Foundation meetup. If the clash continues I’ll probably have to alternate between the events.

First up was Penny Coventry with “Power Automate AKA Flow Introduction”. The session started with an overview of various “Power” products, before focusing on some of the Power Automate features. This included a demo of building an automation flow. I’ve seen Amy Simpson-Grange demonstrating UIPath and one of my colleagues Paul demonstrating LeapWorks, and as you would expect, there are a lot of similarities between these automation tools. I don’t know if I’ll get to do any of this, but I do find it interesting. I’ll probably wait for my colleague Natalie to learn it, then bug her to explain stuff to me, so I can act like I know what I’m doing. 🙂

After far too much pizza and a doughnut (diet starts tomorrow) it was time for Tom Gough with “Azure Machine Learning with Power BI”. The session started with an overview of some of the Artificial Intelligence (AI) and Machine Learning (ML) services on Azure. The mention of data preparation and data cleansing was quite interesting, as people don’t really say a lot about this. You could be forgiven for thinking this piece just magically happens. There was a demo of using Power BI Desktop to prepare some data containing user comments, connect to Cognitive Services to pull out some key phrases from the data, and present it all in some custom visualisations. One of my colleagues has used this to do sentiment analysis on responses to a chat bot running in the Azure Bot Service. Pretty interesting stuff, and he tells me it’s very easy to get some basic examples working.
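As an aside, if you wanted to pull out key phrases outside of Power BI, the underlying Cognitive Services call is just a REST request. Here’s a minimal sketch in Python, assuming you have a Text Analytics resource; the endpoint, API version and key are placeholders, so check the current documentation for the exact details before relying on it.

    # Rough sketch of calling the Cognitive Services Text Analytics key phrases API.
    # The endpoint, API version and key below are placeholders for illustration.
    import requests

    ENDPOINT = "https://<your-region>.api.cognitive.microsoft.com/text/analytics/v2.1/keyPhrases"
    API_KEY = "<your-text-analytics-key>"

    def key_phrases(comments):
        # The service expects a list of documents, each with an id, language and text.
        documents = [{"id": str(i), "language": "en", "text": text}
                     for i, text in enumerate(comments, start=1)]
        response = requests.post(
            ENDPOINT,
            headers={"Ocp-Apim-Subscription-Key": API_KEY},
            json={"documents": documents},
        )
        response.raise_for_status()
        # Each document in the response carries the key phrases extracted from its text.
        return {doc["id"]: doc["keyPhrases"] for doc in response.json()["documents"]}

    print(key_phrases(["The delivery was late and the driver was rude.",
                       "Great service, really helpful support team."]))

Power BI just wraps this sort of call up for you, which is why it’s so quick to get a basic example working.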

It seems every event comes with some more signs that this stuff is gradually creeping into our company. I’m not sure if I will be part of this world, but it’s certainly interesting to see.

Thanks to everyone who turned up to support the event, the speakers, and the sponsor Pure Technology Group. See you at the next event.

Cheers

Tim…

PS. Apologies to Richard Harrison, who had to endure me asking questions for ages, while he froze to death. Bring some masking tape or a restraining order the next time you come. 🙂

VirtualBox 6.1.2

About a month after the release of VirtualBox 6.1 we get the release of VirtualBox 6.1.2, a maintenance release.

The downloads and changelog are in the usual places.

So far I’ve only tried it on a Windows 10 host at work, but it looks fine.

Remember, if you use Vagrant 2.2.6 and this is your first time using VirtualBox 6.1.x, you will need to make a couple of config changes to Vagrant, as discussed in this post by Simon Coter. I’m sure once Vagrant 2.2.7 is released this will no longer be necessary.

Happy upgrading! 🙂

Cheers

Tim…

Update: Once I got home I installed VirtualBox 6.1.2 on Windows 10, Oracle Linux 7 and macOS Catalina hosts. It worked fine. 🙂

Automating SQL and PL/SQL Deployments using Liquibase

You’ll have heard me barking on about automation, but one subject that’s been conspicuous by its absence is the automation of SQL and PL/SQL deployments…

I had heard of some products that might work for me, like Flyway and Liquibase, but couldn’t really make up my mind or find the time to start learning them. Next thing I knew, SQLcl got Liquibase built in, so I figured that was probably the decision made for me in terms of product. This also coincided with discussions about making a deployment pipeline for APEX applications, which kind-of focused me. It’s sometimes hard to find the time to learn something when there is not a pressing demand for it…

Despite thinking I would probably be using the SQLcl implementation, I started playing with the regular Liquibase client first. Kind of like starting at the grass roots. If you are working in a mixed environment, you might prefer to use the regular client, as it will work with multiple engines.

Once I had found my feet with that, I essentially rewrote the article to use the SQLcl implementation of Liquibase. If you are focused on Oracle, I think this is better than using the standard client.

Both these articles were written more than 3 months ago, but I was holding back on publishing them for a couple of reasons.

  1. I’m pretty new to this, and I realise some of the ways I’m suggesting to use them do not fall in line with the way I guess many Liquibase users would want to use them. I’m not trying to make out I know better, but I do know what will suit me. I don’t like defining all objects as XML and the Formatted SQL Changelogs don’t look like a natural way to work. I want the developer to do their job in their normal way as much as possible. That means using DDL, DML and PL/SQL scripts.
  2. I thought there was a bug in one aspect of the SQLcl implementation, but thanks to Jeff Smith, I found out it was a problem between my keyboard and seat. 🙂

With a little cajoling from Jeff, I finally released them last night, then found a bunch of typos that quickly got corrected. Why are those never visible until you hit the publish button? 🙂

The biggest shock for most people will probably be that it’s not magic! I’m semi-joking there, but I figure a lot of people assume these products solve your problems, but they don’t. Both Flyway and Liquibase provide a tool set to help you, but ultimately you are going to need to modify the way you work. If you are doing random-ass stuff with no discipline, automation is never going to work for you, regardless of products. If you are prepared to work with some discipline, then tools like Liquibase can help you build the type of automated deployment pipelines you see all the time with other languages and tech stacks.

The ultimate goal is to be able to progress code through environments in a sensible way, making sure all environments are left in the same state, and allow someone to do that promotion of code without having to give them loads of passwords etc. You would probably want a commit in a branch of your source control to trigger this.

So looking back to the APEX deployments, we might think of something like this.

  • A developer finishes their work and exports the current application using APEXExport. It doesn’t have to be that tool, but humans have a way of screwing things up, so having a guaranteed export mechanism makes sense.
  • Code gets checked into your source control. This includes any DDL, DML, packages, and of course the APEX application script.
  • A new changelog is created for the piece of work, referencing any necessary scripts (DDL, DML, packages) as well as the APEX application script, in the correct order. That new changelog is then included in the master changelog, and everything is committed to source control.
  • That commit of the changelog, or more likely a merge into a branch, triggers the deployment automation.
  • A build agent pulls down the latest source, which will probably include stuff from multiple repositories, then applies it with Liquibase, using the changelog to tell it what to do. There’s a rough sketch of this step below.
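To make that last step a bit more concrete, here’s the sort of thing the build agent might run once it has pulled down the latest source. It’s written as a small Python wrapper purely for illustration. In reality it would more likely be a shell step defined in TeamCity or GitLab calling Liquibase (or SQLcl) directly, and the changelog name, connection details and environment variable names below are all made up for the example.

    # Rough sketch of a build agent deployment step (illustrative only).
    # Assumes the Liquibase CLI is on the PATH, and that the CI/CD tool injects
    # the connection details as environment variables, so no passwords need to
    # live in source control. All the names here are made up for the example.
    import os
    import subprocess

    def apply_changelog(changelog):
        url = os.environ["DB_JDBC_URL"]        # e.g. jdbc:oracle:thin:@host:1521/service
        username = os.environ["DB_USERNAME"]
        password = os.environ["DB_PASSWORD"]

        # Apply any changesets in the master changelog that haven't been run yet.
        subprocess.run(
            [
                "liquibase",
                "--changeLogFile=" + changelog,
                "--url=" + url,
                "--username=" + username,
                "--password=" + password,
                "update",
            ],
            check=True,  # fail the build if Liquibase reports an error
        )

    if __name__ == "__main__":
        apply_changelog("master.changelog.xml")

The details will vary (you may need the JDBC driver on the classpath, or a liquibase.properties file), but the important bit is that the agent supplies the credentials and runs the update, so nobody has to hand out passwords to get a deployment done.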

That sounds pretty simple, but depending on your company and how you work, that might be kind-of hard.

  • The master changelog effectively serialises the application of changes to the database, so it has to be managed carefully. If stuff is done out of order, or clashes with another developer’s work, that has to be resolved, and it’s not always a simple process.
  • You will need something to react to commits and merges in source control. In my company we use TeamCity, and I’ve also used GitLab Pipelines to do this type of thing, but if you don’t have any background in these automation tools, then that part of the automation is going to be a learning curve.
  • We also have to consider how we handle actions from privileged accounts. Not all changes in the database are done using the same user.

Probably the biggest factor is the level of commitment you need as a team. It’s a culture change and everyone has to be on board with this. One person manually pushing their stuff into an environment can break all your hard work.

I’m toying with the idea of doing a series of posts to demonstrate such a pipeline, but it’s kind-of difficult to know how to pitch it without making it too specific, or too long and boring. 🙂

Cheers

Tim…

Midlands Microsoft 365 and Azure User Group – January 2020

Last night I went to the Midlands Microsoft 365 and Azure User Group. It was co-organised by Urfaan Azhar and Lee Thatcher from Pure Technology Group, and Adrian Newton from my company.

First up was Matt Fooks speaking about “Microsoft Cloud App Security”. Matt covered a number of use cases, including shadow IT detection, log collection, checking compliance of applications and using policies to protect systems. He demoed a few of these. The flexibility of the cloud is great, but it also allows you to create a security nightmare as your cloud estate grows. MCAS gives you visibility and control over that. I guess the value of this will depend on how far down the cloud journey you are. If you’ve got a bit of IaaS that’s being managed centrally, this isn’t going to sound too interesting. Once you open the gates and let other people/teams get involved in provisioning services, you are going to need something like this to keep some level of control over the sprawl.

I heard one of the attendees mention Snowflake, so I collared him during the break to discuss it. I’m not so interested in the headline stuff. I care more about the boring day-to-day stuff, as people tend not to talk about it. It was really interesting. Networking is great.

Next up was Richard Harrison with “The Journey from being a DBA Guy for 20 years to becoming an Azure Guy”. Richard was a fellow Oracle ACE before he moved in a new direction, so it was good to see him again. We spent some time chatting before his session, and I kept him for ages after the session chatting about a bunch of cloud related stuff. As the name of the session suggests, this was about his transition. What made him decide to make the move. His first opening into the world of the cloud. Some of the steps along the way and the amount of work involved. Richard is a smart guy, so when he says it’s hard work to keep on top of things due to the rate of change in the cloud, that should be a warning sign for people who are focused on the status quo.

There were some pieces that related nicely to the first session. For example, he discussed the control/governance aspect. To paraphrase, some services like security, budget management and databases are kept under central control because of their critical nature, but people are given pretty much free rein with platforms within their resource group. Why? Because there are loads of services with loads of features and trying to manage them centrally is practically impossible. I think of this as a move to a high-trust culture. Of course, you have tools to monitor what’s going on to stop people doing crazy stuff, but ultimately you have to start releasing control (where it’s appropriate) or people will look elsewhere. 🙂

I’m hoping Richard will come back again and tell us some more about what he is doing, and I know he’s got some really interesting Oracle-related content that will work really well at Oracle events, so hopefully he’ll throw his hat into that arena too. I don’t want to say anything more because I don’t want to steal his thunder.

Thanks to everyone who turned up to support the event, the speakers, and the sponsor Pure Technology Group. See you at the next event.

Cheers

Tim…

The Unicorn Project : My Review

The Unicorn Project is a follow-up to The Phoenix Project. Actually, it’s more like the same book again, but written from a different person’s perspective.

I loved The Phoenix Project, but absolutely hated The DevOps Handbook, so I was a little reluctant to start reading The Unicorn Project, as I was really worried I would hate it, and it would tarnish the memory of The Phoenix Project.

Overall it was fine, but IMHO it was nowhere near as good as The Phoenix Project.

I’m not going to talk details here, but instead talk about my feelings about the book. You don’t have to agree. 🙂

Let’s talk about Maxine

In The Phoenix Project, Bill was an experienced manager, but he was asked to take over a role that was totally out of his comfort zone. He was mentored by Eric, who helped to develop him in his new role using “The Three Ways” and giving him an understanding of the 4 types of work, and how they affect productivity. Bill was gradually introduced to the three ways, one at a time, and we as the reader went on that journey with him and his colleagues. There was a definite story arc and a clear development of his character.

In The Unicorn Project, Maxine is a super God-mode developer/architect that has done pretty much everything before, and is totally amazing at almost everything. The first two chapters tell us this repeatedly. At the start of the book she is already the finished article. As a result of this there seems little in the way of character development here. Where do you go from amazing? When she does interact with Eric, he basically brain-dumps “The Five Ideals” in one shot to Maxine and friends, and they pretty much run with it. The story arc and development of Maxine as a character, and most of the other characters also, is weak in comparison to Bill’s story from the first book.

You see this problem in reality TV competitions. If a person’s first audition is “too good”, they won’t win the show. The show is built around the journey. The alternative is to craft a back story that fakes a journey, which is why they didn’t mention Kelly Clarkson had already recorded demos and turned down 2 record contracts before she auditioned for American Idol. The character arc is built around the journey from waitress to diva.

The lack of “progress” of the lead character is the biggest problem with this book from my perspective. I know it’s a DevOps book, but it’s not a reference book. It’s meant to be like The Phoenix Project, which uses a story to convey the message.

I like the fact there is a female lead character, and I certainly understand the problem with making her a newbie, mentored by an old white guy :), but I think this situation caused a problem with the story line, and ultimately how “The Five Ideals” were presented to us as the reader. The concepts and meaning of them should have been drip fed to us, like they were in the first book. Almost making us feel like we’ve discovered them for ourselves.

As far as I see it, one of the following would have solved this problem.

  • Leave Maxine as a super God-mode developer, and have her introduce the rest of the characters in the development teams to “The Five Ideals”. Maybe she learnt “The Three Ways” from Eric in the past, and developed them further, giving them a more development focused slant? I guess how you play this depends on how wedded you are to the idea that Eric is involved in this part of the company transformation. She could have just figured this stuff out for herself.
  • Make Maxine less capable, and have a female mentor who teaches her about “The Five Ideals”, allowing her story arc to be similar to Bill’s, and allowing us to go on the journey with her. If you need a link back to Eric, you can always make this mentor one of Eric’s protégés or a former colleague. Hell, make her the person who taught Eric.
  • Make Maxine less capable and let Eric mentor her, like he did with Bill. Yes, people will criticise the fact you showed her as less capable, but the development of her character would have been more interesting.

So the problem isn’t the fact the lead character is a woman. There is no journey!

Why do you care about the story?

The great thing about The Phoenix Project was we learned about The Three Ways and the types of work as part of the story. It was not a technical reference book, but allowed someone new to the concepts to understand them, and see why they were important. I think a lot of people from a less technical background could pick up the book and see exactly why this stuff made sense. It was in itself a good tool to promote change.

The main purpose of The Unicorn Project is to tell us about “The Five Ideals” and how they relate to the three horizons framework. I don’t think it did a good job of that. When I finished the book I found myself asking, “Did they really explain any of this stuff properly?” I Googled some of the concepts to make sure, and learned more in a couple of pages than I did in the whole book.

I feel like you would get more value out of reading The Phoenix Project, then following it up with this interview with Gene Kim.

It’s interesting that in the interview Gene Kim says, “Maxine who is a very talented architect, knows the five ideal patterns.” From where? Just from what Eric said? OK. Is she meant to be relating what Eric tells her to what she already knew? If so, how did she find this stuff for herself? I feel like that journey is a lot more interesting, and likely more informative than what we actually got. After all, that journey is the one we saw Bill make in the first story.

After reading the interview, I can see that what was actually meant to be happening here was that Maxine was just observing what was going wrong, whilst already knowing the solution. Eric was effectively irrelevant to her progress. In fact, her progress was not the point. It was the progress of the project that was the point. I don’t think I got that from the book at all, and if that were the case, why not go with my first suggestion and make her the “Eric” of this book? I think that would have worked a lot better! Maxine is Yoda. That works for me.

As it stands, I don’t think you can give this book to someone and say, “This is why DevOps matters”, in the same way you could with The Phoenix Project. If that’s what it was meant to be, then I think it has probably failed. If it’s a rallying call for people who already know about DevOps, then it’s probably not too bad, but could have made most of the important points in a blog post.

I see some very gushy comments about how “amazing” the book is. That is not the book I read, although you will read the word “amazing” all the time in it…

What really matters I guess

The Five Ideals are pretty much a different take on The Three Ways. The Phoenix Project and The Three Ways probably feel more infrastructure focused, while The Unicorn Project and The Five Ideals are more development focused. At least, that’s what my boss tells me. 🙂 That might make The Five Ideals more attractive to a section of the audience, as they may feel more relevant. Being someone that bridges the infrastructure and development gap, I think they both apply equally well really, but I can see how they might have a different appeal. They are different ways of stating the same thing.

There are some great messages in this book, and I agree with the vast majority of them. If you can focus on “the message”, I think you will enjoy it a lot more than me. Having said that, it feels really badly written compared to The Phoenix Project. To the point where I can hardly believe these two books are by the same author. Better editing, and maybe reducing many chapters to two thirds or half their current size, could have given this more punch and made it much better. Better than The Phoenix Project? I don’t know, but as it stands it feels like a pale imitation of it. I keep wanting to say the word “clumsy”, and everything about it feels that way. Having said all that, there were a couple of chapters towards the end which were quite exciting, so it’s not all bad.

Despite all this, I think it will be a valuable source of quotes, or paraphrased statements, and similar to The Phoenix Project, it will be used to help effect change in stubborn organisations. For that alone I guess we should be grateful.

I wonder how I would have felt if I had read this book first?

I’m interested to know what others felt. Maybe I was just expecting too much, having been such a big fan of The Phoenix Project, which I guess you already figured out. 🙂

Cheers

Tim…

Back to the “good old days”, and other cases of denying change!

This is going to be about the technology industry, but I’m going to liken things to what’s going on here in the UK…

Things are pretty depressing at the moment. The latest political fiasco in the UK makes me realise I have little in common with the majority of the British voting public, and I’m starting to think I have little in common with a lot of people working in the technology industry.

I was listening to one of the politicians in a northern constituency that recently elected a Conservative MP for the first time in ages. One of the first things she said was, “We need more investment in the north, like a focus on our high streets”. Well, I agree entirely with the first part of the sentence, but the second part reeks of living in the past. This is a classic case of not understanding how the world has changed. I come from a time when going down to the market or the local high street was the way we shopped. Now I buy almost everything off the internet. Judging by my nephews and their friends, this is the norm, but I suspect it’s not so normal for lots of people in my age bracket and above, who haven’t moved with the times. Unfortunately, those are a big chunk of the voting public, who are looking to the past for inspiration.

I try to surround myself with people who give a crap and are focused on change, so a lot of the time I’m in this echo chamber of “progress”, but I don’t think these values are shared by our industry as a whole. Why? Because I think a lot of people higher up in the chain of command just don’t get it. They either come from a time which was “pre-technology”, or they have not progressed from “their days of technology”. They are the technology equivalent of the people shopping on the high street. With that type of people in control, progress is stalled.

Cloud deniers are the climate change deniers of our industry. We have a big problem with climate change, but people don’t want to change their lives, which is the only way we are going to fix things. Likewise, lots of people are in cloud denial, but the cloud is the only way a lot of medium-sized businesses will be able to fix their issues. The cloud was originally marketed as being cheaper. It’s not. It definitely costs more money, but if you embrace it and use it to your advantage it can deliver more value than you will ever get on-prem. Replicating your data centre in the cloud in an Infrastructure as a Service (IaaS) manner is a cloud failure in my opinion. I’m fine with it if this is a stepping stone, but if you think this is the final goal, you’ve already failed. You are getting little in the way of benefits and all of the costs and hassles. Instead you need to focus on platforms, which bring something new to the table. Linking services together to bring new opportunities to generate more value from your data. It might cost more, but if you can leverage it to add more value, then you are still winning. If you are being driven by people who are stuck in an infrastructure frame of mind, the potential value is not even recognised, let alone on the road-map.

The “return of the local high street” will only be possible if the local high street offers something new and unique to the consumer. What is that? I guess nobody knows as nobody has been able to do it successfully. I’m not sure a bunch of charity shops is what I consider “the future”. Likewise, those people who are doubling down on their on-prem stuff, or even talking of moving back from the cloud to on-prem need to show the value add of doing that. If you focus purely on numbers, then it’s possible you can move your crappy IaaS from the cloud to on-prem and “save some money”, but at what cost? You will be stuck in the past forever. Many of the interesting services out there will *never* be available on-prem. They just won’t! Even if you were to make the move, you can’t do things the way you used to. Waiting weeks/months for a new service is a thing of the past. If you haven’t already automated that on-prem so it happens in minutes, you have already failed. To automate that yourself will require engineers that come at a price, and the people who are into that stuff are probably not going to be interested in working in your crappy backwater.

I’m not suggesting we completely forget the past, but if you are going to focus on it, or treat it as some utopian goal, you are doomed to failure. Humans have to progress or die. We can do that in a way that harms everything around us, or we can be sensitive to our impact, but regardless of which approach we take, forward is the only way!

Cheers

Tim…

PS. I’m sorry if this post sounds really negative, but I can’t help thinking people of my generation and older are robbing the future from those who come after us.

Update: Based on comments from Twitter, I thought it was worth addressing some things.

  • When I talk about cloud, I am not talking about a specific provider. I am talking about whoever provides the service you need.
  • When I talk about a move to the cloud, I am not suggesting blindly moving to the cloud without any planning. It’s not magic.
  • I’m not talking about moving to the cloud if that means a degradation in your service or functionality.
  • I am suggesting that for many companies there are services you can simply not build and support on-prem.
  • I do believe that the cloud is *often* an easier place to try things out. I did a POC of something the other day for less than a dollar. That would have cost hundreds of pounds in staffing costs alone on-prem in my company.
  • I think many of the negative cloud comments or demands for additional clarifications when discussing cloud act as a distraction from the message, and are used by others as a convenient excuse not to do anything. I understand, but most people are not willing to change, so giving them an excuse not to do anything is not what we need. 🙂
  • Even when you remain on-prem, you should be aiming to take on the values of the cloud in terms of automation and self-service. I’m not talking about total re-engineering and altering platforms. I’m talking about making the essential operations automatic and/or self-service.