Windows Laptop : Update… Again…

Just another quick update about how things are going with the new laptop.

I read with interest this post by Denis Savenko about his choice of a Lenovo ThinkPad X1 Carbon (6th gen), which looks like a nice bit of kit. The ThinkPad seems to command almost as much loyalty as the MacBook Pro. 🙂

The recent announcement about the revamped MacBook Pro range caught my eye in a, “Did I make a mistake?”, kind-of way. A quick comparison of UK pricing tells me I didn’t. In both of the comparisons below the Dell has a 3840 x 2160 touch screen. There are cheaper Dell options available too, which would make the discrepancy even greater.

  • Dell XPS 15″ : Core i9, 32G RAM, 1TB SSD = £2,599
  • MBP 15″ : Core i9, 32G RAM, 1TB SSD = £3,689
  • Dell XPS 15″ : Core i7, 32G RAM, 1TB SSD = £2,048
  • MBP 15″ : Core i7, 32G RAM, 1TB SSD = £3,419

That price differential is crazy…

You may have seen the YouTube video by Dave Lee talking about the thermal throttling of the i9 in the new MBP, and that is really what I want to talk about here.

The XPS 15″ i9 runs hot! Like burn-your-hand hot. I had one incident when playing Fortnite where the machine shut down because the internal temperature got so high. Under a normal workload, like running a few VMs, it doesn’t get quite so hot, but it is still noticeable. I got a cooler pad, which helped a lot, but it doesn’t do much when the machine is under really high load. It seems all these laptops that try to look small and cute don’t have a cooling solution that can cope with an i9. On reflection an i7 would probably have been a better, and cheaper, choice.

I’m still happy with the purchase, and with Windows 10. If you are in the market for a new laptop, I would seriously consider the i7 over the i9 unless you are buying a big laptop with a great cooling solution. You will save yourself a bunch of cash, and I really don’t think you will notice the difference.

Cheers

Tim…

Why Automation Matters : ITIL

ITIL is quite a divisive subject in the geek world. Once the subject is raised, most of us geeks start channelling our inner cowboy/cowgirl, thinking we don’t need the shackles of a formal process because we know what we are doing and don’t make mistakes. Then, when something goes wrong, everyone looks around saying, “I didn’t do anything!”

Despite how annoying it can seem at times, you need something like ITIL for a couple of reasons:

  • It’s easy to be blinkered. I see so many people who can’t see beyond their own goals, even if that means riding roughshod over other projects and the needs of the business. You need something in place to control this.
  • You need a paper trail. As soon as something goes wrong you need to know what’s changed. If you ask people you will hear a resounding chorus of “I’ve not changed anything!”, sometimes followed by, “… except…”. It’s a lot easier to get to the bottom of issues if you know exactly what has happened and in what order.

So what’s this got to do with automation? The vast majority of ITIL-related tasks I’m forced to do should be invisible to me. Imagine the build and deployment of a new version of an application to a development server. The process might look like this.

  • Someone requests a new deployment manually, or it is done automatically on a schedule or triggered by a commit.
  • A new deployment request is raised.
  • The code is pulled from source control.
  • The build is completed and the result of the build is recorded in the deployment request.
  • Automated testing is used to test the new build. Let’s assume it’s all successful for the rest of the list. The results of the testing are recorded in the deployment request.
  • Artefacts from the build are stored in some form of artefact store.
  • The newly built application is deployed to the application server.
  • The result of the deployment is recorded in the deployment request.
  • Any necessary changes to the CMDB are recorded.
  • The deployment request is closed as successful.

None of those tasks require a human. For a development server the changes are all pre-approved, and all the ITIL “work” is automated, so you have the full paper trail, even for your development servers.
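To make that a bit more concrete, here’s a minimal sketch of what the automated ITIL “work” could look like for the list above. It assumes a hypothetical REST API on the service management tool, plus placeholder build/test/deploy scripts, so treat it as an illustration of the idea rather than a real implementation.

#!/bin/bash
# Minimal sketch of automated deployment request handling for a dev server.
# The deploy-api.example.com endpoints, repo URL, scripts and JSON payloads
# are all hypothetical placeholders.
set -e

API="https://deploy-api.example.com"

# Raise a new deployment request and capture its ID.
REQ_ID=$(curl -s -X POST "${API}/requests" -d '{"type":"deployment","env":"dev"}' | jq -r '.id')

# Pull the code and build it, recording the result against the request.
git clone https://example.com/repos/my-app.git
cd my-app
./build.sh
curl -s -X POST "${API}/requests/${REQ_ID}/notes" -d '{"stage":"build","status":"success"}'

# Run the automated tests and record the outcome.
./run_tests.sh
curl -s -X POST "${API}/requests/${REQ_ID}/notes" -d '{"stage":"test","status":"success"}'

# Deploy, record any CMDB changes, and close the request as successful.
./deploy.sh dev
curl -s -X PUT "${API}/cmdb/my-app" -d '{"env":"dev","version":"1.2.3"}'
curl -s -X PUT "${API}/requests/${REQ_ID}" -d '{"status":"closed","outcome":"success"}'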

It’s hard to be annoyed by ITIL if most of it is invisible to you! 🙂

IMHO the biggest problem with ITIL is bad implementation: over-complication, an emphasis on manual operations and a lack of continuous improvement. If ITIL is hindering your progress you are doing it wrong. The same could be said about lots of things. 🙂 One way of solving this is to automate the problem out of existence.

Check out the rest of the series here.

Cheers

Tim…

Windows Laptop : Update

Some of the photos of me at nlOUG showed me with my MacBook Pro, which caused some amusement for a few people as I wasn’t using my new Windows laptop, so I thought I would give an update…

The new laptop got delivered while I was in Riga, so I had 4 days between picking it up and leaving for nlOUG. I picked it up from work at 21:00 on the Friday, but didn’t start playing with it until the next day. By the end of the morning it was my main computer.

As far as I can remember I only had one hiccup, which was down to a Thunderbolt driver that meant my dock wasn’t working properly. I installed the latest driver from the Dell website and everything was fine.

All my test environments are built using VirtualBox, Vagrant and Docker, so rebuilding all my testing stuff was a matter of issuing a command and waiting. 🙂
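For illustration, rebuilding one of those environments amounts to something like this. The directory name is just a hypothetical example of a Vagrant project, and the provisioning scripts do the real work.

cd ~/vagrant/oracle_test_env    # hypothetical Vagrant project directory
vagrant destroy -f              # throw away the old VM
vagrant up                      # rebuild it from scratch using the provisioning scripts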

So why didn’t I take the new laptop to nlOUG? Two reasons.

  • Like most new laptops, it has Thunderbolt and USB-C ports and not much else, so I was waiting for a travel adaptor to arrive. The dock I’ve got is quite big, so I wasn’t going to take that with me.
  • At that point I hadn’t practised using the new laptop with a projector. I had used it with a second screen, but I wanted to try it out in a more realistic situation. I’ve since done that with a projector at work.

I don’t believe I’ve even turned on the MacBook Pro since I got back from the Netherlands, so as it stands, I’m planning on taking the new laptop to Oracle Code : Paris.

Now for some general thoughts…

Something that did surprise me a little was how weird it felt using Windows at home for a while. I use Windows 10 at work, and I would often log into my work machine from the MBP when I was at home or away at conferences, so I expected there to be zero mind-shift from this move. I was really surprised by how much my mind would just switch when I was working on my own stuff at home, compared to when I was doing my real job. So funny.

It’s nice to be able to use mapped drives reliably again. The MBP was terrible at dealing with my NAS. It didn’t matter what I tried, it would drop connections, and judging by Google I was not alone with this. The Windows machine has been 100% reliable so far.

As I’ve said before, most of my life is spent using a browser and a shell prompt connected to local or cloud VMs. As a result I am not tied to any desktop OS, but I’ve definitely been less frustrated with Windows 10 than I was with macOS and Linux before it. I’m not sure why I stuck with them for so long.

Cheers

Tim…

PS. For context, you might want to read my post here before you tell me how great your preferred desktop OS is… 🙂

Why Automation Matters : Patching and Upgrading

As I said in a recent post, you know you are meant to patch and upgrade, but you don’t. Why not?

The reasons will vary a little depending on the tech you are using, but I’ll divide this answer into two specific parts: the patch/upgrade process itself, and testing.

The Patch/Upgrade Process

I’ve lived through the bad old days of Oracle patching and upgrades and it was pretty horrific. In comparison things are a lot better these days, but they are still not what they should be in my opinion. I can script patches and upgrades, but I shouldn’t have to.  I’m sure this will get some negative feedback, but I think people need to stop navel gazing and see how simple some other products are to deal with. I’ll stop there…

That said, I don’t think patches and upgrades are actually the problem. Of course you have to be careful about limiting downtime, but much of this is predictable and can be mitigated.

One of the big problems is the lack of standardisation within a company. When every system is unique, automating a patch or upgrade procedure can become problematic. You have to include too much logic in the automation, which can make the automation itself a burden. What the cloud has taught us is you should try to standardise as much as possible. When most things are the same, scripting and automation get a lot easier. How do you guarantee things conform to a standard? You automate the initial build process. 🙂

So if you automate your build process, you actually make automating your patch/upgrade process easier too. 🙂
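As a rough illustration, when every server comes out of the same automated build, with the same paths and layout, the patch automation can be a really dumb script. Everything below (the SID, paths, patch number, and the absence of conflict checks, backups and rollback handling) is a hypothetical placeholder, not a recommended procedure.

#!/bin/bash
# Hypothetical patch script for a standardised single-instance Oracle home.
set -e

export ORACLE_SID=orcl                                        # placeholder SID, same on every server
export ORACLE_HOME=/u01/app/oracle/product/18.0.0/dbhome_1    # same path on every server
export PATH="${ORACLE_HOME}/bin:${PATH}"
PATCH_DIR=/u01/software/patches/12345678                      # placeholder patch number

# Stop the database and listener, patch the home, then restart.
sqlplus / as sysdba <<< "shutdown immediate;"
lsnrctl stop
cd "${PATCH_DIR}"
"${ORACLE_HOME}/OPatch/opatch" apply -silent
lsnrctl start
sqlplus / as sysdba <<< "startup;"

# Run the post-install SQL changes.
"${ORACLE_HOME}/OPatch/datapatch" -verbose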

The application layer is a lot simpler to deal with than the database layer, because it’s far easier to throw away and replace an application layer, which is what people aim to do nowadays.

Testing

Testing is usually the killer part of the patch/upgrade process. I can patch/upgrade anything without too much drama, but getting someone to test it and agree to moving it forward is a nightmare. Spending time to test a patch is always going to lose out in the war for attention if there is a new spangly widget or screen needed in the application.

This is where automation can come to the rescue. If you have automated testing, not only can you move applications through the development pipeline quicker, you can also progress infrastructure changes, such as patches and upgrades, much quicker too, as there will be greater confidence in the outcome of the process.
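As a sketch of the idea, assuming you already have some automated checks lying around (the script names and URL below are hypothetical), the same gate you use for application deployments can be run after patching an environment, and the change only progresses if it passes.

#!/bin/bash
# Hypothetical post-patch verification gate.
set -e
./run_db_health_checks.sh                              # placeholder database checks
./run_app_smoke_tests.sh https://uat.example.com       # placeholder application smoke tests
echo "Patch verified on UAT. Safe to schedule the same change for production."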

Conclusion

Patching and upgrades can’t be considered in isolation where automation is concerned. It doesn’t matter how quickly and reliably you can patch a database or app server if nobody is ever going to validate it is safe to progress to the next level.

I’m not saying don’t automate patching and upgrades, you definitely should. What I’m saying is it might not deliver on the promise of improved roll-out speed, as a chain is only as strong as its weakest link. If testing is the limiting factor in your organisation, all you are doing by speeding up your link in the chain is adding to the testing burden down the line.

Having said all that, at least you will know your stuff is going to work and you can spend your time focusing on other stuff, like maybe helping people sort out their automated testing… 🙂

Check out the rest of the series here.

Cheers

Tim…

The Death of TLS 1.0 and 1.1 : Should You Panic?

There have been some stories recently about the death/deprecation of TLS 1.0 and 1.1. They follow a similar format to those of a few years ago about SSLv3. The obvious question is, should you panic? The answer is, as always, it depends!

Web Apps

For commercial web apps you are hopefully doing the right thing and using a layered approach to your architecture. User traffic hits a load balancer that does the SSL termination, then maybe re-encrypts to send traffic to the web servers and/or app servers below it. It’s great if you can make all the layers handle TLS 1.2, but the only one that absolutely must is the load balancer. You can hide a multitude of sins behind it. Provided your load balancer isn’t ancient, your typical web traffic won’t be a problem. If you still allow direct traffic to web and app servers you might be in for a rough ride. The quickest and easiest way to progress is to slap a load balancer or reverse proxy in front of everything.

Server Call-Outs (Back-End)

Where it can get tricky is when the back-end servers make call-outs to other systems. Back when SSLv3 was getting turned off, because of the POODLE vulnerability, a bunch of people took a hit because Oracle 11.2.0.2 couldn’t use TLS 1.0 when making call-outs from the database using UTL_HTTP. Support for that appeared in 11.2.0.3, so the fix was to patch the database. Oracle 11.2 (when patched accordingly), 12.1 and 12.2 can all cope with TLS 1.2. You could have similar problems when using “curl” or “wget” on old/unpatched Linux installations. Obviously the same is true for old software and frameworks.
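If you’re not sure where you stand, it’s easy enough to check from the command line what a remote endpoint will accept, and whether your local tools can negotiate TLS 1.2 at all. The host name below is just a placeholder.

# Does the endpoint accept TLS 1.2? (This should succeed.)
openssl s_client -connect api.example.com:443 -tls1_2 < /dev/null

# Does it still accept TLS 1.0? (Ideally this should now fail.)
openssl s_client -connect api.example.com:443 -tls1 < /dev/null

# Can the local curl negotiate TLS 1.2? Old/unpatched builds may not.
curl -v --tlsv1.2 https://api.example.com/ > /dev/null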

So what’s the solution from the back end? Depending on the age of the software you are using, and what you are actually trying to do, it’s likely to be one of the following.

  • Patch : You know you are meant to, but you don’t. This is one of those situations where you will wish you had. It always makes sense to keep up to date with patches. Often it means these sort of changes pass you by without you ever having to consider them.
  • Upgrade : If you are using a version of software that can’t be patched to enable TLS 1.2 for callouts, you probably need to upgrade to a version that can.
  • Proxy : The same way you can use a reverse proxy, or load balancer, to handle calls coming into your system, you can stick a proxy in front of outbound locations, allowing your old systems to call out with insecure protocols. The proxy then contacts the destination using a secure protocol. It’s kind-of messy, but we all have skeletons in our closet. If you have applications that simply can’t be updated, you might have to do this.
  • Stop : Stop doing what you are doing, the way you are doing it. I guess that’s not an option right? 🙂

The first step in fixing a problem is to admit you have one. You need to identify your incoming and outgoing calls to understand what impact these and future deprecations will have. I’m no expert at this, but it seems like architecture 101 to me. 🙂

Deprecation Doesn’t Mean Desupport

True, but it’s how the world reacts that counts. If Chrome stops allowing anything less than TLS 1.2, all previous protocols become dead instantly. If a web service provider you make call-outs to locks down their service to TLS 1.2, deprecation suddenly means death.

You know it’s coming. You’ve been warned… 🙂

Cheers

Tim…

Why Automation Matters : Reliability and Confidence

In my previous post on this subject I mentioned the potential for human error in manual processes. This leads nicely into the subject of this post about reliability and confidence…

I’ve been presenting at conferences for over a decade. Right from the start I included live demos in those talks. For a couple of years I avoided them to make my life simpler, but I’ve moved back to them again as I feel in some cases showing something has a bigger impact than just saying it…

The Problem

One of the stressful things about live demos is they require something to run the demo on, and what happens if that’s not in the state you expect it to be?

I had an example of this a few years ago. I was in Bulgaria doing a talk about CloneDB and someone asked me a question at the end of the session, so I trashed my demo to allow me to show the answer to their question. I forgot to correct the situation, so when I came to do the same demo at UKOUG it went horribly wrong, which led someone on Twitter to say “session clone db is a mess“, and they were correct. It was. The problem here was I wasn’t starting from a known state…

This is no different for us developers and DBAs out in the real world. When we are given some kit, we want to know it’s in a consistent state, but it might not be for a few reasons.

Human Error

The system was created using a manual build process and someone made a mistake. I think almost every system coming out of a manual process has something screwed up on it. I make mistakes like this too. The phone rings, you get distracted, and when you come back to the original task you forget a step. You can minimise this with recipes and checklists, but we are human. We will goof up, regardless of the measures we put in place.

Sometimes it’s easy to find and fix the issue. Sometimes you have to step through the whole process again to identify the issue. For complex builds this can take a long time, and that’s all wasted time.

Changes During the Lifespan

The delivered system was perfect, but then it was changed during its lifespan. Here are a couple of examples.

App Server: Someone is diagnosing an issue and they change some app server parameters and forget to set them back. Those changes don’t fix the current issue, but they do affect the outcome of the next test. The testing completes successfully, the application gets moved to production and fails, because UAT and Live no longer have the same environment, so the outcomes are not comparable or predictable.

Database: Several developers are using a shared development database. Each person is trying to shape the data to fit their scenario, and in the process trashing someone else’s work. The shared database is only refreshed a handful of times a year, so these inconsistencies linger for a long time. If the setup of test data is not done carefully you can introduce logical corruptions to the data, making it no longer representative of a real situation. Once again the outcomes are not comparable or predictable.

The Solution?

I guess from the title you already know this. Automation.

Going back to my demo problem again, I almost had a repeat of this scenario at Oracle Code: Bangalore a few months ago. I woke up on the day of the conference and did a quick run through of my demos, and something wasn’t working. How did I solve it? I rebuilt everything. 🙂

I do most of my demos using Docker these days, even for non-Docker stuff. I use Oracle Linux 7 and UEK4 as my base OS and kernel, so I run Docker inside a VirtualBox VM. The added bonus is I get a consistent experience regardless of the underlying host OS (Windows, macOS or Linux). So what did the rebuild involve? From my laptop I just ran these commands.

vagrant destroy -f
vagrant up

I subsequently connected to the resulting VM and ran this command to build and run the specific containers for my demo.

docker-compose up

What I was left with was a clean build in exactly the condition I needed it to be in to do my demos. Now I’m not saying I wasn’t nervous, because not having working demos on the morning of the conference is a nerve-wracking thing, but I knew I could get back to a steady state, so this whole issue resulted in one line in the blog post for that day. 🙂 Without automation I would have been trying to find and fix the problem, or manually rebuilding everything under time pressure, which is a sure-fire way to make mistakes.

I do some demos on Oracle Database Cloud Service too. When I recently switched between trial accounts my demo VM was lost, so I provisioned a new 18c DBaaS, uploaded a script and ran it. Setup complete.
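The “uploaded a script and ran it” part is nothing clever. It’s something along these lines, where the host and script names are placeholders.

scp setup_demo_env.sql oracle@dbcs-demo.example.com:/tmp/
ssh oracle@dbcs-demo.example.com 'sqlplus / as sysdba @/tmp/setup_demo_env.sql'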

Confidence

Automation is quicker. I think we all get that. Having a reliable build process means you have the confidence to throw stuff away and build clean at any point. Think about it.

  • Developers replacing their whole infrastructure whenever they want. At a minimum once per sprint.
  • Deployments to environments that don’t just deploy the code, but replace the infrastructure along with it.
  • Environments fired up for a single purpose, maybe some automated QA or staff training, then destroyed.
  • When something goes wrong in production, just replace it. You know it’s going to work because it did in all your other environments.

Having reliable automation brings with it a greater level of confidence in what you are delivering, so you can spend less time on unplanned work fixing stuff and focus more on delivering value to the business.

Tooling

The tooling you choose will depend a lot on what you are doing and what your preferences are. For example, if you are focusing on the RDBMS layer, it is unlikely you will choose Docker for anything other than little demos. For some 3rd party software it’s almost impossible to automate a build process, so you might use gold images as your starting point or partially automate the process. In some cases you might use the cloud to provide the automation for you. The tooling is less important than the mindset in my opinion.

Check out the rest of the series here.

Cheers

Tim…

Why Automation Matters : Lost Time

Sorry for stating what I hope is the obvious, but automation matters. It’s mattered for a long time, but the constant mention of Cloud and DevOps over the last few years has thrown even more emphasis on automation.

If you are not sure why automation matters, I would just like to give you an example of the bad old days, which might be the current reality for some people who are still doing everything manually, with separate teams responsible for each stage of the process.

Lost Time : Handover/Handoff Lag

In the diagram below we can see all the stages you might go through to deploy a new application server. Every time the colour of the box changes, it means a handover to a different team.

So there are a few things to consider here.

  • Each team is likely to have different priorities, so a handover between teams is not necessarily instantaneous. The next stage may be waiting on a queue for a long time. Potentially days. Don’t even get me started on things waiting for people to return from holiday…
  • Even if an individual team has created build scripts and has done their best to automate their tasks, if it is relying on them to pick something off a queue to initiate it, there will still be a handover delay.
  • When things are done manually people make mistakes. It doesn’t matter how good the people are, they will mess up occasionally. That is why the diagram includes a testing failure, and the process being redirected back through several teams to diagnose and fix the issue. This results in even more work. Specifically, unplanned work.
  • Manual processes are just slower. Running an installer and clicking the “Next” button a few times takes longer than just running a script, as shown in the sketch below. If you have to type responses and make choices it’s going to take even more time, and don’t forget that bit about human error…
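As a simple example of the difference, compare clicking through the Oracle database installer with a scripted silent install driven by a response file. The response file path below is a placeholder, but the point is that the scripted version is a one-liner that can be dropped into a larger automated build and gives the same answers every time.

# Scripted alternative to clicking "Next" through the GUI installer.
./runInstaller -silent -waitforcompletion -responseFile /u01/software/db_install.rsp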

Let’s contrast this to the “perfect” automated setup, where the request triggers an automated process to deliver the new service.

In this example, the request initiates an automated workflow that completes the action and delivers the finished product without any human intervention along the path. The automation takes as long as it takes, and ultimately has to do most of the same work, but there is no added handover lag in this process.

I think it’s fair to say you would be expecting a modern version of this process to complete in a matter of minutes, but I’ve seen the manual process take weeks or even months, not because of “work time”, but because of the idle handover time and human processes involved…

They Took Our Jobs!

At first glance it might seem like this is a problem if you are employed in any of the teams responsible for doing the manual tasks. Surely the automation is going to mean job cuts, right? That depends really. In order to fully automate the delivery of any service you are going to have to design and build the blocks that will be threaded together to produce the final solution. This is not always simple. Depending on your current setup this might mean having fewer, more highly skilled people, or it might require more people in total. It’s impossible to know without knowing the requirements and the current staffing levels. Also, the cloud provides a lot of the building blocks for you, so if you go that route there may be less work to do in total.

Even if the number of people doesn’t change as part of the automation process, you are getting work through the door quicker, so you are adding value to the business at a higher rate. From a DevOps perspective you have not added value to the business until you’ve delivered something to them. All the hours spent getting part of the build done equate to zero value to the business…

But we are doing OK without automation!

No you’re not! You’re drowning! You just don’t know it yet!

I never hear people saying they haven’t got enough projects waiting. I always hear people saying they have to shelve things because they don’t have the staff/resources/time to do them.

As your processes get more efficient you should be able to reallocate staff to projects that add value to the business, rather than wasting their lives on clicking the “Next” button.

If your process stays inefficient you will always be saying you are short of staff and every new project will require yet another round of internal recruitment or outsourcing.

Is this DevOps?

I’m hesitant to use the term DevOps as it can be a bit of a divisive term. I struggle to see how anyone who understands DevOps can’t see the benefits, but I think many people don’t know what it means, and without that understanding the word is useless…

Certainly automation is one piece of the DevOps puzzle, but equally if you have company resistance to the term DevOps, feel free to ignore it and focus on trying to sell the individual benefits of DevOps, one of which is improved automation…

Check out the rest of the series here.

Cheers

Tim…

PS. Conway’s Law – Melvin Conway 1967

“organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.”

In language I can understand.

If you have 10 departments, each process will have 10 sections, with a hand-off between them.

GDPR : The good, the good and the good!

As is the way with reporting these days, most of the posts about GDPR that have gained any sort of traction over the last few years/months/days have been focused on the doom and gloom side of things. I too have found myself focussing on this side of the issue, being the natural worrier that I am. Having said all that, I think it’s really important to take a step back and look at the issue as a whole…

I’ve seen a few comments from people outside the EU, and some inside, that can be summarised as, “F**k You EU!” I can understand that to a certain extent, but I think it’s important to remember what this is all about.

The Good : It’s about protecting you!

It’s really easy to gather massive amounts of data about you. This data is used to profile you and subsequently influence your decisions. There’s a reason why those pizza adverts keep coming to me, but I never see adverts for booze…

The stories about companies like Cambridge Analytica highlight how this data can be used to influence more than what food you buy. It can potentially influence who/what you vote for, and we can see how that has worked out for us in the UK and those in the USA recently…

I understand you may not like the implementation of GDPR from a business perspective, but surely you’ve got to agree that some control over the collection and use of this data has to be put in place?

Understanding what data is held about you and how it is processed is a good thing.

The Good : The technical stuff is easy

There are challenges associated with the technical side of GDPR, but for the most part we have the technology, tools and intellect to deal with this. Depending on how much work your company has put into security over the years, there may not actually be that much to do on the technical side.

For a number of people GDPR has been good leverage to finally deal with some important stuff that has been moved down the priority list for years, because it’s more important to add a new spangly widget to an application than to patch a server.

If nothing else, this type of work keeps us techies in work, which is a good thing. 🙂

The Good : The business process is where it’s at

If you’re reading this you are probably involved in IT, so the technical side of things is probably your main focus. Where is the data? Is it secured? Does it need to be encrypted? This is the tip of the iceberg, and as mentioned previously it’s all pretty easy, but labour intensive, to identify and fix. The really tough job is identifying and securing the business processes…

Burt uses an APEX interactive report to display some data he’s interested in. He downloads it as a spreadsheet and emails it to Beryl because she is the “Seven of Nine” of Excel and has macros coming out of her ears. She works her Excel Borg magic and emails the resulting masterpiece back to Burt. Burt then emails it on to Barbara who downloads it on to her laptop so she can take a look through it on the train on the way to the next board meeting…

Is anyone seeing the problem with this all too common business process? It really doesn’t matter how secure your database and applications are if people are going to download the data onto their PC, play around with it, print it, email it to people and then lose their unprotected laptop or memory stick on the train…

GDPR incentivises you to identify these stupid processes and secure them, or preferably replace them with something more sensible. This is a good thing. It’s something we in the IT world have been trying to encourage for years. Not only is it a good idea, but it’s also going to keep us techies in work. Do you see a pattern here? 🙂

Conclusion

I’m not saying GDPR is perfect. I understand it introduces a set of problems for companies. I realise it’s easy to go down the rabbit hole of doom and gloom, but this really is a good thing.

Speaking for myself, it’s been quite enlightening reading through the GDPR information and going through the process for my website and blog. I was surprised about how much data was being captured that I didn’t know about, especially considering this is just a crappy “read only” resource, not a proper business that needs to track customers/clients etc.

The next few years will prove interesting.

Cheers

Tim…

PS. I might have forgotten to mention it keeps us techies in work… 🙂

New XPS 15 : The Wait is Over

Followers of the blog will know I’ve been moaning about my MacBook Pro and macOS for a while now, and talking about making a switch back to Windows. That time will arrive soon, because I’ve just ordered one of these.

It’s a Dell XPS 15″ with 32G RAM, 1TB M.2 drive and an i9 (6 core) processor.

It’s a little over the top, but I tend to hold on to laptops for quite a while, assuming they work properly. I might have gone down-market a bit if Dell had released something in the middle range. In the UK they currently have low spec or mega spec in the new 15″ range, and I’m getting increasingly worried about my current MBP, so I just went for it. Working for a university has the distinct advantage that I get a fantastic Higher Education discount from Dell when buying kit for home use. We also get an OK discount from Apple, but who cares…

This will be my main desktop and travel laptop, so I’ll be interested to see how it stacks up. I know a couple of people with the 2017 model and they say it is awesome, so on paper this looks like it will be great, assuming it works. 🙂

I was tempted to go for one of the 13″ versions, which Connor McDonald recommended. The extra portability would be nice, but having recently spent some time working from just the laptop with no extra screen, I would go mad on such a small screen, no matter how good the resolution was.

Of course I’ve bought a dock for home and I already have a great monitor, so hopefully it should all slot into the setup nicely. I probably won’t get to use it for the next couple of conferences because of delivery dates, setup and understanding what adapters I need to connect to the real world. I’m not carrying the dock around with me. 🙂

I’ll no doubt write about the experience as it happens. I’m using Windows 10 at work, so I don’t think that will be an issue, as it is working out fine. It’s always a bit of a concern when switching over to a new bit of kit. What if you get “the bad one”? That has certainly happened with this last MBP. Also, I’ve got my setup documented, but I always worry I will miss something out… 🙂

Fingers crossed this will work out…

Cheers

Tim…

PS. For context, you might want to read my post here before you tell me how great your preferred desktop OS is… 🙂