The Death of TLS 1.0 and 1.1 : Should You Panic?

There have been some stories recently about the death/deprecation of TLS 1.0 and 1.1. They follow a similar format to those of a few years ago about SSLv3. The obvious question is, should you panic? The answer is, as always, it depends!

Web Apps

For commercial web apps you are hopefully doing the right thing and using a layered approach to your architecture. User traffic hits a load balancer that does the SSL termination, then maybe re-encrypts to send traffic to the web servers and/or app servers below it. It’s great if you can make all the layers handle TLS 1.2, but the only one that absolutely must is the load balancer. You can hide a multitude of sins behind it. Provided your load balancer isn’t ancient, your typical web traffic won’t be a problem. If you still allow direct traffic to web and app servers you might be in for a rough ride. The quickest and easiest way to progress is to slap a load balancer or reverse proxy in front of everything.
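
If you want a quick sanity check of what your endpoint currently accepts, something like the following works from any machine with a reasonably recent OpenSSL. It's only a sketch and www.example.com is a placeholder for your own load balancer; a successful handshake prints the certificate chain, while a refused protocol fails almost immediately.

# Check which protocol versions the public endpoint still accepts.
# Older OpenSSL builds may not have the -tls1_2 option.
openssl s_client -connect www.example.com:443 -tls1   </dev/null
openssl s_client -connect www.example.com:443 -tls1_1 </dev/null
openssl s_client -connect www.example.com:443 -tls1_2 </dev/null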

Server Call-Outs (Back-End)

Where it can get tricky is when the back-end servers make call-outs to other systems. Back when SSLv3 was getting turned off, because of the POODLE vulnerability, a bunch of people took a hit because Oracle 11.2.0.2 couldn’t use TLS 1.0 when making call-outs from the database using UTL_HTTP. Support for that appeared in 11.2.0.3, so the fix was to patch the database. Oracle 11.2 (when patched accordingly), 12.1 and 12.2 can all cope with TLS 1.2. You could have similar problems when using “curl” or “wget” on old/unpatched Linux installations. Obviously the same is true for other old software and frameworks.
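
A quick way to check whether a specific box is going to struggle is to force an outbound call at the protocol version you care about. This is only a sketch and the URL is a hypothetical example; on a really old curl build the --tlsv1.2 option may not even exist, which tells you something in itself.

# Run this from the server that makes the call-outs.
curl -v --tlsv1.2 https://api.example.com/ -o /dev/null
# A handshake failure here (or the option not existing at all) suggests
# call-outs from this box will break once the destination enforces TLS 1.2.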

So what’s the solution from the back end? Depending on the ages of the software you are using, and what you are actually trying to do, it’s likely to be one of the following.

  • Patch : You know you are meant to, but you don’t. This is one of those situations where you will wish you had. It always makes sense to keep up to date with patches. Often it means these sorts of changes pass you by without you ever having to consider them.
  • Upgrade : If you are using a version of software that can’t be patched to enable TLS 1.2 for callouts, you probably need to upgrade to a version that can.
  • Proxy : The same way you can use a reverse proxy, or load balancer, to handle calls coming into your system, you can stick a proxy in front of outbound destinations, allowing your old systems to call out using insecure protocols. The proxy then contacts the destination using a secure protocol. It’s kind-of messy, but we all have skeletons in our closet. If you have applications that simply can’t be updated, you might have to do this (there’s a sketch of this approach after the list).
  • Stop : Stop doing what you are doing, the way you are doing it. I guess that’s not an option right? 🙂
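
To give a flavour of the proxy option, here is a minimal sketch using stunnel as an outbound bridge. The host names, ports and file paths are hypothetical examples; the idea is the legacy application talks plain HTTP to a local port, and stunnel does the TLS 1.2 handshake to the real destination.

# Hypothetical stunnel config acting as an outbound TLS 1.2 bridge.
cat > /etc/stunnel/outbound-bridge.conf <<'EOF'
[legacy-api-callout]
client     = yes
accept     = 127.0.0.1:8080
connect    = api.example.com:443
sslVersion = TLSv1.2
EOF

stunnel /etc/stunnel/outbound-bridge.conf
# The legacy application now calls http://127.0.0.1:8080 instead of the HTTPS URL.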

The first step in fixing a problem is to admit you have one. You need to identify your incoming and outgoing calls to understand what impact these and future deprecations will have. I’m no expert at this, but it seems like architecture 101 to me. 🙂

Deprecation Doesn’t Mean Desupport

True, but it’s how the world reacts that counts. If Chrome stops allowing anything less than TLS 1.2, all previous protocols become dead instantly. If a web service provider you make call-outs to locks down their service to TLS 1.2, deprecation suddenly means death.

You know it’s coming. You’ve been warned… 🙂

Cheers

Tim…

Why Automation Matters : Reliability and Confidence

In my previous post on this subject I mentioned the potential for human error in manual processes. This leads nicely into the subject of this post about reliability and confidence…

I’ve been presenting at conferences for over a decade. Right from the start I included live demos in those talks. For a couple of years I avoided them to make my life simpler, but I’ve moved back to them again as I feel in some cases showing something has a bigger impact than just saying it…

The Problem

One of the stressful things about live demos is that they require something to run on, and what happens if that isn’t in the state you expect it to be?

I had an example of this a few years ago. I was in Bulgaria doing a talk about CloneDB and someone asked me a question at the end of the session, so I trashed my demo to show the answer to their question. I forgot to correct the situation, so when I came to do the same demo at UKOUG it went horribly wrong, which led someone on Twitter to say “session clone db is a mess”, and they were correct. It was. The problem here was I wasn’t starting from a known state…

This is no different for us developers and DBAs out in the real world. When we are given some kit, we want to know it’s in a consistent state, but it might not be for a few reasons.

Human Error

The system was created using a manual build process and someone made a mistake. I think almost every system coming out of a manual process has something screwed up about it. I make mistakes like this too. The phone rings, you get distracted, and when you come back to the original task you forget a step. You can minimise this with recipes and checklists, but we are human. We will goof up, regardless of the measures we put in place.

Sometimes it’s easy to find and fix the issue. Sometimes you have to step through the whole process again to identify it. For complex builds this can take a long time, and that’s all wasted time.

Changes During the Lifespan

The delivered system was perfect, but then it was changed during its lifespan. Here are a couple of examples.

App Server: Someone is diagnosing an issue and they change some app server parameters and forget to set them back. Those changes don’t fix the current issue, but they do affect the outcome of the next test. The testing completes successfully, the application gets moved to production and fails, because UAT and Live no longer have the same environment, so the outcomes are not comparable or predictable.

Database: Several developers are using a shared development database. Each person is trying to shape the data to fit their scenario, and in the process trashing someone else’s work. The shared database is only refreshed a handful of times a year, so these inconsistencies linger for a long time. If the setup of test data is not done carefully you can add logical corruptions to the data, making it no longer representative of a real situation. Once again the outcomes are not comparable or predictable.

The Solution?

I guess from the title you already know this. Automation.

Going back to my demo problem again, I almost had a repeat of this scenario at Oracle Code: Bangalore a few months ago. I woke up the day of the conference and did a quick run through my demos and something wasn’t working. How did I solve it? I rebuilt everything. 🙂

I do most of my demos using Docker these days, even for non-Docker stuff. I use Oracle Linux 7 and UEK4 as my base OS and kernel, so I run Docker inside a VirtualBox VM. The added bonus is I get a consistent experience regardless of the underlying host OS (Windows, macOS or Linux). So what did the rebuild involve? From my laptop I just ran these commands.

vagrant destroy -f
vagrant up

I subsequently connected to the resulting VM and ran this command to build and run the specific containers for my demo.

docker-compose up

What I was left with was a clean build in exactly the condition I needed it to be to do my demos. Now I’m not saying I wasn’t nervous, because not having working demos on the morning of the conference is a nerve-wracking thing, but I knew I could get back to a steady state, so this whole issue resulted in one line in the blog post for that day. 🙂 Without automation I would be trying to find and fix the problem, or manually rebuilding everything under time pressure, which is a sure-fire way to make mistakes.
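
For what it’s worth, the whole reset can be wrapped in a single script, so there is only one command to remember on a stressful conference morning. This is just a sketch; the script name and the path to the compose file inside the VM are hypothetical.

#!/bin/bash
# rebuild_demo.sh (hypothetical) : destroy and rebuild the demo environment.
set -euo pipefail

vagrant destroy -f
vagrant up
vagrant ssh -c "cd /vagrant/demo && docker-compose up -d"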

I do some demos on Oracle Database Cloud Service too. When I recently switched between trial accounts my demo VM was lost, so I provisioned a new 18c DBaaS, uploaded a script and ran it. Setup complete.

Confidence

Automation is quicker. I think we all get that. Having a reliable build process means you have the confidence to throw stuff away and build clean at any point. Think about it.

  • Developers replacing their whole infrastructure whenever they want. At a minimum once per sprint.
  • Deployments to environments that don’t just push new code, but replace the infrastructure along with it.
  • Environments fired up for a single purpose, maybe some automated QA or staff training, then destroyed.
  • When something goes wrong in production, just replace it. You know it’s going to work because it did in all your other environments.

Having reliable automation brings with it a greater level of confidence in what you are delivering, so you can spend less time on unplanned work fixing stuff and focus more on delivering value to the business.

Tooling

The tooling you choose will depend a lot on what you are doing and what your preferences are. For example, if you are focusing on the RDBMS layer, it is unlikely you will choose Docker for anything other than little demos. For some third-party software it’s almost impossible to automate a build process, so you might use gold images as your starting point or partially automate the process. In some cases you might use the cloud to provide the automation for you. The tooling is less important than the mindset in my opinion.

Cheers

Tim…

Midlife Crisis : Tough Year So Far…

I wrote this post earlier in the year, then didn’t publish it because it felt like I was drowning in self-pity, but recently I accidentally showed it during a presentation, so I thought I would put it out there… So here goes…

I dislike writing this because I lead a charmed life compared to many people, and this is very much a “First World Problems” post, but we are halfway through the year and I’m already thinking I’m cursed…

Work

Work is proving extremely challenging for me. I like to be good at stuff, but more importantly I hate being bad at stuff. At the moment I’m doing so many different things at work I feel like my day is a massive pile of mediocrity, and that is really hard on my ego.

I’m in no different a position to many DBAs and developers out there, Googling my way through life, but it’s really quite depressing. The counter to this is I can’t imagine ever being so blinkered as I used to be, back in the days when I considered myself as just an “Oracle specialist”.

This is where the difficulty lies. I don’t see a way forward that I will be happy with. These types of situations, where every option comes with a set of bad outcomes, fry my brain…

Conferences

There is just something that is not clicking into place for me right now. It’s not a criticism of the events or the people, it’s something to do with me. I’m really daunted in the lead up to the events and although I enjoy the events themselves and interacting with people, I come away with a massive sense of relief when they are over, and then have a bad post-event crash where I just want to stop everything and give up. The post-event crash is not a new thing, but the peaks and troughs seem more exaggerated than before.

It doesn’t matter how much prep I do, it never seems to be enough. I’ve been doing this for over 10 years and I can’t remember feeling this way before. I don’t think it’s anything to do with impostor syndrome, as that has always been there and I came to terms with it a long time ago. People who think they are great are probably too rubbish to realise how much they don’t know. 🙂 I suspect there are a number of factors feeding into this.

Oracle user group conferences are much easier for me, but over the last year I’ve been doing more events that aren’t straight Oracle events, where you don’t have a good handle on the audience before you get there. As a result these events are a lot more daunting for me. Sometimes they work. Sometimes they don’t. I guess it plays into my insecurities about presenting…

Part of the Dev Champion program is about taking you out of your comfort zone. It’s certainly done this, and I know it’s probably good for me, but it doesn’t always feel like it. 🙂

Website

Over the years the website has been the one thing I can always count on to get me out of a funk. I sit down, play with technology and write. Some of it becomes articles. Some of it gets lost in the mists of time. Either way I’ve always felt like I’m having fun when I’m doing it. It doesn’t feel like that at the moment though. The pressure of other stuff encroaching on my time means this is suffering, which in turn makes me put pressure on myself to deliver, which stops it feeling like fun…

Conclusion

So there it is. I’m having a midlife crisis. I have no plan regarding how to fix it, but convention tells me I should go out with a 19 year old gold digger (sorry Debra), buy a drop-top car and/or a motorbike and generally try to act like I’m 20, so people can discuss how sad my behaviour is and laugh about me behind my back… I’m off to do some test drives and install Tinder on my phone…

Cheers

Tim…

PS. Since writing this, but before posting it, I’ve had a conference where things have worked out OK, so I’m hoping my mindset and luck are changing…

PPS. A couple of people encouraged me to release this, because they thought it would be good for people to hear that experienced presenters have all the same problems as newbies. That is definitely true…

Why Automation Matters : Lost Time

Sorry for stating what I hope is the obvious, but automation matters. It’s mattered for a long time, but the constant mention of Cloud and DevOps over the last few years has placed even more emphasis on automation.

If you are not sure why automation matters, I would just like to give you an example of the bad old days, which might be the current reality for some of you who are still doing everything manually, with separate teams responsible for each stage of the process.

Lost Time : Handover/Handoff Lag

In the diagram below we can see all the stages you might go through to deploy a new application server. Every time the colour of the box changes, it means a handover to a different team.

So there are a few things to consider here.

  • Each team is likely to have different priorities, so a handover between teams is not necessarily instantaneous. The next stage may be waiting on a queue for a long time. Potentially days. Don’t even get me started on things waiting for people to return from holiday…
  • Even if an individual team has created build scripts and done their best to automate their tasks, if the process relies on them picking something off a queue to initiate it, there will still be a handover delay.
  • When things are done manually people make mistakes. It doesn’t matter how good the people are, they will mess up occasionally. That is why the diagram includes a testing failure, and the process being redirected back through several teams to diagnose and fix the issue. This results in even more work. Specifically, unplanned work.
  • Manual processes are just slower. Running an installer and clicking the “Next” button a few times takes longer than just running a script. If you have to type responses and make choices it’s going to take even more time, and don’t forget that bit about human error… (There’s a sketch of the scripted alternative after this list.)
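
As a flavour of that last point, here’s a sketch of what “just running a script” might look like for an app server install. The installer name, response file and options are hypothetical examples; the point is that the answers you would normally click through are captured in a file and replayed without anyone sitting at the keyboard.

# Hypothetical silent install, driven by a pre-prepared response file.
./runInstaller -silent -responseFile /stage/response/app_server.rsp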

Let’s contrast this to the “perfect” automated setup, where the request triggers an automated process to deliver the new service.

In this example, the request initiates an automated workflow that completes the action and delivers the finished product without any human intervention along the path. The automation takes as long as it takes, and ultimately has to do most of the same work, but there is no added handover lag in this process.

I think it’s fair to say you would be expecting a modern version of this process to complete in a matter of minutes, but I’ve seen the manual process take weeks or even months, not because of “work time”, but because of the idle handover time and human processes involved…

They Took Our Jobs!

At first glance it might seem like this is a problem if you are employed in any of the teams responsible for doing the manual tasks. Surely the automation is going to mean job cuts, right? That depends really. In order to fully automate the delivery of any service you are going to have to design and build the blocks that will be threaded together to produce the final solution. This is not always simple. Depending on your current setup this might mean having fewer, more highly skilled people, or it might require more people in total. It’s impossible to know without knowing the requirements and the current staffing levels. Also, cloud provides a lot of the building blocks for you, so if you go that route there may be less work to do in total.

Even if the number of people doesn’t change as part of the automation process, you are getting work through the door quicker, so you are adding value to the business at a higher rate. From a DevOps perspective you have not added value to the business until you’ve delivered something to them. All the hours spent getting part of the build done equate to zero value to the business…

But we are doing OK without automation!

No you’re not! You’re drowning! You just don’t know it yet!

I never hear people saying they haven’t got enough projects waiting. I always hear people saying they have to shelve things because they don’t have the staff/resources/time to do them.

As your processes get more efficient you should be able to reallocate staff to projects that add value to the business, rather than wasting their lives on clicking the “Next” button.

If your process stays inefficient you will always be saying you are short of staff and every new project will require yet another round of internal recruitment or outsourcing.

Is this DevOps?

I’m hesitant to use the term DevOps as it can be a bit of a divisive term. I struggle to see how anyone who understands DevOps can’t see the benefits, but I think many people don’t know what it means, and without the understanding the word is useless…

Certainly automation is one piece of the DevOps puzzle, but equally if you have company resistance to the term DevOps, feel free to ignore it and focus on trying to sell the individual benefits of DevOps, one of which is improved automation…

Cheers

Tim…

nlOUG Tech Experience 2018 : The Journey Home

The trip home from nlOUG Tech Experience 2018 started pretty early, so true to form I didn’t sleep properly, for fear of sleeping through my alarm. At about 04:00 I was in the bath watching YouTube videos. 🙁

A couple of hours later I was at the station in Amersfoort waiting for the train to the airport where I was joined by Sabine Heimsath. Pretty soon we were on the train, where I made use of the wifi, as witnessed by Sabine (see her photo). 🙂 It was about 40 minutes to the airport, which was just about enough time to get out a blog post about the previous day.

Once at the airport I had a total brain freeze looking at the departure boards, so Sabine gave me a reboot and pointed me in the right direction. We were leaving from different parts of the airport, so we said our goodbyes and I was off to security…

Security was really busy, with the queue for the passport scanners moving rather slowly. Luckily they opened the second half of the scanners, which made things move a bit faster… I made my way over to the boarding gate and parked until my flight…

Boarding time came and we were told the cabin crew were missing… They arrived and we boarded pretty quickly, so I don’t think it delayed us drastically, even though I was personally having flashbacks to last week. I originally had a middle seat, but I was asked if I wanted to move to the empty exit row, and of course I said yes and got an aisle seat. The laptop came out as soon as we were in the air…

From the airport I decided to take the train into town as I wanted to pick something up. Unfortunately the shop didn’t have what I wanted, so I had to console myself with a giant burrito for breakfast. You win some, you lose some. 🙂 From the city centre it was a quick taxi ride home and the trip was complete.

What a difference a year makes. I was ill for the nlOUG Tech Experience 2017 event and just about made it through my sessions, then went back to bed. This year I was able to get involved in the whole conference and have fun.

Thanks very much to the folks in nlOUG for letting me come along to play. Thanks to the speakers and attendees that make this stuff possible. This was a self-funded event for me, but I’d still like to thank the Oracle ACE Program and the Oracle Developer Program for continuing to allow me to fly the flag.

Here are my posts related to this event.

Cheers

Tim…

nlOUG Tech Experience 2018 : Day 2

I had a rough start to day 2 of nlOUG Tech Experience 2018. The night before I was in bed and foolishly checked my email, only to find some problems at work. I got out of bed, logged in and was checking out the impact of a storage fault. Some databases had been down and some app servers weren’t exactly happy. It was some time after midnight when things started to stabilise. That meant the following morning I felt a little fuzzy…

Once I got to the conference the day started with a keynote about “Autonomous Data Management” by Penny Avril. Hopefully we will see Penny at UKOUG at the end of the year…

If you’ve followed the blog you will know I’m a fan of what Oracle is trying to achieve with this family of cloud services. To use my own words, it’s all about less time doing boring stuff, more time doing interesting stuff.

From there I went to see Ron Ekins talk about “DevOPS, Ansible and automation for the DBA”. Due to clashes I’ve managed to miss this session at each event where we’ve been together, although he did run through some of it with me in Ireland earlier in the year. Even though I was present during this session, I missed most of it as I was logged into work again. Sorry Ron. Someday I will actually see the full session. 🙂 Just in case you are curious, yes this was a train carriage.

The next session I went to was “All about linux memory usage by the Oracle database” by Frits Hoogland. If gaining knowledge is like peeling back the layers of an onion, Frits has got through a lot more layers than me. I was surprised at how little I knew about this subject before this session. Of course I will act all superior now like I always knew it, but seriously…

From there it was off to “Kicking the Tyres on Oracle Database 18c with Swingbench” by Dominic Giles. What’s not to love about Swingbench (and Dom)? I’ve been using Swingbench for years, long before I knew who Dom was. 🙂 Keep those releases coming, Dom!

The last session of my day was my presentation called “DBA Does Docker”, which is about my journey so far with Docker. I’m a big fan, but I’ve not drunk the KoolAid… I think the session went well, and all the demos worked. 🙂

After a quick closing ceremony, then some drinks and nibbles, the conference was over.

A few of us went into town to get some food and then before I knew it the day was over.

I will do all the proper thank you messages in the closing post when I get home, but thanks everyone for a great conference. It’s been a tough year so far and this was the first conference where I felt things went OK for me. That’s not a criticism of the other events I’ve been to. It’s about something not clicking into position with me. I’m hoping this event has broken my run of bad luck…

Cheers

Tim…

nlOUG Tech Experience 2018 : Day 1

Day 1 of nlOUG Tech Experience 2018 started with me missing the opening keynote to spend time talking with Frits Hoogland about all things Vagrant, Ansible and Docker…

The first session I went to was Penny Avril & Dominic Giles with “What’s New from Oracle Database Development”. This was a quick run through some of the key features that have been introduced in 12.2 and 18c, which sets the scene well for some of the other talks happening over the two days.

Next up was “Database Design Thoughts” by Toon Koppelaars. I think this type of session appeals on several levels. To a beginner it is full of solid facts about basic database design. To someone with more experience it’s more about hearing things you know, but from a different angle. I spoke with Toon about the session when it was over and I’m pretty sure I would not be able to present this type of session.

From there I went to see “SQL Model Clause: A Gentle Introduction” by Alex Nuijten. What I really need to do is go home and write an article about this now that I vaguely know what it is all about. Unfortunately I think I will leave it a couple of weeks and be clueless again. Alex did a really good job of explaining it, so it is up to me to get on the case soon!

From there it was two back-to-back sessions by me. First up was “Cool New Features for Developers in 18c and 12c”, which was a collection of things I think are cool that were added in 12.1, 12.2 and 18c. There were live demonstrations too, which went well. I ran out of time, but I felt happy with the presentation. I had fun!

My next session was “Make the RDBMS Relevant Again with RESTful Web Services and JSON”. It was a struggle to fit this into 45 minutes, but I hope I got the main message across without rushing too much. The live demos went smoothly too.

After the last session there was food and drinks and random chatting, with the odd rant, which you expect at tech events. All in all a great end to a great first day. 🙂

Cheers

Tim…

PS. At last year’s event I was ill and spent most of my time in bed when I wasn’t presenting (similar to Riga this year). It was nice to actually participate properly in the conference this year!

nlOUG Tech Experience 2018 : The Journey Begins

The trip to nlOUG Tech Experience 2018 started at a pretty normal time. I left the house at 08:00, which was far too early really, but you never know about the traffic when you are in rush hour, so I thought it better to be safe than sorry. Rather than the normal 30 minutes, it took about an hour to get to the airport, but once there I breezed through security and had a full 2 hours before the flight, so out came the laptop.

The flight to Amsterdam was delayed by about 15 minutes due to the curse of Schiphol. Luckily I got moved to an exit row seat and had loads of space, so out came the laptop.

From Schiphol to Amersfoort was a train ride of about 50 minutes. The train had free wifi, so out came the laptop.

Last year my hotel was a bus ride away from the event, but this year I booked a hotel near to the station, so it was only a short walk then I was in my room, so out came the laptop.

Having a bit of space and wifi makes the day feel far less wasted. I was pretty productive in the end…

I spent the evening going through my talks and demos making sure everything was OK. As mentioned in a previous post I now have three sessions, so it takes quite a while to rehearse… 🙂

Cheers

Tim…

It’s all about focus!

When I’m in airports I do a lot of people watching. One thing I notice is a total lack of focus in some people.

In the airport I have several distinct goals.

  • Get through check-in and/or bag drop as required.
  • Get through security.
  • Identify my boarding gate if it is already displayed.
  • If my boarding gate is listed, get to it to make sure I know where it is and how long it takes to get to it, so if I have time to wander off I’m not going to get into trouble later.

Only once these tasks are complete can I relax and while away the time. Now I understand things can get complicated when people are having to shepherd young children, but I see lots of single adults, or couples, that seem unable to focus on the task at hand…

As an example I recently witnessed someone being asked the same question three times before answering it. At this point you might be thinking it was because they were hard of hearing, or maybe struggling with the accent. Although that could be true, what I could see was they were not looking at the person dealing with them. Their attention was elsewhere, rather than focusing on the task at hand. This drives me crazy. You are asking for help, so pay attention you flippin’ idiot!

There are lots of characteristics that can be attributed to successful people, but I would suggest one of them has got to be the ability to focus. Being able to shut out everything else and focus on the task at hand is really important. You think you are good at multitasking, but you aren’t. It’s a lie. Sure, you can to some extent multitask mindless operations, but anything that needs proper concentration is single-threaded. By attempting to multitask all you are doing is performing substandard work. It takes time to switch between tasks, so when you think you are just checking your Twitter messages, you are actually wasting significantly more time… I notice a big difference in my productivity when I’m working from home, because home is really boring, with very few distractions. In contrast the office is full of people that just want a “quick chat” about something, me included. 🙂

One of the principles of agile development is to control Work in Process/Progress (WIP). This is important because it allows you to focus on a single task (user story or story point) and get it done and out of the way, before moving on to the next thing on the list (or kanban board). Since you are only ever focused on the current task, there is no need for context switching during the task. It also has some other benefits…

  • If you are like me, you get a kick out of ticking things off a list. This is something I’ve done for years, before I heard of kanban. Something like a kanban board just adds visibility to something you are probably doing anyway.
  • It’s easier to judge progress on large pieces of work if it is broken into steps.
  • Assuming the work moves to production in stages and can be made visible to the users, the users can see it happening too. This is important on long running projects where it’s easy to look like you’ve disappeared for 6 months before the finished product arrives.

There will always be some interruptions, like high priority incidents, but removing all but the essential distractions has a massive impact on productivity. This doesn’t have to be controlled by others and imposed on you. The trick is for you to be disciplined about when you do things. If you can’t live without checking social media, fine. Just do it between tasks, not during a task. If you check between the end of one task and the start of the next, you are already having a mental context switch, so the impact is much reduced compared to checking in the middle of a task. I don’t agree with companies trying to turn workers into mindless drones, but at the same time it is your duty not to waste time you are being paid for.

Most importantly, never stand in front of me in a queue and ignore the person on the desk who is trying to help you, or I’ll write a rambling blog post about you! 🙂

Cheers

Tim…

nlOUG Tech Experience 2018

Just a quick post to say I’ll be at nlOUG Tech Experience 2018 later this week.

I was originally meant to be doing two sessions, but due to someone dropping out I’m now doing three. 🙂

See you there!

Cheers

Tim…