Writing Tips : Don’t get blocked by a post you don’t want to write

I’ve written previously about writing and motivation, but I wanted to say something about self-inflicted demotivation, which is something I’ve been guilty of a number of times.

As you will probably know, I write a lot of posts about Oracle technology, but I’m not equally interested in every feature of the Oracle database. Sometimes this is because I just don’t see the point of a feature, and sometimes it is because I simply don’t get to use a feature very much, even if I think it is cool. It can be hard to be motivated to write about something that isn’t jumping out and screaming at you…

Baiting the trap with stupid goals

I often set myself little goals, and sometimes these are my undoing. I might make a list of topics to write about, and the “completionist” part of my brain adds things into the list that I don’t really care about.

For example, I might decide I want to write about all of the SQL new features in version X of the database, but there is something that I’m not interested in, so writing that post is a grind. I want to complete the list, but that one item on the list is not inspiring me…

Now the sensible thing to do is to just miss that post out and move on, but my stupid head gets locked into finishing the list, and I kind-of cripple my progress by forcing myself to do something I don’t really want to do.

A quick example

Something very similar happened to me this week. I had recently written four new posts, which I can’t publish until 23c is out of beta, and I had one more post to complete my list. The subject itself was OK, but there was a lot of setup involved, for very little payback. It felt like hours of work to prove a single sentence. Needless to say I was not highly motivated.

I kept telling myself to move on and do something different, but in the back of my mind I kept thinking about that final tick on the list…

So what did I do? I wasted the week playing Raft on peaceful mode. Just cruising round the sea picking up junk and gathering resources from reefs. I’ve completed the game about 15 times on harder difficulties, but I wanted something mindless to do, rather than face writing that post.

The solution

This is a case of “do as I say, not as I do”, but you really need to avoid situations that you know will block you. Each of us will have different blockers, and different displacement activities we use to distract ourselves, but I bet most of us can spot a pattern that triggers us…

If you find yourself working on a post that is killing you, just walk away. You can always come back to it later…

Check out the rest of the series here.

Cheers

Tim…

Update Oracle Database Time Zone Files (Poll Results Discussed)

In case you didn’t know, countries occasionally change their time zones, or alter the way they handle daylight saving time (DST). To let the database know about these changes we have to apply a new database time zone file. The updated files have been shipped with upgrades and patches since 11gR2, but applying them to the database has always been a manual operation.

With the recent switchover to daylight saving time in the UK, I decided to post this question on Twitter yesterday.

How often do you update your Oracle database time zone files?

Fewer than 6% of people update their time zone files on a regular schedule. Nearly 45% only do the updates after a database upgrade, and nearly 50% never do it at all.

I can’t say I’m surprised by the results. In terms of the reasoning for these responses, I’ll reference some of the comments on Twitter.

Regular Schedule

“Every ru patch, also thanks to 19.18 it is included now and with out of place upgrade and autoupgrade, i dont do it anymore 🙂 all automatic.”

Mustafa KALAYCI

If you are using AutoUpgrade to patch to a new Oracle Home, then applying updated time zone files is really easy. Before 19.18 it’s just a single entry “timezone_upg=yes” in the AutoUpgrade config file. From 19.18 onward the update of the time zone file is the default action (see here).
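As a rough illustration, a minimal AutoUpgrade config file for this might look something like the following. The log directory, home paths and SID are placeholders, and from 19.18 onward the timezone_upg entry is unnecessary because it is the default, so treat this as a sketch and check the AutoUpgrade documentation for the full set of parameters your scenario needs.

  global.autoupg_log_dir=/u01/app/oracle/autoupgrade
  upg1.source_home=/u01/app/oracle/product/19.0.0/dbhome_1
  upg1.target_home=/u01/app/oracle/product/19.0.0/dbhome_2
  upg1.sid=cdb1
  upg1.timezone_upg=yes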

So interestingly, there may be some people who are now applying time zone file updates without even knowing it…
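If you want to know where you stand, a couple of quick queries will show the time zone file version the database is currently using, and the latest version available in the Oracle home. A minimal sketch, assuming a version recent enough to include DBMS_DST.GET_LATEST_TIMEZONE_VERSION.

  -- Time zone file version currently in use by the database.
  SELECT * FROM v$timezone_file;

  -- Latest time zone file version shipped with the Oracle home.
  SELECT dbms_dst.get_latest_timezone_version FROM dual;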

After Upgrades

This feels like the natural time to do it for me, and it seems many other people feel the same.

As mentioned previously, AutoUpgrade makes it simple. From 21c onward AutoUpgrade is the main upgrade approach, even for those that have resisted using it for previous versions, so this question goes away from an upgrade perspective.

We can specifically tell it not to perform the action using “timezone_upg=no”, but I’m guessing most people will just go with the default action.

Never

“NEVER. As an American-only company with very little need for time-specific data, quite unnecessary. Horrible design with no rollbacks and headaches w/data pump. Just not worth it if possible to avoid”

Taylor

I totally understand this response. Many of us work with systems that are limited to our own country. Assuming our country doesn’t alter its own daylight saving time rules, using an old time zone file is unlikely to cause an issue.

When you consider the number of people that run *very old* versions of Oracle, you can see that using old versions of the time zone file doesn’t present a major issue in these circumstances.

With reference to the data pump issue, I’ve experienced this, and it was also picked up in the comments.

“My hypothesis: Most do it when datapump tells they need to do it to get the import file they just received to load”

Connor McDonald

Offline/Online Operation

The point about this being an offline operation was raised.

“Well it is an offline operation, so pretty exceptional thing to do. Only in a rare case where some feature requires the upgrade – like DataPump failing or query over dblink failing.”

Ilmar Kerm

Downtime is never welcome, but it was also pointed out it can be an online operation in 21c.

“Offline will be a thing of the past…

https://docs.oracle.com/en/database/oracle/oracle-database/21/refrn/TIMEZONE_VERSION_UPGRADE_ONLINE.html

Connor McDonald

Conclusion

It seems like the time zone file version is not high on the list of priorities for most people, providing it is not causing a data pump issue. I totally understand this, and I myself only consider it during database upgrades.

I always like reading these poll results. I know the sample size is small, but it gives you a good idea of how your beliefs compare to the wider audience.

If you are interested to know how to manually upgrade your time zone file, you can read about it here.

Cheers

Tim…

Stupid is as stupid does! Outsourcing, Agile, DevOps and Cloud.

Outsourcing From Hell

Many years ago, when outsourcing first became a thing, you would often see phrases like, “you can’t outsource a problem”. That can be interpreted in several ways, but one which comes to mind is the idea that if you can’t properly define what you need, how can you expect someone to deliver the answer to your prayers?

During the early days of outsourcing there were many horror stories, but in my opinion many of them were self-inflicted. Companies with terrible project management believed that a load of cheap offshore workers would somehow make up for the fact the project manager didn’t know their ass from their elbow. Companies not putting in the effort up front to understand their requirements, then moaning about what was delivered. Companies who had no understanding of the product/development stack had no way to judge the competence of the offshore team they hired. These sorts of problems caused internal development teams to fail, so of course they would also cause outsourced teams to fail.

If you are having problems with internal development teams, outsourced teams and external vendor relationships, how can you not turn this around and ask yourself the question, “could I/we be the problem?”

Fragile, not Agile

How many times have you heard people/companies talk about agile, while insisting on doing everything possible to make sure agile becomes fragile? Those same people/companies will then insist that agile is not all it’s cracked up to be. This sort of nonsense led a group of us to come up with this, mocking what we were seeing…

There is no framework or methodology you can’t screw up if you are an idiot.

DevOops, not DevOps

Much like Agile, DevOps has been one of those things people love to talk about without even doing some basic reading. They are either quick to point out the limitations of DevOps, or they outwardly promote it while sabotaging it from within to protect their silos.

If you have totally dysfunctional silos, the chances are you are not going to save yourself with DevOps, because the people that allowed those silos to become dysfunctional will want to wield control over DevOps, thereby guaranteeing it will fail.

Dark Clouds on the Horizon

In a repeat of the “you can’t outsource a problem” issue, the cloud isn’t magic. There is a lot of stuff you need to understand before you can do something successfully on the cloud. Stuff like pricing, security, network topology, platform offerings, “best practices”, hybrid (cloud + on-prem) systems all need to be considered before you start building anything. Just because you can fire up a VM in the cloud in 30 seconds, it doesn’t mean it is sound to build your business around that…

There have been numerous stories over the years where companies have turned cloud hype into cloud hell. It’s not because there is something inherently wrong with the cloud. It’s because the company has a broken approach to everything, so of course they failed when they launched into their cloud initiative…

Conclusion

Before you launch into a tirade about how X is crap and Y is much better, just make sure it’s not you that’s the problem. Stupid is as stupid does!

Cheers

Tim…

Oracle Database Patching (Poll Results Discussed)

Having recently put out a post about database patching, I was interested to know what people out in the world were doing, so I went to Twitter to ask.

As always, the sample size is small and my followers have an Oracle bias, so you can decide how representative you think these numbers are…

Patching Frequency

Here was the first question.

How often do you patch your production Oracle GI/DB installations? (Pick the nearest that applies)

There was a fairly even spread of answers, with about a third of people doing quarterly patching, and a quarter doing six-monthly patching. I feel like both these options are reasonable. About 20% were doing yearly patching, which is starting to sound a little risky to me. The real downer was that over 22% of people never patch their databases. This is interesting when you consider the recent announcement about monthly recommended patches (MRPs).

For those people that never patch, I can think of a few reasons off the top of my head.

  • Lack of testing resource. I think patch frequency has more to do with testing than any other factor. If you have a lot of databases, the testing resource to get through a patching cycle can be quite considerable. This is why you have to invest some time and money into automated testing.
  • If it ain’t broke, don’t fix it. The problem is, it is broken! How long after your system has been compromised will it be before you notice? How are your customers going to feel when you have a data breach and they find out you haven’t even taken basic steps to protect them? I don’t envy you explaining this…
  • Fear of downtime. I know downtime is a real issue to some companies, but there are several ways to mitigate this, and you have to balance the pros and the cons. I think if most people are honest, they can afford the downtime to patch their systems. They are just using this as an excuse.
  • Patching is risky. I understand that patches can introduce new issues, but that is why there are multiple ways to patch, with some being more conservative from a risk perspective. I think this is just another excuse.
  • Out of support database versions. I think this is a big factor. A lot of people run really old versions of the database that are no longer in support, and are no longer receiving patches. I don’t even think I need to explain why this is a terrible idea. Once again, how are you going to explain this to your customers?
  • Lack of skills. We like to think that every system is looked after by a qualified DBA, but the reality is that this is just not true. I get a lot of questions from people who are SQL Server and MySQL DBAs that have been given some Oracle databases to look after, and they freely admit to not having the skills to look after them. Even amongst Oracle DBAs there is a massive variation in skills. Oracle patching has improved over the years, but it is still painful compared to other database engines. Just saying.

Type of Patching

This was the second question.

When patching your production Oracle GI/DB installations, which method do you use?
In-Place = Current ORACLE_HOME
Out-Of-Place = New ORACLE_HOME

This was a fairly even split, with In-Place winning by a small margin. Oracle recommend Out-Of-Place patching, but I think both options are fine if you understand the implications. I discussed these in my previous post.

Conclusion

I think of patch frequency in a similar way to upgrade frequency. If you do it very rarely, it’s really scary, and because nobody remembers what they did last time, there are a bunch of problems that occur, which makes everyone nervous about the next patch/upgrade. There are two ways to respond to this. The first is to delay patching and upgrades as long as possible, which will result in the next big disaster project. The second is to increase your patch/upgrade frequency, so everyone becomes well versed in what they have to do, and it becomes a well oiled machine. You get good at what you do frequently. As you might expect, I prefer the second option. I’ve fought long and hard to get my company into a quarterly patching schedule, and it will only decrease in frequency over my dead body!

Assuming the results of these polls are representative of the wider community, I feel like Oracle need to sit up and take notice. Patching is better than it was, but “less bad” is not the same as “good”. It is still too complicated, and too prone to introducing new issues IMHO!

Cheers

Tim…

Database Patching : It’s a difficult subject

If you came here hoping I was going to say there are valid reasons not to patch, you are out of luck. There is never a valid reason not to patch…

Instead this post is more about the general approach to patching. I’ve spent 22+ years writing about Oracle, including how to install it, but I’ve written practically nothing about how to patch a database. My stock answer is “read the patch notes”, and to be honest that is probably the best thing anyone can do. Although patching is a lot more standardized these days, it’s still worth reading the patch notes in case something unexpected happens. In this post I just want to talk about a few top-level things…

Patching to a new ORACLE_HOME

There are two big reasons for patching to a new ORACLE_HOME, or out-of-place patching.

  1. You can apply the binary patches to the new home while the database is still running in the old home, so you reduce the total amount of downtime.
  2. You have a natural fallback in the event of wanting to revert the patch. You don’t have to wait for the patch rollback to complete.

There are some downsides though.

  1. It requires extra space to hold both the unpatched and patched homes, until you reach a point where you are happy to remove the unpatched home.
  2. If you have any scripts that reference the ORACLE_HOME, they will need to be updated. Hopefully you’ve centralized this into a single environment setup script.
  3. I guess it’s a little more complicated, and the patch notes are not that helpful.

So should you follow the recommendation of patching to a new home or not? The answer as always is “it depends”.

The reduction in downtime for a single instance database is good, but if you are running RAC or Data Guard, this isn’t really an issue as the database remains online for most of the patching anyway. Having a quick fallback is great, but once again if you are running RAC or Data Guard this isn’t a big deal.

If you are running without RAC or Data Guard, you have made a decision that you can tolerate a certain level of downtime, so is taking the system down for an hour every quarter that big a deal? I’ve heard of folks who use RAC and/or Data Guard who still bring the whole system offline to patch, so the decision is probably going to be very different for people, depending on their environment and the constraints they are working with.

I hope you’re taking OS and database backups before patching. If something catastrophic happens, such that a rollback of the patch is not possible, you can recover your original home and database from the backups. Clearly this could take a long time, depending on how your backups are done, but the risk of loss is low. So the question is, can you tolerate the additional downtime?

You have to make a decision on the pros and cons of each approach for you, and of course deal with the consequences. If in doubt, go with the recommendation and patch to a new home.
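To make that a bit more concrete, here is a very rough sketch of an out-of-place patch of a single instance database. The paths, SID, patch location and response file are placeholders, I’ve skipped plenty of detail, and some of the specifics (like the runInstaller switches) should be checked against your version, so treat it as an outline of the shape of the process rather than a recipe. Read the patch notes.

  # Placeholders.
  export ORACLE_SID=cdb1
  OLD_HOME=/u01/app/oracle/product/19.0.0/dbhome_1
  NEW_HOME=/u01/app/oracle/product/19.0.0/dbhome_2

  # 1) Software-only install of the new home (after unzipping the base software into it),
  #    applying the RU during the installation, while the database is still running in
  #    the old home.
  export ORACLE_HOME=$NEW_HOME
  cd $ORACLE_HOME
  ./runInstaller -silent -responseFile /u01/software/db_software.rsp -applyRU /u01/software/ru_patch

  # 2) Downtime starts. Shut down the database from the old home.
  export ORACLE_HOME=$OLD_HOME
  export PATH=$ORACLE_HOME/bin:$PATH
  echo "shutdown immediate;" | sqlplus / as sysdba

  # 3) Switch to the new home. Copy the spfile, password file and network config files
  #    across (or use read-only Oracle homes so this isn't necessary), and update
  #    /etc/oratab plus any environment scripts that reference the old home.
  export ORACLE_HOME=$NEW_HOME
  export PATH=$ORACLE_HOME/bin:$PATH
  echo "startup;" | sqlplus / as sysdba

  # 4) Apply the SQL changes for the patch.
  $ORACLE_HOME/OPatch/datapatch -verbose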

Read-only Oracle homes

Read-only Oracle homes were introduced in 18c (here) as an option, and are the default from Oracle 21c onward. One of the benefits of read-only Oracle homes is they make switching homes so much easier. You haven’t got to worry about copying configuration files between homes, as they are already located outside the home.
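For what it’s worth, enabling and checking a read-only home looks something like this on 18c/19c. It has to be done against a software-only home, before any databases are created in it, and this is just a quick sketch, so check the docs for your version.

  # Enable the read-only Oracle home feature (software-only home, no databases yet).
  $ORACLE_HOME/bin/roohctl -enable

  # Report where the writable "home" and config locations now live, outside the software home.
  $ORACLE_HOME/bin/orabasehome
  $ORACLE_HOME/bin/orabaseconfig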

Release Update (RU) or Release Update Revision (RUR)?

You have a choice between patching using a Release Update (RU), or a Release Update Revision (RUR). To put it simply, a RU contains not only the latest security patches and regression fixes, but may also include additional functionality, so the risk of introducing a new bug is higher. A RUR is just the security patches and regression fixes. Unlike the Critical Patch Updates (CPUs) of the past, that ran on endlessly, RURs are tied to specific RUs, so you will end up applying the RUs, but at a later date, when hopefully the bugs have been sorted by the RUR…

The folks at Oracle suggest applying the RUs, which is what I (currently) do. Some in the Oracle community suggest applying RURs is the safer strategy. If you look at the “Known Issues” for each RU, and the list of recommended one-off patches that should be applied after the RU, you can see why some people are nervous of going directly to RUs.

Once again, this comes down to you and your experience of patching with the feature set you use. If you are finding RUs are too problematic, go with the RUR approach. You can always change your mind at any time…

Monthly Recommended Patches (MRPs)

There’s a new kid on the block, starting with 19.17 on Linux: Monthly Recommended Patches (MRPs). They replace RURs. There are 6 MRPs per RU, with each MRP containing the RU and the current batch of recommended one-off patches, as documented in MOS Note 555.1.

I’m assuming these are rolling and standby-first patches, but I can’t confirm that yet.

RAC Patching : Rolling Patches

Rolling patches can be applied one node at a time, so there are always database instances running, which means the database remains available for the whole of the patching process.

Release Updates (RUs) and Release Update Revisions (RURs) are always rolling patches, so it makes sense to take advantage of this approach. If you are applying one-off patches, these may not be rolling patches, so always check the patch notes to make sure.
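For a GI/RAC home the rolling approach boils down to patching one node at a time, while the instances on the other nodes keep running. Something along these lines, run as root on each node in turn, with placeholder paths; the exact invocation depends on your setup, so go with what the patch notes say.

  # On node 1, as root, while the instances on the other nodes keep running.
  export PATH=$PATH:/u01/app/19.0.0/grid/OPatch
  opatchauto apply /u01/software/gi_ru_patch

  # Once node 1 is back up and its instances have rejoined, repeat on the next node.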

Even when rolling patches are available, you can still make the decision to take the whole system offline to apply the patches. I’m not sure why you would want to do this, but the option is there for you.

Data Guard : Standby-First Patches

Release Updates (RUs) and Release Update Revisions (RURs) are always standby-first patches. This gives you some flexibility on how you approach patching your system. Here are two scenarios with a two node Data Guard setup, where node 1 is the primary and node 2 is the standby.

Scenario 1 : Switchovers

  • Patch the node 2 binaries (not datapatch) and bring the standby back into recovery mode.
  • Switch over roles, making node 2 the primary and node 1 the standby.
  • Patch the node 1 binaries (not datapatch) and bring the standby back into recovery mode.
  • Run datapatch against node 2 (the primary database).
  • Optionally switch over roles, making node 1 the primary database again.

Scenario 2 : No switchovers

  • Patch the node 2 binaries (not datapatch), but don’t start the standby.
  • Patch the node 1 binaries (not datapatch) and start the database.
  • Start the standby on node 2.
  • Run datapatch on node 1 (the primary).

Scenario 1 reduces downtime, as the primary is always running while the standby is having its binaries patched. Scenario 2 is simpler, but involves more downtime, as the primary is out of action while the binaries are being patched.
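As a very rough sketch, the key pieces of scenario 1 on a single instance Data Guard setup look something like the commands below. The database names, paths and patch location are placeholders, in-place binary patching is shown purely for brevity, and I’m assuming the Broker is managing the configuration.

  # On node 2 (the standby): stop it, patch the binaries, then bring it back to mount
  # so redo apply can resume.
  echo "shutdown immediate;" | sqlplus / as sysdba
  cd /u01/software/ru_patch
  $ORACLE_HOME/OPatch/opatch apply
  echo "startup mount;" | sqlplus / as sysdba

  # Switch roles, so node 2 becomes the primary (Broker syntax, prompts for the password).
  dgmgrl sys@cdb1 "switchover to cdb1_stby"

  # Patch the node 1 binaries in the same way and bring it up as the new standby, then
  # run datapatch against the new primary on node 2.
  $ORACLE_HOME/OPatch/datapatch -verbose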

Remember, one-off patches may not be standby-first patches, so you may only have the option of scenario 2 when applying them. You have to read the patch notes.

OJVM Patching : Which approach?

Oracle 21c has simplified the OJVM patching situation. In previous releases the OJVM patches were completely separate. The grid infrastructure (GI) and database patches for 21c include the OJVM patches. For 19c the OJVM patches are still separate.

The separate 19c OJVM patches come with additional restrictions. They are not standby-first patches, and according to the patch notes, they can only be applied as RAC rolling patches if you use out-of-place patching.

Why don’t you write about patching much?

Writing about patching is difficult, because everyone has a unique environment, and their own constraints placed on them by their business. I’ve always avoided writing too much about patching because I know it’s opening myself up for criticism. Whatever you say, someone will always disagree because of their unique situation, or demand yet another patching scenario because of their unique environment. You’re damned if you do, and damned if you don’t.

I’ve recently written a few patching articles for specific scenarios (here). I may add some more, but it’s not going to be a complete list, and don’t expect me to write articles about stuff I don’t use, like Exadata. These are purely meant as inspiration for new people. Ultimately, you need to read the patch notes and decide what is best for you!

Let the cloud do it!

If all this is too much hassle, you do have the option of moving your database to the cloud and letting them worry about patching it. 🙂

Conclusion

Read the patch notes!

Cheers

Tim…

Why Automation Matters : Your automation is your documentation

How many times have you been following a process defined in a knowledge base note, only to find something has been omitted, or is unclear? This may be because of empire building, laziness or more often oversight, but the result is the same. Unless your processes are well documented, you always run the risk of progress drawing to a halt when “the right person” is not present.

One of the great things about automation is, by definition, every step of the process must be defined. If person X is on holiday, you can be 100% sure all the steps to complete the automation are present.

Of course, this doesn’t stop people writing stupid, ugly and hard to understand code, but your development process should have some control over that. Even if it doesn’t, you know the answer is there. It must be there because the process works.

Does that mean you don’t need to document automations?

No. The automations should be self-documenting. I don’t mean that in the sense that “my code is so good it’s self-documenting”, which is the calling card of the lazy developer. I mean that the automation code in your source control system should be documented. Markdown is a quick and easy tool that allows us to document our code, and the good thing about it is it remains close to the code. It’s right next to it in the repository. When we change our code, we should revise our documentation where necessary. The documentation becomes a living document, rather than some 1000-page Word document that nobody ever reads, and nobody updates.

But documentation sucks!

Documentation gets a really bad rap because most people are doing it wrong. They fall into one of these traps.

  • They produce too little, which means people are unlikely to find what they are looking for.
  • They produce too much, which makes it daunting to look at, so nobody bothers.
  • It’s overly formal, which is dry and boring.
  • It’s hidden, or at least separate to the code, so people might not even know it exists.

Basic pointers and how-to examples are good enough for 90% of the cases, so make these the focus of your documentation. You can always give links to more detailed documentation for those people that need a little more. The context is slightly different, but this post on Structuring Content should give you some clues about how to structure your documentation. After all, documentation is content. 🙂

Conclusion

For some companies an automation or infrastructure as code project may well be the first time in their company history that they have got everything about a process documented. That has to be a positive result for the company!

Check out the rest of the series here.

Cheers

Tim…

Test Cases Are Important : Again…

Over the weekend I was reminded of the importance of test cases again. I’ve written about this before, with probably the most consistent post here.

If you want my opinions of test cases, go and read that. In this post I want to tell a little story to demonstrate why I think test cases are important. I’m going to keep things a bit vague, because I don’t want to openly criticise the person in question. They actually did an OK job of expressing their issue, but it did highlight some things.

The issue

I got a question that suggested a recent upgrade on Autonomous Database had altered the behaviour of something. Every time you patch or upgrade software there is a possibility of change, whether it is an intentional behaviour change, or a bug. The person had provided some evidence that did seem to suggest there was an issue, so my interest was piqued. Unfortunately there wasn’t a test case, but I have an article that includes a test case that was similar, so I was able to knock something together pretty quickly.

The test case

The first thing I did was try out my test case in an on-prem installation. Yes, I know the potential issue related to autonomous database, but I wanted to see the test case working, and prove to myself that what I believed to be true actually was true. Think of this as an experimental control. The test case ran as expected on-prem, which was good.

I then moved to trying to replicate this issue on autonomous database. Most cloud databases come with some restrictions on what you can do, so my test case setup was not ideal for running on autonomous database. I had to revise the setup a little. APEX to the rescue. Before you ask, yes, I did rerun the on-prem test with the new setup to make sure the control was still valid. 🙂 Having set up the base data, I was able to run the code for my test case, and it ran just the same in autonomous database as it did on-prem.

Test case vs in situ

In the original question, the issue was directed specifically at one feature, but my test case seemed to prove that feature was working as expected. When you are doing scientific experiments you try to reduce the number of variables. Too many variables and you have no idea what caused the result, so you can’t come to any reasonable conclusion. I was trying to prove a feature works as expected, so I reduced the possible variables to the point where I was specifically testing that feature, and it seems to work as expected.

So that’s the end of it right? Well not really. I’ll use an example from biology to explain. Biology is complicated because living things are complicated. When you are doing chemistry, it’s possible to isolate specific compounds and put them together in a controlled manner to observe an interaction. Kind-of like my test case, this is a very controlled approach. Living things have loads of working parts, and you can’t isolate things without killing the organism, so you have to deal with the fact you are working in the middle of a whole bunch of interactions. You still try to minimise your variables, but you have to accept that you can’t always do that to the extent you would like. You may define experimental controls that discount the other possible reasons for the result. This distinction between running things in isolation and in situ is really important. What has this got to do with my test case?

My test case is run in isolation. The original poster clearly has an issue in their system. Perhaps there is something in their system that affects the way this feature works, so although the feature works in isolation, maybe there is an issue in situ. My test case hasn’t resolved the issue. It has just ticked one thing off the list of possible causes.

What next?

Having ticked the base functionality off the list of possible causes, we then have to move one step higher and incorporate more elements of the system, to see how that affects things. That could be as simple as us using different session parameters, or it could be something more fundamental with the design of their system. Hell, it might be that their data is corrupt for all I know (I really hope not).

It’s also possible that when looking at the “next layer”, we notice something that shows the original test case is invalid. That sort of thing happens.

What’s the point of this post?

I often get the impression some people think problem solving is some kind of witchcraft. In reality it is painstaking, meticulous work. I look at all the people I think are good and they have one thing in common. They put in the work and grind through this stuff. Yes, you get quicker the more experienced you get, but you still have to put in the effort. People are often looking for the “magic button” to solve their problem, but there isn’t one. If it were that simple, it would already be built into every piece of software you use. 🙂

You need a test case, even if all it does is prove your initial conclusion was wrong, and allows you to focus your attention elsewhere.

Once again, the question that prompted this post was not bad. The person did an OK job of expressing themselves. This is just a post that was triggered by that interaction. If we get to the bottom of their issue, and it proves to be interesting, I will probably write up something more specific about it. 🙂

You might find it useful to read these, as they are relevant to this post.

Cheers

Tim…

Update: This looks like it is a data/understanding issue. It’s starting to sound like the data isn’t stored in the format the original poster expected, so they are trying to do something with it that is impossible. If this is the case, it’s nothing to do with the upgrade.

User Experience – A Little Rant Again

I had a bit of a negative post yesterday, and it got me thinking of these two posts.

I’ve said some of this stuff before, but I want to bring it all into a slightly different context.

Good user experience is…

Good user experience is not about forcing me to follow your atomic implementation of a feature. What do I mean by this? Let’s take a look at some examples of getting it right (IMHO) from Oracle.

An Oracle REST Data Services (ORDS) web service is made up of a module with one or more templates, each with one or more handlers. We could define our service by defining a module, template and a handler separately, because that’s how the underlying implementation of an ORDS web service works. It’s fine, but it’s a bit over the top if I just want a quick little web service based on a query. That’s why we have been given the DEFINE_SERVICE procedure, allowing us to do all that other stuff in a single call (see here). For simple services this is all you need.
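Just to illustrate the difference, a quick service based on a query can be knocked up with a single call, something like the sketch below. The module name, paths and query are made-up examples rather than anything from a real system, and it assumes ORDS is already enabled for the schema.

  BEGIN
    -- Module, template and handler created in one shot, defaulting to a GET handler.
    ORDS.define_service(
      p_module_name => 'emp.v1',
      p_base_path   => 'emp/v1/',
      p_pattern     => 'employees/',
      p_source      => 'SELECT empno, ename, job FROM emp ORDER BY ename'
    );
    COMMIT;
  END;
  /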

The database scheduler is a complex beast. We can define loads of things like schedules, programs, arguments, job classes, windows and of course jobs. That’s fine, but 99% of the time we just want a simple job, and the CREATE_JOB procedure allows us to create one in a single call (see here).
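Similarly, the single-call version of a scheduler job looks something like this. The job name, PL/SQL call and schedule are just made-up examples.

  BEGIN
    -- A self-contained job defined and enabled in a single call.
    DBMS_SCHEDULER.create_job(
      job_name        => 'nightly_cleanup_job',
      job_type        => 'PLSQL_BLOCK',
      job_action      => 'BEGIN my_cleanup_proc; END;',  -- hypothetical procedure
      start_date      => SYSTIMESTAMP,
      repeat_interval => 'FREQ=DAILY; BYHOUR=1; BYMINUTE=0',
      enabled         => TRUE
    );
  END;
  /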

In both cases we can choose between doing things the long/verbose way, or use the “cheat code” and do stuff in a single call. This is exactly the sort of thing I like when I’m using a feature. I want to know the flexibility is there if I need it, but if 99% of my requirements don’t, I want the cheat code so I can do what I need to do and move on. This also makes the feature more accessible to new people…

Good user experience is not…

As I mentioned above, good user experience is not about forcing me to follow your atomic implementation of a feature. Someone should take a step back and ask what would “normal” users really like? The answer is probably giving them an option to zone out and get all the prerequisites and config done for them. It’s not making them spend a weekend trying to figure out how to enable a feature, then finding it doesn’t really work properly anyway…

I’m a generalist. I have to work with lots of different products. When I open the docs and I see a list of prerequisites, and then multiple commands to actually set stuff up, my heart sinks. I want a “we’ll do everything for you” option. That might sound funny because of my history, and if companies did that it would make my website redundant, but I feel we need to progress. We’ve been doing this nuts & bolts crap for too long. If I can automate it, Oracle can automate it. If Oracle can automate it, why don’t they?

I don’t want to name and shame. I’ve made some positive comments about Oracle in the previous section, but you know there are a whole bunch of Oracle things I could use as examples of what not to do. Oracle aren’t alone here. It applies to lots of other companies too.

But Tim, I want to…

I can already hear people typing their responses about their need to be in control and their obsessive configuration disorder. Shut up. I don’t care. The chances are, if you are reading this post, you are probably one of the people that can cope with all this tech, but there are many people who can’t, or don’t want to.

Won’t someone think of the children customers

I am a customer. My company is a customer. I can think of two things my company refuse to pay for because the functionality in question is unsupportable if I’m not available. Those are features we need, but won’t buy because they are overly complex for normal people to do well.

Now you can argue that cloud services will solve all these issues, but cloud adoption varies between regions, and maybe people will not pick your cloud. My company are a perfect example of that. We’ve consolidated on Azure, and although we don’t run any Oracle databases there yet, if we run Oracle on the cloud, it will probably be on Azure.

If you heard someone say, “I used to get a punch in the face every day, but now it’s only once a week. Things are good!”, you would think they were crazy. Less bad is not the same as good. I often think companies bring out tools and utilities that are “less bad” than what they had before. Not actually “good”. If you have been in the trenches, “less bad” might feel “good”, but it’s not.

I realise this is another rant, but I think it’s a subject that is worth a rant. I use a wide variety of tech from a number of companies, and some of them get on my nerves at times, because it feels like user experience is an afterthought. You can’t expect everyone to no-life the learning curve for your products. I’m just saying how I feel, and I’m pretty sure I’m not alone here!

Cheers

Tim…

PS. I’m playing a bit fast and loose with the term user experience in this post, but hopefully you get what I mean…

DG PDB : Oracle Data Guard for Pluggable Databases in 21c, and why you shouldn’t use it!

Last month you may have noticed the announcement of DG PDB. It’s Data Guard for PDBs, rather than CDBs, introduced in the Oracle 21.7 release update.

How do you use it?

I’ve had a play around with it, which resulted in this article.

I also did a Vagrant build, which includes the build of the servers, the database software installations, database creations and the prerequisites, so you can jump straight to the DG PDB configuration section in the article. You can find that build here.

So that’s the basic how-to covered, and I really do mean “basic”. There is a lot more people might want to do with it, but it’s beyond the scope of my little Vagrant build.

What do I think about it?

Well I guess you know how this is going to go, based on the title of this post. I don’t like it (yet), but I’m going to try and be a bit more constructive than that.

  • It is buggy! : I know 21c is an innovation release, but this is a HA/DR solution, so it needs to be bulletproof and it’s not. There are a number of issues when you come to use it, which will most likely be fixed in a future release update, or database version, but for now this is a production release and I don’t feel like it is a safe pair of hands for real PDBs. That is a *very* bad look for a product of this type.
  • Is it Data Guard? Really? : Once again, I know this is the first release of this functionality, but there are so many restrictions associated with it that I wonder if it is even deserving of the Data Guard name. I feel like it should have been a little further along the development cycle before it got associated with the name Data Guard. The first time someone has a problem with DG PDB, and they definitely will, they are going to say some choice words about Data Guard. I know this because I was throwing around some expletives when I was having issues with it. That’s not a feeling you want associated with one of your HA/DR products…
  • Is this even scriptable? : The “add pluggable database” step in the DGMGRL utility prompts for a password. Maybe I’ve missed something, but I didn’t see a way to supply this silently. If it needs human interaction it is not finished. If someone can explain to me what I’ve missed, that would be good. If I’m correct and this can’t be done silently, it needs some new arguments. It doesn’t help that it consistently fails the first time you call it, but works the second time. Ouch!
  • Is the standby PDB created or not? : When you run the “add pluggable database” command (and it eventually works) it creates the standby PDB, but there are no datafiles associated with it. You have to copy those across yourself. The default action should be to copy the files across. Oracle could do it quite easily with the DBMS_FILE_TRANSFER package, or some variant of a hot clone. There should still be an option to not do the datafile copy, as some people might want to move the files manually, and that is fine, but to not have a way to include the file copy seems a bit crappy.
  • Ease of use : Oracle 21c introduced the PREPARE FOR DATA GUARD command, which automates a whole bunch of prerequisites for Data Guard setup, which is a really nice touch. Of course DG PDB has many of the same prerequisites, so you can use PREPARE FOR DATA GUARD to get yourself in a good place to start, but I still feel like there are too many moving parts to get going. I really want it to be a single command that takes me from zero to hero. I could say this about many other Oracle features too, but that’s the subject of another blog post.
  • Overall : A few times I got myself into such a mess the only thing I could do was rebuild the whole environment. That’s not a good look for a HA/DR product!

Conclusion

I’m sorry if I’ve pissed off any of the folks that worked on this feature. It wasn’t my intention. I just don’t think this is ready to be included in a production release yet. I’m hoping I can sing the praises of a future release of this functionality!

Cheers

Tim…

PS. I’m reminded of this post about The Definition of Done.

The Efficiency Paradox : Same Term, Different Meanings?

I’ve recently come across the term “Efficiency Paradox” being used by different people, in different contexts, and giving it different meanings. I thought I would share them…

The Efficiency Paradox in Economics

In 1865 William Stanley Jevons postulated that the more efficient a process gets in terms of resource usage, the higher the demand you will see for that resource. This seems counterintuitive, as you might think the more efficient a process is, the fewer resources it requires, and therefore total resource usage would go down. Instead, as a process becomes more efficient, costs drop and that drives demand, which can eventually result in more of the resource being needed. This is the heart of the Jevons Paradox, which is also referred to as the Efficiency Paradox by some sources.

Cost is always an important factor. We are currently going through a cost of living crisis in the UK. One of the factors affecting this is the cost of power. People are looking at ways to save money by reducing their power usage. When power was cheaper many people didn’t pay any attention to saving power. Now it is expensive, every little bit matters.

The Efficiency Paradox in Gaming

I watched a video by Josh Strife Hayes, where he discussed the impact of guides and wikis on the enjoyment of playing video games. The term “grinding” refers to highly repetitive tasks that you must do to achieve a goal. Grinding can be exhausting, but when you achieve your goal there is a sense of satisfaction. Some games require a certain amount of detective work, where you try to figure out how to progress. Once again, the effort of trying to figure out how to progress can be exhausting, but the satisfaction of completing the task is high.

With the advent of the internet, there are loads of videos, wikis and websites dedicated to helping you play games in the most efficient manner possible. They might tell you how to minimise grinding, or flat out give you the answer to puzzles. These guides reduce the amount of time it takes to complete a task in a game, making you more efficient, but because you never have to deal with the adversity, you never get the same satisfaction when you complete a task.

So the efficiency paradox in gaming is that the more efficient you make the gameplay in an attempt to help the player, the less satisfying the game may become. Of course, if it is too difficult, players might leave before completing the task. There is a balance…

The Efficiency Paradox in Lean/DevOps

The previous versions of the efficiency paradox are interesting to me, but it’s this version that is really the subject of this post. In Lean and DevOps people often use the term efficiency paradox in subtly different ways, but invariably they are talking about resource efficiency vs. flow efficiency. Specifically, a focus on maximising resource efficiency can result in less overall efficiency.

Lost Time : I’ve written about lost time before here. Lost time is about work waiting in queues while passing between siloed teams. Each team believe they are working efficiently because they have maximised their resource usage. All their staff are busy, but the flow of work through the chain of teams is really slow, making the flow efficiency low, and reducing the quality of work.

To counter this, some companies reorganise into self-sufficient teams that can progress a piece of work from conception to delivery, thereby reducing the hand-offs between teams. Some may retain the silos, but use automation to deliver self-service tools and APIs that others can pick up and run with. Regardless of the approach taken, they are attempting to reduce the constraints on the flow of work to improve flow efficiency.

Work in Process (WIP) : I’ve written about WIP before here. Most people can’t multitask well. Some think they can, but they just end up doing multiple things badly. Problem solving requires concentration, and it’s really hard to concentrate when you are being distracted by multiple projects competing for your attention. In an ideal world your WIP would be 1. You would work on a single task to completion, then move to another task. This can be tricky if you are constantly being blocked by other people and teams/silos, but it’s also complicated when a company wants to see staff being “busy” all the time.

In an effort to maximise resource (staff) usage, they increase the WIP, so there is always something for people to do. On the surface this increased resource usage looks like it is increasing efficiency, but often the work degenerates to the point where people are spinning plates, without actually achieving much. Also, the reduced attention on a specific task results in a lower quality of work. You should always try to keep WIP low, even if that means some people have idle time. If the idle time is excessive, it probably means there is a problem somewhere else in the organisation that needs to be fixed. Deal with the root cause, not the symptom!

Ultimately we have to forget about the resource efficiency and focus on flow efficiency. We can often see this in our normal working lives. We have some processes we know are going to take weeks to complete. Then there is a “Priority 1” incident that means we need to complete something ASAP. The P1 instantly aligns every team giving them the same priorities, and we race through and complete the work in a few hours. Once the P1 is over, every person goes back to their silo, with their differing priorities, and the process returns to taking weeks to complete again. We have proved it can be done in hours, but because of politics and the internal company organization, fast never becomes the norm.

Conclusion

I thought it was interesting that the term efficiency paradox came up in three different contexts in the space of a few days, so I thought I would write about it. The important point is that in all three cases people are often making incorrect assumptions about efficiency. People are doing things that they think will improve efficiency, but it is not having the desired result.

Cheers

Tim…
