Update Oracle Database Time Zone Files (Poll Results Discussed)

In case you didn’t know, countries occasionally change their time zones, or alter the way they handle daylight saving time (DST). To let the database know about these changes, we have to apply a new database time zone file. The updated files have been shipped with upgrades and patches since 11gR2, but applying them to the database has always been a manual operation.

With the recent switch to daylight saving time in the UK, I decided to post this question on Twitter yesterday.

How often do you update your Oracle database time zone files?

Fewer than 6% of people update their time zone files on a regular schedule. Nearly 45% only do the updates after a database upgrade, and nearly 50% never do it at all.

I can’t say I’m surprised by the results. For the reasoning behind these responses, I’ll reference some of the comments from Twitter.

Regular Schedule

“Every ru patch, also thanks to 19.18 it is included now and with out of place upgrade and autoupgrade, i dont do it anymore 🙂 all automatic.”

Mustafa KALAYCI

If you are using AutoUpgrade to patch to a new Oracle Home, then applying updated time zone files is really easy. Before 19.18, it’s just a single entry “timezone_upg=yes” in the AutoUpgrade config file. From 19.18 onward, the update of the time zone file is the default action (see here).
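To make that concrete, here’s a minimal sketch of an AutoUpgrade config file with the time zone entry included. The log directory, Oracle Home paths, and SID are hypothetical placeholders, so adjust them for your own environment.

```
global.autoupg_log_dir=/u01/app/oracle/autoupgrade_logs

# One entry per database being patched or upgraded.
upg1.source_home=/u01/app/oracle/product/19.0.0/dbhome_1
upg1.target_home=/u01/app/oracle/product/19.0.0/dbhome_2
upg1.sid=ORCL

# Before 19.18 this entry is needed to update the time zone file.
# From 19.18 onward it's the default, and "timezone_upg=no" disables it.
upg1.timezone_upg=yes
```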

So interestingly, there may be some people who don’t realise they are applying time zone file updates, but actually are now…

After Upgrades

This feels like the natural time to do it for me, and it seems many other people feel the same.

As mentioned previously, AutoUpgrade makes it simple. From 21c onward AutoUpgrade is the main upgrade approach, even for those who have resisted using it for previous versions, so from an upgrade perspective this question goes away.

We can specifically tell it not to perform the action using “timezone_upg=no”, but I’m guessing most people will just go with the default action.

Never

“NEVER. As an American-only company with very little need for time-specific data, quite unnecessary. Horrible design with no rollbacks and headaches w/data pump. Just not worth it if possible to avoid”

Taylor

I totally understand this response. Many of us work with systems that are limited to our own country. Assuming our country doesn’t alter its own daylight saving time rules, using an old time zone file is unlikely to cause an issue.

When you consider the number of people who run *very old* versions of Oracle, you can see that using old versions of the time zone file doesn’t present a major issue in these circumstances.
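If you’re curious how far behind you are, it’s easy to check. Here’s a quick sanity check from SQL*Plus; the DBMS_DST function shown is available in recent releases.

```sql
-- The time zone file version this database is currently using.
SELECT * FROM v$timezone_file;

-- The latest time zone file version shipped with the Oracle Home.
SELECT dbms_dst.get_latest_timezone_version FROM dual;
```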

With reference to the Data Pump issue, I’ve experienced this myself, and it was also picked up in the comments.

“My hypothesis: Most do it when datapump tells they need to do it to get the import file they just received to load”

Connor McDonald

Offline/Online Operation

The point about this being an offline operation was raised.

“Well it is an offline operation, so pretty exceptional thing to do. Only in a rare case where some feature requires the upgrade – like DataPump failing or query over dblink failing.”

Ilmar Kerm

Downtime is never welcome, but it was also pointed out that it can be an online operation in 21c.

“Offline will be a thing of the past…”

https://docs.oracle.com/en/database/oracle/oracle-database/21/refrn/TIMEZONE_VERSION_UPGRADE_ONLINE.html

Connor McDonald
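The linked reference describes the TIMEZONE_VERSION_UPGRADE_ONLINE initialization parameter. Based on that, a 21c online upgrade looks something like the sketch below. This is a hedged outline rather than a tested recipe, and the target version (36) is just an example.

```sql
-- 21c onward: allow the time zone upgrade to run while the
-- database is open, instead of restarting in UPGRADE mode.
ALTER SYSTEM SET timezone_version_upgrade_online = TRUE;

-- The DBMS_DST steps can then be run against the open database
-- (36 is an example target version, not a recommendation).
EXEC DBMS_DST.begin_upgrade(36);
```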

Conclusion

It seems like the time zone file version is not high on the list of priorities for most people, provided it is not causing a Data Pump issue. I totally understand this, and I myself only consider it during database upgrades.

I always like reading these poll results. I know the sample size is small, but it gives you a good idea of how your beliefs compare to those of the wider audience.

If you are interested to know how to manually upgrade your time zone file, you can read about it here.
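To give you a flavour, the traditional offline approach centres on the DBMS_DST package. The outline below is just a sketch: the target version (36) is an example, and I’m skipping the optional prepare window and the various sanity checks described in the full article.

```sql
-- Restart in UPGRADE mode and flag the new version (36 is an example).
SHUTDOWN IMMEDIATE;
STARTUP UPGRADE;
EXEC DBMS_DST.begin_upgrade(36);

-- Restart normally, then convert the affected
-- TIMESTAMP WITH TIME ZONE data and finish off.
SHUTDOWN IMMEDIATE;
STARTUP;

SET SERVEROUTPUT ON
DECLARE
  l_failures  PLS_INTEGER;
BEGIN
  DBMS_DST.upgrade_database(l_failures);
  DBMS_OUTPUT.put_line('Upgrade failures : ' || l_failures);

  DBMS_DST.end_upgrade(l_failures);
  DBMS_OUTPUT.put_line('End upgrade failures : ' || l_failures);
END;
/
```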

Cheers

Tim…

Why Automation Matters: The cloud may not be right for you, but you still have to automate!

A few days ago I tweeted this link to an article about some workloads being better suited to on-prem infrastructure.

Jared Still sent me this link.

The executive summary in both cases is: if you have defined workloads that don’t require elastic resource allocation, and you are not making use of cloud-only platforms, you might find it significantly cheaper to run your systems on-prem compared to running them in the cloud.

With reference to the first article Freek D’Hooge responded with this.

“I agree that cloud is not always the best or most cost effective choice, but I find the article lacking in what it really takes to run on-prem equipment.”

I responded to Freek D’Hooge with this.

“Yes. On-prem works well if you have Infrastructure as Code and have automated all the crap, making it feel more like self-service.

For many people, that concept of automation only starts after they move to the cloud though, so they never realise how well on-prem can work…”

I’m assuming these folks who are moving back to on-prem are doing the whole high availability (HA) and disaster recovery (DR) thing properly.

There are many counterarguments, and I don’t want to start a religious war about cloud vs on-prem, but there is one aspect of this discussion that doesn’t seem to be covered here: automation.

But you still have to automate!

Deciding not to go to the cloud, or moving back from the cloud to on-prem, is not an excuse to go back to the bad old days. We have to make sure we are using infrastructure as code, and automating the hell out of everything. I’ve mentioned this before.

Of course, putting servers in racks is a physical task, but for most things after that we are probably using virtual machines and/or containers, so once we have the physical kit in place we should be able to automate everything else.

Take a look at your stack and you will probably find there are Terraform providers and Ansible modules that work for your on-prem infrastructure, the same as you would expect for your cloud infrastructure. There is no reason not to use infrastructure as code on-prem.

For many people, the “step change” nature of moving to the cloud is the thing that allows them to take a step back and learn automation. That’s a pity, because it means they never get to see how well on-prem can work with automation.

Even as I write this I am still in the same situation. I’m currently building Azure Integration Services (AIS) kit in the cloud using Terraform. I have a landing zone where I, as part of the development team, can just build the stuff we need using infrastructure as code. That’s great, but if I want an on-prem VM, I have to raise a request and wait. I’ve automated many aspects of my DBA job, but basic provisioning of kit on-prem is still part of the old world, with all the associated lost time in hand-offs. For those seeking to remain on-prem, this type of thing can’t be allowed to continue.

In summary

It doesn’t matter whether you go to the cloud or not; you have to use infrastructure as code and automate things to make everything feel like self-service. I’m not suggesting you need the perfect private cloud solution, but you need to provide developers with self-service solutions and let them get on with doing their job, rather than waiting for you.

Check out the rest of the series here.

Cheers

Tim…
