Data Pump Between Database Versions : It’s not just about the VERSION parameter! (Time Zone Files)

I was doing a small “quick” data transfer between two servers. The source was 19c and the destination was 18c, so I used the VERSION parameter during the export.

expdp … version=18 directory=…

The export went fine, but when I started the import I immediately got this error:

ORA-39002: invalid operation

A little Googling and I came across MOS Doc ID 2482971.1. In short, the time zone file was different between the two databases.

No problem. I know how to fix that, and both databases had the January quarterly patches applied, so the latest time zone files would be available right? Wrong. The 18c database was already at the maximum time zone version that was installed, and I needed to be one higher to match the 19c database.
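To see the mismatch for yourself, you can compare the time zone file version on both sides. Something like this (standard dictionary views, so it should look the same on any supported version):

```sql
-- Time zone file currently in use by this database instance.
SELECT * FROM v$timezone_file;

-- The DST_* database properties show what the database itself is using.
SELECT property_name, property_value
FROM   database_properties
WHERE  property_name LIKE 'DST_%';
```

If the VERSION from the source database is higher than anything available on the destination, you're in the situation described here, and patching is the only way forward.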

After some Googling I re-found MOS Doc ID 412160.1. As soon as I opened it I remembered it. I find this note really messy and confusing, but the section labelled “C.1.d) DST patches list” had the list of patches, which is what I needed. I downloaded the patch to match the time zone file version of the source system and applied it with OPatch in the normal way. Always read the patch notes!!!

Once the new time zone file was in place, it was time to update it in the database. I’ve written about this before.
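For reference, the update itself is the usual DBMS_DST dance. A heavily abbreviated sketch, assuming an upgrade to version 32 (substitute whichever version your patch installed), looks something like this — check the full article and the MOS notes before running it for real:

```sql
-- Restart in UPGRADE mode and begin the time zone upgrade.
SHUTDOWN IMMEDIATE;
STARTUP UPGRADE;
EXEC DBMS_DST.BEGIN_UPGRADE(32);

-- Restart normally, then upgrade the affected table data and finish.
SHUTDOWN IMMEDIATE;
STARTUP;
SET SERVEROUTPUT ON
DECLARE
  l_failures  PLS_INTEGER;
BEGIN
  DBMS_DST.UPGRADE_DATABASE(l_failures);
  DBMS_DST.END_UPGRADE(l_failures);
END;
/
```

After that, the queries against V$TIMEZONE_FILE and DATABASE_PROPERTIES should report the new version.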

Once the time zone file versions matched, the import worked as expected, although the small data transfer I expected to be quick had turned into a much bigger job. 🙂

I don’t think I’ve hit this issue before, so I guess I’ve just been lucky with the time zone file versions matching. This note is to remind myself: it’s not just about the VERSION parameter! I’ve also updated a couple of articles with pointers about this.



PS. It seems the later releases are more sensitive to time zone file differences than previous releases.

OTN Appreciation Day : Data Pump (expdp, impdp)

Here’s my contribution to the OTN Appreciation Day.

Data Pump (expdp, impdp) was added in Oracle 10g as a replacement for the rather tired exp/imp utilities, which although useful, were severely lacking in functionality. So why do I like Data Pump? Here are a few standouts for me, but I’m sure other people will prefer others. 🙂

  • Transportable Tablespaces, and from 12c onward Transportable Database, make moving large amounts of data, or even whole databases, easy. This can include platform/endian changes and version upgrades too. This was possible with exp/imp too, but that doesn’t stop it being a useful feature of Data Pump. 🙂
  • The INCLUDE, EXCLUDE, CONTENT and QUERY parameters allow a lot of flexibility about what you include or exclude from your exports and imports.
  • The FLASHBACK_TIME parameter allows you to do an export of the data based on a point in time, undo permitting, which allows you to make truly consistent exports.
  • The REMAP parameters allow you to rename tablespaces, datafiles, schemas, tables and even alter the data during operations.
  • The DBMS_DATAPUMP package provides a PL/SQL API, allowing you to perform Data Pump operations without having to shell out to the OS. That makes automation a lot simpler.
  • The NETWORK_LINK parameter can be used to perform export and import operations over a database link. This allows you to create a dump file on a remote database, or even import without an intermediate dump file.
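As a taste of that parameter-driven flexibility, here’s a hypothetical export parameter file (the schema, directory and file names are made up for illustration) combining a few of the options above:

```
# Run with: expdp scott parfile=scott_exp.par
schemas=scott
directory=data_pump_dir
dumpfile=scott.dmp
logfile=expdp_scott.log
exclude=statistics
flashback_time=systimestamp
```

The FLASHBACK_TIME=SYSTIMESTAMP line gives you a consistent export as of the moment the job starts, and on the import side something like REMAP_SCHEMA=scott:scott_copy would land the same data in a different schema.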

I’m sure beginners think Data Pump is just for doing exports and imports of small databases, but it’s got loads of features. If you’ve not kept up with the new releases, you might be surprised just how much it can do!

If you’re interested in reading something more technical about Data Pump, here are some articles from my website.



Data Pump Enhancements in Oracle Database 12c

Another one to file under “Not sexy but flippin’ awesome!”

If you are a DBA, you are going to spend a lot of time with Data Pump. All roads seem to lead back to it. 🙂 There are some more headline worthy features, like transportable database, but the two that jumped out at me were actually pretty small, but awesome.

  • “TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y” – Switches table and/or index imports to NOLOGGING for the lifespan of the import operation.
  • “LOGTIME=ALL” – Puts a timestamp in the output message so you can see how long individual operations took.
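To show the two together, a hypothetical import command might look like this (directory and dump file names are placeholders):

```shell
impdp system directory=data_pump_dir dumpfile=scott.dmp \
      logtime=all transform=disable_archive_logging:y
```

With LOGTIME=ALL each message in the log gets a timestamp, so you can see exactly where the time went, and DISABLE_ARCHIVE_LOGGING:Y keeps the redo generation down for the duration of the import. Remember the NOLOGGING behaviour is ignored if the database is in FORCE LOGGING mode.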

I wrote up an article about it here.



SQL Developer 3.1 : Data Pump Wizards…

One of the things that sounded kinda neat in SQL Developer 3.1 was the Data Pump Wizards, so I thought I would have a play with them.

As you would expect, they are pretty straightforward. They can’t do everything you can do with expdp and impdp, but they are pretty cool for on-the-fly tasks.

You can use the wizard to generate data pump definitions using the PL/SQL API. It would have been a nice touch if it gave you the option to see a regular command line or parameter file definition also, since I would be more likely to put that into source control than the API definition of a job. Even so, a nice little feature.
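The PL/SQL it generates is built on the DBMS_DATAPUMP package. A minimal hand-written sketch of the same idea, exporting a single schema (names like SCOTT and DATA_PUMP_DIR are placeholders), looks something like this:

```sql
DECLARE
  l_handle  NUMBER;
BEGIN
  -- Define a schema-mode export job.
  l_handle := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');

  -- Dump file and log file in a pre-created directory object.
  DBMS_DATAPUMP.ADD_FILE(l_handle, 'scott.dmp', 'DATA_PUMP_DIR');
  DBMS_DATAPUMP.ADD_FILE(l_handle, 'scott.log', 'DATA_PUMP_DIR',
                         filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

  -- Restrict the job to a single schema.
  DBMS_DATAPUMP.METADATA_FILTER(l_handle, 'SCHEMA_EXPR', '= ''SCOTT''');

  -- Kick it off and detach, leaving the job running in the background.
  DBMS_DATAPUMP.START_JOB(l_handle);
  DBMS_DATAPUMP.DETACH(l_handle);
END;
/
```

That background-job behaviour is what makes the API so handy for automation: you can fire off the export from a scheduler job and monitor it via DBA_DATAPUMP_JOBS, no OS shell required.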