

Oracle Databases and CI/CD Pipelines

This article contains a general discussion about how Oracle Databases can fit into Continuous Integration/Continuous Delivery (CI/CD) pipelines.

This is not an article about how to build the pipeline itself. It focuses on the Oracle database piece of the pipeline, and the features and approaches you might take to meet the requirements. If you were looking for a how-to guide, you've come to the wrong place.

Context Is Everything

When we talk about CI/CD pipelines, we are often talking about a series of environments that allow an application to be pushed from initial development to production in an automated manner. This may include automated unit testing and integration testing. In some cases these environments only exist for the lifespan of the testing, so we see people take a "build and burn" attitude towards them.

There are a number of ways to approach CI/CD, and of course a number of technologies you can use to create pipelines. We don't often think of Oracle databases for short-lived environments, due to the complexity and length of time associated with Oracle installation, database creation and data loads, but that doesn't mean it's not possible.

We also have to consider what constitutes a meaningful test environment. For some tests we might feel that nothing other than an exact duplicate of production is valid. For other tests we might be happy with a cut-down environment to test basic functionality. When judging a solution, you have to consider the implications of testing against a non-production-like environment.

In this article we'll just be looking at a variety of solutions that achieve some of the goals of creating short-lived environments with repeatable results. There is no right or wrong way. As always, it depends on what you are trying to achieve. We will mostly focus on the speed of setting up a new test environment, and how to guarantee a consistent starting point for all tests, allowing us to get deterministic results from our tests. These are not recommendations for "normal" databases.

Containers (Docker)

It's impossible to talk about CI/CD pipelines without mentioning containers, as they are at the heart of most existing pipelines. Oracle databases work inside containers, and it's possible this is the best use case for an Oracle database in a container.

Containers allow you to build and burn environments quickly and easily, but the traditional approach to building a Docker image is to have a software-only Oracle installation in the read-only image, with the database created on the first run of a container based on that image. This makes the startup time of the container unacceptably long. So what alternatives do we have?

The first, and possibly simplest, option is to use a persistent volume based on a host directory. In the initial setup we create a container based on a normal Oracle database image, but the database files are placed in the persistent volume. Once the database creation is complete, we destroy the container, leaving the database files in the persistent volume. This will be our "testing" database, and we want every test to begin at the same starting point, so we need to protect these database files. This might mean taking a manual copy of them, or using a storage snapshot to keep a copy of them. The typical testing scenario would look something like this.

- Replace the contents of the persistent volume with a fresh copy of the protected database files.
- Create a new container using the persistent volume. The container starts quickly because the database already exists.
- Run the tests.
- Destroy the container, and repeat for the next test run.
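The persistent volume cycle described above might be scripted something like this. The image name, paths and password are purely illustrative, and the volume location inside the container depends on the image you use, so treat this as a sketch rather than a working recipe.

```bash
# One-off: create the seed database in a persistent volume, then remove the container.
docker run -d --name seed-db \
  -e ORACLE_PWD=MyPassword1 \
  -v /u01/volumes/seed-db:/opt/oracle/oradata \
  my-oracle-db-image:21.3
# ... wait for database creation to complete, then ...
docker rm -f seed-db

# Protect the seed files. A storage snapshot would be much faster than a copy.
cp -r /u01/volumes/seed-db /u01/volumes/seed-db-master

# Per test run: restore the files, start a container, test, destroy.
rm -rf /u01/volumes/test-db
cp -r /u01/volumes/seed-db-master /u01/volumes/test-db
docker run -d --name test-db \
  -v /u01/volumes/test-db:/opt/oracle/oradata \
  my-oracle-db-image:21.3
# ... run the tests, then ...
docker rm -f test-db
```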

The efficiency of this approach depends on how quickly we can replace the original copy of the files on the host server. If we are using storage snapshots, this could be really quick. If we are using a conventional file copy, the speed will depend on how big the seed database is, and how fast our storage is.

An alternative to this is to build the seed database into the read-only image. This will also allow the database to start quickly as the database is pre-created, and of course each time we run a container we will get a fresh copy of the database, as it looked when the image was first created. This gives us the consistent starting point for all testing, allowing our test results to be deterministic, but it does come with some issues, not least the size of the resulting image.
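Baking the seed database into the image might look something like the following Dockerfile sketch. The base image name and scripts are made up for illustration; a real build would need an Oracle base image you are licensed to use, and scripts appropriate to it.

```dockerfile
# Assumed base image with the Oracle software already installed.
FROM my-oracle-db-image:21.3

# Create the seed database and load the reference test data at build time,
# so every container started from this image begins from the same known state.
COPY create_seed_db.sh load_test_data.sql /tmp/
RUN /tmp/create_seed_db.sh /tmp/load_test_data.sql \
    && rm /tmp/create_seed_db.sh /tmp/load_test_data.sql
```

The trade-off is a much larger image, since the datafiles now live in the read-only image layers rather than a volume.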

The folks who work on utPLSQL use this second method for their test environments.

Depending on your needs, either of these approaches is a viable solution. I guess the biggest question we should ask ourselves is whether testing in a Docker container is representative of how things will run on our production system. If we are testing in Docker, then running production on an Exadata machine, how do we feel about that? Clearly performance testing would be silly, but for functional testing we might find this an acceptable compromise.

Here are some resources related to Oracle in containers.


Direct NFS (dNFS) Clones (CloneDB)

CloneDB was introduced in the 11.2.0.2 patchset. Rather than using the traditional RMAN database duplication, CloneDB uses dNFS technology to instantly fire up a clone using an existing backup of a database as the data store. The clone uses copy-on-write technology, so only changed blocks need to be stored locally, while the unchanged data is referenced directly from the backup datafile copies. This drastically increases the speed of cloning a system and means that several separate clones can function against a single set of backup datafile copies, thus saving considerable amounts of space.

This approach would allow a "build and burn" approach to test databases, allowing them to be cloned almost instantly and removed just as fast. Here are some things to consider when using CloneDB.

Because every new clone is based on the backup image copies, each new clone will always start from the same point in time. Refreshing a database from production is simply a case of dropping any CloneDB instances, bringing across a fresh set of image copy backups and firing the instances up again.
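A manual CloneDB setup might look something like the sketch below. The database name and paths are illustrative. The clone instance needs CLONEDB=TRUE in its parameter file, and the backup image copies must be on an NFS location accessible via Direct NFS.

```sql
-- Start the clone instance against its own parameter file (CLONEDB=TRUE set).
STARTUP NOMOUNT PFILE='/u01/app/oracle/dbs/initclone1.ora';

-- Build a controlfile that points at the read-only backup image copies.
CREATE CONTROLFILE REUSE SET DATABASE clone1 RESETLOGS
  LOGFILE GROUP 1 ('/u01/oradata/clone1/redo01.log') SIZE 200M
  DATAFILE
    '/nfs/backups/prod/system01.dbf',
    '/nfs/backups/prod/sysaux01.dbf',
    '/nfs/backups/prod/users01.dbf'
  CHARACTER SET AL32UTF8;

-- Redirect each datafile to a sparse copy-on-write file for this clone,
-- leaving the backup copies untouched and shareable between clones.
EXEC DBMS_DNFS.CLONEDB_RENAMEFILE('/nfs/backups/prod/system01.dbf', '/nfs/clones/clone1/system01.dbf');
EXEC DBMS_DNFS.CLONEDB_RENAMEFILE('/nfs/backups/prod/sysaux01.dbf', '/nfs/clones/clone1/sysaux01.dbf');
EXEC DBMS_DNFS.CLONEDB_RENAMEFILE('/nfs/backups/prod/users01.dbf',  '/nfs/clones/clone1/users01.dbf');

ALTER DATABASE OPEN RESETLOGS;
```

Dropping the clone is just a case of shutting the instance down and removing the sparse files, which is why this suits build and burn environments so well.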

Once again, you have to ask yourself if running a test environment using CloneDB and NFS is representative of your production system.

Here are some resources related to CloneDB.

Pluggable Databases (PDBs)

The multitenant architecture gives a range of options for producing new test pluggable databases.

The creation of an empty pluggable database from the seed is quite quick, so if your requirement is to start with an empty database, this could be an option for the deployment of a new database in your pipeline.
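Creating an empty PDB from the seed is a one-liner. The names, password and FILE_NAME_CONVERT paths below are purely illustrative, and FILE_NAME_CONVERT is unnecessary if you are using Oracle Managed Files.

```sql
-- Create and open a new empty PDB from the seed.
CREATE PLUGGABLE DATABASE testpdb1
  ADMIN USER pdb_admin IDENTIFIED BY MyPassword1
  FILE_NAME_CONVERT = ('/u01/oradata/cdb1/pdbseed/', '/u01/oradata/cdb1/testpdb1/');

ALTER PLUGGABLE DATABASE testpdb1 OPEN;
```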

More likely, you will want a test database with some data in place to begin with. If the database were sufficiently small, you could create your starter pluggable database, leave it in read-only mode and use this as the source to clone fresh copies of your database for testing.
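The clone-from-read-only cycle might look like this. The PDB names and paths are made up, and as above, FILE_NAME_CONVERT can be omitted with Oracle Managed Files.

```sql
-- One-off: put the populated "seed" PDB into read-only mode.
ALTER PLUGGABLE DATABASE test_seed CLOSE;
ALTER PLUGGABLE DATABASE test_seed OPEN READ ONLY;

-- Per test run: clone a fresh copy, test against it, then burn it.
CREATE PLUGGABLE DATABASE test_run_1 FROM test_seed
  FILE_NAME_CONVERT = ('/u01/oradata/cdb1/test_seed/', '/u01/oradata/cdb1/test_run_1/');
ALTER PLUGGABLE DATABASE test_run_1 OPEN;

-- When testing completes.
ALTER PLUGGABLE DATABASE test_run_1 CLOSE;
DROP PLUGGABLE DATABASE test_run_1 INCLUDING DATAFILES;
```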

For larger pluggable databases you have the option of using Snapshot Copy PDBs. Snapshot copies take advantage of the storage system to create sparse PDBs, where the snapshot is used as read-only source data and changed blocks are written to a copy-on-write location. This is similar to the CloneDB option discussed above. In fact, you can optionally use dNFS for snapshot copy PDBs, so you are using CloneDB under the hood. Snapshot copy PDBs are described here.
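A snapshot copy clone only differs from a regular clone by the SNAPSHOT COPY clause, as in this sketch (names and paths illustrative). On storage without native snapshot support, setting the CLONEDB=TRUE initialization parameter allows dNFS-based copy-on-write files to be used instead.

```sql
-- Create a sparse clone of the read-only source PDB.
-- Only changed blocks are written to the clone's own files.
CREATE PLUGGABLE DATABASE test_run_1 FROM test_seed
  FILE_NAME_CONVERT = ('/u01/oradata/cdb1/test_seed/', '/u01/oradata/cdb1/test_run_1/')
  SNAPSHOT COPY;

ALTER PLUGGABLE DATABASE test_run_1 OPEN;
```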

There is also an option to use Split Mirror Clone PDBs. In this case ASM splits a mirror, effectively making a point-in-time media copy from a parent PDB. From that point forward, changes to the parent PDB do not get reflected in the clone PDB. Split Mirror Clone PDBs are described here.

Here are some resources related to the multitenant architecture.


Flashback Database

An alternative to constantly dropping and recreating a database is to use some form of point-in-time recovery between tests, so each set of tests gets the same clean start. The typical testing scenario would look something like this.

- Create a guaranteed restore point against a clean copy of the database.
- Run the tests.
- Flashback the database to the restore point, returning it to the clean state.
- Repeat for the next test run.
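Resetting to a clean state with Flashback Database might look something like this. The restore point name is illustrative, and the database must meet the normal prerequisites for flashback (ARCHIVELOG mode and so on).

```sql
-- One-off: mark the clean starting point.
CREATE RESTORE POINT before_tests GUARANTEE FLASHBACK DATABASE;

-- ... run a batch of tests, then rewind ...

SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO RESTORE POINT before_tests;
ALTER DATABASE OPEN RESETLOGS;
```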

Here are some resources related to database recovery and flashback database.

Other Options

There are a number of other options that could be used to create temporary databases, but they may be too slow to give us the performance we need for creating new environments.

Recreating Deterministic Test Data

An important point of automated testing is being able to predict the result of a test. This presents a problem where databases are concerned, because of two conflicting requirements. We want test data that is representative of production, but we also want data that is predictable enough to make our test results deterministic.

Refreshing a database from production is relatively easy, regardless of the infrastructure you are using, but what if that affects your testing?

It probably makes sense to automate the creation of specific test data for the scenarios you are testing. In simple systems, this could be as easy as running scripts that contain DML to set up the test data. In more complex systems, especially those involving third-party applications, you may be forced to use the applications themselves to create data for your test scenarios. In these situations you may be able to take advantage of tools like UiPath and Leapwork to automate the manual data entry operations.
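For the simple case, a repeatable DML script means every run starts from the same known rows. The tables and values here are purely illustrative.

```sql
-- Clear out anything left by previous runs (child tables first).
DELETE FROM orders;
DELETE FROM customers;

-- Recreate the known test data for this scenario.
INSERT INTO customers (customer_id, customer_name) VALUES (1, 'Test Customer 1');
INSERT INTO orders (order_id, customer_id, order_total) VALUES (100, 1, 19.99);

COMMIT;
```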


Feedback

I am not an expert in CI/CD, so when I wrote this article I passed it out to some people for feedback, to make sure I had not missed the mark. Here are some of the points raised, and my responses.

For more information see:

Hope this helps. Regards Tim...
