SQLcl : Automating Your SQL and PL/SQL Deployments using the SQLcl implementation of Liquibase
This article gives a quick overview of applying changes to the database using the Liquibase implementation in SQLcl.
This is a rewrite of a Liquibase article, but using SQLcl instead of the regular Liquibase client.
- That's Not How You Use It!
- Assumptions
- Supported Commands
- Generating a Changelog?
- Create a master.xml (controller.xml)
- Applying Changes
- Considerations
- The Automatic Approach
Related articles.
- Liquibase : All Articles
- SQLcl and Liquibase : Automating Your SQL and PL/SQL Deployments
- Liquibase : Automating Your SQL and PL/SQL Deployments
- SQLcl : Deploying Oracle Application Express (APEX) Applications using the SQLcl implementation of Liquibase
- SQLcl : Installation
That's Not How You Use It!
When you look at examples of using Liquibase on the internet they all have a few things in common.
- They are typically examples used to track table changes and not much else.
- Like my examples, they are based on small, simple schemas. That makes sense for a demonstration, but issues arise with some methods as things grow.
- They don't include code objects (procedure, functions, packages, triggers, types etc.).
- If they do include code objects, they assume each version of the code is in a new file. This means you're going to lose the conventional commit history of a file you would normally expect for code. Instead you have to manually diff between separate files.
- They assume people need to rollback changes to previous versions of the database through this mechanism. I think creating a rollback script for each schema change makes sense, but I think it's a bad idea to include it in this mechanism. In my opinion all changes should move forward. So a "rollback" is really a new change applied to the database that reverts the changes. This is especially true of code related functionality.
The major issue for me is the way code objects are managed. This may not affect you if you never have code in the database, but for a PL/SQL developer, this feels like a show-stopper. As a result, I prefer to work using scripts, which are kept in source control, and use Liquibase as the deployment and sequencing mechanism. I'm sure many Liquibase users will not like this, and will think I'm using it incorrectly. That's fine. There's more discussion about script management here.
I'm not hating on Liquibase, or the SQLcl implementation of it. This is just what feels right to me, but I reserve the right to change my mind!
Assumptions
You have SQLcl installed (see here).
For the examples here, we have a file system that looks like one of these, depending on the OS used. Adjust as required.
Rem Windows
c:\git\my_project\changelogs : The location for the individual Liquibase changelogs.
c:\git\my_project\scripts : The location of any scripts referenced by Liquibase.
c:\git\my_project\changelogs\changelog_master.xml : The master.xml that maintains the changelog order.

# Linux
/git/my_project/changelogs : The location for the individual Liquibase changelogs.
/git/my_project/scripts : The location of any scripts referenced by Liquibase.
/git/my_project/changelogs/changelog_master.xml : The master.xml that maintains the changelog order.
Notice I use the UNIX-style separator throughout the rest of the article. Liquibase and Windows don't really care about the separator. A path beginning with "/" is assumed to be relative to the current drive.
Supported Commands
The SQLcl HELP command displays the supported Liquibase commands. You can use either the full name "liquibase" or the "lb" shortcut.
SQL> help liquibase
LB
Provides a command line interface to Liquibase change management features from within SQLcl!

Available commands are
liquibase(lb) genobject <object_type> <object_name>
liquibase(lb) genschema
liquibase(lb) gencontrolfile
liquibase(lb) update <CHANGE LOG> {include schema}
liquibase(lb) updatesql <CHANGE LOG> {include schema}
liquibase(lb) rollback <CHANGE LOG> <COUNT>
liquibase(lb) diff <DEST URL> <DEST USER> <DEST PASS> {report}
liquibase(lb) status <CHANGE LOG>
liquibase(lb) validate <CHANGE LOG>
liquibase(lb) changelogsync <CHANGE LOG>
liquibase(lb) listlocks <CHANGE LOG>
liquibase(lb) releaseLocks <CHANGE LOG>
liquibase(lb) clearchecksums <CHANGE LOG>
liquibase(lb) help <COMMAND>

SQL>
Generating a Changelog?
If you are starting to use Liquibase against an existing project, you can generate an initial changelog that represents your day 1 state. To demonstrate this, let's create some objects in a clean schema called "TEST".
conn test/test@pdb1

create table tab1 (
  id           number,
  description  varchar2(50),
  constraint tab1_pk primary key (id)
);

create sequence tab1_seq;

insert into tab1 (id, description)
values (tab1_seq.nextval, 'Description for ' || tab1_seq.currval);
commit;

create or replace function get_tab1_count
  return number
as
  l_count number;
begin
  select count(*)
  into   l_count
  from   tab1;

  return l_count;
end;
/
We generate an initial changelog to capture the state of our initial schema contents.
SQL> lb genschema
[Type - TYPE_SPEC]:                 379 ms
[Type - TYPE_BODY]:                 103 ms
[Type - SEQUENCE]:                   82 ms
[Type - CLUSTER]:                  2043 ms
[Type - TABLE]:                   11641 ms
[Type - MATERIALIZED_VIEW_LOG]:      49 ms
[Type - MATERIALIZED_VIEW]:          27 ms
[Type - VIEW]:                     1436 ms
[Type - REF_CONSTRAINT]:            126 ms
[Type - DIMENSION]:                  33 ms
[Type - FUNCTION]:                  131 ms
[Type - PROCEDURE]:                  76 ms
[Type - PACKAGE_SPEC]:               62 ms
[Type - DB_LINK]:                    30 ms
[Type - SYNONYM]:                    51 ms
[Type - INDEX]:                    2293 ms
[Type - TRIGGER]:                   100 ms
[Type - PACKAGE_BODY]:               96 ms
[Method loadCaptureTable]:        18758 ms
[Method parseCaptureTableRecords]:  428 ms
[Method sortCaptureTable]:           19 ms
[Method createExportChangeLogs]:      1 ms

Export Flags Used:
Export Grants    false
Export Synonyms  false

SQL>
This generates a number of files in the working directory, typically named after the object name with a suffix of the object type.
tab1_seq_sequence.xml : The sequence definition.
tab1_table.xml : The table definition.
get_tab1_count_function.xml : The function definition.
controller.xml : The Liquibase master.xml.
The "controller.xml" file contains a reference to each changelog in the correct dependency order.
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
                   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                   xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
                                       http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.1.xsd">
  <include file="tab1_seq_sequence.xml"/>
  <include file="tab1_table.xml"/>
  <include file="get_tab1_count_function.xml"/>
</databaseChangeLog>
The individual changelogs use the Liquibase XML format to describe the objects, with some Oracle extensions. The default object support is a lot better than the community Liquibase client, but it doesn't capture any data, so seed data will be missing.
Even though this is more convincing than the regular Liquibase client, I still don't believe generating an initial changelog makes sense for all but the simplest projects. Instead, have a fixed point in time and start using Liquibase from there onward.
Create a master.xml (controller.xml)
The master.xml, also known as a controlfile or master index, is an ordered list of changelogs. SQLcl typically calls this "controller.xml", but the name is not important. When we start using Liquibase we can just create an empty one manually with the following contents.
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
                   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                   xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
                                       http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.1.xsd">
</databaseChangeLog>
SQLcl allows us to generate a new "controller.xml" file in the working directory.
SQL> lb gencontrolfile
Action successfully completed please review created file controller.xml

SQL>
The content looks similar to the previous empty file, but includes an example file inclusion tag.
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
                   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                   xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
                                       http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.1.xsd">
  <include file="{filename.xml}"/>
</databaseChangeLog>
Every changelog created from now on has to be referenced, in order, from this master.xml.
We can create one that references every changelog in a specific directory, but this requires the changelogs to be named in a way that the application order is preserved, which could prove problematic. It feels safer to avoid this approach unless you are extremely disciplined.
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
                   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                   xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
                                       http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.1.xsd">
  <includeAll path="/git/my_project/changelogs"/>
</databaseChangeLog>
Applying Changes
We have a script called "tab1_seq.sql" in our "/git/my_project/scripts" directory with the following contents.
-- tab1_seq.sql
create sequence tab1_seq;
The comment line at the start of each SQL file is the file name. This isn't necessary, but it makes things clearer when looking at the changelog.
We have a script called "tab1.sql" in our "/git/my_project/scripts" directory with the following contents.
-- tab1.sql
create table tab1 (
  id           number,
  description  varchar2(50),
  constraint tab1_pk primary key (id)
);

insert into tab1 (id, description)
values (tab1_seq.nextval, 'Description for ' || tab1_seq.currval);
commit;
We have a script called "get_tab1_count.sql" in our "/git/my_project/scripts" directory with the following contents.
-- get_tab1_count.sql
create or replace function get_tab1_count
  return number
as
  l_count number;
begin
  select count(*)
  into   l_count
  from   tab1;

  return l_count;
end;
/
We want to apply these as a change to the database, so we create a changelog for them. Let's call this "changelog_00001_setup_tab1.xml" and place it in the "/git/my_project/changelogs" directory.
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
                   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                   xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
                                       http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.1.xsd">
  <changeSet author="tim" id="01_tab1_seq">
    <sqlFile dbms="oracle" endDelimiter=";" path="../scripts/tab1_seq.sql"
             relativeToChangelogFile="true" splitStatements="true" stripComments="false"/>
  </changeSet>
  <changeSet author="tim" id="02_tab1">
    <sqlFile dbms="oracle" endDelimiter=";" path="../scripts/tab1.sql"
             relativeToChangelogFile="true" splitStatements="true" stripComments="false"/>
  </changeSet>
  <changeSet author="tim" id="03_get_tab1_count" runOnChange="true">
    <sqlFile dbms="oracle" endDelimiter=";" path="../scripts/get_tab1_count.sql"
             relativeToChangelogFile="true" splitStatements="false" stripComments="false"/>
  </changeSet>
</databaseChangeLog>
There are some things to consider here.
- The entries in the changelog need to be in the correct order. The creation of the seed data requires the sequence, so the sequence creation must come first in the changelog. The function depends on the table, so the table creation has to happen before the function change.
- We need to make sure "splitStatements" is set to false for database code (triggers, procedures, functions, packages, types etc.) or Liquibase will try to split on ";" and errors will be produced when those changes are applied.
- We are using relative paths (relativeToChangelogFile="true"). Explicit paths can be used if required.
- The ID should be unique for all changesets applied to the schema, so you will often see people using a GUID.
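For example, a changeset keyed by a GUID might look like the following. The ID value here is just an illustration, not something generated for this article, but any value that is unique across the schema works.

```xml
<!-- A changeset using a GUID-style ID. The ID value is an example only. -->
<changeSet author="tim" id="7f2d6c1a-9b3e-4d5f-a8c0-1e2f3a4b5c6d">
  <sqlFile dbms="oracle" endDelimiter=";" path="../scripts/tab1.sql"
           relativeToChangelogFile="true" splitStatements="true" stripComments="false"/>
</changeSet>
```

The trade-off is that GUIDs are guaranteed unique, but descriptive IDs like "01_tab1_seq" make the DATABASECHANGELOG table much easier to read.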
We add a reference to this new changelog into the master.xml, so it now looks like this.
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
                   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                   xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
                                       http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.1.xsd">
  <include file="./changelog_00001_setup_tab1.xml" relativeToChangelogFile="true"/>
</databaseChangeLog>
We can then apply the change using the following command. The syntax varies depending on the version of SQLcl.
-- >= 22.3
lb update -changelog-file /git/my_project/changelogs/changelog_master.xml

-- 20.2 - 22.2
lb update -changelog /git/my_project/changelogs/changelog_master.xml

-- < v20.2
lb update /git/my_project/changelogs/changelog_master.xml
This will create the objects associated with our change, and create some Liquibase objects to track the changes that have been applied.
column object_name format a30

select object_name, object_type
from   user_objects
order by 1, 2;

OBJECT_NAME                    OBJECT_TYPE
------------------------------ -----------------------
DATABASECHANGELOG              TABLE
DATABASECHANGELOGLOCK          TABLE
DATABASECHANGELOG_ACTIONS      TABLE
DATABASECHANGELOG_ACTIONS_PK   INDEX
DATABASECHANGELOG_ACTIONS_TRG  TRIGGER
GET_TAB1_COUNT                 FUNCTION
PK_DATABASECHANGELOGLOCK       INDEX
SYS_IL0000560335C00003$$       INDEX
SYS_IL0000560335C00004$$       INDEX
SYS_LOB0000560335C00003$$      LOB
SYS_LOB0000560335C00004$$      LOB
TAB1                           TABLE
TAB1_PK                        INDEX
TAB1_SEQ                       SEQUENCE

14 rows selected.
If we describe the DATABASECHANGELOG table we can see the information that's captured.
SQL> desc databasechangelog
Name          Null?    Type
------------- -------- -------------
ID            NOT NULL VARCHAR2(255)
AUTHOR        NOT NULL VARCHAR2(255)
FILENAME      NOT NULL VARCHAR2(255)
DATEEXECUTED  NOT NULL TIMESTAMP(6)
ORDEREXECUTED NOT NULL NUMBER(38)
EXECTYPE      NOT NULL VARCHAR2(10)
MD5SUM                 VARCHAR2(35)
DESCRIPTION            VARCHAR2(255)
COMMENTS               VARCHAR2(255)
TAG                    VARCHAR2(255)
LIQUIBASE              VARCHAR2(20)
CONTEXTS               VARCHAR2(255)
LABELS                 VARCHAR2(255)
DEPLOYMENT_ID          VARCHAR2(10)

SQL>
We display the ID, AUTHOR and FILENAME columns from the table. We can see the three changesets from our changelog.
column id format a17
column author format a10
column filename format a40

select id, author, filename
from   databasechangelog;

ID                AUTHOR     FILENAME
----------------- ---------- ----------------------------------------
01_tab1_seq       tim        changelog_00001_setup_tab1.xml
02_tab1           tim        changelog_00001_setup_tab1.xml
03_get_tab1_count tim        changelog_00001_setup_tab1.xml

SQL>
We then get a second set of changes.
We have a script called "tab2_seq.sql" in our "/git/my_project/scripts" directory with the following contents.
-- tab2_seq.sql
create sequence tab2_seq;
We have a script called "tab2.sql" in our "/git/my_project/scripts" directory with the following contents.
-- tab2.sql
create table tab2 (
  id           number,
  description  varchar2(50),
  constraint tab2_pk primary key (id)
);
We create a new changelog for this called "changelog_00002_setup_tab2.xml".
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
                   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                   xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
                                       http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.1.xsd">
  <changeSet author="dave" id="01_tab2_seq">
    <sqlFile dbms="oracle" endDelimiter=";" path="../scripts/tab2_seq.sql"
             relativeToChangelogFile="true" splitStatements="true" stripComments="false"/>
  </changeSet>
  <changeSet author="dave" id="02_tab2">
    <sqlFile dbms="oracle" endDelimiter=";" path="../scripts/tab2.sql"
             relativeToChangelogFile="true" splitStatements="true" stripComments="false"/>
  </changeSet>
</databaseChangeLog>
We add a reference to this new changelog into the master.xml, so it now looks like this.
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
                   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                   xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
                                       http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.1.xsd">
  <include file="./changelog_00001_setup_tab1.xml" relativeToChangelogFile="true"/>
  <include file="./changelog_00002_setup_tab2.xml" relativeToChangelogFile="true"/>
</databaseChangeLog>
We then apply the change the same way we did the first time. The syntax changes a little depending on the version of SQLcl.
CONN test/test@pdb1
Connected.

-- >= 22.3
lb update -changelog-file /git/my_project/changelogs/changelog_master.xml

-- 20.2 - 22.2
lb update -changelog /git/my_project/changelogs/changelog_master.xml

-- < v20.2
lb update /git/my_project/changelogs/changelog_master.xml
Liquibase knows it applied the first changelog already, so it applies only the second. We can see the new objects present in the database.
column object_name format a30

select object_name, object_type
from   user_objects
order by 1, 2;

OBJECT_NAME                    OBJECT_TYPE
------------------------------ -----------------------
DATABASECHANGELOG              TABLE
DATABASECHANGELOGLOCK          TABLE
DATABASECHANGELOG_ACTIONS      TABLE
DATABASECHANGELOG_ACTIONS_PK   INDEX
DATABASECHANGELOG_ACTIONS_TRG  TRIGGER
GET_TAB1_COUNT                 FUNCTION
PK_DATABASECHANGELOGLOCK       INDEX
SYS_IL0000560335C00003$$       INDEX
SYS_IL0000560335C00004$$       INDEX
SYS_LOB0000560335C00003$$      LOB
SYS_LOB0000560335C00004$$      LOB
TAB1                           TABLE
TAB1_PK                        INDEX
TAB1_SEQ                       SEQUENCE
TAB2                           TABLE
TAB2_PK                        INDEX
TAB2_SEQ                       SEQUENCE

17 rows selected.

SQL>
We query the DATABASECHANGELOG table, and we can see the two changesets from our second changelog have been applied.
column id format a17
column author format a10
column filename format a40

select id, author, filename
from   databasechangelog;

ID                AUTHOR     FILENAME
----------------- ---------- ----------------------------------------
01_tab1_seq       tim        changelog_00001_setup_tab1.xml
02_tab1           tim        changelog_00001_setup_tab1.xml
03_get_tab1_count tim        changelog_00001_setup_tab1.xml
01_tab2_seq       dave       changelog_00002_setup_tab2.xml
02_tab2           dave       changelog_00002_setup_tab2.xml

SQL>
Considerations
- If your focus is on Oracle databases, I prefer the SQLcl implementation of Liquibase, but if you work on multiple database engines you may be better off focusing on the normal Liquibase client.
- As mentioned previously, I don't think you should generate a changelog from an existing schema. I think you should only track changes from the day you start using Liquibase onward.
- All change scripts should have an equivalent "undo" script, but I don't like the concept of rollback as described in Liquibase changesets. Instead I feel a better solution is "fix forward". If a change has a problem and needs to be backed out, the "undo" script should be included as a new change in a changelog, and applied via the master.xml, so any form of rollback/undo is still a move forward. This includes reverting code object versions.
- Everything should be in version control, which is why I've included "git" in the paths as a reminder. The changelogs and the code don't have to be in the same repository though, depending on your requirements.
- Management of the changelog is important, because it serialises the changes applied to the database. How you manage it deserves some thought.
- This requires some discipline. So does a manual process, but I suspect many people are more lax with a manual process than an automation tool like this allows you to be.
- I'm purposely keeping this scripts-based. I know a lot of people will not like this and will prefer to use the XML tag approach to creating changelogs. It just doesn't work for me at this point. SQLcl supports most of the Oracle objects, including the code objects, but it doesn't support data, which I think is important, especially for seed data. I could mix the XML tag approach and scripts for things it doesn't support, but I don't like the inconsistency.
- Deciding how to approach different schema and privileged accounts takes some thought. In some cases you will need to coordinate changes to multiple schemas, including some via privileged accounts. The great thing about the SQLcl implementation is it's all based on the currently logged in user, so having a build agent that logs into sqlcl using a secure external password store is quite simple.
- I don't like the thought of relying on file names to control the apply order, but I know some other products encourage that. This is a personal choice, and the example changelogs here are named with an order in mind, even though we are not using it.
- Every change needs to be in a separate changeset, or it can affect the way failures are handled. That may feel odd to people who like to combine multiple changes into a single build script. This is one concession I'm prepared to make. You can still use multi-purpose scripts if you want to. Just be certain of your clean-up operations.
- Everything looks simple when managing a small number of objects and scripts. As things grow, it gets more difficult. Your approach, and mine, will change over time. That's OK.
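As a sketch of the "fix forward" approach, suppose we needed to back out the TAB2 change. Rather than using the Liquibase rollback mechanism, we would write a new undo script (the "tab2_undo.sql" file and the changelog below are hypothetical, following the same conventions as the earlier examples) and append it to the master.xml like any other change.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
                   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                   xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
                                       http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.1.xsd">
  <!-- Hypothetical changelog_00003_undo_tab2.xml : reverts the TAB2 change as a new forward change. -->
  <changeSet author="tim" id="01_undo_tab2">
    <sqlFile dbms="oracle" endDelimiter=";" path="../scripts/tab2_undo.sql"
             relativeToChangelogFile="true" splitStatements="true" stripComments="false"/>
  </changeSet>
</databaseChangeLog>
```

The "tab2_undo.sql" script would just contain the DROP TABLE and DROP SEQUENCE statements, so the change history still only moves forward, and the deployment mechanism stays the same.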
The Automatic Approach
I don't recommend this method for the reasons listed above, but I thought it was worth showing as it's interesting.
We start a new project and create the development and UAT users as follows.
conn sys/SysPassword1@pdb1 as sysdba

--alter system set deferred_segment_creation=false;

-- DROP USER dev CASCADE;
create user dev identified by dev quota unlimited on users;
grant create session, create table, create sequence, create procedure to dev;

-- DROP USER uat CASCADE;
create user uat identified by uat quota unlimited on users;
grant create session, create table, create sequence, create procedure, create view, create trigger to uat;
We make the first changes to the development schema.
conn dev/dev@pdb1

create sequence tab1_seq;

create table tab1 (
  id           number,
  description  varchar2(50),
  constraint tab1_pk primary key (id)
);

create or replace function get_tab1_count
  return number
as
  l_count number;
begin
  select count(*)
  into   l_count
  from   tab1;

  return l_count;
end;
/
Switch directory to "v1" and use Liquibase to capture the first version of the schema contents.
mkdir -p /git/my_project/v1
cd /git/my_project/v1

sql dev/dev@pdb1 <<EOF
lb genschema
exit;
EOF
We are now ready to promote these changes to a UAT environment. We connect to the UAT environment and apply the changelogs.
cd /git/my_project/v1

sql uat/uat@pdb1 <<EOF
lb update controller.xml
exit;
EOF
We have a new change in development.
conn dev/dev@pdb1

-- Change an existing table.
alter table tab1 add (col3 date);

-- Create a new table.
create sequence tab2_seq;

create table tab2 (
  id           number,
  description  varchar2(50),
  constraint tab2_pk primary key (id)
);

-- Amend a code object (silly change).
create or replace function get_tab1_count
  return number
as
  l_count number;
begin
  select count(*) + 1 - 1
  into   l_count
  from   tab1;

  return l_count;
end;
/
Switch directory to "v2" and use Liquibase to capture the second version of the schema contents.
mkdir /git/my_project/v2
cd /git/my_project/v2

sql dev/dev@pdb1 <<EOF
lb genschema
exit;
EOF
We are now ready to promote these changes to a UAT environment. We connect to the UAT environment and apply the changelogs.
cd /git/my_project/v2

sql uat/uat@pdb1 <<EOF
lb update -changelog-file controller.xml
exit;
EOF
For more information see:
- Liquibase
- Liquibase Best Practices
- Liquibase : All Articles
- Using Liquibase with SQLcl
- SQLcl and Liquibase : Automating Your SQL and PL/SQL Deployments
- Liquibase : Automating Your SQL and PL/SQL Deployments
- SQLcl : Deploying Oracle Application Express (APEX) Applications using the SQLcl implementation of Liquibase
- SQLcl : Installation
Hope this helps. Regards Tim...