Cluster Cache Coherency in EM12c

January is winding down and RMOUG Training Days 2015 is just around the corner, taking up much of my after work hours.

With that, we are going to discuss a great performance console in EM12c Cloud Control: Cluster Cache Coherency.

Cluster Cache Coherency

Optimization for an Oracle Real Application Cluster (RAC) can be a daunting task for those who aren’t familiar with some of the most common issues to look for.  Although many aspects of a single instance must always be examined, knowing what is unique to RAC is important.  Be aware that Enterprise Manager is completely RAC-aware and will provide you with much of the data that you need to research, identify and resolve performance issues unique to clusters.

Accessing the Console

Once you’ve logged into a RAC instance, you can access the console from the performance menu:


The interface is broken down into multiple areas:

  • Global Cache Block Access Latency

  • Global Cache Block Transfer Rate

  • Block Access Statistics

You can view this data in Graphical or Tabular view, with the graphical view as the default.  I find the graphical view to be much more informative, as it shows more detail broken down by time, whereas the tabular view is a rolled-up snapshot of the information.

Tabular View

By clicking on Tabular view, you are going to see, as explained earlier, the rolled-up estimates for the snapshot of time displayed in the graphical view:


Along with the three categories of data showing the averages for each, you can view the data per instance, which is extremely important for seeing whether transactions and resource usage are skewed across nodes.

Below the global cache info are Additional Links to the Interconnects and Top Activity performance pages, the two areas most likely to be accessed next for deeper research.

Graphical Charts

By changing back to a graphical chart view, the console extends to display each of these three categories, displaying global cache information by time.


Notice by each of these three graphs for Cluster Cache Coherency how the average Current Blocks Received and CR Blocks Received Time is measured in milliseconds.  GC (Global Cache) values are measured in blocks per second, and physical and logical read comparisons are shown in I/O per second (IOPS).
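If you want to sanity-check the console’s numbers from SQL*Plus, the same statistics are exposed in gv$sysstat.  A rough sketch (these are cumulative values since instance startup, and the receive-time statistics are recorded in hundredths of a second, so average ms per block is roughly 10 * time / blocks):

select inst_id, name, value
from   gv$sysstat
where  name in ('gc cr blocks received',
                'gc current blocks received',
                'gc cr block receive time',
                'gc current block receive time')
order by inst_id, name;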

Breaking it Down

You can highlight any of these metrics and click on them on the right-hand side to show more detailed data for each:




You can also click on any of the metrics shown on the right, and break down exactly what objects are cached per node:


With the instance/node view, you can see what objects are cached per instance, switch to another node in RAC or order the data by different metrics:

  • GC Current Blocks Received
  • GC CR Blocks Received
  • GC Buffer Busy
  • Physical Reads
  • Logical Reads
  • Row Lock Waits

This view can offer a lot of data when trying to drill down and find out what object may be causing concurrency or block wait issues across different nodes.
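Outside the console, a quick sketch of the same idea can be run against gv$segment_statistics, which tracks these same statistic names per segment, to rank the hottest objects yourself:

select *
from  (select inst_id, owner, object_name, statistic_name, value
       from   gv$segment_statistics
       where  statistic_name in ('gc buffer busy',
                                 'gc cr blocks received',
                                 'gc current blocks received')
       order by value desc)
where rownum <= 10;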

Tile Chart View

You can adjust the Summary Chart to a Tile Chart view, too:


There are, again, a number of different ways to view this data between the nodes to see how resources are being allocated in the cluster:


Once you have investigated and found what object or interconnect issue is causing your waits, you can then use the Additional Links section to further research and resolve performance issues: Interconnects for cluster interconnect-related issues and Top Activity for database issues.  I would also highly recommend checking out ASH Analytics or ADDM Comparison for enhanced features to get the most out of EM12c.  Both features are available in the Performance drop-down in the target console!







Copyright © DBA Kevlar [Cluster Cache Coherency in EM12c], All Right Reserved. 2015.

Oracle XFILES now on GitHub

The demonstration environment for Oracle XML DB called XFILES is now on GitHub. As stated…

Oracle Database 12c Patching: DBMS_QOPATCH, OPATCH_XML_INV, and datapatch


Oracle Database 12c brings us many new features, including the long-needed ability to run OPatch and query the patch software installed in the Oracle Home programmatically through SQL using the new DBMS_QOPATCH package.

If you’re a DBA working in an environment where patching consistently among databases is important, this is a welcome enhancement. Sometimes one-off critical bug fix patches are important in the environment. Other times regular and consistent application of the quarterly PSUs is important to the organization (i.e. for regulatory compliance). For the sake of illustration I’m going to use the latter as my use case for the duration of this article, as I’ve regularly run into the situation where clients need PSUs applied properly, regularly, and consistently to their databases.

As a recap, many Oracle patches, including PSUs, require a two-step process to implement properly. First, the software change needs to be applied to the associated Oracle Home via the OPatch utility. Second, it needs to be installed into all databases running in that Oracle Home (possibly updating internal packages and objects).

With Oracle 11g the latest patch is applied within the database using:

SQL> @catbundle.sql psu apply

Oracle 12c introduces a new utility called “datapatch” which replaces the need to run the 11g command shown above.

As with Oracle 11g, you first install the patch into the Oracle Home with all services and databases down. But with Oracle Database 12c, after restarting the database, simply run datapatch from the OPatch directory:

./datapatch -verbose


Given that patching is a two-step process, some common DBA questions are:

1) Has a patch been installed in a given database?
2) Has a patch been installed into the $ORACLE_HOME via OPatch but not one or more of the databases?
3) Is the patch in the database but not the $ORACLE_HOME?

You may be thinking that #3 isn’t very likely but the reality is that it does happen. Circumstances that lead to a database being at a higher patch level than the Oracle Home include:

A) The database was created from an RMAN duplicate from a patched home to an unpatched one.
B) A Data Guard standby switchover or failover where the primary Oracle Home was patched but the standby home was missed.
C) A 12c PDB that was unplugged from a patched CDB and plugged into an unpatched CDB.
D) The patch was uninstalled via OPatch from the Oracle Home without uninstalling it from the database (unlikely).

Since patching is a two-step process, historically answering questions such as these has also been a two-step process requiring the DBA to query the Oracle Home inventory and the database and to manually correlate and reconcile the results. This is cumbersome, prone to human error, and not scalable due to the manual component.

But with Oracle 12c and the new DBMS_QOPATCH package, both can be queried programmatically and joined using SQL, allowing the DBA to easily report on questions like 1, 2, and 3 above via SQL queries, custom extensions to monitoring tools, reporting programs, etc. Or just to report accurately on patch deployments for security compliance reports, general database security assessments, or database health checks.

This is actually quite a significant breakthrough with regards to patch management!


How it works

The first thing to note is that in Oracle Database 12c there are some new Directory Objects installed by default:

SQL> select owner, directory_name, directory_path from dba_directories
  2  where directory_name like 'OPATCH%' order by 2;

OWNER        DIRECTORY_NAME       DIRECTORY_PATH
------------ -------------------- --------------------------------------------------------------------------------
SYS          OPATCH_INST_DIR      /u01/app/oracle/product/12.1.0/dbhome_1/OPatch
SYS          OPATCH_LOG_DIR       /u01/app/oracle/product/12.1.0/dbhome_1/QOpatch
SYS          OPATCH_SCRIPT_DIR    /u01/app/oracle/product/12.1.0/dbhome_1/QOpatch



Two of those point to a new QOpatch directory which contains a batch file which runs the OPatch utility:

SQL> !ls /u01/app/oracle/product/12.1.0/dbhome_1/QOpatch
qopatch_log.log  qopiprep.bat

SQL> !cat /u01/app/oracle/product/12.1.0/dbhome_1/QOpatch/qopiprep.bat
# $Header: rdbms/src/client/tools/qpinv/qopiprep.bat /unix/2 2013/04/04 20:59:27 tbhukya Exp $
# qopiprep.bat
# Copyright (c) 2012, 2013, Oracle and/or its affiliates. All rights reserved.
#    NAME
#      qopiprep.bat - bat file for preprocessor
#      bat file for external table preprocessor.
#    NOTES
#      .
#    tbhukya     04/03/13 - Bug 16226172 : Forward merge of fix 16076845
#    tbhukya     09/23/12 - Creation
export PATH

# Option: "-retry 0" avoids retries in case of locked inventory.

# Option: "-invPtrLoc" is required for non-central-inventory
# locations. $OPATCH_PREP_LSINV_OPTS which may set by users
# in the environment to configure special OPatch options
# ("-jdk" is another good candidate that may require configuration!).

# Option: "-all" gives information on all Oracle Homes
# installed in the central inventory.  With that information, the
# patches of non-RDBMS homes could be fetched.

$ORACLE_HOME/OPatch/opatch lsinventory -xml  $ORACLE_HOME/QOpatch/xml_file.xml -retry 0 -invPtrLoc $ORACLE_HOME/oraInst.loc >> $ORACLE_HOME/QOpatch/stout.txt
`echo "UIJSVTBOEIZBEFFQBL" >> $ORACLE_HOME/QOpatch/xml_file.xml`
echo `cat $ORACLE_HOME/QOpatch/xml_file.xml`
rm $ORACLE_HOME/QOpatch/xml_file.xml
rm $ORACLE_HOME/QOpatch/stout.txt


The description text in this Oracle-provided file states that it’s a batch file used for external table preprocessing. The preprocessor clause was added in Oracle 11g Release 2 (and back-ported to Oracle 11gR1) to allow execution of an external script/file to process data before reading it in the database via an external table. A further explanation of this feature is beyond the scope of this article.

Consequently we know an external table is involved and it’s not too hard to find and understand that:

SQL> select owner, table_name from dba_external_tables where table_name like 'OPATCH%' order by 1,2;

OWNER        TABLE_NAME
------------ --------------------
SYS          OPATCH_XML_INV
SQL> select dbms_metadata.get_ddl('TABLE','OPATCH_XML_INV','SYS') from dual;

  CREATE TABLE "SYS"."OPATCH_XML_INV"
   (    "XML_INVENTORY" CLOB
   )
   ORGANIZATION EXTERNAL
    ( TYPE ORACLE_LOADER
      DEFAULT DIRECTORY "OPATCH_SCRIPT_DIR"
      ACCESS PARAMETERS
      ( RECORDS DELIMITED BY NEWLINE CHARACTERSET UTF8
      DISABLE_DIRECTORY_LINK_CHECK
      READSIZE 8388608
      preprocessor opatch_script_dir:'qopiprep.bat'
      BADFILE opatch_script_dir:'qopatch_bad.bad'
      LOGFILE opatch_log_dir:'qopatch_log.log'
      FIELDS TERMINATED BY 'UIJSVTBOEIZBEFFQBL'
      MISSING FIELD VALUES ARE NULL
      REJECT ROWS WITH ALL NULL FIELDS
      (
        xml_inventory    CHAR(100000000)
      )
      )
      LOCATION
       ( "OPATCH_SCRIPT_DIR":'qopiprep.bat'
       )
    )
   REJECT LIMIT UNLIMITED



Hence we can understand the underlying mechanics of this new feature. It’s based on an external table using a directory object running an external batch file (through the external table preprocessor option) which then runs the actual OPatch utility.
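To illustrate the pattern in isolation, here’s a minimal sketch (with hypothetical names and paths) of an external table whose “data” is really the standard output of a preprocessor script, the same trick qopiprep.bat uses:

create or replace directory script_dir as '/u01/app/oracle/scripts';

create table script_output (
  line varchar2(4000)
)
organization external (
  type oracle_loader
  default directory script_dir
  access parameters (
    records delimited by newline
    preprocessor script_dir:'run_me.sh'  -- the stdout of this script becomes the table data
    fields terminated by '|'
    missing field values are null
    ( line char(4000) )
  )
  location ('run_me.sh')                 -- the "file" handed to the preprocessor as its argument
)
reject limit unlimited;

-- each select re-runs the script and reads whatever it prints
select line from script_output;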

And we can query that external table directly if we want, though we’ll get an XMLTYPE result:

SQL> select * from OPATCH_XML_INV;

<?xml version="1.0" encoding="UTF-8" standalone="yes"?> <InventoryInstance> <ora
cleHome> <UId>OracleHome-0243b34c-d8db-43f7-946b-589110de0ef9</UId> <targetTypeI
d>oracle_home</targetTypeId> <inventoryLocation>/u01/app/oraInventory</inventory
(output truncated)



Oracle gives us a wrapper API to this new external table (OPATCH_XML_INV) with DBMS_QOPATCH to make working with the output more manageable.

For example, we can start with some basic information on the inventory using DBMS_QOPATCH.GET_OPATCH_INSTALL_INFO:

SQL> set heading off long 50000
SQL> select dbms_qopatch.get_opatch_install_info from dual;




But as we can see, the output still isn’t easy on the eyes. Fortunately the XML stylesheet is also provided, via the DBMS_QOPATCH.GET_OPATCH_XSLT function. Hence combining the two gives a much more readable output:

SQL> select xmltransform(dbms_qopatch.get_opatch_install_info, dbms_qopatch.get_opatch_xslt) from dual;

Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_1
Inventory         : /u01/app/oraInventory



The DBMS_QOPATCH package provides many other useful functions and procedures. For example, to see if and when a specified patch was installed, or just to see the list of all of the patches installed by bug number:

SQL> select xmltransform(dbms_qopatch.is_patch_installed('19303936'), dbms_qopatch.get_opatch_xslt) from dual;

Patch Information:
         19303936:   applied on 2014-12-20T13:54:54-07:00

SQL> select xmltransform(dbms_qopatch.get_opatch_bugs, dbms_qopatch.get_opatch_xslt) from dual;

  Bugs fixed:
          19157754  18885870  19303936  19708632  19371175  18618122  19329654
19075256  19074147  19044962  19289642  19068610  18988834  19028800  19561643
19058490  19390567  18967382  19174942  19174521  19176223  19501299  19178851
18948177  18674047  19723336  19189525  19001390  19176326  19280225  19143550
18250893  19180770  19155797  19016730  19185876  18354830  19067244  18845653
18849537  18964978  19065556  19440586  19439759  19024808  18952989  18990693
19052488  19189317  19409212  19124589  19154375  19279273  19468347  19054077
19048007  19248799  19018206  18921743  14643995



Or to run the equivalent of “opatch lsinventory” but from SQL instead of the OS:

SQL> select xmltransform(dbms_qopatch.get_opatch_lsinventory, dbms_qopatch.get_opatch_xslt) from dual;

Oracle Querayable Patch Interface 1.0

Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_1
Inventory         : /u01/app/oraInventory
Installed Top-level Products (1):
Oracle Database 12c                          
Installed Products ( 135)

Oracle Database 12c                               
Java Development Kit                              
(output truncated)


Putting it all together

The examples above are interesting, but to make this truly useful for patch application querying, monitoring, and reporting we need to join the output of DBMS_QOPATCH (which shows us what’s installed into the Oracle Home) with the new 12c view DBA_REGISTRY_SQLPATCH (which shows us what patches are applied to the database).

Prior to Oracle 12c, to list the PSUs installed into the Oracle Home we might use:

SQL> !$ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | egrep -i 'DATABASE PSU|DATABASE PATCH SET UPDATE'
19303936 19303936 Sat Dec 20 13:54:54 MST 2014 DATABASE PATCH SET UPDATE (OCT2014)



I obtained that result by shelling out from a SQL*Plus session running on the database server. But a centralized SQL-based monitoring or reporting tool likely won’t have that ability. A client-server database connection won’t be able to run OPatch easily, hence the DBMS_QOPATCH API is required.

To extract the same information from the DBMS_QOPATCH package we need to parse the resulting XMLTYPE output:

SQL> with a as (select dbms_qopatch.get_opatch_lsinventory patch_output from dual)
  2  select x.*
  3    from a,
  4         xmltable('InventoryInstance/patches/*'
  5            passing a.patch_output
  6            columns
  7               patch_id number path 'patchID',
  8               patch_uid number path 'uniquePatchID',
  9               description varchar2(80) path 'patchDescription',
 10               applied_date varchar2(30) path 'appliedDate',
 11               sql_patch varchar2(8) path 'sqlPatch',
 12               rollbackable varchar2(8) path 'rollbackable'
 13         ) x;

  PATCH_ID  PATCH_UID
---------- ----------
DESCRIPTION
--------------------------------------------------------------------------------
APPLIED_DATE                   SQL_PATC ROLLBACK
------------------------------ -------- --------
  19303936   18116864
Database Patch Set Update : (19303936)
2014-12-20T13:54:54-07:00      true     true



This is a great result: information on the PSUs installed into the Oracle Home, through SQL, returned to us in an easy-to-read and easy-to-work-with tabular form!

To make this information even more useful we need to join to the new DBA_REGISTRY_SQLPATCH view which replaces the DBA_REGISTRY_HISTORY view with respect to which patches have been applied in the database:

SQL> select patch_id, patch_uid, version, status, description
  2  from dba_registry_sqlpatch
  3  where bundle_series = 'PSU';

  PATCH_ID  PATCH_UID VERSION              STATUS
---------- ---------- -------------------- ---------------
DESCRIPTION
--------------------------------------------------------------------------------
  19303936   18116864             SUCCESS
Database Patch Set Update : (19303936)



Joining these two outputs allows us to easily write queries that report on the PSU patches deployed and ones only partially deployed (whether in the Oracle Home but not the database or vice versa):

SQL> --
SQL> --   List of PSUs applied to both the $OH and the DB
SQL> --
SQL> with a as (select dbms_qopatch.get_opatch_lsinventory patch_output from dual)
  2  select x.patch_id, x.patch_uid, x.rollbackable, s.status, x.description
  3    from a,
  4         xmltable('InventoryInstance/patches/*'
  5            passing a.patch_output
  6            columns
  7               patch_id number path 'patchID',
  8               patch_uid number path 'uniquePatchID',
  9               description varchar2(80) path 'patchDescription',
 10               rollbackable varchar2(8) path 'rollbackable'
 11         ) x,
 12         dba_registry_sqlpatch s
 13   where x.patch_id = s.patch_id
 14     and x.patch_uid = s.patch_uid
 15     and s.bundle_series = 'PSU';

  PATCH_ID  PATCH_UID ROLLBACK STATUS
---------- ---------- -------- ---------------
DESCRIPTION
--------------------------------------------------------------------------------
  19303936   18116864 true     SUCCESS
Database Patch Set Update : (19303936)

SQL> --
SQL> --   PSUs installed into the $OH but not applied to the DB
SQL> --
SQL> with a as (select dbms_qopatch.get_opatch_lsinventory patch_output from dual)
  2  select x.patch_id, x.patch_uid, x.description
  3    from a,
  4         xmltable('InventoryInstance/patches/*'
  5            passing a.patch_output
  6            columns
  7               patch_id number path 'patchID',
  8               patch_uid number path 'uniquePatchID',
  9               description varchar2(80) path 'patchDescription'
 10         ) x
 11  minus
 12  select s.patch_id, s.patch_uid, s.description
 13    from dba_registry_sqlpatch s;

no rows selected

SQL> --
SQL> --   PSUs applied to the DB but not installed into the $OH
SQL> --
SQL> with a as (select dbms_qopatch.get_opatch_lsinventory patch_output from dual)
  2  select s.patch_id, s.patch_uid, s.description
  3    from dba_registry_sqlpatch s
  4  minus
  5  select x.patch_id, x.patch_uid, x.description
  6    from a,
  7         xmltable('InventoryInstance/patches/*'
  8            passing a.patch_output
  9            columns
 10               patch_id number path 'patchID',
 11               patch_uid number path 'uniquePatchID',
 12               description varchar2(80) path 'patchDescription'
 13         ) x;

no rows selected


Simple queries such as those three are what can be incorporated into monitoring scripts and reports.

Some other DBMS_QOPATCH functions worth trying include:

set heading off long 50000 pages 9999 lines 180 trims on tab off
select xmltransform(dbms_qopatch.get_opatch_count, dbms_qopatch.get_opatch_xslt) from dual;
select xmltransform(dbms_qopatch.get_opatch_list, dbms_qopatch.get_opatch_xslt) from dual;
select xmltransform(dbms_qopatch.get_pending_activity, dbms_qopatch.get_opatch_xslt) from dual;

set serverout on
exec dbms_qopatch.get_sqlpatch_status;


Back-porting to Oracle Database 11g

An interesting question is: “can we back-port this approach to 11g?” and the answer is “absolutely“!

First of all, we need to create the directory objects, external table, and OS batch script. For simplicity I’m keeping the name and structure of each one the same as in 12c, but of course you can adjust them if desired.

Using DBMS_METADATA.GET_DDL on an Oracle 12c database I get the necessary DDL to add the directory objects and external tables to an 11g database (note that I’ve updated the Oracle Home path but that’s the only change):

CREATE OR REPLACE DIRECTORY "OPATCH_LOG_DIR" AS '/u01/app/oracle/product/11.2.0/dbhome_1/QOpatch';
CREATE OR REPLACE DIRECTORY "OPATCH_SCRIPT_DIR" AS '/u01/app/oracle/product/11.2.0/dbhome_1/QOpatch';

CREATE TABLE "SYS"."OPATCH_XML_INV"
   (    "XML_INVENTORY" CLOB
   )
   ORGANIZATION EXTERNAL
    ( TYPE ORACLE_LOADER
      DEFAULT DIRECTORY "OPATCH_SCRIPT_DIR"
      ACCESS PARAMETERS
      ( RECORDS DELIMITED BY NEWLINE CHARACTERSET UTF8
      DISABLE_DIRECTORY_LINK_CHECK
      READSIZE 8388608
      preprocessor opatch_script_dir:'qopiprep.bat'
      BADFILE opatch_script_dir:'qopatch_bad.bad'
      LOGFILE opatch_log_dir:'qopatch_log.log'
      FIELDS TERMINATED BY 'UIJSVTBOEIZBEFFQBL'
      MISSING FIELD VALUES ARE NULL
      REJECT ROWS WITH ALL NULL FIELDS
      (
        xml_inventory    CHAR(100000000)
      )
      )
      LOCATION
       ( "OPATCH_SCRIPT_DIR":'qopiprep.bat'
       )
    )
   REJECT LIMIT UNLIMITED;


Next I need to copy the preprocessor batch script into the 11g home (which can be copied from another machine if necessary):

!cp -r /u01/app/oracle/product/12.1.0/dbhome_1/QOpatch /u01/app/oracle/product/11.2.0/dbhome_1/QOpatch


Now, I don’t want to install the DBMS_QOPATCH package into a different version of the database. It’s Oracle-supplied “wrapped” code, meaning I can’t modify it, and it probably has 12c dependencies, meaning it would be invalid in Oracle 11g. Instead I performed a SQL Trace on the DBMS_QOPATCH.GET_OPATCH_LSINVENTORY package executing on the 12c database, and from the resulting trace file I see that all it’s doing is:

SELECT XMLTYPE(XML_INVENTORY) FROM OPATCH_XML_INV
Not exactly complicated. Hence using the directory objects and external table created above I can execute the following on the 11g database:

SQL> select version from v$instance;


SQL> select XMLTYPE(XML_INVENTORY) patch_output from OPATCH_XML_INV;

<?xml version="1.0" encoding="US-ASCII"?>
<INVENTORY isStandAlone="false">
(output truncated)


So as we can see, it’s working perfectly back-ported to 11g.

However, again, the XML output isn’t really useful for me; I’m more interested in what PSUs have been installed. That’s easy to check using OPatch if I’m on the server:

SQL> !$ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | egrep -i 'DATABASE PSU|DATABASE PATCH SET UPDATE'
18522509   18522509  Sun Sep 21 20:58:00 MDT 2014   DATABASE PATCH SET UPDATE (INCLUDES CPU
18031668   18031668  Sun Sep 21 20:57:48 MDT 2014   DATABASE PATCH SET UPDATE (INCLUDES CPU
17478514   17478514  Sun Sep 21 20:57:38 MDT 2014   DATABASE PATCH SET UPDATE (INCLUDES CPU


But again, what if I’m not on the server, or want to do this programmatically through SQL, build it into a monitoring query or a security compliance report, etc.?

Again we can query the XMLTYPE data and get exactly what we want. Specifically:

SQL> with a as (select XMLTYPE(XML_INVENTORY) patch_output from OPATCH_XML_INV)
  2  select x.* from a, xmltable(
  3     'INVENTORY/HOST/HOME/ONEOFF_LIST/INTERIM_PATCH/oneoff_inventory/base_bugs/bug'
  4     passing a.patch_output columns
  5     --row_number for ordinality,
  6     bug_number number path '@number',
  7     bug_description varchar2(256) path '@description'
  8  ) x
  9  where regexp_like(bug_description, '(DATABASE PSU|DATABASE PATCH SET UPDATE)','i')
 10  order by bug_number;

---------- ------------------------------------------------------------



Voila! By copying the technique used by the Oracle 12c database, we’ve now performed an OPatch query from SQL against an 11g database. It took only about 2 minutes of setup.

Again this can be super handy if PSU patching is a regular activity for you or if you have to produce security compliance reports.

Unfortunately, though, Oracle 11g records less data in DBA_REGISTRY_HISTORY than 12c has in DBA_REGISTRY_SQLPATCH:

SQL> select comments, action_time from dba_registry_history
  2  where bundle_series like '%PSU' order by action_time;

COMMENTS                       ACTION_TIME
------------------------------ ------------------------------
PSU                 23-SEP-14 AM



Hence joining the two is more challenging, but certainly possible. The key data (the PSU identifier) is present in both the DBA_REGISTRY_HISTORY output and my XQuery output from my back-ported OPATCH_XML_INV external table.
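For example, a rough sketch of one approach, assuming the PSU identifier in DBA_REGISTRY_HISTORY.COMMENTS (e.g. “PSU 11.2.0.4.3”) matches the version string embedded in the inventory’s patch description:

with inv as (
  select x.bug_description
  from   (select XMLTYPE(XML_INVENTORY) patch_output from OPATCH_XML_INV) a,
         xmltable(
           'INVENTORY/HOST/HOME/ONEOFF_LIST/INTERIM_PATCH/oneoff_inventory/base_bugs/bug'
           passing a.patch_output
           columns bug_description varchar2(256) path '@description'
         ) x
  where  regexp_like(x.bug_description, 'DATABASE PSU|DATABASE PATCH SET UPDATE', 'i')
)
select h.comments, h.action_time,
       case when exists (select null
                         from   inv i
                         where  i.bug_description like '%'||replace(h.comments,'PSU ','')||'%')
            then 'IN HOME AND DATABASE'
            else 'IN DATABASE ONLY'
       end patch_state
from   dba_registry_history h
where  h.bundle_series like '%PSU';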



The new DBMS_QOPATCH API is a fantastic improvement to Oracle Database 12c when it comes to patch management (querying/reporting/monitoring). And by understanding the concepts and components added to 12c, they can be manually back-ported to 11g. Both using DBMS_QOPATCH and back-porting it to 11g are actually pretty easy. In fact, the hardest part for me (not being very familiar with XQuery) was coming up with the query to parse the XMLTYPE data.



Oracle Recommended Patches — Oracle Database (Doc ID 756671.1)

Quick Reference to Patch Numbers for Database PSU, SPU(CPU), Bundle Patches and Patchsets (Doc ID 1454618.1)

How do you apply a Patchset, PSU or CPU in a Data Guard Physical Standby configuration (Doc ID 278641.1)

Log Buffer #405, A Carnival of the Vanities for DBAs

This Log Buffer Edition rides on the wave of new ideas in the database realms. From Oracle technologies through MySQL to the SQL Server, things are piping hot.


Oracle:

The Call for Papers for the Rittman Mead BI Forum 2015 is currently open, with abstract submissions accepted until January 18th 2015.

The replicat process is the apply process within the Oracle GoldenGate environment.


“log file sync” and the MTTR Advisor.

Working with XML files and APEX – Part 3: Detail elements in a row with OUTER JOIN.

SQL Server:

Get started testing your database code with the tSQLt framework.

Archiving Hierarchical, Deleted Transaction Using XML.

As a part of his “Function / Iterator Pairs” mini-series, Business Intelligence architect, Analysis Services Maestro, and author Bill Pearson introduces the DAX COUNT() and COUNTX() functions, discussing similarities and differences.

Free eBook: Fundamentals of SQL Server 2012 Replication

Importance of Statistics and How It Works in SQL Server – Part 1


MySQL:

Is Zero downtime even possible on RDS?

Monitor MySQL Database Users with VividCortex

Django with time zone support and MySQL

Using Perl to send tweets stored in a MySQL database to twitter

When ONLY_FULL_GROUP_BY Won’t See the Query Is Deterministic.

Next ORCLAPEX-BE Meetup – Feb 2nd

Roeland organised a new APEX Meetup in Belgium on February 2nd.

This time there’s no specific topic; everybody can get on stage for a maximum of 5 minutes to show something they did or to ask for feedback or help on a specific problem.

If Belgium is too far, no worries, there are meetups planned all over the world.

The next meeting in the Netherlands is on February 18th organised by Alex, Christian and Roel.

But as I said, there are many others, so just check for a more local event.

Bitmap Counts

In an earlier (not very serious) post about count(*) I pointed out how the optimizer sometimes does a redundant bitmap conversion to rowid when counting. In the basic count(*) example I showed, this wasn’t a realistic issue unless you had set cursor_sharing to force (or the now-deprecated similar). There are, however, some cases where the optimizer can do this in more realistic circumstances, and this posting models a scenario I came across a few years ago. The exact execution path has changed over time (i.e. version) but the error persists, even in 12c.

First we create a “fact” table and a dimension table, with a bitmap index on the fact table and a corresponding primary key on the dimension table:

create table area_sales (
	area		varchar2(10)	not null,
	dated		date		not null,
	category	number(3)	not null,
	quantity	number(8,0),
	value		number(9,2),
	constraint as_pk primary key (dated, area),
	constraint as_area_ck check (area in ('England','Ireland','Scotland','Wales'))
);

insert into area_sales
with generator as (
	select	--+ materialize
		rownum 	id
	from	all_objects 
	where	rownum <= 3000
)
select
	decode(mod(rownum,4), 0,'England', 1,'Ireland', 2,'Scotland', 'Wales'),
	sysdate + 0.0001 * rownum,
	mod(rownum,300),	-- 300 distinct categories (matches the plans below)
	rownum,
	rownum
from	generator v1, generator v2
where	rownum <= 1e6
;

create bitmap index as_bi on area_sales(category) pctfree 0;

create table dim (
	id	number(3) not null,
	padding	varchar2(40)
);

alter table dim add constraint dim_pk primary key(id);

insert into dim
select	distinct category, lpad(category,40,category)
from	area_sales
;


begin
	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'AREA_SALES',
		method_opt 	 => 'for all columns size 1',
		cascade		 => true
	);

	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'DIM',
		method_opt 	 => 'for all columns size 1',
		cascade		 => true
	);
end;
/

Now we run a couple of queries and show their execution plans with execution rowsource statistics. First a query to count the number of distinct categories used in the area_sales table, then a query to list the IDs from the dimension table that appear in the area_sales table, then the same query hinted to run efficiently.

set trimspool on
set linesize 156
set pagesize 60
set serveroutput off

alter session set statistics_level = all;

select
	distinct category
from	area_sales
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

select  distinct category from  area_sales
| Id  | Operation                    | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
|   0 | SELECT STATEMENT             |       |      1 |        |    300 |00:00:00.01 |     306 |       |       |          |
|   1 |  HASH UNIQUE                 |       |      1 |    300 |    300 |00:00:00.01 |     306 |  2294K|  2294K| 1403K (0)|
|   2 |   BITMAP INDEX FAST FULL SCAN| AS_BI |      1 |   1000K|    600 |00:00:00.01 |     306 |       |       |          |

As you can see, Oracle is able to check the number of distinct categories used very quickly by scanning the bitmap index and extracting ONLY the values from each of the 600 index entries that make up the whole index (the E-rows figure effectively reports the number of rowids identified by the index, but Oracle doesn’t evaluate them to answer the query).

select  /*+   qb_name(main)  */  dim.* from dim where  id in (   select
   /*+     qb_name(subq)    */    distinct category   from
area_sales  )

| Id  | Operation                     | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
|   0 | SELECT STATEMENT              |       |      1 |        |    300 |00:00:10.45 |     341 |       |       |          |
|*  1 |  HASH JOIN SEMI               |       |      1 |    300 |    300 |00:00:10.45 |     341 |  1040K|  1040K| 1260K (0)|
|   2 |   TABLE ACCESS FULL           | DIM   |      1 |    300 |    300 |00:00:00.01 |      23 |       |       |          |
|   3 |   BITMAP CONVERSION TO ROWIDS |       |      1 |   1000K|    996K|00:00:02.64 |     318 |       |       |          |
|   4 |    BITMAP INDEX FAST FULL SCAN| AS_BI |      1 |        |    599 |00:00:00.01 |     318 |       |       |          |

What we see here is that (unhinted) Oracle has converted the IN subquery to an EXISTS subquery and then to a semi-join, which it has chosen to operate as a HASH semi-join. But in the process of generating the probe table Oracle has to convert the bitmap index entries into a set of rowids – all 1,000,000 of them in my case – and this introduces a lot of redundant work. In the original customer query (version 9 or 10, I forget which) the optimizer unnested the subquery and converted it into an inline view with a distinct – but still performed a redundant bitmap conversion to rowids. In the case of the client, with rather more than 1M rows, this wasted a lot of CPU.

select  /*+   qb_name(main)  */  dim.* from (  select   /*+
qb_name(inline)    no_merge    no_push_pred   */   distinct category
from   area_sales  ) sttv,  dim where dim.id = sttv.category

| Id  | Operation                      | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
|   0 | SELECT STATEMENT               |       |      1 |        |    300 |00:00:00.02 |     341 |       |       |          |
|*  1 |  HASH JOIN                     |       |      1 |    300 |    300 |00:00:00.02 |     341 |  1969K|  1969K| 1521K (0)|
|   2 |   VIEW                         |       |      1 |    300 |    300 |00:00:00.01 |     306 |       |       |          |
|   3 |    HASH UNIQUE                 |       |      1 |    300 |    300 |00:00:00.01 |     306 |  2294K|  2294K| 2484K (0)|
|   4 |     BITMAP INDEX FAST FULL SCAN| AS_BI |      1 |   1000K|    600 |00:00:00.01 |     306 |       |       |          |
|   5 |   TABLE ACCESS FULL            | DIM   |      1 |    300 |    300 |00:00:00.01 |      35 |       |       |          |

By introducing a manual unnest in the original client code I avoided the bitmap conversion to rowid, and the query executed much more efficiently. As you can see the optimizer has predicted the 1M rowids in the inline view, but used only the key values from the 600 index entries. In the case of the client it really was a case of manually unnesting a subquery that the optimizer was automatically unnesting – but without introducing the redundant rowids.

In my recent 12c test I had to include the no_merge and no_push_pred hints. In the absence of the no_merge hint Oracle did a join then group by, doing the rowid expansion in the process; if I added the no_merge hint without the no_push_pred hint then Oracle did a very efficient nested loop semi-join into the inline view. Although this still did the rowid expansion it “stopped early” so ran very quickly:

select  /*+   qb_name(main)  */  dim.* from (  select   /*+
qb_name(inline)    no_merge   */   distinct category  from   area_sales
 ) sttv,  dim where dim.id = sttv.category

| Id  | Operation                     | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
|   0 | SELECT STATEMENT              |       |      1 |        |    300 |00:00:00.02 |     348 |
|   1 |  NESTED LOOPS SEMI            |       |      1 |    300 |    300 |00:00:00.02 |     348 |
|   2 |   TABLE ACCESS FULL           | DIM   |      1 |    300 |    300 |00:00:00.01 |      35 |
|   3 |   VIEW PUSHED PREDICATE       |       |    300 |   3333 |    300 |00:00:00.01 |     313 |
|   4 |    BITMAP CONVERSION TO ROWIDS|       |    300 |   3333 |    300 |00:00:00.01 |     313 |
|*  5 |     BITMAP INDEX SINGLE VALUE | AS_BI |    300 |        |    300 |00:00:00.01 |     313 |

Bottom line on all this – check your execution plans that use bitmap indexes: if you see a “bitmap conversion to rowids” in cases where you don’t then visit the table, it may be a redundant conversion, and it may be expensive. If you suspect that this is happening, dbms_xplan.display_cursor() may confirm that you are doing a lot of CPU-intensive work to produce a very large number of rowids that you don’t need.
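If you want a starting point for that check, a simple sketch against v$sql_plan can list cached statements doing the conversion (you would still need to eyeball each plan to decide whether the rowids are actually used):

select sql_id, count(*) occurrences
from   v$sql_plan
where  operation = 'BITMAP CONVERSION'
and    options   = 'TO ROWIDS'
group by sql_id
order by occurrences desc;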

Setting up your own local Oracle Development environment in less than 15 minutes

The fastest and easiest way to setup your own local Oracle Development environment is by using the "Oracle Technology Network Developer Day" Database Virtual Box Appliance.

It contains:

  • Oracle Linux 7
  • Oracle Database 12c EE (CDB/PDB)
  • Oracle Rest Data Services (ORDS)
  • Oracle Application Express (APEX)
  • Oracle SQL Developer and Oracle SQL Developer Data Modeler
Step 1: Download and install Oracle VM VirtualBox

Step 2: Download the VM image that contains all the pieces you need

Step 3: Import the VM: File > Import Appliance


You can now start using the VM.

Some extra steps I took were adding a Shared Folder and setting up the Network connection.

Once you add the Shared Folder, open up a Terminal in your VM and type "mount -t vboxsf Downloads /mnt" (Downloads is my OSX Downloads folder and /mnt is my mount directory in VirtualBox); that makes it easier to transfer files to the VM.

Here's a screenshot once you run the VM:

The nice thing about this VM is that everything is ready to be used, and it includes labs so you can do some exercises too; there's even a reset script to put everything back the way it was.

Exercises are available for: Database 12c (including JSON, XML DB, ...), SQL Developer, APEX, REST Data Services and Cloud Services.

Who said installing Oracle was hard? :)

Generate nested JSON from SQL with ORDS and APEX 5

When creating web applications, at some point you will need to interact with JSON. Either you consume JSON or you need to generate it to be able to use that cool widget you found.

A few years ago when I wrote about interacting and customising charts, XML was the data format. Today JSON is more common, as it works so well with JavaScript. I needed to pass some nested JSON - here's an example: multiple customers have multiple orders, and an order consists of multiple items: 

What is the easiest way to generate that JSON data?

Luckily we have Oracle REST Data Services (ORDS); it literally takes just one SQL statement!
Here you go: 

Save and presto, you're done :) Hit the TEST button and copy/paste the generated JSON into, for example, JSONLint to validate the output - and yep, we have valid JSON.

But that is not all - it gets even better. APEX 5 comes with a new package, apex_json, which contains so many goodies - really a wonderful piece of code. I call the same SQL statement with the package and again I get the JSON I need. It's slightly different from what ORDS generated - ORDS has an items tag surrounding the data - but again it's valid JSON.
Note that APEX 4.2 has some JSON support too (apex_util.json_from_sql), but it doesn't go as far as the package you find in APEX 5.0 - for example the cursor syntax doesn't work there, but for straightforward JSON it does its job.
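To give an idea of the shape of such a call, here's a minimal sketch using apex_json with hypothetical customers/orders/order_items tables - the nested cursor expressions become nested JSON arrays:

declare
  l_cursor sys_refcursor;
begin
  open l_cursor for
    select c.customer_id,
           c.customer_name,
           cursor (select o.order_id,
                          cursor (select i.item_id, i.quantity
                                  from   order_items i
                                  where  i.order_id = o.order_id) items
                   from   orders o
                   where  o.customer_id = c.customer_id) orders
    from   customers c;

  apex_json.initialize_clob_output;
  apex_json.open_object;                   -- {
  apex_json.write('customers', l_cursor);  --   "customers": [ ...nested orders/items... ]
  apex_json.close_object;                  -- }

  dbms_output.put_line(apex_json.get_clob_output);  -- fine for small payloads
  apex_json.free_output;
end;
/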

So this shows how easy it is to generate JSON these days. It hasn't always been this way; we used to rely on the PL/JSON package (which was good too, but built-in is what I prefer).

And finally with Oracle Database 12c, and the JSON support straight in the database, consuming and querying JSON has been made so much easier, but that is for another post. If you can't wait, the link will bring you to the official 12c Database JSON doc.
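As a tiny teaser of that 12c support, a sketch with a hypothetical table (the features below arrived in 12.1.0.2):

-- the IS JSON check constraint enables dot-notation querying
create table customer_docs (
  id  number primary key,
  doc clob constraint doc_is_json check (doc is json)
);

-- simple dot-notation access
select c.doc.customer_name from customer_docs c;

-- flatten a nested orders array with json_table
select jt.order_id, jt.order_total
from   customer_docs c,
       json_table(c.doc, '$.orders[*]'
         columns order_id    number path '$.id',
                 order_total number path '$.total') jt;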

Understanding the APEX url – passing variables to items and (interactive) reports

Did you know you can recognise an Oracle APEX application through the url?

Here's the syntax:
f?p=App:Page:Session:Request:Debug:ClearCache:itemNames:itemValues:PrinterFriendly
Let me give some examples:


The first part depends on your configuration: I'm using https, my domain name is , and I'm using Oracle REST Data Services as the connection to the database. This is configured in your webserver.

The next part is f?p=. f is a procedure in the database which takes parameters (p=). The f comes from flow - APEX was once called "Project Flows".

209 is my application id, 12 is my page id and 12351378808570 is my session.
The first part is the same, but now the request is CSV; page 12 is an Interactive Report, and the fastest way to see your IR data as CSV is to pass CSV as the request in the url.
I changed the request again, this time to the name of a saved Interactive Report (Alternative), IR_REPORT_: so the page goes straight to that layout of the report.

Here I'm calling the page in DEBUG mode
In the clear cache section you have a couple of options; specifying the below will clear the cache for: 
- APP: whole application
- SESSION: current user session
- RIR: reset interactive report to primary report
- CIR: reset to primary report but with custom columns
- RP: reset pagination
- 12: page 12
- P12_ID: item P12_ID
Here I pass a variable with the url; we'll fill P12_ID with the value 1.
You can use a comma-separated list of items and values, for example: P12_ID,P12_NAME:1,DIMITRI
If you want to filter an interactive report you can pass IR_<column name>:<value>:
In the above case we set the customer last name = Dulles.
There are many other operator prefixes: IREQ_ (equal), IRLT_ (less), IRLTE_ (less or equal), IRGT_ (greater), IRGTE_ (greater or equal), IRLIKE_ (like), IRN_ (null), IRNN_ (not null), IRC_ (contains), IRNC_ (not contains).
In this case we run the page in Printer-friendly mode
This url will generate a trace file on the system for this request.
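Putting a few of those pieces together, some hypothetical examples using the application, page and session from above (209, page 12):

f?p=209:12:12351378808570:CSV:NO
f?p=209:12:12351378808570::NO:RP,12:P12_ID,P12_NAME:1,DIMITRI
f?p=209:12:12351378808570::NO::IREQ_CUSTOMER_LAST_NAME:Dulles

The first downloads the Interactive Report as CSV, the second clears pagination and the page 12 cache while setting two items, and the third applies an equals filter on the report.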

You can find some more information in the APEX Documentation, or you can try online at

One remark: if you enable Session State Protection (SSP) you might not be able to call the parameters like this, as it requires a checksum at the end, which you have to calculate, for example, by using apex_util.prepare_url().
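For example, a sketch of generating a checksummed link from within your APEX application (assuming page 12 and item P12_ID as above):

select apex_util.prepare_url(
         p_url           => 'f?p=' || :APP_ID || ':12:' || :APP_SESSION || '::NO::P12_ID:1',
         p_checksum_type => 'SESSION') as url
from   dual;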

And finally, if you want more readable urls, some people have written posts about that; here's one by Peter.

AWR Warehouse and SYSDBA

I’ve had a few folks ask me similar questions about the AWR Warehouse, coming up numerous times this week.

“How can I limit what the user of the AWR Warehouse can view in the AWR Warehouse?”

“How can I add source databases to the AWR Warehouse without DBA privileges?”

This topic bridges into an area of confusion between use of the EM12c environment (which I consistently promote for use by DBAs, developers and all of IT) and the AWR Warehouse, which shares its user interface as part of Cloud Control but currently is a DBA’s deep analysis and research tool.

The request to limit privileges for adding source databases and to limit view access to targets, but also to bind values, full SQL statements, and advanced performance data, impacts the purpose of the AWR Warehouse.  I fully understand the security requirements for AWR Warehouse access as stated in Oracle’s support documentation:

“You can add and remove source databases provided you have access to the database target and to database credentials with execute privileges on the sys.dbms_swrf_internal package and the DBA role.”

Why is this important?

The first reason would be space considerations.  AWR data with long-term retention could add up to considerable disk space if just any database were added without careful planning to ensure the right information is retained for the right databases.  The power of the Automatic Workload Repository (AWR) is that it’s always on, but always on means it’s always growing, and this is best left to a DBA resource to ensure that space is allocated wisely.

Second, regarding limiting the view of data in the AWR Warehouse: when investigating an issue with any SQL execution, there are many factors to take into consideration.  One of the most significant, and often the key to answering why a performance issue occurred, is the difference between the amount of data actually returned by the where clause and objects vs. the data provided to the optimizer.

If we take the following, simple where clause into consideration:

where a.col1=b.col2
and a.col2=:b1
and b.col7=:b2;

When the optimizer uses the data provided to it from statistics, histograms and any dynamic sampling, there are a number of choices that can be made from the information provided:

  1. Hash join on table a and b to address the join or perform nested loop if…
  2. Column a.col2 is unique, making the join quite small or…
  3. Adding b.col7 to a.col2 to the join will make it so unique that a nested loop is sufficient.

Now, what if the optimizer decided to perform a nested loop when 16 million+ rows were returned?

To the untrained eye, some may assume that the optimizer had made a poor choice or that there was a bug and would walk away.  More often, if you have the data provided by the values passed to the bind variables, along with the data provided to the optimizer, assumptions would fall away and a very different story would present itself.

This is why Oracle requires DBA privileges to add a source database to the AWR Warehouse and to work with the data provided as part of the AWR Warehouse. This feature provides an abundance of data that is most useful to the professional who knows how to work with the Automatic Workload Repository.  That professional, having the access required to perform this type of analysis and research, should be the database administrator, so the requirement for the AWR Warehouse makes perfect sense.

Now to return to the query, adding in the values for the bind variables, a new picture develops to research:

where a.col1=b.col2
and a.col2=6002
and b.col7='YES';

We can now verify the statistics behind the values for both a.col2 and b.col7 and accurately diagnose where the optimizer may have been misled due to incorrect data provided to the cost-based optimizer.
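For reference, the captured bind values that drive this kind of analysis are retained in AWR and can be pulled with a quick sketch like this (&sql_id is a placeholder for the SQL_ID of the statement under review):

select b.snap_id, b.name, b.position, b.value_string, b.last_captured
from   dba_hist_sqlbind b
where  b.sql_id = '&sql_id'
order by b.snap_id, b.position;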

This may be a simple explanation of why I believe the DBA privilege policy was chosen for source database additions and view options in the AWR Warehouse, but hopefully it sheds a bit of light on the topic.



Copyright © DBA Kevlar [AWR Warehouse and SYSDBA], All Right Reserved. 2015.