Datafile without backups – how to restore?

Have you ever had a problem with restoring datafiles when no backups are available? It's easy, of course, provided you have all the archived logs from the time the datafile was created. Please check it here: Re-Creating Data Files When Backups Are Unavailable. Moreover, RMAN is clever enough to create the empty datafile automatically during the restore phase and then recover it using archived logs. So far, so good, but...
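As a quick sketch of the mechanics (assuming the lost file is datafile 7 and every archived log since its creation is still available):

RMAN> restore datafile 7;    # no backup exists, so RMAN creates an empty datafile
RMAN> recover datafile 7;    # re-applies all redo from the archived logs since creation
RMAN> sql 'alter database datafile 7 online';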

Dowsing for Smarties

I have been a tad skeptical about the usefulness of smart watches, but my colleague Julia Blyumen has changed my thinking.

In her recent blog post, Julia noted that a smart watch could become both a detector and a universal remote control for all IoT “smart things”. She backed this up with a link to an excellent academic paper (pdf), “User Interfaces for Smart Things: A Generative Approach with Semantic Interaction Descriptions”.

I strongly encourage anyone interested in the Internet of Things to read this paper. In it the authors lay the foundations for a general purpose way of interacting with “smart things”, interactive components that can sense and report on current conditions (counters, thermometers), or respond to commands (light switches, volume knobs).

These smarties (as I like to call them) will have much to tell us and will be eager to accept our commands. But how will we interact with them? Will they adapt to us or must we adapt to them? How will we even find them?

The authors propose a brilliant solution: let each smartie emit a description of what it can show or do. Once we have that description, we can devise whatever graphical user interface (or voice command or gesture) we want. And we could display that interface anywhere: on a webpage or a smartphone – or a watch!

Another one of my AppsLab colleagues, Raymond Xie, immediately saw a logical division of labor: use a phone or tablet for complex interactions, use a watch for simple monitoring and short command bursts.

Another way a watch could work in concert with a phone would be as a “smartie detector”.  It will be a long time (if ever) before every thing is smart.  Until then it will often not be obvious whether the nearby refrigerator, copy machine, projector, or lamp is controllable.

Watches could fill this gap nicely.  Every time your watch comes within a few feet of a smartie it could vibrate or display an icon or show the object’s state or whatever.  You could then just glance at your wrist to see if the object is smart instead of pulling out your phone and using it as a dowsing rod.

One way of implementing this would be for objects or fixed locations (room doors, cubicles, etc.) to simply emit a short-range Bluetooth ID beacon.  The watch or its paired phone could constantly scan for such signals (as long as its battery holds out).  If one were detected, it would use local Wi-Fi to query for the ID and pull up an associated web page.  Embedded code in the web page would provide enough information to display a simple readout or controller. The watch could either display it automatically or just show an indicator to let the user know she could tap or speak for quick interactions, or pull out her phone to play with a complete web interface.

An example I would find useful would be meeting room scheduling.  I often arrive at a meeting room to find someone else is already using it.  It would be nice to wave my watch at the door and have it confirm who had reserved the room or when it would next be free. Ideally, I could reserve it myself just by tapping my watch. If I realized that I was in the wrong place or needed to find another room, I could then pull out my phone or tablet with a meeting room search-and-reserve interface already up and running.

But that’s just the beginning.

One of the possibilities that excites me the most about this idea is the ability to override all the confusing and aggravating UIs that currently assault me from every direction and replace them with my own UIs, customized to my tastes.  So whenever I am confronted with a mysterious copy machine or the ridiculously complicated internet phone we use at work, or a pile of TV remote controls with 80 buttons apiece, or a BART ticket machine with poorly marked slots and multiple OK buttons, or a rental car with diabolically hidden wiper controls, I could pull out my phone (or maybe even just glance at my watch) to see a more sane and sensible UI.

Designers could perfect and sell these replacement UIs, thus freeing users from the tyranny of having to rely on whatever built-in UI is provided.  This would democratize the user experience in a revolutionary way.  It would also be a boon for accessibility.  Blind users or old people or children or the wheelchair-bound could replace any UI they encounter in the wild with one specially adapted for them.

Virtual interfaces could also end the tedium of waiting in lines. Lines tend to form in parking garages and conference registration because only one person can use a kiosk at a time. But if you could tap into a kiosk from your smart watch, dozens of people could complete their transactions at the same time.

Things get even more interesting if people start wearing their own beacons.  You could then use your watch to quickly capture contact information or create reminders; during a hallway conversation, a single tap could “set up meeting with Jake”. Even automatically displaying the full name of the person next to you would be helpful to those of us who sometimes have trouble remembering names.

If this capability was ubiquitous and the range was a bit wider you could see and interact with a whole roomful of people or even make friends during a plane ride. Even a watch could display avatars for nearby people and let you bring any one into focus. You could then take a quick action from the watch or pass the selected avatar to your phone/tablet/laptop to initiate something more complex like transferring a file.

Of course this could get creepy pretty fast.  People should have some control over the information they are willing to share and the kind of interactions they wish to permit. It’s an interesting design question: “What interaction UIs should a person emit?”

We are still at the dawn of the Internet of Things, of course, so it will be a while before all of this comes to pass. But after reading this paper I now look at the things (and people) around me with new eyes. What kind of interfaces could they emit? Suddenly the idea of using a watch to dowse for smarties seems pretty cool.

Riga Dev Day 2015 : The Journey Begins

My first flight was 06:20, so I had to leave the house at 04:00, which meant getting up at about 03:00. Yuck!

The first flight was about 90 minutes from Birmingham to Frankfurt, with the smoothest landing I can ever remember in a plane the size of an A320. I was meant to have a 65-minute changeover before the flight from Frankfurt to Riga. A one-hour changeover at Frankfurt is too short, but the alternative was to wait for about 6 hours. As soon as I got off the plane I did a quick march to the departure gate and arrived just as boarding was meant to start. The connecting plane was late getting in, so I got a 10-minute break before getting on board. The flight to Riga was pretty straightforward, taking a bit under 2 hours. Both planes had lots of empty seats, so I was able to get my laptop out and have a play.

At 16:00 we had a tour of the old town. It started with a quick bus tour and then we walked through some of the old town, with a tour guide telling us about a few of the key places. I really like the look of Riga. I took a few photos, which will be available here when I get to upload them.

From the tour, we went out to dinner, where we met up with some of the people who arrived too late to make the tour. This is a multi-discipline conference, so it was really cool speaking to people from different backgrounds, which helps you see how your pieces fit into the bigger picture…

From dinner it was back to the hotel and bed. It was a really long day, so I was desperate to get to bed. The conference starts tomorrow…

Cheers

Tim…



Partition gotchas

Two minor partition gotchas: one on partition extent sizing and one on the number of partitions in an interval partitioned table.

First – old news – note that the initial extent size for partitioned tables changed in 11.2.0.2.
Somehow this passed me by and I was late to the large extent party – duh!
Influenced by the parameter _partition_large_extents, the default was changed to 8MB rather than 64KB for autoallocated ASSM extent sizing.

Pre 11.2.0.2 (non-Exadata):

alter session set "_partition_large_extents" = false;

create table t1
(col1   number not null)
partition by range (col1)    interval ( 1 )
(partition p0 values less than (1));

insert into t1 (col1) values (1);

See initial extent size of 64k:

select table_name
,      partition_name
,      high_value
,      initial_extent
,      next_extent 
from user_tab_partitions where table_name = 'T1';

TABLE_NAME PARTITION_NAME HIGH_VALUE INITIAL_EXTENT NEXT_EXTENT
---------- -------------- ---------- -------------- -----------
T1         P0             1          
T1         SYS_P133592    2          65536          1048576 

Onwards from 11.2.0.2:

alter session set "_partition_large_extents" = true;

drop table t1;

create table t1
(col1   number not null)
partition by range (col1)    interval ( 1 )
(partition p0 values less than (1));

insert into t1 (col1) values (1);

Back to default initial extent of 8MB:

select table_name
,      partition_name
,      high_value
,      initial_extent
,      next_extent 
from   user_tab_partitions where table_name = 'T1';

TABLE_NAME PARTITION_NAME HIGH_VALUE INITIAL_EXTENT NEXT_EXTENT
---------- -------------- ---------- -------------- -----------
T1         P0             1          
T1         SYS_P133593    2          8388608        1048576 

I noticed this on a DB where a lot of space had been wasted on small or empty partitions.
If they were empty, why was space wasted? Shouldn't deferred segment creation have helped?
Deferred segment creation was off on this DB because of a historic occurrence of bug 12535346 involving deferred constraints combined with other features with “deferred” behaviour (deferred segment creation or interval partitioning).
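If you want to check where you stand on a given system, a quick look might be something like this (assuming 11.2 or later, where USER_TAB_PARTITIONS exposes a SEGMENT_CREATED column):

show parameter deferred_segment_creation

select partition_name, segment_created
from   user_tab_partitions
where  table_name = 'T1';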

Second gotcha – if you are considering INTERVAL PARTITIONING, particularly with a RANGE of 1, think about the impact of the maximum number of interval partitions.

Following on from example T1 table above:

select table_name
,      partition_name
,      high_value 
from   user_tab_partitions where table_name = 'T1';

TABLE_NAME PARTITION_NAME HIGH_VALUE 
---------- -------------- -----------
T1         P0             1                                                                                
T1         SYS_P133593    2        
insert into t1 (col1) values (1048574);

1 row inserted

insert into t1 (col1) values (1048575);

ORA-14300: partitioning key maps to a partition outside maximum permitted number of partitions

The maximum number of partitions in an interval partitioned table is documented as 1024K - 1 = 1048575.

select table_name
,      partition_name
,      high_value 
from   user_tab_partitions where table_name = 'T1';

TABLE_NAME PARTITION_NAME HIGH_VALUE 
---------- -------------- -----------
T1         P0             1                                                                                
T1         SYS_P133593    2                                                                                
T1         SYS_P133594    1048575   

Although we only have two materialised partitions, the possible future partition boundaries in between are effectively reserved for numbers between the lower and upper boundaries.

There’s not a lot we can do about this UNLESS you know that you’re not going to need those reserved partitions OR you’re happy for other numbers to go into existing partitions, effectively having ranges much larger than the original 1.

At the moment, if we go higher than our reserved boundaries it fails, but if, for example, the key “1048573” comes along, which is within our reserved boundaries, it will create a new partition:

insert into t1 (col1) values (1048573);

1 row inserted

select table_name
,      partition_name
,      high_value 
,      interval
from   user_tab_partitions where table_name = 'T1';

TABLE_NAME PARTITION_NAME HIGH_VALUE INTERVAL
---------- -------------- ---------- --------
T1         P0             1          YES                                                              
T1         SYS_P133593    2          YES
T1         SYS_P133594    1048575    YES
T1         SYS_P133595    1048574    YES

But what we can do is convert our existing partitions to normal RANGE partitions:

alter table t1 set interval ();

table altered

select table_name
,      partition_name
,      high_value 
,      interval
from   user_tab_partitions where table_name = 'T1';

TABLE_NAME PARTITION_NAME HIGH_VALUE INTERVAL
---------- -------------- ---------- --------
T1         P0             1          NO                                                              
T1         SYS_P133593    2          NO
T1         SYS_P133594    1048575    NO
T1         SYS_P133595    1048574    NO

And then we can set the table back to interval partitioning without affecting those existing partitions:

alter table t1 set interval (1);

Now if a new value comes along and it falls within an existing range, it will use the existing range partition:

insert into t1 (col1) values (1048572);

1 row inserted

select table_name
,      partition_name
,      high_value 
,      interval
from   user_tab_partitions where table_name = 'T1';

TABLE_NAME PARTITION_NAME HIGH_VALUE INTERVAL
---------- -------------- ---------- --------
T1         P0             1          NO                                                              
T1         SYS_P133593    2          NO
T1         SYS_P133594    1048575    NO
T1         SYS_P133595    1048574    NO

select col1
, (select subobject_name from dba_objects where data_object_id = rd) pname 
from (
select col1
,      dbms_rowid.rowid_object(rowid) rd
from t1 where col1 IN (1048572,1048573));

      COL1 PNAME                        
---------- ------------------------------
   1048573 SYS_P133595                    
   1048572 SYS_P133595      

But if a new high comes along, we can now create our new interval partition:

insert into t1 (col1) values (1048575);

1 row inserted

select table_name
,      partition_name
,      high_value 
,      interval
from   user_tab_partitions where table_name = 'T1';

TABLE_NAME PARTITION_NAME HIGH_VALUE INTERVAL
---------- -------------- ---------- --------
T1         P0             1          NO                                                              
T1         SYS_P133593    2          NO
T1         SYS_P133594    1048575    NO
T1         SYS_P133595    1048574    NO
T1         SYS_P133596    1048576    YES

And our limit of 1048575 partitions still exists, but the window of reserved future interval partitions has moved out:

insert into t1 (col1) values (2097145);

1 row inserted

insert into t1 (col1) values (2097146);

ORA-14300: partitioning key maps to a partition outside maximum permitted number of partitions

This workaround has very limited usefulness as mentioned.
Bottom line – if using interval partitioning, give careful consideration to this limit.


LOB Space

Following on from a recent “check the space” posting, here’s another case of the code not reporting what you thought it would, prompted by a question on the OTN database forum about a huge space discrepancy in LOBs.

There’s a fairly well-known package called dbms_space that can give you a pretty good idea of the space used by a segment stored in a tablespace that’s using automatic segment space management. But what are you to think when a piece of code (written by Tom Kyte, no less) reports the following stats about your biggest LOB segment:


Unformatted Blocks .....................             107
FS1 Blocks (0-25)  .....................               0
FS2 Blocks (25-50) .....................               0
FS3 Blocks (50-75) .....................               0
FS4 Blocks (75-100).....................               0
Full Blocks        .....................         859,438
Total Blocks............................       1,746,304
Total Bytes.............................  14,305,722,368

Of the available 1.7M blocks approximately 890,000 seem to have gone missing!

I tend to think that the first thing to do when puzzled by unexpected numbers is to check for patterns in the arithmetic. First (though not particularly interesting) the LOB segment seems to be using the standard 8KB block size: 1,746,304 * 8192 = 14,305,722,368. More interestingly, though only approximately true, the number of full blocks is pretty close to half the total blocks – does that give you a hint? Time for a little test.


create table test_lobs (
        id              number(5),
        text_content    clob
)
lob (text_content) store as text_lob(
        disable storage in row
        chunk 32K
        tablespace test_8k_1m_assm
)
;

begin
        for i in 1..1000 loop
                insert into test_lobs values(
                        i, 'x'
                );
                commit;
        end loop;
end;
/

I’ve created a table with a LOB segment, storing LOBs out of row with a chunk size of 32KB in a tablespace which (using my naming convention) is locally managed, 8KB block size, uniform 1MB extents, using ASSM. So what do I see if I try to check the space usage through calls to the dbms_space package? (There’s some sample code to do this in the comments of the blog I linked to earlier, but I’ve used some code of my own for the following – the first figure reported is blocks, the second bytes):


====
ASSM
====

Unformatted                   :      119 /    3,899,392
Freespace 1 (  0 -  25% free) :        0 /            0
Freespace 2 ( 25 -  50% free) :        0 /            0
Freespace 3 ( 50 -  75% free) :        0 /            0
Freespace 4 ( 75 - 100% free) :        0 /            0
Full                          :    1,000 /   32,768,000

=======
Generic
=======
Segment Total blocks: 4224
Object Unused blocks: 0

Apparently the segment has allocated 4,224 blocks, but we’ve only used 1,000 of them, with 119 unformatted and 3,105 “missing”; strangely, though, the 1,000 “Full blocks” are simultaneously reported as 32,768,000 bytes … and suddenly the light dawns. The dbms_space package is NOT counting blocks, it’s counting chunks; more specifically it’s counting “bits” in the bitmap space management blocks for the LOB segment and (I think I’ve written this somewhere, possibly as far back as Practical Oracle 8i) the bits in a LOB segment represent chunks, not blocks.
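A quick arithmetic check (my own, to confirm the point): a 32KB chunk is 4 blocks of 8KB, so 1,000 chunks account for 4,000 of the 4,224 allocated blocks; add the 119 unformatted blocks and the remaining hundred or so are bitmap and segment header overhead. Likewise 1,000 chunks * 32,768 bytes = 32,768,000 bytes – exactly the “Full” figure reported.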

Conclusion:

The OP had set a 16KB chunk size with an 8KB block size. His numbers are self-consistent.
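Apply the same arithmetic to the OP’s output: with a 16KB chunk on an 8KB block size, each “Full” bit represents 2 blocks, so 859,438 chunks * 2 = 1,718,876 blocks – within roughly 27,000 blocks (unformatted chunks plus space management overhead) of the 1,746,304 total. Nothing has gone missing after all.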


Oracle Node.js Database Driver

I’ve been eagerly awaiting the first release of the Official Oracle Node.js Driver, and Christopher Jones (@ghrd) just tweeted the announcement.

So I hopped over to the Github repository to take a look and try it out…

…I’m very impressed so far…

Back in 2014, I presented at KScope on “Oracle APEX + Node.JS – A Primer” where I showed many demos on just how you can integrate Node.JS into your APEX applications. For example I showed –

  • Office Integration – produce Word, Excel, Powerpoint documents
  • Consume and Publish REST Webservices from the database
  • Using Node to automatically export your APEX application and supporting objects
  • Integrating GRUNT into your APEX workflow
  • TextToSpeech from an APEX application (so you could have your APEX application talk to you!). This one was a fun one to do, even though it might not have many practical applications it’s kind of cool to do something that hasn’t really been done before.

Many more…

I first came across Node.JS a few years ago and became interested in ways to use it to extend APEX applications. I encourage you to take a look at it.
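To give a flavour of what the driver looks like, here’s a minimal sketch using its callback API (the connection details and query are placeholders for your own environment):

var oracledb = require('oracledb');

oracledb.getConnection(
  { user: "hr", password: "welcome", connectString: "localhost/orcl" },
  function (err, connection) {
    if (err) { console.error(err.message); return; }
    connection.execute(
      "select first_name, last_name from employees where rownum <= :maxrows",
      [5],  // bind values
      function (err, result) {
        if (err) { console.error(err.message); }
        else     { console.log(result.rows); }  // rows come back as arrays by default
        connection.release(function (err) {
          if (err) { console.error(err.message); }
        });
      });
  });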

p.s. I’ve been asked by a lot of people to share the demos I presented and I’ll be doing exactly that in a series of upcoming blog posts. Except this time I’ll use the Oracle Node Database driver!

Setting up Node and Oracle Database driver

Today Oracle introduced the node-oracledb driver, so you can easily connect from node to your Oracle database.
Previously I blogged about the Oracle VM you can download so you have your own local Oracle environment. Below are the steps to install the node driver and run a first example in that environment.
(Also read the official node-oracledb installation guide for a local database - 4. Installation with a local database).

First we need to install git (a source control system), as the source of the Oracle node driver is hosted there.

$ sudo yum install git

Download the source of the node-oracledb driver

$ git clone https://github.com/oracle/node-oracledb.git

Next we need to get Node. You can download it from the website, or do what I did and use wget to get the file for Linux.
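(The URL below is my reconstruction of the nodejs.org download layout at the time, so treat it as an assumption:)

$ wget http://nodejs.org/dist/v0.10.35/node-v0.10.35-linux-x64.tar.gz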


Install Node

tar -zxf node-v0.10.35-linux-x64.tar.gz

We follow the guide to set the PATH variable

export PATH=$HOME/node-v0.10.35-linux-x64/bin:$PATH

and we set the other variables pointing at the Oracle client, then run the installer from within the node-oracledb directory.
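Something along these lines works (a sketch based on the node-oracledb install guide of the time; the variable names and paths are assumptions to adapt to your environment):

$ export OCI_LIB_DIR=$ORACLE_HOME/lib
$ export OCI_INC_DIR=$ORACLE_HOME/rdbms/public
$ cd node-oracledb
$ npm install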


Depending on your system it might take a few seconds, and it should come back after a while with the build output.


So we now have Node installed and the Oracle node driver available as a module.

Next we run the first example. It's a select on a table in the HR schema, but that schema is locked by default. So we unlock it first:

sqlplus sys/oracle as sysdba
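The unlock itself is the standard command (the password here is just an example – pick your own):

SQL> alter user hr identified by hr account unlock;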



We need to change the dbconfig.js in the examples directory to point to our database:
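For the VM it ends up looking something like this (a sketch – the user, password and connect string must match your own setup):

module.exports = {
  user          : "hr",
  password      : "hr",
  connectString : "localhost/orcl"
};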


And now we can run the first example:
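Assuming select1.js is the bundled first example, from the examples directory that’s simply:

$ node select1.js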


The above lets you run SQL, PL/SQL etc. commands from within Node.

Happy playing with Node! But be warned, once you start with Node it's addictive to try different node modules... for example run "npm install node-tts-api" and you have a node module to do text-to-speech :)

And this module is something you can call from within your APEX application (so we go the other way compared to above - we go from Oracle to Node in this case).
It takes only one Dynamic Action - on change of the item - execute JavaScript:

var url = 'https://www.apexrnd.be/node/tts/' + $v(P13_TEXT);
$.get( url, function( data ) {
  var url = data;
  var snd = new Audio(url);
  snd.play();
});

And here's a small video that shows the TTS in action



If somebody knows a Node module with a woman's voice I would love to hear that :)

The TTS demo is just a quick way to show the integrating of APEX with Node, but I've some more cool (and useful!) stuff coming up in future posts :)

Cluster Cache Coherency in EM12c

January is winding down and RMOUG Training Days 2015 is just around the corner, taking up much of my after work hours.

With that in mind, we are going to discuss a great performance console in EM12c Cloud Control: Cluster Cache Coherency.

Cluster Cache Coherency

Optimization for an Oracle Real Application Cluster (RAC) can be a daunting task for those that aren’t familiar with some of the most common issues to look for.  Although many aspects of a single instance must always be examined, knowing what is unique to RAC is important.  Be aware that Enterprise Manager is completely RAC-aware and will provide you with much of the data that you need to research, identify and resolve performance issues unique to clusters.

Accessing the Console

Once you’ve logged into a RAC instance, you can access the console from the performance menu:

[screenshot: ccc1]

The interface is broken down into multiple areas:

  • Global Cache Block Access Latency

  • Global Cache Block Transfer Rate

  • Block Access Statistics

You can view this data in Graphical or Tabular view, with the graphical view as the default.  I find the graphical view to be much more informative, as it shows more detail broken down by time, whereas the tabular view is a rolled-up snapshot of the same information.

Tabular View

By clicking on Tabular view, you are going to see (as explained earlier) the rolled-up estimates for the snapshot of time displayed in the graphical view:

[screenshot: ccc5]

Along with the three categories of data showing the averages for each, you can view the data per instance, which is extremely important for seeing whether transactions and resource usage are skewed across nodes.

Below the global cache info are Additional Links to the Interconnects and Top Activity performance pages, the two areas most likely to be accessed next for deeper research.

Graphical Charts

By changing back to a graphical chart view, the console extends to display each of these three categories, displaying global cache information by time.

[screenshot: ccc2]

Notice in each of these three Cluster Cache Coherency graphs how the average Current Blocks Received and CR Blocks Received times are measured in milliseconds.  GC (Global Cache) values are measured in blocks per second, and the physical and logical read comparison is shown in I/O per second (IOPS).

Breaking it Down

You can highlight any of these metrics and click on them on the right-hand side to show more detailed data for each:

[screenshot: ccc6]

[screenshot: ccc8]

[screenshot: ccc10]

You can also click on any of the metrics shown on the right, and break down exactly what objects are cached per node:

[screenshot: ccc15]

With the instance/node view, you can see what objects are cached per instance, switch to another node in RAC or order the data by different metrics:

  • GC Current Blocks Received
  • GC CR Blocks Received
  • GC Buffer Busy
  • Physical Reads
  • Logical Reads
  • Row Lock Waits

This view can offer a lot of data when trying to drill down and find out what object may be causing concurrency or block wait issues across different nodes.

Tile Chart View

You can adjust the Summary Chart to a Tile Chart view, too:

[screenshot: ccc12]

There are (again) a number of different ways to view this data between the nodes to see how resources are being allocated in the cluster:

[screenshot: ccc13]

Once you have investigated and found what object or interconnect issue is causing your waits, you can then use the Additional Links section to further research and resolve performance issues: Interconnects for cluster interconnect related issues and Top Activity for database issues.  But I would highly recommend checking into ASH Analytics or ADDM Comparison for enhanced features to get the most out of EM12c.  Both features are available in the Performance drop-down in the target console!

Oracle XFILES now on GitHub

The demonstration environment for Oracle XML DB called XFILES is now on GitHub. As stated…

Oracle Database 12c Patching: DBMS_QOPATCH, OPATCH_XML_INV, and datapatch

Background

Oracle Database 12c brings us many new features, including the long-needed ability to run OPatch and query the patch software installed in the Oracle Home programmatically through SQL using the new DBMS_QOPATCH package.

If you’re a DBA working in an environment where patching consistently among databases is important, this is a welcome enhancement. Sometimes one-off critical bug fix patches are important in the environment. Other times regular and consistent application of the quarterly PSUs is important to the organization (e.g. for regulatory compliance). For the sake of illustration I’m going to use the latter as my use case for the duration of this article, as I’ve regularly run into the situation where clients need PSUs applied properly, regularly, and consistently to their databases.

As a recap, many Oracle Patches including PSUs require a two-step process to implement properly. First the software change needs to be applied to the associated Oracle Home via the OPatch utility. Secondly it needs to be installed into all databases running in that Oracle Home (possibly updating internal packages and objects).

With Oracle 11g the latest patch is applied within the database using:

SQL> @catbundle.sql psu apply

Oracle 12c introduces a new utility called “datapatch” which replaces the need to run the 11g command shown above.

As with Oracle 11g, you first install the patch into the Oracle Home with all services and databases down. But with Oracle Database 12c, after restarting the database you simply run datapatch from the OPatch directory:

cd $ORACLE_HOME/OPatch
./datapatch -verbose

 

Given that patching is a two-step process, some common DBA questions are:

1) Has a patch been installed in a given database?
2) Has a patch been installed into the $ORACLE_HOME via Opatch but not one or more of the databases?
3) Is the patch in the database but not the $ORACLE_HOME?

You may be thinking that #3 isn’t very likely, but the reality is that it does happen. Circumstances that lead to a database being at a higher patch level than the Oracle Home include:

A) The database was created via RMAN duplicate from a patched home to an unpatched one.
B) A Data Guard standby switchover or failover where the primary Oracle Home was patched but the standby home was missed.
C) A 12c PDB that was unplugged from a patched CDB and plugged into an unpatched CDB.
D) The patch was uninstalled via OPatch from the Oracle Home without being uninstalled from the database (unlikely).

Since patching is a two-step process, historically answering questions such as these was also a two-step process, requiring the DBA to query the Oracle Home inventory and the database and manually correlate and reconcile the results. This is cumbersome, prone to human error, and not scalable due to the manual component.

But with Oracle 12c and the new DBMS_QOPATCH package, both can be queried programmatically and joined using SQL, allowing the DBA to easily report on questions like 1, 2, and 3 above via SQL queries, custom extensions to monitoring tools, reporting programs, etc. Or just to report accurately on patch deployments for security compliance reports, general database security assessments, or database health checks.

This is actually quite a significant breakthrough with regards to patch management!

 

How it works

The first thing to note is that in Oracle Database 12c there are some new Directory Objects installed by default:

SQL> select owner, directory_name, directory_path from dba_directories
  2  where directory_name like 'OPATCH%' order by 2;

OWNER        DIRECTORY_NAME       DIRECTORY_PATH
------------ -------------------- --------------------------------------------------------------------------------
SYS          OPATCH_INST_DIR      /u01/app/oracle/product/12.1.0/dbhome_1/OPatch
SYS          OPATCH_LOG_DIR       /u01/app/oracle/product/12.1.0/dbhome_1/QOpatch
SYS          OPATCH_SCRIPT_DIR    /u01/app/oracle/product/12.1.0/dbhome_1/QOpatch

SQL>

 

Two of those point to a new QOpatch directory which contains a batch file which runs the OPatch utility:

SQL> !ls /u01/app/oracle/product/12.1.0/dbhome_1/QOpatch
qopatch_log.log  qopiprep.bat

SQL> !cat /u01/app/oracle/product/12.1.0/dbhome_1/QOpatch/qopiprep.bat
#!/bin/sh
#
# $Header: rdbms/src/client/tools/qpinv/qopiprep.bat /unix/2 2013/04/04 20:59:27 tbhukya Exp $
#
# qopiprep.bat
#
# Copyright (c) 2012, 2013, Oracle and/or its affiliates. All rights reserved.
#
#    NAME
#      qopiprep.bat - bat file for preprocessor
#
#    DESCRIPTION
#      bat file for external table preprocessor.
#
#    NOTES
#      .
#
#    MODIFIED   (MM/DD/YY)
#    tbhukya     04/03/13 - Bug 16226172 : Forward merge of fix 16076845
#    tbhukya     09/23/12 - Creation
#
cd $ORACLE_HOME
PATH=/bin:/usr/bin
export PATH

# Option: "-retry 0" avoids retries in case of locked inventory.

# Option: "-invPtrLoc" is required for non-central-inventory
# locations. $OPATCH_PREP_LSINV_OPTS which may set by users
# in the environment to configure special OPatch options
# ("-jdk" is another good candidate that may require configuration!).

# Option: "-all" gives information on all Oracle Homes
# installed in the central inventory.  With that information, the
# patches of non-RDBMS homes could be fetched.


$ORACLE_HOME/OPatch/opatch lsinventory -xml  $ORACLE_HOME/QOpatch/xml_file.xml -retry 0 -invPtrLoc $ORACLE_HOME/oraInst.loc >> $ORACLE_HOME/QOpatch/stout.txt
`echo "UIJSVTBOEIZBEFFQBL" >> $ORACLE_HOME/QOpatch/xml_file.xml`
echo `cat $ORACLE_HOME/QOpatch/xml_file.xml`
rm $ORACLE_HOME/QOpatch/xml_file.xml
rm $ORACLE_HOME/QOpatch/stout.txt

 

The description text in this Oracle-provided file states that it’s a batch file used for external table preprocessing. The preprocessor clause was added in Oracle 11g Release 2 (and back-ported to Oracle 11gR1) to allow execution of an external script/file to process data before reading it in the database via an external table. A further explanation of this feature is beyond the scope of this article.

Consequently we know an external table is involved and it’s not too hard to find and understand that:

SQL> select owner, table_name from dba_external_tables where table_name like 'OPATCH%' order by 1,2;

OWNER        TABLE_NAME
------------ --------------------
SYS          OPATCH_XML_INV

SQL> select dbms_metadata.get_ddl('TABLE','OPATCH_XML_INV','SYS') from dual;

DBMS_METADATA.GET_DDL('TABLE','OPATCH_XML_INV','SYS')
--------------------------------------------------------------------------------

  CREATE TABLE "SYS"."OPATCH_XML_INV"
   (    "XML_INVENTORY" CLOB
   )
   ORGANIZATION EXTERNAL
    ( TYPE ORACLE_LOADER
      DEFAULT DIRECTORY "OPATCH_SCRIPT_DIR"
      ACCESS PARAMETERS
      ( RECORDS DELIMITED BY NEWLINE CHARACTERSET UTF8
      DISABLE_DIRECTORY_LINK_CHECK
      READSIZE 8388608
      preprocessor opatch_script_dir:'qopiprep.bat'
      BADFILE opatch_script_dir:'qopatch_bad.bad'
      LOGFILE opatch_log_dir:'qopatch_log.log'
      FIELDS TERMINATED BY 'UIJSVTBOEIZBEFFQBL'
      MISSING FIELD VALUES ARE NULL
      REJECT ROWS WITH ALL NULL FIELDS
      (
        xml_inventory    CHAR(100000000)
      )
        )
      LOCATION
       ( "OPATCH_SCRIPT_DIR":'qopiprep.bat'
       )
    )
   REJECT LIMIT UNLIMITED


SQL>

 

Hence we can understand the underlying mechanics of this new feature. It’s based on an external table using a directory object running an external batch file (through the external table preprocessor option) which then runs the actual OPatch utility.

And we can query that external table directly if we want, though we’ll get the raw XML back as a CLOB:

SQL> select * from OPATCH_XML_INV;

XML_INVENTORY
--------------------------------------------------------------------------------
<?xml version="1.0" encoding="UTF-8" standalone="yes"?> <InventoryInstance> <ora
cleHome> <UId>OracleHome-0243b34c-d8db-43f7-946b-589110de0ef9</UId> <targetTypeI
d>oracle_home</targetTypeId> <inventoryLocation>/u01/app/oraInventory</inventory
Location>
...
(output truncated)

 

Using DBMS_QOPATCH

Oracle gives us a wrapper API to this new external table (OPATCH_XML_INV) with DBMS_QOPATCH to make working with the output more manageable.

For example we can start with some basic information on the inventory using DBMS_QOPATCH.GET_OPATCH_INSTALL_INFO :

SQL> set heading off long 50000
SQL> select dbms_qopatch.get_opatch_install_info from dual;

<oracleHome><UId>OracleHome-a59380fa-5f8e-42df-b624-282f0189ec93</UId><targetTyp
eId>oracle_home</targetTypeId><inventoryLocation>/u01/app/oraInventory</inventor
yLocation><isShared>false</isShared><patchingModel>oneoff</patchingModel><path>/
u01/app/oracle/product/12.1.0/dbhome_1</path><targetTypeId>oracle_home</targetTy
peId></oracleHome>

SQL>

 

But as we can see, the output still isn’t easy on the eyes. Fortunately the XML stylesheet is also provided, via the DBMS_QOPATCH.GET_OPATCH_XSLT function. Hence combining the two gives much more readable output:

SQL> select xmltransform(dbms_qopatch.get_opatch_install_info, dbms_qopatch.get_opatch_xslt) from dual;


Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_1
Inventory         : /u01/app/oraInventory


SQL>

 

The DBMS_QOPATCH package provides many other useful functions and procedures. For example, to see if and when a specified patch was installed, or just to see the list of all of the patches installed by bug number:

SQL> select xmltransform(dbms_qopatch.is_patch_installed('19303936'), dbms_qopatch.get_opatch_xslt) from dual;


Patch Information:
         19303936:   applied on 2014-12-20T13:54:54-07:00


SQL> select xmltransform(dbms_qopatch.get_opatch_bugs, dbms_qopatch.get_opatch_xslt) from dual;


  Bugs fixed:
          19157754  18885870  19303936  19708632  19371175  18618122  19329654
19075256  19074147  19044962  19289642  19068610  18988834  19028800  19561643
19058490  19390567  18967382  19174942  19174521  19176223  19501299  19178851
18948177  18674047  19723336  19189525  19001390  19176326  19280225  19143550
18250893  19180770  19155797  19016730  19185876  18354830  19067244  18845653
18849537  18964978  19065556  19440586  19439759  19024808  18952989  18990693
19052488  19189317  19409212  19124589  19154375  19279273  19468347  19054077
19048007  19248799  19018206  18921743  14643995

SQL>

 

Or to run the equivalent of “opatch lsinventory” but from SQL instead of the OS:

SQL> select xmltransform(dbms_qopatch.get_opatch_lsinventory, dbms_qopatch.get_opatch_xslt) from dual;


Oracle Querayable Patch Interface 1.0
--------------------------------------------------------------------------------

Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_1
Inventory         : /u01/app/oraInventory
--------------------------------------------------------------------------------
Installed Top-level Products (1):
Oracle Database 12c                                    12.1.0.2.0
Installed Products ( 135)

Oracle Database 12c                                         12.1.0.2.0
Java Development Kit                                        1.6.0.75.0
...
(output truncated)

 

Putting it all together

The examples above are interesting, but to make this truly useful for patch application querying, monitoring, and reporting we need to join the output of DBMS_QOPATCH (which shows us what’s installed into the Oracle Home) with the new 12c view DBA_REGISTRY_SQLPATCH (which shows us what patches are applied to the database).

Prior to Oracle 12c to list the PSUs installed into the Oracle home we might use:

SQL> !$ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | egrep -i 'DATABASE PSU|DATABASE PATCH SET UPDATE'
19303936 19303936 Sat Dec 20 13:54:54 MST 2014 DATABASE PATCH SET UPDATE 12.1.0.2.1 (OCT2014)

SQL>

 

I obtained that result by shelling out from a sqlplus session running on the database server. But likely a centralized SQL based monitoring or reporting tool won’t have that ability. A client-server database connection won’t be able to run OPatch easily and hence the DBMS_QOPATCH API is required.

To extract the same information from the DBMS_QOPATCH package we need to parse the resulting XMLTYPE output:

SQL> with a as (select dbms_qopatch.get_opatch_lsinventory patch_output from dual)
  2  select x.*
  3    from a,
  4         xmltable('InventoryInstance/patches/*'
  5            passing a.patch_output
  6            columns
  7               patch_id number path 'patchID',
  8               patch_uid number path 'uniquePatchID',
  9               description varchar2(80) path 'patchDescription',
 10               applied_date varchar2(30) path 'appliedDate',
 11               sql_patch varchar2(8) path 'sqlPatch',
 12               rollbackable varchar2(8) path 'rollbackable'
 13         ) x;

  PATCH_ID  PATCH_UID
---------- ----------
DESCRIPTION
--------------------------------------------------------------------------------
APPLIED_DATE                   SQL_PATC ROLLBACK
------------------------------ -------- --------
  19303936   18116864
Database Patch Set Update : 12.1.0.2.1 (19303936)
2014-12-20T13:54:54-07:00      true     true


SQL>

 

This is a great result: Information on the PSUs installed into the Oracle Home through SQL and returned to us in an easy to read and easy to work with tabular form!

To make this information even more useful we need to join to the new DBA_REGISTRY_SQLPATCH view which replaces the DBA_REGISTRY_HISTORY view with respect to which patches have been applied in the database:

SQL> select patch_id, patch_uid, version, status, description
  2  from dba_registry_sqlpatch
  3  where bundle_series = 'PSU';

  PATCH_ID  PATCH_UID VERSION              STATUS
---------- ---------- -------------------- ---------------
DESCRIPTION
--------------------------------------------------------------------------------
  19303936   18116864 12.1.0.2             SUCCESS
Database Patch Set Update : 12.1.0.2.1 (19303936)


SQL>

 

Joining these two outputs allows us to easily write queries that report on the PSU patches deployed and ones only partially deployed (whether in the Oracle Home but not the database or vice versa):

SQL> --
SQL> --   List of PSUs applied to both the $OH and the DB
SQL> --
SQL> with a as (select dbms_qopatch.get_opatch_lsinventory patch_output from dual)
  2  select x.patch_id, x.patch_uid, x.rollbackable, s.status, x.description
  3    from a,
  4         xmltable('InventoryInstance/patches/*'
  5            passing a.patch_output
  6            columns
  7               patch_id number path 'patchID',
  8               patch_uid number path 'uniquePatchID',
  9               description varchar2(80) path 'patchDescription',
 10               rollbackable varchar2(8) path 'rollbackable'
 11         ) x,
 12         dba_registry_sqlpatch s
 13   where x.patch_id = s.patch_id
 14     and x.patch_uid = s.patch_uid
 15     and s.bundle_series = 'PSU';

  PATCH_ID  PATCH_UID ROLLBACK STATUS
---------- ---------- -------- ---------------
DESCRIPTION
--------------------------------------------------------------------------------
  19303936   18116864 true     SUCCESS
Database Patch Set Update : 12.1.0.2.1 (19303936)


SQL> --
SQL> --   PSUs installed into the $OH but not applied to the DB
SQL> --
SQL> with a as (select dbms_qopatch.get_opatch_lsinventory patch_output from dual)
  2  select x.patch_id, x.patch_uid, x.description
  3    from a,
  4         xmltable('InventoryInstance/patches/*'
  5            passing a.patch_output
  6            columns
  7               patch_id number path 'patchID',
  8               patch_uid number path 'uniquePatchID',
  9               description varchar2(80) path 'patchDescription'
 10         ) x
 11  minus
 12  select s.patch_id, s.patch_uid, s.description
 13    from dba_registry_sqlpatch s;

no rows selected

SQL> --
SQL> --   PSUs applied to the DB but not installed into the $OH
SQL> --
SQL> with a as (select dbms_qopatch.get_opatch_lsinventory patch_output from dual)
  2  select s.patch_id, s.patch_uid, s.description
  3    from dba_registry_sqlpatch s
  4  minus
  5  select x.patch_id, x.patch_uid, x.description
  6    from a,
  7         xmltable('InventoryInstance/patches/*'
  8            passing a.patch_output
  9            columns
 10               patch_id number path 'patchID',
 11               patch_uid number path 'uniquePatchID',
 12               description varchar2(80) path 'patchDescription'
 13         ) x;

no rows selected

SQL>

Simple queries such as those three are what can be incorporated into monitoring scripts and reports.

Some other DBMS_QOPATCH functions worth trying include:

set heading off long 50000 pages 9999 lines 180 trims on tab off
select xmltransform(dbms_qopatch.get_opatch_count, dbms_qopatch.get_opatch_xslt) from dual;
select xmltransform(dbms_qopatch.get_opatch_list, dbms_qopatch.get_opatch_xslt) from dual;
select xmltransform(dbms_qopatch.get_pending_activity, dbms_qopatch.get_opatch_xslt) from dual;

set serverout on
exec dbms_qopatch.get_sqlpatch_status;

 

Back-porting to Oracle Database 11g

An interesting question is: “can we back-port this approach to 11g?” and the answer is “absolutely”!

First of all, we need to create the directory objects, external table, and OS batch script. For simplicity I’m keeping the name and structure of each the same as in 12c, but of course you can adjust them if desired.

Using DBMS_METADATA.GET_DDL on an Oracle 12c database I get the necessary DDL to add the directory objects and external tables to an 11g database (note that I’ve updated the Oracle Home path but that’s the only change):

CREATE OR REPLACE DIRECTORY "OPATCH_LOG_DIR" AS '/u01/app/oracle/product/11.2.0/dbhome_1/QOpatch';
CREATE OR REPLACE DIRECTORY "OPATCH_SCRIPT_DIR" AS '/u01/app/oracle/product/11.2.0/dbhome_1/QOpatch';

CREATE TABLE "SYS"."OPATCH_XML_INV"
   (    "XML_INVENTORY" CLOB
   )
   ORGANIZATION EXTERNAL
    ( TYPE ORACLE_LOADER
      DEFAULT DIRECTORY "OPATCH_SCRIPT_DIR"
      ACCESS PARAMETERS
      ( RECORDS DELIMITED BY NEWLINE CHARACTERSET UTF8
      DISABLE_DIRECTORY_LINK_CHECK
      READSIZE 8388608
      preprocessor opatch_script_dir:'qopiprep.bat'
      BADFILE opatch_script_dir:'qopatch_bad.bad'
      LOGFILE opatch_log_dir:'qopatch_log.log'
      FIELDS TERMINATED BY 'UIJSVTBOEIZBEFFQBL'
      MISSING FIELD VALUES ARE NULL
      REJECT ROWS WITH ALL NULL FIELDS
      (
        xml_inventory    CHAR(100000000)
      )
        )
      LOCATION
       ( "OPATCH_SCRIPT_DIR":'qopiprep.bat'
       )
    )
   REJECT LIMIT UNLIMITED;

 

Next I need to copy the preprocessor batch script into the 11g home (which can be copied from another machine if necessary):

!cp -r /u01/app/oracle/product/12.1.0/dbhome_1/QOpatch /u01/app/oracle/product/11.2.0/dbhome_1/QOpatch

 

Now I don’t want to install the DBMS_QOPATCH package into a different version of the database. It’s Oracle-supplied “wrapped” code, meaning I can’t modify it, and it probably has 12c dependencies, meaning it would be invalid in Oracle 11g. Instead I performed a SQL trace on the DBMS_QOPATCH.GET_OPATCH_LSINVENTORY package executing on the 12c database, and from the resulting trace file I see that all it’s doing is:

INSERT INTO OPATCH_XINV_TAB(XML_INVENTORY) SELECT * FROM OPATCH_XML_INV
SELECT XMLTYPE(XML_INVENTORY) FROM OPATCH_XINV_TAB
DELETE FROM OPATCH_XINV_TAB

 

Not exactly complicated. Hence using the directory objects and external table created above I can execute the following on the 11g database:

SQL> select version from v$instance;

VERSION
-----------------
11.2.0.4.0

SQL> select XMLTYPE(XML_INVENTORY) patch_output from OPATCH_XML_INV;

PATCH_OUTPUT
--------------------------------------------------------------------------------
<?xml version="1.0" encoding="US-ASCII"?>
<INVENTORY isStandAlone="false">
 <HEADER>
 <ORACLE_HOME>/u01/app/oracle/product/11.2.0/dbhome_1</ORACLE_HOME>
 <CENTRAL_INVENTORY>/u01/app/oraInventory</CENTRAL_INVENTORY>
 <OPATCH_VERSION>11.2.0.3.6</OPATCH_VERSION>
 <OUI_VERSION>11.2.0.4.0</OUI_VERSION>
 <OUI_LOCATION>/u01/app/oracle/product/11.2.0/dbhome_1/oui</OUI_LOCATION>
 <LOG>/u01/app/oracle/product/11.2.0/dbhome_1/cfgtoollogs/opatch/opatch2014-1
2-23_15-08-04PM_1.log</LOG>
 </HEADER>
...
(output truncated)

 

So as we can see, it’s working perfectly back-ported to 11g.

However, again, the raw XML output isn’t really useful to me; I’m more interested in what PSUs have been installed. It’s easy to check that using opatch if I’m on the server:

SQL> !$ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | egrep -i 'DATABASE PSU|DATABASE PATCH SET UPDATE'
18522509   18522509  Sun Sep 21 20:58:00 MDT 2014   DATABASE PATCH SET UPDATE 11.2.0.4.3 (INCLUDES CPU
18031668   18031668  Sun Sep 21 20:57:48 MDT 2014   DATABASE PATCH SET UPDATE 11.2.0.4.2 (INCLUDES CPU
17478514   17478514  Sun Sep 21 20:57:38 MDT 2014   DATABASE PATCH SET UPDATE 11.2.0.4.1 (INCLUDES CPU

 

But again, what if I’m not on the server and want to do this programmatically through SQL, build it into a monitoring query or a security compliance report, etc.?

Again we can query the XMLTYPE data and get exactly what we want. Specifically:

SQL> with a as (select XMLTYPE(XML_INVENTORY) patch_output from OPATCH_XML_INV)
  2  select x.* from a, xmltable(
  3     'INVENTORY/HOST/HOME/ONEOFF_LIST/INTERIM_PATCH/oneoff_inventory/base_bugs/bug'
  4     passing a.patch_output columns
  5     --row_number for ordinality,
  6     bug_number number path '@number',
  7     bug_description varchar2(256) path '@description'
  8  ) x
  9  where regexp_like(bug_description, '(DATABASE PSU|DATABASE PATCH SET UPDATE)','i')
 10  order by bug_number;

BUG_NUMBER BUG_DESCRIPTION
---------- ------------------------------------------------------------
  17478514 DATABASE PATCH SET UPDATE 11.2.0.4.1 (INCLUDES CPUJAN2014)
  18031668 DATABASE PATCH SET UPDATE 11.2.0.4.2 (INCLUDES CPUAPR2014)
  18522509 DATABASE PATCH SET UPDATE 11.2.0.4.3 (INCLUDES CPUJUL2014)

SQL>

 

Voila! By copying the technique used by the Oracle 12c database we’ve now performed an OPatch query from SQL against an 11g database. Just took about 2 minutes of setup.

Again this can be super handy if PSU patching is a regular activity for you or if you have to produce security compliance reports.

Unfortunately, though, Oracle 11g records less data in DBA_REGISTRY_HISTORY than 12c does in DBA_REGISTRY_SQLPATCH:

SQL> select comments, action_time from dba_registry_history
  2  where bundle_series like '%PSU' order by action_time;

COMMENTS                       ACTION_TIME
------------------------------ ------------------------------
PSU 11.2.0.4.3                 23-SEP-14 09.21.34.702876 AM

SQL>

 

Hence joining the two is more challenging, but certainly possible. The data (in this case the string “11.2.0.4.3”) is present both in DBA_REGISTRY_HISTORY and in the XQuery output from my back-ported OPATCH_XML_INV external table.
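As a sketch of one way to do that join (my own hack, not anything Oracle-supplied: extract the five-part PSU version string from each side and match on it, assuming that string is a reliable join key):

with oh as (
  select x.bug_description,
         regexp_substr(x.bug_description, '\d+(\.\d+){4}') psu_version
    from (select XMLTYPE(XML_INVENTORY) patch_output from OPATCH_XML_INV) a,
         xmltable('INVENTORY/HOST/HOME/ONEOFF_LIST/INTERIM_PATCH/oneoff_inventory/base_bugs/bug'
                  passing a.patch_output
                  columns bug_description varchar2(256) path '@description') x
   where regexp_like(x.bug_description, 'DATABASE (PSU|PATCH SET UPDATE)', 'i')
),
db as (
  select comments,
         regexp_substr(comments, '\d+(\.\d+){4}') psu_version
    from dba_registry_history
   where bundle_series like '%PSU'
)
select oh.psu_version,
       case when db.psu_version is null then 'OH only' else 'OH and DB' end status
  from oh left outer join db on (oh.psu_version = db.psu_version);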

 

Conclusion

The new DBMS_QOPATCH API is a fantastic improvement in Oracle Database 12c when it comes to patch management (querying/reporting/monitoring). And by understanding the concepts and components added to 12c, the same approach can be manually back-ported to 11g. Both using DBMS_QOPATCH and even the back-port to 11g are actually pretty easy. In fact the hardest part for me (not being very familiar with XQuery) was coming up with the query to parse the XMLTYPE data.

 

References

http://docs.oracle.com/database/121/ARPLS/d_qopatch.htm

Oracle Recommended Patches — Oracle Database (Doc ID 756671.1)

Quick Reference to Patch Numbers for Database PSU, SPU(CPU), Bundle Patches and Patchsets (Doc ID 1454618.1)

How do you apply a Patchset,PSU or CPU in a Data Guard Physical Standby configuration (Doc ID 278641.1)