Oracle Weblogic SAML2 Authorization

Grammatically, the title doesn't make much sense, but those were the keywords I used to type a couple of years ago when I started working on integrating our JEE applications into our SSO system.

Who enjoys the feather display of a male peacock?

Female peacocks seem to get a kick out of them; the displays appear to play a role in mating rituals.

Who else? Why, humans, of course!

We know that humans greatly appreciate those displays, because of the aaahing and ooohing that goes on when we see them. We like those colors. We like the iridescence. We like the shapes and patterns.

If one were to speculate on why a female peacock gets all worked up about a particular male's feather display, we would inevitably hear about instinctual responses, hard-wiring, genetic determinism, and so on.

And if one were to speculate on why a human goes into raptures, we would then experience a major shift in explanation. 

Time to talk about anything but a physiological, hard-wired sort of response.

No, for humans, the attraction has to do with our big brains, our ability to create and appreciate "art". And that is most definitely not something other animals do, right?

Oh, sure, right. Like these instinctive, hard-wired bowerbird mating nests:

That clearly has nothing to do with an aesthetic sense or "art". Just instinct.

Why? Because we humans say so. We just assert this "fact."

Most convenient, eh?

Database Provisioning in Minutes: Using Enterprise Manager 12c DBaaS Snap Clone and EMC Storage

A little while back I wrote a post on the official Oracle Enterprise Manager blog about using Enterprise Manager 12c Snap Clone with EMC storage. Last week I also presented on exactly the same topic at the RMOUG Training Days in Denver, Colorado, and there was quite a bit of interest in the subject.

Given that level of interest, I thought I’d let you know about another opportunity to hear all about it even if you weren’t able to attend the RMOUG conference. You may know that product management does a lot of webcasts around different areas of functionality with EM12c. Well, the very next one is on Snap Clone with EMC storage, in this case co-presented by Oracle and EMC. You can click on the “Register” button on this link to register your attendance and get all the details such as dial-in numbers and so on. It’s on this Wednesday, February 25 at 8:00 A.M. PT (11:00 A.M. ET) – those are US time zones, BTW. Take advantage of this opportunity, and while you’re looking at the webcast page, check out some of the other fantastic opportunities that are coming up to hear about some of the really cool things you can do with EM12c!

APEX 5.0: the way to use Theme Roller

Once you have your new application created using the Universal Theme it's time to customise your theme with Theme Roller.

Run your application and click the Theme Roller link in the APEX Developer Toolbar:

Theme Roller will open. I won't go into every section, but I want to highlight the most important sections of Theme Roller in this post:
  1. Style: there are two styles that come with APEX 5.0: Blue and Gray. You can start from one of those and see how your application changes color. It sets predefined colors for the different parts of the templates.

  2. Color Wheel: when you want to quickly change your colors, an easy way to see different options is to use the Color Wheel. You have two modes: monochrome (2 points) and dual color (3 points - see screenshot). When you change one point, it maps the other point to a complementary color. Next you can move the third point to play more with those colors.

  3. Global Colors: if the Color Wheel is not specific enough for what you need, you can start by customising the Global Colors. Those are the main colors of the Universal Theme and are used to drive the specific components. You can still customise the different components, e.g. the header, by clicking further down in the list (see next screenshot).

  4. Containers etc. allow you to define the specific components. A check icon means it's the standard color coming with the selected style, an "x" means the color was changed, and an "!" means the contrast is probably not great.
(Screenshots: the original style, and the same page after changing colors.)

This is just awesome... but what if you don't like the changes you made?

Luckily you can reset your entire style, or refresh a specific section by clicking its refresh icon. There are also undo and redo buttons. But that is not all... for power users: when you hold "ALT" while hovering over a color, you can reset just that color! (Only that color will get a refresh icon in it, and clicking it will reset it.)

Note that all changes you're making are stored locally on your computer in your browser's cache (HTML5 local storage), so you don't affect other users by playing with the different colors.

Finally, when you are done editing your color scheme, you can hit the Save As button to save all colors to a new style. When you close Theme Roller, the style will go back to how it was.
The final step, to apply the new style so everybody sees that version, is to go to User Interface Details (Shared Components) and set the style to the new one.

Note that this blog post is based on APEX 5.0 EA3; in the final version of APEX 5.0 (or 5.1) you might be able to apply the new style from Theme Roller directly.

Want to know more about Theme Roller and the Universal Theme? We're doing an APEX 5.0 UI Training on May 12th in the Netherlands.

RMOUG Training Days 2015

Yet again, it was a fantastic time at the RMOUG Training Days 2015 conference, as it has been every other year I have attended it. That is in no small measure due to the incredible work of the organizing committee, and in particular the Training Days Director, my colleague Kellyn Pot’Vin-Gorman. For me personally, the travel to get to Denver, Colorado was somewhat more daunting than in previous years (see my earlier post for why!), but once I got there it all went relatively smoothly. I flew in on the Sunday before the conference started to allow me to get over any problems from the trip, but as it turned out everything was just fine.

I had three abstracts accepted for the conference:

The first two were two hour sessions on the first day of the conference (Tuesday), which was dedicated to deep dive sessions. I had originally planned these to be short presentations by me, followed by hands on labs to set up each of the different subjects. That fell through a few weeks before the conference when the hardware I had planned to use was taken away for upgrade, so instead I put together longer presentations with demos. As it turned out, that was a Good Thing (TM) as I had way more attendees (particularly at the PDBaaS presentation) than I would have had hardware for anyway! The third session was a much more traditional presentation – a bunch of slides followed by a shorter demo, and again it was very well attended. Lots of interest from attendees for all three, so from that perspective I was very happy.

Unfortunately I did have some technical issues with my laptop. I had some problems getting the material from my laptop onto the presentation screen, both from PowerPoint and the demo itself, so I’m going to have to spend some time sorting that out after I get home. :(

Having said that, the conference was still a blast. As I always do, I thoroughly enjoyed the interactions with attendees, but this time I also had the added enjoyment of interacting with a bunch of my colleagues from the Strategic Customer Program in the Enterprise Manager product management team – Kellyn Pot’Vin-Gorman, Courtney Llamas, Andrew Bulloch, and Werner de Gruyter. It’s when I interact with people like these guys that I realize just how much I still need to learn about Enterprise Manager as a product, particularly around the high end infrastructure architectures that the Strategic Customer Program folks normally work with.

Once the conference was finished, it was time to head up into the mountains at Breckenridge for some very relaxing R&R time. When I finish here it will be back to Intergalactic Headquarters near San Francisco for a week before I head back home again to be with family again. All in all, a fantastic conference as always, so thanks Kellyn and all the organizing committee!

Automatic: Nice, but Not Necessary

Editor’s note: Here’s the first post from one of our newish team members, Ben. Ben is a usability engineer with a PhD in Cognitive Psychology, and by his own account, he’s also a below average driver. Those two factoids are not necessarily related; I just don’t know what his likes and dislikes are so I’m spit-balling.

Ben applied his research chops to himself and his driving using Automatic (@automatic), a doodad that measures your driving and claims to make you a better driver. So, right up his alley.

Aside from the pure research, I’m interested in this doodad as yet another data collector for the quantified self. As we generate mounds of data through sensors, we should be able to generate personal annual reports, a la Nicholas Felton, that have recommended actions and tangible benefits.

Better living through math.

Anyway, enjoy Ben’s review.

When I first heard about Automatic (@automatic), I was quite excited—some cool new technology that will help me become a better driver. The truth is, I’m actually not a big fan of driving, which is partly because I know I’m not as good a driver as I could be, so Automatic was a glimmer of hope that would lead me on the way to improving my skills.

Though I will eagerly adopt automated cars once they’re out and safe, the next best thing is to get better so I no longer mildly dread driving, especially when I’m conveying others. One issue with trying to improve is knowing what you’re doing wrong and when you’re doing it, so with that in mind (and for enterprise research purposes), I tried out Automatic.

Automatic is an app for your phone plus a gadget (called the Link) that plugs into your car’s diagnostics port; together they give you feedback on your driving and provide various ways to look at your trip data.

Automatic Link

The diagnostics port the Link plugs into is the same one that your mechanic uses to see what might be wrong when your check engine light is ominously glaring on your dashboard. Most cars built after 1996 have one (the OBD-II port), but not all data is available for all cars. Mine is a 2004 Honda Civic, which doesn’t put out gas tank level data, meaning that MPG calculations may not be as accurate as they could be. But it still calculates MPG, and it seems to be reasonably accurate. I don’t, however, get the benefit of “time to fuel up” notifications, though I do wonder how much of a difference those notifications make.

The Link has its own accelerometer, so, combined with the data from the port and paired with your phone via Bluetooth, it can tell you about your acceleration, distance driven, speed, and location. It can also tell you what your “Check Engine” light means, and send out messages in the event of a crash.

It gives three points of driving feedback: if you accelerate too quickly, brake too hard, or go over 70 mph. Each driving sin is relayed to you with its own characteristic tones emitted from the Link. It’s a delightful PC speaker, taking you way back to the halcyon DOS days (for those of you who were actually alive at the time). It also lets you know when it links up with your phone, and when it doesn’t successfully connect it outputs a sound much like you just did something regrettable in a mid-’80s Nintendo game.

App screenshot

One of the main motivators for the driving feedback is to save gas—though you can change the top speed alert if you’d like. From their calculations, Automatic says 70 mph is about as fast as you want to go, given the gas-spent/time-it-will-take-to-get-there tradeoff.

Automatic web dashboard

Another cool feature is that it integrates with IFTTT (@ifttt), so you can set it up to do things like: when you get home, turn the lights on (if you have smart lights); or when you leave work, send a text to your spouse; or any other number of things—useful or not!

Is It Worth It?

The big question is, is it worth $99? It’s got a great interface, a sleek little device, and a good number of features, but for me, it hasn’t been that valuable (yet). For those with the check engine light coming up, it could conceivably save a lot of money if you can prevent unnecessary service on your car. Fortunately, my Civic has never shown me the light (knock on wood), though I’ll probably be glad I have something like Automatic when it does.

I had high hopes for the driver feedback, until I saw that it’s actually pretty limited. For the most part, the quick acceleration and braking are things I already avoided, and when it told me I did them, I usually had already realized it. (Or it was a situation out of my control that called for it.) A few times it beeped at me for accelerating where it didn’t feel all that fast, but perhaps it was.

I was hoping the feedback would be more nuanced and could allow me to improve further. The alerts would be great for new drivers, but don’t offer a whole lot of value to more experienced drivers—even those of us who would consider ourselves below average in driving skill (putting me in an elite group of 7% of Americans).

The Enterprise Angle

Whether it’s Automatic, or what looks like might be a more promising platform, Mojio (@getmojio), there are a few potentially compelling business reasons to check out car data-port devices.

One of the more obvious ones is to track mileage for work purposes—it gives you nice readouts of all your trips, and allows you to easily keep records. But that’s just making it a little easier for an employee to do their expense reports.

The most intriguing possibility (for me) is for businesses that manage fleets of regularly driven vehicles. An Automatic-like device could conceivably track the efficiency of cars/trucks and drivers, and let a business know if a driver needs better training, or if a vehicle is underperforming or might have some other issues. This could be done through real-time fuel efficiency, or tracking driving behavior, like what Automatic already does: hard braking and rapid acceleration.
If a truck seems to be getting significantly less MPG than it should, the business can see whether it needs maintenance or whether the driver is driving too aggressively. Though trucks probably get regular maintenance, this kind of data may allow for preventive care that could translate to savings.

This kind of tracking could also be interesting for driver training, examining the most efficient or effective drivers and adopting an “Identify, Codify, Modify” approach.


I’d say this technology has some interesting possibilities, but may not be all that useful yet for most people. It’s fun to have a bunch of data, and to get some gentle reminders on driving practices, but the driver improvement angle from Automatic hasn’t left me feeling like I’m a better driver. It really seems that this kind of technology (though not necessarily Automatic, per se) lends itself more to fleet management, improving things at a larger scale.

Stay tuned for a review of Mojio, which is similar to Automatic, but features a cellular connection and a development platform, and hence more possibilities.

255 columns

Here’s a quick note, written at some strange time in (my) morning in Hong Kong airport as I wait for my next flight – all spelling, grammar, and factual errors will be attributed to jet-lag or something.

And a happy new year to my Chinese readers.

You all know that having more than 255 columns in a table is a Bad Thing ™ – and surprisingly you don’t even have to get to 255 to hit the first bad thing about wide tables. If you’ve ever wondered what sorts of problems you can have, here are a few:

  • If you’re still running 10g and gather stats on a table with more than roughly 165 columns then the query Oracle uses to collect the stats will only handle about 165 of them at a time; so you end up doing multiple (possibly sampled) tablescans to gather the stats. The reason why I can’t give you an exact figure for the number of columns is that it depends on the type and nullity of the columns – Oracle knows that some column types are fixed length (e.g. date types, char() types) and if any columns are declared not null then Oracle doesn’t have to worry about counting nulls – so for some of the table columns Oracle will be able to eliminate one or two of the related columns it normally includes in the stats-gathering SQL statement – which means it can gather stats on a few more table columns.  The 165-ish limit doesn’t apply in 11g – though I haven’t checked to see if there’s a larger limit before the same thing happens.
  • If you have more than 255 columns in a row Oracle will split it into multiple row pieces of 255 columns each plus one row piece for “the rest”; but the split counts from the end, so if you have a table with 256 columns the first row-piece has one column and the second row-piece has 255 columns. This is bad news for all sorts of operations because Oracle will have to expend extra CPU chasing the row pieces to make use of any column not in the first row piece. The optimists among you might have expected “the rest” to be in the last row piece. If you want to be reminded how bad row-chaining can get for wide tables, just have a look at an earlier blog note of mine (starting at this comment).
  • A particularly nasty side effect of the row split comes with direct path tablescans – and that’s what Oracle does automatically when the table is large. In many cases all the row pieces for a row will be in the same block; but they might not be, and if a continuation row-piece is in a different block Oracle will do a “db file sequential read” to read that block into the buffer cache. As an indication of how badly this can affect performance, the results I got at a recent client site showed “select count(col1) from wide_table” taking 10 minutes while “select count(column40) from wide_table” took 22 minutes because roughly one row in a hundred required a single block read to follow the chain. An important side effect of the split point is that you really need to put the columns you’re going to index near the start of the table to minimise the risk of this row chaining overhead when you create or rebuild an index.
  • On top of everything else, of course, it takes a surprisingly large amount of extra CPU to load a large table if the rows are chained. Another client test reported 140 CPU seconds to load 5M rows of 256 columns, but only 20 CPU seconds to load 255.

If you are going to have tables with more than 255 columns, think very carefully about column order – if you can get all the columns that are almost always null at the end of the row you may get lucky and find that you never need to create a secondary row piece. A recent client had about 290 columns in one table of 16M rows, and 150 columns were null for all 16M rows – unfortunately they had a mandatory “date_inserted” column at the end of the row, but with a little column re-arrangement they eliminated row chaining and saved (more than) 150 bytes of storage per row. Of course, if they have to add and back-fill a non-null column to the table they’re going to have to rebuild the table to insert the column “in the middle”, otherwise all new data will be chained and wasting 150 bytes per row, and any old data that gets updated will suffer a row migration/chain catastrophe.
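If you want to see the chaining for yourself, here is a rough sketch (the table name wide_t and columns c1..c256 are invented for illustration; note that ANALYZE, unlike dbms_stats, populates chain_cnt):

```sql
-- Build a 256-column table without typing 256 column names by hand.
declare
    v_sql varchar2(32767) := 'create table wide_t (c1 number';
begin
    for i in 2 .. 256 loop
        v_sql := v_sql || ', c' || i || ' number';
    end loop;
    v_sql := v_sql || ')';
    execute immediate v_sql;
end;
/

-- Populate the first and last columns so every row carries 256 column
-- entries and must be stored as two row pieces (1 + 255, counted from the end).
insert into wide_t (c1, c256)
select rownum, rownum from dual connect by level <= 1000;
commit;

-- ANALYZE (not dbms_stats) populates chain_cnt, and it counts
-- intra-block chaining as well.
analyze table wide_t compute statistics;
select chain_cnt from user_tables where table_name = 'WIDE_T';
```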

Robert G. Freeman on Oracle 2015-02-18 23:14:41

This is somewhat of a personal post, but it's also related to Oracle. I've been involved in some discussions about 12c Multitenant and the fact that Oracle will be doing away with the 12c Non-Multitenant architecture someday.

These discussions have made me a bit introspective, as have some related things.

So, if you will allow me a moment to write a few personal thoughts - I'll then get back to Oracle stuff. I also wonder if you might relate to some of these thoughts.

First I want to be clear - I harbor no ill will towards anyone. I think it's possible that some have misunderstood me or my motivation, and that is life. I'm also not looking for sympathy as much as mutual understanding. I fully understand that some won't care, and they can just move along. However, it might be that you experience some of the feelings that I do, and I hope that this post and the few that will come after it, will be helpful.

I also understand that there will be those that sigh and say, he's writing a long thing again.... If you are one of those, you really might want to read what I have to say.

My Life - The "Good Old Days"
There are days when I look back on my life as a DBA some 20 years ago and for a second I miss it. Back then, all I did was sit in my cube, and dedicate myself to learning all about how Oracle worked. I poked, prodded, read and asked questions of mentors.

The internet was still an infant, but I still managed to learn a lot online, it seems. As I recall, CompuServe and AOL had forums that I learned a lot in.

I also think I participated in a few news groups and so on. The point is that I got to be me, to a degree, without the vagaries of social interaction - which I'm not great at, and which tends to cause me a lot of discomfort.

Over time, for a number of reasons, I ignored my social discomfort and put myself "out there". I wrote books, started presenting, participated in discussions, and offered opinions.

I enjoy that - I enjoy writing and presenting. I enjoy discussing things and learning from others. I love teaching and sharing thoughts on a number of topics. I have made a number of really good friends and acquaintances over those years. It has been good for me professionally.

I enjoy all of this. However, there has been a personal toll, and I feel it a bit today. Before I get into that, let me backtrack a little to last night.

Last Night

We had a friend visiting us last night. He and my wife have known each other for a long time and they both are in the medical field.

Last night they were taking an online version of the Myers-Briggs personality test, and they had me take it. My results were no real surprise: I am an INTP type. I've taken these types of tests before, so I knew the general results.

INTP equates to Introverted, iNtuitive, Thinking, and Perceiving.

The assessment says about 3% of people are INTP types. Anybody who knows me is probably not shocked by the fact that I'm an INTP.

After some interactions that I had today, I was feeling a lot of different things. Then I started to mull over the fact that I'm an INTP type. I found myself getting really frustrated and wondering if, in the end, the cost of my public life was really worth what it does to me emotionally.

While the typical INTP type tends to prefer thinking over feeling - the fact is that emotions actually run very deep in us.

How did I feel today? After a lot of thought, I feel a lot of things - but for the first time I think I feel misunderstood and discriminated against. Now, that might sound odd, but I'll circle around to that later in another post and explain myself.

An Example
First - this might seem like a disjointed roundabout, but stay with me; it leads to a point.

I mentioned the interactions today. It has been part of a debate over Oracle's decision to eventually do away with the non-CDB model. This debate started with my reaction to a blog posting done by someone who chooses to blog anonymously.

Now, I understand wanting to be anonymous. My problem with being anonymous is that it impacts accountability. Also, by being anonymous it's easier to hide your true motivations. It seems that being anonymous is the big thing to be today. It's a wonderful shield.

On the other hand, there is nothing anonymous about this blog. You know who is writing it. You probably know that I work for Oracle - or assume I do based on the URL associated with this blog. If you know me, then you probably know various things about me, which add additional context to what I say in my blog.

Knowing that I work for Oracle, it's easy (though inaccurate) to ascribe certain attributes to me. Maybe, because I write under the banner of Oracle, you think I lack the ability to think independently. Maybe you think that I am here to sell Oracle licenses or other things. Maybe you think the content of this blog is regulated in some way, or that I hold back on my opinions. None of these would be true. These are all faulty assumptions, and frankly, they are disrespectful ones at that.

About INTP's

INTPs want to be precise in our meanings and descriptions. This can prove very annoying to those who tend to be less precise. I am an INTP with ADD to boot. INTPs try to be concise, but I sometimes tend to wander a bit getting there. This is very much a part of what an INTP is - we look for different evidence and different ways of looking at things. Sometimes, to those outside of us, it might look like meandering or straying off course. In reality, we are often just exploring the fringes, because sometimes the fringes offer great information and detail.

If you have ever been a co-worker or manager of mine, you are probably smiling and nodding your head. My ability to write multi-page emails is legendary. That some don't understand why is often also apparent. I may be INTP on steroids - I don't know. I've been lucky to work with people who appreciated my INTP nature, and there have been those who have not really gotten it.

I've tried, of late, to be more aware of that trait and dial things down. I find it a painful and time-consuming exercise. It is a conscious exercise, not unlike trying to regulate your breathing when stressed, or keeping calm when you're flying and find yourself in a 1000 FPM downdraft, while IFR and with about 3000 feet between you and the mountains below.

Where am I going with this?

Enough for this post... I want to dive further into being an INTP in the next post. As you read it and the one or two to follow, I'd like you to ask yourself: how do you respond to people like me, and is it possible that your response to INTPs is a form of discrimination? Finally, are you possibly missing something by discounting how INTPs think and work?

And be proud of me... this entry could have been a lot longer!

Robert G. Freeman on Oracle 2015-02-18 22:28:34

In a Multitenant database, you can create user accounts within PDBs just as you normally would. For example, this command:

SQL> show con_name

CON_NAME
------------------------------
TESTPDB

SQL> create user dbargf identified by robert;

will create a user account called dbargf within the container named TESTPDB. The user will not be created in any other container. This is known as a local user account within the CDB architecture. It is local to a single PDB.

This isolation is in alignment with the notion that each PDB is isolated from the parent CDB.

If you had a PDB called PDBTWO, then you could create a different dbargf account in that PDB. That account would be completely separate from the TESTPDB local user account created earlier. The upshot of all of this is that, in general, the namespace for a user account is at the level of the PDB. However, there is an exception.
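To make the isolation concrete, here is a minimal sketch (assuming PDBs named TESTPDB and PDBTWO as above, and a session with sufficient privileges to switch containers):

```sql
-- Two local users with the same name, in two different PDBs.
alter session set container = testpdb;
create user dbargf identified by robert;   -- local to TESTPDB

alter session set container = pdbtwo;
create user dbargf identified by robert;   -- a completely separate user, local to PDBTWO
```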

In the root container of a CDB you cannot create normal user accounts, as seen in this example:

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> create user dbargf identified by robert;
create user dbargf identified by robert
*
ERROR at line 1:
ORA-65096: invalid common user or role name

This is a problem, because we will probably need to create separate accounts to administer the CDB at some level (for backups, for overall management), or even accounts that work across PDBs but with restricted privileges.

For example, let's say I wanted to have a DBA account called dbargf that would be able to create tablespaces in any PDB. I would create a new kind of user account called a common account.

The common account naming format is similar to a normal account name, except that it starts with a special set of characters: C## by default. To create a common user account called dbargf, we log into the root container and use the create user command, as seen here:

SQL> create user c##dbargf identified by robert;

Likewise you use the drop user command to remove a common user account.

When a common user account is created, the account is created in all of the open PDBs of the CDB. At the same time, the account is not granted any privileges.

If a PDB was not open when the common user account was created, the account will be created when the PDB is opened. When a PDB is plugged in, the common user account will be added to that PDB.

As I mentioned before, in a non-CDB environment and in PDBs, a newly created user account has no privileges; the same is true of a common user account.

For example, if we try to log into the new c##dbargf account we get a familiar error:

ORA-01045: user C##DBARGF lacks CREATE SESSION privilege; logon denied

The beauty of a common user account or role is that its privileges can span PDBs. For example, a common user account can have DBA privileges in two PDBs in a CDB, but not in the remaining PDBs.

You grant privileges to common users as you would any other user - through the grant command as seen here:

SQL> connect / as sysdba

SQL> grant create session to c##dbargf;
Grant succeeded.

SQL> connect c##dbargf/robert

When the grant is issued from the ROOT container, the default scope of that grant is just to the ROOT container. As a result of this grant then, we can connect to the root container.

C:\app\Robert\product\12.1.0.2\dbhome_2\NETWORK\ADMIN>sqlplus c##rgfdba/robert
SQL*Plus: Release Production on Wed Feb 18 14:15:24 2015
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Last Successful login time: Wed Feb 18 2015 14:15:20 -08:00

Connected to:
Oracle Database 12c Enterprise Edition Release - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> exit

However if we try to connect to a PDB, we will get an error:

SQL> alter session set container=newpdb;
ORA-01031: insufficient privileges

It is in this way that the isolation of PDB’s is maintained (by default).

The default scope of all grants is limited to the container (or PDB) in which they are granted. So, if you grant a privilege in the NEWPDB PDB, that grant only has effect in the NEWPDB PDB.
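As a sketch of that scoping rule (using the NEWPDB PDB from above; the create table privilege is just an arbitrary example):

```sql
-- Issued in the root: by default, effective in CDB$ROOT only.
connect / as sysdba
grant create table to c##dbargf;

-- Issued in the root with container=all: effective in every container.
grant create table to c##dbargf container=all;

-- Issued inside a PDB: effective in that PDB only.
alter session set container = newpdb;
grant create table to c##dbargf;
```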

As an example of this isolation, let's see what happens when we grant the create user privilege to the c##dbargf user and try to create a new common user with c##dbargf afterwards.

First, we grant the create user privilege - this is pretty much what we do today:

grant create user to c##dbargf;

However, when c##dbargf tries to create a user, we still get an error:

create user c##dbanew identified by dbanew
ERROR at line 1:
ORA-01031: insufficient privileges

This serves to underscore that grants to a common user (any user, for that matter) by default only apply to the container in which the grant occurs. So, the create user grant in this case only applies to the ROOT container.

The problem here is that when you create a common user, Oracle tries to create that user in all PDB's. Since c##dbargf does not have create user privileges in all of the PDB's, the command fails when Oracle recurses through the PDB's and tries to create the c##dbargf common user.

So, how do we deal with this? How do we grant the create user privilege to the c##dbargf account so that it's able to create other common users?

What we do is use the new container clause, which is part of the grant command. In this example, we use the container=all parameter to indicate that the grant should apply across all containers. Here is an example:

SQL> connect / as sysdba

SQL> grant create user to c##dbargf container=all;
Grant succeeded.

Now, reconnect as c##dbargf and try that create user command again:

SQL> create user c##dbanew identified by dbanew;
User created.

Note that we had to log in as SYSDBA to issue the grant. This is because, at this point, the SYSDBA-privileged account was the only account with the ability to grant the create user privilege across all PDBs.

We could give the c##dbargf account the ability to grant the create user privilege to other accounts by including "with admin option" in the grant.
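If that were the goal, the grant might look like this (a sketch; the password is a placeholder, and combining WITH ADMIN OPTION with CONTAINER=ALL lets the grantee pass the privilege on commonly):

```sql
SQL> connect / as sysdba
SQL> grant create user to c##dbargf container=all with admin option;

-- c##dbargf can now grant CREATE USER to other common users itself:
SQL> connect c##dbargf/password
SQL> grant create user to c##dbanew container=all;
```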

So, it's clear that a newly created common user is like any other user: it essentially has no privileges to begin with, anywhere. Next time I'll address common users in PDBs, and I'll also talk a bit more about grants in the PDB world.

Robert G. Freeman on Oracle 2015-02-18 01:33:49

I have received a few comments related to my posts on the deprecation of the non-CDB architecture. I must admit, I find some reactions to be a bit confusing.

First, let me be clear - I am NOT an official voice for Oracle. This is MY blog, MY opinions and I am NOT directed by Oracle in any way with respect to what I do (or do not) write here.

So, let's quash the notion that I'm acting as some kind of Oracle shill when I suggest that there is a major overreaction to this issue.

Let's start with a few snips from comments made in the previous posts on this subject:
"People who are not on the bleeding edge do not appreciate being forced into a buggy new code path. This is not FUD, this is experience."

"We do NOT want working code and system procedures to be replaced with something that might work in the future maybe kinda sorta if you get around to it."

"I think once Oracle is seen to be "eating it's own dogfood" more people will lose their fear of the unknown...."
Based on these quotes, I would think that somehow Oracle had announced that it was ripping out the non-CDB code now, or in the very near future. That's simply not the case.

The non-CDB code isn't going to be ripped out in 12.2 either. Beyond that, I don't know, but I can't see it being ripped out for quite some time.

Why are people having knee-jerk reactions to this? Why are assumptions being made, completely without foundation? I am also often confused by the fact that people look at their enterprise and its unique issues - and assume that everyone else faces the same issues.

I am confused by the arguments like this one:
"We don't want a moving target, we don't want make-work, we want our data to be secure and reachable in a timely manner."

Yes, and THOSE (security and the data itself) are moving targets in and of themselves, and they NECESSITATE a moving, growing and changing database product.
Security risks are constantly changing and increasing - hacking attempts are more frequent and more complex. The costs of intrusions are growing dramatically. Are you suggesting that responses to such risks - such as Data Vault, or encryption at rest - should not be added to the database product so it can remain static and, hopefully, bug free? Is the urgency to avoid bugs so critical that we weigh it higher than the development of responses to these risks?

With respect to data being reachable in a timely manner: this too is a moving target. Ten years ago, the sizes of the databases we deal with now were nothing more than speculation. The unique types of data we need to deal with have increased, as have the timelines to process that data.

If Oracle had decided to remain even semi-static ten years ago - do you suppose that CERN would be able to process the vast amounts of data that it does with Oracle? Do you suppose that reports that went from running in an hour to time periods of days - because of the incredible increases in data volume - would be something that customers would accept as long as the code base remained stable? It's the constant modification of the optimizer that provides the advanced abilities of the Oracle database.

The biggest of the moving targets are not in the database, but rather in the business that the database must serve. Just because one enterprise does not have a need for those solutions, or cannot see their benefit, does not mean that there is not a significant set of customers that DO see the benefit in those solutions.

Then there is this statement (sorry Joel - I don't mean to seem like I'm picking on you!):
"It's by no means new - the issue that immediately comes to mind is the 7.2 to 7.3 upgrade on VMS. People screamed, Oracle said 'tough.'"
Change is always difficult for people. I agree that change can present serious challenges to the enterprise - and we can focus on those challenges and see the cup as half empty. However - change can be seen as quite the positive too. We make the choice which way we look at it.

This is an opportunity to refine how you do things in the Enterprise. It's an opportunity to do things better, more efficiently and build a smarter and more automated enterprise.

Or, you can moan and complain along the whole path. Change things begrudgingly and ignore the fact that opportunity is staring you in the face. I would argue that if you think you are too busy to deal with this change over the next several years - then perhaps you are not working as efficiently as you could be.
I'd also offer that if your enterprise is so complex, and so fragile, that you can't make the changes needed in the next five years or so - then your problem is not changing Oracle Database software code. It is the complexity that you have allowed to be baked into your enterprise. So - we can look at this in a negative light or we can see it as a call to do better across the board. To work smarter and to simplify complexity.

"When will Oracle's own packaged applications be compatible with the PDB architecture. For example E-business suite which still arguably is Oracles best selling ERP suite is still not certified to run on a single instance PDB, let alone multitenant."
Here lies the proof that what Oracle is doing is giving you a LOT of notice about this change to the Oracle architecture. The CDB architecture is a new architecture, and it's true that pretty much all of the Oracle software that actually uses the database does not yet support the CDB/PDB architecture. So, I argue that losing our cool over the fact that non-CDB will be going away is clearly a knee-jerk reaction to something that's coming, for sure, but not tomorrow or anytime soon.

This one statement alone should make it clear that this isn't going to happen anytime soon. So, why are people acting like it's happening next year?
"I agree with many of the points, but I kind-of disagree with the scripting aspect somewhat."
So, first let me say that I sympathize with this. However, maybe changing scripts so that they use a service name rather than OS authentication is an overall improvement in how we manage our enterprises.

I'm not saying that this is not probably one of the biggest pain points of a migration to the CDB architecture - it is. I am saying that maybe the idea of using services rather than using OS authentication is a better solution, and that we should have been doing that in the first place anyway.

Most applications should be using services by now anyway. So there should not be a significant amount of pain there.

Perhaps, in an effort to look at the positive, we might say that in being forced to modify our existing way of doing things, we are also forced to look at our existing security infrastructure. Are we simply allowing applications to connect via OS authentication? Is this really a best practice? I'm not sure it is. So, there is an opportunity here - if we choose to look at it that way.
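As a concrete illustration of that shift (a sketch with hypothetical names throughout - the host, service name, account and password are all placeholders), the change in a script usually comes down to its connect line:

```sql
-- Old style: OS authentication, tied to the local host and
-- effectively to the CDB as a whole:
connect / as sysdba

-- Service-based style: each PDB registers its own service, so a
-- script targets the PDB explicitly through the listener:
connect batch_user/secret@//dbhost:1521/newpdb
```

The service-based form also means the script keeps working unchanged if the PDB is later relocated to another host, which is part of the agility argument made below.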

"Your voice carries weight. Your opinions do matter."

I think you overvalue my voice and its impact. :) Be that as it may, I see multitenant as the natural evolution of the database product. There will be a significant amount of time for these changes to mature, and for people to execute migration paths to this new architecture, before we see the plug pulled.
This isn't me speaking as some Oracle shill. I would feel this way should I work for Oracle or anyone else. Remember - I'm the guy that writes the new features books! :)

I think the direction Oracle is going is right on target. It addresses a number of issues that are now being addressed haphazardly with solutions like virtual machines. It addresses the performance of multiple databases on one machine sharing resources most efficiently.

If you should wish to study some of the performance benefits of the Multitenant architecture you can find them listed here:

The fact is that, all things being equal (and acknowledging that there will always be outliers), there are significant performance gains when you use PDBs instead of standalone databases.

I know that it's easy to take something personally. I know it's frustrating to be pulled, kicking and screaming, into something we don't think we want. I also know that we can sometimes close our minds about something when we have a negative first reaction.

I've been working with Multitenant quite a bit of late. It's solid, but not full-featured yet. Is it bugless? Of course not. Is the Oracle Database bugless without Multitenant? Nope. Is any large application without bugs? Nope. I don't think you stop progress because of fear. You don't stop sending manned missions into space because of the risk of death. You don't stop developing your product because of the risk of bugs. If you do the latter, you become a memory.
None of us want our critical data running on databases that are just memories - do we?

We might THINK that we don't want change. We might complain bitterly about change because it inconveniences us (and how DARE they inconvenience us!!). We might think our life would be better if things remained the same. The reality - historically - is that static products cease to be meaningful in the marketplace. Otherwise, the Model T would be selling strong, we would still be using MS-DOS, and there would be no complex machines like the 747.
Agility - Let my voice carry the message of being agile
If my voice carries any weight - then let agility be my message.

I see many DBAs who treat their database environments as if they were living in the 1990s. These environments lack agility - and they use more excuses than I can count to avoid being agile.

For example, I would argue that the question of whether we keep relying on OS authentication or move to services is really a question of engineering for agility. Yes, it might be legacy code - but if it is, are we thinking in terms of agility and using maintenance cycles to modify that code to BE agile?
Probably not - and for many reasons, I'm sure.

I argue that one root cause behind these complaints (I did NOT say the only root cause) against the demise of the non-CDB model boils down to one thing - the ability to be agile.

Now, before you roast me for saying that, please take a moment to think about that argument and ask yourself if it's not just a little bit possible... if I might be just a little bit right. If I am, then what we are complaining about isn't Oracle - it's how we choose to do business.

That is its own blog post or two ... or three.... And it's what I'll be talking about at the UTOUG very soon!

Note: Edited a bit for clarification... :)