Robert G. Freeman on Oracle 2015-02-18 22:28:34

In a Multitenant database, you can create user accounts within a PDB just as you normally would. For example, this command:

SQL> show con_name

CON_NAME
------------------------------
TESTPDB

SQL> create user dbargf identified by robert;

will create a user account called dbargf within the container named TESTPDB. The user will not be created in any other container. This is known as a local user account within the CDB architecture: it is local to a single PDB.

This isolation is in alignment with the notion that each PDB is isolated from the parent CDB.

If you had a PDB called PDBTWO, then you could create a different dbargf account in that PDB. That account would be completely separate from the TESTPDB local user account created earlier. The upshot of all of this is that, in general, the namespace for a user account is at the level of the PDB.
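As a quick sketch of that namespace isolation (assuming PDBTWO exists and is open):

SQL> alter session set container=PDBTWO;

Session altered.

SQL> create user dbargf identified by some_other_password;

User created.

The two dbargf accounts share nothing but their name - each has its own password, privileges and schema. However, there is an exception to this per-PDB namespace.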

In the root container of a CDB you cannot create normal user accounts, as seen in this example:

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> create user dbargf identified by robert;
create user dbargf identified by robert
*
ERROR at line 1:
ORA-65096: invalid common user or role name

This is a problem because we will probably need to create separate accounts to administer the CDB at some level (for backups, for overall management), or even accounts that work across PDB's but with restricted privileges.

For example, let's say I wanted to have a DBA account called dbargf that would be able to create tablespaces in any PDB. I would create a new kind of user account called a common account.

The common account naming format is similar to a normal account name, except that it starts with a special set of characters, C## by default. To create a common user account called dbargf, we would log into the root container and use the create user command as seen here:

SQL> create user c##dbargf identified by robert;

Likewise, you use the drop user command to remove a common user account.
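If we later wanted to remove the account, we would issue the drop from the root container as well (a sketch; cascade is only needed if the account owns objects):

SQL> drop user c##dbargf cascade;

User dropped.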

When a common user account is created, the account is created in all of the open PDB's of the CDB. At the same time, the account is not granted any privileges.

If a PDB was not open when the common user account was created, the account will be created in it when the PDB is opened. Likewise, when a PDB is plugged in, the common user account will be added to that PDB.
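You can verify where the account exists by querying CDB_USERS from the root container, which lists accounts across all containers (a quick sketch):

SQL> select con_id, username, common from cdb_users where username = 'C##DBARGF' order by con_id;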

As I mentioned before, when a user account is created in a non-CDB environment or in a PDB, it does not have any privileges; the same is true of a common user account.

For example, if we try to log into the new c##dbargf account we get a familiar error:

ERROR:
ORA-01045: user C##DBARGF lacks CREATE SESSION privilege; logon denied

The beauty of a common user account or role is that its privileges can span PDB's. For example, a common user account can have DBA privileges in two PDB's of a CDB, yet have no DBA privileges in the remaining PDB's.
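As a sketch of how that is done (assuming the PDB is open and we are connected as a suitably privileged user), you switch containers and grant locally in each PDB that should get the role:

SQL> alter session set container=TESTPDB;

Session altered.

SQL> grant dba to c##dbargf;

Grant succeeded.

The DBA role now applies to c##dbargf in TESTPDB only; the other PDB's are unaffected.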

You grant privileges to common users as you would any other user - through the grant command as seen here:

SQL> connect / as sysdba
Connected.

SQL> grant create session to c##dbargf;
Grant succeeded.

SQL> connect c##dbargf/robert
Connected.

When the grant is issued from the ROOT container, the default scope of that grant is just the ROOT container. As a result of this grant, we can connect to the root container.

C:\app\Robert\product\12.1.0.2\dbhome_2\NETWORK\ADMIN>sqlplus c##dbargf/robert
SQL*Plus: Release 12.1.0.2.0 Production on Wed Feb 18 14:15:24 2015
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Last Successful login time: Wed Feb 18 2015 14:15:20 -08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> exit

However if we try to connect to a PDB, we will get an error:

SQL> alter session set container=newpdb;
ERROR:
ORA-01031: insufficient privileges

It is in this way that the isolation of PDB’s is maintained (by default).

The default scope of any grant is limited to the container (or PDB) in which it is granted. So, if you grant a privilege in the NEWPDB PDB, that grant only has effect in the NEWPDB PDB.

As an example of this isolation, let's see what happens when we grant the create user privilege to the c##dbargf user and try to create a new common user with c##dbargf afterwards.

First, we grant the create user privilege - this is pretty much what we do today:

grant create user to c##dbargf;

However, when c##dbargf tries to create a user, we still get an error:

create user c##dbanew identified by dbanew
*
ERROR at line 1:
ORA-01031: insufficient privileges

This serves to underscore that, by default, grants to a common user (or any user, for that matter) only apply in the container where the grant occurs. So, the create user grant in this case only applies to the ROOT container.

The problem here is that when you create a common user, Oracle tries to create that user in all PDB's. Since c##dbargf does not have create user privileges in all of the PDB's, the command fails when Oracle recurses through the PDB's and tries to create the new common user in each of them.

So, how do we deal with this? How do we grant the create user privilege to the c##dbargf account so that it's able to create other common users?

What we do is use the new container clause of the grant command. In this example, we use the container=all parameter to indicate that the grant should apply across all containers. Here is an example:

SQL> connect / as sysdba
Connected.

SQL> grant create user to c##dbargf container=all;
Grant succeeded.

Now, let's connect as c##dbargf and try that create user command again:

SQL> connect c##dbargf/robert
Connected.

SQL> create user c##dbanew identified by dbanew;
User created.

Note that we had to log in as SYSDBA to issue the grant. This is because, at this time, the SYSDBA privileged account was the only account that had the ability to grant the create user privilege across all PDB's.

We could give the c##dbargf account the ability to grant the create user privilege to other accounts if we had included the "with admin option" clause in the grant.
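That grant would look something like this (a sketch, issued from the root container):

SQL> grant create user to c##dbargf with admin option container=all;

Grant succeeded.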

So, it's clear that when you first create a common user, it is like any other user: it essentially has no rights to begin with, anywhere. Next time I'll address common users in PDB's, and I'll also talk a bit more about grants in the PDB world.

Robert G. Freeman on Oracle 2015-02-18 01:33:49

I have received a few comments related to my posts on the deprecation of the non-CDB architecture. I must admit, I find some reactions to be a bit confusing.

First, let me be clear - I am NOT an official voice for Oracle. This is MY blog, MY opinions and I am NOT directed by Oracle in any way with respect to what I do (or do not) write here.

So, let's quash the notion that I'm acting as some kind of Oracle shill when I suggest that there is a major overreaction to this issue.

Let's start with a few snips from comments made in the previous posts on this subject:
------------------------------------------------------------------------------
-------------------------------------------------------------------------------
"People who are not on the bleeding edge do not appreciate being forced into a buggy new code path. This is not FUD, this is experience."

"We do NOT want working code and system procedures to be replaced with something that might work in the future maybe kinda sorta if you get around to it."

"I think once Oracle is seen to be "eating it's own dogfood" more people will lose their fear of the unknown...."
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
Based on these quotes, I would think that somehow Oracle had announced that it was ripping out the non-CDB code now, or in 12.1.0.3. That's simply not the case.

The non-CDB code isn't going to be ripped out in 12.2 either. Beyond that, I don't know, but I can't see it being ripped out for quite some time.

Why are people knee-jerking about this stuff? Why are assumptions being made, completely without any foundation? I am also often confused by the fact that people look at their enterprise and its unique issues - and assume that everyone else faces the same issues.

I am confused by the arguments like this one:
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
"We don't want a moving target, we don't want make-work, we want our data to be secure and reachable in a timely manner."
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------

Yes, and THOSE (security and data itself) are moving targets in and of themselves, and they NECESSITATE a moving, growing and changing database product.
Security risks are constantly changing and increasing - hacking attempts are more frequent and more complex. The costs of intrusions are growing dramatically. Are you suggesting that responses to such risks - such as Data Vault, or encryption at rest - should not be added to the database product so it can remain static and, hopefully, bug free? Is the urgency to avoid bugs so critical that we weigh it higher than development of responses to these risks?

With respect to data being reachable in a timely manner: this too is a moving target. Ten years ago, the sizes of the databases we deal with now were nothing more than speculation. The unique types of data we need to deal with have increased, as have the timelines to process this data.

If Oracle had decided to remain even semi-static ten years ago - do you suppose that CERN would be able to process the vast amounts of data that it does with Oracle? Do you suppose that reports that went from running in an hour to time periods of days - because of the incredible increases in data volume - would be something that customers would accept as long as the code base remained stable? It's the constant modification of the optimizer that provides the advanced abilities of the Oracle database.

The biggest of the moving targets is not the database, but rather the business needs that the database must meet. Just because one enterprise does not have a need for those solutions, or cannot see the benefit of those solutions, does not mean that there is not a significant set of customers that DO see the benefit in those solutions.

Then there is this statement (sorry Joel - I don't mean to seem like I'm picking on you!):
-------------------------------------------------------------------------------
It's by no means new - the issue that immediately comes to mind is the 7.2 to 7.3 upgrade on VMS. People screamed, Oracle said "tough."
-------------------------------------------------------------------------------
Change is always difficult for people. I agree that change can present serious challenges to the enterprise - and we can focus on those challenges and see the cup as half empty. However - change can be seen as quite the positive too. We make the choice which way we look at it.

This is an opportunity to refine how you do things in the Enterprise. It's an opportunity to do things better, more efficiently and build a smarter and more automated enterprise.

Or, you can moan and complain along the whole path. Change things begrudgingly and ignore the fact that opportunity is staring you in the face. I would argue that if you think you are too busy to deal with this change over the next several years - then perhaps you are not working as efficiently as you could be.
I'd also offer that if your enterprise is so complex, and so fragile, that you can't make the changes needed in the next five years or so - then your problem is not changing Oracle Database software code. It is the complexity that you have allowed to be baked into your enterprise. So - we can look at this in a negative light or we can see it as a call to do better across the board. To work smarter and to simplify complexity.

-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
When will Oracle's own packaged applications be compatible with the PDB architecture. For example E-business suite which still arguably is Oracles best selling ERP suite is still not certified to run on a single instance PDB , let alone multitenant.
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
Here lies the proof that what Oracle is doing is giving you a LOT of notice about this change to the Oracle architecture. The CDB architecture is a new architecture, and it's true that pretty much all of the Oracle software that actually uses the database does not yet support the CDB/PDB architecture. So, I argue that losing our cool over the fact that non-CDB will be going away is clearly a knee-jerk reaction to something that's coming, for sure, but not tomorrow or anytime soon.

From this one statement alone should arise the notion that this isn't going to happen anytime soon. So, why are people acting like it is happening next year?
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
I agree with many of the points, but I kind-of disagree with the scripting aspect somewhat.
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
So, first let me say that I sympathize with this. However, maybe changing scripts so that they use a service name rather than OS authentication is an improvement in how we manage our enterprises overall.

I'm not denying that this is probably one of the biggest pain points of a migration to the CDB architecture - it is. I am saying that using services rather than OS authentication may be a better solution, and that we should have been doing that in the first place anyway.

Most applications should be using services by now anyway. So there should not be a significant amount of pain there.

Perhaps, in an effort to look at the positive, we might say that in being forced to modify our existing way of doing things, we are also forced to look at our existing security infrastructure. Are we simply allowing applications to connect via OS authentication? Is this really a best practice? I'm not sure it is. So, there is an opportunity here - if we choose to look at it that way.

-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
Your voice carries weight. Your opinions do matter.
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------

I think you overvalue my voice and its impact. :) Be that as it may, I see multitenant as the natural evolution of the database product. There will be a significant amount of time for these changes to mature, and for people to execute migration paths to this new architecture, before we see the plug pulled.
This isn't me speaking as some Oracle shill. I would feel this way should I work for Oracle or anyone else. Remember - I'm the guy that writes the new features books! :)

I think the direction Oracle is going is right on target. It tackles a number of issues that are now being addressed haphazardly with solutions like virtual machines. It addresses the performance of multiple databases on one machine sharing resources most efficiently.

If you should wish to study some of the performance benefits of the Multitenant architecture you can find them listed here:
http://www.oracle.com/technetwork/database/multitenant/learn-more/oraclemultitenantt5-8-final-2185108.pdf

The fact is that, all things being equal (and acknowledging that there will always be the outliers), there are significant performance gains when you use PDB's instead of stand alone databases.

I know that it's easy to take something personally. I know it's frustrating to be pulled, kicking and screaming, into something we don't think we want. I also know that we can sometimes close our minds about something when we have a negative first reaction.

I've been working with Multitenant quite a bit of late. In 12.1.0.2 it's solid, but not full featured yet. Is it bugless? Of course not. Is the Oracle Database bugless without Multitenant? Nope. Is any large application without bugs? Nope. I don't think you stop progress because of fear. You don't stop sending manned missions into space because of the risk of death. You don't stop developing your product because of the risk of bugs. If you do the latter, you become a memory.
None of us want our critical data running on databases that are just memories - do we?

We might THINK that we don't want change. We might complain bitterly about change because of the fact that it inconveniences us (and how DARE they inconvenience us!!). We might think our life would be better if things remained the same. The reality - historically - is that static products cease to be meaningful in the marketplace. Otherwise, the Model-T would be selling strong, we would still be using MSDOS and there would be no complex machines like the 747.
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
Agility - Let my voice carry the message of being agile
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
If my voice carries any weight - then let agility be my message.

I see many DBA's that treat their database environments as if they were living in the 1990's. These environments lack agility - and they use more excuses than I can count to avoid being agile.

For example, I would argue that moving from OS authentication to services is all about engineering for agility. Yes, it might be legacy code - but if it is, then the question is: are we thinking in terms of agility and using maintenance cycles to modify our code to BE agile?
Probably not - and for many reasons, I'm sure.

I argue that one root cause behind these complaints (I did NOT say the only root cause) against the demise of the non-CDB model boils down to one thing - the ability to be agile.

Now, before you roast me for saying that, please take a moment and think about that argument and ask yourself - if it's not just a little bit possible... If I might just be a little bit right. If I am, then what we are complaining about isn't Oracle - it's how we choose to do business.

That is its own blog post or two ... or three.... And it's what I'll be talking about at the UTOUG very soon!

Note: Edited a bit for clarification... :)

Detecting Source of MySQL Queries with Comments

As a MySQL DBA I already know the data changes that happen on my system. I have logs for that.

However, it’s a common problem that several years into the life of an application, the current developers won’t know where in the codebase queries come from. It’s often hard for them to find the location in the code if queries are formed dynamically; the pattern I show them to optimize doesn’t match anything in the code.

I stumbled on a trick a couple years ago that has been invaluable in tracking down these problematic queries: query comments.

Here’s an example:

When a query shows up in the slow query log, it generally looks something like this:

# Time: 150217 10:26:01
# User@Host: comments[comments] @ localhost []  Id:    13
# Query_time: 0.000231  Lock_time: 0.000108 Rows_sent: 3  Rows_examined: 3
SET timestamp=1424186761;
select * from cars;

That logging shows me who executed the query (the comments user), the host it was executed from (localhost in this case), and what time it was executed (who, where, when).

What this doesn't tell me is where in my codebase this query comes from. For example, if I want to change that select * to a more targeted select column1, column2, I may not know where to find it.

This is where comments help.

Comments can be any of three styles:

select * from cars; # comment to end of line here
select * from cars; -- comment to end of line here
select * from /* inline comment here */ cars;

When you add a comment to your query, the comment is captured in the log. Here's an example.

On the command line:

mysql> select * from cars; # valerie - command line - testing comments

In the slow log:

# Time: 150217 11:25:24
# User@Host: comments[comments] @ localhost []  Id:     3
# Query_time: 0.000246  Lock_time: 0.000119 Rows_sent: 3  Rows_examined: 3
SET timestamp=1424190324;
select * from cars # valerie - command line - testing comments;

This is especially useful if you are forming queries dynamically and can add the name of the piece of code, function, user, etc. to the comment. Those comments would look something like this in the slow log:

# Time: 150217 11:32:04
# User@Host: comments[comments] @ localhost []  Id:     3
# Query_time: 0.000225  Lock_time: 0.000104 Rows_sent: 3  Rows_examined: 3
SET timestamp=1424190724;
select * from cars # userid 7695424 - index.php?search=true - display function;
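If your code builds SQL strings dynamically, you can append the tag when the string is formed. Here is a minimal sketch in plain MySQL (the tag value and the PREPARE/EXECUTE wrapper are just for illustration; in application code you would concatenate the comment into the string the same way):

SET @origin = 'userid 7695424 - index.php?search=true - display function';
SET @sql = CONCAT('SELECT * FROM cars /* ', @origin, ' */');
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;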


Pythian’s Annual MySQL Community Dinner at Pedro’s

Once again, Pythian is organizing an event that by now may be considered a tradition: The MySQL community dinner at Pedro’s! This dinner is open to all MySQL community members since many of you will be in town for Percona Live that week. Here are the details:

What: The MySQL Community Dinner

When: Monday April 13, 2015 –  7:00 PM at Pedro’s (You are welcome to show up later, too!)

Where: Pedro’s Restaurant and Cantina – 3935 Freedom Circle, Santa Clara, CA 95054

Cost: Tickets are $35 USD and include the Mexican buffet, non-alcoholic drinks, taxes, and gratuities (Pedro's Dinner Menu 2015)

How: RSVP through Eventbrite

Attendees:

Laine Campbell

Derek Downey

Gillian Gunson

Miklos Szel

Marco Tusa

Mark Flipi

Alkin Tezuysal

Brian Kelly

Joe Muraski

Patrick Pearson

Looking forward to seeing you all at the event!


Travelling in a Wheelchair

It’s that time of the year again, when one of the best grass roots Oracle user group conferences takes place, the Rocky Mountain Oracle User Group Training Days in Denver Colorado. I’ve been privileged enough to present at this conference several times in the past, and always have an absolute blast at it, so I try to submit a few papers each year. This year, I had three papers accepted. Two of them were designed to be deep-dive sessions, one on delivering Schema as a Service and one on delivering Database as a Service, and the other one is a presentation on Snap Clone, which is functionality within Enterprise Manager that takes advantage of Copy-on-Write technology (available either through storage technology or CloneDB in the Oracle Database) to take thin clones of Oracle databases.

But this post isn’t about that. It isn’t even about anything remotely Oracle related. Those of you that have known me for a while know that I’ve had some unfortunate issues with both my back and my left knee (caused by the aforementioned back issues). Unfortunately this year the problems have arisen again, so both standing and walking for any length of time are particularly problematic for me. That means that this year I travelled to the US for the first time using (mostly) a wheelchair. While I can still walk through security scans and so on, anything longer than 5 minutes or so on my feet becomes pretty damn painful, so I thought this was going to be an interesting experience.

If you've never had to get around in a wheelchair for any length of time, you really have no understanding of just how difficult it can be. It can, for example, take a reasonable amount of brute force to push yourself around in one (depending on the surface you're going over). Even though you're on wheels, you can get caught in relatively small cracks and nearly tip out of the chair (as Tim Gorman found out when he was pushing me around the Denver airport parking lot! :) ). Heading up even relatively small hills can likewise be quite difficult.

All of which I knew to start with, as I’d been using the wheelchair around the shops at home for a while. But this was going to be my first real experience travelling with one. So I rang Qantas (the airline I was travelling with) several weeks before leaving to ask what I needed to do, and whether they needed any information. They asked me for the dimensions of the wheelchair (length, width and height) as well as the weight, and told me they had entered all the information they needed onto my booking, so it should be fairly smooth sailing. I was to be escorted to and from each flight, and someone would help me get between flights as well. They also told me that my first flight (coming to Denver means a short puddle jump in a small plane to Sydney, followed by about 15 hours to Dallas, then another hour and a bit to Denver) was going to be in a Dash 8, and the wheelchair would need to be put in the hold for that flight but I could use the wheelchair to the foot of the stairs, and it would be brought to me at the foot of the stairs again when I landed.

It all sounded too good to be true. And it was. :) Their “smooth sailing” was very hit or miss. My wife dropped me at the airport, coming in with me to check my luggage (try pulling a suitcase around when you’re in a wheelchair!). The lady at check-in was very helpful, but she had none of the details about size and weight that I had provided earlier, so we had to try and guesstimate all of that again because I couldn’t remember the exact figures – so much for all the details being on my booking! After that all got sorted out I said goodbye to my lovely wife and made my way through security. As I have had the same knee replaced twice, I always set off the security scanners and need to be patted down (unless the airport happens to have those full body scanners which make life SO much easier for me when travelling!). From there I wheeled myself off to Qantas Club for a quick latte before heading off to my first flight.

The escort to my first flight turned out to be someone saying "It's time for you to go now". That didn't really matter, as it was a short trip to the gate. At the gate, I was escorted down to the tarmac (via a lift) before everyone else boarded, and as promised they put the wheelchair in the hold. The stewardess and I both asked for the wheelchair to be tagged as premium luggage so it would come back to me at the foot of the stairs when we got to Sydney.

Well, that didn't happen. I had to ask the guys unloading the plane to bring it over as a separate request. What's more, I waited with the poor stewardesses on the tarmac for 20 minutes for the escort. The stewardesses couldn't just leave me on the tarmac, and eventually a guy came over from the next plane (apparently handled by a different company even though it was still a Qantas plane) and he took me up to the gate. He said this was a frequent occurrence for planes from Canberra, which doesn't fill me with confidence about travelling with Qantas again if I still need a wheelchair! When I finally got up to the gate, the escort that was supposed to turn up in the first place finally arrived - no apology to me or the guy that had helped me (poor form!).

If you haven't travelled through Sydney before, the domestic and international terminals are separated, and you travel by bus between the two. My new escort took me up to the international transfer lounge, left me in the hands of Terry, and went off happily having fulfilled his role for all of about 5 minutes (as you can tell, I was thrilled by how hard he worked!). Terry, on the other hand, was really good. He took me back down to the tarmac (another area), loaded me up in my wheelchair (first time I've ever been in a wheelchair lift), and we drove over to the international lounge, having quite a friendly chat as we went. When we arrived there was an area for people who needed assistance like me to be parked in, so he left me there for someone else to take me through security.

The guy that picked me up there was, to be succinct, fantastic! He took me through Customs and security with an absolute minimum of fuss, collected all my scanned stuff like the laptop and so on while I was being patted down (which happens every time at Security because of my fake knee) and then to the gate. He even checked how long the flight was going to be to see if it was more worthwhile for me to wait in Qantas Club, which as it turned out wasn’t worthwhile as the flight was going to board in 20 minutes or so. He then made sure the wheelchair would be in the main part of the plane with me, which made it much quicker and easier to get it back when I got to Dallas.

The flight to Dallas was the one I was really dreading. It’s about 15 hours, which for someone who is 6 feet 3 inches and stuck in cattle class is normally not good. To my surprise the flight was probably only 2/3 full, so I had an empty seat on one side of me and the aisle on another, so I actually got more sleep on the flight than I normally do. I had to pop some painkillers once, but compare that to the flight to UKOUG in December where I was on heavy duty drugs every 4 hours and you can guess I was very happy with the flight. :)

When I got to Dallas, my escort must’ve been the brother of the last guy in Sydney, because again he took really, really good care of me. As this was the first port of call on US soil, I had to clear Customs and Security again. He managed to handle all of that very smoothly, including collecting my luggage and then pushing me and it to the drop-off point for the luggage – not an easy feat when you’re pushing two items at once! He even took me on the train to move to the terminal where my departure gate was, so again I was very, very happy.

Unfortunately things went downhill again when we arrived at Denver. The flight was only short (a bit over an hour), but when we arrived there was one escort and about 4 or 5 people needing help, so I walked the wheelchair up the ramp to the gate (too steep to do it in the wheelchair) and then wheeled myself out to Baggage Claim where I was met by my good friends Tim and Kellyn.

So overall, I managed to successfully travel from Canberra via Sydney and Dallas to Denver in a wheelchair. Because I travelled in the wheelchair, my pain levels on arrival were MUCH lower than when I travelled to UKOUG on crutches. The experience of transferring between planes was a bit hit or miss, ranging from no help even turning up to some absolutely fantastic help, so Qantas gets a mixed bag of marks from me.

One thing I would stress though – when things were difficult, it was still possible for me to get up and walk short distances. Compare that to the people who can’t get out of wheelchairs at all, and I had it damned easy! If you are able-bodied and can get around without any of these issues, thank your lucky stars or give thanks to whatever god is pertinent to you, because it’s only when you’re put in the place of someone who has to deal with being in a wheelchair all the time that you REALLY get an appreciation of just how hard it can be. To those who are in a wheelchair permanently, all I can say is “RESPECT!”

Latest updates to PerfSheet4, a tool for Oracle AWR data mining and visualization

Topic: This post is about the latest updates to PerfSheet4 v3.7 (February 2015). PerfSheet4 is a tool aimed at DBAs and Oracle performance analysts. It provides a simplified interface to extract and visualize AWR time series data using Excel pivot charts.


Log Buffer #409, A Carnival of the Vanities for DBAs

This Log Buffer Edition sheds light on some of the nifty blog posts of the week from Oracle, SQL Server and MySQL.

Oracle:

Patch Set Update: Hyperion Data Relationship Management 11.1.2.3.504

The Hitchhiker’s Guide to the EXPLAIN PLAN Part 33: The mother of all SQL antipatterns?

MongoDB as a Glassfish Security Realm

E-Business Suite customers must ensure that their database remains on a level that is covered by Error Correction Support (ECS)

EM12c: How to Retrieve Passwords from the Named Credentials

SQL Server:

How does a View work on a Table with a Clustered Columnstore Index ?

How do you develop and deploy your database?

Microsoft Azure Storage Queues Part 3: Security and Performance Tips

Stairway to SQL Server Security Level 6: Execution Context and Code Signing

Centralize Your Database Monitoring Process

MySQL:

New Galera Cluster version is now released! It includes patched MySQL server 5.6.21 and Galera replication provider 3.9

Shinguz: Nagios and Icinga plug-ins for MySQL 1.0.0 have been released

The next release of MongoDB includes the ability to select a storage engine, the goal being that different storage engines will have different capabilities/advantages, and users can select the one most beneficial to their particular use-case. Storage engines are cool.

The MySQL grant syntax allows you to specify dynamic database names using the wildcard characters.

Oracle‘s 10 commitments to MySQL – a 5 year review

Log Buffer #410, A Carnival of the Vanities for DBAs

This Log Buffer Edition spreads the love of databases just before Valentine's Day. Lovely blog posts from Oracle, SQL Server and MySQL are here for you to love.

Oracle:

Creating a Mobile-Optimized REST API Using Oracle Service Bus by Steven Davelaar.

GROUP BY – wrong results in 12.1.0.2

Using Edition-Based Redefinition to Bypass Those Pesky Triggers

It’s easy to make mistakes, or overlook defects, when constructing parallel queries – especially if you’re a developer who hasn’t been given the right tools to make it easy to test your code.

If you have a sorted collection of elements, how would you find index of specific value?

SQL Server:

How to use the IsNothing Inspection Function in SSRS

What better way to learn how to construct complex CHECK CONSTRAINTs, use the SQL 2012 window frame capability of the OVER clause and LEAD analytic function, as well as how to pivot rows into columns using a crosstab query?

SQL Server’s GROUP BY clause provides you a way to aggregate your SQL Server data and to group data on a single column, multiple columns, or even expressions. Greg Larsen discusses how to use the GROUP by clause to summarize your data.

Surely, we all know how T-SQL Control-of-flow language works? In fact it is surprisingly easy to get caught out.

Resilient T-SQL code is code that is designed to last, and to be safely reused by others.

MySQL:

NoSQL databases are gaining in popularity. MongoDB, one of the most established among them, uses a JSON data model and offers great scalability and ease of use due to its dynamic data schemas.

Is upgrading RDS like a shit-storm that will not end?

Over the last few MySQL releases, the MySQL package has grown in size, and it looks like the trend is continuing.

This article details the proper method of load balancing either a Percona XtraDB Cluster (PXC) or a MariaDB Cluster.

One common job for a DBA is working with a Development Team member on a batch loading process that is taking more time than expected.

DBMS_STATS.PURGE_STATS

Prior to 11.2.0.4, DBMS_STATS.PURGE_STATS performs a slow delete of stats older than the parameterised input timestamp.

Why might you be purging? Here’s one such illustration:

https://jhdba.wordpress.com/tag/dbms_stats-purge_stats/

This delete can be slow if these tables are large, and there are a number of reasons why they might be so - notably if MMON cannot complete the purge within its permitted timeframe.

But note that if you’re happy to purge all history, there is a special TRUNCATE option if you make the call with a magic timestamp:

exec DBMS_STATS.PURGE_STATS(DBMS_STATS.PURGE_ALL);

but Oracle Support emphasises that:

This option is planned to be used as a workaround on urgent cases and under the advice of Support…

Ah… the old magic value pattern / antipattern!

PURGE_ALL CONSTANT TIMESTAMP WITH TIME ZONE :=
  TO_TIMESTAMP_TZ('1001-01-01 01:00:00 -00:00','YYYY-MM-DD HH:MI:SS TZH:TZM');

As part of the upgrade to 11.2.0.4, one gotcha is that these history tables become partitioned.

I don't have a copy of 11.2.0.4 to hand, but I do have 12.1.0.2, and the tables here are daily interval partitioned, so I presume 11.2.0.4 is the same.
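A quick way to check (a sketch, assuming access to the DBA views):

select table_name, partitioning_type, interval
from   dba_part_tables
where  owner = 'SYS'
and    table_name like 'WRI$_OPTSTAT%HISTORY';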

One plus side of this newly partitioned table is that PURGE_STATS can now drop old partitions, which is quicker than a delete. A minor downside is that the tables have global indexes, so the recursive/internal operations have to be done with UPDATE GLOBAL INDEXES.

One curiosity in the trace file from this operation was this statement:

delete /*+ dynamic_sampling(4) */ 
from   sys.wri$_optstat_histhead_history
where  savtime_date < to_date('01-01-1900', 'dd-mm-yyyy') 
and    savtime not in (select timestamp '0000-01-01 00:00:00 -0:0' + sid + serial#/86400
                       from   gv$session 
                       where  status = 'ACTIVE' 
                       and    con_id in (0, sys_context('userenv', 'con_id')))       
and    rownum <= NVL(:1, rownum)

This is deleting from the P_PERMANENT default partition, but why is this necessary, and what is that subquery all about - particularly the timestamp '0000-01-01 00:00:00 -0:0' + sid + serial#/86400 bit?


MobaXterm 7.5

MobaXterm 7.5 has been released. You can find the downloads and changelog in the usual places.

If you are a Windows user, MobaXterm is by far the best tool I’ve ever found for SSH and X Server functionality. It’s much simpler than using Cygwin and Putty individually.

Cheers

Tim…

