ANSI expansion

Here’s a quirky little bug that appeared on the OTN database forum in the last 24 hours and which (in 12c, at least) produces an issue I can best demonstrate with the following cut-n-paste:


SQL> desc purple
 Name                                Null?    Type
 ----------------------------------- -------- ------------------------
 G_COLUMN_001                        NOT NULL NUMBER(9)
 P_COLUMN_002                                 VARCHAR2(2)

SQL> select p.*
  2  from GREEN g
  3    join RED r on g.G_COLUMN_001 = r.G_COLUMN_001
  4    join PURPLE p on g.G_COLUMN_001 = p.G_COLUMN_001;
  join PURPLE p on g.G_COLUMN_001 = p.G_COLUMN_001
       *
ERROR at line 4:
ORA-01792: maximum number of columns in a table or view is 1000


SQL> select p.g_column_001, p.p_column_002
  2  from GREEN g
  3    join RED r on g.G_COLUMN_001 = r.G_COLUMN_001
  4    join PURPLE p on g.G_COLUMN_001 = p.G_COLUMN_001;

no rows selected

A query that requires “star-expansion” fails with ORA-01792, but if you explicitly expand the ‘p.*’ to list all the columns it represents, the optimizer is happy.

The person who highlighted the problem supplied code to generate the tables so you can repeat the tests very easily; one of the quick checks I did was to modify the code to produce tables with a much smaller number of columns and then expand the SQL to see what Oracle would have done with the ANSI. So, with only 3 columns each in tables RED and GREEN, this is what I did:

set serveroutput on
set long 20000

variable m_sql_out clob

declare
    m_sql_in    clob :=
                        '
                        select p.*
                        from GREEN g
                        join RED r on g.G_COLUMN_001 = r.G_COLUMN_001
                        join PURPLE p on g.G_COLUMN_001 = p.G_COLUMN_001
                        ';
begin

    dbms_utility.expand_sql_text(
        m_sql_in,
        :m_sql_out
    );

end;
/

column m_sql_out wrap word
print m_sql_out

The dbms_utility.expand_sql_text() procedure is new to 12c, and you’ll need the execute privilege on the dbms_utility package to use it; but if you want to take advantage of it in 11g you can also find it (undocumented) in a package called dbms_sql2.
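
For what it’s worth, the 11g call would look something like the following sketch – I’m assuming the undocumented dbms_sql2 version takes the same (IN clob, OUT clob) pair of parameters, so treat it as a sketch rather than gospel. The output, assuming the package behaves the same way, should match what follows.

variable m_sql_out clob

begin
        -- undocumented in 11g; assumed to behave like dbms_utility.expand_sql_text()
        dbms_sql2.expand_sql_text(
                '
                select p.*
                from GREEN g
                join RED r on g.G_COLUMN_001 = r.G_COLUMN_001
                join PURPLE p on g.G_COLUMN_001 = p.G_COLUMN_001
                ',
                :m_sql_out
        );
end;
/

print m_sql_out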

Here’s the result of the expansion (you can see why I reduced the column count to 3):


M_SQL_OUT
--------------------------------------------------------------------------------
SELECT "A1"."G_COLUMN_001_6" "G_COLUMN_001","A1"."P_COLUMN_002_7" "P_COLUMN_002"
FROM  (SELECT "A3"."G_COLUMN_001_0" "G_COLUMN_001","A3"."G_COLUMN_002_1"
"G_COLUMN_002","A3"."G_COLUMN_003_2" "G_COLUMN_003","A3"."G_COLUMN_001_3"
"G_COLUMN_001","A3"."R_COLUMN__002_4" "R_COLUMN__002","A3"."R_COLUMN__003_5"
"R_COLUMN__003","A2"."G_COLUMN_001" "G_COLUMN_001_6","A2"."P_COLUMN_002"
"P_COLUMN_002_7" FROM  (SELECT "A5"."G_COLUMN_001"
"G_COLUMN_001_0","A5"."G_COLUMN_002" "G_COLUMN_002_1","A5"."G_COLUMN_003"
"G_COLUMN_003_2","A4"."G_COLUMN_001" "G_COLUMN_001_3","A4"."R_COLUMN__002"
"R_COLUMN__002_4","A4"."R_COLUMN__003" "R_COLUMN__003_5" FROM
"TEST_USER"."GREEN" "A5","TEST_USER"."RED" "A4" WHERE
"A5"."G_COLUMN_001"="A4"."G_COLUMN_001") "A3","TEST_USER"."PURPLE" "A2" WHERE
"A3"."G_COLUMN_001_0"="A2"."G_COLUMN_001") "A1"

Tidying this up:


SELECT
        A1.G_COLUMN_001_6 G_COLUMN_001,
        A1.P_COLUMN_002_7 P_COLUMN_002
FROM    (
        SELECT
                A3.G_COLUMN_001_0 G_COLUMN_001,
                A3.G_COLUMN_002_1 G_COLUMN_002,
                A3.G_COLUMN_003_2 G_COLUMN_003,
                A3.G_COLUMN_001_3 G_COLUMN_001,
                A3.R_COLUMN__002_4 R_COLUMN__002,
                A3.R_COLUMN__003_5 R_COLUMN__003,
                A2.G_COLUMN_001 G_COLUMN_001_6,
                A2.P_COLUMN_002 P_COLUMN_002_7
        FROM    (
                SELECT
                        A5.G_COLUMN_001 G_COLUMN_001_0,
                        A5.G_COLUMN_002 G_COLUMN_002_1,
                        A5.G_COLUMN_003 G_COLUMN_003_2,
                        A4.G_COLUMN_001 G_COLUMN_001_3,
                        A4.R_COLUMN__002 R_COLUMN__002_4,
                        A4.R_COLUMN__003 R_COLUMN__003_5
                FROM
                        TEST_USER.GREEN A5,
                        TEST_USER.RED A4
                WHERE
                        A5.G_COLUMN_001=A4.G_COLUMN_001
                ) A3,
                TEST_USER.PURPLE A2
        WHERE
                A3.G_COLUMN_001_0=A2.G_COLUMN_001
        ) A1


As you can now see, the A1 alias lists all the columns in GREEN, plus all the columns in RED, plus all the columns in PURPLE – totalling 3 + 3 + 2 = 8. (There is a little pattern of aliasing and re-aliasing that turns the join column RED.g_column_001 into G_COLUMN_001_3, making it look at first glance as if it has come from the GREEN table).

You can run a few more checks, increasing the number of columns in the RED and GREEN tables, but essentially once the total number of columns in those two tables goes over 998, adding the two extra columns from PURPLE makes that intermediate inline view break the 1,000 column rule.
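
If you don’t want to hand-write hundreds of column definitions for those checks, a little PL/SQL generator along the following lines will do the job – green_wide is just a made-up name, and the loop limit is the thing to vary (499 + 499 columns in the two base tables plus the two PURPLE columns puts the intermediate view exactly at the 1,000 limit, so one more column should tip the ANSI version over the edge):

declare
        m_sql   varchar2(32767) :=
                'create table green_wide (g_column_001 number(9) not null';
begin
        for i in 2 .. 499 loop
                m_sql := m_sql || ', g_column_' || to_char(i,'fm000') || ' varchar2(2)';
        end loop;
        m_sql := m_sql || ')';
        execute immediate m_sql;
end;
/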

Here’s the equivalent expanded SQL if you identify the columns explicitly in the select list (even with several hundred columns in the RED and GREEN tables):


SELECT
        A1.G_COLUMN_001_2 G_COLUMN_001,
        A1.P_COLUMN_002_3 P_COLUMN_002
FROM    (
        SELECT
                A3.G_COLUMN_001_0 G_COLUMN_001,
                A3.G_COLUMN_001_1 G_COLUMN_001,
                A2.G_COLUMN_001 G_COLUMN_001_2,
                A2.P_COLUMN_002 P_COLUMN_002_3 
        FROM    (
                SELECT 
                        A5.G_COLUMN_001 G_COLUMN_001_0,
                        A4.G_COLUMN_001 G_COLUMN_001_1
                FROM 
                        TEST_USER.GREEN A5,
                        TEST_USER.RED A4
                WHERE 
                        A5.G_COLUMN_001=A4.G_COLUMN_001
                ) A3,
                TEST_USER.PURPLE A2 
        WHERE
                A3.G_COLUMN_001_0=A2.G_COLUMN_001
        ) A1


As you can see, the critical inline view now holds only the original join columns and the columns required for the select list.

If you’re wondering whether this difference in expansion could affect execution plans, it doesn’t seem to; the 10053 trace file includes the following (cosmetically altered) output:


Final query after transformations:******* UNPARSED QUERY IS *******
SELECT
        P.G_COLUMN_001 G_COLUMN_001,
        P.P_COLUMN_002 P_COLUMN_002 
FROM
        TEST_USER.GREEN   G,
        TEST_USER.RED     R,
        TEST_USER.PURPLE  P 
WHERE
        G.G_COLUMN_001=P.G_COLUMN_001 
AND     G.G_COLUMN_001=R.G_COLUMN_001

So it looks as if the routine to transform the syntax puts in a lot of redundant text, then the optimizer takes it all out again.
The problem doesn’t exist with traditional Oracle syntax, by the way, it’s an artefact of Oracle’s expansion of the ANSI syntax.


ASH

There was a little conversation on Oracle-L about ASH (active session history) recently which I thought worth highlighting – partly because it raised a detail that I had got wrong until Tim Gorman corrected me a few years ago.

Once every second, information about active sessions is copied from v$session into the dynamic performance view v$active_session_history. (There are a couple of exceptions to this rule – for example if a session has called dbms_lock.sleep() it will appear in v$session with status = ‘ACTIVE’, but it will not be recorded in v$active_session_history.) Each of these snapshots is referred to as a “sample” and may hold zero, one, or many rows.
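
If you want to see that exception for yourself, a sketch along the following lines should do it (supply the sleeping session’s SID yourself, and you’ll need the execute privilege on dbms_lock):

-- Session 1: sleep for a minute
execute dbms_lock.sleep(60)

-- Session 2: while session 1 is sleeping
select status, event from v$session where sid = &sleeper_sid;

select  count(*)
from    v$active_session_history
where   session_id = &sleeper_sid
and     sample_time > sysdate - 1/1440
;

The first query should report the sleeper as ACTIVE, waiting on the (idle) event “PL/SQL lock timer”; the second should stay at zero for the duration of the sleep.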

The rows collected in every tenth sample are flagged for copying into the AWR where, once they’ve been copied into the underlying table, they can be seen in the view dba_hist_active_sess_history.  This is where a common misunderstanding occurs: it is not every 10th row in v$active_session_history, it’s every 10th second; and if a sample happens to be empty that’s still the sample that is selected (which means there will be a gap in the output from dba_hist_active_sess_history). In effect dba_hist_active_sess_history holds copies of the information you’d get from v$session if you sampled it once every 10 seconds instead of once per second.

It’s possible to corroborate this through a fairly simple query, as the rows from v$active_session_history that are going to be dumped to the AWR are flagged as they are created:


select
        distinct case is_awr_sample when 'Y' then 'Y' end flag,
        sample_id,
        sample_time
from
        v$active_session_history
where
        sample_time > sysdate - 1/1440
order by
        2,1
;

F  SAMPLE_ID SAMPLE_TIME
- ---------- --------------------------------
     3435324 26-MAR-15 05.52.53.562 PM
     3435325 26-MAR-15 05.52.54.562 PM
     3435326 26-MAR-15 05.52.55.562 PM
     3435327 26-MAR-15 05.52.56.562 PM
     3435328 26-MAR-15 05.52.57.562 PM
     3435329 26-MAR-15 05.52.58.562 PM
     3435330 26-MAR-15 05.52.59.562 PM
     3435331 26-MAR-15 05.53.00.562 PM
Y    3435332 26-MAR-15 05.53.01.562 PM
     3435333 26-MAR-15 05.53.02.572 PM
     3435334 26-MAR-15 05.53.03.572 PM
     3435335 26-MAR-15 05.53.04.572 PM
     3435336 26-MAR-15 05.53.05.572 PM
     3435337 26-MAR-15 05.53.06.572 PM
     3435338 26-MAR-15 05.53.07.572 PM
     3435339 26-MAR-15 05.53.08.572 PM
     3435340 26-MAR-15 05.53.09.572 PM
     3435341 26-MAR-15 05.53.10.582 PM
Y    3435342 26-MAR-15 05.53.11.582 PM
     3435343 26-MAR-15 05.53.12.582 PM
     3435344 26-MAR-15 05.53.13.582 PM
     3435345 26-MAR-15 05.53.14.582 PM
     3435346 26-MAR-15 05.53.15.582 PM
     3435347 26-MAR-15 05.53.16.582 PM
     3435348 26-MAR-15 05.53.17.582 PM
     3435349 26-MAR-15 05.53.18.592 PM
     3435350 26-MAR-15 05.53.19.592 PM
     3435351 26-MAR-15 05.53.20.592 PM
Y    3435352 26-MAR-15 05.53.21.602 PM
     3435355 26-MAR-15 05.53.24.602 PM
     3435358 26-MAR-15 05.53.27.612 PM
     3435361 26-MAR-15 05.53.30.622 PM
     3435367 26-MAR-15 05.53.36.660 PM
     3435370 26-MAR-15 05.53.39.670 PM
     3435371 26-MAR-15 05.53.40.670 PM
     3435373 26-MAR-15 05.53.42.670 PM
     3435380 26-MAR-15 05.53.49.700 PM
     3435381 26-MAR-15 05.53.50.700 PM
Y    3435382 26-MAR-15 05.53.51.700 PM
     3435383 26-MAR-15 05.53.52.700 PM

40 rows selected.

As you can see at the beginning of the output the samples have a sample_time that increases one second at a time (with a little slippage), and the flagged samples appear every 10 seconds at 5.53.01, 5.53.11 and 5.53.21; but then the instance becomes fairly idle and there are several samples taken over the next 20 seconds or so where we don’t capture any active sessions; in particular there are no rows in the samples for 5.53.31 and 5.53.41; but eventually the instance gets a little busy again and we see that we’ve had active sessions in consecutive samples for the last few seconds, and we can see that we’ve flagged the sample at 5.53.51 for dumping into the AWR.

You’ll notice that I seem to be losing about 1/100th second every few seconds; this is probably a side effect of virtualisation and having a little CPU-intensive work going on in the background. If you see periods where the one second gap in v$active_session_history or 10 second gap in dba_hist_active_sess_history has been stretched by several percent you can assume that the CPU was under pressure over that period. The worst case I’ve seen to date reported gaps of 12 to 13 seconds in dba_hist_active_sess_history.  The “one second” algorithm is “one second since the last snapshot was captured” so if the process that’s doing the capture doesn’t get to the top of the runqueue in a timely fashion the snapshots slip a little.
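
If you want to check for that kind of slippage on your own system, a query like the following (just a sketch using the analytic lag() function) reports the gap between consecutive samples currently in memory:

select
        sample_id,
        sample_time,
        sample_time - lag(sample_time) over (order by sample_id)  gap_from_previous
from    (
        select  distinct sample_id, sample_time
        from    v$active_session_history
        where   sample_time > sysdate - 5/1440
        )
order by
        sample_id
;

The inline view gets rid of the duplicate (sample_id, sample_time) pairs you see when a sample holds several active sessions. Remember, though, that a completely empty sample also shows up as a whole missing second, so it’s the fractional creep in the timestamps rather than the whole missing seconds that points at a delayed sampler.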

When the AWR snapshot is taken, the flagged rows from v$active_session_history are copied to the relevant AWR table. You can adjust the frequency of sampling for both v$active_session_history and dba_hist_active_sess_history, of course – there are hidden parameters to control both: _ash_sampling_interval (1,000 milliseconds) and _ash_disk_filter_ratio (10). There’s also a parameter controlling how much memory should be reserved in the shared pool to hold v$active_session_history: _ash_size (1048618 bytes per session in my case).  The basic target is to keep one hour’s worth of data in memory, but if there’s no pressure for memory you can find that v$active_session_history holds more than the hour; conversely, if there’s heavy demand for memory and lots of continuously active sessions you may find that Oracle does “emergency flushes” of v$active_session_history between the normal AWR snapshots. I have heard of people temporarily increasing the memory and reducing the interval and ratio – but I haven’t yet felt the need to do it myself.
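
For reference, here’s a quick way to check the current values of those hidden parameters – only a sketch, you need to be connected as SYS to query the x$ structures, and the parameters may change between versions:

select
        n.ksppinm       parameter_name,
        v.ksppstvl      parameter_value
from
        x$ksppi  n,
        x$ksppcv v
where
        n.indx = v.indx
and     n.ksppinm like '\_ash\_%' escape '\'
order by
        n.ksppinm
;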

 


PowerPoint 2013 Always Starts Minimized

This post has nothing to do with Oracle or Enterprise Manager at all, so if that’s all you’re interested in you can stop reading now. :)

Yesterday I ran into the situation where PowerPoint would only open minimized on the task bar and nothing I could do would get it to budge from there. The only way I could get PowerPoint to start and be displayed on any monitor was to start it in safe mode. [As an aside, to start PowerPoint (or indeed any Office product) in Safe Mode you just need to hold down the Control key while double-clicking either the PowerPoint program or any PowerPoint document.] After starting PowerPoint successfully in Safe Mode, I could see PowerPoint on my screen, but when I closed it down and tried to restart in normal mode, it would always start minimized.

So one answer is simply to always start in Safe Mode, which of course means a variety of things are disabled or don’t work fully. Not a very good answer. :) I searched a bit further afield, and found some Microsoft notes that basically explained that the screen X-Y coordinates stored for PowerPoint may be corrupt (actually the notes were talking about Microsoft Word, not PowerPoint, but the explanation applies in either case). The support note suggests 3 different methods to fix the issue:

  1. Maximize the program
  2. Use the move feature to reposition the program into view
  3. Edit the registry

Obviously you should work through these from top to bottom, but I had already found that the first two did nothing in my situation, so I was left with editing the registry.

IMPORTANT: As always, as the Microsoft note says, “Before you modify the registry, make sure to back it up and make sure that you understand how to restore the registry if a problem occurs. For information about how to back up, restore, and edit the registry, click the following article number to view the article in the Microsoft Knowledge Base: 256986 Description of the Microsoft Windows Registry”. In other words, do the rest of this at your own risk!!!

The note that I found from Microsoft talked about removing a registry key that for me didn’t exist. Of course, it was somewhat dated and there may be a more up to date version that I hadn’t found. In any case, here’s how I fixed the problem:

  • Run regedit.exe to start the Registry Editor
  • Navigate through the tree to HKEY_CURRENT_USER\Software\Microsoft\Office\15.0\PowerPoint
  • Right-click on “Options” and select “Export” to save a backup in case I needed it (which in fact I did – see below for the reason)
  • Delete the “Options” key completely
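
If you prefer to do the backup and delete from a command prompt, something like the following should be equivalent – I haven’t tested it in exactly this form, and the path assumes Office 2013 (version 15.0):

REM back up the Options key first, so it can be merged back in later if needed
reg export "HKCU\Software\Microsoft\Office\15.0\PowerPoint\Options" pp_options_backup.reg

REM then delete the key so PowerPoint rebuilds its window position settings
reg delete "HKCU\Software\Microsoft\Office\15.0\PowerPoint\Options" /f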

Voila! PowerPoint now started and displayed on the screen as I expected.

NOTE: I might also have been able to simply delete the entries Bottom, Left, Right and Top in the Options key, but I didn’t try that directly. Maybe if someone else who runs into this situation can test that and leave a comment on this post, we can see whether that solution works as well. There were other values listed, such as PersonalTemplates, ToolbarConfigSaved and so on, that could have been useful to keep. I added those back by editing the export I had taken beforehand to remove the Bottom, Left, Right and Top entries, then opening the Registry Editor and importing them back by choosing File -> Import, which merged the entries from my backup into the Options key.

To patch or not to patch?

Mary Ann Davidson wrote a great piece on her security blog today, which basically talked about focusing on the important vulnerabilities, not necessarily the ones that get the most press. Added to that, the risk associated with a vulnerability may well be different for you compared to everyone else, depending on how your system is used. I agree with what she is saying, but I’m going to take a slightly different angle on the subject.

Over the years I’ve come across lots of different attitudes to database patching from management and DBAs. As more DBAs are now involved in looking after middle tier products like WebLogic, some of those attitudes to patching have moved into that world too. It seems to break down into three camps.

  1. We don’t need no stinkin’ patches. If this is the way you roll you are a dumb-ass!
  2. We only patch stuff that represents a vulnerability for us. This sounds kind-of sensible, but life can get very difficult trying to decide what constitutes a threat, especially when you have to consider all layers of the technology stack.
  3. We always apply all patches. This is logically simple, but you are going to apply a lot more patches, a lot more frequently, which is going to require a lot more testing.

Patching is not just about security.

  • Support of some products is tied into the patch version. We see this with Oracle products all the time. There are some important deadlines coming up soon! :)
  • The rest of the world is moving on around you. You might be happy with your unpatched product, but things might break because of external factors. If someone turns off SSLv3 on their application server, you aren’t doing HTTPS callouts from your database to it unless you patch up to 11.2.0.3 or later. Your applications will probably get browser compatibility issues on newer browsers and mobile devices unless you keep on top of patching your development frameworks.
  • Patches like the database PSUs come with extra functionality, including backports of features from newer releases (redaction – 11.2.0.4). They can also bring with them features that make future upgrades easier (transport database – 11.2.0.3 onward).

Choosing not to patch is not really an option these days. Your company has to understand this and allocate the correct amount of resource to getting it done. That might mean allocating more staff resources to patching and subsequent testing (rather than doing “productive” work), outsourcing the work where you can, or moving to cloud services where regular patching is part of the deal.

Cheers

Tim…



Pythian – Data Experts Blog 2015-03-26 15:21:45

Today I will be debating Oracle 12c’s In-Memory option with Maria Colgan of Oracle (aka optimizer lady, now In-Memory lady).

This will be in a debate form with lots of Q&A from the audience. Come ask the questions you always wanted to ask.

Link to register and attend:
https://attendee.gotowebinar.com/register/7874819190629618178

Starts at 12:00pm EDT.

12c MView refresh

Some time ago I wrote a blog note describing a hack for refreshing a large materialized view with minimum overhead by taking advantage of a single-partition partitioned table. This note describes how Oracle 12c now gives you an official way of doing something similar – the “out of place” refresh.

I’ll start by creating a materialized view and creating a couple of indexes on the resulting underlying table; then show you three different calls to refresh the view. The materialized view is based on all_objects so it can’t be made available for query rewrite (ORA-30354: Query rewrite not allowed on SYS relations), and I haven’t created any materialized view logs so there’s no question of fast refreshes – but all I intend to do here is show you the relative impact of a complete refresh.


create materialized view mv_objects nologging
build immediate
refresh on demand
as
select
        *
from
        all_objects
;

begin
	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'mv_objects',
		method_opt 	 => 'for all columns size 1'
	);
end;
/

create index mv_obj_i1 on mv_objects(object_name) nologging compress;
create index mv_obj_i2 on mv_objects(object_type, owner, data_object_id) nologging compress 2;

This was a default install of 12c, so there were about 85,000 rows in the view. You’ll notice that I’ve created all the objects as “nologging” – this will have an effect on the work done during some of the refreshes.

Here are the three variants I used – all declared explicitly as complete refreshes:


begin
	dbms_mview.refresh(
		list			=> 'MV_OBJECTS',
		method			=> 'C',
		atomic_refresh		=> true
	);
end;
/

begin
	dbms_mview.refresh(
		list			=> 'MV_OBJECTS',
		method			=> 'C',
		atomic_refresh		=> false
	);
end;
/

begin
	dbms_mview.refresh(
		list			=> 'MV_OBJECTS',
		method			=> 'C',
		atomic_refresh		=> false,
		out_of_place		=> true
	);
end;
/

The first one (atomic_refresh=>true) is the one you have to use if you want to refresh several materialized views simultaneously and keep them self-consistent, or if you want to ensure that the data doesn’t temporarily disappear if all you’re worried about is a single view. The refresh works by deleting all the rows from the materialized view then executing the definition to generate and insert the replacement rows before committing. This generates a lot of undo and redo – especially if you have indexes on the materialized view, as these have to be maintained “row by row” and may leave users accessing and applying a lot of undo for read-consistency purposes. An example at a recent client site refreshed a table of 6.5M rows with two indexes, taking about 10 minutes to refresh, generating 7GB of redo as it ran, and performing 350,000 “physical reads for flashback new”. This strategy does not take advantage of the nologging nature of the objects – and as a side effect of the delete/insert cycle you’re likely to see the indexes grow to roughly twice their optimal size and you may see the statistic “recursive aborts on index block reclamation” climbing as the indexes are maintained.

The second option (atomic_refresh => false) is quick and efficient – but may result in wrong results showing up in any code that references the materialized view (whether explicitly or by rewrite). The session truncates the underlying table, sets any indexes on it unusable, then reloads the table with an insert /*+ append */. The append means you get virtually no undo generated, and if the table is declared nologging you get virtually no redo. In my case, the session then dispatched two jobs to rebuild the two indexes – and since the indexes were declared nologging the rebuilds generated virtually no redo. (I could have declared them with pctfree 0, which would also have made them as small as possible).

The final option is the 12c variant – the setting atomic_refresh => false is mandatory if we want  out_of_place => true. With these settings the session will create a new table with a name of the form RV$xxxxxx where xxxxxx is the hexadecimal version of the new object id, insert the new data into that table (though not using the /*+ append */ hint), create the indexes on that table (again with names like RV$xxxxxx – where xxxxxx is the index’s object_id). Once the new data has been indexed Oracle will do some name-switching in the data dictionary (shades of exchange partition) to make the new version of the materialized view visible. A quirky detail of the process is that the initial create of the new table and the final drop of the old table don’t show up in the trace file, although the commands to drop and create indexes do appear. (The original table, though it’s dropped after the name switching, is not purged from the recyclebin.) The impact on undo and redo generation is significant – because the table is empty and has no indexes when the insert takes place the insert creates a lot less undo and redo than it would if the table had been emptied by a bulk delete – even though the insert is a normal insert and not an append; then the index creation honours my nologging definition, so produces very little redo. At the client site above, the redo generated dropped from 7GB to 200MB, and the time dropped to 200 seconds which was 99% CPU time.
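
As a quick sanity check of the recyclebin detail mentioned above, you can look in the recyclebin after an out-of-place refresh (assuming the recyclebin hasn’t been disabled) and you should find the previous copy of the container table sitting there:

select
        original_name, object_name, type, droptime
from
        user_recyclebin
order by
        droptime
;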

Limitations, traps, and opportunities

The manuals say that the out of place refresh can only be used for materialized views that are joins or aggregates and, surprisingly, you actually can’t use the method on a view that simply extracts a subset of rows and columns from a single table.  There’s a simple workaround, though – join the table to DUAL (or some other single-row table if you want to enable query rewrite).
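
So, as a hedged sketch (t1 is just a placeholder for your single-table source), the workaround looks like this:

create materialized view mv_t1
build immediate
refresh on demand
as
select
        t1.*
from
        t1,
        dual
;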

Because the out of place refresh does an ordinary insert into a new table the resulting table will have no statistics – you’ll have to add a call to gather them. (If you’ve previously been using non-atomic refreshes this won’t be a new problem, of course.) The indexes will have up to date statistics, of course, because they will have been created after the table insert.

The big opportunity, of course, is to change a very expensive atomic refresh into a much cheaper out of place refresh – in some special cases. My client had to use the atomic_refresh=>true option in 11g because they couldn’t afford to leave the table truncated (empty) for the few minutes it took to rebuild; but they might be okay using the out_of_place => true with atomic_refresh=>false in 12c because:

  • the period when something might break is brief
  • if something does go wrong the users won’t get wrong (silently missing) results, they’ll get an Oracle error (ORA-08103, probably)
  • the queries use this particular materialized view directly, and are all very quick and light-weight
  • most queries will probably run correctly even if they run through the moment of exchange

I don’t think we could guarantee that last statement – and Oracle Corp. may not officially confirm it, and it doesn’t matter how many times I show queries succeeding – but it does seem to be true. Thanks to “cross-DDL read-consistency”, as it was called in 8i when partition exchange appeared, and because the old objects still exist in the data files, a running query can keep on using the old location for an object after it has been replaced by a newer version, provided it doesn’t hit a block that has been overwritten by a new object or request a space management block that was zeroed out on the “drop”. If you want to make the mechanism as safe as possible you can help – put each relevant materialized view (along with its indexes) into its own tablespace so that the only thing that is going to overwrite an earlier version of the view is the stuff you create on the next refresh.

 


UKOUG System Event : I’ve got a paper selected.

I’ve just found out I’ve got a paper selected for the UKOUG System Event on May 20th. Check out my badge. :)

I was a spectator at last year’s event. At first glance you might think much of the content is not directly related to my job, since I’m not a system administrator or virtual infrastructure administrator, and I don’t use any Oracle engineered systems, appliances or storage products. Having said all that, it’s hard to be a DBA these days without having a finger in several pies. Most of the information discussed at these events is relevant, even if you are not using the exact same kit or doing the exact same job as the speaker.

Hope to see you there.

Cheers

Tim…



Oracle Linux 7.1 Preinstall Packages available for 11gR2 and 12cR1

Every so often I have a nose around the contents of the Oracle Linux public yum repositories and guess what I found in the OL7.1 base and OL7 latest repositories.

Yeeeeeaaaaahhhhhh!

The datestamps suggest they’ve been around since the 5th February, but I think these only became available with the release of OL7.1.

On the positive side, this means installations of 11g and 12c just got a whole lot easier on Oracle Linux 7. On the downside, I’ve got some minor rewrites to do. :)

Cheers

Tim…



Is Your Shellshocked Poodle Freaked Over Heartbleed?


Security weenies will understand that the above title is not as nonsensical as it appears. Would that it were mere nonsense. Instead, I suspect more than a few will read the title and their heads will throb, either because the readers hit themselves in the head, accompanied by the multicultural equivalents of “oy vey” (I’d go with “aloha ‘ino”), or because the above expression makes them reach for the most potent over-the-counter painkiller available.


For those who missed it, there was a sea change in security vulnerabilities reporting last year involving a number of mass panics around “named” vulnerabilities in commonly-used – and widely-used – embedded libraries. For example, the POODLE vulnerability (an acronym for Padding Oracle On Downgraded Legacy Encryption) affects SSL version 3.0, and many products and services using SSL version 3.0 use third party library implementations. The Shellshock vulnerabilities affect GNU bash, a program that multiple Unix-based systems use to execute command lines and command scripts. These vulnerabilities (and others) were widely publicized (the cutesie names helped) and resulted in a lot of scrambling to find, fix, and patch the vulnerabilities. The cumulative result of a number of named vulnerabilities last year in widely-used and deployed libraries I refer to as the Great Shellshocked Poodle With Heartbleed Security Awakening (GSPWHSA). It was a collective IT community eye opener as to:


  • The degree to which common third party components are embedded in many products and services
  • The degree to which vendors (and customers) did not know where-all these components actually were used, or what versions of them were used
  • And, to some degree (more on which below), the actual severity of these issues



A slight digression on how we got to a Shellshocked Poodle with Heartbleed. Way back in the olden days (when I started working at Oracle), the Internet hadn’t taken off yet, and there weren’t as many standard ways of doing things. The growth of the Internet led to the growth of standards (e.g., SSL, now superseded by TLS) so Stuff Would Work Together. The requirement for standards-based interoperability fostered the growth of common libraries (many of them open source), because everyone realized it was dumb to, say, build your own pipes when you could license someone else’s ready-made pipe libraries. Open source/third party libraries helped people build things faster that worked together, because everyone wasn’t building everything from scratch. None of these – standards, common libraries, open source – are bad things. They are (mostly) very good things that have fostered the innovation we now take for granted.



Historically, development organizations didn’t always keep careful track of where all the third party libraries were used, and didn’t necessarily upgrade them regularly. To some degree, the “not upgrade” was understandable – unless there is a compelling reason to move from Old Reliable to New and Improved (as in, they actually are improved and there is a benefit to using the new stuff), you might as well stick with Old and Reliable. Or so it seemed.


When security researchers began focusing on finding vulnerabilities in widely-used libraries, everyone got a rude awakening that their library of libraries (that is, listing of what components were used where) needed to be a whole lot better, because customers wanted to know very quickly the answer to “is the product or cloud service I am using vulnerable?” Moreover, many vendors and service providers realized that, like it or not, they needed to aggressively move to incorporate reasonably current (patched) versions of libraries because, if the third party component you embed is not supported for the life of the product or service you are embedding it in, you can’t get a security patch when you need one: in short, “you are screwed,” as we security experts say. I’ve remarked a lot recently, with some grumbling, that people don’t do themselves any favors by continuing to incorporate libraries older than the tablets of Moses (at least God is still supporting those).


Like all religious revivals, the GSPWHSA has thus resulted in a lot of people repenting of their sins: “Forgive me, release manager, for I have sinned, I have incorporated an out-of-support library in my code.” “Three Hail Marys and four version upgrades, my son…” Our code is collectively more holy now, we all hope, instead of continuing to be hole-y. (Yes, that was a vile pun.) This is a good thing.


The second aspect of the GSPWHSA is more disturbing, and that is, for lack of a better phrase, the “marketing of security vulnerabilities.” Anybody who knows anything about business knows how marketing can – and often intends to – amplify reality. Really, I am sure I can lose 20 pounds and find true love and happiness if I only use the right perfume: that’s why I bought the perfume! Just to get the disclaimer out of the way, no, this is not another instance of the Big Bad Vendor complaining about someone outing security vulnerabilities. What’s disturbing to me is the outright intent to create branding around security vulnerabilities and willful attempt to create a mass panic – dare we say “trending?” – around them regardless of the objective threat posed by the issue. A good example is the FREAK vulnerability (CVE-2015-0204). The fix for FREAK was distributed by OpenSSL on January 8th. It was largely ignored until early March when it was given the name FREAK.  Now, there are a lot of people FREAKing out about this relatively low risk vulnerability while largely ignoring unauthenticated, network, remote code execution vulnerabilities.

Here’s how it works. A researcher first finds a vulnerability in a widely-used library: the more widely-used, the better, since nobody cares about a vulnerability in Digital Buggy Whip version 1.0 that is, like, so two decades ago and hardly anybody uses. OpenSSL has been a popular target, because it is very widely used so you get researcher bragging rights and lots of free PR for finding another problem in it. Next, the researcher comes up with a catchy name. You get extra points for it being an acronym for the nature of the vulnerability, such as SUCKS – Security Undermining of Critical Key Systems. Then, you put up a website (more points for a cute animated creature dancing around and singing the SUCKS song). Add links so visitors can Order the T-shirt, Download the App, and Get a Free Bumper Sticker! Get a hash tag. Develop a Facebook page and ask your friends to Like your vulnerability. (I might be exaggerating, but not by much.) Now, sit back and wait for the uninformed public to regurgitate the headlines about “New Vulnerability SUCKS!” If you are a security researcher who dreamed up all the above, start planning your speaking engagements on how the world as we know it will end, because (wait for it), “Everything SUCKS.”

Now is where the astute reader is thinking, “but wait a minute, isn’t it really a good thing to publicize the need to fix a widely-embedded library that is vulnerable?” Generally speaking, yes. Unfortunately, most of the publicity around some of these security vulnerabilities is out of proportion to the actual criticality and exploitability of the issues, which leads to customer panic. Customer panic is a good thing – sorta – if the vulnerability is the equivalent of the RMS Titanic’s “vulnerability” as exploited by a malicious iceberg. It’s not a good thing if we are talking about a rowboat with a bad case of chipped paint. The panic leads to suboptimal resource allocation as code providers (vendors and open source communities) are – to a point – forced to respond to these issues based on the amount of press they are generating instead of how serious they really are. It also means there is other more valuable work that goes undone. (Wouldn’t most customers actually prefer that vendors fix security issues in severity order instead of based on “what’s trending?”). Lastly, it creates a shellshock effect with customers, who cannot effectively deal with a continuous string of exaggerated vulnerabilities that cause their management to apply patches as soon as possible or document that their environment is free of the bug.


The relevant metric around how fast you fix things should be objective threat. If something has a Common Vulnerability Scoring System (CVSS) Base Score of 10, then I am all for widely publicizing the issue (with, of course, the Common Vulnerabilities and Exposures (CVE) number, so people can read an actual description, rather than “run for your lives, Godzilla is stomping your code!”) If something is CVSS 2, I really don’t care that it has a cuter critter than Bambi as a mascot and generally customers shouldn’t, either. To summarize my concerns, the willful marketing of security vulnerabilities is worrisome for security professionals because:

  • It creates excessive focus on issues that are not necessarily truly critical
  • It creates grounds for confusion (as opposed to using CVEs)
  • It creates a significant support burden for organizations,* where resources would be better spent elsewhere

I would therefore, in the interests of constructive suggestions, recommend that customers assess the following criteria before calling all hands on deck over the next “branded” security vulnerability being marketed as the End of Life On Earth As We Know It:


1. Consider the source of the vulnerability information. There are some very good sites (arstechnica comes to mind) that have well-explained, readily understandable analyses of security issues. Obviously, the National Vulnerability Database (NVD) is also a great source of information.


2. Consider the actual severity of the bug (CVSS Base Score) and the exploitation scenario to determine “how bad is bad.”


3. Consider where the vulnerability exists, its implications, and whether mitigation controls exist in the environment: e.g., Heartbleed was CVSS 5.0, but the affected component (SSL), the nature of the information leakage (possible compromise of keys), and the lack of mitigation controls made it critical.


* e.g., businesses patching based on the level of hysteria rather than the level of threat


Organizations should look beyond cutesie vulnerability names so as to focus their attention where it matters most. Inquiring about the most recent medium-severity bugs will do less in terms of helping an organization secure its environment than, say, applying existing patches for higher severity issues. Furthermore, it fosters a culture of “security by documentation” where organizations seek to collect information about a given bug from their cloud and software providers, while failing to apply existing patches in their environment. Nobody is perfect, but if you are going to worry, worry about vulnerabilities based on How Bad Is Bad, and not based on which ones have catchy acronyms, mascots or have generated a lot of press coverage.





Try Oracle 12c VM with Delphix download


 photo by Jose Maria Cuellar (CC 2.0)

Thanks to Leighton Nelson, who pointed out that:

Oracle has a pre-installed Linux VM with 12c

Delphix also has a pre-installed trial version

Download both of these and start them up in VirtualBox and you can start virtualizing your PDBs.
The Oracle pre-installed VM has a few eccentricities that have to be addressed before using it: there is no IP address set up, there is no install inventory, and it’s missing a glibc library. After we address these in the instructions below you can link to the PDB and then provision that PDB back as a virtual PDB (vPDB), or provision that PDB to another 12c instance on another machine as a vPDB.

Here is a video of linking a PDB to Delphix and provisioning a vPDB.

Before doing the above with the Oracle pre-installed VM, follow these steps:

After importing and starting the VM, change the network to “Bridged Adapter” and restart the network services.
Get the IP address with “ifconfig -a” and edit /etc/hosts to add the hostname “oraclevm” with that IP address. Also increase /dev/shm, otherwise you will get “MEMORY_TARGET not supported on this system”.
su -
service network restart
# note the IP of the machine
ifconfig -a

# add hostname “oraclevm” and current IP to hosts
vi /etc/hosts
hostname oraclevm
echo oraclevm  >  /proc/sys/kernel/hostname

# umount tmpfs might give errors, but seems to do the trick anyway
umount tmpfs
mount -t tmpfs shmfs -o size=1500m /dev/shm

mkdir /mnt/provision
chmod 755 /mnt
chmod 755 /mnt/provision

yum install glibc.i686

yum might give the error:

Existing lock /var/run/yum.pid: another copy is running as PID ….

I waited a few minutes and was then able to run it. The Oracle VM must run some yum jobs just after starting up. On one attempt the running yum process never seemed to exit, so I rebooted the VM, and after the reboot I was able to run yum.

 

 

Back as the oracle user, unset TWO_TASK, as it blocks “/ as sysdba” connections, and get rid of the .bashrc output, as it messes up scp when Delphix tries to put the toolkit onto the box:

unset TWO_TASK
cp .bashrc .bashrc.orig
cat .bashrc.orig | grep -v cat > .bashrc

Set up the Delphix c##delphix container user and the delphix PDB user:

sqlplus / as sysdba << EOF
create user c##delphix identified by delphix;
grant create session to c##delphix;
alter user c##delphix set container_data=all container=current;
grant select any dictionary to c##delphix;
create or replace view v_x$kccfe as select * from x$kccfe;
grant select on v_x$kccfe to c##delphix;
create synonym c##delphix.x$kccfe for v_x$kccfe;
alter session set container=orcl;
create user delphix identified by delphix;
grant create session to delphix;
grant select any dictionary to delphix;
create or replace view v_x$kccfe as select * from x$kccfe;
grant select on v_x$kccfe to delphix;
create synonym delphix.x$kccfe for v_x$kccfe;
EOF

Make a toolkit directory:

mkdir ~/toolkit

Add the inventory directory and the inventory file with version info that Delphix uses

mkdir $ORACLE_HOME/inventory
mkdir $ORACLE_HOME/inventory/ContentsXML
cat << EOF >  $ORACLE_HOME/inventory/ContentsXML/comps.xml
<?xml version="1.0" standalone="yes" ?>
<PRD_LIST>
<TL_LIST>
<COMP NAME="oracle.server" VER="12.1.0.2.0" BUILD_NUMBER="0" REP_VER="0.0.0.0.0" RELEASE="Production" INV_LOC="Components/oracle.server/12.1.0.2.0/1/" LANGS="en" XML_INV_LOC="Components21/oracle.server/12.1.0.2.0/" ACT_INST_VER="12.1.0.2.0" DEINST_VER="11.2.0.0.0" INSTALL_TIME="2014.Sep.19 15:31:29 EDT" INST_LOC="/home/oracle/base/oracle12102/oracle.server">
</COMP>
</TL_LIST>
</PRD_LIST>
EOF

Put the database in archivelog mode:

sqlplus / as sysdba << EOF
shutdown immediate;
startup mount;
alter database archivelog;
alter database open;
alter system switch logfile;
alter system switch logfile;
EOF

Use the machine name in listener.ora and tnsnames.ora instead of the IP address 0.0.0.0:

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = oraclevm)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )

vi $ORACLE_HOME/network/admin/listener.ora
lsnrctl stop
lsnrctl start
sqlplus / as sysdba << EOF
alter system register;
EOF

Then in Delphix UI, go to

Manage -> Environments
click “+” sign
add host
after host is added click on the database tab  (far right disk icon)
on database tab for environment, click on “discover”
for login/password give  c##delphix / delphix  (the CDB login we created)
this will discover the PDBs
now click on Delphix or Manage-> Databases -> My databases
click “+” sign
select the orcl PDB on cdb1
for login/password give delphix / delphix (the PDB login we created)

 

After creating the vPDB it shows up in the listener:

lsnrctl status

Service "vorc_fd7" has 1 instance(s).
  Instance "cdb1", status READY, has 1 handler(s) for this service…

I add an entry to $ORACLE_HOME/network/admin/tnsnames.ora

DELPHIX =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oraclevm)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = vorc_fd7)
    )
  )

and then connect with SQL*Plus

sqlplus delphix/delphix@DELPHIX

 


photo by Yannis (CC 2.0)