Execution Plans

This is the index to a series of articles I’ve been writing for redgate, published on their AllThingsOracle site, about generating and reading execution plans. I’ve completed a few articles that haven’t yet been published, but I’ll add their URLs when they’re available.

I don’t really know how many parts it’s going to end up as – there’s an awful lot that you could say about reading execution plans, even when you’re trying to cover just the basics; every time I’ve started writing an episode in the series it’s turned into two episodes. I’ve delivered 10 parts to redgate so far; the active URLs below are the ones that are currently online.

Chapter 11 is about to be published, so I’ve popped this catalogue to the top of the stack. Episode 12 is written, but waiting for its final proofread.

 


Oracle database operating system memory allocation management for PGA – part 3: Oracle 11.2.0.4 and AMM: Quiz

This is a series of blogposts on how the Oracle database makes use of PGA. Earlier posts can be found here (PGA limiting for Oracle 12) and here (PGA limiting for Oracle 11.2).

Today a little Wednesday fun: a quiz.

What do you think will happen in the following situation (leave a response as comment please!):

  • Oracle Linux x86_64 6u6.
  • Oracle database 11.2.0.4 PSU 4.
  • Oracle database (single instance) with the following parameter set: memory_target=1G. No other memory-related parameters set.

Run the pga_filler script (which can be found here (PGA limiting for Oracle 12)), with grow_until set to 2100000000 (approximately 2.1G).
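For reference, a minimal sketch of the quiz setup (the instance name in the prompt is illustrative; enabling memory_target needs a bounce and a sufficiently large /dev/shm):

SYS@v11204 AS SYSDBA> alter system set memory_target=1g scope=spfile;

System altered.

SYS@v11204 AS SYSDBA> startup force;
SYS@v11204 AS SYSDBA> show parameter memory_target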

I’ll try to create a blogpost on the outcome and an explanation on short notice!


Tagged: amm, fun, memory, oracle, pga, quiz

New Adventures in Virtual Reality

Back in the early 90s I ventured into virtual reality and was sick for a whole day afterwards.

We have since learned that people become queasy when their visual systems and vestibular systems get out of sync. You have to get the visual response lag below a certain threshold. It’s a very challenging technical problem, which Oculus now claims to have cracked. With ever more sophisticated algorithms and ever faster processors, I think we can soon put this issue behind us.

Anticipating this, there has recently been a resurgence of interest in VR. Google’s Cardboard project (and Unity SDK for developers) makes it easy for anyone to turn their smartphone into a VR headset just by placing it into a cheap cardboard viewer. VR apps are also popping up for iPhones and 3D side-by-side videos are all over YouTube.

Some of my AppsLab colleagues are starting to experiment with VR again, so I thought I’d join the party. I bought a cheap cardboard viewer at a bookstore. It was a pain to put together, and my iPhone 5S rattles around in it, but it worked well enough to give me a taste.

I downloaded an app called Roller Coaster VR and had a wild ride. I could look all around while riding and even turn 180 degrees to ride backwards! To start the ride I stared intently at a wooden lever until it released the carriage.

My first usability note: between rides it’s easy to get turned around so that the lever is directly behind you. The first time I handed it to my wife she looked right and left but couldn’t find the lever at all. So this is a whole new kind of discoverability issue to think about as a designer.

Despite appearances, my roller coaster ride (and subsequent zombie hunt through a very convincing sewer) is research. We care about VR because it is an emerging interaction that will sooner or later have significant applications in the enterprise. VR is already being used to interact with molecules, tumors, and future buildings, use cases that really need all three dimensions. We can think of other use cases as well; Jake suggested training for service technicians (e.g. windmills) and accident re-creation for insurance adjusters.

That said, both Jake and I remain skeptical. There are many problems to work through before new technology like this can be adopted at an enterprise scale. Consider the idea of immersive virtual meetings. Workers from around the world, in home offices or multiple physical meeting rooms, could instantly meet all together in a single virtual room, chat naturally with each other, pick up subtle facial expressions, and even make presentations appear in mid air at the snap of a finger. This has been a holy grail for decades, and with Oculus being acquired by Facebook you might think the time has finally come.

Not quite yet.  There will be many problems to overcome first, not all of them technical.  In fact VR headsets may be the easiest part.

A few of the other technical problems:

  • Bandwidth.  I still can’t even demo simple animations in a web conference because the U.S.  internet system is too slow.  I could do it in Korea or Sweden or China or Singapore, but not here anytime soon.  Immersive VR will require even more bandwidth.
  • Cameras. If you want to see every subtle facial expression in the room, you’ll need cameras pointing at every face from every angle (or at least one 360 camera spinning in the center of the table). For those not in the room you’ll need more than just a web cam pointing at someone’s forehead, especially if you want to recreate them as a 3D avatar. (You’ll need better microphones too, which might turn out to be even harder.) This is technically possible now, Hollywood can do it, but it will be a while before it’s cheap, effortless, and ubiquitous.
  •  Choreography.  Movie directors make it look easy, and even as individuals we’re pretty good about scanning a crowded room and following a conversation.  But in a 3-dimensional meeting room full of 3-dimensional people there are many angles to choose from every second.  We will expect our virtual eyes to capture at least as much detail as our real eyes that instinctively turn to catch words and expressions before they happen.  Even if we accept that any given participant will see a limited subset of what the overall system can see, creating a satisfying immersive presence will require at least some artificial intelligence.  There are probably a lot of subtle challenges like this.

And a non-technical problem:

  • Privacy. Any virtual meeting which can be transmitted can also be recorded and viewed by others not in the meeting. This includes off-color remarks (now preserved for the ages or at least for future office parties), unflattering camera angles, surreptitious nose picking, etc. We’ve learned from our own research that people *love* the idea of watching other people but are often uncomfortable about being watched themselves. Many people are just plain camera shy – and even less fond of microphones. Some coworkers are uncomfortable with our team’s weekly video conferences. “Glasshole” is now a word in the dictionary – and glassholes sometimes get beaten up.

So for virtual meetings to happen on an enterprise scale, all of the above problems will have to be solved and some of our attitudes will have to change. We’ll have to find the right balance as a society – and the lawyers will have to sign off on it. This may take a while.

But that doesn’t mean our group won’t keep pushing the envelope (and riding a few virtual roller coasters).  We just have to balance our nerdish enthusiasm with a healthy dose of skepticism about the speed of enterprise innovation.

What are your thoughts about the viability of virtual reality in the enterprise? Your comments are always appreciated!

SLOB 2.2 Not Generating AWR reports? Testing Large User Counts With Think Time? Think Processes and SLOB_DEBUG.

I’ve gotten a lot of reports of folks branching out into SLOB 2.2 large user count testing with the SLOB 2.2 Think Time feature. I’m also getting reports that some of the same folks are not getting the resultant AWR reports one expects from a SLOB test.

If you are not getting your AWR reports, there is the old issue I blogged about here (click here). That old issue was related to a Red Hat bug. However, if you have addressed that problem and are still not getting your AWR reports from large user count testing, it might be something as simple as the processes initialization parameter. After all, most folks have been accustomed to generating massive amounts of physical I/O with SLOB at low session counts.
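As a quick sanity check before a big run, compare the processes parameter with the number of sessions you are asking SLOB to create (a sketch; the value 512 is just an example, size it above your planned session count plus background processes):

SQL> show parameter processes
SQL> alter system set processes=512 scope=spfile;
SQL> shutdown immediate
SQL> startup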

I’ve made a few changes to runit.sh that will help future folks should they fall prey to the simple processes initialization parameter folly. The fixes will go into SLOB 2.2.1.3. The following is a screen shot of these fixes and what one should expect to see in such a situation in the future. In the meantime, do take note of SLOB_DEBUG as mentioned in the screenshot:

 

[Screenshot: slob2.2-processes-folly]


Filed under: oracle

RMOUG Training Days 2015!

OK, Oracle EM hat off, RMOUG Training Days hat on! :)

So as many of you know, the RMOUG Board of Directors made a smart move after I joined Oracle: instead of losing a valuable member of the board, they moved me to being a non-voting board member emeritus and realized that I could still serve as the Training Days Conference Director. The conference is, by far, the most demanding position on the board and it’s a role that I relish and have the skills for. Oracle is happy. RMOUG is happy. Membership is happy. Training Days is taken care of… :)

This year I’m taking it up a notch and I wanted to talk about why RMOUG Training Days 2015 is the one conference you DON’T want to miss!

Project O.W.L.

The OWL is not just our mascot; it stands for Oracle Without Limits, and Project O.W.L. is a new event at Training Days that will offer the attendee some great, new opportunities to learn, to interact with those in the industry and to immerse in the technology we love. The event will be centered behind our great exhibition area and will have the following:

RAC Attack

RAC Attack will be back this year and better than ever! Learn all the ins and outs of an Oracle RAC by building one on your laptop! Experts will be on hand from the ACE and Oracle community to help you with your questions and make you a RAC Attack ninja!

Clone Attack

Delphix is bringing Clone Attack for those who want to find out how quickly you can provision environments! Find out how much space and time can be saved and do it all on a VM on your laptop!

Oracle Engineered System and Hardware Demo

Want to get up close and personal with some great Oracle hardware? You’ll get the chance at Project O.W.L. Oracle is going to be bringing some of the newest, coolest appliances and engineered systems so you can find out just how cool they really are!

Stump the Chump

Have a real technical conundrum? Want to see if you have the tech question that our experts can’t answer? We’ll have opportunities to ask the experts your tough questions and if they can’t get you an answer, you’ll get an “I stumped the chump” button to wear proudly at the conference!

New Attendee Recommendation Initiative

I was first introduced to Training Days by the recommendation of a Senior DBA I worked with back in 2004. There is nothing more valuable than word of mouth and we are going to reward that at TD2015. If you recommend someone new to Training Days and they list your name on their registration form as the one that recommended them, we’ll reward you with a $25 Amazon gift card after the conference!

Special Interest Meetup Lunch

Our SIGs are an important part of our membership.  Show your support the first day by taking your box lunch and sitting in on one of the SIG meetups!  There are so many special interest groups to be a part of, so find out what you’ve been missing out on!

  • Hyperion
  • Big Data and Cloud
  • Enterprise Manager
  • Higher Education
  • Database 12c
  • APEX
  • and others!

ACE Lunches

We’re bringing back our ACE lunches both days again!  If you aren’t in on a SIG lunch, sit with your favorite ACE, ACE Associate or ACE Director and find out what got them where they are in the Oracle community.  Talk tech with the best in the industry!

Deep Dive and Hands on Lab

For those with a full registration pass (we have single-day passes for those that can’t get away for the full conference…) the first half day is our gift to you! From 1-5:15pm on the 17th, you will get to immerse yourself in hands-on labs and deep dives from the best of the best in Oracle database, development (ADF and APEX) and even one of my favorites, Enterprise Manager Database as a Service! These are first-come, first-served sessions that day, (until we hit the room capacity limit, trust me, you do not want to tick off the fire marshal at the convention center… :)) so get there early and get the most out of your full registration!

Two Full Days, 100 Sessions!

Yes, you heard me right: two days, 100 sessions, 9 tracks! We have Steven Feuerstein, Jeff Smith, Iggy Fernandez, Kyle Hailey, David Peake, Scott Spendolini, Alex Gorbachev, Graham Wood, Bryn Llewellyn, Carlos Sierra and John King. New speakers this year for RMOUG (we work very hard to introduce new speakers into our schedule…) include Bjoern Rost, Rene Antunez, Werner De Gruyter, (Yoda!) and Wayne Van Sluys.

Professional Development and More!

I will be heading up our ever-popular Women in Technology series on the first full day again this year! The panel is starting to form and I look forward to everyone who attends getting the most out of the session and furthering their love of their tech career. This session is not just for women, but for fathers of daughters, husbands who want more for their wives, managers of women employees and even those that are hoping to hire more diversity into their departments!

Jeff Smith and I will be doing another year of Social Media for the Database Professional!  Come learn HOW to do Social Media instead of just the WHY.  We’ll teach you how to automate, find the easy button for social media and how to find your social media style to make it work for you and your career.

The schedule is set up so that there is something impressive all day every day for the 2015 conference and I’m excited to share it with everyone!  Registration is open, so don’t miss out on what is going to be the best Training Days yet!

 






Oracle database operating system memory allocation management for PGA – part 2: Oracle 11.2

This is the second part of a series of blogposts on Oracle database PGA usage. See the first part here. The first part described SGA and PGA usage, their distinction (SGA being static, PGA being variable), the problem (no limitation for PGA allocations outside of sort, hash and bitmap memory), a resolution for Oracle 12 (PGA_AGGREGATE_LIMIT), and some specifics about that (it doesn’t look like a very hard limit).

But this leaves out Oracle version 11.2. In reality, the vast majority of the databases that I deal with at the time of writing are at version 11.2, and my guess is that this is not just the databases I deal with, but a general tendency. This could change in the coming time with the desupport of Oracle 11.2; however, I suspect the installed base of Oracle version 12 will increase gradually and smoothly instead of in a big bang.

With version 11.2 there’s no PGA_AGGREGATE_LIMIT. This simply means there is no official way to limit the PGA. Full stop. However, there is an undocumented event to limit PGA usage: event 10261. This means that if you want to use this in a production database, you should ask Oracle Support to bless the usage of it. On the other hand, Oracle Corporation made this event public in an official white paper: Exadata consolidation best practices.

Let’s test event 10261! I’ve got the same table (T2) set up; a description of how to set it up and the anonymous PL/SQL code to allocate PGA using a collection can be found in the first part. I am using a database version 11.2.0.4 with PSU 4 applied. The reason for choosing this version is that if you run a serious business on Oracle 11.2, THAT is the version you should be running!
(disclaimer: everything shown in this blogpost is purely for educational purposes. Do test everything thoroughly before applying this to a production system. Behaviour can or may be different in your specific situation)
The reason for this disclaimer: Bernhard (@bdcbuning_gridit) tweeted that he was warned that setting it at the instance level could crash the instance. I am not sure whether that refers to setting it at runtime or in the spfile, but either way: this event is always evaluated at the instance level.

Okay, let’s replicate, more or less, the test done on Oracle version 12.1.0.2 in the first part. In this database PGA_AGGREGATE_TARGET is set to 500M, now let’s try to set the event to 600M, which means we set the PGA limit to 600M.
This is setting the event at runtime:

SYS@v11204 AS SYSDBA> alter system set events = '10261 trace name context forever, level 600000';

System altered.

This is setting the event in the spfile (which means you need a restart of the instance to activate this event, or use the above syntax to set it at runtime):

SYS@v11204 AS SYSDBA> alter system set event = '10261 trace name context forever, level 600000' scope=spfile;

System altered.

The level is the amount of memory to which the PGA must be limited, in kilobytes.
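For completeness, the same event syntax can be used to switch the limit off again at runtime (a sketch; test this in your own environment first):

SYS@v11204 AS SYSDBA> alter system set events = '10261 trace name context off';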

Now start the anonymous PL/SQL block to fill up the PGA with a collection, again set to 900M:

TS@v11204 > @pga_filler
declare
*
ERROR at line 1:
ORA-10260: limit size (600000) of the PGA heap set by event 10261 exceeded
ORA-06512: at line 20

That’s nice! There’s actually a meaningful, descriptive error message which explains why this PL/SQL block ended!

Let’s look at the actual PGA memory used, as reported by v$pgastat:

SYS@v11204 AS SYSDBA> select value/power(1024,2) from v$pgastat where name = 'maximum PGA allocated';

VALUE/POWER(1024,2)
-------------------
	 676.078125

This is different from setting PGA_AGGREGATE_LIMIT: there’s still more memory allocated than set as the limit (600000KB), but less (676M in 11.2.0.4 versus 1041M in 12.1.0.2). The outside visibility of the limiting happening is different too: there is NO notice of a process hitting the PGA limit in either the alert.log file or the process’ trace file(!). Another difference is that even SYS is limited: a test with the procedure running as SYS got me the ORA-10260 too, whereas PGA_AGGREGATE_LIMIT does not limit SYS.

Event 10261 has got the same description down to at least version 11.2.0.1. Here’s a test with event 10261 set to 600M at version 11.2.0.3:

TS@v11203 > @pga_filler
declare
*
ERROR at line 1:
ORA-00600: internal error code, arguments: [723], [123552], [top uga heap], [], [], [], [], [], [], [], [], []
ORA-06512: at line 20

As has been detailed in the Oracle white paper, prior to version 11.2.0.4, an ORA-600 [723] is signalled when event 10261 is set and more PGA memory is allocated than has been specified as the limit. The amount of total allocated PGA is 677M, so roughly the same as with version 11.2.0.4.

Because this is a genuine ORA-600 (internal error, ‘OERI’), this gives messages in the alert.log file:

Tue Dec 16 10:40:09 2014
Errors in file /u01/app/oracle/diag/rdbms/v11203/v11203/trace/v11203_ora_8963.trc  (incident=9279):
ORA-00600: internal error code, arguments: [723], [123552], [top uga heap], [], [], [], [], [], [], [], [], []
Incident details in: /u01/app/oracle/diag/rdbms/v11203/v11203/incident/incdir_9279/v11203_ora_8963_i9279.trc
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.

The process’ trace file in the trace directory only points to the incident file, no further details are available there.
The incident trace file contains a complete diagnostics dump.

The behaviour is identical with Oracle 11.2.0.2.

Summary
The limiting of the total amount of PGA memory used must be done using an undocumented event prior to Oracle version 12. The event is 10261. The event is made known in an official white paper. Still, I would open a service request with Oracle to ask for its blessing before setting this. That does not mean this functionality is not needed; I would deem it highly important in almost any environment, even when running a single database: this setting, when done appropriately, protects your system from over-allocating memory, which could mean entering the swapping death spiral. The protection means a process gets an ORA message, and the PGA allocation is aborted and deallocated.

With version 11.2.0.4, hitting the limit as set with event 10261 is not reported anywhere, apart from the process getting the ORA-10260.

With versions prior to 11.2.0.4 (11.2.0.3 and 11.2.0.2 verified) processes do get an ORA-600 [723], which is also visible in the alert.log, and incidents are created accordingly.

When a limit has been set using event 10261, it still means more memory is allocated than set as the limit (approximately 677M when 600M is set), but this is way less than with PGA_AGGREGATE_LIMIT (1041M when 600M is set) in my specific situation. Test this in your own environment when you start using this.

Important addendum:
A very good comment from Alexander Sidorov emphasises the behaviour of using/setting event 10261: this event sets a limit per process, not for the entire instance!! (tested with 11.2.0.4 and 11.2.0.3)
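Because the limit is per process, it can be useful to watch the PGA figures per process rather than only the instance-wide v$pgastat numbers. This is a sketch of such a query (the v$process PGA columns are standard; the join and formatting are illustrative):

select p.spid, s.username,
       round(p.pga_used_mem/power(1024,2)) used_mb,
       round(p.pga_max_mem/power(1024,2))  max_mb
from   v$process p, v$session s
where  p.addr = s.paddr
order by p.pga_max_mem desc;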


Tagged: consolidation, event, internals, oracle, performance, pga_aggregate_target, tuning

Oracle database operating system memory allocation management for PGA

This post is about memory management on the operating system level of an Oracle database. The first question that might pop into your head is: isn’t this a solved problem? The answer is: yes, if you use Oracle’s AMM (Automatic Memory Management) feature, which lets you set a limit for the Oracle database’s two main memory areas: SGA and PGA. But in my opinion any serious, real-life usage of an Oracle database on Linux will be (severely) constrained in performance because of the lack of huge pages with AMM, and I personally witnessed very strange behaviour and process deaths with the AMM feature and high demand for memory.

This means that I strongly advise customers to use Oracle’s ASMM (Automatic Shared Memory Management) feature. In the newer versions of 11.2 I found this to be working very well. Earlier versions like 10.2 could suffer from an ever growing shared pool (which also means an ever shrinking buffer cache), especially when bind variables weren’t used. This still could happen, but it seems the SGA memory management feature in 11.2 handles this well in most cases. The ASMM feature means a fixed memory area is allocated for the SGA. SGA allocation has always been fixed outside of the AMM feature, as far as I know.
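To make that advice concrete, this is a minimal ASMM sketch (the sizes are illustrative, and configuring huge pages is a separate, operating system level exercise):

SYS@v11204 AS SYSDBA> alter system set memory_target=0 scope=spfile;
SYS@v11204 AS SYSDBA> alter system set sga_target=2g scope=spfile;
SYS@v11204 AS SYSDBA> alter system set pga_aggregate_target=500m scope=spfile;
SYS@v11204 AS SYSDBA> startup force;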

When ASMM doesn’t work, meaning the memory areas are getting sized wrongly and performance is influenced by that, the last option is to size the memory areas yourself. However, since version 11.2.0.2 Oracle will resize when the memory manager thinks it’s feasible. See Kurt van Meerbeek’s article about that.

That leaves the PGA (Process Global Area) as a memory area on its own. Most databases are using automatic PGA memory management, which is enabled once the PGA_AGGREGATE_TARGET parameter is set to a non-zero value. A common misunderstanding is that this setting actually limits the overall PGA usage of an instance. The truth is that automatic PGA memory management will make attempts to adhere to the PGA_AGGREGATE_TARGET value. These are the actual words in the official Oracle documentation: ‘attempts to adhere’!

This means sort memory, hash memory and bitmap memory will be actively limited in size per process by automatic PGA memory management; any attempt to allocate more than automatic PGA memory management allows will result in moving some contents of these memory areas to the assigned temporary tablespace of the database user, to make room for new data.
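This spilling is visible in the instance statistics. A quick way to see whether work areas have been running fully in memory or spilling to disk is to look at the workarea execution statistics (a sketch; ‘optimal’ means fully in memory, ‘onepass’ and ‘multipass’ mean the temporary tablespace was needed):

select name, value
from   v$sysstat
where  name like 'workarea executions%';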

However, there are more memory areas allocatable per process, which are never swapped to disk, thus always stay in memory, and these could not be limited in an officially supported way prior to Oracle version 12. Two structures which are allocated in PGA and never swapped to disk are PL/SQL collections and PL/SQL tables. Creating and filling these requires the usage of PL/SQL (hence their names); the reason for mentioning this is that if your database is not used by PL/SQL but only SQL, you almost certainly will not run into the problem I describe below.

You might be thinking: wait a minute! Does this mean a developer can just create such a structure, and allocate whatever he/she likes, with all the consequences that it can have, like the operating system starting to swap, and can do that for every single process? Yes, this is what this means. This is why Oracle introduced a parameter called PGA_AGGREGATE_LIMIT with Oracle 12, to effectively limit the overall PGA heap size.

In case you wonder what this means, or even doubt my words, I have written a little program to demonstrate this behaviour.

This is the source code to create my test table T2:

exec dbms_random.seed('abracadabra');
create table t2
as
with generator as (
    select      rownum      id
    from        dual
    connect by
                rownum <= 1000
)
select
    rownum                                                id,
    trunc((rownum-1)/50)                            clustered,
    mod(rownum,20000)                               scattered,
    trunc(dbms_random.value(0,20000))               randomized,
    trunc(sysdate) + dbms_random.value(-180, 180)   random_date,
    dbms_random.string('l',6)                       random_string,
    lpad(rownum,10,0)                               vc_small,
    rpad('x',100,'x')                               vc_padding
from
    generator   g1,
    generator   g2
where
    rownum <= 1000000
;
exec dbms_stats.gather_table_stats(null,'T2');

This is a very smart way to generate a table. I actually borrowed this from Jonathan Lewis.

Next up, I created a small anonymous PL/SQL block to take the contents from the T2 table, and store them in a collection until I hit the limit in the variable ‘grow_until’.

declare
	type sourcetab is table of t2%ROWTYPE;
	c_tmp		sourcetab;
	c_def		sourcetab	:= sourcetab();
	v_b_p		number		:= 0;
	v_c_p		number		:= 0;
	v_b_u		number		:= 0;
	v_c_u		number		:= 0;
	grow_until	number		:= 700000000;
	p_a_t		number;
begin
	select value into v_b_p from v$mystat m, v$statname n where m.statistic#=n.statistic# and name = 'session pga memory max';
	select value into v_b_u from v$mystat m, v$statname n where m.statistic#=n.statistic# and name = 'session uga memory max';
	select value into p_a_t from v$parameter where name = 'pga_aggregate_target';
	select * bulk collect into c_tmp from t2;
	while v_c_p < grow_until loop
		for c in c_tmp.first .. c_tmp.last loop
			c_def.extend(1);
			c_def(c_def.last) := c_tmp(c);
			select value into v_c_p from v$mystat m, v$statname n where m.statistic#=n.statistic# and name = 'session pga memory max';
			select value into v_c_u from v$mystat m, v$statname n where m.statistic#=n.statistic# and name = 'session uga memory max';
			if v_c_p >= grow_until then
				exit;
			end if;
		end loop;
	end loop;
	dbms_output.put_line('vbp : '||v_b_p);
	dbms_output.put_line('vcp : '||v_c_p);
	dbms_output.put_line('vbu : '||v_b_u);
	dbms_output.put_line('vcu : '||v_c_u);
	dbms_output.put_line('pat : '||p_a_t);
end;
/

Please mind the session needs to have create session and create table granted, enough quota in the default tablespace, and select on v_$mystat, v_$parameter and v_$statname granted.
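A minimal sketch of such a setup (user name, password and tablespace are illustrative; note the grants must be done on the v_$ views owned by SYS, not on the v$ synonyms):

create user ts identified by ts default tablespace users quota unlimited on users;
grant create session, create table to ts;
grant select on v_$mystat    to ts;
grant select on v_$statname  to ts;
grant select on v_$parameter to ts;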

This is run on an Oracle 12.1.0.2 database:

TS@v12102 > @pga_filler
vbp : 3535368
vcp : 700051976
vbu : 1103192
vcu : 4755704
pat : 524288000

PL/SQL procedure successfully completed.

The beginning sizes of the UGA (vbu) and PGA (vbp) are 1’103’192 and 3’535’368. The PGA_AGGREGATE_TARGET size is set to 524’288’000 (500MB). I did set the grow_until variable to 700’000’000 (roughly 700MB), which is more than PGA_AGGREGATE_TARGET. After running this, it’s easy to spot the values of vcu (UGA allocation) and vcp (PGA allocation). vcu grew to 4’755’704 during the run, however vcp grew to 700’051’976, a little more than 700MB! This shows that the collection is stored in the PGA, and that the collection grew beyond the value set with PGA_AGGREGATE_TARGET.

This behaviour is consistent in versions 12.1.0.1, 11.2.0.4, 11.2.0.3, 11.2.0.2 and 11.2.0.1.

Let me emphasise once again that the above proof of concept code managed to allocate more memory than was set for the overall PGA usage of the entire instance. This can have an enormous, devastating impact on a consolidated database setup (meaning having multiple instances running on a single machine). Typically, once memory consumption of all the processes exceeds physically available memory, the operating system tries to use the swap device, to which it will swap memory pages in and out depending on memory usage of active (=on CPU) processes. Mild swapping shows as severely slowed-down processing (because a number of memory pages for processing need to be read from the swap device and placed in memory, from which the former contents need to be written to the swap device); heavy swapping shows as the machine coming to a standstill.

Please mind that a diagnosis on the state of memory usage (alias swapping), just by looking at the amount of used swap (as can be seen in the ‘top’ output, or ‘swapon -s’), could be misleading. It’s also important to look at actual swapping in and out, as can be seen with ‘vmstat 1’ (si/so columns) or ‘sar -W’. I’ve found several systems which had been running for some time (approximately longer than a month) that had swap usage, sometimes up to 40%, while no ‘active swapping’, so memory pages being transferred to and from the swap device, was happening.

Luckily, starting with Oracle 12 you can actually limit overall PGA usage using the parameter PGA_AGGREGATE_LIMIT. The default value is the greatest of (list from the Oracle documentation):
a) 2GB
b) 200% of PGA_AGGREGATE_TARGET parameter (or lower if 200% > (90% of physical memory – total SGA size) but not below 100%)
c) 3MB * PROCESSES parameter
The parameter cannot be set below its default value, except when set in a pfile or spfile.
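As a worked example (the figures are hypothetical): with PGA_AGGREGATE_TARGET=500M and PROCESSES=300, the three candidates are 2GB, 200% * 500M = 1000M and 3MB * 300 = 900M, so the default resolves to 2GB. The effective value can be checked with:

SYS@v12102 AS SYSDBA> select value/power(1024,2) "MB" from v$parameter where name = 'pga_aggregate_limit';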

Let’s set the PGA_AGGREGATE_LIMIT to 600MB and see what happens when we start doing a large allocation again:

SQL> alter system set pga_aggregate_limit=600m scope=spfile;

System altered.

SQL> startup force;

Okay, let’s run the pga_filler.sql script again, and try to allocate 900MB. This means the “grow_until” variable must be set to 900000000.
PLEASE MIND this is done as a regular user; the SYS user and background processes other than job queue processes are not subject to the limiting.

TS@v12102 > @pga_filler
declare
*
ERROR at line 1:
ORA-01423: error encountered while checking for extra rows in exact fetch
ORA-00039: error during periodic action
ORA-04036: PGA memory used by the instance exceeds PGA_AGGREGATE_LIMIT
ORA-06512: at line 21

Great! Exactly like we expect, right?
Well…yes, but let’s look at the alert.log

Sat Dec 13 15:08:57 2014
Errors in file /u01/app/oracle/diag/rdbms/v12102/v12102/trace/v12102_ora_4147.trc  (incident=46599):
ORA-04036: PGA memory used by the instance exceeds PGA_AGGREGATE_LIMIT
Incident details in: /u01/app/oracle/diag/rdbms/v12102/v12102/incident/incdir_46599/v12102_ora_4147_i46599.trc
Sat Dec 13 15:09:07 2014
Dumping diagnostic data in directory=[cdmp_20141213150907], requested by (instance=1, osid=4147), summary=[incident=46599].
Sat Dec 13 15:09:09 2014
Sweep [inc][46599]: completed
Sweep [inc2][46599]: completed

Okay, essentially, this tells us nothing interesting, except for the tracefile. Let’s look in /u01/app/oracle/diag/rdbms/v12102/v12102/trace/v12102_ora_4147.trc, being the tracefile as indicated in the above alert.log snippet:

*** 2014-12-13 15:08:57.351
Process may have gone over pga_aggregate_limit
Just allocated 65536 bytes
Dumping short stack in preparation for potential ORA-4036
----- Abridged Call Stack Trace -----
ksedsts()+244<-ksm_pga_limit_short_stack()+1016<-ksm_check_over_limit()+469<-ksmarfg()+574<-kghgex()+1376<-kghfnd()+361<-kghalo()+4422<-kghgex()+414<-kghfnd()+361<-kghalo()+4422<-kghgex()+414<-kghfnd()+361<-kghalo()+4422<-kghgex()+414<-kghalf()+1003<-klmalf()+103
<-kllcqas()+194<-kcblasm()+108<-kxhfNewBuffer()+607<-qerhjSplitBuild()+632
----- End of Abridged Call Stack Trace -----
=======================================
PRIVATE MEMORY SUMMARY FOR THIS PROCESS
---------------------------------------
******************************************************
PRIVATE HEAP SUMMARY DUMP
781 MB total:
   781 MB commented, 646 KB permanent
   208 KB free (0 KB in empty extents),
     779 MB,   2 heaps:   "koh-kghu call  "            57 KB free held
------------------------------------------------------
Summary of subheaps at depth 1
779 MB total:
   778 MB commented, 110 KB permanent
    63 KB free (0 KB in empty extents),
     667 MB, 42786 chunks:  "pmuccst: adt/record       "
      83 MB, 5333 chunks:  "pl/sql vc2                "

Actually, this is the end of the tracefile. It seems that the pga limit dump (the text from “Process may have gone over pga_aggregate_limit” to the private memory summary heap dumps) occurs several times before an actual ORA-4036 is triggered. In my private test instance, where I am obviously the only user process doing something, I get a pga limit dump approximately 20 times before the ORA-4036 is actually triggered:

sending 4036 interrupt
Incident 46599 created, dump file: /u01/app/oracle/diag/rdbms/v12102/v12102/incident/incdir_46599/v12102_ora_4147_i46599.trc
ORA-04036: PGA memory used by the instance exceeds PGA_AGGREGATE_LIMIT

Did you actually spot the oddity here?

Remember the PGA_AGGREGATE_LIMIT was set to 600M. Now look at the process’ PGA/Private heap summary dump above: it says 781M. Please mind the 781M is the PGA heap of a SINGLE process! When looking at the total PGA allocated for the entire instance, it’s even more:

SYS@v12102 AS SYSDBA> select value/power(1024,2) "MB" from v$pgastat where name = 'maximum PGA allocated';

	MB
----------
1041.16699

So… despite PGA_AGGREGATE_LIMIT being set to 600M, according to the v$pgastat view there’s 1041MB allocated for PGA. Please mind I haven’t looked into how accurate v$pgastat is, but I tend to believe it.

Summary.
I’ve seen PGA_AGGREGATE_TARGET being used as a calculation value for actual PGA usage of an instance. This is simply wrong. The actual amount of PGA memory allocated by the instance is highly dependent on what is done, and can be less than PGA_AGGREGATE_TARGET, or more. Automatic PGA memory management can control three per-process memory areas: the sort, hash and bitmap memory areas. These are sized based on the setting of PGA_AGGREGATE_TARGET and the actual PGA memory usage instance wide. If more memory is needed for sort, hash or bitmap memory than is made available by the memory manager, the excess is allocated in the temporary tablespace. Any other PGA memory allocation is always done, regardless of the setting of PGA_AGGREGATE_TARGET.

Starting with Oracle 12, it seems the actual PGA allocation can now actually be limited with the new parameter PGA_AGGREGATE_LIMIT. However, some simple testing shows that more memory is allocated than set with PGA_AGGREGATE_LIMIT as the limit. I haven’t tested it in more situations; this post is meant to grow awareness that the actual limit as set by PGA_AGGREGATE_LIMIT might not be as hard as you would expect.

Please mind, PGA_AGGREGATE_LIMIT seems to truly limit PGA usage instance wide, not limit the PGA heap per process, as event 10261 (the PGA limiting method for Oracle 11.2) does. However, once again: PGA_AGGREGATE_LIMIT seems to try to be smart and actually does not limit at the exact size set, but beyond that.

The next post will introduce a way to limit PGA usage in Oracle 11.2. Stay tuned!


Tagged: consolidation, database, exadata, linux, memory usage, oracle, pga, pga_aggregate_limit, pga_aggregate_target, swap, swapping, uga

Watch: Hadoop vs. Riak

Every data platform has its value, and deciding which one will work best for your big data objectives can be tricky. Alex Gorbachev, Oracle ACE Director, Cloudera Champion of Big Data, and Chief Technology Officer at Pythian, has recorded a series of videos comparing the various big data platforms and presents use cases to help you identify which ones will best suit your needs.

“Riak and Hadoop are quite different data platforms,” Alex says. “Hadoop is actually the system that would process the data that Riak is collecting.” Learn how the two systems are complementary rather than competitive by watching Alex’s video Hadoop vs. Riak.

Note: You may recognize this series, which was originally filmed back in 2013. After receiving feedback from our viewers that the content was great, but the video and sound quality were poor, we listened and re-shot the series.

Find the rest of the series here

 


Pythian is a global leader in data consulting and managed services. We specialize in optimizing and managing mission-critical data systems, combining the world’s leading data experts with advanced, secure service delivery. Learn more about Pythian’s Big Data expertise.

Develop against the versions in the Yum repositories!

I saw a tweet by Simon Haslam at the weekend that read,

“I’m v surprised to read that @Atlassian Stash needs newer git than in the latest RHEL/OL6 repos!”

I had a similar experience at the weekend too, but this time with Oracle REST Data Services (ORDS) 3.0 EA. According to the documentation, it requires the following version of Tomcat.

“Apache Tomcat Release 7.0.56 or later”

So let’s have a look at what is typically available from the Yum repositories at the time of writing.

  • RHEL6/OL6 : tomcat6-6.0.24-80.el6.x86_64.rpm
  • EPEL 6 : tomcat-7.0.33-4.el6.noarch.rpm
  • RHEL7/OL7 : tomcat-7.0.42-8.el7_0.noarch.rpm

Realistically, nobody is moving to RHEL7/OL7 for Oracle products yet, since the only database version certified on them is Oracle 12c (12.1.0.2 onward), so the vast majority of kit out there will be running RHEL6/OL6. With that in mind, the requirement for “Apache Tomcat Release 7.0.56 or later” represents a bit of a support nightmare. As soon as people require a piece of software on a server that is not available from a default (or similar) Yum repository, the product is outside the typical sys admin patching cycle and someone has to make a specific effort to keep the custom installation up to date. You know what that means, right? Yeah, it never gets patched… It’s not that installing this stuff is difficult. It’s just the impact it has on sys admin teams that is the problem.

We all want to use the latest and greatest, but there seems to be a disconnect between what the developers do and what is reasonably supportable out here in the real world. Developing stuff against product versions that are not in Yum repos is a mistake IMHO.

I’m going to play around with ORDS on older versions of Tomcat to see what happens…

Cheers

Tim…



Day 3+ of UKOUG Tech 14

Sorry it’s taken a couple of days to finish off this series of blog posts on UKOUG Tech 14. I had expected to get some time on the trip back to write this, but as you’ll see if you keep reading, that really didn’t happen. So let’s go back to Day 3 of the conference.

I started the day by attending Patrick Hurley’s “Adventures in Database Administration” session. This was quite different to the normal sorts of sessions you get at a conference. In fact, it was more like a roundtable session. For a start, Patrick did the whole presentation without a single slide, quite a refreshing difference! Basically he covered his career from the days when he became a DBA (with a brief nod to his development days prior to that). He also made the session much more interactive, getting feedback from attendees on when they had similar experiences. So while I didn’t learn anything new about the latest and greatest product, or deep technical details about something I probably will never be able to apply in my working life, I found this to be quite an enjoyable session overall.

Just before lunch, there was a session on “Rapid Database Cloning using SMU (Snap Management Utility) and ZFSSA”, by Jacco Landlust from Oracle. I think this session was a late add-on, as it wasn’t in the printed agenda, just on the agenda boards and in the mobile app. Unfortunately for Jacco, I was the only attendee for that session, but I was quite interested to see how SMU worked and where it stands relative to Snap Clone (part of the EM product suite that also does rapid database cloning). SMU is specific to ZFS, whereas Snap Clone is storage agnostic (it provides both hardware and software options, including CloneDB – see my earlier posts here and here for more details on Snap Clone). SMU is also a much more standalone product, resulting in differences in clustering, security, auditing and logging. However, SMU does provide its much more limited functionality at a lower price, so for smaller sites it may be a viable alternative for cloning. I found Jacco’s session quite interesting as well, since I hadn’t been aware of SMU before. As I mentioned in my earlier post on Day 2 of the conference, there is also cloning functionality in ODA; Oracle has quite a range of products to perform cloning, depending on your hardware and other requirements, so it’s great to see the full range of capabilities in the different Oracle products.

The last session of the day is always a hard slot for a presenter to end up with, even more so when it’s the last day of the conference! This time, there were quite a few sessions I would have liked to get to, ranging from “Oracle Multitenant: Oracle’s Current and Future Architecture” by Dominic Giles, “Deep Dive into ASH and AWR in EM12c and Beyond” by Kellyn Pot’Vin-Gorman, “Database as a Service on the Oracle Database Appliance Platform” by Marc Fielding and Maris Elsins, “Optimizing and Simplifying Complex SQL with Advanced Grouping” by Jared Still, and the “Women in IT” roundtable. I’d seen Kellyn’s presentation before, so that made it easy to cross that one off the list (sorry, Kellyn!). I would have liked to get to the Women in IT session as they are always interesting (though it did surprise me that it was on at the same time as Kellyn’s presentation as she is very passionate about Women in IT), but in the end went to Marc and Maris’s presentation. I had already seen the slides as Marc and Maris had sent them to me earlier in the week to validate some of their comments about EM12c, but you get so much more out of a presentation than just what’s on the slides if you attend the presentation. Of course, that’s if you have at least half decent presenters, and in this case both Marc and Maris are far more than half decent presenters! Again, it was a very interesting presentation to me personally, so that was a good way to round out the official part of the conference.

From there I went down to the Smuggler’s Cove bar and restaurant for a few drinks with some of the other attendees. The restaurant part was booked out for dinner, so we had dinner at another restaurant on Albert Dock, and that was the end of UKOUG Tech 14 for me! All in all, I found the conference to be very enjoyable. There were quite a number of presentations I wanted to get to (more than I could physically attend, unfortunately, as is often the case!) but of course, the best part of any conference for me is the networking you do with other attendees. It was great to meet up with quite a few people that I had interacted with before but hadn’t had the chance to physically meet, and of course it’s always enjoyable to catch up with old friends as well. Faye Wood and the other conference organizers did a great job, with most things working fairly smoothly. The only advice I would offer the organizers would be not to use that conference venue again, though. For that, there are three reasons:

  1. The presentation rooms off the exhibition hall itself were awful, from a sound perspective. It was impossible to NOT get sound bleed from the other presentation rooms, which made it physically quite difficult to hear at times. That was probably worse for me than most attendees, as I already have a hearing loss that makes it difficult for me to hear in noisy environments.
  2. The sessions in the main part of the venue were fine – unless it happened to rain hard or hail, which it did quite a few times over the course of the conference. Whatever the material was that the roof was made of, it was incredibly noisy when that happened and again I had a lot of difficulty hearing.
  3. A number of presenters I spoke to had difficulty getting their slides to present on the screens for attendees to see. My experience was (I think) unfortunately the worst, but others also had issues.

On Thursday, I went to Barclays to cover some of the material I had presented at the conference, as well as having a discussion about some of their specific issues with EM. This session was organized by Ian Carney, a good friend of mine from my days with Oracle in the USA. He had organized for quite a few of the Oracle speakers at the conference to visit Barclays at different times during the week, including Larry Carpenter, Uwe Hesse, Joel Goodman, Maria Colgan, and Graham Wood, so I felt quite honoured to also be asked to present with such an illustrious group.

From there I headed off to Manchester Airport, and the long trip home. This time I flew to Heathrow, then on to Dubai, Sydney and Canberra. Thankfully, I missed the issues they had at Heathrow because of problems in their computer systems. Unfortunately I wasn’t allowed to use the British Airways lounge at Manchester with my Qantas Club membership. As is often the case, if you are flying domestically and NOT flying business class, there is no reciprocity between the lounges, which has never impressed me much! :( As I had quite a distance to get around at Heathrow (and again at Sydney), the airline staff organized wheelchairs to get me from the gate to my next point of departure, which did make life MUCH easier! Thankfully, I could use the BA lounge at Heathrow, but they were having issues with their wifi which meant I couldn’t get this blog posted there. The changes at both Dubai and Sydney were pretty tight, so I went direct to the gates both times, but at least this time my luggage made it with me! :)

I must say it’s great to be back home again, and to sleep in my own bed. To the conference organizers, a job well done. Apart from the sound issues, I really enjoyed my time at UKOUG Tech 14, and hopefully, I’ll be able to make it back again in the not too distant future!